blog.langchain.com/content/files/abs/2210.xml

Preview meta tags from the blog.langchain.com website.

Linked Hostnames: 24

Thumbnail

Search Engine Appearance

Google

https://blog.langchain.com/content/files/abs/2210.xml

ReAct: Synergizing Reasoning and Acting in Language Models

While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines, as well as improved human interpretability and trustworthiness over methods without reasoning or acting components. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes issues of hallucination and error propagation prevalent in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generates human-like task-solving trajectories that are more interpretable than baselines without reasoning traces. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples. Project site with code: https://react-lm.github.io
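The abstract describes an interleaved loop of reasoning traces, actions, and observations. As a minimal illustration (not the paper's code), the sketch below uses a scripted `policy` stub in place of an LLM and a toy `search` function in place of the Wikipedia API; both names and the tiny knowledge base are hypothetical stand-ins.

```python
def search(query):
    # Hypothetical stand-in for the simple Wikipedia API used in the paper.
    kb = {"ReAct": "ReAct interleaves reasoning traces with actions."}
    return kb.get(query, "No result.")

def policy(trajectory):
    # Hypothetical scripted stand-in for an LLM prompted with in-context examples.
    if not trajectory:
        return ("think", "I should look up ReAct before answering.")
    if trajectory[-1][0] == "think":
        return ("act", "ReAct")
    # Last step was an observation: answer with its content.
    return ("finish", trajectory[-1][1])

def react_loop(max_steps=6):
    # Trajectory is a list of interleaved (kind, content) steps:
    # "think" (reasoning trace), "act" (tool call), "observation" (tool result).
    trajectory = []
    for _ in range(max_steps):
        kind, content = policy(trajectory)
        if kind == "finish":
            return content, trajectory
        trajectory.append((kind, content))
        if kind == "act":
            trajectory.append(("observation", search(content)))
    return None, trajectory

answer, steps = react_loop()
```

The point of the structure is visible in `steps`: each action is preceded by a reasoning trace that motivates it, and each observation feeds back into the next decision, which is what the abstract means by reasoning and acting in "an interleaved manner".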




  • General Meta Tags (21)
    • title
      [2210.03629] ReAct: Synergizing Reasoning and Acting in Language Models
    • title
      open search
    • title
      open navigation menu
    • title
      contact arXiv
    • title
      subscribe to arXiv mailings
  • Open Graph Meta Tags (10)
    • og:type
      website
    • og:site_name
      arXiv.org
    • og:title
      ReAct: Synergizing Reasoning and Acting in Language Models
    • og:url
      https://arxiv.org/abs/2210.03629v3
    • og:image
      https://static.arxiv.org/static/browse/0.3.4/images/arxiv-logo-fb.png
  • Twitter Meta Tags (6)
    • twitter:site
      @arxiv
    • twitter:card
      summary
    • twitter:title
      ReAct: Synergizing Reasoning and Acting in Language Models
    • twitter:description
      While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g....
    • twitter:image
      https://static.arxiv.org/icons/twitter/arxiv-logo-twitter-square.png
  • Link Tags (12)
    • apple-touch-icon
      https://static.arxiv.org/static/browse/0.3.4/images/icons/apple-touch-icon.png
    • canonical
      /abs/2210.03629
    • icon
      https://static.arxiv.org/static/browse/0.3.4/images/icons/favicon-32x32.png
    • icon
      https://static.arxiv.org/static/browse/0.3.4/images/icons/favicon-16x16.png
    • manifest
      https://static.arxiv.org/static/browse/0.3.4/images/icons/site.webmanifest

Links: 67