Reasoning patterns

How LLMs and agents structure reasoning and action.

Definition

Reasoning patterns are structured ways to elicit or organize model reasoning: chain-of-thought (step-by-step), tree-of-thoughts (explore branches), ReAct (reason + act), and RDD (retrieval-decision-design), among others. Using a clear pattern improves reliability (more consistent reasoning) and debuggability (you can inspect steps or actions).

They are used in prompt engineering (e.g. CoT) and inside agents (e.g. ReAct, RDD). Without a reasoning pattern, models tend to produce flat, unstructured responses that skip steps — a reasoning pattern acts as scaffolding that makes the model's thought process explicit, inspectable, and correctable. Patterns can also be combined: CoT can run inside a ReAct agent's thought step, and ToT can feed candidates into an RDD decision loop.

Choosing a pattern depends on the task complexity, available compute, and whether the system has access to external tools or knowledge. CoT is the lowest-cost starting point; ReAct adds tool use; ToT adds search over multiple paths; RDD adds spec-grounded compliance. Most production systems combine at least two patterns.

How it works

Pattern selection

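The selection rules above (CoT as the cheapest default, ReAct for tools, ToT for branching search, RDD for spec compliance) can be sketched as a simple dispatcher. The flag names and return values here are illustrative, not a standard API:

```python
def choose_pattern(multi_step: bool, uses_tools: bool,
                   many_branches: bool, spec_bound: bool) -> str:
    """Illustrative pattern selection following the rules of thumb above."""
    if spec_bound:
        return "RDD"      # spec-grounded compliance
    if many_branches:
        return "ToT"      # search over multiple solution paths
    if uses_tools:
        return "ReAct"    # reasoning interleaved with tool calls
    if multi_step:
        return "CoT"      # lowest-cost structured reasoning
    return "direct"       # simple lookup: no pattern needed

print(choose_pattern(multi_step=True, uses_tools=False,
                     many_branches=False, spec_bound=False))  # CoT
```

Real systems rarely dispatch this cleanly; the point is that the checks are ordered from most to least constraining, mirroring the guidance above.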
Generic reasoning loop

You feed an input (a question or task) into a pattern, and the pattern constrains how the model reasons or acts: a prompt such as "Think step by step" elicits explicit intermediate reasoning, while agent designs interleave thought, action, and observation. The model then produces an output (an answer or an action sequence). See the linked pages for each pattern's details.
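The generic loop can be sketched as a thought–action–observation driver. The `model` and `tools` callables below are toy stand-ins for a real LLM call and real tool implementations:

```python
def react_loop(question, model, tools, max_steps=5):
    """Generic thought–action–observation loop (ReAct-style sketch).

    `model` maps the transcript so far to either
    {"thought": ..., "action": ..., "input": ...} or {"answer": ...};
    `tools` maps action names to callables.
    """
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        step = model("\n".join(transcript))
        if "answer" in step:
            return step["answer"], transcript
        transcript.append(f"Thought: {step['thought']}")
        observation = tools[step["action"]](step["input"])
        transcript.append(f"Action: {step['action']}[{step['input']}]")
        transcript.append(f"Observation: {observation}")
    return None, transcript

# Toy model: look the fact up once, then answer with the observation.
def toy_model(transcript):
    if "Observation:" in transcript:
        return {"answer": transcript.rsplit("Observation: ", 1)[1]}
    return {"thought": "I should look this up.",
            "action": "lookup", "input": "capital of France"}

answer, steps = react_loop("What is the capital of France?",
                           toy_model, {"lookup": lambda q: "Paris"})
print(answer)  # Paris
```

The transcript accumulates every thought, action, and observation, which is exactly what makes the loop inspectable and debuggable.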

When to use / When NOT to use

| Scenario | Use reasoning patterns | Don't use |
| --- | --- | --- |
| Multi-step math, logic, or coding | Yes — CoT improves accuracy significantly | No — single-shot prompting often fails on complex reasoning |
| Tool-using agents | Yes — ReAct structures each action with a thought | No — direct tool calling without reasoning increases errors |
| Planning over many solution branches | Yes — ToT explores and scores alternatives | No — CoT is cheaper if one path is usually correct |
| Tasks requiring spec compliance | Yes — RDD enforces retrieved specifications | No — freeform generation for creative open-ended tasks |
| Simple factual lookups | No — reasoning patterns add unnecessary cost | Yes — direct retrieval or lookup is faster |

Comparisons

| Pattern | Core mechanism | Cost | Best task type | Composable with |
| --- | --- | --- | --- | --- |
| Chain-of-Thought (CoT) | Sequential reasoning steps | Low (1 call) | Math, logic, deduction | ReAct, ToT, RDD |
| Tree of Thoughts (ToT) | Branch, score, expand | High (N calls) | Planning, search, creative | CoT per branch |
| ReAct | Thought–action–observation loop | Medium (1 call per step + tools) | Tool-using agents | CoT, RDD |
| RDD | Retrieve spec → decide → generate → validate | Medium–high | Compliance, spec-driven generation | ReAct, RAG |
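The branch–score–expand mechanism in the ToT row can be illustrated with a toy beam search. Here `expand` and `score` stand in for LLM calls that propose and evaluate candidate thoughts:

```python
import heapq

def tree_of_thoughts(root, expand, score, beam_width=2, depth=3):
    """Branch–score–expand sketch: keep the best `beam_width` partial
    thought sequences at each depth, then return the top one."""
    frontier = [(score(root), root)]
    for _ in range(depth):
        candidates = []
        for _, state in frontier:
            for child in expand(state):
                candidates.append((score(child), child))
        frontier = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return max(frontier, key=lambda c: c[0])[1]

# Toy task: build the largest number by appending digits one at a time.
best = tree_of_thoughts(
    root="",
    expand=lambda s: [s + d for d in "123"],
    score=lambda s: int(s) if s else 0,
    beam_width=2, depth=3)
print(best)  # 333
```

The cost column above follows directly from this structure: every expansion and every score is a model call, so ToT spends N calls where CoT spends one.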

Pros and cons

| Pros | Cons |
| --- | --- |
| Makes model reasoning explicit and inspectable | Adds tokens (cost and latency) |
| Significantly improves accuracy on structured tasks | Wrong reasoning pattern for the task can hurt quality |
| Enables debugging by inspecting intermediate steps | Not all models follow patterns reliably |
| Composable — patterns can be nested or combined | Complex combinations increase prompt engineering effort |

Code examples

from openai import OpenAI

client = OpenAI()

def chain_of_thought(question: str) -> str:
    """Zero-shot CoT: append 'Let's think step by step' to elicit reasoning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": f"{question}\n\nLet's think step by step.",
            }
        ],
    )
    return response.choices[0].message.content

answer = chain_of_thought("If a train travels 60 km/h for 2.5 hours, how far does it go?")
print(answer)
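The CoT example above uses a single model call. An RDD loop wraps generation in retrieval and validation; the following sketch uses toy lambdas as stand-ins for the spec store, planner, generator, and validator, not any real API:

```python
def rdd_generate(task, retrieve, decide, generate, validate, max_retries=2):
    """Retrieve spec → decide → generate → validate, retrying on failure."""
    spec = retrieve(task)
    plan = decide(task, spec)
    for _ in range(max_retries + 1):
        draft = generate(plan, spec)
        ok, feedback = validate(draft, spec)
        if ok:
            return draft
        plan = plan + f" (fix: {feedback})"  # fold feedback into the plan
    raise ValueError("could not satisfy spec")

# Toy stand-ins: the retrieved spec requires the output to state a unit.
draft = rdd_generate(
    task="report the train distance",
    retrieve=lambda t: {"must_contain": "km"},
    decide=lambda t, spec: f"Answer '{t}', include '{spec['must_contain']}'",
    generate=lambda plan, spec: f"The train travels 150 {spec['must_contain']}.",
    validate=lambda d, spec: (spec["must_contain"] in d, "missing unit"),
)
print(draft)
```

The validate-and-retry step is what distinguishes RDD from plain retrieval-augmented generation: output that fails the retrieved spec is regenerated rather than returned.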
