Reasoning
Reasoning in AI is a model's ability to work through a problem step by step: not just generating plausible text, but analyzing, deducing, and arriving at a conclusion. In LLMs, this behavior is typically elicited with techniques such as:
- Chain-of-Thought — the model verbalizes its intermediate steps before the final answer.
- Tree-of-Thoughts — the model explores multiple reasoning paths and picks the most promising one.
- Self-Consistency — the model samples several reasoning chains and takes a majority vote on the final answer.
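The self-consistency step above can be sketched in a few lines. This is a minimal illustration, not a production implementation: `sample_chain` is a hypothetical stub standing in for a real temperature-sampled LLM call, rigged so that most chains agree and a couple make an arithmetic slip.

```python
from collections import Counter

def sample_chain(question: str, i: int) -> tuple[list[str], int]:
    """Stand-in for one temperature-sampled LLM call, returning
    (reasoning steps, final answer). Chains 0-6 reason correctly;
    chains 7-8 make an arithmetic slip, so the chains disagree."""
    answer = 7 if i < 7 else 5
    return ([f"3 + 4 = {answer}"], answer)

def self_consistency(question: str, n_samples: int = 9) -> int:
    # Sample several independent chains, keep only their final answers,
    # and return the majority-vote answer.
    answers = [sample_chain(question, i)[1] for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 3 + 4?"))  # → 7
```

The key design point is that the vote is taken over final answers only, discarding the intermediate reasoning — chains may disagree on the path but still converge on the same conclusion.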
The classic taxonomy distinguishes deductive (general → specific), inductive (specific → general), abductive (inference to the best explanation), and causal (cause-and-effect) reasoning.

Reasoning is what lets AI agents plan, self-correct, and tackle multi-step problems — it's the difference between a chatbot and an agent.