Before 2022, if you asked a language model 'Sarah has 3 apples, buys 4 more, gives 2 away, how many does she have?' it would often just guess a plausible-sounding number. Then researchers at Google discovered something striking: just adding 'Let's think step by step' to the prompt caused models to break the problem into intermediate steps — and accuracy jumped dramatically. That's chain-of-thought (CoT) prompting.

The intuition is simple. Language models generate one token at a time based on everything that came before. Forcing the model to first generate reasoning steps means those steps become context for the final answer — so the final answer is conditioned on deliberate reasoning rather than pattern-matched intuition.

Chain-of-thought works particularly well for arithmetic, logical puzzles, multi-step word problems, and any task where the answer depends on intermediate calculations.

Modern reasoning models like OpenAI's o1, o3, and DeepSeek-R1 have baked CoT directly into their training — they automatically generate extensive internal reasoning before responding, often thousands of tokens of thought for a single answer. This represents a fundamental shift: inference compute (thinking time) can now substitute for training compute (model size) on hard reasoning tasks.

For everyday prompting, simply adding 'think step by step' or 'show your reasoning' remains one of the highest-leverage prompt tweaks available.
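The mechanics above can be sketched in plain Python, without calling any real model API. This is a minimal illustration of zero-shot CoT prompting: one helper appends the trigger phrase to a question, and another pulls the final answer out of a completion. The `Answer:` convention used for parsing is an assumption for this sketch, not a standard — real completions need more robust extraction.

```python
def make_cot_prompt(question: str) -> str:
    # Zero-shot chain-of-thought: append the trigger phrase so the model
    # generates intermediate reasoning steps before its final answer.
    return f"{question}\nLet's think step by step."


def extract_final_answer(completion: str) -> str:
    # Assumed convention (not a standard): the model ends its reasoning
    # with a line like "Answer: <value>". Fall back to the last line.
    for line in reversed(completion.splitlines()):
        if line.lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    return completion.strip().splitlines()[-1]


prompt = make_cot_prompt(
    "Sarah has 3 apples, buys 4 more, gives 2 away, how many does she have?"
)

# A hypothetical chain-of-thought completion a model might return:
completion = (
    "Sarah starts with 3 apples.\n"
    "After buying 4 more: 3 + 4 = 7.\n"
    "After giving 2 away: 7 - 2 = 5.\n"
    "Answer: 5"
)
print(extract_final_answer(completion))  # → 5
```

The key point the sketch makes concrete: the reasoning lines sit in the context *before* the answer token is generated, which is exactly why conditioning on them improves accuracy.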
What is Chain-of-Thought Reasoning in AI?
Chain-of-thought reasoning is a prompting technique that asks an AI model to work through a problem step by step before giving a final answer — much like showing your work on a math problem. This simple change dramatically improves model accuracy on reasoning, math, and logic tasks.
chain-of-thought-reasoning, prompt-engineering, reasoning-models