Large language models are great at tasks like writing and translation, but they often struggle with complex problems like math and logical reasoning. That’s because they don’t naturally think step by step. In this article, I’ll show you how you can make them reason step by step—with a surprisingly simple trick.
For a long time, we assumed that AI models just needed more data and more computing power to improve. But even with major advances in language understanding, models kept struggling with complex reasoning—math, logic puzzles, you name it. They often produced answers that sounded right but turned out to be complete nonsense.
The Chain of Thought
This began to change when researchers discovered a clever trick: Chain-of-Thought (CoT) prompting. Instead of asking the model for a direct answer, they simply added a phrase like “Let’s think step by step.” Suddenly, the AI started breaking problems down logically and producing more accurate answers.
I previously wrote about the Chinese challenger to GPT, Llama, and Gemini: DeepSeek. In my view, CoT really took off when DeepSeek showed how powerful this kind of prompting can be: its model is trained to apply step-by-step reasoning by default.
This made it easier for everyone to get AI to truly think, instead of just guessing. CoT prompting is now seen as one of the most effective ways to make AI models smarter and more reliable. Whether it’s for math, customer service, or business analysis—AI can now reason, all thanks to a simple but brilliant prompting technique.
The Art of Prompting AI
We’ve now seen various prompting styles emerge.
Zero-shot prompting
You give the model a task with no examples. This works well for simple tasks, but not for complex problems.
Example: “Write a poem about AI.”
The model generates a poem without further guidance.
Few-shot prompting
You give a few examples to help the model understand the structure of the task. This is useful for more structured outputs like summaries or translations.
Example: “Here are two article summaries. Use the same style to summarize this next one.”
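In code, a few-shot prompt is just worked examples stacked in front of the new task. A minimal sketch in Python (the `build_few_shot_prompt` helper and the sample texts are hypothetical, for illustration only):

```python
def build_few_shot_prompt(examples, task):
    """Assemble a few-shot prompt: worked examples first, then the new task.

    `examples` is a list of (text, summary) pairs the model should imitate.
    """
    parts = [f"Text: {text}\nSummary: {summary}" for text, summary in examples]
    # The final entry leaves "Summary:" open for the model to complete.
    parts.append(f"Text: {task}\nSummary:")
    return "\n\n".join(parts)

examples = [
    ("AI adoption is growing across industries.", "AI adoption is rising."),
    ("New chips cut training costs sharply.", "Chips lower training costs."),
]
prompt = build_few_shot_prompt(examples, "CoT prompting improves reasoning.")
print(prompt)
```

Because the examples set the pattern, the model tends to continue in the same style and format.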
Active prompting
You evaluate the model’s output and provide feedback so it can improve. I’ve sometimes spent an hour having this kind of back-and-forth.
Example: “This answer isn’t precise enough. Please give a more detailed explanation and rewrite the conclusion.”
How Can You Use Chain-of-Thought Prompting Yourself?
Lately, I’ve been experimenting a lot with CoT, and these tips work really well for me:
1. Use a step-by-step prompt
Add “Let’s think through this step by step” to your prompt to encourage logical reasoning.
Example: “What’s the square root of 144? Let’s solve this step by step.”
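If you prompt programmatically, the cue can be appended automatically. A tiny sketch (the `with_chain_of_thought` helper is hypothetical):

```python
def with_chain_of_thought(question):
    """Append the step-by-step cue that nudges the model into chain-of-thought."""
    return f"{question}\nLet's solve this step by step."

print(with_chain_of_thought("What's the square root of 144?"))
```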
2. Provide a good example
Let the model learn from a carefully worked-out reasoning process.
Example: “Here’s how to do a budget analysis: first, list all income sources, then subtract expenses…”
3. Let the model generate multiple answers
Compare the outputs and choose the most consistent one.
Example: “Give three different summaries of this text and select the best one.”
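Picking the most consistent answer is essentially a majority vote over several samples. A minimal sketch, assuming the answers have already been collected from the model (the sample values here are made up):

```python
from collections import Counter

def most_consistent(answers):
    """Return the answer that appears most often across several samples
    (the core idea behind self-consistency)."""
    return Counter(answers).most_common(1)[0][0]

# e.g. three answers sampled for the same question:
samples = ["12", "12", "11"]
print(most_consistent(samples))  # "12" wins the vote
```

For free-form text like summaries, you would compare meaning rather than exact strings, but the voting principle is the same.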
4. Use active prompting
Give feedback and let the model correct its mistake.
Example: “You skipped the third step. Try again and include that step.”
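Active prompting amounts to keeping the conversation history and appending your feedback as a new turn, so the retry has full context. A sketch of that structure; the role/content message format mirrors common chat APIs, and the helper is hypothetical:

```python
def add_feedback(history, feedback):
    """Append corrective feedback as a new user turn, so a retry sees
    both the original task and the flawed first attempt."""
    history.append({"role": "user", "content": feedback})
    return history

history = [
    {"role": "user", "content": "Do a budget analysis step by step."},
    {"role": "assistant", "content": "1. List income sources. 2. Subtract expenses."},
]
add_feedback(history, "You skipped the third step. Try again and include that step.")
print(len(history))  # now 3 turns
```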
Not All Models Handle Chain-of-Thought Prompting Well
In my experience, not all models respond well to CoT. Research also shows that CoT prompting works best with large models (100+ billion parameters) like GPT-4 and DeepSeek. Smaller models struggle with long, logical reasoning chains.
Here are a few other important factors in using CoT effectively:
- Self-consistency: Let the model solve the same problem multiple times and pick the most logical answer. This helps reduce errors and leads to more reliable responses.
- Robustness: CoT prompting works even if your examples aren’t perfectly worded. You don’t need flawless language to get results.
- Prompt sensitivity: A poorly written prompt can ruin your CoT attempt. Make sure your instructions are clear and your question is well-defined.
- Coherence: The steps should logically follow one another. If a step is missing or flawed, the final conclusion may be incorrect.
Chain-of-Thought Prompting is a Game-Changer
In my view, Chain-of-Thought prompting is truly a game-changer for AI. I’ve seen firsthand how much it improves the quality of output. With the right prompts and techniques, you enable AI to think better, provide more accurate answers, and solve complex problems.
Start with simple tasks and gradually introduce step-by-step reasoning. You’ll soon notice that AI not only responds more intelligently, but also reveals insights that would otherwise remain hidden.
Got your own tip? Share it in the comments!