![Unlocking the Black Box: The Theory Behind Chain-of-Thought in LLMs](https://deep-paper.org/en/paper/2305.15408/images/cover.png)
Unlocking the Black Box: The Theory Behind Chain-of-Thought in LLMs
If you’ve used modern large language models (LLMs) on hard problems, you know the trick: append a prompt like “Let’s think step-by-step” and, often, the model produces intermediate reasoning and gets the answer right. That simple change, called Chain-of-Thought (CoT) prompting, has become a practical staple for eliciting better performance on math, logic, and other reasoning tasks. But why does CoT help so much? Is it just coaxing the model to reveal what it already knows, or does it fundamentally change what the model can compute? ...
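To make the contrast concrete, here is a minimal sketch of the two prompting styles. The question, the prompt wording, and the `direct`/`cot` names are illustrative assumptions, not part of the paper; the only structural difference between the two prompts is the appended step-by-step instruction.

```python
# Minimal sketch: direct prompting vs. Chain-of-Thought (CoT) prompting.
# The question and prompt templates below are hypothetical examples,
# not taken from the paper; plug the strings into whatever LLM API you use.

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Direct prompting: ask for the answer immediately.
direct_prompt = f"{question}\nAnswer:"

# CoT prompting: the only change is an appended instruction that invites
# the model to emit intermediate reasoning before the final answer.
cot_prompt = f"{question}\nLet's think step-by-step."

# In practice, the CoT variant tends to produce a worked derivation
# (45 min = 0.75 h, so 60 km / 0.75 h = 80 km/h) before the answer,
# while the direct variant jumps straight to a number.
for name, prompt in [("direct", direct_prompt), ("cot", cot_prompt)]:
    print(f"--- {name} prompt ---\n{prompt}\n")
```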