The Pinocchio Strategy: Boosting LLM Performance by Encouraging Hallucination
In the world of Large Language Models (LLMs), “hallucination” is usually a dirty word. It refers to the moment an AI confidently asserts that the moon is made of green cheese, or invents a historical event that never happened. Researchers spend millions of dollars and countless hours trying to stop models from hallucinating. But what if hallucination isn’t just a bug? What if it’s a feature that, when manipulated correctly, can actually make a model smarter? ...