![Cover image](https://deep-paper.org/en/paper/2406.16264/images/cover.png)
# Beyond the Haystack: Why Long-Context LLMs Struggle to Read a Novel
## Introduction

In the rapid evolution of Large Language Models (LLMs), one metric has become a major bragging right: the context window. We have moved from models that could remember a few paragraphs to behemoths like Gemini 1.5 Pro and GPT-4o, which claim to process hundreds of thousands, if not millions, of tokens at once. In theory, you can now feed an entire novel into an AI and ask questions about it. ...