![Cover image](https://deep-paper.org/en/paper/2503.17579/images/cover.png)
Do LLMs Think Like Us? Probing the Production-Interpretation Gap
A central question currently dominates the field of Natural Language Processing (NLP): are Large Language Models (LLMs) simply “stochastic parrots” mimicking surface patterns, or do they possess cognitive mechanisms similar to our own? Most current evaluation of LLMs focuses on the end result: if a model answers a question correctly or writes a coherent story, we assume it “understands.” However, cognitive plausibility is not just about the output; it is about the process. To truly test whether an LLM is cognitively plausible, we need to ask whether it makes the same distinct mental moves that humans do when processing language. ...