![Cover image](https://deep-paper.org/en/paper/file-3568/images/cover.png)
# The Hidden Cost of Prompting: Why We Need a New Standard for In-Context Learning
If you have ever played around with Large Language Models (LLMs) like GPT-4 or Llama, you have likely encountered In-Context Learning (ICL). It is the fascinating ability of these models to learn a new task simply by seeing a few examples in the prompt, without any gradient updates or weight changes. For instance, if you want a model to classify movie reviews, you might provide three examples of reviews and their sentiment (Positive/Negative) before asking it to classify a fourth one. This process seems magical and, crucially, it seems “free” compared to fine-tuning a model. ...
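To make the sentiment-classification example concrete, here is a minimal sketch of what such a few-shot prompt might look like. The reviews and labels are made-up illustrations, and the resulting string could be sent to any chat-completion-style LLM API:

```python
# A minimal sketch of a few-shot (in-context learning) prompt for sentiment
# classification. The example reviews below are hypothetical placeholders.

examples = [
    ("An absolute masterpiece with stunning performances.", "Positive"),
    ("Two hours of my life I will never get back.", "Negative"),
    ("The soundtrack alone makes this worth watching.", "Positive"),
]

query = "The plot was predictable and the pacing dragged."

# Each demonstration is prepended to the prompt; the model infers the task
# from these examples alone, with no gradient updates or weight changes.
prompt_lines = [
    "Classify the sentiment of each movie review as Positive or Negative.",
    "",
]
for review, label in examples:
    prompt_lines.append(f"Review: {review}")
    prompt_lines.append(f"Sentiment: {label}")
    prompt_lines.append("")
prompt_lines.append(f"Review: {query}")
prompt_lines.append("Sentiment:")  # the model completes this final line

prompt = "\n".join(prompt_lines)
print(prompt)
```

Note that every demonstration is re-sent with every query, so the input token count grows with the number of examples on each inference call, which is presumably part of the hidden cost the title alludes to.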