![Cover image](https://deep-paper.org/en/paper/2406.14739/images/cover.png)
Beyond Top-K: Building Better Prompts with Iterative Retrieval and Reinforcement Learning
Introduction

In the era of Large Language Models (LLMs), In-Context Learning (ICL) has become a dominant paradigm. The idea is deceptively simple: instead of fine-tuning a model's weights, you simply provide a few examples (exemplars) in the prompt, and the model learns the pattern. For example, if you want an LLM to translate English to SQL, your prompt might look like this:

Input: Show me users over 20.
Output: SELECT * FROM users WHERE age > 20;
...
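To make the exemplar-based prompt format concrete, here is a minimal sketch (not from the paper) of how such a few-shot ICL prompt is typically assembled: retrieved input/output pairs are concatenated ahead of the test question, and the model is asked to continue the pattern. The function name `build_icl_prompt` and the test question are hypothetical, chosen only for illustration.

```python
# Minimal sketch of few-shot ICL prompt construction (illustrative only).
from typing import List, Tuple

def build_icl_prompt(exemplars: List[Tuple[str, str]], test_input: str) -> str:
    """Concatenate (input, output) exemplars, then append the test input."""
    parts = []
    for question, answer in exemplars:
        parts.append(f"Input: {question}\nOutput: {answer}")
    # The model is expected to complete the final "Output:" line.
    parts.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    exemplars = [
        ("Show me users over 20.", "SELECT * FROM users WHERE age > 20;"),
    ]
    print(build_icl_prompt(exemplars, "List all orders placed in 2023."))
```

Which exemplars go into that list is exactly the retrieval problem the rest of the article is about: a plain top-K nearest-neighbor lookup is the usual baseline, and the paper's iterative, reinforcement-learning-driven retrieval aims to choose a better set.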