![Cover image](https://deep-paper.org/en/paper/2402.17497/images/cover.png)
Trust Issues in AI: How REAR Teaches LLMs to Ignore Irrelevant Data
Introduction

We often think of Large Language Models (LLMs) as vast repositories of knowledge, but they have a significant weakness: they cannot memorize everything, especially real-time events or niche domain knowledge. To address this, the AI community has widely adopted Retrieval-Augmented Generation (RAG). The concept is simple: when an LLM is asked a question, it first searches an external corpus (such as Wikipedia) for relevant documents, then uses those documents to generate an answer. ...
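The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal toy illustration, not the REAR method: the word-overlap retriever and the `retrieve`/`build_prompt` helpers are assumptions for demonstration, standing in for a real retriever and LLM call.

```python
# Toy RAG pipeline: rank documents by naive word overlap with the query,
# then assemble a prompt that grounds the LLM's answer in those documents.
# (Illustrative only; a real system would use dense retrieval and an LLM API.)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Concatenate retrieved documents into a grounded prompt."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "Paris is the capital of France.",
]
query = "Where is the Eiffel Tower?"
print(build_prompt(query, retrieve(query, corpus)))
```

Note that the second-ranked document ("Paris is the capital of France.") is only loosely relevant: exactly the kind of noisy retrieval result that motivates teaching the model to judge and, when necessary, ignore retrieved content.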