![Cover image](https://deep-paper.org/en/paper/2402.12817/images/cover.png)
# The Butterfly Effect in NLP: Disentangling Randomness in Few-Shot Learning
In the world of Machine Learning, particularly Natural Language Processing (NLP), we often chase the highest accuracy score on a benchmark. But there is a ghost in the machine: randomness. Imagine you are training a model with very limited data—perhaps a few-shot classification task. You run the experiment and get an F1 score of 85%. You are ecstatic. But then, you change the “random seed”—a simple integer that controls how data is shuffled or how weights are initialized—and run it again. This time, the score drops to 60%. ...
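The seed-to-score swing described above can be sketched with a deliberately tiny, synthetic example. The code below is not from the paper: the 1-D "embeddings", the outlier value, and the nearest-centroid "model" are all invented for illustration. The only thing that changes between runs is the seed, which decides *which* few support examples are sampled per class; when the sample happens to include the outlier, the class centroid shifts and the F1 score on the same fixed test set drops.

```python
import random

def few_shot_f1(seed, k=4):
    """Toy few-shot run: the seed decides which k support examples are
    drawn per class; an outlier in the class-0 pool can skew its centroid.
    All numbers are synthetic 1-D 'embeddings' invented for illustration."""
    rng = random.Random(seed)
    class0_pool = [0.0, 0.0, 0.0, 0.0, 0.0, 3.0]  # one outlier at 3.0
    class1_pool = [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
    c0 = sum(rng.sample(class0_pool, k)) / k  # class-0 centroid
    c1 = sum(rng.sample(class1_pool, k)) / k  # class-1 centroid (always 2.0)
    # Fixed test set of (embedding, gold label) pairs; nearest centroid wins.
    test = [(0.2, 0), (1.8, 1), (1.2, 1)]
    tp = fp = fn = 0
    for x, gold in test:
        pred = 0 if abs(x - c0) <= abs(x - c1) else 1
        if pred == 1 and gold == 1: tp += 1
        elif pred == 1 and gold == 0: fp += 1
        elif pred == 0 and gold == 1: fn += 1
    return 2 * tp / (2 * tp + fp + fn)

# Same data, same model family -- only the seed changes.
scores = [few_shot_f1(seed) for seed in range(30)]
print(sorted(set(round(s, 3) for s in scores)))
```

Each individual run is perfectly reproducible given its seed; the instability lives entirely in the choice of seed. That is exactly why reporting a single-seed score in few-shot settings can be misleading, and why averaging over many seeds (and reporting the spread) is the safer practice.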