![Cover image](https://deep-paper.org/en/paper/14743_emergence_in_non_neural_-1791/images/cover.png)
Demystifying Grokking: It’s Not Just for Neural Networks
In the landscape of modern artificial intelligence, few phenomena are as puzzling as “grokking.” Imagine training a neural network on a difficult math problem. For a long time—thousands of training steps—the model seems to memorize the training data perfectly, yet it fails miserably on any new, unseen test data. Its test accuracy sits stubbornly at 0%. Then, suddenly, often long after you might have given up and stopped the training, the test accuracy rockets upward, snapping from 0% to 100%. The model has suddenly “grokked” the underlying logic. ...
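To make the phenomenon concrete, here is a minimal sketch of the kind of experiment usually used to demonstrate grokking: a tiny network trained on modular addition (a + b mod p), with train and test accuracy logged over many steps. The specifics below (p = 97, a two-layer MLP, a 50% train split, AdamW with heavy weight decay) are illustrative assumptions, not the article's exact recipe.

```python
# Minimal grokking-style experiment (illustrative assumptions, not the paper's setup):
# learn a + b mod p from half of all pairs and track train vs. test accuracy.
import torch
import torch.nn as nn

p = 97
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # every (a, b) pair
labels = (pairs[:, 0] + pairs[:, 1]) % p

perm = torch.randperm(len(pairs))
split = len(pairs) // 2                      # assumed 50/50 train/test split
train_idx, test_idx = perm[:split], perm[split:]

model = nn.Sequential(                        # small MLP over one-hot (a, b)
    nn.Linear(2 * p, 256), nn.ReLU(),
    nn.Linear(256, p),
)

def encode(batch):
    # Concatenate one-hot encodings of a and b into a single input vector.
    a = nn.functional.one_hot(batch[:, 0], p).float()
    b = nn.functional.one_hot(batch[:, 1], p).float()
    return torch.cat([a, b], dim=1)

# Heavy weight decay is commonly reported to encourage the delayed generalization.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20000):                     # grokking typically needs many steps
    opt.zero_grad()
    loss = loss_fn(model(encode(pairs[train_idx])), labels[train_idx])
    loss.backward()
    opt.step()

    if step % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(encode(pairs[train_idx])).argmax(1) == labels[train_idx]).float().mean()
            test_acc = (model(encode(pairs[test_idx])).argmax(1) == labels[test_idx]).float().mean()
        print(f"step {step:5d}  train acc {train_acc:.2f}  test acc {test_acc:.2f}")
```

In runs like this, the typical signature is that train accuracy saturates early while test accuracy stays near chance for a long stretch, before climbing sharply late in training.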