From birds migrating across continents to London cab drivers navigating 26,000 streets, the ability to find one’s way through the world is one of nature’s most extraordinary feats. At the heart of spatial navigation in mammals lies a specialized group of neurons in the hippocampus known as place cells. These neurons act like an internal “You Are Here” marker, firing only when an animal occupies a specific location in its environment.
Yet the brain’s navigation system is far more sophisticated than any ordinary map. It’s dynamic, adaptable, and deeply context-aware. Walk into a familiar room that’s been rearranged, and you’ll experience a brief moment of disorientation until your brain updates its internal map. Place cells do something similar—a process called remapping—in which their firing patterns shift dramatically in response to new environments or contextual cues like different colors, odors, or geometries.
This raises two fundamental questions in neuroscience:
- How do place cells learn where to fire in the first place?
- What mechanism enables them to create new maps—or “remap”—for different contexts?
A recent NeurIPS 2024 paper, *Learning Place Cell Representations and Context-Dependent Remapping*, proposes an elegant answer. The authors suggest that these complex behaviors can emerge from a single, simple rule: neural representations should mirror the structure of the world itself. Put differently, things that are close together in physical space should evoke similar patterns of neural activity.
In this article, we’ll explore how that idea—formalized as a similarity-based objective—can teach an artificial neural network to develop its own place cells, exhibit context-dependent remapping, and even reveal a surprisingly efficient mechanism for generating new mental maps on the fly.
The Brain’s Internal World: A Quick Background
Before examining the model, let’s revisit two key ideas.
- Place cells become active in a small region of space called a place field. Collectively, their activity forms a neural map of the environment.
- Remapping refers to the change in these place fields when the environment or context changes. In global remapping, the entire map reorganizes—neurons that fired in one location in Room A may become inactive in Room B, while others fire elsewhere.
For decades, neuroscientists have proposed differing mechanisms—from inheritance of patterns via grid cells or border cells to complex attractor network dynamics. More recently, normative models have reframed the question: rather than duplicating biological circuitry, define an objective function that represents the goal of the system, train a network to satisfy that objective, and see if brain-like behaviors emerge.
This new work follows that tradition with a strikingly minimal and biologically plausible objective.
The Core Idea: Learning by Comparing Similarities
The authors argue that a spatial representation should preserve relative distances: points close together in the real world should be close together in neural space, and distant points should be far apart. This intuition is encoded in a similarity-based objective function.
Figure 1: Matching similarity in the external world—space and context—with similarity in neural representations yields place-like cells in artificial networks.
Imagine a neural network whose output for a location \( \mathbf{x} \) is a vector of firing rates, the population vector \( \mathbf{p}(\mathbf{x}) \). The loss function \( \mathcal{L} \) encourages two simple relationships:
Target Similarity:
\( \beta + (1 - \beta) e^{ -\frac{1}{2\sigma^2}|\mathbf{x}_t - \mathbf{x}_{t'}|^2 } \)
This Gaussian term defines how similar two physical locations should be. Nearby points yield high similarity; distant points approach a baseline similarity \( \beta \). The scale parameter \( \sigma \) controls what counts as “close.” A nonzero \( \beta \) prevents distant points from being fully orthogonal, adding representational flexibility akin to high-dimensional random vectors.
Learned Similarity:
\( e^{ -|\mathbf{p}(\mathbf{z}_t) - \mathbf{p}(\mathbf{z}_{t'})|^2 } \)
This measures how similar the network’s activity patterns are for the two inputs, where \( \mathbf{z}_t \) denotes the network’s input at time \( t \): position, and later position plus context. The network minimizes \( \mathcal{L} \) by aligning learned similarity with target similarity across thousands of pairs of points.
A regularization term \( \lambda|\mathbf{p}(\mathbf{z}_t)|^2 \) prevents uncontrolled firing, ensuring efficient coding.
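The pieces above can be sketched in a few lines of NumPy. This is a minimal illustration of the objective, not the authors’ implementation; the function names, default values of \( \sigma \), \( \beta \), \( \lambda \), and the use of a squared mismatch are our assumptions.

```python
import numpy as np

def target_similarity(x, xp, sigma=1.0, beta=0.1):
    """Gaussian target similarity between two positions, with baseline beta."""
    d2 = np.sum((x - xp) ** 2, axis=-1)
    return beta + (1.0 - beta) * np.exp(-d2 / (2.0 * sigma ** 2))

def learned_similarity(p, pp):
    """Similarity between two population vectors (network outputs)."""
    d2 = np.sum((p - pp) ** 2, axis=-1)
    return np.exp(-d2)

def similarity_loss(x, xp, p, pp, lam=1e-3):
    """Mismatch between target and learned similarity, plus an activity
    penalty on the population vectors (illustrative squared-error form)."""
    mismatch = (target_similarity(x, xp) - learned_similarity(p, pp)) ** 2
    activity = lam * (np.sum(p ** 2, axis=-1) + np.sum(pp ** 2, axis=-1))
    return np.mean(mismatch + activity)
```

In training, the population vectors would come from a network, \( \mathbf{p} = f_\theta(\mathbf{z}) \), and the loss would be minimized over many sampled pairs of points.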
Extending to Context
Place cells don’t just encode geometry—they respond to environmental context. The authors extended their objective to include a scalar context variable \( c \), requiring similarity in both space and context. Two points are considered similar only if they are simultaneously close in position and context. Training a network with this joint objective allowed it to learn distinct maps for distinct contexts—the essence of remapping.
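One natural way to encode “similar only if close in both position and context” is to fold the context difference into the squared distance inside the Gaussian. This is an illustrative choice on our part; the paper’s exact formulation may differ.

```python
import numpy as np

def joint_target_similarity(x, xp, c, cp, sigma=1.0, beta=0.1):
    """Target similarity over position AND context: high only when both
    the positions and the scalar context values are close. The context
    difference is simply appended to the squared distance."""
    d2 = np.sum((x - xp) ** 2, axis=-1) + (c - cp) ** 2
    return beta + (1.0 - beta) * np.exp(-d2 / (2.0 * sigma ** 2))
```

Under this form, two identical locations in sufficiently different contexts fall back to the baseline similarity \( \beta \), which is exactly what pushes the network toward distinct maps per context.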
From Simple Rule to Complex Brain-like Behavior
The researchers trained two neural architectures on this principle:
- a feedforward (FF) network that directly receives Cartesian coordinates as input, and
- a recurrent (RNN) network that infers position by integrating velocity signals over time (analogous to biological path integration).
Learning Place Cells
When the FF network was trained on the purely spatial objective, it quickly minimized the loss and developed units that fired at distinct spatial locations—artificial place cells.
Figure 2: The feedforward network learns place cell–like firing fields and forms an accurate internal map of space.
By adjusting \( \sigma \) and \( \beta \), the researchers could tune field size and multi-field behavior. Importantly, decoding tests confirmed that the network’s activity could accurately reconstruct spatial position, demonstrating a functional internal map.
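A decoding test of this kind can be as simple as a nearest-neighbour lookup: store population vectors at known positions, then decode a new activity pattern as the position of its closest stored match. The paper’s decoder may be more sophisticated; this sketch (with hypothetical names) only illustrates the idea.

```python
import numpy as np

def decode_position(p_query, p_bank, x_bank):
    """Nearest-neighbour decoder: return the stored location whose
    population vector is closest to the query activity pattern."""
    d2 = np.sum((p_bank - p_query) ** 2, axis=1)
    return x_bank[np.argmin(d2)]

# Toy bank: distinct random "population vectors" recorded at known positions.
rng = np.random.default_rng(0)
x_bank = rng.uniform(0, 1, size=(100, 2))   # known positions
p_bank = rng.standard_normal((100, 16))     # activity recorded at each position
```

If activity at nearby positions is similar (as the objective enforces), this decoder recovers position accurately, which is what “a functional internal map” means operationally.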
The Emergence of Global Remapping
Introducing a context signal produced dramatic changes. Neurons shifted their firing fields unpredictably across contexts, just as hippocampal neurons do during global remapping.
Figure 3: Adding context causes distinct, uncorrelated spatial maps. Each context evokes a new “world,” analogous to hippocampal global remapping.
Some units acquired multiple place fields, others switched off entirely, and spatial correlations between contexts dropped to near zero—behavior strongly reminiscent of biological data.
Recurrent Networks and Path Integration
To test biological plausibility further, the authors trained an RNN that received only velocity inputs and had to infer its position through accumulated motion.
Figure 4: The recurrent network learns both place-like and band-like cells, achieves path integration, and organizes representations into structured cognitive maps.
The RNN learned the task as effectively as the FF model, generating localized place cells alongside elongated band cells, a firing pattern also reported in the hippocampal formation. It successfully path integrated, maintaining accurate position estimates over long trajectories. Dimensionality reduction with UMAP revealed structured manifolds representing distinct contexts—essentially cognitive maps encoded directly in neural activity.
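Path integration itself is just accumulation of velocity over time; the point of the experiment is that the RNN must implement this implicitly in its recurrent state. A one-line sketch of the ground-truth computation (with names of our choosing):

```python
import numpy as np

def path_integrate(x0, velocities, dt=0.1):
    """Ground-truth path integration: accumulate velocity inputs over
    time steps of length dt to recover position from a start point x0."""
    return x0 + np.cumsum(velocities * dt, axis=0)
```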
A New Twist: Remapping as High-Dimensional Rotation
The final insight is perhaps the most compelling. Because the similarity objective depends only on distances between population vectors, it is invariant to orthogonal transformations. Thus, a learned representation can be rotated, reflected, or permuted in high-dimensional space without violating the objective’s constraints.
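The invariance is easy to verify numerically: rotating a set of population vectors by any orthogonal matrix leaves every pairwise distance, and hence the similarity objective, unchanged, while scrambling each individual unit’s tuning. A small demonstration on toy data (not the paper’s trained representations):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "learned map": population vectors for 50 locations, 16 units each.
P = rng.standard_normal((50, 16))

# Draw a random orthogonal matrix via QR decomposition and rotate the map.
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
P_rot = P @ Q

# All pairwise distances between population vectors are preserved, so the
# similarity objective cannot distinguish the rotated map from the original,
# even though each unit's response profile across locations has changed.
d_orig = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
d_rot = np.linalg.norm(P_rot[:, None] - P_rot[None, :], axis=-1)
print(np.allclose(d_orig, d_rot))  # True
```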
Figure 5: Remapping interpreted as orthogonal transformation. Rotating or permuting a learned map produces a new, valid but uncorrelated map—offering a mechanism for adaptive reuse.
The authors applied random orthogonal transformations to trained representations. The new maps preserved similarity structure but were uncorrelated with the originals—effectively achieving global remapping. Moreover, when they computed a best-fit orthogonal transform to align representations between two contexts, the transformed and learned maps matched nearly perfectly.
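The best-fit alignment step is the classic orthogonal Procrustes problem, solvable in closed form with an SVD. A minimal sketch of that check on synthetic maps (the paper works with trained representations; here the second map is constructed as a rotation of the first):

```python
import numpy as np

def best_orthogonal_map(A, B):
    """Orthogonal Procrustes: the orthogonal matrix Q minimizing
    ||A @ Q - B||_F, obtained from the SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 16))                    # map for context 1
Q_true, _ = np.linalg.qr(rng.standard_normal((16, 16)))
B = A @ Q_true                                       # map for context 2: a rotation of A

Q_fit = best_orthogonal_map(A, B)
print(np.allclose(A @ Q_fit, B))  # True: the rotation relating the maps is recovered
```

When the fitted transform aligns two independently learned context maps nearly perfectly, as the authors report, it indicates the maps really are orthogonal transforms of one another rather than unrelated solutions.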
This suggests that biological remapping could arise not from re-learning maps, but from simply rotating existing ones in neural space—a highly efficient mechanism for representational reuse. The same principle can even extend a learned map into unexplored regions without further training.
Conclusion: A Simple Principle for a Complex Brain
This study offers a unified and parsimonious explanation for key features of the hippocampal spatial system. By enforcing that neural similarity mirrors environmental similarity, the model naturally learns to:
- Develop place cells with localized firing fields.
- Exhibit global remapping across contexts.
- Perform path integration using recurrent dynamics.
- Organize its activity into structured, low-dimensional cognitive maps.
More strikingly, the model suggests that remapping might be implemented through orthogonal transformations—a computationally efficient way to reuse knowledge and generate new representations. While simplified and rate-based, this framework illuminates how a minimal constraint can yield a rich repertoire of adaptive, brain-like behaviors.
The brain’s GPS may not rely on a complex hierarchy of rules but on a single elegant idea: similarity shapes our mental maps.