I’m a research scientist in the Neuroscience team at DeepMind, where I’m working on building better memory systems for artificial agents (see below).

Previously, I was a postdoc with Andrew Davison at the Dyson Robotics Lab at Imperial College London, where I worked on real-time distributed inference and abstraction in spatial / visual perception systems.

Before that, I did a PhD with Neil Burgess in the Space and Memory Group, which is a neuroscience lab at University College London. The lab is interested in how the brain builds useful representations ("maps") of the world, and how those representations can be used for decision making. I wrote my thesis on how the brain might construct these representations by “replaying” memories during sleep.

### A Memory Manifesto

I’m interested in building memory systems that are:

• Fast: If you show humans a new thing $x$, they can immediately tell you whether it is like another thing $y$.
• Flexible: We can quickly restructure our knowledge given new data. Suppose you take a wrong turn on your route to work and, after some time, arrive at a familiar (but unexpected) landmark. You don’t just update your belief about where you are now: you also correct the entire trajectory that, in retrospect, you must have taken to get there. You might even be able to figure out which wrong turn you took.
• Attentive: We cache (store) data and retrospectively make sense of it. If data doesn’t fit our current understanding of the world, we can still revisit and integrate it later once something in our understanding has changed. If I told you that Memphis was the capital of Kemet, it might not mean much to you, but you might still remember that I told you this fact. If you later learned that Kemet is the ancient name for Egypt, you could retroactively integrate it: Memphis was the capital of what is now Egypt.
• Sample-efficient: You or I can get the gist of some concept $c$ from very few examples, sometimes only one. Most existing AI systems are very data hungry.
• Robust: If you teach me a lot about dogs over a concentrated period of time, then teach me a lot about cats, I won’t “overwrite” the stuff I learned about dogs.
• Hierarchical: Oaks and pines are both types of tree. Sons and daughters are both types of child. Oaks are to pines as sons are to daughters.
• Probabilistic: If you ask me a question, I can give you an answer, tell you how sure I am, and give you my $2^{\text{nd}}$, $3^{\text{rd}}$, …, $n^{\text{th}}$ next-best guesses. If there are several strong candidates, this order might change depending on the context.
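The last property can be sketched in a few lines. This is a minimal illustration, not a claim about any particular memory system: it assumes we already have raw match scores for a set of candidate answers (the candidates, scores, and the softmax choice here are all hypothetical), and it simply converts those scores into a best-first list of guesses with attached probabilities.

```python
import math

def ranked_guesses(candidates, scores):
    """Turn raw candidate match scores into a ranked list of
    (guess, probability) pairs, best guess first.

    `candidates` and `scores` are hypothetical inputs standing in for
    a memory system's retrieval candidates and their match scores.
    """
    # Softmax over the scores (subtracting the max for numerical stability)
    # gives a probability for each candidate.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort best-first: the answer, then the 2nd, 3rd, ..., nth guesses.
    return sorted(zip(candidates, probs), key=lambda cp: -cp[1])

guesses = ranked_guesses(["oak", "pine", "birch"], [2.0, 1.5, 0.1])
```

Context-dependence would enter through the scores themselves: the same candidates scored under a different context can swap positions in the ranking.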