Research Interests

The human brain is the most powerful piece of computing machinery in the known universe. Its role can be broken down into four general tasks: our brains acquire information from (and about) our environments, process the incoming information, store the processed information (e.g. for later use), and retrieve stored information (e.g. when deemed relevant). My research examines the neural mechanisms that enable us to acquire, process, store, and retrieve information, as these processes pertain to cognition. To study these mechanisms I use (and develop) computational models and machine learning algorithms that jointly consider neural and behavioral data. I use intracranial EEG, ECoG, and fMRI to examine neural patterns recorded during a wide array of experimental paradigms. Summaries of some of my projects may be found below.

Putting our memories into context

Knowing how our brains organize and spontaneously retrieve memories is at the heart of understanding how they support the ongoing internal dialog of our conscious thoughts. Context-based models of episodic (event-based) memory posit that our brains contain a gradually evolving mental context representation that reflects a recency-weighted average of the thoughts and stimuli we experience. According to these models, our memory of an event contains a representation of the event itself bundled together with a representation of the context in which the event was experienced. When we retrieve a memory, we cue recall by reinstating the context in which the associated event was experienced. Although these models elegantly explain many trends in the behavioral data, my doctoral research with Michael Kahana was the first to show that the contextual reinstatement processes hypothesized by context-based memory models could be observed in the living human brain during episodic memory experiments in the laboratory. You can read more about the details of this project here, here, and here. You can also read high-level descriptions of the project from the University of Pennsylvania (press release 1, press release 2), the New York Times, the Los Angeles Times, and New Scientist magazine.
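To make the drift idea concrete, here is a minimal Python sketch of a recency-weighted context vector (my own toy illustration, not the fitted models from these papers): each newly experienced item nudges the context toward its own representation, so recent experiences dominate and older ones decay geometrically.

```python
import numpy as np

def drift(context, item, beta=0.5):
    """One step of recency-weighted contextual drift: blend the current
    context with the just-experienced item, then renormalize."""
    c = (1 - beta) * context + beta * item
    return c / np.linalg.norm(c)

# Toy example: four orthogonal "items" are experienced in sequence.
items = np.eye(4)
context = items[0]
for item in items[1:]:
    context = drift(context, item)
print(np.round(context, 3))  # recent items weigh most; early ones have decayed
```

On this view, a retrieval cue that reinstates an old context vector will most strongly activate items experienced nearby in time, which is one way these models explain the contiguity effects seen in free recall.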

Tracking our thoughts

Although my doctoral work provided a means of identifying the processes underlying contextual drift and reinstatement, the work left open the question of how our mental representation of context evolves over time as the stimuli we encounter are encoded and retrieved from memory. I am studying these processes in my postdoctoral work with Kenneth Norman and David Blei. To do so, I am developing model-based methods for tracking the neural representation of context as participants study and freely recall lists of words in an fMRI scanner. Tracking the representation of context using fMRI requires both developing new machine learning techniques for inferring latent thoughts from neural data (e.g. Manning et al., 2014c, Manning et al., 2014d) and leveraging existing techniques (such as topic modeling algorithms, e.g. Blei et al., 2003) for interpreting and exploring the data. Whereas the overt rehearsal procedure (Rundus & Atkinson, 1970) has participants narrate their rehearsals as they study a list of words, my approach allows one to track participants' evolving mental states by decoding their BOLD activity as they study and recall words in the experiment. In this way, one can "covertly" gain insights into participants' strategies and into the neural mechanisms underlying episodic memory encoding and retrieval. This allows us to test nuanced predictions of memory models at a level of detail not possible using other techniques.
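As a toy illustration of the decoding step (synthetic data standing in for real BOLD patterns, and an off-the-shelf logistic-regression classifier standing in for the actual model-based methods), one could train a decoder on study-phase activity and then smooth its probabilistic read-outs over time as a crude estimate of a slowly drifting mental context:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the experiment: "voxel" patterns evoked by words
# from three semantic categories, plus noise.
rng = np.random.default_rng(0)
n_voxels, n_categories, n_trials = 100, 3, 300
prototypes = rng.normal(size=(n_categories, n_voxels))
labels = rng.integers(0, n_categories, size=n_trials)
bold = prototypes[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Train a decoder on study-phase data...
decoder = LogisticRegression(max_iter=1000).fit(bold[:200], labels[:200])

# ...then read out a probabilistic estimate of the participant's current
# "thought" on held-out timepoints, smoothing over time as a crude proxy
# for a slowly drifting context representation.
context_estimate = np.zeros(n_categories)
for p in decoder.predict_proba(bold[200:]):
    context_estimate = 0.9 * context_estimate + 0.1 * p
print(np.round(context_estimate, 3))
```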

Understanding how our brain networks represent information

Although a snapshot of neural activity (like an EEG power spectrum or an fMRI volume) can be informative about ongoing cognitive processes, recent work suggests that additional information can be extracted by considering the broader network context in which that snapshot is situated. For example, knowing that a brain structure has just increased its activation during an experiment reflects not only the computations being done by that structure, but also the ways in which other brain structures are communicating with it. Studying connectivity patterns is much more computationally intensive than studying snapshots of activity. For example, a typical volume in an fMRI sequence has around 50,000 voxels. Storing the patterns of connections between every pair of voxels requires 2.5 billion numbers (occupying several GB of memory). Training pattern classifiers to map connectivity patterns to cognitive states (which requires O(n²) memory in the number of features -- in this case, amounting to several billion GB!) is impractical on most commonly available computer systems. This indicates that a new approach to studying brain connectivity is needed. I have been developing a Bayesian model that allows researchers to compute a much more compact and computationally efficient representation of full-brain connectivity patterns than standard voxel-based approaches provide. You can read more about this work here, here, and here.
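For intuition about the memory savings, here is a back-of-the-envelope comparison (assuming 8-byte floats), followed by a sketch of the general low-rank idea: project each volume onto a small number of spatial factors and correlate the factor time courses instead of the voxels. The random basis below is a stand-in for factors that would actually be learned from data -- this illustrates the general approach, not the Bayesian model itself.

```python
import numpy as np

n_voxels, k = 50_000, 100
full_bytes = n_voxels ** 2 * 8       # 2.5 billion pairwise values
factored_bytes = k ** 2 * 8          # 10,000 factor-to-factor values
print(f"full: {full_bytes / 1e9:.0f} GB; factored: {factored_bytes / 1e3:.0f} KB")

# Sketch of the compression on a downsized toy problem.
rng = np.random.default_rng(0)
n_timepoints, n_toy_voxels = 300, 2_000
data = rng.normal(size=(n_timepoints, n_toy_voxels))
basis = rng.normal(size=(n_toy_voxels, k))   # stand-in for learned spatial factors
weights = data @ basis                       # factor loadings at each timepoint
connectivity = np.corrcoef(weights.T)        # k x k instead of n x n
print(connectivity.shape)                    # (100, 100)
```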

Understanding how our brains compute

Examining fundamental patterns in the brain's electrical activity can yield valuable insights into how the brain does its computations. I'm working on this problem by analyzing intracranial EEG recorded from neurosurgical patients who participate in various memory-related experiments. Whereas standard approaches to EEG often focus on brain oscillations, my research has shown that non-oscillatory broadband changes in the brain's large-scale electrical patterns can be used to predict when the underlying individual neurons are firing (Manning et al., 2009, Jacobs et al., 2010, Ramayya et al., 2014).
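The flavor of the broadband analysis can be sketched as follows (toy data; the published analyses use robust regression on intracranial recordings, which this simplified version does not reproduce): fit a line to the power spectrum in log-log coordinates and track its offset, which rises and falls with overall broadband power while remaining relatively insensitive to narrowband oscillations.

```python
import numpy as np
from scipy import signal

# Toy LFP-like trace: a 1/f-style background (whose overall level is the
# "broadband" signal of interest) plus a 10 Hz oscillation to ignore.
fs = 1000
rng = np.random.default_rng(0)
t = np.arange(10 * fs) / fs
background = np.cumsum(rng.normal(size=t.size))   # crude 1/f^2 background
lfp = background + 5 * np.sin(2 * np.pi * 10 * t)

freqs, psd = signal.welch(lfp, fs=fs, nperseg=2 * fs)
keep = (freqs >= 2) & (freqs <= 150)
logf, logp = np.log10(freqs[keep]), np.log10(psd[keep])

# A plain least-squares line in log-log space; a robust fit would further
# downweight narrowband peaks like the 10 Hz bump.
slope, offset = np.polyfit(logf, logp, 1)
print(f"spectral slope: {slope:.2f}, broadband offset: {offset:.2f}")
```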

Discovering how we see

In order to store memories, we first need to acquire information about what we will want to remember later (e.g., what's going on around us). One way we do this is by seeing. I'm interested in basic questions that get at why our visual system is designed the way it is and what information it can provide to the rest of the brain. Together with David Brainard, I'm working on a Bayesian model of vision. We know that photoreceptors cannot capture every photon that hits the retina, and so the activations of our photoreceptors under-sample the light reflected off objects in our environment. In addition, optical blurring and other sources of noise make our photoreceptor measurements unreliable. This means that in order to estimate what's actually "out there" in the world, the brain needs to do some guesswork. We've implemented a statistical model that explicitly represents the photoreceptor layout on the retina -- this allows us to test how "good" different retinal designs are at guessing about the visual world. So far, we're using this model to answer questions about how and why the visual system might have evolved as it did. You can read more about the details of this project here.
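A minimal linear-Gaussian sketch of that guesswork (the textbook Bayesian setup, with made-up sizes and a simple smoothness prior -- not our actual model): given sparse, noisy receptor responses and a prior over images, the posterior mean is the statistically optimal estimate of the full image.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_receptors = 50, 20   # receptors under-sample the image

# Smoothness prior: nearby pixels are correlated (squared-exponential kernel).
idx = np.arange(n_pixels)
prior_cov = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
prior_cov += 1e-6 * np.eye(n_pixels)   # numerical jitter

# Sampling matrix: each receptor reads a single image location.
A = np.zeros((n_receptors, n_pixels))
A[np.arange(n_receptors), rng.choice(n_pixels, n_receptors, replace=False)] = 1.0

true_image = rng.multivariate_normal(np.zeros(n_pixels), prior_cov)
noise_var = 0.1
responses = A @ true_image + rng.normal(scale=noise_var ** 0.5, size=n_receptors)

# Posterior mean of the image given the responses: the Bayes-optimal guess.
S = A @ prior_cov @ A.T + noise_var * np.eye(n_receptors)
estimate = prior_cov @ A.T @ np.linalg.solve(S, responses)
print(f"correlation with truth: {np.corrcoef(true_image, estimate)[0, 1]:.2f}")
```

Swapping in different sampling matrices (different photoreceptor layouts) and comparing reconstruction quality is, in spirit, how a model like this lets one ask which retinal designs are "good".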

The next question I'm working on in this line of research is whether the type of each photoreceptor needs to be genetically encoded, or whether the types can be learned by observing receptor responses. You can read more about the details of this project here, or check out this article in The Scientist.
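As a toy version of the learnability question (simulated responses and simple k-means clustering; the actual project works with responses to natural images), one can verify that receptors of the same type co-vary, so their types can be recovered without any genetic labels:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_receptors, n_samples = 60, 5000
true_types = rng.integers(0, 2, size=n_receptors)

# Two latent signals (e.g., two spectral channels); each receptor mostly
# follows its own type's signal, plus independent noise.
signals = rng.normal(size=(2, n_samples))
responses = signals[true_types] + 0.5 * rng.normal(size=(n_receptors, n_samples))

# Same-type receptors co-vary, so each row of the correlation matrix is a
# "fingerprint" of that receptor's type; cluster the rows to recover types.
corr = np.corrcoef(responses)
inferred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(corr)
agreement = max(np.mean(inferred == true_types), np.mean(inferred != true_types))
print(f"type recovery: {agreement:.0%}")
```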

Learning to navigate efficiently in new environments

"Getting around" (i.e., navigating) is one of the most important things we do as humans. In order to navigate efficiently, we need to represent our environment somehow. I'm interested in how we build up cognitive maps of new environments. To study this process I'm developing a computational model (Manning et al., 2014a) to explain the behaviors of human participants in a virtual reality taxicab driving game. You can read about the high-level ideas of this project in a Brandeis press release (page 12).

Other research interests

My other research interests include machine learning, neural networks, knowledge representation, artificial intelligence, database systems, and computer graphics.