Our lab conducts research on the cognitive neuroscience of human spatial navigation. We use functional and structural magnetic resonance imaging, virtual reality, and behavioral approaches from cognitive psychology to study how people navigate and why people vary in their navigation abilities, including across ages (healthy younger and older adults) and in the course of Alzheimer’s disease.
See our publications and resources pages for more, or read on for a rundown of our current projects.
Measuring variability in spatial navigation
How do we know where we are going, beyond where we can see, feel, or smell?
We investigate the cues people use to solve the problem of spatial navigation, and how the brain integrates this information into spatial representations. What makes some people excellent navigators – seemingly never lost and always oriented – while others struggle even in familiar environments? Using virtual reality and real-world environments combined with psychological testing and neuroimaging, we investigate the representations and processes underlying these differences.
Ongoing Projects
Spatial navigation is an important skill, and it commonly declines during normal aging, with severe deficits being prevalent symptoms of neurodegenerative diseases such as Alzheimer’s disease. Navigation behavior is often supported by navigation cues (e.g., maps, arrows, and verbal instructions) that show the way to go. But before these cues can be converted into routes, people need to find them, a task requiring visual attention. Indeed, individuals with Alzheimer’s disease also exhibit impairments in visual attention, which may underlie their impairments in spatial navigation. The overarching goal of this research is therefore to investigate the relation between visual attention capacity and spatial navigation abilities in patients with Alzheimer’s disease and healthy age-matched controls. A better understanding of the interaction between visual attention and spatial navigation could help enhance mobility, improve safety, and increase quality of life for older adults. Led by Adam Barnas.
Connecting the structural properties of the brain to the behavior it supports is a fundamental goal of neuroscience. Spatial navigation and episodic memory are two higher-order abilities whose degradation is linked to old age and to neurodegenerative diseases such as Alzheimer’s disease. We use Graph Convolutional Neural Networks (GCNNs) to extract structural and connectivity features across brain regions in large fMRI-based datasets and to understand their impact on spatial navigation and episodic memory. Such models may pave the way for early diagnosis of Alzheimer’s disease and reveal new associations between brain structure and higher-order behavior that go beyond volumetry. Led by Ashish Sahoo, in collaboration with Alina Zare.
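To give a flavor of the general approach (this is an illustrative sketch, not our actual analysis pipeline), the toy example below builds a minimal graph convolutional network in PyTorch that takes a region-by-region connectivity matrix and per-region structural features and maps them to a single behavioral score. The number of regions, the input features, and the read-out target are all hypothetical placeholders.

```python
# Minimal sketch of a graph convolution over a brain-region connectivity
# matrix, in the spirit of Kipf & Welling (2017). All shapes, feature names,
# and the regression target are hypothetical.
import torch
import torch.nn as nn

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I, i.e. D^-1/2 (A + I) D^-1/2."""
    a_hat = adj + torch.eye(adj.size(0))
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class BrainGCN(nn.Module):
    """Two graph-convolution layers, mean pooling over regions, and a linear
    read-out to one behavioral score (e.g., a navigation performance measure)."""
    def __init__(self, n_features: int, n_hidden: int = 32):
        super().__init__()
        self.w1 = nn.Linear(n_features, n_hidden)
        self.w2 = nn.Linear(n_hidden, n_hidden)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Each layer propagates node features along the normalized graph,
        # then applies a learned linear map and a nonlinearity.
        h = torch.relu(self.w1(adj_norm @ x))
        h = torch.relu(self.w2(adj_norm @ h))
        return self.readout(h.mean(dim=0))  # pool over regions

# Toy data: 90 regions with 4 structural features each (hypothetical values).
n_regions, n_features = 90, 4
connectivity = torch.rand(n_regions, n_regions)
connectivity = (connectivity + connectivity.T) / 2   # make it symmetric
region_features = torch.rand(n_regions, n_features)  # e.g., thickness, volume
model = BrainGCN(n_features)
predicted_score = model(region_features, normalize_adjacency(connectivity))
print(predicted_score.shape)  # torch.Size([1])
```

In practice the network would be trained on many participants' graphs against measured navigation or memory scores; the sketch only shows how region-level structure and connectivity enter the model.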
Spatial navigation ultimately requires moving through your environment. In this line of research, we compare judgments of spatial affordances in images of scenes, either asking people to indicate directions with words (e.g., ‘left’ vs. ‘right’) or having them draw paths through the scenes. Using images from the SUN dataset allows us to compare these representations of spatial directions in scenes to neural data from the BOLD5000 project. Led by Hoorish Abid.
Supporting spatial navigation
How can we improve spatial navigation?
People have designed many different tools to help us find our way around, including language, arrows, maps, and, of course, global positioning systems. But how do we convert these abstract representations into something we can use to actually move through the world? Might these tools provide a way forward to support people who lose their ability to navigate? Or are they crutches that, when used in excess, deplete our own spatial abilities?
Ongoing Projects
Can we improve spatial navigation ability? In this project, healthy young adults undergo navigation (or episodic memory) training to see whether their navigation skills can be improved. We are also relating the behavioral changes we observe after training to structural and functional changes in the brain (measured with fMRI). Led by Lucia Cherep.
Navigation tools such as maps, compasses, and GPS devices are common in daily life. But do they help us learn large-scale spaces more easily? Because self-report measures have limited ecological validity in spatial cognition research, we use cutting-edge technology to create more realistic yet controlled environments in which to investigate spatial abilities. In this line of research, we are investigating how effective navigation tools can be and how to improve them. For instance, although a compass provides the same sort of directional information as a distant visual cue (like a mountain range to the west), does either actually help people learn an environment more easily? To answer these questions, we use a virtual reality head-mounted display (HTC Vive) and an omnidirectional treadmill (Virtuix Omni VR) to create an immersive virtual environment (VE). Participants walk on the treadmill and learn a widely used virtual environment (Virtual Silcton) with or without various navigational cues. We then test whether they use these cues to learn their surroundings and whether they perform better on tasks that tap large-scale spatial knowledge. This work will inform future approaches to improving and supporting spatial navigation across individuals with vastly different navigation abilities. Led by Ece Yuksel.