Scientists are finding better ways to “see” what happens in the brains of lab mice as the animals learn new skills, and the techniques are leading to new insights about how learning works.
Just last week, researchers at the Johns Hopkins University School of Medicine published findings from an unusual study that used a laser-assisted tool to monitor brain-cell receptors in real time while a mouse was learning new motor skills.
Specifically: how to use its paws to grab a piece of food, rather than simply nibbling at it.
“This is a classic motor learning task where you present a little pellet of food just outside the cage—in a little chamber that you build, and the mouse is trying to get the food,” says Richard Huganir, director of the department of neuroscience at the medical school. “They don’t usually reach out and get the food, they usually just chew on things. But you can teach them to do it, and they’ll learn the fine motor skills of their fingers and their paws, and they learn to grab the pellet and bring it in.”
Before the task, the researchers injected a special dye into the mouse and set up infrared lasers to observe which synapse receptors were at work as the animal learned. The light does not harm the mouse's brain tissue, the researchers say.
“We can actually watch this learning occur at the molecular level, which is incredible,” says Huganir. “If you had said I was going to be able to do this 10 years ago, I would have said you’re crazy.”
In the newly published study, the research team noticed something surprising. When the mouse learned the task, the motor cortex wasn't the only active part of the brain: the visual cortex was hard at work as well, unless the task was performed in the dark. In other words, learning was lighting up many parts of the brain at the molecular level, not just the areas that control movement.
“This was a big surprise—nobody had seen this before,” says Huganir. “It doesn’t negate that there are very localized areas of the brain that are important for learning language and motor learning. But it shows that there are lots of things changing all over the brain.”
He points out that other research in recent years has shown the importance of using multiple senses for learning, and plenty of teachers already use techniques based on that research. But he says that the techniques he and other scientists are using promise to show an even more high-definition picture of how different brain regions play a role in different types of learning tasks.
“We have a new mouse where we’ve labeled every synapse in the brain, so we can actually visually image every synapse in the brain,” he adds.
Since there are billions of neurons, scientists are turning to artificial intelligence to help them sort and manage all of that data.
“It’s like looking at the stars in the galaxy and trying to say when are you seeing a supernova. How are you going to pick that out?” he says. As it turns out, the medical research team is working with some of the same computer scientists at Johns Hopkins who sort through astronomy data.
The hope is that such research will lead to a better understanding of how the brain works and help develop treatments for diseases that affect the human brain, such as Alzheimer's.