FREEDOM AND SAFETY
2019 was nuts for neuroscience. I said this last year too, but that’s the nature of accelerating technologies: the advances just keep coming.
There were theoretical showdowns: a mano a mano battle over where consciousness arises in the brain, wildly creative theories of why our brains are so powerful, and the first complete brain wiring diagram of any species. This year also saw the birth of “hybrid” brain atlases that interrogate brain function at multiple levels - genetic, molecular, and anatomical wiring - synthesizing individual maps into comprehensive, layered views.
Brain organoids also had a wild year. These lab-grown nuggets of brain tissue, not much larger than a lentil, sparked with activity similar to preterm babies, made isolated muscles twitch, and can now be cloned into armies of near-identical “siblings” for experimentation - prompting a new round of debate on whether they’ll ever gain consciousness.
Then of course, there’s the boom in neurotech. Fostered by insight into how neurons and circuits communicate through a complex “neural code,” we’ve gotten ever closer to decoding the brain. Mind-controlled prosthetics are old news; the frontier now is engineering robotic limbs that can truly feel. Insight into our sensory cortices is inspiring light-based artificial nervous systems that give robots multitudes of sensations. Elon Musk’s Neuralink finally unveiled its technology after years of speculation, and a Wild West of brain-computer interfaces has sprung up, with the hope of one day restoring broken brain circuits without the need for surgery.
That’s already achievements aplenty. But as we wrap up the year, four mind-bending stories still stick with me. They probe the nature of death, the promise of mind reading, and new paths that may finally help us beat Alzheimer’s. These are the ones I’ll leave you with.
The brain is a powerful but ultra-sensitive organ that’s prone to injury. Once deprived of oxygen and nutrients, its cells can begin to die within the hour. That’s why, zombie lore aside, scientists once thought it near impossible to resuscitate a brain to any sort of function hours after death.
Not true. In April, a team at Yale University reported that they successfully detected electrical activity in pig brains four hours after death. The results were a surprise: the team originally set out to develop a system that helps the brain maintain its integrity after removal for experimental purposes. How well it worked went beyond the team’s expectations. It’s impossible to say if the brains were conscious; that is, whether they were “aware” of being revived, though it’s highly (and I mean highly) unlikely. When the team saw signs of widespread, coordinated electrical activity - which underlies consciousness - in their initial experiments, they anesthetized future experimental brains to block this sort of united firing, drastically reducing the chance consciousness could emerge in these brains.
Nevertheless, the study suggests that the brain is much more resilient to injuries such as stroke or trauma than previously thought. In the long term, it asks whether we might one day have a sort of CPR for the brain. And if so, how long can brains maintain their health after being separated from the body? We might have just taken the first step into the uncharted territories of death.
A few years ago, Dr. Miguel Nicolelis linked up animal brains into a network that allowed each member to work collaboratively on a common problem. When connected to each other through implanted electrodes, the animals synced up their brains’ electrical activity in a way reminiscent of a single “hive” brain.
Nicolelis has now done the same experiment in humans, minus the surgery. In a feat of neural engineering, the team used non-invasive electroencephalography (EEG) to read brain waves from two individuals and “sent” these signals to a third person by zapping their brain with magnetic pulses - a technology called transcranial magnetic stimulation, or TMS. Five groups of three solved a Tetris-like game using their brain waves alone, with an accuracy of over 80 percent, even when the researchers introduced noise.
One caveat: the system was set up so that the neurotech wasn’t detecting “thought” itself - for example, the decision to rotate the block or not. Instead, that decision was encoded as the presence or absence of light flashes, which are much easier for the EEG to read and for the TMS to deliver to the visual cortex. But it’s still a powerful proof-of-concept: even with our rudimentary brain-reading and brain-writing tech, it’s possible to link up human minds into a hive mind to solve problems. Nicolelis imagines a biological supercomputer made from networked human brains, which could conceivably cross language barriers and even enhance cognitive performance. The question is, if we open the sanctuary of our minds to others for gains in computing power, what do we stand to lose in privacy and autonomy?
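That flash-based encoding is easy to see in code. The toy below is my own illustration, not the study’s actual pipeline: a single yes/no decision is embedded in simulated EEG as a steady 15 Hz flicker response, then recovered by comparing spectral power at that frequency against the background (the sampling rate, flicker frequency, amplitude, and threshold are all assumptions).

```python
import numpy as np

FS = 250          # assumed EEG sampling rate, Hz
FLICKER_HZ = 15   # assumed flash frequency signaling "rotate"
DURATION = 2.0    # seconds of signal per decision

def simulate_eeg(decision, rng):
    """Toy EEG trace: background noise, plus a 15 Hz component
    when the sender is watching the flashing light ("rotate")."""
    t = np.arange(0, DURATION, 1 / FS)
    trace = rng.normal(0.0, 1.0, t.size)              # background activity
    if decision:                                      # steady response to the flicker
        trace += 1.5 * np.sin(2 * np.pi * FLICKER_HZ * t)
    return trace

def decode(trace):
    """Decision is 'rotate' if power at the flicker frequency
    far exceeds the average power of the other frequency bins."""
    power = np.abs(np.fft.rfft(trace)) ** 2
    freqs = np.fft.rfftfreq(trace.size, 1 / FS)
    idx = np.argmin(np.abs(freqs - FLICKER_HZ))       # bin nearest 15 Hz
    background = np.delete(power[1:], idx - 1).mean() # skip DC and target bins
    return bool(power[idx] > 10 * background)

rng = np.random.default_rng(0)
decisions = [True, False, True, True, False]
decoded = [decode(simulate_eeg(d, rng)) for d in decisions]
print(decoded)
```

A real system would contend with artifacts, electrode placement, and far weaker responses; the point is only that a single bit, carried as a rhythmic visual signal, is straightforward to pull out of a noisy trace - which is exactly why the study encoded decisions this way rather than reading “thought.”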
Playing a collaborative game of Tetris isn’t the only way scientists advanced mind reading technology. In January, one team combined deep learning with speech synthesis technology to translate what a person is hearing into reconstructed speech. The system captured electrical signals from the auditory cortex while a person listened to recordings of people speaking. These activity patterns were then decoded by an AI-based speech synthesizer and produced intelligible, if somewhat robotic, speech. Unfortunately, the system couldn’t decode someone’s own internal thoughts.
But that changed three months later.
Another team engineered a “neural decoder” that works on electrical signals measured from the cortex, the outermost layer of the brain. Rather than carrying information about semantics, these signals represent movements of the lips, tongue, larynx, and jaw. Different movement patterns are associated with different sounds, which the decoder can identify and synthesize into comprehensible sentences. For the first time, it’s possible to know what someone is trying to say from their brain activity alone, and the tech was further validated in a question-and-answer conversation. Earlier this month, yet another team found it’s possible to decode words and syllables from recordings of the brain’s motor cortex - the part usually responsible for hand and arm movements. This opens another avenue for reading “speech” directly from the brain.
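The two-stage logic - neural signals to articulator movements, then movements to sound - can be illustrated with a toy model. The real decoder used recurrent neural networks on dense cortical recordings; the sketch below substitutes synthetic data and plain least-squares fits, so every dimension and variable here is an assumption, not the study’s architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 200 time steps of 64 neural features,
# 6 articulator dimensions, 12 acoustic (spectral) features.
N, NEURAL_D, ARTIC_D, ACOUSTIC_D = 200, 64, 6, 12

neural = rng.normal(size=(N, NEURAL_D))       # cortical features per time step
true_A = rng.normal(size=(NEURAL_D, ARTIC_D))
articulators = neural @ true_A                # lips/tongue/larynx/jaw kinematics
true_B = rng.normal(size=(ARTIC_D, ACOUSTIC_D))
acoustics = articulators @ true_B             # spectral features of the audio

# Stage 1: learn neural activity -> articulator movements
A_hat, *_ = np.linalg.lstsq(neural, articulators, rcond=None)
# Stage 2: learn (decoded) articulator movements -> acoustic features
B_hat, *_ = np.linalg.lstsq(neural @ A_hat, acoustics, rcond=None)

decoded_audio = neural @ A_hat @ B_hat
err = np.abs(decoded_audio - acoustics).max()
print(f"max reconstruction error: {err:.2e}")
```

The design choice the study made is the one the toy preserves: decoding through an intermediate articulator representation, rather than jumping straight from brain activity to sound, because the cortex in question encodes movement, not meaning.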
Not to be outdone, a team at the Russian firm Neurobotics found they could use AI to decode what video clips people are watching based on their brain waves alone. In contrast to the speech-decoding studies, which used implanted electrodes, here non-invasive EEG was sufficient to reconstruct nature scenes, sports, and human faces.
For now, our private thoughts are still private, and the tech mainly helps those who can’t speak reconnect with the world. But think about this: if someday a tech giant offers you the ability to text or post using your mind only, would (and should) you go for it?
Dementia is one of the most frustrating neurological disorders of our time. Despite decades of research, nearly every Alzheimer’s drug targeting beta-amyloid - the toxic protein clumps thought responsible for the disease - has failed. Generally, these drugs are proteins that break up the clumps or neutralize their toxic effects.
This year saw an explosion in alternative potential treatments and theories.
One that especially gained steam suggests flashing lights and clicking sounds could break up toxic protein clumps and improve brain function, at least in mice. The treatment - cheap, non-invasive, and dramatically effective in mice - offers new hope to the long-struggling field. Others suggest that mutations to the DNA in brain cells “scramble” certain genes and could be a root cause. Yet others are taking a gene therapy approach to the Alzheimer’s dilemma, adding a dose of a protective gene variant in high-risk individuals.
Although it’s impossible to say if any of these new routes will lead anywhere, one thing is clear: the more scientific treatment ideas we have, the higher the chance we’ll finally tame Alzheimer’s in the near future.