Mind transfer: human brains in different materials
HUMAN brains and the minds that emerge from them have allowed us to create culture and civilisation. But ensuring the survival of those marvels (not to mention of our species) in the face of technological and environmental onslaughts will depend on how well those minds adapt. We have always augmented ourselves in the face of challenges, creating artefacts from clothing to cellphones to cochlear implants. Our survival will depend, as ever, on becoming more adaptable still.
Fortunately, we may be on the brink of fundamentally surpassing our limits: there is no reason why the complex information processing at the core of human experience should continue to be unique to one biological implementation. Moving the functions of minds from brains to other types of materials, other substrates, to become substrate-independent minds (SIMs), would be an extraordinary adaptation.
At a survival level, a SIM could be embodied in a variety of ways, and so would perhaps be better able to survive potential societal collapse. At a human level, the goal would be continued existence of personality, individual characteristics, a manner of experiencing and a personal way of processing experiences. Continuity of self could be assured, despite minds having novel embodiments.
Some years ago, I set up carboncopies.org, a nonprofit organisation. Its work is to keep the big picture of SIM and its key problems clear, with different ways of solving them – routes on a road map – discussed among researchers. It also provides funding where there is a gap.
So how might SIM be realised? For the past 100 years, neuroscientists have been learning how to identify neuroanatomy, how to measure neural responses to stimuli, and what regulates those responses. Most SIM research builds on this approach. We call it “whole brain emulation”, a term I coined in 2000.
We use the word “emulation” because it indicates a precise copying of a specific brain, in contrast to “simulation”, in which researchers try to build a general model of how some piece of a brain (or a whole brain) of a human or animal might work. The Blue Brain project is an example of simulation. It is run by Henry Markram at the École Polytechnique Fédérale de Lausanne, Switzerland. There, researchers are trying to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level, drawing on statistical data from many animals.
At present, most SIM researchers aim to emulate the basic computational functions carried out by elements of the brain and then faithfully re-implement them in other substrates – at the same time also faithfully re-implementing the neural connectivity. Such a vast undertaking has to be broken down into much smaller pieces: there are many things we need to know. For example, can we capture neurons at sufficient resolution – as individual electrically spiking units, as morphologically detailed cells, or down to the molecular processes in their synapses – to make emulation truly feasible?
Exploring such questions is already paying off in real products, such as the cochlear implant, or the hippocampal chip pioneered by Ted Berger at the University of Southern California, Los Angeles. Berger is trying to build artificial neural cells, initially to act as an implanted prosthesis for people who have lost brain cells to diseases such as Alzheimer’s.
We are still left with a mountain to climb. Much of what we need to understand relates to neurons or pieces of a neuron. For example, the time at which each neuron generates a spike in electrical activity – called an action potential – appears to be a key currency of the brain. That timing determines whether a synapse will be modified to create a memory, when a muscle will contract (enabling movement or speech), and how sensory input such as sight is perceived. In other words, the timing determines all our interactions with the environment.
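To make the idea of spike timing concrete, here is a minimal sketch of a leaky integrate-and-fire neuron – a standard textbook simplification, not a model used by any project mentioned here. All parameter values are illustrative. It shows how the strength of the input determines when, and how often, the neuron fires:

```python
# Leaky integrate-and-fire (LIF) neuron: a textbook simplification, used
# here only to illustrate how spike times arise from input. All values
# (threshold, time constant, currents) are invented for illustration.
def lif_spike_times(input_current, dt=0.001, tau=0.02,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate an input current trace and return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating the input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:            # threshold crossing: an action potential
            spikes.append(step * dt)
            v = v_reset              # reset after the spike
    return spikes

# A steady drive above threshold produces a regular spike train;
# a stronger drive makes the neuron fire sooner and more often.
weak = lif_spike_times([1.2] * 1000)
strong = lif_spike_times([2.0] * 1000)
print(len(weak), len(strong))
```

The pattern of those output times – not the internal voltage variable – is what an emulation would need to reproduce faithfully.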
There are four parts to the Carboncopies road map, each representing a consensus of all those involved in whole brain emulation. The parts all work in parallel, and are all equally essential. We must test our hypotheses about what to include in the emulation and at what level of detail. We need to devise suitable hardware and software to run emulations. We need data about how the neurons and synapses interconnect – the kind of work of various ongoing “connectome” projects. And we need to record the shape of the electrical responses during activity throughout the brain so that the parameters of the emulation can be tuned correctly – we call these “reference responses”.
Hypothesis testing is being carried out by various researchers. For example, David Dalrymple is now on leave from Harvard University to work on emulating the brain of Caenorhabditis elegans, a nematode worm which has only 302 neurons. He wants to determine the function, behaviour, and biophysics of each neuron, and aims to build a complete simulation of the creature’s nervous system. This should provide valuable information about what to include in the worm emulation, and at what level of detail.
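The logic of driving an emulation from a mapped wiring diagram can be sketched in a few lines. The three-neuron "connectome" below is entirely made up (it is not real C. elegans data), and the threshold-network model is a deliberate oversimplification of what Dalrymple's project involves:

```python
# Toy illustration only: an invented three-neuron wiring diagram with
# made-up connection weights, driving a simple threshold network.
connectome = {
    "sensor": [("inter", 0.8)],                    # sensor excites interneuron
    "inter":  [("motor", 0.9), ("sensor", -0.3)],  # interneuron drives motor
    "motor":  [],
}

def step(active, threshold=0.5):
    """One synchronous update: each neuron sums weighted input from the
    currently active presynaptic neurons and fires if above threshold."""
    drive = {name: 0.0 for name in connectome}
    for pre in active:
        for post, weight in connectome[pre]:
            drive[post] += weight
    return {name for name, d in drive.items() if d >= threshold}

# Stimulating the sensor propagates activity along the wiring diagram,
# reaching the motor neuron two steps later.
state = {"sensor"}
for _ in range(2):
    state = step(state)
print(state)
```

The hypothesis-testing question is precisely which details this kind of caricature leaves out that a faithful emulation could not afford to.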
As for the hardware, the human brain uses a highly parallel network of billions of mostly inactive, low-power processors, or neurons. A good emulation will use a similar substrate, such as brain-like hardware. One example of such “neuromorphic” hardware is the neuron-like chip developed as part of the multimillion-dollar SyNAPSE project by the US Defense Advanced Research Projects Agency.
If we tried to fine-tune and correct the parameters of the billions of neurons in the human brain without a high-resolution map of the “shape” of how they fire, we would probably be computing until the end of time. Instead, we must break the problem down, which is why our map combines both brain structure and function measurements at large scale and high resolution. By the way, in this field, millimetres of tissue or anything beyond a few hundred neurons is considered large.
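In miniature, tuning against a reference response looks like fitting model parameters to a recorded trace. The sketch below fits a single gain parameter of a toy firing-rate model by grid search; the model, the stimulus, and the "recording" are all invented for illustration:

```python
# Sketch of "reference response" tuning: fit one gain parameter of a toy
# rate model so its output matches a recorded trace. Model and data are
# invented; a real brain has billions of such parameters, which is why
# the fitting problem must be broken into local pieces.
def rate_model(gain, stimulus):
    # Toy model: firing rate is a rectified linear function of the stimulus.
    return [max(0.0, gain * s) for s in stimulus]

def mean_squared_error(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

stimulus = [0.0, 0.5, 1.0, 1.5, 2.0]
reference = [0.0, 1.0, 2.0, 3.0, 4.0]   # pretend this was recorded in vivo

# Coarse grid search over candidate gains, keeping the best fit.
best_gain = min((g * 0.1 for g in range(1, 50)),
                key=lambda g: mean_squared_error(rate_model(g, stimulus),
                                                 reference))
print(round(best_gain, 1))
```

Even this one-parameter search needed a measured reference to score candidates against; without such recordings, an emulation's parameters would have nothing to be tuned to.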
As for the connectome, the answer is to look at the morphology of brain cells and fibres. Electron microscopy provides the right resolution, while automated brain-sectioning and imaging helps us cope with the vast amount of data needed to map a brain.
Last year, Kevin Briggman and colleagues at the Max Planck Institute for Medical Research in Heidelberg, Germany, and Davi Bock and colleagues at Harvard Medical School, separately provided proofs of principle for whole brain emulation. They showed it was possible to reconstruct neural circuitry from a brain scan and use it to predict function. (They validated their reconstructions using earlier recordings of the scanned tissue.)
And what about those reference responses? We can get a whole-brain idea of electrical activity at a lower resolution using devices like MRI. And there are new technologies being developed, such as a “molecular ticker tape” pioneered by Konrad Kording at Northwestern University in Evanston, Illinois, and his colleagues. This should let us record brain activity at much higher resolution and from many more neurons simultaneously.
So where are we now? Clearly, there is a lot to a whole brain, and we extend beyond our brains by continuously interacting with the world. But we do not need a complete understanding of all that in order to emulate a whole brain. Instead, we need to describe the behaviour of functional brain components and work out how they communicate – using today’s knowledge and technology. Quite amazingly, a programme to achieve whole brain emulation is emerging. Just follow the science.
This article is inspired by two papers by Randal A. Koene in a special issue of the International Journal of Machine Consciousness on mind uploading. Many of the projects cited were presented and discussed at the Whole Brain Circuit Reconstruction symposium, a satellite to the Annual Meeting of the Society for Neuroscience in New Orleans earlier this month, or will be in 2013 at the Global Future 2045 Congress. Koene is a former professor at Boston University and co-founder of the Neural Engineering Corporation of Massachusetts.