Although my primary background is in math and computing, I'll touch on a lot of biological topics in these pages. I've been thinking about biology a great deal these days, and I'll venture that a lot more computer scientists should be doing the same. Biology provides unparalleled inspiration for the construction of innovative computer systems, and it is also, more and more, going to be a primary application of computer technology. Over the course of the 21st century, the computer and bio revolutions are going to fuse together, yielding a new kind of biodigital science that we can barely grasp the outlines of today.

In addition to these general motivations, my current preoccupation with the biology/computing interface came about as a consequence of very practical considerations. When Webmind Inc. folded, and my collaborators and I formed the successor firm Novamente LLC, one of the choices we had to make was the application area for our software. We were burned out on financial prediction and document management, the domains our Webmind Inc. products had dealt with. We wanted to do something new, something exciting, and something where there was plenty of money to be made by applying advanced AI technology. After a month or so of brainstorming and vacillation, my latent interest in biology surged up, and it occurred to me that perhaps the right direction for us to go in was the application of AI to biology.

Biology had never been my most central area of interest. A deep futurist since my sci-fi-loving youth, I had long been convinced that humans were virtually obsolete, in the sense that the title of "most intelligent creature on Earth" was not much longer to be held by the human race. Even before the PC appeared on the scene and Moore's Law became common knowledge, it seemed clear to me that, given the exponentially increasing quality of technology on the whole, it was only a matter of decades before machines would overtake us somehow. This intuition is more common now than it was in my childhood in the 70's, though even now it's far from a mainstream view. At any rate, it was because of this feeling that I chose artificial intelligence for a career. I figured intelligent computers were going to outpace human beings during my lifetime.

The idea of somehow transforming myself into an AI – perhaps uploading my brain-state into a computer program – appealed to me tremendously as an alternative to the unwelcome prospect of death (stories of reincarnation or an afterlife never struck me as convincing). Of course, I realized that making uploading work would require massive advances both in AI and computer technology, and in brain science. But I had little patience for the tremendous number of seemingly unrelated details one had to memorize to become expert in brain science, so I decided to devote myself to the AI side of things.

What I didn’t realize as a child, however, was that biology was going to advance every bit as rapidly as computing. The biology I studied in college (through reading on my own; I never took any bio classes) is mostly still valid, but it’s no longer at the heart of modern bioscience research. Advances in genetics have been amazing and tremendous – we’ve sequenced the human genome, and can now study the dynamic activity of genes and proteins as they build up cells. And the same can be said of brain science, with PET and fMRI scans allowing us to watch the movement of energy through the brain as it thinks. Sure, there are still limitations, which we’ll discuss in the following pages, but the rate of progress is every bit as explosive as in computing – and is in substantial part due to advances in computing technology. Analysis of gene and protein dynamics, or blood flow in the brain, wouldn’t be possible without fast computers and sophisticated software to crunch the data.

I still believe Real AI is going to revolutionize everything. But now I can see that it's not going to be the only revolution of early 21st century science. As computers gain more and more intelligence, genetic engineering and gene therapy will allow us to improve boring old human beings far beyond our current condition. Freed from psychological pain and emotional disability by advanced neuroactive pharmaceuticals, we will achieve heights of experience that visit us all too rarely today. The ability to jack our sensorimotor cortices directly into computer-generated realities will give us access to simulated realities, and also new ways to communicate with each other, with consequences far more profound than those of chat or e-mail. Genetic engineering may be used to make human-computer synthesis more effective.

There is nothing new about these grandiose visions. What fascinated me in early 2001, on studying biotechnology with a view toward applying our AI system to it, was the specific concrete progress bioscientists were making toward making these visions realities. What I’ve written about in this book is the interface between today and tomorrow. The bioscience of today does not liberate us from psychological pain, nor does it allow us to upload our brains into computers or jack into simulated realities – but it is pushing in that direction, surely and not all that slowly.

The two main bioscience topics I'll touch on in this book are neuroscience and genetics. These are both areas that I've been led to by my own research. Neuroscience because anyone seeking to build a Real AI would be foolish not to learn everything possible about the only generally intelligent system known to man: man's own brain. And genetics because, after careful study, my team and I chose to focus our AI application efforts on the analysis of "gene expression data", information about how actively each gene is being expressed in a cell at a given time – a kind of data produced by new biological tools called "microarrays," which is very resistant to analysis by traditional computer programs. What I'm writing about here are not just ideas I've read about, they're ideas I've lived, and I hope this passion and connection shows through.




The first time I saw a human brain in a medical laboratory, I was overwhelmed by a very strange feeling. How, I wondered mutely, can this three-pound gray mass of nerve cells possibly lead to so much? How can all the social and psychological and cultural complexity we see around us and feel within us, come out of this little lump of meat? This lump of meat, not so different from the similar lumps of meat found in the heads of apes and monkeys – or, for that matter, sheep and cows.

The brain is a puzzle, and obviously, one with significance far beyond the ivory tower.

First of all, understanding the brain is important as part of the human race’s quest for self-understanding. The contradictions and dilemmas and great moral questions that follow us from one culture to the next – aggression, sexuality, gender differences, freedom and bondage, death and suffering – all have their roots in the patterns by which electricity courses through these three-pound masses in our heads.

But that's not all -- the significance of the brain extends beyond even the human condition. For the human brain is, at the moment, our only clear example of a highly intelligent system. Some people think whales and dolphins are highly intelligent – but this remains a speculation. Stanislaw Lem, in his famous book Solaris, hypothesized an intelligent ocean on another planet, whose intelligence was revealed to humans in the complexity of its wave patterns and its strange effects on people's minds. But even if Solaris's real-world equivalent is out there somewhere, we haven't voyaged there yet. For the moment, the human brain, flawed as it is, is our only incontrovertible example of an intelligent system, so if we want to understand the general nature of this mysterious thing called intelligence – or, more practically, create intelligent devices to improve our lives -- the brain is the only really concrete source of inspiration we have.

Neuroscientists try to understand the workings of this amazing three-pound meat hunk, one cell at a time, one specialized region at a time. The basic principles are simple, but the complexities are astounding. As of now, they know a lot about how the brain is structured – which parts do which sorts of things. And they know a lot about cells and chemicals in the brain. But they have remarkably few hard facts about the dynamics of the brain – how the brain changes its state over time, a process that’s more colloquially referred to with terms like “thinking” and “feeling.”

The main cause of this situation is that there's no way, right now, to monitor the details of what goes on in the brain all day, how it does its business. Brain scanning equipment, PET and fMRI scanners, have come a long way, but they are still far too crude to do much beyond telling us which general regions of the brain a person uses to do which general kinds of thinking. They don't yet let us monitor the course of thinking itself. A few bold neuroscientists have made their guesses as to the big picture of how the brain works, but the more timid 99.9% remain focused on their own small corner of the puzzle, all too aware of how much more there is to be understood before general theories about brain function can be systematically verified or falsified.

And the difficulty of understanding brain and mind is not restricted to neuroscience. Other disciplines concerned with other aspects of intelligence – psychology, artificial intelligence, philosophy of mind, linguistics – have run up against dilemmas every bit as tough as those facing neuroscience. Psychologists bend over backwards to create complex experiments that will test even their simplest theories, let alone their subtler ones. AI programmers find current computer hardware inadequate to implement their more ambitious designs. Linguists enumerate rule after rule describing human language use, but fail to encompass its full variety. This cross-disciplinary difficulty has led to the creation of an interdisciplinary field called cognitive science, which wraps all the difficulties up into one unmanageable but fascinating package. Many universities now offer degrees in cognitive science – the study of the baffling phenomenon of brain-mind from every aspect, the piecing together of diverse fragments of evidence to try to arrive at a holistic understanding of this most baffling three-pound meat hunk that lies at the very center of our selves, and holds the key to untold future technologies.

Not surprisingly, cognitive science itself is a rather diverse endeavor, encompassing a number of different approaches to understanding mind-brain. Each approach has its own merits. Among the most fascinating conceptual frameworks under the cognitive science umbrella, however, is the field of artificial neural networks. Scientists and engineers working in this area are creating computer programs that, in some sense, work like the brain does – not just on the overall conceptual level of being intelligent, but by simulating the dynamics of interaction between brain cells. So far none of these neural net programs is anywhere near as intelligent as the brain. But some of them have shed light on various aspects of brain function (memory, learning, disorders like dyslexia), and others are serving very useful functions in the world right now, embedded in other software programs doing everything from translating handwriting into typescript to filtering out porn on the Internet to helping diagnose problems in auto engines.

There is a lot to be learned from these neural network programs’ successes – and from their limitations. We can see just how much, and how little, of what makes the human brain so powerful and wonderful can be captured in simplified, small-scale models of its underlying low-level mechanism. Neural networks model the brain as being somewhat similar to computer algorithms and computer networks, and in this sense they form an excellent foundation for work bringing together brain and computer, human and digital intelligence.

The brain is made of cells called neurons. There are many other cells there too – glia, for example, that fill up the space between the neurons. But all the evidence – dating back to Ramón y Cajal at the end of the 19th century -- shows that neurons are the most important ones. Neurons in groups do amazing things, but neurons individually seem to be relatively simple. A neuron is a nerve cell. Nerve cells in your skin send information to the brain about what you're touching; nerve cells in your eye send information to the brain about what you're seeing. Nerve cells in the brain send information to the brain about what the brain is doing. The brain monitors itself -- that's what makes it such a complex and useful organ!

 
 

  Snail neuron as photographed with a scanning electron microscope

 
 

The neuron works by storing and transmitting electricity. We're all running on electricity! To get a concrete sense of this, look at how electroshock therapy affects the brain – or look at people who have been struck by lightning. One man who was struck by lightning was never again able to feel the slightest bit cold. He'd go outside in his underwear on a freezing, snowy winter day, and it wouldn't bother him one bit. The incredible jolt of electricity had done something weird to the part of his nervous system that experienced cold....

The neuron can be envisioned as an odd sort of electrical machine, which takes charge in through certain "input connections" and puts charge out through its "output wire." Some of the wires give positive charge -- these are "excitatory" connections. Some give negative charge -- these are "inhibitory." But the trick is that, until enough charge has built up in the neuron, it doesn't fire at all. When the magic "threshold" value of charge is reached, all of a sudden it shoots its load. From a low-level, mechanistic view, this "threshold dynamic" is the basis of the incredible complexity of what happens in the brain.
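To make this threshold dynamic concrete, here is a minimal sketch in Python, a toy illustration of my own rather than anyone's production code: a single artificial neuron sums its excitatory and inhibitory inputs and fires only when the accumulated charge crosses a threshold. The weights and threshold are arbitrary, chosen purely for illustration.

```python
# A toy threshold neuron: weighted inputs are summed, and the neuron fires
# only if the total "charge" reaches the threshold. All numbers are
# arbitrary, chosen purely for illustration.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted input charge reaches the threshold."""
    charge = sum(x * w for x, w in zip(inputs, weights))
    return charge >= threshold

# Three input connections: two excitatory (positive), one inhibitory (negative).
weights = [0.6, 0.7, -0.5]

print(neuron_fires([1, 1, 0], weights))  # 1.3 >= 1.0 -> True: it fires
print(neuron_fires([1, 0, 0], weights))  # 0.6 <  1.0 -> False: below threshold
print(neuron_fires([1, 1, 1], weights))  # 0.8 <  1.0 -> False: inhibition holds it back
```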

Of course, the connections between neurons aren’t really as simple as electrical wires – that’s just a metaphor. In reality, each inter-neuron connection is mediated by a certain chemical called a neurotransmitter. There are hundreds of types of neurotransmitters. When we take drugs, the neurotransmitters in our brain are affected, and neurons may fire when they otherwise wouldn’t, or be suppressed from firing when they otherwise would.

And the near-consensus among neuroscientists is that most learning takes place via modification of the connections between neurons. The idea is simple: Not all connections between neurons conduct charge with equal facility. Some let more charge through than others; they have a "higher conductance." If these conductances can be modified even a little, then the behavior of the overall neural network can be modified drastically. The rule of thumb is that connections between neurons that are repeatedly active at the same time get strengthened. This leads to a picture of the brain as being full of "self-supporting circuits," circuits that reverberate and reverberate, keeping themselves going. This concept, now called "Hebbian learning," goes back to Donald Hebb in the 1940's, and it's held up since then through all the advances in neuroscience.
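To show the flavor of this, here is a toy Python sketch of a Hebbian-style update rule: a connection whose two endpoint neurons are active at the same time gets strengthened, while connections that go unused slowly decay. The learning and decay rates are arbitrary illustrative values, not numbers drawn from neurophysiology.

```python
# A toy Hebbian update: a connection whose two endpoint neurons are active
# at the same time is strengthened; otherwise it decays slightly. The
# learning rate and decay rate are arbitrary illustrative values.

def hebbian_update(weight, pre_active, post_active,
                   learning_rate=0.1, decay=0.01):
    if pre_active and post_active:
        return weight + learning_rate   # "cells that fire together wire together"
    return weight * (1 - decay)         # unused connections slowly weaken

w = 0.2
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 1), (0, 0)]:
    w = hebbian_update(w, pre, post)

print(round(w, 3))  # the repeatedly co-active connection has grown stronger
```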

Christof Koch, a well-known neuroscientist, believes there is some fairly subtle electrochemical information processing going on inside each neuron, and in the extracellular space between neurons. If this is so, no one knows exactly what role it plays in intelligence. A few mavericks go even further, and argue that the neuron itself is a complex molecular computer. Anesthesiologist Stuart Hameroff has suggested that the essence of intelligence lies in the dynamics of the neural cytoskeleton -- in the molecular biology inside the neurons themselves. Roger Penrose, the famous British mathematician, has taken this one step further, arguing that one needs a fancy theory of quantum gravity to explain what's going on in the cytoskeletons of neurons, and tying this in with the idea that quantum theory and consciousness are somehow interrelated. There's no real evidence for these theories, but it's an indicator of how little we understand about the brain that these outlier theories can't be 100% convincingly empirically refuted at this stage.

Neurons are the stuff of the brain; but they live at a much lower level than thoughts, feelings, experiences, knowledge. Above the level of interesting mind-stuff, on the other hand, we have the various regions of the brain, with their various specialized purposes. In between is the interesting part – and the least known.

One hears a lot about left brain versus right brain, but in fact the really critical distinction is forebrain versus hindbrain and midbrain. The hindbrain is situated right around the top of the neck. What it does is mostly to regulate the heart and lungs -- and control the sense of taste. The midbrain, resting on top of the hindbrain, integrates information from the hindbrain and from the ears and eyes, and it appears to play a crucial role in consciousness. Collectively, the hindbrain and midbrain are referred to as the brainstem.

The hindbrain is old – it's basically the same in humans as in reptiles, amphibians, fish, and so forth. The forebrain, on the other hand, is much more complex in mammals than in other animals. It's where our smarts live, and it has its own highly complex and variegated structure. The mammalian forebrain is subdivided into three main parts: the hypothalamus, the thalamus and the cerebrum. The cerebellum, a separate structure tucked beneath the back of the cerebrum, has a three-layered structure and serves mainly to coordinate motor functions. The cerebrum is the seat of intelligence -- the integrative center in which complex thinking, perceiving and planning functions occur. How it works, we have only a very vague idea.

 
 

  MRI brain scan indicating a Chiari I malformation

 
 

And all this is just the coarsest level of high-level brain structure. With PET and fMRI brain scan equipment, scientists can go one level deeper. By getting a rough picture of which parts of the brain are more active at which times, they can see what parts of the brain are involved in what activities. So, for example, attention seems to involve three different systems: one for vigilance or alertness, one for executive control of attention, and one for disengaging attention from whatever it was focused on before. Three totally different systems, all coming together to let us be attentive to something in front of us.... Depending on which of the three systems is damaged, you get different kinds of awareness deficits, different neurological disorders.

All these different parts of the brain are made of neurons, passing electricity around via neurotransmitters, in reverberating cycles and complex patterns. But the different kinds of neurons and neurotransmitters in the different parts of the brain, and the different patterns of arrangement and connectivity of the neurons in the different parts of the brain, obviously make a huge practical difference.

Atoms and molecules are a reasonable metaphor here. All brain regions are made of neurons, just as all molecules are made of atoms. Different atoms have very different properties, as do different types of neurons. And different molecules, formed from different types of atoms, can do very different things – just as different brain regions are specialized for different functions. Only the very simplest molecules can, in practice, be analyzed in terms of atomic physics – otherwise we have to use the crude heuristics of theoretical chemistry. Complex molecules like proteins can barely even be studied with computer simulations; the physical chemistry is just too tricky. Similarly, so far, only the very simplest brain regions and functions can be understood in terms of neurons and their interactions. And complex brain functions can't even be understood via computer simulations, yet. There are too many neurons, and the interconnectivity patterns and ensuing dynamics are just too tricky.

The model of the brain as a network of neurons, passing charge amongst each other, is a crude approximation to the teeming complexity of the brain as it actually is. But it has a number of advantages. Simplicity and comprehensibility, for instance. And, just as critically, amenability to mathematical analysis and computer simulation. Modeling all the molecules in the brain as a set of equations is an impossible task, in practice. It might be necessary for a real understanding of brain function, but I certainly hope not. Even modeling the chemical dynamics inside a single neuron is a huge exercise, to which some excellent scientists devote their entire lives, and which tells you extremely little about the functioning of the brain as a whole and the emergence of intelligence from matter. On the other hand, if one is willing to view the brain as a neural network, then one can construct mathematical and computational models of the brain quite easily. It’s true that the brain contains roughly a hundred billion neurons, and no mathematician can write down that many equations, and very few computers can simulate a system that large with reasonable efficiency. But at least one can simulate small neural networks, to get some kind of feel for how brain dynamics works.

This is the raison d'etre of the "neural networks" field, which was launched by the pioneering work of cyberneticists Warren McCulloch and Walter Pitts, in the early 1940's. What McCulloch and Pitts did in their first research paper on neural networks was to prove that a simple neural network model of the brain could do anything – could serve as a "universal computer." At this stage computers were still basically an idea, but Alan Turing and others had worked out the theory of how computers could work, and McCulloch and Pitts' work, right there at the beginning, linked these ideas with brain science.
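The flavor of the McCulloch-Pitts result can be conveyed in a few lines of Python: threshold neurons, with suitably chosen weights, behave like logic gates, and networks of logic gates can compute anything a digital computer can. The particular weights and thresholds below are one standard choice, not the only one.

```python
# Threshold neurons acting as logic gates, in the spirit of McCulloch and
# Pitts: with suitable weights and thresholds a single unit computes AND,
# OR or NOT, and such gates can be wired into any digital computation.

def tlu(inputs, weights, threshold):
    """Threshold logic unit: 1 if the weighted sum reaches the threshold."""
    return int(sum(x * w for x, w in zip(inputs, weights)) >= threshold)

AND = lambda a, b: tlu([a, b], [1, 1], threshold=2)
OR  = lambda a, b: tlu([a, b], [1, 1], threshold=1)
NOT = lambda a:    tlu([a],    [-1],  threshold=0)

# NAND, built from the units above, is by itself a universal gate.
NAND = lambda a, b: NOT(AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} | AND={AND(a, b)} OR={OR(a, b)} NAND={NAND(a, b)}")
```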

Artificial neural nets have come a long way since McCulloch and Pitts. But even after all these years, the main thing this kind of work does is to expose our ignorance.

For instance, to decide which artificial neurons should connect to which, in an artificial brain-simulator program, requires detailed biological knowledge that is hard to come by. Deciding what types of neurons to have in one’s toy neural net requires similarly elusive knowledge, if simulating the brain accurately is really one’s goal. A few neural net specialists focus on acquiring and utilizing this data – Stephen Grossberg of Boston University is the chief example.

Gerald Edelman, who won a Nobel for his work in immunology, has taken a different approach, trying to abstract beyond these difficult issues while remaining within the computational brain modeling domain, by modeling networks of neuronal groups rather than networks of individual neurons. He has proposed some detailed and fascinating theories about the connectivity patterns of neuronal groups, and the dynamics of neural group networks, and embodied these theories in computer simulations of vision processing. Edelman doesn’t consider himself a neural net theorist per se, but the basic line of thinking and mathematical modeling is not far off.

I have done some research in this area myself, tying in the "neuronal group" idea with the notion of Hebbian learning mentioned above, in which synaptic conductances modify themselves adaptively based on the patterns of electricity flowing through them. I recently wrote a paper called "Hebbian Logic," describing how artificial neural networks embodying a specific variant of Hebbian learning can give rise to logical reasoning behavior, on an emergent level. Suppose one posits that concepts like "cat" and "dog" and "beauty", as well as procedures like "pick my nose" or "calm my child down", are represented by networks of neuronal groups. What I have shown is that, if one assumes that learning is carried out by the right variant of Hebbian learning, then the dynamics of interaction between networks of neuronal groups will automatically follow the rules of logical inference, to within a decent degree of approximation. The details are subtle, but the overall message is simple to understand. Surely and not all that slowly, one mathematics paper and neurophysiological experiment at a time, we are building a bridge from brain to mind.

But, okay – all this theory is interesting, but what are these artificial neural networks supposed to do? As even the most overintellectual of us have surely noticed at some time during our lives, the brain is connected to a body. On the other hand, most artificial neural networks are not attached to any real sensors or actuators. There are exceptions, such as neural nets used in vision processing or robotics. But other than these cases, neural network programs must be carefully fed data, and must be carefully studied to decode their dynamical patterns into meaningful results in the given application context. It often turns out that figuring out how to represent data to feed to a neural network program requires more intelligence than the neural network program itself embodies.

In practice, the biological-modeling motivated research carried out by neural net researchers like Grossberg and Edelman is fairly separate from more pragmatic, engineering-oriented neural network research. Most engineering applications of neural nets aren't based on serious brain modeling at all; rather, the brain's neural network is taken as general conceptual inspiration for the creation of an artificial neural network in software, and then computationally efficient learning algorithms are applied to this artificial neural network. It seems likely that the standard learning algorithms for artificial neural nets are actually more efficient at learning than the brain's learning algorithms – in the context of very small neural networks, say hundreds to thousands of neurons. On the other hand, they don't scale up well to billions of neurons. The brain's learning algorithms, by contrast, work badly on the small scale but remarkably well on the large. This holds as well for the special "logic-friendly" Hebbian learning rules I've been studying in my own work. They cause neural nets to give rise to emergent logical behavior – but only when the neural nets are really big, minimally hundreds of thousands of neurons, and preferably millions or hundreds of millions.

The brain has at least a hundred billion neurons, but for current practical applications of artificial neural nets, we're usually talking hundreds to tens of thousands of neurons. This limitation is hard to get around, because it's imposed by the inefficiency of implementing neural networks on contemporary computers, which can only do one thing at a time (e.g. let one simulated neuron fire simulated charge at a time), unlike the brain, whose distributed physical embodiment allows all neurons to chemically and electrically act and interact at all times. The solution is to use supercomputers, or distributed Internet-based computing, but neither of these is economically practical for most real-world industry neural net applications today.

For instance, Rulespace (www.rulespace.com) uses a neural network program to recognize Internet pornography. America Online uses it to filter out porn for customers who request this feature. But the neural net inside their program can't actually read text. Rather, there's other code that recognizes key words in Web pages, and then uses the presence or absence of each key word in a Web page to control whether a certain neuron in a neural network gets activated (given simulated electricity) or not. Whether or not the Web page is judged pornographic is then determined by whether a special neuron (the "output neuron") is active, after activation has been given a while to spread through the network.
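A toy sketch of this general scheme might look like the following. This is emphatically not Rulespace's actual system: the keywords, weights and threshold are invented, and a real filter would use thousands of learned weights rather than four made-up ones.

```python
# A toy keyword-to-neuron content filter, illustrating the scheme described
# above (NOT Rulespace's actual code). Keyword detectors switch input
# neurons on or off, activation flows through one layer of invented
# weights, and a single output neuron makes the call.

KEYWORDS = ["casino", "xxx", "recipe", "homework"]
WEIGHTS = [0.4, 0.9, -0.6, -0.7]   # stand-ins for learned weights
THRESHOLD = 0.5

def classify(page_text):
    inputs = [1 if kw in page_text.lower() else 0 for kw in KEYWORDS]
    output_activation = sum(x * w for x, w in zip(inputs, WEIGHTS))
    return "blocked" if output_activation >= THRESHOLD else "allowed"

print(classify("Try our XXX casino tonight!"))                 # blocked
print(classify("Grandma's cookie recipe and homework help"))   # allowed
```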

In an application like Rulespace, a neural network is being used as a simple mathematical widget. Simulation of the brain in any serious sense is not even being attempted. The connectivity pattern of the neurons, and the inputs and outputs of the neurons, are engineered to yield intelligent performance on the particular problem the neural net was designed to solve. This is pretty typical in the neural net engineering world.

Real-world neural net engineering gets quite complex. For instance, to get optimal performance for optical character recognition (OCR), instead of one neural net, researchers have constructed modular nets, with numerous subnetworks. Each subnetwork learns something very specific, then the subnetworks are linked together into an overall meta-network. One can train a single network to recognize a given feature of a character – say a descender, or an ascender coupled with a great deal of whitespace, or a collection of letters with little whitespace and no ascenders or descenders. But it is hard to train a single network to do several different things – say, to recognize letters with ascenders only, letters with descenders only, letters with both ascenders and descenders, and letters with neither. Thus, instead of one large network, it pays to break things up into a collection of smaller networks in a hierarchical architecture. If the network learned how to break itself up into smaller pieces, one would have a very impressive system; but currently this is not the case: the subnets are carefully engineered by humans.
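A compact way to picture the modular idea is the sketch below, in which tiny specialist "subnets" (here just stand-in functions rather than trained networks) each report on one feature of a character, and a hand-wired combining step interprets the pattern of reports.

```python
# A cartoon of the modular idea: tiny specialist "subnets" (stand-in
# functions rather than trained networks) each report on one feature of a
# character, and a hand-wired combining step interprets the pattern.

def has_ascender(char):    # stand-in for a subnet trained to spot ascenders
    return int(char in "bdfhklt")

def has_descender(char):   # stand-in for a subnet trained to spot descenders
    return int(char in "gjpqy")

def describe(char):
    features = (has_ascender(char), has_descender(char))
    return {
        (1, 0): "ascender only",
        (0, 1): "descender only",
        (1, 1): "ascender and descender",
        (0, 0): "neither",
    }[features]

for c in "badge":
    print(c, "->", describe(c))
```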

Some experiments with fairly simple neural nets, such as the ones used in these practical applications, have had fascinating parallels in human psychology. In the mid-80's, for instance, Rumelhart and McClelland created an uproar with simple neural networks that learn to conjugate verbs. They showed that the networks, in the early stages of learning, made the same kinds of errors as small children. Other people, a few years later, trained neural nets to translate written words into sounds. They showed that if they destroyed some of the neurons and connections in the network, they obtained a dyslexic neural net. Which is the same thing that happens if you lesion someone's brain in the right area. You can do the same sort of thing for epilepsy: by twiddling the parameters of a neural network you can get it to have an epileptic seizure. Of course, none of this proves that these neural nets are good brain models in any detailed sense, but it shows that they belong to a general class of "brain-like systems."

 

 
 
 

 
 
 

Perhaps the most fascinating neural net project around is Hugo de Garis's Artificial Brain project, originally conducted at ATR in Japan, then pursued for a while at Starlab, in Brussels … and now, following Starlab's bankruptcy, occupying Hugo's non-teaching hours as he serves as a professor at Utah State University in Logan. Utah State is not MIT, but the fact that de Garis wound up there is no big shock – since when have the most radical, exciting, maverick scientific approaches originated in the most established institutions? Sometimes they do, but as often as not, radical insights come from way outside the establishment, where thinking is freer and funding is scarcer.

The Artificial Brain Project is de Garis's attempt to create a hardware platform for advanced artificial intelligence, using special chips called Field-Programmable Gate Arrays to implement neural networks. As de Garis says, his CBM ("CAM-Brain Machine") platform "will allow the assemblage of 10,000's of evolved neural net modules. Humanity can then truly start building artificial brains. I hope to put 10,000 modules into a kitten robot brain…." The CBM is listed in the Guinness Book of World Records as the "World's Largest Artificial Brain." This research remains unfinished, but it will be interesting to see where it leads.

This is mad scientist type stuff, and in fact de Garis plays the role quite well. Many have told him he looks the part of a mad scientist. He doesn’t mind.

Recently a computer science researcher, a colleague of mine, asked me if I knew him. I said yes, a little. The next question: “So is he truly insane or not?”

My answer: "Why are you asking me? Do you think it takes one crazy man to recognize another?"

Indeed, in some ways Hugo is a little further out even than I am, and I’m a pretty good mad scientist myself. But the future will be pretty far out compared to the present, so one wouldn’t expect the most average everyday people to have the best futuristic insights, let alone the deepest technical ideas. I recall once reading something about the great sci-fi writer Olaf Stapledon, something roughly along the lines of “It’s true, Stapledon had a few eccentricities. But then, your ordinary everyday Joe doesn’t sit around in his spare time and compose vast poetic novels recounting the entire history of life across the universe.”

De Garis is a bit less of an optimist than I am, however. He believes that, sometime during the next century, there’s going to be a world war between advocates of intelligent computers and those who want to extinguish them to save humanity. On the other hand, he doesn’t see this as a reason to halt his AI work. He reckons this kind of conflict is inevitable, whether or not he works on it, so he’s going to put his time into making sure AI is created as responsibly as possible – without being quite sure that “responsibly as possible” is very responsible at all.

Born in Australia, 54, with two adult children, de Garis has lived in 6 countries (Australia, England, Holland, Belgium, America, Japan) and now says he feels like a foreigner wherever he goes. Like Turchin, he started off his career as a theoretical physicist, working with quantum pioneer and philosophical maverick David Bohm -- but whereas Turchin turned to theoretical computer science, de Garis shifted toward artificial intelligence.

 
 

  Two great brains together (Hugo de Garis & his CBM)

 
 

The CBM project was initiated during the 8 years he spent working in the research lab of ATR, a large Japanese telecommunications firm. He left Japan in frustration with its culture, and with his brain-building project only half done. As he puts it on his website, “I lived in Japan … because I felt that country offered me the best opportunities to achieve my long term dream of building artificial brains…. However, Japan's suppression of big egoed individualism, its utter intolerance of western criticisms of its third world social, political and intellectual values, simply enraged me. I had to leave its intellectual sterility to have a life of the mind…. It’s a culture that is first world materially, but third world socially, and is quite unsuited for the vast majority of westerners to live in - too socially backward, too insular, too uncosmopolitan, too closed, too racist, too chauvinist, too passively obedient, too feudal, too fascist for westerners to tolerate. Japan needs two generations of heavy social engineering to catch up with the west socially.”

Whoa! My Japanese colleague Takuo Henmi’s comment on reading this was: “There’s some truth to these things, but he pushes it way too far.” I also noticed, when I last saw Hugo, that he was very much in love with his Japanese girlfriend… who didn’t seem to mind his own special brand of “big-egoed individualism” at all!

In ATR’s Human Information Processing lab, run by Katsunori Shimohara – a first-rate techno-visionary in his own right – de Garis developed the theory and designed the details of the CBM, and then contracted out the actual engineering of the machine to some American hardware experts (Genobyte Inc., www.genobyte.com). It’s a device of incredible power. Unlike the general-purpose computers we use everyday, it’s built specifically to do one thing. It combines genetic algorithms, a computational simulation of evolution by natural selection, with neural networks, a computational simulation of the brain. It holds a huge number of little neural networks, computer programs roughly emulating the structure of the brain. It can pass information between these neural networks, and create new neural networks by evolving them according to specified “fitness criteria.” If one wants a little neural net to solve a certain problem, one casts this problem as a fitness criterion, gives the problem to the CBM, and the CBM will make a neural net that solves your problem. The idea is that if one hooks together a lot of little neural nets solving relevant, interrelated problems, then one has a brain-mind.

This is one lean, mean, evolving-neural-nets-with-the-genetic-algorithm machine. It performs as fast as 10,000 Pentium III computers would, if they were turned to this particular task. So far, 4 of them have been built. The current pricetag is $400,000. Only a few more can be built using the exact current design, because one of the components (a special Xilinx field programmable gate array) is not being produced by the manufacturer anymore. But other similar components could be substituted if more customers were found.

The main limitation of the system seems to be the artificial way that you have to set up the “fitness criterion” in order to have the thing evolve a neural net for you. You have to specify exactly what outputs the neural net is supposed to provide, when given various inputs. The CBM then uses simulated natural selection to “evolve” a neural net that produces the specified output given the specified inputs. It maintains a whole population of neural nets, evaluates how suitable the input/output behavior of each one is, and then takes the best ones and mutates them and combines them with each other to get a new population of neural nets, which are evaluated all over again…. This “genetic algorithm” methodology is great when it’s applicable, but not all learning problems are easily cast in this artificial way – for instance, learning how to interact with other minds isn’t about producing the exact right output for each given input, there’s a lot more subtlety to it. One suspects that when the time finally comes to integrate the CBM with a fully-featured AI system, with long-term memory, perception, action and the whole kit and kaboodle, some substantial modifications will be required. But even as it is, the CBM is surely a huge boon to AI research – and a powerful reminder of the lack of imagination of the mainstream AI community. If one maverick researcher can get this amazing AI hardware created, imagine what could be done with a concerted effort to get real AI working, by the governments, universities and corporations of the world.
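To give a feel for this evolve-to-a-fitness-criterion loop, here is a toy Python version, vastly simpler than the CBM: the "neural nets" are just weight-and-threshold triples, the fitness criterion is a four-row truth table, and the population size and mutation rate are arbitrary choices of mine.

```python
# A toy version of the evolve-to-a-fitness-criterion loop, vastly simpler
# than the CBM: each "neural net" is just a (w1, w2, threshold) triple, the
# fitness criterion is a four-row truth table (here the AND function), and
# the population size and mutation rate are arbitrary.

import random

TARGET = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}   # the fitness criterion

def output(net, x):
    w1, w2, threshold = net
    return int(w1 * x[0] + w2 * x[1] >= threshold)

def fitness(net):
    return sum(output(net, x) == y for x, y in TARGET.items())

def mutate(net):
    return tuple(gene + random.gauss(0, 0.3) for gene in net)

population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                  # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best), "out of", len(TARGET), "cases matched by", best)
```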

De Garis himself understands that the CBM has some fairly serious limitations, but he reckons it's already pushing the limits of what can be done with current science and technology. "Real AI," he says, "is still many decades away. We still haven't a clue how the brain works. What is an idea? How is memory stored? …. I think humanity will have to wait until we can 'scan' the brain, which is probably a decade or two away. Then we can store the scanned results and analyze them with massively parallel computers that future technology will give us. Circuits keep doubling their speed and densities every year or two, so 20 years from now our circuits will be on a molecular scale, with trillions of trillions of them. Then I think humanity will be able to tackle real AI. Our present tools are too primitive."

 
 
 

 
 
 

One of his goals, in the short run, is to create a robot kitten, Robokoneko, with a billion artificial neurons. This project was carried forward, for quite some time, by Dr. Michael Korkin of Genobyte, the one who actually built the CBM hardware to de Garis's specifications. (See http://foobar.starlab.net/~degaris/ for details on the CBM and Robokoneko.) Building the kitten will be a tremendous learning experience, one step on the path to creating an artificial human brain. Right now the project seems to be slowed down by some financial difficulties to do with the bankruptcy of Starlab and its financial relationship with Genobyte, but let's hope these are resolved soon and things can get back fully on track.

I regret that I have not yet seen the CBM run, although I did see the machine itself. When I visited de Garis at Starlab in summer 2001, the machine was locked in a room at one end of the huge and beautiful Starlab building – a palatial construct that once was the Czech embassy to Belgium, situated in the semi-rural outskirts of Brussels. Starlab was rather surreal in appearance due to the fact that it had just gone bankrupt two weeks earlier, and the only person still using the building was de Garis, who was one of a handful of scientists who had not only worked in the building but lived there in an apartment directly attached to the research labs. It was an empty palace full of half-cleaned-out desks and strange machinery like the CBM. When I came back to his apartment late at night and found he wasn’t home yet, I became very familiar with the whole building as I searched for a way to sneak in. A basement door that was unlocked gave me entry to the building, and then a butter knife applied to the lock on de Garis’s apartment door got me into the apartment and safely into bed. We had a good laugh about the excellent security being used to guard the incredibly valuable equipment in the building.

Starlab seems to have been an amazing place for the few years it lasted; I wish I'd had the chance to work there myself. We at Webmind Inc., constantly struggling to balance our long-term AI R&D goals with short-term product development goals, would sometimes laugh jealously about Starlab's motto: "Where 100 years means nothing." "100 years," we'd say – "damn. All we need is 5 years more to finish our thinking machine. I wish we had enough money to let us focus on nothing but that for the next five years… let alone 100." Well, Starlab didn't last 100 years; in fact it died just a couple months after Webmind Inc. did – 2001 was definitely not the year for radically innovative research.

De Garis, in person, gives an impression of tremendous intensity and intelligence. He has the ambition to see what has to be done on a grand scale, and then set about following a complex long-term plan aimed at achieving his goals. On the other hand, he's also not afraid of confronting and even embodying the contradictoriness of reality. It doesn't worry him particularly to push hard toward creating real AI, while at the same time publicizing the dangers of this quest, and the possibility that it may indirectly lead to mass destruction.

De Garis’s inner contradictions, it would seem, are on the extreme side even for wild-eyed technopioneers. For instance, Bill Joy, the Chief Scientist of Sun Microsystems, and Jaron Lanier, the virtual reality pioneer, have come to the media recently with strong anti-technology statements, and in spite of this they continue to pursue high-technology work. But Joy and Lanier are working on particular pieces of technology that are only indirectly related to the technologies they’re warning us about. They’re warning us about AI and nanotechnology and genetic engineering, and then working on Internet distributed computing and computer vision. Lanier told me openly when we first met: “I’m your ideological arch-enemy.” He believes AI is impossible, and even if possible probably dangerous; yet he works on computer vision research, modeling the brain’s perceptual algorithms in software that does computer graphics tricks with 3D faces. But compared to the mild conflicts between belief and action presented by people like Joy and Lanier, de Garis’s life would seem to pose a far more acute paradox. De Garis is working directly on building brains, and then telling us that brain building may destroy the world.

Quite simply, strikingly, and seriously, he predicts a late 21st century world war between two human groups, whom he terms the Cosmists and the Terrans. The Cosmists will be in favor of creating "artilects," superhuman artificial brain-minds, the next phase in the evolution of intelligence. The Terrans will be radically opposed to this kind of technology development, and willing to kill billions in order to prevent the advent of artilects – because, after all, the artilects will have the power to destroy humanity altogether.

He is well aware of the contradictory nature of his roles as artificial brain builder and visionary pessimist. "I feel I am part of the problem," he says … "the problem being, 'Who or what should be dominant species in the 21st century?' …. I am helping to pioneer this brain building field, so I feel a strong moral obligation to stimulate discussion on this enormous question. It is for this reason that I try to 'raise the alarm' in the world media, by making the general public conscious that next century's global politics will be dominated by the 'Artilect Question', i.e. do we allow the 'artilects' (artificial intellects) to take over, or not."

Crazy? Certainly not. Out of the ordinary? Well, your average ordinary Joe doesn’t go around creating artificial brain machines, now does he. And even if Robokoneko never comes to fruition, because of funding problems, de Garis’s work has advanced our understanding of the brain building problem considerably. He has shown us what can be done with highly specialized hardware, oriented specifically toward one key aspect of computational intelligence. And I’m sure he will teach us much more in years to come.

Personally, I find de Garis’s political prognostications much less convincing than his scientific work. If there is another world war, which I doubt, I suspect it will be centered around old fashioned concerns like religion and money and national pride rather than being focused on artilects in any direct way. But even so, the dilemma that de Garis points out is real and inescapable. This contradiction between AI boosters and AI detractors is going to be a huge part of the human dialogue over the next century, though probably in a more complex manner than de Garis envisions, mixed up with the whole mess of other, more familiar human issues and conflicts.

In the end, even if one doesn’t agree with all his theories and predictions, one has to admire the man for his courage to confront large scientific and moral issues directly, instead of, like most of his colleagues, hiding in a little tiny corner of the world, working on narrowly-defined research problems and letting the big issues evolve of their own accord. We could use more mad scientists like this one.

The essence of the brain lies not in what it is at any particular time, but rather in what it does – how it learns, adapts, and changes – in short, its dynamics. De Garis has realized this well, and the essence of his CBM lies in how it uses genetic algorithms to evolve neural networks carrying out useful functions. Furthermore, his genetic algorithm doesn't even create fully featured neural nets: it creates "initial conditions", baby neural nets that have to evolve and grow into useful neural network structures.

On the other hand, most of the artificial neural networks used in practical applications these days are pretty simple dynamically. In order to guarantee reliable functionality in their particular domains, they’re restricted to very limited behavior regimes. It’s easy to predict what they’ll do overall, although the details of the activation spreading inside them may be wildly fluctuating. Neural nets constructed for biological modeling are usually allowed a freer rein to evolve and grow, but this is rarely the focus of research. In general, the potential for really complex and subtle dynamics in neural networks has hardly been explored at all.

And this is a shame, because the "threshold" behavior of a neuron conceals the potential for immense dynamical complexity, of a very psychologically relevant nature. Think about it: let's say a neuron holds almost enough charge to meet the threshold requirement, but not quite. Then a chance fluctuation increases its charge just a little bit. Its total store of charge will be pushed over the threshold, and it will shoot its load. A tiny change in input leads to a tremendous change in output -- the hallmark of chaos. But this is just the beginning. This neuron, which a tiny fluctuation has caused to fire, is going to send its output to other neurons. Maybe some of these are also near the threshold, in which case the extra input will likely influence their behavior. And these may set yet other neurons off -- et cetera. Eventually some of these indirectly triggered neurons may feed back to the original neuron, setting it off yet again, and starting the whole cycle from the beginning. The whole network, in this way, can be set alive by the smallest fluke of chance! In this way you get chaos and complexity out of these simple formal networks.
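A small simulation makes the point vividly. In the sketch below (an illustration of my own, with arbitrary network size, weights and thresholds), the same randomly wired threshold network is run twice; the only difference is that in one run a single neuron sits just below its firing threshold, and in the other a tiny fluctuation has pushed it just over. The firing patterns of the two runs then part ways.

```python
# Two runs of the same randomly wired threshold network. In run A one
# neuron sits just below its firing threshold; in run B a tiny fluctuation
# has pushed it just over. The network size, weights and threshold are
# arbitrary; the point is only that one tiny difference can cascade.

import random

random.seed(1)
N = 20
THRESHOLD = 0.5
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def step(charge):
    fired = [c >= THRESHOLD for c in charge]
    # A firing neuron sends charge along its outgoing connections and resets;
    # silent neurons keep whatever charge they already hold.
    new_charge = [
        (0.0 if fired[i] else charge[i])
        + sum(weights[j][i] for j in range(N) if fired[j] and j != i)
        for i in range(N)
    ]
    return new_charge, fired

run_a = [0.0] * N
run_b = [0.0] * N
run_a[0] = THRESHOLD - 0.001    # just below threshold: stays silent
run_b[0] = THRESHOLD + 0.001    # the tiny fluctuation: fires, and may cascade

for t in range(8):
    run_a, fired_a = step(run_a)
    run_b, fired_b = step(run_b)
    differing = sum(a != b for a, b in zip(fired_a, fired_b))
    print(f"step {t}: {differing} neurons firing differently")
```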

The one biological researcher who has really grabbed hold of this aspect of neural nets is Walter Freeman, who has shown, in his work with the olfactory part of the brain, that real neural networks display the same complex and crazy dynamics as artificial neural nets, and that this complex dynamics is critical to how brains solve the problem of identifying smells. Now, if we use complex, near-chaotic dynamics to identify smells, it's hardly likely that the neural nets in our frontal lobes use simple dynamics like those embodied in current neural network based software products. We have a long way to go before our toy neural net models catch up with the neural nets in our heads!

Biologists and pragmatic computer scientists each have their own use for neural network models, their own neural network learning schemes, and so forth. My own personal interest in neural nets, on the other hand, has been oriented neither toward brain modeling, nor toward immediate practical engineering applications. Rather, it's been oriented toward real AI – toward the creation of truly intelligent computer programs, programs that, like humans, know who they are and behave autonomously in the world. Viewed in this light, current neural network research comes up rather lacking. Now, this isn't a tremendously shocking conclusion, since with the exception of de Garis's brain machine, researchers are playing around with vastly smaller neural networks than the one the brain contains. But it's interesting to delve into the precise reasons why current neural net work isn't terribly relevant to the task of artificial mind creation.

To understand the role of neural nets in the history of AI, one also has to understand their opposition. When I first started studying AI in the mid-1980’s, it seemed that AI researchers were fairly clearly divided into two camps, the neural net camp and the logic-based or rule-based camp. This isn’t quite so true anymore, but it’s still a decent first order approximation.

Whereas neural nets try to achieve intelligence by simulating the brain, rule-based models take a totally different approach. They try to simulate the mind's ability to make logical, rational decisions, without asking how the brain does this biologically. They trace back to a century of revolutionary developments in mathematical logic, culminating in the realization that Leibniz’s dream of a complete logical formalization of all knowledge is actually achievable in principle, although very difficult in practice.

Rule-based AI programs aren't based on self-organizing networks of autonomous elements like neurons, but rather on systems of simple logical rules. Intelligence is reduced to following orders. In spite of some notable successes in areas like medical diagnosis and chess playing and financial analysis, the biggest thing this approach has taught us is that it's really hard to boil down intelligent behaviors into sets of rules – the sets of rules are huge and variegated, and the crux of intelligence becomes the dynamic learning of rules rather than the particular rules themselves.

Now, to most any observer not hopelessly caught up on one or another side of the debate, it's obvious that both of these ways of looking at the mind – rules or neural nets -- are extremely limited. True intelligence requires more than following carefully defined rules, and it also requires more than random or rigidly laid-out links between a few thousand artificial neurons.

My own attempt at a solution to this problem, in the Novamente software system developed by myself and my colleagues, has been somewhat like Gerald Edelman's. My AI program is based on entities called nodes, which are roughly of the same granularity as Edelman's "neuronal groups." Nodes are a bit like neurons – they have a threshold rule in them, and they're connected by "links" that are a bit like synapses – connections between neurons – in the brain. But Novamente's nodes have a lot more information in them than neurons. And the links between nodes have more to them than the links in neural net models – they're not just conduits for simulated electricity; they have specific meanings, sometimes similar to the meanings of logical rules in a rule-based AI system.

The key intuition underlying Edelman's and my approaches is to focus on the intermediate level of brain/mind organization: larger than the neuron, smaller than the abstract concept. The idea is to view the brain as a collection of clusters of tens or hundreds of thousands of neurons, each performing individual functions in an integrated way. One module might detect edges of forms in the visual field, another might contribute to the conjugation of verbs. The network of neural modules is a network of primitive mental processes rather than a network of non-psychological, low-level cells (neurons). The key to brain-mind, in this view, lies in the way the modules are connected to each other, and the way they process information collectively. The brain is more than a network of neurons connected according to simple patterns, and the mind is more than an assemblage of clever algorithms or logical transformation rules. Intelligence is not following prescribed deductive or heuristic rules, like IBM's super-rule-based-chess-player Deep Blue; but nor is intelligence the adaptation of synapses in response to environmental feedback, as in current neural net systems. Intelligence involves these things, but at bottom intelligence is something different: the self-organization and mutual intercreation of a network of processes, embodying perception, action, memory and reasoning in a unified way, and guiding an autonomous system in its interactions with a rich, flexible environment.
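For illustration only, a toy data structure in the spirit of this intermediate-level view might look like the sketch below: nodes carry threshold-style activation, and links carry typed, weighted meanings. This is just shorthand for the idea, not the actual Novamente data structures.

```python
# A cartoon of the intermediate-level idea: nodes carry threshold-style
# activation, and links carry typed, weighted meanings rather than being
# mere conduits for simulated electricity. Shorthand for the idea only,
# not the actual Novamente representation.

from dataclasses import dataclass, field

@dataclass
class Link:
    target: str
    link_type: str    # a semantic label, e.g. "inheritance" or "similarity"
    weight: float     # strength of the relationship

@dataclass
class Node:
    name: str
    activation: float = 0.0
    threshold: float = 1.0
    links: list = field(default_factory=list)

    def active(self):
        return self.activation >= self.threshold

cat = Node("cat", links=[Link("animal", "inheritance", 0.95),
                         Link("dog", "similarity", 0.6)])
cat.activation = 1.2
print(cat.active(), [(l.target, l.link_type, l.weight) for l in cat.links])
```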

 
 
 

 
 
 

Like his close collaborator Valentin Turchin, Francis Heylighen started his career as yet another physicist with a craving to understand the foundations of the universe – the physical and philosophical laws that make everything tick. And also like Turchin -- though unlike most physicists who've been sucked into the world of computers -- Francis didn't give up his previous intellectual ambitions when he got the computer bug. Rather, he became convinced that complex, self-organizing computer networks are just as valid and important a way to understand the universe as physics or metaphysics. Since 1982, he's used his research position at the transdisciplinary "Leo Apostel" research center of the VUB (the Free University of Brussels) to pursue precisely this perspective. In particular he's focused his thinking on the fascinating and futuristic idea of the global brain – the idea that the Internet, as it evolves, will eventually adopt its own unique form of the dynamical and structural complexity and self-organizing intelligence displayed by the brain.

In Heylighen's vision, everyday Internet interactions using e-mail and chat and the Web are themselves glimmerings of the birth of the global brain. We are not yet at the point of the metasystem transition where the Net becomes an autonomous, self-organizing intelligence, but each time we send an e-mail or create or follow a hyperlink, we're getting there.

In 1989, he, Valentin Turchin and Cliff Joslyn founded the Principia Cybernetica Project, aimed at marshalling a group of minds together to pursue the application of cybernetic theory to modern computer systems. In 1993, very shortly after Tim Berners-Lee released the HTML/HTTP software framework and thus created the Web, the Principia Cybernetica website (http://pespmc1.vub.ac.be/) went online. The Internet, the site claimed boldly, was the ideal medium for the development of the next generation of thinking about life, the universe and everything.

For a while after its 1993 launch, Principia Cybernetica was among the largest and most popular sites on the Web. Today the Web is a whole different kind of place, but Principia Cybernetica remains a rich, sprawling Website, a unique and popular resource for those seeking deep, radical thinking about the future of technology, mind and society. Eschewing the traditional hierarchical structure of most Websites, it is structured more like the “semantic networks” used inside AI programs, with each page linked to the other pages that relate to it in various ways. It doesn’t yet organize itself automatically based on user feedback or AI intuition, but it’s actively improved and updated by the numerous humans involved with the organization. The basic philosophy presented is founded on the thought of Turchin and other mid-century systems theorists, who view the world as a complex self-organizing system in which complex control structures spontaneously evolve and emerge.

The site’s creation and early development was a collaborative effort on the part of its three creators. Today, though, Turchin spends most of his time working on his own investigations in computer science and philosophy, and his start-up company. Joslyn is primarily occupied with practical data analysis and computer system design work inspired by cybernetics. Francis Heylighen, however, remains squarely focused on the Principia Cybernetica vision and all that it entails. He has fleshed out the Internet-brain parallel in some very concrete and interesting ways.

For example, Heylighen and his colleague Johan Bollen have experimented with Web-like systems in which the links between pages are created, destroyed, strengthened or weakened by user feedback. The brain-Internet parallel here is striking and direct. Web pages are neurons; hyperlinks are synapses. Learning in the brain involves modification of synaptic weights; and, Heylighen proposes, learning in the Internet should involve modification of the weights of hyperlinks between Web pages. Currently, of course, hyperlinks don’t have weights – a hyperlink is just a highlighted word, phrase or picture on one Web page, which, when you click on it, brings you to another page. But what if each hyperlink had a weight indicating how strong a relationship it represented? What if these weights were determined by a combination of AI text analysis programs studying the documents at either end of the link, and reinforcement based on human user habits – links followed more frequently get bigger weights?
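
To make the mechanism concrete, here is a minimal sketch in Python. It is not Heylighen and Bollen’s actual software, and the reinforcement and decay constants are assumptions chosen only for illustration: each link carries a weight, following a link strengthens it, and links that go unused slowly fade, much as synapses do.

```python
# A toy web in which hyperlinks behave like synapses (illustrative constants).
from collections import defaultdict

class AdaptiveWeb:
    def __init__(self, reinforcement=0.1, decay=0.01):
        self.weights = defaultdict(float)   # (from_page, to_page) -> link weight
        self.reinforcement = reinforcement  # how much one click strengthens a link
        self.decay = decay                  # how much every link weakens per time step

    def add_link(self, src, dst, weight=0.5):
        self.weights[(src, dst)] = weight

    def follow(self, src, dst):
        # A user follows the link: reinforce it, Hebbian-style.
        self.weights[(src, dst)] += self.reinforcement

    def tick(self):
        # Periodically weaken every link a little, so neglected links fade away.
        for link in self.weights:
            self.weights[link] = max(0.0, self.weights[link] - self.decay)

web = AdaptiveWeb()
web.add_link("home", "global-brain-faq")
for _ in range(3):                       # three users follow the same link
    web.follow("home", "global-brain-faq")
web.tick()
print(web.weights[("home", "global-brain-faq")])   # 0.5 + 3*0.1 - 0.01, roughly 0.79
```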

In the kind of coincidence that is very common in science, I discovered Heylighen’s ideas along these lines when, in 1995, I posted a paper online suggesting a similar idea. I didn’t call it a “global brain”, but my phraseology was similar – my paper was entitled “From World Wide Web to World Wide Brain.” I made the same synapse-hyperlink analogy as Heylighen, but from there I moved in a somewhat different direction. Heylighen focused on the modification of hyperlink weights based on human usage patterns, whereas I proposed to put an AI at each website, analyzing the relationship between that site and other sites, building new hyperlinks and modifying the weights of existing ones. I envisioned Internet intelligence as emerging from the synergetic activity of various AI agents, associated with websites and databases, each one fairly intelligent on its own. The Net in this view would be a kind of hybrid mind/society of AI’s, and humans would enter into this society of sub-minds alongside the AI’s as equals, jacking into the Net first via e-mail and chat, later via virtual reality tech, yet later by advanced bioengineering utilities (the fabled “cranial jack”?). AI agents and humans would modify the weights of hyperlinks, but this would only be part of the story – because, as I saw it, hyperlinks were not a rich enough data structure to store all the types of knowledge a mind requires. Complex relationships, knowledge of procedures, and so forth couldn’t be represented that way. The network of adaptively weighted hyperlinks would just be one “virtual lobe” of the world wide virtual brain.
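
Here, purely as an illustration and not as a description of any system I actually built, is what the simplest possible version of such a website-dwelling agent might look like in Python. The similarity measure is a crude word-overlap score standing in for real AI text analysis, and the threshold is an arbitrary assumption.

```python
# A hypothetical "AI at every website" sketch: each agent compares its own site's
# text with other sites' text and proposes weighted links where the overlap is high.

def bag_of_words(text):
    return set(text.lower().split())

def similarity(a, b):
    # Jaccard overlap between the two sites' vocabularies: a crude stand-in
    # for the text analysis a real AI agent would perform.
    wa, wb = bag_of_words(a), bag_of_words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class SiteAgent:
    def __init__(self, name, text, threshold=0.2):
        self.name, self.text, self.threshold = name, text, threshold
        self.links = {}    # other site -> proposed link weight

    def scan(self, other_sites):
        # Build or reweight links to sites whose content resembles this one's.
        for other in other_sites:
            score = similarity(self.text, other.text)
            if score >= self.threshold:
                self.links[other.name] = score

a = SiteAgent("site-a", "global brain self organizing network of minds")
b = SiteAgent("site-b", "a self organizing network of web pages and minds")
a.scan([b])
print(a.links)   # the two sites share enough vocabulary to warrant a weighted link
```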

Heylighen, on the other hand, placed less focus on AI’s and more on the network itself. He had no illusions that weighted hyperlinks were an adequate structure to represent all forms of knowledge, but he figured, every brain has to start somewhere. He has made efforts to get the muck-a-mucks of the Internet world – browser makers, or the W3C, the nonprofit Web standards consortium headed by Tim Berners-Lee, the inventor of the Web – to incorporate adaptively weighted hyperlinks into the real live Internet, but so far this hasn’t met with success. However, prototypes of “toy Internets” demonstrating the “hyperlink as synapse” mechanism have worked essentially as envisioned.

What happens, Heylighen and Bollen have found, is that over time – as heavily-used hyperlinks have their weights increased and less-used hyperlinks have their weights decreased – the structure of the web of documents gradually comes to represent the collective thoughts and beliefs of the users. The philosophical undertones here are rather different from those of Principia Cybernetica, which reflects Turchin’s more elitist vision of brilliant scientists gradually refining one another’s conceptual formalizations, slowly adding one node after another to the emerging network of understanding. Heylighen and Bollen’s adaptive hyperlink approach, by contrast, suggests that truth can be arrived at through a kind of statistical chaos. Just add together everyone’s opinions – the bad ones will cancel out through destructive interference, and the good ones will reinforce each other through constructive interference; ultimately the true ideas will emerge. Heylighen and Bollen’s experimental systems haven’t been released on the Principia Cybernetica site yet – given the limiting nature of current Web software, there are some implementation difficulties – but this will no doubt happen in time.
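
A toy simulation makes the point about statistical cancellation clearer. Suppose, purely hypothetically, that most of a community prefers one idea to another, but each individual visit is noisy; with enough visits, the accumulated link weights settle into roughly the same proportions as the underlying collective preference. The numbers below are assumptions, not data from any real experiment.

```python
# A toy run showing noisy individual clicks averaging out into collective structure.
import random
random.seed(0)   # make the toy run repeatable

links = ["strong-idea", "weak-idea"]
consensus = {"strong-idea": 0.8, "weak-idea": 0.2}   # assumed underlying community preference

weights = {link: 0.0 for link in links}
for _ in range(10000):                                # ten thousand simulated visits
    # Each visitor follows one link, noisily tracking the consensus.
    chosen = random.choices(links, weights=[consensus[l] for l in links])[0]
    weights[chosen] += 0.01                           # reinforcement per click

print(weights)   # "strong-idea" ends up with roughly four times the weight of "weak-idea"
```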

In 1996, Heylighen founded the "Global Brain Group", an international discussion forum bringing together most of the scientists who have worked on the concept of emergent Internet intelligence. This group runs an e-mail discussion list, which initially was extremely limited in membership, open only to scientists who had published serious articles on the notion of a global brain. This group numbered about 10, and was not particularly chatty, so eventually it was decided to admit more people, though only people approved by the initial elite group. The group is still fairly quiet, although a few interesting discussions have emerged – the most interesting ones revolving around the notion of “freedom,” and the question of whether the emergence of brain-like complexity in computer and communication networks will take it away from us humans.

For instance, Leor Gruendlinger wrote on the Global Brain e-mail list, in November 1999, the following worried message: “Before I happily agree to become the part of a cyber-brain (and hence die one clear day because of a bug), I would like to retain my autonomy, or at least lose it in stages… What kind of stages? I think about insect colonies as an example: still free to move, to act by themselves, but very much committed to the community, sharing food and resources, caring for the young together, etc. …Perhaps before humans agree that their sight, smell and other senses be manipulated by a chip, they will need this confidence and trust in the system they will be part of. It has to sustain them better, perhaps by seeing farther into the future and preparing in advance for challenges they cannot even grasp….. A neuron-like symbiosis has the flavor of being even more demanding. What levels of autonomy are there to pass through on the way to the global brain? Will such a passage be gradual, or very fast?”

Steve Wishnevsky then pointed out that this vision of the future Net as usurping individual autonomy and rendering us like ants in a colony may be a big exaggeration. After all, he argued, “consciousnesses larger and more permanent than human have existed for thousands of years, in the form of bureaucracies, churches and empires.”

But I found this argument somewhat lacking. “’Largeness’ and ‘permanence,’” I argued in my reply to him, “are not the most important parameters of consciousness…. Suppose we accept the panpsychic theory that everything is conscious…. Still, some things are more conscious than others… There is something called "intensity" of consciousness (which …has to do with the amplification of information...) I think that a bureaucracy has a much lesser intensity of consciousness than a human.”

The key question isn’t whether the Net is gaining more and more structure, and invading our lives and implicitly directing more and more of our activities. Obviously, it is, and it’s not about to stop. The key question is – how much. How much control will this emergent meta-system have – will it just be like a weird new kind of social institution, or will it be something bigger, something that invades our minds and makes us into some new kind of posthuman human….

Heylighen, with a modesty that is unusual, almost quaint, on the Net today, doesn’t claim to have all the answers. He’s content to study the issues, to broadcast his insights as he makes them, and to organize information and discussions leading progressively toward the truth, which will emerge bit by bit, taking its own good time. His vision of Web pages as neurons and hyperlinks as synapses is an exciting one – not an exact parallel between the Internet and the brain, nor a complete guide to making a distributed, whole-internet-based intelligence, but an important contribution with a simplicity and elegance that is sure to make a big impact someday. The Internet will never be a brain but it will accrete more and more aspects of formal “neural networks,” and will eventually become an intelligent system with which we will communicate in various old and new ways. Just wait and see….

It’s far too early to write a conclusion for the field of neural network research. Right now, things are in all ways extremely primitive. The dynamics of the brain have not been tremendously elucidated by experimentation with neural net models – not yet. But maybe they will be, as these models become larger (due to faster computers and bigger computer networks) and richer in structure (due to greater understanding of the brain, or further development in theoretical AI). So far, though neural nets have proved useful in various areas, there aren’t any major engineering problems that neural nets solve vastly better than other, non-brain-inspired algorithms. And so far, nothing close to a fully functioning artificial bug, let alone an artificial human brain, has been produced using neural net software – or any other approach to AI. Although maybe Hugo de Garis will get there in another 5-10 years, if his funding holds up. We’re at the beginning of this fascinating area of cognitive science research, not the end. If neural net researchers are willing to grow their field to embrace more and more of the multileveled complexity of the three-pound enigma in our skulls, then we can expect more and more wonders to emerge from their work as the years roll by.