written with Stephan Vladimir Bugaj


We live at a crucial point in history – an incredibly exciting and frightening point; a point that is stimulating to the point of excess, intellectually, philosophically, physically and emotionally. A number of really big technologies are brewing. Virtual reality, which lets us create synthetic worlds equal in richness to the physical world, thus making the Buddhist maxim “reality is illusion” a palpable technical fact. Biotechnology, allowing us to modify our bodies in various ways, customizing our genes and jacking our brains, organs and sense organs into computers and other devices. Nanotechnology, allowing us to manipulate molecules directly, creating biological, computational, micromechanical, and other kinds of systems that can barely be imagined today. Artificial intelligence, enabling mind, intelligence and reason to emerge out of computer systems – thinking machines built by humans. And advances in unified field theory in physics will in all likelihood join the party, clarifying the physical foundation of life and mind, and giving the nanotechnologists new tricks no one has even speculated about yet.

Even immortality is not completely out of the question. As Eric Drexler argued in Engines of Creation, nano-scale robots could swarm through your body repairing your aging cells. Or, as Hans Moravec depicted in his classic book Mind Children, brain scan technology combined with AI could have us uploading our minds into computers once our bodies wear down. Or genetic engineering could allow us to fuse with computers, eliminating aging and permitting direct mind-to-mind communication with other humans and computers via complex analysis of the brain’s electromagnetic fields. It sounds like science fiction, but it’s entirely scientifically plausible: these would be immense engineering achievements, but wouldn’t violate any known scientific laws. A lot of seemingly impossible things will soon be possible.

Bill Joy, Chief Scientist of Sun Microsystems, one of the leading companies in the current phase of the tech revolution, recently wrote an article in Wired painting this same kind of future, but with a markedly dystopian bent. He believes all this amazing technological development will happen, and he finds it intensely scary. It’s difficult to blame him, actually. The potential for abuse of such technologies is obvious. We have to hope that an ethical evolution comparable to the technological evolution will occur at the same time, beautifully synchronized.

This evolving network of long-term possibilities is wonderful and amazing to think about, but, on the other hand, it’s almost too big for any one mind or small group of minds to grapple with. Imagine a bunch of pre-linguistic proto-humans trying to comprehend what the advent of language was going to do to them. That’s basically the kind of situation we’re in! Nevertheless, in spite of the difficulty intrinsic to foresight and the impossibility of planning any kind of revolution in advance, least of all the technological and psychocultural kind, there’s something we can do besides sit back and watch as history leads us on. We can focus on particular aspects of the revolution we’re wreaking, understanding these aspects as part of the whole, and also focusing hard on the details, remembering that, historically, some of the subtlest and most profound general humanistic insights have come from dealing with very particular issues. In these pages we’re going to view the ongoing tech revolution through the eyes of biocomputing – not the only perspective, to be sure, but definitely an interesting one.



All these words about the amazing present and future achievements of technology are almost irrelevant these days. The transformative power of technology is now pretty much common sense. Any kid raised on Digimon, Jonny Quest and Dragonball Z knows supercomputers, virtual reality and intelligent robots like the back of his hand (and the only reason he knows the back of his hand is that he sees it sometimes while playing Gameboy!).

Even so, the wide acceptance of rapidly improving tech as an everyday phenomenon has not translated into a widespread deep understanding of where the tech explosion is leading us. Most people would agree, if asked, that technology is improving at a superexponential rate (after you explained to them what “superexponential” meant). But not many people yet seem willing to draw the natural conclusion: that, unless some strange unsuspected limits to progress are encountered, within 30-100 years machines will have outpaced us in nearly all ways, and quite likely transformed us radically in the process.

The renowned inventor Ray Kurzweil has tried to make this point clear to the average American in his recent book The Singularity is Near – “Singularity” being a term coined by sci-fi writer Vernor Vinge to describe the point at which technological advance becomes so rapid that it is essentially instantaneous. He has meticulously charted the progress of technology in various areas of industry, and plotted curves showing the approaching Singularity. Two of his more macro-level observations are given in the following two charts.

This one shows the mass use of inventions as it has accelerated over time – a quantitative depiction of the penetration of technology into everyday life. Of course, there’s a lot of detail underlying the calculation of such figures, but Kurzweil seems to have been reasonably careful and scientific in this regard.

A couple more examples of his trend-tracking work are as follows:


The same basic picture comes up again and again, in one technology domain after another. Progress comes faster and faster, and the pace of acceleration itself increases from year to year.

Kurzweil has done a particularly careful analysis of the phenomenon of exponential increase in computing power. His overall conclusion:



• We achieve one Human Brain capability (2 * 10^16 cps) for $1,000 around the year 2023.
• We achieve one Human Brain capability (2 * 10^16 cps) for one cent around the year 2037.
• We achieve one Human Race capability (2 * 10^26 cps) for $1,000 around the year 2049.
• We achieve one Human Race capability (2 * 10^26 cps) for one cent around the year 2059.

The comparisons with human brain power are based on a rough neural network model of the brain, as will be described in the following chapter. However, as he points out, an error of a couple of orders of magnitude in estimating human brain power will only throw his time estimates off by a few years – such is the power of exponential growth.
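
To make that arithmetic concrete, here is a small back-of-the-envelope sketch. The growth model and starting figures (roughly 10^8 cps per $1,000 around the year 2000, a one-year doubling time that itself shrinks by about 2% per year) are illustrative assumptions of mine, not Kurzweil’s actual data, but they reproduce the flavor of his projections:

```python
# Back-of-the-envelope sketch of double-exponential growth in computing power.
# Starting values are illustrative guesses, not Kurzweil's published figures.

def crossover_year(target_cps, start_year=2000, start_cps=1e8,
                   doubling_time=1.0, shrink=0.98):
    """Year at which cps-per-$1,000 first exceeds target_cps."""
    year, cps, dt = start_year, start_cps, doubling_time
    while cps < target_cps:
        cps *= 2 ** (1.0 / dt)   # one year of growth at the current doubling time
        dt *= shrink             # the doubling time itself keeps contracting
        year += 1
    return year

human_brain = 2e16   # Kurzweil's rough estimate of one human brain, in cps
print(crossover_year(human_brain))         # ~2023 with these assumed numbers
print(crossover_year(100 * human_brain))   # a 100x larger brain estimate shifts
                                           # the crossover by only a few years
```

The point of the exercise is simply that, on a curve this steep, even large errors in the target translate into small errors on the time axis.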

Of course, Ray Kurzweil is only one among a large and growing group of trend-tracking Singularity pundits. One of the early works along these lines was Derek de Solla Price’s 1963 book Little Science, Big Science. More recent works include Damien Broderick’s 2001 book The Spike (mostly qualitative, but with some quantitative aspects), and Johansen and Sornette's 2001 article "Finite Time Singularity in the Dynamics of the World Population, Economic, and Financial Indices" (available online at http://xxx.lanl.gov/abs/cond-mat/0002075).

John Smart has started a group called SingularityWatch, and established himself as perhaps the world’s first full-time Singularity pundit. His observations in a recent article on KurzweilAI.net are as follows:


Complementary to Kurzweil's data on double exponential growth, these authors note that many other computationally relevant variables, such as economic output, scientific publications, and investment capital, have exhibited recent asymptotic phases. Of course, each particular substrate eventually saturates—population, economies, etc. never do go to infinity—but measuring the asymptotic phases does allow us to better trace the second order computational trend presently unfolding on the planet in a broad range of physical-computational domains.

Certainly my best current projected range of 2020-2060 is voodoo like anyone else's, but I'm satisfied that I've done a good literature search on the topic, and perhaps a deeper polling of the collective intelligence on this issue than I've seen elsewhere to date. To me, estimates much earlier than 2020 are unjustified in their optimism, and likewise, estimates after 2060 seem oblivious to the full scope and power of the… processes in the universe.

I think this kind of overall trend analysis is a very valuable thing. It gives some much-needed quantitative concreteness to the qualitative observation of exponential tech acceleration. But one also feels the need for a deeper view, a perspective that gives a sense of what the Singularity means. Each of the extrapolatory graphs produced by Kurzweil and others encapsulates a long, deep story – not just a story of technical engineering improvements, but a story of conceptual breakthroughs, of deepening understanding. It is our progressively deepening understanding of the universe, synergetically evolving with our creation of new technologies, that will bring the Singularity about. My focus here will be on the synergy between deepening understanding and advancing technology – a synergy that spans scientific disciplines, and furthermore spans the domains of science, engineering and philosophy.


While Kurzweil has created an impressively concrete analysis, and Vinge’s original writings on the Singularity are elegant and provocative, perhaps the deepest analysis of what’s going on was provided by the Russian philosopher-scientist Valentin Turchin. We will use Turchin’s notion of the Metasystem Transition as a tool for understanding the future of biocomputing and its general implications.

Valentin Turchin is a fascinating individual, who holds a unique position in the history of the Internet. He was the first of the cyber-gurus: The expansion of computer and communication networks that he foresaw in his 1970 book “The Phenomenon of Science” is now a reality; and the trans-human digital superorganism that he prophesied would emerge from these networks is rapidly becoming one. But unlike most scientists who turn toward philosophy in mid-life, Turchin was not satisfied to be a grand old theorist. Now in his 70’s, in spite of an increasingly frustrating battle with Parkinsonism, he is playing an active role in making his vision of an Internet superorganism come true, helping lead an Internet start-up company, Supercompilers LLC, which applies the same cybernetic principles he used to study the future of humanity to the creation of computer programs that rewrite other programs to make them dozens of times more efficient – and even rewrite themselves.

No one in Turchin’s generation started off their career in computer science; rather, it was his generation that started computer science off. He holds three degrees in theoretical physics, obtained in the 50’s and early 60’s; and the first decade of his career was devoted to neutron and solid state physics. But in the 60’s his attention drifted toward computers, long before computers became fashionable – especially in Russia. He created a programming language, REFAL, which became the dominant language for artificial intelligence in the Soviet bloc. Apart from any of his later achievements, his work on REFAL alone would have earned him a position as one of the leaders of 20th-century computer science.

It was the political situation in the Soviet Union that drew him out of the domain of pure science, into the world of philosophy. In the 1960's he became politically active, and in 1968 he authored "Inertia of Fear and the Scientific Worldview", a fascinating document combining a scathing critique of totalitarianism and the rudiments of a new cybernetic theory of man and society. Not surprisingly, following the publication of this book in the underground press, Turchin lost his research laboratory.

His classic "The Phenomenon of Science," published two years later, enlarged on the theoretical portions of “Inertia of Fear,” presenting a unified cybernetic meta-theory of universal evolution. The ideas are deep and powerful, centered on the notion of a metasystem transition, a point in the history of evolution of a system where the whole comes to dominate the parts. Examples are the emergence of life from inanimate matter; and the emergence of multicellular life from single-celled components. He used the metasystem transition concept to provide a global theory of evolution and a coherent social systems theory, to develop a complete cybernetic philosophical and ethical system, and to build a new foundation for mathematics. The future of computer and communication technology, he saw, would bring about a metasystem transition in which our computational tools would lead to a unified emergent artificial mind, going beyond humanity in its capabilities. The Internet and related technologies would spawn a unified global superorganism, acting as a whole with its own desires and wishes, integrating humans to a degree as yet uncertain.

By 1973 he had founded the Moscow chapter of Amnesty International and was working closely with Andrei Sakharov. The Soviet government made it impossible for him to stay in Russia much longer. In 1977, persecuted by the KGB and threatened with imprisonment, he was expelled from the Soviet Union, taking refuge in the US and joining the Computer Science faculty of the City University of New York, where he continued his philosophical and scientific work. Among other projects, in the mid-1980’s he created the concept of supercompilation, a novel technique that uses the meta-system transition concept to rewrite computer programs and make them more efficient.

While Americans tend toward extreme positions about the future of the cyber-world – Bill Joy taking the pessimist’s role; Kurzweil, Moravec and others playing the optimist – Turchin, as he and his team work to advance computer technology, views the situation with a typically Russian philosophical depth. He still wonders, as he did in “The Phenomenon of Science,” whether the human race might be an evolutionary dead-end, like the ant or the kangaroo, unsuitable to lead to new forms of organization and consciousness. As he wrote there, in 1970, “Perhaps life on Earth has followed a false course from the very beginning and the animation and spiritualization of the Cosmos are destined to be realized by some other forms of life.” Digital life, perhaps? Powered by Java, made possible by the supercompiler Turchin’s software company is developing?

“The Phenomenon of Science” closes with the following words: “We have constructed a beautiful and majestic edifice of science. Its fine-laced linguistic constructions soar high into the sky. But direct your gaze to the space between the pillars, arches, and floors, beyond them, off into the void. Look more carefully, and there in the distance, in the black depth, you will see someone's green eyes staring. It is the Secret, looking at you.”

This is the fascination of the Net, and genetic engineering, and artificial intelligence, and all the other new technologies spreading around us and – now psychologically, soon enough physically – within us. It’s something beyond us, yet in some sense containing us – created by us, yet creating us. By writing books and supercompilers, or just sending e-mails and generally living our tech-infused lives, we unravel the secret bit-by-bit -- but we’ll never reveal it entirely.

Meeting Turchin in person, after reading his work and admiring his thinking, was a fascinating experience. He was very down-to-earth, definitely not spouting phrases such as “fine-laced linguistic constructions” and “beautiful and majestic edifice of science” – but of course this didn’t surprise me; after all, my own conversation is a lot more informal than my prose. When I brought up the global brain and the posthuman future, he smiled knowingly, said a few words, and gently changed the subject. Clearly, he hadn’t changed his mind about any of that, but it wasn’t what was occupying most of his time. To him, all this was old hat. Of course, humans will become obsolete; of course computer programs will become superintelligent. Nothing more to say about that really. What was occupying his mind seemed to be the particular technical problems he was working on: making a supercompiler that could rewrite Java programs to make them dozens of times faster, using advanced logical reasoning; and firming up a theory he’d developed years before called the “cybernetic foundations of mathematics,” which grounds mathematical knowledge in the process of acting in and perceiving the world, using his programming language Refal as a tool.

I was happy to be able to help him out: I played an instrumental role in getting his company Supercompilers LLC funded, by introducing him and his Russian colleagues to a major Chicago investor. In late 2001 I became even more intensely involved with this work, joining the Supercompilers team on a part-time basis, helping them shape their business approach and define their direction of research. By this point, though, Turchin is playing only a very “high-level conceptual guru” sort of role in the business; the main torch is being carried by his former student Andrei Klimov, a formidable intellect in his own right. Turchin is working mainly on the cybernetic foundations of mathematics. I asked him recently if he had considered cryonics – having himself frozen after his death, so he could come back 50 or 500 years later and debate with intelligent computers, post-metasystem-transition, about cybernetic math, global brains or whatever. He said he had been thinking about this sort of thing for decades, but hadn’t been aware of any organization seriously doing it; the group Alcor that I pointed out to him (www.alcor.org) impressed him, though. We shall see….

I realized, when I met Turchin in person and saw what sorts of things were preoccupying him, that he was specifically not on the kind of path that would lead him to become a well-known “cyber-guru.” To become well-known in the media as the originator or defender of a certain idea, you have to spend all your time pushing the idea, writing book after book about the same thing, giving interviews, training disciples. Turchin’s way is exactly the opposite of this. He stated his philosophical views very clearly in an excellent book and some related articles, and then in large part went back to technical work, albeit technical work informed by his philosophy. His ideas have had a huge influence behind the scenes, through their influence on countless scientists of my generation. But his disciples, if one may call them that, are pushing his technical agenda not his philosophical one. In a way it made me sad to see that, outside Russia, he simply wasn’t going to get the acclaim he deserved. But then I realized it didn’t matter: his goal had never been to get acclaim anyway, it had been to speak the truth and to spread the truth, and this goal had been accomplished.

Turchin’s notion of metasystem transitions provides a powerful overall framework within which to understand the maelstrom of scientific, technological and social change that whirls all around us. It allows us to see the unfolding transitions in humanity as merely one phase in a longer and larger process – similar in many ways to other phases, though with its own unique and fascinating properties, and obviously with a different personal relevance to us Homo sapiens.


Kurzweil’s graph depicts the advance toward the Singularity quantitatively. The vertical axis is a measure of technical progress. As Kurzweil puts it, “The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially). So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today's rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.”
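
The “200 centuries” figure follows from simple calculus on an exponentially growing rate of progress. Here is a minimal check, assuming (as a simplification of Kurzweil’s full model) that the rate doubles every ten years and is normalized to one “year-2000-equivalent year” of progress per year in 2000:

```python
import math

def equivalent_years(start, end, rate_doubling_years=10.0, reference_year=2000):
    """Integrate an exponentially growing rate of progress between two calendar
    years, measured in year-2000-equivalent years of progress."""
    k = math.log(2) / rate_doubling_years
    return (math.exp(k * (end - reference_year)) -
            math.exp(k * (start - reference_year))) / k

print(equivalent_years(2000, 2100))  # roughly 15,000 -- the order of "200 centuries"
print(equivalent_years(1900, 2000))  # roughly 14 -- the order of his ~25 years
```

The numbers come out somewhat lower than Kurzweil’s, since his model also lets the doubling time itself shrink over the century, but the orders of magnitude match.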

The metasystem transition notion gives a conceptual picture to go with this graph.

Let’s begin at the beginning: According to current physics theories, there was a metasystem transition in the early universe, when the primordial miasma of disconnected particles cooled down and settled into atoms. All of a sudden the atom was the thing, and individual stray particles weren’t the key tool to use in modeling the universe. Once particles are inside atoms, the way to understand what particles are doing is to understand the structure of the atom. And then there was another transition from atoms to molecules, leading over time to the emergence of the full Periodic Table of Elements.

There was another metasystem transition on Earth around four billion years ago, when the steaming primordial seas caused inorganic chemicals to clump together in groups capable of reproduction and metabolism. Unicellular life emerged, and once chemicals are embedded in life-forms, the way to understand them is not in terms of chemistry alone, but rather, in terms of concepts like fitness, evolution, sex, and hunger. Concepts like desire and intention are not far off, even with paramecia: Does the paramecium desire its food? Maybe not … but it comes a lot closer than a rock does to desiring to roll down a hill….

And there was another metasystem transition when multicellular life burst forth – suddenly the cell is no longer an autonomous life form, but rather a component in a life form on a higher level. The Cambrian explosion, immediately following this transition, was the most amazing flowering of new patterns and structures ever seen on Earth – even we humans haven’t equaled it yet. 95% of the species that arose at this time are now extinct, and paleontologists are slowly reconstructing them so we can learn their lessons.

Note that the metasystem transition is not an antireductionist concept, not in the strict sense. The idea isn’t that multicellular lifeforms have cosmic emergent properties that can’t be explained from the properties of cells. Of course, if you had enough time and superhuman patience, you could explain what happens in a human body in terms of the component cells. The question is one of naturalness and comprehensibility, or in other words, efficiency of expression. Once you have a multicellular lifeform, it’s much easier to discuss and analyze the properties of this lifeform by reference to the emergent level than by going down to the level of the component cells. In a puddle full of paramecia, on the other hand, the way to explain observed phenomena is usually by reference to the individual cells, rather than the whole population – the population has less wholeness, fewer interesting properties, than the individual cells.

The metasystem transition idea is important in its clear depiction of the overall patterns of system evolution; and I have also found it useful in my own research work on AI. In the domain of mind, there are also a couple levels of metasystem transition. The first one is what one might call the emergence of “mind modules.” The second is the emergence of mind from a conglomeration of interconnected, intensely interacting mind modules. Since 1997 I have been leading a team working towards building a “real AI” – a software program with human-level general intelligence – with a major focus on achieving this second metasystem transition. I’ll talk about this work in a little more detail in Chapter 3.

The first metasystem transition on the way toward mind occurs when a huge collection of basic mind components – cells, in a biological brain; “software objects” in a computer mind – all come together in a unified structure to carry out some complex function. The whole is greater than the sum of the parts: the complex functions that the system performs aren’t really implicit in any of the particular parts of the system, rather they come out of the coordination of the parts into a coherent whole. The various parts of the human visual system are wonderful examples of this. Billions of cells firing every which way, all orchestrated together to do one particular thing: map visual output from the retina into a primitive map of lines, shapes and colors, to be analyzed by the rest of the brain. The best current AI systems are also examples of this. In fact, computer systems that haven’t passed this transition I’d be reluctant to call “AI” in any serious sense.

There are some so-called AI systems that haven’t even reached this particular transition – they’re really just collections of rules, and each behavior in the whole system can be traced back to one particular rule. But consider a sophisticated natural language system like LexiQuest – which tries to answer human questions, asked in ordinary language, based on information from databases or extracted from texts. In a system like this, we do have mind module emergence. When the system parses a sentence and tries to figure out what question it represents, it’s using hundreds of different rules for parsing, for finding out what the various parts of the sentence mean. The rules are designed to work together, not in isolation. The control parameters of each part of the system are tuned so as to give maximal overall performance. LexiQuest isn’t a mind, but it’s a primitive mind module, with its own, albeit minimal, holistic emergence. The same is true of other current high-quality systems for carrying out language processing, computer vision, industrial robotics, and so forth. For an example completely different from LexiQuest, look at the MIT autonomous robots built under the direction of Rodney Brooks. These robots seem to exhibit some basic insect-level intelligence, roaming around the room trying to satisfy their goals, displaying behavior patterns that surprise their programmers. They’re action-reaction modules, not minds, but they have holistic structures and dynamics all their own.

On roughly the same level as LexiQuest and Brooks’ robots, we find computational neural networks, which carry out functions like vision or handwriting recognition or robot locomotion using hundreds to hundreds of thousands of chunks of computer memory emulating biological neurons. As in the brain, the interesting behavior isn’t in any one neuron, it’s in the whole network of neurons, the integrative system. There are dozens of spin-offs from the neural network concept, such as the Bayesian networks used in products like Autonomy and the Microsoft Help system. Bayesian networks are networks of rules capable of making decisions such as "If the user asks about ‘spreadsheet’, activate the Excel help system". The programmer of such a system never enters a statement where the rule "if the word spreadsheet occurs, activate the help system" appears – rather, this rule emerges from the dynamics of the network. However, the programmer sets up the network in a way that fairly rigidly controls what kinds of rules can emerge. So while the system can discover new patterns of input behavior that seem to indicate what actions should be taken, it is unable to discover new kinds of actions which can be taken – that is, it can only discover new instances of information, not new types of information. It’s not autonomous, not alive.
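
To see concretely what it means for such a rule to emerge from the dynamics of the system while the space of possible actions stays fixed, here is a toy sketch of my own (not Autonomy’s or Microsoft’s actual code), in the spirit of a naive Bayesian classifier:

```python
from collections import defaultdict

# The set of possible actions is fixed in advance by the programmer;
# the system can learn new word/action associations, but never invent an action.
ACTIONS = ["excel_help", "word_help", "outlook_help"]

counts = {a: defaultdict(int) for a in ACTIONS}   # word counts per action
totals = {a: 0 for a in ACTIONS}                  # training examples per action

def train(words, action):
    for w in words:
        counts[action][w] += 1
    totals[action] += 1

def best_action(words):
    """Naive-Bayes-style scoring with add-one smoothing."""
    def score(action):
        n = totals[action] or 1
        p = float(n)
        for w in words:
            p *= (counts[action][w] + 1) / (n + 2)
        return p
    return max(ACTIONS, key=score)

train(["spreadsheet", "formula"], "excel_help")
train(["spreadsheet", "cell"], "excel_help")
train(["letter", "font"], "word_help")
train(["email", "inbox"], "outlook_help")

# No one ever wrote "if 'spreadsheet' occurs, open the Excel help" -- the
# association emerges from the counts accumulated during training.
print(best_action(["spreadsheet"]))   # -> excel_help
```

Such a system can discover that a new word predicts an existing action, but it cannot discover that a new kind of action is needed – exactly the limitation described above.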

As I noted above, my own AI work has centered around creating systems that embody a metasystem transition beyond this “mind module” level. First, at Webmind Inc., my R&D team and I built a system called the Webmind AI Engine (or simply “Webmind”). Since Webmind Inc. folded in early 2001, a smaller group has been collaborating with me on a successor system called Novamente, with the same conceptual principles but a different mathematical underpinning and software design.

Novamente, like Webmind, is a multimodular AI. Each module of the Novamente system has roughly the same level of sophistication as one of these bread-and-butter AI programs. There are modules that carry out reasoning, language processing, numerical data analysis, financial prediction, learning, short-term memory, and so forth. All the modules are built of the same components, C++ software objects called “nodes” and “relationships”, arranged in different ways, so that each module achieves its own emergent behavior, each realizing a metasystem transition on its own.
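
The real system is written in C++, but the flavor of a shared node-and-relationship store that multiple modules read and write can be suggested in a few lines of Python. The names and structure below are my own illustration of the idea, not Novamente’s actual design:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.links = []              # outgoing relationships

class Relationship:
    def __init__(self, rel_type, source, target, strength=0.5):
        self.rel_type = rel_type     # e.g. "inheritance", "similarity"
        self.source, self.target, self.strength = source, target, strength
        source.links.append(self)

class MindSpace:
    """The shared semantic store that all modules read from and write into."""
    def __init__(self):
        self.nodes = {}
    def node(self, name):
        return self.nodes.setdefault(name, Node(name))
    def relate(self, rel_type, a, b, strength=0.5):
        return Relationship(rel_type, self.node(a), self.node(b), strength)

space = MindSpace()
# a language-processing module records what it read in a text...
space.relate("inheritance", "cat", "animal", 0.9)
space.relate("inheritance", "animal", "living_thing", 0.9)
# ...and a reasoning module later adds a link it infers from the existing ones
for r1 in list(space.node("cat").links):
    for r2 in r1.target.links:
        if r1.rel_type == r2.rel_type == "inheritance":
            space.relate("inheritance", r1.source.name, r2.target.name,
                         r1.strength * r2.strength)

print([(r.rel_type, r.target.name, round(r.strength, 2))
       for r in space.node("cat").links])
# [('inheritance', 'animal', 0.9), ('inheritance', 'living_thing', 0.81)]
```

The point is simply that both modules speak the same language of nodes and relationships, so the output of one becomes raw material for the other.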

But mind modules aren’t real intelligence, not in the sense that we mean it: Intelligence as the ability to carry out complex goals in complex environments. Each mind module only does one kind of thing, requiring inputs of a special type to be fed to it, unable to dynamically adapt to a changing environment. Intelligence itself requires one more metasystem transition: the coordination of a collection of mind modules into a whole mind, each module serving the whole and fully comprehensible only in the context of the whole. This is a domain that AI research has basically not confronted yet – it’s not mere egotism to assert that the Webmind/Novamente systems are almost unique in this regard. It takes a lot of man-hours, a lot of thinking, and a lot of processing power to build a single mind module, let alone to build a bunch of them – and even more to build a bunch of them in such a way as to support an integrative system passing the next metasystem transition. We’re just barely at the point now, computer-hardware-wise, that we can seriously consider doing such a thing. But even being just barely there is a pretty exciting thing.

Novamente allows the interoperation of these intelligent modules within the context of a shared semantic representation – nodes, links and so forth. Through the shared semantic representation these different intelligent components can interact and thus evolve a dynamical state which is not possible within any one of the modules. As in a human brain, each specialized sub-system is capable of achieving certain complex perceptual goals (such as reading a page of text) or cognitive goals (such as inferring causal relations) which in themselves seem impressive – but when they are integrated, truly exciting new possibilities emerge. Taken in combination, these intelligent modules – reasoning, learning, natural language processing and so forth – undergo a metasystem transition to become a mind capable of achieving complex goals worthy of comparison to human abilities. The resulting mind cannot be described merely as a pipeline of AI process modules; rather, it has its own dynamical properties, which emerge from the interactions of these component parts, creating new and unique patterns that were not present in any of the sub-systems.

Such a metasystem transition from modules to mind is a truly exciting emergence. A system such as Novamente can autonomously adapt to changes in more complex environments than its single-module predecessors can, and can be trained in a manner that is more like training a human than programming a computer. In principle, such a system can be adapted to any task for which it is able to perceive input; and while the initial Novamente system operates in a world of text and numerical files only, integrating it with visual and auditory systems, and perhaps a robot body, would give it some facility for acting in the physical world as well. Applications of even the text- and data-constrained system are quite varied and exciting: autonomous financial analysis, conversational information retrieval, true knowledge extraction from text and data, and so forth.

While there are other systems that can find some interesting patterns in input data, a mind can determine the presence of previously unknown types of patterns and make judgments that are outside the realm of previous experience. An example of this can be seen in financial market analysis. Previously unknown market forces, such as the Internet, can impact various financial instruments in ways which prevent successful trading using traditional market techniques. A computer mind can detect this new pattern of behavior, and develop a new technique based on inferring how the current situation relates to, but also differs from, previous experience. The Webmind Market Predictor product already did this, to a limited extent, through the emergence of new behaviors from the integration of only a few intelligent modules. As more modules are integrated, the system becomes more intelligent. Currently the Webmind Market Predictor can create trading strategies in terms of long, short, and hold positions on instruments, detect changes in the market environment (using both numerical indicators and by reading news feeds), and develop new strategies based on these changes.

For another short-term, real-world example of the promise of computational mind, let’s return to the area of information retrieval. What we really want isn’t a search engine – we want a digital assistant, with an understanding of context and conversational give-and-take like a human assistant provides. AskJeeves tries to provide this, but ultimately it’s just a search-engine/chat-bot hybrid. It’s amusing enough, but quite far from the real possibilities in this area. A mind-based conversational search tool, as will be possible using a completed Novamente system, will be qualitatively different. When an ambiguous request is made of a mind, it does not blindly return some information pulled out of a database; a mind asks questions to resolve ambiguous issues, using its knowledge of your mind as well as the subject area to figure out what questions to ask. When you ask a truly intelligent system “find me information about Java”, it will ask back a question such as “Do you want information about the island, the coffee, or the computer programming language?” But if it knows you’re a programmer, it should ask instead “Do you want to know about JVM’s or design patterns or what?” Like a human, a machine which has no exposure to the information that there is an island called Java, for example, might only ask about coffee and computers, but the ability to make a decision to resolve the ambiguity in the first place, in a context-appropriate way, is a mark of intelligence. An intelligent system will use its background knowledge and previous experience to include similar information (Java, J++, JVM, etc.), omit misleading information (JavaScript, a totally different programming language from Java), and analyze the quality of the information. Information retrieval segues into information creation when a program infers new information by combining the information available in the various documents it reads, providing users with this newly created information as well as reiterating what humans have written.
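
To make the Java example a bit more concrete, here is a toy sketch of context-sensitive disambiguation; the sense inventory and user profile are invented purely for illustration, and a real system would of course learn them rather than have them typed in:

```python
# Invented sense inventory -- illustration only.
SENSES = {
    "java": {
        "programming language": ["JVMs", "design patterns", "J++"],
        "coffee": ["roasts", "espresso"],
        "island": ["Indonesia", "Jakarta"],
    }
}

def clarifying_question(term, user_interests=()):
    senses = SENSES.get(term.lower(), {})
    preferred = [s for s in senses if s in user_interests]
    if len(preferred) == 1:
        # we already favour one sense, so narrow the question within it
        subs = ", ".join(senses[preferred[0]])
        return f"Do you want to know about {subs}, or something else?"
    if len(senses) <= 1:
        return None   # nothing to disambiguate
    return "Do you want information about the " + ", the ".join(senses) + "?"

print(clarifying_question("Java"))
# Do you want information about the programming language, the coffee, the island?
print(clarifying_question("Java", ["programming language"]))
# Do you want to know about JVMs, design patterns, J++, or something else?
```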

These practical applications are important, but it’s worth remembering that the promise of digital mind goes beyond these simple short-term considerations. Consider, for example, the fact that digital intelligences have the ability to acquire new perception systems during the course of their lives. For instance, an intelligent computer system could be attached to a bubble chamber and given the ability to observe elementary particle interactions directly. Such a system could greatly benefit particle physics research, as it would be able to think directly about the particle world, without having to resort to metaphorical interpretations of instrument readings as humans must do. Similar advantages are available to computers in terms of understanding financial and economic data, and recognizing trends in vast bodies of text.

The metasystem transition from mind modules to mind is the one that I have mulled over the most, due to its connection with my AI work. But it’s by no means the end of the story. When Turchin formulated the metasystem transition concept, he was actually thinking about something quite different – the concept of the global brain, an emergent system formed from humans and AI systems both, joined together by the Internet and other cutting-edge communication technologies. It’s a scary idea, but a potent one, and with a concrete reality that shouldn’t be ignored.

Communication technology makes the world smaller each day – will it eventually make it so small that the network of people has more intrinsic integrity than any individual person? Shadows of the sci-fi notion of a “hive mind” arise here… images of the Borg Collective from Star Trek. But what Turchin is hoping for is something much more benign: a social structure that permits us our autonomy, but channels our efforts in more productive directions, guided by the good of the whole.

As noted above, Turchin himself is somewhat pessimistic about the long-term consequences of all this, but not in quite the alarmist vein of Bill Joy -- more in the spirit of a typically Russian ironic doubt in human nature. In other words, Bill Joy believes that high tech may lead us down the road to hell, so we should avoid it; whereas Turchin sees human nature itself as the really dangerous thing, leading us to possible destruction through nuclear, biological, or chemical warfare, or some other physical projection of our intrinsic narrow-mindedness and animal hostility. He hopes that technological advancement will allow us to overcome some of the shortcomings of human nature and thus work toward the survival and true mental health of our race.

Through the Principia Cybernetica project, co-developed with Francis Heylighen (of the Free University of Brussels) and Cliff Joslyn (of Los Alamos National Labs in the US), Turchin has sought to develop a philosophical understanding to go with the coming tech revolution, grounded on the concept of the metasystem transition. As he says, the goal with this is “to develop -- on the basis of the current state of affairs in science and technology -- a complete philosophy to serve as the verbal, conceptual part of a new consciousness.” But this isn’t exactly being done with typical American technological optimism. Rather, as Turchin puts it, “My optimistic scenario is that a major calamity will happen to humanity as a result of the militant individualism; terrible enough to make drastic changes necessary, but, hopefully, still mild enough not to result in a total destruction. Then what we are trying to do will have a chance to become prevalent. But possible solutions must be carefully prepared.”

As I see it, the path from the Net that we have today to the global brain that envelops humans and machines in a single overarching superorganism involves not one but several metasystem transitions. The first one is the emergence of the global web mind – the transformation of the Internet into a coherent organism. Currently the best way to explain what happens on the Net is to talk about the various parts of the Net: particular Websites, e-mail viruses, shopping bots, and so forth. But there will come a point when this is no longer the case, when the Net has sufficient high-level dynamics of its own that the way to explain any one part of the Net will be by reference to the whole. This will come about largely through the interactions of AI systems – intelligent programs acting on behalf of various Websites, Web users, corporations, and governments will interact with each other intensively, forming something halfway between a society of AI’s and an emergent mind whose lobes are various AI agents serving various goals. The traditional economy will be dead, replaced by a chaotically dynamical hypereconomy (a term coined by the late transhumanist theorist Alexander Chislenko) in which there are no intermediaries except for information intermediaries: producers and consumers (individually or in large aggregates created by automatic AI discovery of affinity groups) negotiate directly with each other to establish prices and terms, using information obtained from subtle AI prediction and categorization algorithms. How far off this is we can’t really tell, but it would be cowardly not to give an estimate: I’m betting no more than 10-30 years.

The advent of this system will be gradual. Initially, when only a few AI systems are deployed on the Web, they will be isolated systems, each overwhelmed with its local responsibilities. As more agents are added to the Net, there will be more interaction between them. Systems which specialize will refer questions to each other. For example, a system that specializes in (has a lot of background knowledge and evolved and inferred thinking processes about) financial analysis may refer questions about political activities to political analyst systems, and then combine this information with its own knowledge to synthesize information about the effects of political events on market activity. This hypereconomic system of Internet agents will dynamically establish the social and economic value of all information and activities within the system, through interaction amongst all agents in the system. As these interactions become more complex, agent interconnections more prevalent and dynamic, and agents more interdependent, the network will become more of a true shared semantic space: a globally integrated mind-organism. Individual systems will start to perform activities which have no parallel in the existing natural world. One AI mind will directly transfer knowledge to another by literally sending it a “piece of its mind”; an AI mind will be able to directly sense activities in many geographical locations and carry on multiple context-separated conversations simultaneously; a single global shared memory will emerge, allowing explicit knowledge sharing in a collective consciousness. Across the millions, someday billions, of machines on the Internet, this global Web mind will function as a single collective thought space, allowing individual agents to transcend their individual limitations and share directly in a collective consciousness, extending their capabilities far beyond their individual means.

All this is fabulous enough – collective consciousness among AI systems; the Net as a self-organizing intelligent information space. And yet it’s after this metasystem transition – from Internet to global hypereconomic Web mind – that the transition envisioned by Turchin and his colleagues at Principia Cybernetica can take place: the effective fusion of the global Web mind and the humans interacting with it. It will be very interesting to see where biotech-enabled virtual reality technology is at this point. At what point will we really be jacking our brains into the global AI matrix, as in Gibson’s novel Neuromancer? At what point will we supercompile and improve our own cognitive functions, or be left behind by our constantly self-reprogramming AI compatriots? But we don’t even need to go that far. Putting these more science-fictional possibilities aside and focusing solely on Internet AI technology, it’s clear that more and more of our interactions will be mediated by the global emergent intelligent Net – every appliance we use will be jacked into the matrix; every word that we say potentially transmitted to anyone else on the planet using wearable cellular telephony or something similar; every thought that we articulate entered into an AI system that automatically elaborates it and connects it with things other humans and AI agents have said and thought elsewhere in the world – or things other humans and AI agents are expected to say based on predictive technology…. The Internet Supermind is not the end of the story – it’s only the initial phase; the seed about which will crystallize a new order of mind, culture and technology. Is this going to be an instrument of fascist control, or factional terrorism? It’s possible, but certainly not inevitable – and the way to avoid this is for as many people as possible to understand what’s happening, what’s likely to happen, and how they can participate in the positive expansion of this technology.

Imagine: human and machine identities joined into the collective mind, creating a complex network of individuals from which emerges the dynamics of a global supermind, with abilities and boundaries far greater than would be possible for any individual mind, human or artificial – or any community consisting of humans or AI’s alone. As Francis Heylighen has said, “Such a global brain will function as a nervous system for the social superorganism, the integrated system formed by the whole of human society.” Through this global human-digital emergent mind, we will obtain a unique perspective on the world, being able to simultaneously sense and think in many geographical locations and potentially across many perceptual media (text, sound, images, and various sensors on satellites, cars, bubble chambers, etc.) The cliché “let’s put our minds together on this problem” will become a reality, allowing people and machines to pool their respective talents directly to solve tough problems in areas ranging from theoretical physics to social system stabilization, and to create interesting new kinds of works in literature and the arts.

Weird? Scary? To be sure. Exciting? Amazing? To be sure, as well. Inevitable? An increasing number of techno-visionaries think so. Some, like Bill Joy, have retreated into neo-Ludditism, believing that technology is a big danger and advocating careful legal control of AI, nanotech, biotech and related things. Turchin is pressing ahead as fast as possible, building the technology needed for the next phase of the revolution, careful to keep an eye on the ethical issues as he innovates, hoping his pessimism about human nature will be proved wrong. As for us, we tend to be optimists. Life isn’t perfect, plants and animals aren’t perfect, humans aren’t perfect, computers aren’t perfect – and yet, the universe has a wonderful way of adapting to its mistakes and turning even ridiculous errors into wonderful new forms.

The dark world of tyranny and fear described in the writings of cyberpunk authors like William Gibson and Bruce Sterling, and in films such as The Matrix and Blade Runner, is certainly a possibility. But there’s also the possibility of less troubling relationships between humans and their machine counterparts, such as we see in the writings of transrealist authors like Rudy Rucker and Stanislaw Lem, and in film characters like Star Trek’s Data and Star Wars’ R2-D2 and C-3PO. We believe that, through ethical treatment of humans, machines, and information, a mutually beneficial human-machine union within a global society of mind can be achieved. The ethical and ontological issues of identity, privacy, and selfhood are every bit as interesting and challenging as the engineering issues of AI, and we need to avoid the tendency to set them aside because they’re so difficult to think about. But these things are happening – right now we’re at the beginning, not the end, of this revolution; and the potential rewards are spectacular – enhanced perception, greater memory, greater cognitive capacity, and the possibility of true cooperation among all intelligent beings on earth.

In thinking about such wild and amazing transitions, it pays to remember: we’ve been riding the rollercoaster of universal evolution all along. The metasystem transitions that gave rise to human bodies and human intelligence were important steps along the path, but there will be other steps, improving and incorporating humanity and ultimately going beyond it.

Having glimpsed such a glorious, ambitious, multifariously intricate futuristic vision – having bathed oneself in this vision for years, as I have -- where does one go next?

One can write science fiction stories or metaphysical poems exulting in the wonder of it all. And why not? I must admit I have succumbed to this urge now and then – and for some reason it is often followed by the urge to cruise down the highway in my 4Runner at 85 miles an hour with Yngwie Malmsteen, Deep Purple or Steve Morse cranked up on the stereo….

One can sit back and relax and enjoy oneself, waiting for the Singularity. Unlike Waiting for Godot, one of my favorite pieces of writing, which is all about empty spaces, this wait is full of entertainment, constant changes and advances.

Or one can plunge into the details. And there sure are a lot of them. The tech revolution and the impending Singularity are too big a topic for any single book to treat them with reasonable richness. In these pages I’ll plunge into the details of the biological aspects of the great unfolding story.

Future superintelligent computers may consider us roughly as intelligent as one of these.

Some of my intellectual colleagues, deep into Singularitarian thinking, view biocomputing as a sideshow. The real story, they believe, will be in advanced AI, rapidly modifying its own source code until it’s 5000 times more intelligent than humans and effortlessly reconfigures all the matter in the solar system every 15 seconds or so. In the long run they may well be right. But I hope that John Maynard Keynes’ famous quip “In the long run we are all dead” turns out not to be right. A future populated by superintelligent AI systems is far more palatable to me than some other possibilities, such as, say, a future in which all life and intelligence on Earth are wiped out by nuclear weapons. But just as I’m glad bugs still exist alongside us more intelligent beasts, I’m happiest thinking about a future in which humans persist no matter how far technology advances. In fact I tend to think this is likely… though the humans of the future may not look, act or think exactly like we do.

Some people like the idea of growing old and dying. There is a spiritual completeness associated with death, a oneness with nature. This is fine, for them. Some of us, on the other hand, prefer the notion of existing as long as possible, perhaps as long as the universe itself – or longer, if advanced physics shows us how to escape into other universes. This is a modification of current humanity that bioscience seems very likely to provide. Apoptosis, the programmed death of our cells, will one day become as obsolete as smallpox.

Some people don’t like video games, or computers, or electronic musical instruments. Personally, I have a fanatical hatred of television. Again – no worries – diversity of taste is a good thing. But we are going to see advances beyond current human-computer interfaces that will make the difference between parchment and e-mail seem microscopic. Already quadriplegics can control their computers directly using their brain waves. Once we can scan brain states better in real time, we’ll be able to project our thoughts directly into computers, and perhaps back out of computers into others’ brains. Virtual reality technology has been overhyped so far, but eventually it really will happen – there are no fundamental obstacles, “just” engineering difficulties and thorny neuroscience, of a type that we humans have proved ourselves rather adept at solving.

Genetic engineering scares people – and the downside is indeed dangerous. I don’t particularly want to see, for instance, the government outlawing genes that have been found to correlate with civil disobedience, or the writing of unpatriotic literature. But how bad would it be to see genes correlated with serial killing or profound retardation or multiple personality disorder filtered out of the population? And how bad would it be if genetic engineering could modify the brain-encoding genes, producing people twice as good at math and physics as Einstein, or five times as skilled at pragmatic social compassion as Gandhi or Mother Teresa? Balancing the pluses and minuses of genetic engineering seems to me to be an unsolvable ethical puzzle. But I don’t think that mulling over this puzzle is a terribly high priority, because I don’t believe the technology could be squelched even if there were a 90% universal will to do so. And I don’t think that waiting to develop the technology is going to help anything – a 20 or 50 or 100 year delay is not going to make humans any more consistently ethical (the only thing that could do that would be genetic engineering itself!). I think that the best thing those concerned about the ethical implications of genetic engineering can do is to become geneticists themselves and work hard on positive applications. Let’s breed a more compassionate and ethical race before someone breeds a race of superintelligent psychopaths.

The interface between genetic engineering and human-computer interfacing and AI is going to be particularly exciting. Eventually we’ll be able to manipulate the genetic code in such a way as to create humans who are especially well-suited to interface with virtual worlds and artificial intelligences. Sure, it’s wild and crazy sci-fi -- just like TV, submarines, airplanes and spacecraft once were … ask your great-grandfather. What I’m going to tell you about in these pages is why these ideas aren’t so totally science-fictional after all. I’ll lead you through the real-world science of today as it pertains to the fantastic dreams of tomorrow. Turchin may not live to see the metasystem transition in which networked AI comes about, let alone the one in which genetically modified humans and AI’s interact in virtual worlds, forming a whole new kind of virtual mind/society/organism. On the other hand, born in 1966 as I was, now age 35 in 2002, I might just make it. And if Turchin avails himself of cryonic technology, and has his body frozen shortly after death, he may be defrosted into a whole new world….