A few hundred years ago, science was irrelevant to most people's daily lives. It was mainly a hobby of aristocrats, in the same vague category as, say, metaphysical philosophy or jewelry collecting.

Today, science and technology are everywhere. Most people don't understand much of it, but they know it's important. They use computers and printers and monitors; they watch TV; they drive cars with fuel-injection engines and onboard diagnostic computers and fly on airplanes with autopilots; they take medications with names reflecting complex molecular structures; they shop at Wal-Mart, which exists almost entirely to sell various forms of plastic that didn't even exist 50 years ago.

But what most people don't yet comprehend is that this is a transitional period. The importance of science and technology to ordinary life is not going to remain constant at its present level. Rather, it is going to increase – and it's going to increase fast, exponentially or more so, just as it has been doing. Computers, plastics, medications and airplanes are just the beginning.

Technology is going to transform us, redefining what it means to be human, and not just culturally and psychologically but physically. Genetic engineering will happen, the protests of the religious-conservative camp notwithstanding: enhanced human beings will walk among us. AI will happen, and not in the 4000 years projected in Spielberg's recent movie by that name, but within this century, probably early on. Computers will be smarter than us in ways more important than playing chess, factoring large numbers, or landing jet aircraft. Intelligent and semi-intelligent computers will help us solve puzzles that have so far eluded us: for instance, how genomes make cells, how particles make atoms, and how brains make minds. We will communicate with computers directly using our brains – "look ma, no hands!" We will communicate with each other similarly. We will move back and forth fluidly between the ordinary physical world, with its trees and roads and love affairs and childcare centers, and virtual worlds in which anything can happen – where we can wrestle with a molecule or make love to a cloudburst or solve equations by hybridizing our intuition with that of an AI mathematician, and it all feels realer than real. Human nature in some form may persist, but not in the manner we are accustomed to. Science and technology are leading us into a transhuman age.

This all sounds like crazy science fiction, and indeed, history shows that predicting the details of future science and technology is a pursuit fraught with peril. There will be limitations I cannot right now foresee, and also revolutions my paltry human-nervous-system-based creativity is inadequate to imagine. But even so, I think the qualitative character of the coming revolution is clear. Vernor Vinge, writing in the 1980s, was right to call it a Singularity: as science and technology advance faster and faster, there will come a point where, suddenly, they are no longer just a big part of our world; they are inextricable from our world. New advances will come faster than we can understand them. AIs will create new AIs that rapidly outpace their creators.
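To give a flavor of the mathematics lurking behind that word choice, here is a toy model – my own illustrative sketch in Python, not Vinge's argument or anyone's forecast – of what happens when a system's rate of improvement is set by its own current capability:

```python
# Toy model (an illustrative sketch, not a forecast) of self-improving AI:
# capability grows at a rate proportional to capability raised to some power.
# Power 1.0 gives ordinary exponential growth, which stays finite at every
# time. Any power above 1.0 makes improvement feed back on itself strongly
# enough that the curve blows up in finite time -- a mathematical singularity.

def blowup_time(power, horizon=3.0, dt=0.001, ceiling=1e12):
    """Return the model time at which capability passes `ceiling`,
    or None if it stays below the ceiling over the whole horizon."""
    capability, t = 1.0, 0.0
    while t < horizon:
        capability += dt * capability ** power  # simple Euler step
        t += dt
        if capability > ceiling:
            return round(t, 2)
    return None

print("power 1.0 (exponential):  ", blowup_time(1.0))  # None -- e**3 is only ~20
print("power 1.5 (self-feedback):", blowup_time(1.5))  # ~2.0 -- finite-time blow-up
```

Nothing about real research obeys so tidy an equation, of course; the point is only that "faster and faster" has two qualitatively different regimes, and the Singularity intuition is that recursive self-improvement pushes us into the second one.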

I write about these topics not as an outsider, but as someone who has been right smack in the middle of the ongoing revolution. After spending about a decade as a scientific theorist, about 5 years ago I decided to devote my life more fully to one particular futuristic goal: the construction of a truly thinking machine, a software program displaying human-level general intelligence. In 1997 I founded a company, Webmind Inc., with the twin goals of creating a "Real AI" and constructing profitable software products from preliminary partial versions of this holy-grail end-goal. We made some interesting progress toward the grand Real AI goal, and created some pretty impressive software products along the way, but we never managed to make the firm profitable. When the dot-com crash hit, a major funder pulled out at the last minute, and the company couldn't recover. In March 2001, Webmind Inc. dissolved.

Following this personal and financial disaster, a group of friends from the Webmind Inc. R&D division and I decided to stick together as a team, and initiate a new chapter of our quest to build a Real AI (to a great extent, it had now become our quest, not just mine). We saw the mistakes we’d made at Webmind Inc., both technically in terms of the construction of our AI system, and businesswise in terms of conceiving, creating and marketing products based on preliminary partial versions of the end-goal Real AI. We resolved to do it right this time. We are building a new AI system, the Novamente AI Engine, and we’re applying it to various practical domains, mainly focusing on bioinformatics.

But as important as I hope my work will be, I realize that it's just one piece of a huge, amazing, unfolding puzzle. No one scientist will accomplish the Singularity – no single 21st-century da Vinci or Einstein or even Goertzel. No one branch of science will accomplish it either. What is happening is synergetic, cross-disciplinary, international and emergent.

My aim in these pages is not to convince you that this sort of revolution is coming. Ray Kurzweil has made that argument in his recent book The Singularity is Near, using countless graphs of technological and scientific progress, showing the hyperexponential growth curves observed in various industries and dissecting the causes underlying this growth pattern. I admire his work but don't aim to replicate it, or extend it in any direct way. My goal here is more qualitative, though closely related. It is simply to describe for you various aspects of contemporary science that seem to be pushing in a Singularity-ward direction.

I have focused on those areas of intensely future-relevant science that I understand the best, which means mostly computer science, with a goodly helping of biology, and occasional dips into other areas such as physics and engineering and chemistry. But though it's governed by my own interests, the coverage of topics is hardly arbitrary, because the topics I know best are generally things I've learned about precisely because of their central relevance to the ongoing sci-tech revolution. Some parts of what I'll talk about pertain to my own AI research, but the majority concerns the work of other scientists whom I know, either personally or through their writings. Of course, in one brief book I can't possibly tell you all of what's happening in university and industry research labs, garages and living rooms all around you. The vast accelerating advance of science and technology and creative intelligence defies concise description. But I wouldn't have bothered to take the time to write this if I didn't think I could do a moderately decent job of approximating the essence of it all.

Most books these days are written purely for the reader's entertainment – but this one isn't. I wrote this book because I thought the ideas contained in it were important for you to know. And so I'm going to ask something of you: Read all this, and then think about it. Then if you want more information, read other books on related topics – talk to scientists and technologists and ask them questions. And don't accept everything I've said: Draw your own conclusions about where things are headed.

The more people understand what is really happening with science and technology, and what the potential future outcomes are, the larger will be the segment of humanity that actively plays a role in the ongoing revolution, rather than passively being manipulated by it. And in spite of the phenomenal stupidity that sometimes is associated with large mobs of humans, on the whole I think that a broader awareness and conscious participation in our Singularity-ward trajectory would be a good thing.



One of the funny things about being a scientific researcher is that, however high your IQ may be, you wind up feeling incredibly stupid on a regular basis. You may find yourself thinking about the same thing over and over again, for months or years on end, without making any progress at all. And then when you get the answer, it may seem completely obvious, and you can’t understand how you could have been so moronic as not to see it earlier. And then you go out into the world of ordinary nonscientific people and you realize: “Wait a minute, I’m actually fairly intelligent compared to a lot of other people! – I’m not such an idiot after all! – this research is just really, really hard for the measly three-pound human brain.”

Given how hard science is even for scientists, explaining it to nonscientists in a reasonably honest way often seems an insurmountable challenge. Even a fully technical scientific paper can never tell the whole story of the research it purports to summarize. What is fascinating, though, is that at its best, a nontechnical summary of scientific work achieves a different kind of meaning from technical scientific papers. Scientific work focuses on the details – it drills down so far you'd think it would be impossible to drill down any further … and then it drills down yet further. But sometimes it's valuable not to drill down, but rather to look back up, and see how each piece of work relates to everything else – to other bits of science, and to ideas and events and trends in the nonscientific world. This is the kind of meaning that nontechnical discussions of scientific ideas can add. The lack of total scientific precision allows broader cross-connections and intuitions to become perceivable. In this sense, I believe, scientific writing for the nonscientist audience doesn't have to be a mere "dumbing-down" of scientific ideas; it can be a "contextualization" of them – a deepening of scientific ideas in a direction different from the one research work takes them.

(Okay, okay, so that’s a rather lofty perspective. If I’m lucky I’ve achieved this high-falutin’ ideal 30-40% of the time in the following pages. But at least you know what I’m aiming for!)

Of course, contextualizing science is one thing in the context of a narrow scientific topic such as laser physics or genetics; it’s quite another in the context of a synergetic mix of scientific topics whose common theme is their potential to transform the nature of humanity. I have bitten off a rather substantial task here. But anyone who knows me could tell you that unambitiousness is one of the few things not included on my long list of flaws. After all, the primary goal of my scientific career has been the creation of a computer program with human-level general intelligence – and after 15 years I’m still at it, and still optimistic (and I’ll tell part of that story in Chapter 3).

One aspect of popular science books that I have mixed feelings about is the incorporation of biographical information. Knowing the social context of an idea's conception and development can be both fascinating and clarifying; and the adventure of scientific discovery can be gripping and exciting and well worth dramatic recounting. But on the other hand, I've read many books that told me far too much about the story of a scientific discovery, and not nearly enough about the discovery itself. I've tried to walk that fine line here. The ideas described in these pages involve countless human stories, and I've told a small selection of these, introducing biographical information when it seemed particularly useful for bringing out some aspect of the sci-tech points in question. In these passages, rather than focusing on the already famous, I've made an effort to illustrate the breadth of scientific insight that is going into the sci-tech revolution, by recounting bits and pieces of the lives and intellectual adventures of some lesser-known individuals who have nonetheless made outstanding contributions.

My own story – the rise and fall of my company Webmind Inc., the gathering-together of an international team of mad scientists, programmers and financiers to focus on building a thinking machine – is not without its dramatic interest either, but I have only touched on it briefly here and there in these pages, when it seemed particularly apropos. If I ever have the time I’ll write a whole book on the Webmind Inc. story – a fascinating tragicomedy, to be sure. But this is not that book. Here the ideas are the focus. I think they’re dramatic enough.



One big part of the overall task of contextualizing science is placing scientific results in a social and ethical context. Obviously, this is a particularly critical task when one considers scientific work of the sort I'm discussing here – work that promises, in time, to completely redefine what it means to be human (and indeed, whether the "human" category continues to exist at all).

My perspective here is rather distant from the current ethical debates on biotechnology, which center on such things as the ethics of human cloning or genetically engineered produce. From my own deep-futurist point of view, the correct outcome of these debates is painfully obvious. If genetically engineered produce can help feed the hungry people of the Third World – and it can – we should pursue it unreservedly. On the other hand, the fuss about cloning seems to me to be almost entirely due to peculiar religious notions. I have not seen anything resembling a convincing argument that human cloning will lead to great dangers. While I have great respect for the deep spiritual experience that lies at the heart of all religions, it seems to me that religious superstitions have been and will be the cause of a lot more pain and suffering than could ever come from human cloning.

I think the ethical debates over “frankenfood” (I love that expression – for a while I’ve been talking about starting a rock band composed entirely of biologists and called “Frankenfood Buffet”), human cloning, stem cells and related topics will die down within 10 years or so, as these relatively simple kinds of biotechnology become commonplace. The really profound and important ethical debates will be the ones that come after – the ones to do with advanced genetic engineering, man-machine synthesis, and the nature of posthumanity.

There is a real risk that one day genetic engineering will lead to the creation of a genetically-superendowed wealthy-nation elite, with the vast majority of the world unable to afford genetically enhanced children, just as they now cannot afford computers or Game Boys or GPS widgets or cutting-edge pharmaceuticals. There is a real risk that the wealthy will upload into a digital world, leaving the masses in a polluted, increasingly unlivable physical-world ghetto. There is also a risk that, somehow, in creating these exciting technological modifications to ourselves, we will lose the passion, the feeling, the essence of being human – that we will fashion for ourselves a more perfect but less intense and genuinely fulfilling reality. Perhaps the pharmacological elimination of suffering and the creation of viscerally satisfying virtual worlds will make life too easy, destroying the rich if sometimes difficult texture of everyday being that we take for granted. It is my view that these ethical nightmares can be averted, but it's not entirely clear that humanity as a whole will have the will to avert them.

Some of those who wish to stop human cloning and genetically-modified food may feel as they do because they view these things as steps along the way toward more dramatic and dangerous things. I have some respect for this perspective. But I believe that the quest to stop science and technology from developing – Bill Joy's "relinquishment" – is doomed to fail. A pessimistic view would say: "Pandora's box cannot be closed again." But it's better, in my view, to say: "It's clear the future will involve these technologies, so let's direct our energies toward maximizing the chance that these technologies will be used in a positive way." It is entirely possible to use advanced computing and biotechnology to make human experience richer, deeper and better, for all humans. The more people realize this goal is important, the more likely it is to come about.

We will genetically modify humans so that they can hybridize themselves with computers more effectively, jack into virtual realities more readily, and share thoughts and feelings with each other directly rather than through the narrow-bandwidth medium of language. It sounds science-fictional, and yet it seems to me extremely probable that this is 21st-century science, not 22nd-century. The research of today points in this direction just as clearly as, say, the common existence of radio and cinema once pointed towards television. Significant new inventions will be required, but it is abundantly clear that the human race possesses the collective brilliance to make them.
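How narrow is that bandwidth? A quick back-of-envelope calculation makes the point – the figures below are rough, commonly cited ballpark numbers (for instance, Shannon's classic estimate of roughly one bit per character for English text), not measurements:

```python
# Back-of-envelope sketch of the bandwidth of spoken language.
# All numbers are rough ballpark assumptions, not measurements.

words_per_minute = 150   # brisk conversational speech
chars_per_word = 6       # ~5 letters plus a space, for English
bits_per_char = 1.0      # Shannon's classic estimate for English (~0.6-1.3)

speech_bps = words_per_minute / 60 * chars_per_word * bits_per_char
modem_bps = 56_000       # a 1990s dial-up modem, purely for scale

print(f"spoken language:     ~{speech_bps:.0f} bits/second")
print(f"1990s dial-up modem: {modem_bps:,} bits/second "
      f"(~{modem_bps / speech_bps:,.0f}x faster)")
```

Even the crudest consumer data link of the 1990s moved information thousands of times faster than speech; a direct brain-computer channel would not need to be miraculous to dwarf language as a medium for sharing thoughts.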

In these pages I’ll tell you about what scientists are doing today at the boundary of biology and computing, and what I believe they’ll be doing tomorrow, and how it fits into a bigger picture of “posthumanist philosophy.” It was over 100 years ago that Nietzsche, in his Zarathustra, said “Man is something that must be overcome.” He did not envision that silicon chips, gene chips, PCR and so forth would be the mechanisms of this self-overcoming – but his insight was dead-on nonetheless. We are well along in the process of overcoming ourselves, not through the visionary trances of prophets like Zarathustra, but through the workaday R&D activity of thousands of scientists worldwide, studying gene expression data analysis, neurocomputing, advanced computer architecture, anti-aging pharmacology, and a dozen other cutting-edge disciplines. It is humanity as a whole, not any particular individual, that is in the visionary trance, courtesy of the socially-psychoactive substance called science, and heading straight toward a major transformation of the soul. It’s quite a drama that’s unfolding, and we’re right smack in the middle of it – and within some fairly broad parameters, we get to choose the roles we play.