This chapter, more than any of the others in the book, has evolved significantly from its initial condition. It began, like many of the other chapters, as an article for the Frankfurter Allgemeine Zeitung. Of all the articles I wrote for FAZ during the 1999-2001 time period, this was the one that excited their editors the most. I can see why: rather than explaining difficult technical content, it focused on social and moral issues. And I have to admit that, like a lot of successful journalism, it had a bit of an over-sensationalistic tone. The article started out as follows:

Nietzsche, my favorite philosopher, gave his book “Twilight of the Idols” the subtitle “How to philosophize with a hammer.” It was the moral codes and habitual thought patterns of his culture that he was smashing. In a similar vein, the creed of the Extropians, a group of transhumanist futurists centered in California, might be labeled “How to technologize with a hammer.” This group of computer geeks and general high-tech freaks wants to push ahead with every kind of technology as fast as possible – the Internet, body modification, human-computer synthesis, nanotechnology, genetic modification, cryogenics, you name it. Along the way they want to get rid of governments, moral strictures, and eventually humanity itself, remaking the world as a hypereconomic virtual reality system in which money and technology control everything. Their utopian vision is sketchy but fabulous: a kind of Neuromancer-ish, Social-Darwinist Silicon-Valley-to-the-n’th-degree of the collective soul.

I sympathize with their techno-futurism and their lust for freedom. But their brand of ethics scares me a little.

Intuitively conceived as the opposite of entropy, Extropy is a philosophical rather than a scientific term. The Extropians website (www.extropy.org), the online Bible of the movement, defines Extropy as “A metaphor referring to attitudes and values shared by those who want to overcome human limits through technology. These values … include a desire to direct oneself in pursuing perpetual progress and self-transformation with an attitude of practical optimism implemented using rational thinking and intelligent technology in an open society.”

“Transhumanism,” as a general term, refers to philosophy that doesn’t view human life as the ultimate endpoint of the evolution of intelligence. Extropianism is a particular form of transhumanism, concerned with the quest for “the continuation and acceleration of the evolution of intelligent life beyond its currently human form and limits by means of science and technology, guided by life-promoting principles and values, while avoiding religion and dogma.” Working toward the obsolescence of the human race through AI and robots is one part of this; another aspect is the transfer of human personalities into “more durable, modifiable, and faster, and more powerful bodies and thinking hardware,” using technologies such as genetic engineering, neural-computer integration and nanotechnology.

Along with this technological vision comes a political vision. Extropians, according to extropy.org, are distinguished by a host of sociopolitical principles, such as: “Supporting social orders that foster freedom of speech, freedom of action, and experimentation. Opposing authoritarian social control and favoring the rule of law and decentralization of power. Preferring bargaining over battling, and exchange over compulsion. Openness to improvement rather than a static utopia. … Seeking independent thinking, individual freedom, personal responsibility, self-direction, self-esteem, and respect for others.” It is explicitly stated in Extropian doctrine that there cannot be socialist Extropians, although the various shades of democratic socialism are not explored in detail. In point of fact, the vast majority of Extropians are radical libertarians, advocating the total or near-total abolition of the government. This is really what is unique about the Extropian movement: the fusion of radical technological optimism with libertarian political philosophy. With only slight loss of meaning, one might call it libertarian transhumanism.

This characterization of Extropian philosophy was based on my conversations with many individuals identifying themselves as Extropians, both in person and via e-mail. Some of these individuals and conversations will be discussed later on in this chapter.

Conversations with other Extropians over the last couple of years, however -- including Natasha Vita-More, one of the original Extropians -- have made me realize that my impression of the Extropian group at the time I wrote the FAZ article was somewhat flawed and incomplete.

Everything I observed in the article was true -- of a certain subset of the Extropian community. But the Extropian community has a lot more diversity than I gave it credit for. My statement that "the vast majority of Extropians are radical libertarians" was an overstatement. Many Extropians are radical libertarians, and very few are socialists, but all in all there is a much greater variety of political views in the community than I realized in 2000.

So, this chapter represents a variant of that FAZ article, but with a less sensationalistic and hopefully more accurate flavor. The basic themes are the same, but I'm pleased to be able to present them now as pertaining to a subset of the Extropian community rather than the Extropian community as a whole.

Even the new and mellower version of my take on Extropy and related ideas is unlikely to please everyone in the Extropian community. But, well, I'm calling it as I see it. And I need to emphasize the distinction between the "official Extropian line", as laid out on the Extropy website, and the actual belief systems that tend to be held by the majority of individuals associating themselves with Extropianism. My concern here is mainly with these actual belief systems. That is, I'm not writing about Extropianism as a formal set of beliefs nor as an official organization, but rather about the cluster of individuals and ideas that has aggregated around the Extropy concept over the last couple decades.

For example, libertarian politics, as such, is not part of the official Extropian philosophy; but it is a mighty common theme on the Extropy e-mail list and at Extropy conferences. The complaint that Extropianism isn’t intrinsically connected to libertarianism reminds me somewhat of an argument I once had with a Sufi, who claimed that Sufism isn’t a religion. He was right: formally speaking, Sufism is not a religion – it’s a “wisdom tradition” of Arabic origin, associated with Islam. And yet 99.9%, perhaps 100%, of Sufis are Muslims and are religious by the definitions of the rest of the world.

And there's no question: Some Extropians carry their anti-socialist libertarianism to a remarkable ultra-radical extreme. For instance, visionary roboticist Hans Moravec, a hero to many Extropians, had a somewhat disturbing exchange with writer Mark Dery in 1993. Dery asked Moravec about the socioeconomic implications of the robotic technology he envisioned. Moravec replied that “the socioeconomic implications are … largely irrelevant. It doesn’t matter what people do, because they’re going to be left behind like the second stage of a rocket. Unhappy lives, horrible deaths, and failed projects have been part of the history of life on Earth ever since there was life; what really matters in the long run is what’s left over.” Does it matter to us now, he asks, that dinosaurs are extinct? Similarly, the fate of humans will be uninteresting to the superintelligent robots of the future. Humans will be viewed as a failed experiment – and we can already see that some humans, and some human cultures, are worse failures than others.

Dery couldn’t quite swallow this. “I wouldn’t create a homology between failed reptilian strains and those on the lowermost rungs of the socioeconomic ladder.”

Moravec’s reply: “But I would.”

Put this way, Extropianism starts to seem like a dangerous and weird fringe philosophy. But one must remember that Moravec is just one voice among many -- a decent percentage of Extropians would be just as offended by Moravec's ethics as I am.

Moravec is intentionally confrontational and alienating -- but overall, the Extropian perspective isn't that far out on the fringe these days. Luminaries associated with it in one way or another include Marvin Minsky the AI guru, Eric Drexler the nanotechnologist, Kevin Kelly of Wired Magazine, and futurist writer Ray Kurzweil.

At this point, Extropianism does not rank as one of our more prominent cultural movements. But it is active, vibrant, and growing. The Extropy magazine had several thousand subscribers before it moved onto the Web in 1997; and the Extropy e-mail discussion list is a hugely active one (though greatly uneven in quality). A vast amount of online literature exists, related to various aspects of Extropian thinking, linked to the www.extropy.org site. This is definitely one of the more important online communities. Whatever its strengths and weaknesses, it’s worth paying some attention to. Discussion of Extropian ideas brings up all sorts of interesting topics, which are highly pertinent to the future of technology, life and intelligence.

 
 
 

  Max More
 
 

The man who got this all started was Max More, a philosophy Ph.D. with a knack for rational argumentation and an impressive, convincing personal style. In 1995, Jim McClellan interviewed More for the UK newspaper Observer and noted, "The funny thing about Max is that while his ideas are wild, he argues them so calmly and rationally you find yourself being drawn in."

More started his career studying philosophy, politics and economics at St. Anne’s College, Oxford University, in the mid-1980s. At that point his main focus was on economics, from the libertarian perspective. While doing his degree, Max became strongly interested in life extension, and he was the first person in Europe to sign up for cryonic suspension with the US firm Alcor. In 1995, when he received his philosophy degree from the University of Southern California for research on mind, ethics and personal identity, he was already deep into organizing the Extropian movement, bringing his political and technological interests together. Technology, he felt, was ready to push mind into new spaces altogether, such as virtual realities where the notion of “I” as currently conceived had no meaning. Governments were holding us back, preventing or slowing research in crucial areas.

The first edition of Extropy magazine came out in August/September 1988 with just 50 copies, co-edited by Max More and his friend T.O. Morrow. It was a wild mix of sci-fi-meets-reality thinking -- from life extension, machine intelligence, and space exploration to intelligence augmentation, uploading, enhanced reality, alternative social systems, futurist philosophy, and so on. The magazine seeded the social network that led to the e-mail list (1991), the first Extropy conference (1994) and the website (1996), which soon (1997) absorbed and superseded the paper magazine.

In terms of philosophical precedents, it’s not too inaccurate to call More’s credo a mix of Ayn Rand-ian anti-statist individualism with Nietzschean transmoralism, held together by a focus on future technologies. In Extropy #10 he explicitly equates the “optimal Extropian” with “Nietzsche’s Ubermensch.” But he cautions, in another essay (“Technological Transformation: Expanding Personal Extropy”), that “the Ubermensch is not the blond beast and plunderer.” Rather, the Extropian Ubermensch “will exude benevolence, emanating its excess of health and self-confidence.” That’s reassuring… yet hard to reconcile with Moravec’s Olympian detachment regarding the destruction of the human race. This contradiction, I believe, is both Extropianism’s core weakness and a primary source of its energy.

In spite of More’s forceful argumentative style, Extropianism is certainly not an orthodoxy. Within the general “party line” of Extropianism, there’s room for a lot of variety. This is one of the movement’s strengths, and surely a necessary aspect of any organization involving so many over-clever, individualistic oddball revolutionaries. Moravec and More don’t agree with each other entirely, and don’t necessarily agree with all their own past opinions. Consensus isn’t critical; progress is the thing.

 

 
 
 

  Sasha Chislenko
 
 

I’ve had intellectual exchanges with plenty of Extropians, including Marvin Minsky, Max More, Max’s wife Natasha Vita-More (a highly creative thinker), Eliezer Yudkowsky (whom I’ll discuss below), and too many others to name. But the Extropian I’ve known best on a personal level was Sasha Chislenko – a visionary cybertheorist and outstanding applied computer scientist. Sasha’s work, thought and life exemplify the brilliance and power and weakness and danger of the Extropian perspective in an extremely vivid way.

Like that of many Russian émigrés to the US, Sasha’s libertarianism was born of years of oppression under the Soviet Socialist regime. Having seen at first hand how much trouble an authoritarian government can cause, he was convinced that government was intrinsically an almost entirely evil thing. After he left the Soviet Union, Sasha was a man without a country, lacking a Russian passport due to his illegal escape from Russia, and lacking an American passport because of his status as a political refugee. Once Sasha and I were invited to give a lecture together at Los Alamos National Labs in New Mexico, but we were informed that since he lacked a passport, he couldn’t get the security clearance required to enter the lab grounds. We ended up turning down the lecture invitation, both disgusted at the government’s closed-mindedness.

Sasha was impatient for body modification technology to advance beyond the experimental stage – he was truly, personally psyched to become a cyborg, to jack his brain into the Net, to replace his feeble body and brain with superior engineered components. Not that there was anything particularly wrong with his body and brain – in fact he was in fine shape -- they just weren’t as good as the best synthetic models he could envision. He was a strong advocate of various “smart drugs,” some legal, some not, which he felt gave him a superhuman clarity of thought. He was outraged that any government would consider it had the right to regulate the chemicals he chose to put into his body to enhance his intelligence.

His own technical work focused on “active collaborative filtering,” technology that allows people to rate and review things they see on the Net, and then recommends things to people based on their past ratings and the ratings of other similar people. Popular websites like amazon.com and bn.com have collaborative filtering systems embedded in them – when you log on to buy a book, they give you a list of books you might be interested in. Sometimes these systems work, sometimes they don’t. Recently I logged onto Amazon to buy a “Bananas in Pyjamas” movie for my young daughter, and their recommendation system suggested that I might also be interested in the movie “Texas Chainsaw Massacre II.” How it came up with that recommendation I’m not sure, though I can guess: Perhaps the only previous person to buy “Bananas in Pyjamas” had also bought the Texas Chainsaw Massacre film. The recommendation systems that Sasha designed were far more sophisticated than this one, probably the most advanced in the world. He led a team implementing some of his designs at Firefly, a company later acquired by Microsoft.
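To make the basic mechanism concrete, here is a minimal sketch of user-based collaborative filtering in Python. It is purely illustrative -- the users and ratings are made up, and it has none of the sophistication of Sasha's designs or of Amazon's actual system -- but it shows the core idea of recommending items based on the ratings of similar users:

```python
# A minimal sketch of user-based collaborative filtering -- an illustration of
# the general technique described above, not any real site's algorithm.
# All users, items and ratings here are invented.
from math import sqrt

ratings = {
    "alice": {"Bananas in Pyjamas": 5, "Toy Story": 4},
    "bob":   {"Bananas in Pyjamas": 5, "Texas Chainsaw Massacre II": 4},
    "carol": {"Toy Story": 5, "Finding Nemo": 3},
}

def similarity(a, b):
    """Cosine similarity based on the items two users have both rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    norm_a = sqrt(sum(v * v for v in ratings[a].values()))
    norm_b = sqrt(sum(v * v for v in ratings[b].values()))
    return dot / (norm_a * norm_b)

def recommend(user, top_n=3):
    """Score unrated items by the ratings of similar users, weighted by similarity."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))
# ['Texas Chainsaw Massacre II', 'Finding Nemo'] -- with only one other buyer of
# "Bananas in Pyjamas", the top pick is whatever that buyer also rated: exactly
# the failure mode described in the anecdote above.
```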

Compared to body modification, cranial jacks and superhuman artificial intelligence, active collaborative filtering might seem a somewhat unexciting path to the hypertechnological future, but to Sasha, it was a tremendously thrilling thing – a way for humans to come together and enhance one another’s mental effectiveness, passing along what they’d learned to one another in the form of ratings, reviews and recommendations. Recommendation and filtering technology was a kind of collective smart drug for the net-surfing human race.

Sasha’s vision in this area has become somewhat mainstream by this point. An example is the website epinions.com, which pays users to give their reviews of consumer products and other things. The higher that others rate your reviews, the more you get paid. Sasha had nothing to do with this site, but it epitomized his ideal. He strongly felt that, as the economy transformed into a cyber-powered hypereconomy, intellectual contributions like his own would finally get the economic respect they’d always deserved. People would be paid for writing scientific papers to the extent that other scientists appreciated the papers. The greater good would be achieved, not by the edicts of an authoritarian government, but by the self-organizing effects of people rating each other’s productions, and paying each other for their ratings and opinions. He coined the word “hypereconomics” to refer to the complex dynamics of an economy in which artificial agents dispense small payments for everything, and in which complex financial instruments emerge even from simple everyday transactions – AI agents paying other agents for advice about where to get advice; your shopping agent buying you not just lettuce but futures and options on lettuce, and maybe even futures and options on advice from other agents.
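Here is a toy sketch of that "rate the raters" payment idea -- not epinions.com's actual formula, which I don't know, just the simplest possible proportional scheme with invented names and numbers:

```python
# A toy illustration of the payment scheme described above: reviewers are paid
# in proportion to how highly other users rate their reviews. This is a sketch
# of the general idea only; the pool size, reviewers and ratings are made up.

review_ratings = {
    # reviewer -> ratings (1-5) that other users gave that reviewer's reviews
    "alice": [5, 4, 5],
    "bob":   [2, 3],
    "carol": [4],
}

PAYMENT_POOL = 100.00  # hypothetical total dollars to distribute this period

def payouts(ratings, pool):
    """Split the pool in proportion to each reviewer's total rating score."""
    scores = {reviewer: sum(rs) for reviewer, rs in ratings.items()}
    total = sum(scores.values())
    return {reviewer: round(pool * s / total, 2) for reviewer, s in scores.items()}

print(payouts(review_ratings, PAYMENT_POOL))
# {'alice': 60.87, 'bob': 21.74, 'carol': 17.39} -- the better-rated reviewer
# captures the larger share, which is the hypereconomic intuition in miniature.
```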

But there was a painful contradiction lurking here, not far beneath the surface. And this personal contradiction, I believe, cuts close to the heart of Extropian philosophy -- at least, in the form that it takes in the mind of many Extropians. The libertarian strain in Sasha’s thinking was highly pronounced: Once he told me, tongue only halfway in cheek, that he thought air should be metered out for a price, and that those who didn’t have the money to pay for their air should be left to suffocate! I later learned this was a variation on a standard libertarian argument, sometimes repeated by Max More, to the effect that the reason the air was polluted was that nobody owned it – ergo, air, like everything else, should be private property. At the 2001 Extropy conference I heard a speaker give an even more extreme variation: In the future, every molecule will be bar-coded so its owner can be identified. People or their descendants will have to pay for the oxygen they breathe, molecule by molecule....

Sasha equated wealth with fundamental value, and his vision of the cyberfuture was one of a complex hypereconomic network, a large mass of money buzzing around in small bits, inducing people and AI agents to interact in complex ways according to their various personal greed motives. But he was by no means personally wealthy, and this fact was highly disturbing to him. He often felt that he was being shafted, that the world owed him more financial compensation for his brilliant ideas, that the companies he’d worked for had taken his ideas and made millions of dollars from them, of which he’d seen only a small percentage in the form of salary and stock options.

Sasha worked for me for a while in 1999 and 2000; my company Webmind Inc. hired him away from Marvin Minsky’s lab at MIT. I must say that, while I enjoyed Sasha very much as an intellectual collaborator and a friend, he wasn’t in any way an easy employee. I was excited to bring him into the Webmind team, but I was relieved as well as sad when he quit in mid-2000, having been offered a position as CTO of a tech incubator in Boston. He contributed many interesting technical and conceptual ideas to our Webmind work – mostly to the Webmind Classification system product (which focused on automatically placing documents into categories), but to some extent to the AI R&D codebase as well. But he excelled neither as a practical implementor nor as a manager, and so it was sometimes hard to fit him into the work process of a start-up company based on collaborative teamwork. He spent very little of his life in a university research context, but I often thought this would be the most natural place for him – he had diverse skills and ambitions, but above all, he was a visionary deep thinker and conceptual guru par excellence.

Toward the end of 1999, he frequently told me how he had conquered many of the intellectual puzzles he had been struggling with for decades, and was now focusing on mastering his own mind and emotions. The gravity with which he declared this scared me a little. I told him, probably too flippantly, that I found emotions were sometimes more fun if you left them unmastered. But he wasn’t always so serious: he was also an avid disco dancer, occasionally observed leaving the Webmind office in the evening with a girl half his age on his arm, heading for a dance club or a rave, where he would move to the beat in his peculiarly robotic yet wonderfully life-ful way.

When Sasha committed suicide in mid-2000, I wondered at first whether it had been an act of philosophical despair. Had there been a problem at his new company – were they unwilling to implement his latest designs for online collaborative filtering? Had he received one more devastating piece of evidence that the world just wasn’t going to compensate him appropriately for his ideas, that the hypereconomic cyberfuture was far too slow in arriving? As it turned out, his terrible action was more directly motivated by a complicated and painful romantic relationship – good old-fashioned, low-tech human passionate distress.

His 19-year-old girlfriend had jilted him – not one of the girls I’d seen him raving with in New York, someone else I’d never met. The situation was complex as such things often are, but the crux of it seems to be that he had wanted a more exclusive sort of relationship than she had. She later created some controversy by posting his final love letters to her on her public website, along with her wide-ranging, youthful musings on life, the universe and everything. I corresponded with her briefly and found her sweet, intelligent, creative, understandably upset, and more than a little confused. Fresh out of high school, she’d been overwhelmed by the mind and affections of this 40-year-old sometimes-depressed genius. She’d known he was both depressed and jealous, but was as shocked as anyone else to learn that he’d hanged himself in his apartment with electrical cables. She felt sure she had had a brief spiritual contact with him from beyond the grave.

In some important ways, Sasha was similar to Nietzsche, who as we’ve seen was one of the Extropians’ philosophical godfathers. Both Sasha and Nietzsche were intellectual superstars who explicitly professed one moral philosophy, but lived another. Nietzsche preached toughness and hardness, but in his life he was a sweet person, respectful of the feelings of his mother and sister (whose beliefs he despised). On the day he went mad, he was observed hugging a horse in the street, overcome with sympathy because its master had whipped it. He preached the merits of the robust, healthy man of action and criticized intellectual ascetics, yet he himself was sickly, nearly celibate, and sat in his room thinking and writing day in and day out. Similarly, Sasha extolled the money theory of value, yet lived his own life seeking truth and beauty rather than cash, trying to transform the world for the better and distributing his ideas for free online. He argued that air should be metered out only to those who could pay for it, yet was unfailingly kind and generous in real life, always willing to help young intellectuals along their way without asking for anything in return.

For what it’s worth, it’s impossible to avoid observing in this context that Sasha, the would-be-cyborg transhumanist, manifested a remarkable number of cyborg-like personality traits. His body movements were sometimes oddly robotic – in fact he looked most natural when dancing to techno music, with its computer-generated beats. It would be an unfair exaggeration to say that his voice had something of the manner of a speech synthesizer – but it did have a peculiar stiffness that one might describe as wooden or metallic. Of course, I don’t want to make this point too strongly: Sasha was an outgoing, friendly human being, easily hurt and in some circumstances quick to anger; he was by no means devoid of affect. But when, about six months before his death, a group of us were coming up with silly e-mail nicknames for our co-workers (Sasha was among them at the time), the one we picked for Sasha was robotron@ …. It was clear to everyone who knew him that he had difficulties dealing with the ambiguities and subtleties of human attitudes and relationships. He acknowledged this himself, and sometimes said it was something he was working on. He was a poor politician, which is partly why he so often got himself into positions where his ideas weren’t adequately appreciated by his co-workers or employers. Extropianism, a clear-cut, simple philosophy, seemed to provide him a welcome respite from the human complexities and contradictions that caused him so much grief in ordinary life.

Of course, not all Extropians have Sasha’s personality characteristics. It would be a mistake to overgeneralize, to create a psychology of Extropians from this one example. Max More, for example, is extremely politically adept in his own way; and Max and his wife Natasha are both body-builders with much grace and naturalness in their physical motions, devoted to living well-rounded lives as well as to deep futurism and the life of the mind. Many Extropians have above average mastery of human relations, happy personal lives, and so forth. But still, it’s impossible not to hypothesize that the role Extropianism played for Sasha – providing crisp certainties to serve as welcome relief from the puzzling, stressful confusion of everyday life – tells us something about the role Extropianism plays for some other individuals as well.

For some of its adherents, Extropianism serves the role of providing a simple, optimistic world-view, and a community of like-minded believers. Like most religions, and other religion-like belief systems such as Marxism, it can, via its focus on a better future world, encourage avoidance of the difficult ambiguities of human reality. Of course, Extropianism is explicitly anti-religious, but it’s not a new observation that rabid anti-religiosity can, for some people, serve almost as a religion itself. As Dostoevsky said, the atheist is one step away from the devout. Atheism and theism provide the mind with the same kind of rigid certainty. For some people, this kind of definitive cutting-through of the Gordian knots of messy human reality can be indispensable, providing the comfort level prerequisite to a healthy and productive state of mind.

And of course, for other Extropians, the Extropian perspective does not serve this sort of role at all, but rather is a conceptual and practical philosophy fitting in naturally with an intellectually, emotionally and physically healthy life.

 

As Max More realized from the start, the moral-philosophy aspects of Extropianism are key. Like Nietzsche, Extropians recognize that morals are biologically and culturally relative, rather than absolute. Who hasn’t been struck by this at one time or another? We consider it OK to eat animals but not humans; Hindus consider it immoral to eat cows; Maori and other tribes until quite recently considered it OK to eat people. Or, consider sexual morals. Why are female sexual infidelity and promiscuity considered “worse” than similar behaviors on the part of males? This is common to all human cultures; it comes straight from the evolutionary needs of our selfish DNA.

Given this blatant arbitrariness, it’s very attractive to ignore human values altogether, and focus one’s attention on knowledge, understanding and power -- qualities which seem to have an absolute meaning that morality lacks. In this vein, Nietzsche focused on personal power achieved through mental exploration and self-discipline; whereas the Extropians focus, by and large, on power achieved through technological advancement. They also share a focus on intellectual brilliance -- and many, though not all, Extropians seem to take a worrisomely dismissive attitude toward those who they feel don’t have what it takes to make the next step on the cosmic evolutionary path (as exemplified in the Moravec quote above).

Sure, Moravec was playing Devil’s Advocate in that interview. But what I'd like to see is more Extropians taking an opposite point of view, and focusing on the value of transhumanist technologies for advancing the well-being of every sentient being. The further Extropian culture moves in this direction, the more I'll like it.

Four or five years ago, I posted a question on the Extropian e-mail list. This was well before writing the first version of this article, or even thinking about writing about Extropianism. I was just intellectually fishing. I posited, in my post to the list, that compassion, simple compassion, was an ethical universal, although it might manifest itself in different ways in different cultures and different species. I suggested that compassion, in which one mind extends beyond itself to feel the feelings of others and act for the good of others without requiring anything in return, was essential to the evolution of the complex self-organizing systems we call cultures and societies. Basically, I expressed my disbelief that all human interaction is, or should be, economic in nature.

The deep intellectual and ethical discussion that I was awaiting – well, no such luck. There was a bit of flaming, some impassioned Ayn Rand-ish refutations, and then they went back to whatever else they’d been talking about, unfazed by my heretical position that perhaps transhumanism and humanism could be compatible, that technological optimism wasn’t logically and irrefutably married to libertarian politics. At that time, you could only belong to their e-mail list for free for 30 days; after that you had to pay an annual subscription fee. After my 30 days expired, I chose not to pay the fee, bemused that this was the only e-mail list I knew of that charged members money, but impressed by their philosophical consistency in this matter. (Now the list is free though.)

I have more respect for the diversity of the Extropian community than I did when I first encountered it, or when I wrote the FAZ article that was the first version of this chapter. It’s definitely a loose conglomeration of individualists – heck, even though I’m not formally a member of the Extropian group, I’ve spent some time on their e-mail list, and I’ve been to one of their conferences, and so from the outside world’s perspective, I’m virtually an Extropian myself, even though some of the key philosophical habits of the group trouble me. Any statement made about “Extropians” as a whole is bound to be a bad overgeneralization, more so than would be the case for many other social subgroups.

I admire Extropianism's courage in going against conventional ways of thinking, in recognizing that the human race is not the end-all of cosmic evolution, and in foreseeing that many of the moral and legal restrictions of contemporary society are going to be mutated, lifted or transcended as technology and culture grow. I too am outraged and irritated when governments stop us from experimenting with our minds and bodies using new technologies -- chemical or electronic or whatever. I find Extropian writings vastly more fascinating than most things I read. Extropian individuals are looking far toward the future, exploring regions of concept-space that would otherwise remain unknown, and in doing so they may well end up pushing the development of technology and society for the better. But yet, I’m a bit vexed by the strain of Extropian thought that envisions Extropian human beings as supertechnological proto-Ubermensches, presiding over the inevitable obsolescence of humanity. It’s simultaneously attractive, amusing and disturbing.

Nietzsche, like Sasha Chislenko, was generally an exemplary human being in spite of the inhuman aspects of his philosophy. Yet many years after his death, Nietzsche’s work played a role in atrocities, just as he’d bitterly yet resignedly foreseen. In the back of my mind is a vision of a far-future hyper-technological Holocaust, in which cyborg despots dispense air at fifty dollars per cubic meter, citing turn-of-the-millennium Extropian writings to the effect that humans are going to go obsolete anyway, so it doesn’t make much difference whether we kill them off now or not. And so, I think Extropians should be read, because they’ve thought about some aspects of our future more thoroughly than just about anyone else. But I also think that some of the key themes in the Extropian community -- particularly, the alliance of transhuman technology with simplistic, uncompassionate libertarian philosophy -- must be opposed with great vigor.


Many of the freedoms the Extropians seek – the legal freedom to make and take smart drugs, to modify the body and the genome with advanced technology – will probably come soon (though not soon enough for me, or them). But I hope that these freedoms will not come along with a cavalier disregard for those living in less fortunate economic conditions, who may not be able to afford the latest in phosphorescent terabit cranial jacks or quantum-computing-powered virtual reality doodaddles, or even an adequately nutritious diet for their children. I am an incurable optimist: I believe that we humans, for all our greed and weakness, have a compassionate core, and I hope and expect that this aspect of our humanity will carry over into the digital age – even into the transhuman age, outliving the human body in its present form. I love the human warmth and teeming mental diversity of important thinkers like Max More, Hans Moravec, Eliezer Yudkowsky and Sasha Chislenko, and great thinkers like Nietzsche – and I hope and expect that these qualities will outlast the more simplistic, ambiguity-fearing aspects of their philosophies. Well aware of the typically human contradictoriness that this entails, I’m looking forward to the development of a cyberphilosophy accepting what is great in Extropianism and moving beyond it in the explicit direction of compassion -- a humanist transhumanism.

The typical techno-futurist guru has a huge variety of domains of interest and knowledge, but one, maybe two special obsessions. Moravec is robotics-focused; More is particularly into life extension and libertarian politics; I’m an AI guy at heart, in spite of my recent forays into biotechnology. On the other hand, Eliezer Yudkowsky, one of the more interesting young folks in the Extropian circle, focuses his thinking on ensuring the Singularity is beneficial by creating what he calls “Friendly AI.” I find Eliezer’s work particularly interesting in that it is, in concept at least, a kind of humanist transhumanism. Unlike Moravec, Eliezer specifically and intensely wants the Singularity to help all humankind. He takes his altruism very seriously. “If one human dies,” he says, “it subtracts from me.”

Yudkowsky shares with me the idea that the best course to the delirious and universally beneficent cyberfuture is to create a computer smarter than us, one that can figure out these other puzzles for us. To accomplish this goal of real computer intelligence, he champions the notion of “seed AI,” in which one first writes a simple AI program that has a moderate level of intelligence, and the ability to modify its own computer code, to make itself smarter and smarter. His design for a “seed AI” is still evolving, and so far I don’t feel he’s achieved nearly the level of concreteness that we have with our Novamente system, but he’s a clever guy and I don’t doubt he’ll come up with something interesting. Discussions on his “Singularitarian” e-mail list led to the formation of the Singularity Institute devoted to the creation of seed AI, and to the company Vastmind.com, developer of a distributed processing framework that allows a collection of computers on the Net to act like a single vast machine.

Like many of the leading Extropians, Eliezer started his life as a gifted child; and, like many gifted children, he grew up neglected by the school system and misunderstood by his parents. He’s followed a unique psychological trajectory: After the seventh grade, he was stricken with a peculiar lack of energy, which to some degree plagues him to this day. His parents tried to help him cope with this in various ways, but without success: only when they allowed him to take control of his own life and his own mind was he able to work his way back to a productive and functional state of mind. This experience, he says, taught him that even well-meaning, loving people who want to help you can do you a lot of damage, due to their lack of understanding. He cites this as one of the sources of his libertarian political philosophy. Just as his parents tried to guide his life but failed in spite of good intentions, so does the government try to guide the lives of its citizens, but fails – and fails particularly where the vanguard of technology is concerned.

Eliezer runs an e-mail list called “SL4”, which he moderates with a sense of humor and an iron hand. His control is occasionally a little overbearing, but, all in all, it keeps the list quality about 100 times higher than on the Extropians list. “SL4” stands for “shock level 4,” where he defines a shock level as a measurement of “the high-tech concepts you can contemplate without… experiencing future shock.” According to his Web page, http://sysopmind.com/sing/shocklevels.html, the shock levels are defined roughly as follows:

  • SL0: The legendary average person is comfortable with modern technology - not so much the frontiers of modern technology, but the technology used in everyday life.
  • SL1: Virtual reality, living to be a hundred, the frontiers of modern technology as seen by Wired magazine. Scientists, novelty-seekers, early-adopters, programmers, technophiles.
  • SL2: Medical immortality, interplanetary exploration, major genetic engineering, and new ("alien") cultures. The average SF fan.
  • SL3: Nanotechnology, human-equivalent AI, minor intelligence enhancement, uploading, total body revision, intergalactic exploration. Extropians and transhumanists.
  • SL4: The Singularity, Powers (a term taken from Vernor Vinge’s fiction, meaning superintelligent god-beings), complete mental revision, ultraintelligence, the total evaporation of "life as we know it."
Inevitably, every now and then someone posts a message on the SL4 group announcing that they have achieved SL5, under some definition or another.

According to Yudkowsky, “The use of this measure is that it's hard to introduce anyone to an idea more than one Shock Level above - and Shock Levels measure what you accept calmly, not what you know about. There are very few SL4s…. If somebody is still worried about virtual reality (low end of SL1), you can safely try explaining medical immortality (low-end SL2), but not nanotechnology (SL3) or uploading (high SL3). They might believe you, but they will be frightened - shocked.”

    The term “sysopmind” refers to Yudkowsky’s notion of the Sysop, a superintelligence that has achieved near-complete control over the structure of matter and energy in some region of spacetime, and thus plays the role of a (hopefully benevolent) system administrator for some portion of the world. A long recent thread on the SL4 group discussed the possibility of some lesser mind, at some future time, hacking into the Sysop and co-opting its powers for its own devious purposes.

    Of all the various discussions on SL4, though, the one that amused me most concerned the contrasts between Eliezer’s and my personal lives. Eliezer was raised in a strict Jewish family, and it seems to me that he shows this influence very strongly in his life and his work, even though he is an avowed atheist. His devotion to the Singularity is definitively monkish. As he has publicly declared many times, he does not “fight, drink alcohol, take drugs or have sex.” He recognizes the pleasures that can be obtained from these things – not through experience but through hearsay – but he does not want to get involved with activities that will evoke strong animal emotions and thus distract him from 100% focus on the Singularity. He specifically laid out, in an e-mail to the SL4 list, the only conditions under which he could see violating these precepts. For instance, if a wealthy woman approached him and told him she would fund his work on the Singularity, but only if he would marry her – then, he says, he would marry her, not for her sake, but for the Singularity.

    In e-mail dialogues with Eliezer on SL4, I presented my doubts as to the necessity of this monkish approach. I said that I was highly devoted to pushing toward the Singularity as well, but I didn’t see why it would be necessary for me to give up having a rewarding personal life in order to manifest this devotion in a highly effective way. I work long hours because I enjoy it and because I believe what I’m doing is extremely important, both for myself (I want to build an AI that will help me figure out how to live forever!) and for the human race and the evolution of intelligence overall. But I still take time for my wife and children, and various other pursuits like composing music, playing the piano (not that expertly, but enthusiastically), occasional outdoor sports, and writing this book…. My own quest for the Singularity, I explained to him, was partly altruistic, but partly a consequence of my boundless curiosity and my desire for adventure and excitement – the same thing that pushes me to try new sports, to travel to different countries; and the same thing that, in my college years, impelled me to experiment with various mind-altering substances (though I never took “hard drugs”, which I judged too dangerous).

    He responded that his own quest to bring about the Singularity through creating seed AI was purely altruistic in motive. He also called himself a true romantic, stating that he knew a real love relationship would take too much of his time, so he was just going to steer away from that domain of life altogether. Waxing poetic, he said "...love is a cathedral that you build together, a rose that you grow and water together *for its own sake* "

    Upon reading this characterization of love, I couldn’t help but think of the Charles Bukowski line, “Love is a dog from hell.” I responded in a quasi-Bukowskian vein:

    "Love is a cathedral, huh? I can't let that one pass...

    Sometimes love *is* a cathedral, dude...

    Sometimes it's more like a sleazy, raunchy strip bar in downtown Las Vegas at 4AM ..

    And sometimes, for sure, it's more like a god damned

    dilapidated outhouse..."

    Both of us tired of e-mailing for the day, Eliezer went back to helping humanity, and after a moment spent wondering when my wife – with whom I’d recently reconciled after a 6 month separation -- was coming back from shopping, I started again on my own endless work. My last thought on the topic was that love didn’t really have much to do with any of these silly words anyway – but I sure hoped it would survive the Singularity in one form or another.

    There followed a hilarious SL4 e-mail thread in which a group of others discussed the theme of the “monk versus the warrior.” Somehow I had become a warrior – which struck me as rather amusing since in point of fact, much like Eliezer, I spend an excessive proportion of my waking hours on my butt in front of the computer. Unlike, say, a Robert Heinlein hero, my fencing skills are pretty much nonexistent. (Although I did get into hand-to-hand combat with a thief who entered an apartment I was temporarily living in, a couple months ago… fortunately he was drunk, so he was removed from the premises fairly easily.)

    The main practical consequence of Eliezer’s extreme altruism – apart from his abstemious lifestyle – is his focus on the notion of Friendly AI. He wants the Singularity to benefit all people, which in my view is a vast improvement on the Moravecian “to hell with the poor” attitude. And he believes the Singularity will be brought about by a seed AI transforming itself to superintelligence and then making endless further inventions and innovations, until it becomes a Sysop. It follows from this that making the seed AI as benevolent as possible to humans is an important idea. Of course, it can’t be known that a human-friendly seed AI will become a human-friendly Sysop. But, in Eliezer’s view, the lack of absolute knowledge in this regard is a lame excuse for not trying.

    His key idea is that Friendliness (to humans) should be at the top of any seed AI’s goal system. Other goals, such as learning things or surviving, should be represented within the system as subgoals of Friendliness. The system should try to survive, but not because survival is its ultimate goal – rather, because surviving will allow it to help people more.
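A crude sketch of that goal structure might look like the following. This is only my illustration of the idea of Friendliness as a single supergoal with everything else as instrumental subgoals -- it is not Eliezer's actual Friendly AI architecture, and all the names in it are invented:

```python
# A minimal sketch of a goal hierarchy with Friendliness at the top, as
# described above. Illustrative only; not Yudkowsky's actual design.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    parent: "Goal | None" = None              # every goal except the supergoal has a parent
    children: list["Goal"] = field(default_factory=list)

    def add_subgoal(self, name: str) -> "Goal":
        sub = Goal(name, parent=self)
        self.children.append(sub)
        return sub

    def justification(self) -> list[str]:
        """Trace a goal back up to the supergoal that ultimately justifies it."""
        chain, g = [self.name], self
        while g.parent is not None:
            g = g.parent
            chain.append(g.name)
        return chain

# Friendliness sits at the top; survival and learning are only instrumental.
friendliness = Goal("be Friendly to humans")
survive = friendliness.add_subgoal("keep running")        # survive *in order to* help
learn = friendliness.add_subgoal("learn about the world")

print(survive.justification())
# ['keep running', 'be Friendly to humans'] -- survival is not an end in itself
```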

    When I invited him to give a talk at Webmind Inc. in late 2000, he lectured us passionately on the need to give our AI system a friendly goal system. He was a little concerned that the Webmind AI Engine might undergo a “hard takeoff” – a rapid transition from intelligence to superintelligence via progressive self-modification – and that if it didn’t have the right goal system inside it at that point, the future of humanity might be a bleak one. Since those of us involved with the Webmind project were painfully aware of the incomplete state of our codebase, we were not so concerned about this possibility.

    Reactions to his talk by Webmind Inc. staff ranged from deep interest to distant amusement to outright disgust at the silliness and impracticality of the topic. Generally speaking, my Webmind colleagues were absorbed with the practical problems of trying to create real digital intelligence, whereas Eliezer was more concerned at that point with the various philosophical and futuristic issues that will arise once a truly intelligent AI system is completed. But the issue of “wiring in Friendliness” definitely struck everyone powerfully, one way or another. Among the milder responses, one of our Brazilian software engineers – not one of the several who had worked on the Net PC project before joining Webmind, but a good friend of those who had, and a student of Wagner Meira and Sergio Campos who worked on the Brazilian Net PC – raised his hand and politely said: “But perhaps the most important thing is not the in-built goal system, but whether we teach it by example.” The friendlier we are, in other words, the friendlier our AI systems are going to be.

    The issue is clear and poignant. What the Brazilian engineer was suggesting was that, if our superhuman AI grows up watching us act as though most humans are dispensable and irrelevant, perhaps it will, in its adulthood, believe that we too are dispensable and irrelevant. On the other hand, perhaps, as Eliezer says, it will grow up and understand that building it was the best thing the cyber-elite could do for humanity as a whole, and it will then proceed to spread joy and plenty throughout the land. Who knows?

    Personally, I find the motivation behind the Friendly AI concept admirable; but my own digital mind design does not have quite so rigid a goal system as Eliezer’s analysis implies, and I tend to agree with the view the Brazilian engineer expressed during Eliezer’s talk -- that experience and education are going to make more of a difference to the Friendliness or otherwise of a seed AI, than any structure explicitly embedded in its goal system. But I’ll be curious to see what kind of AI architecture Eliezer comes up with; no doubt his AI design will be more compatible with his own thoughts on goals and their management. He has recently written a paper on "Deliberative General Intelligence" which takes some serious steps in this direction.

     

I’ve been chatting on the SL4 list regularly for the last year or so. On the other hand, my last serious foray onto the Extropians e-mail list was in April 2001. Webmind Inc. had just folded, and I was in need of some distracting entertainment. I thought it would be amusing to bring up the old social-consciousness theme again, though with a less adversarial twist than I’d done in my last venture onto the list a few years before.

There was a thread discussing how hard it was to get Extropian ideas accepted into mainstream culture. I suggested that, as a counterbalance to the “scary” aspects of deep futurism, it might be valuable for the Extropian community, as a group, to become involved in some kind of socially beneficial project, perhaps spreading technology to the disadvantaged. I had the Brazilian net computing project in the back of my mind, along with a program I’d heard of that, for $14,000 or so, allowed you to sponsor a Cambodian elementary school.

Eliezer, who was active on the Extropians list at that time as well, responded firmly that the best thing he could do for the disadvantaged of the world was to focus all his time and effort on bringing about the Singularity, because the Singularity will help everyone. He said,

If you can't, on a deep emotional level, see the connection between my work and the starving people in the Sudan, then this - from my perspective - is an emotional peculiarity on your part, not mine.

I replied as follows:

I do perceive the connection, of course, both rationally and emotionally. Your work has a decent chance of increasing the probability that the Singularity is good for humans, and it's therefore a very important kind of work. I feel the same way about my own work. AI technology is going to do a lot of good for a lot of people, someday. I do feel AI will be a profoundly positive technology for humans, not a negative one like, say, nuclear weapons, which I wouldn't enjoy working on even if it were intellectually stimulating.

But yet, for reasons that are still not easy for me to articulate, I feel a bit of discomfort with **solely focusing one's life** on this type of compassionate activity -- on "helping people by doing things that will help them in the future but don't affect them at all right now." This is a good kind of activity to be doing, for sure. But yet, I feel that, in general, this kind of long-term helping of others can be conducted better if it's harmonized with a short-term helping of others.

Not surprisingly (and not too disappointingly – I love Eliezer’s work and don’t really want everyone to think exactly the same way I do), I didn’t convince him. He asked:

How do you resolve issues like these? Split your efforts between both alternatives to maximize output. How much money is spent on attempts to actually ship food directly to the poor? Lots. How much money is spent on direct efforts to implement the Singularity? We can both personally attest, Ben, that there is not much.

To his

There is absolutely *nothing* I could do that would help the rest of the world more than what I am already doing.

I replied

In my view, given the numerous uncertainties as to the timing and qualitative nature of the Singularity, it is irrational of you to hold to this view with such immense certainty.

Actually, I honestly feel that if you spent a year teaching kids in the Sudan, you'd probably end up contributing MORE to the world than if you just kept doing exactly what you're doing now. You'd gain a different sort of understanding of human nature, and a different sort of connection with people, which would enrich your work in a lot of ways that you can hardly imagine at the moment. Not to mention a healthy respect for indoor plumbing!!

Samantha Atkins, a long-time Extropian and a very thoughtful person, chipped in as follows, presenting a view a bit closer to mine:

Perhaps there is a productive middle ground. Some of us could say more about precisely how the Singularity, and the technologies along the way, can be applied to solving many of the problems that beset real people right now. We can produce and spread the memes of technology generally and AI, NT and the Singularity in particular as answering the deepest needs, hopes and dreams of human beings.

As part of this we also need more of a story about the steps up to Singularity as involves the actual lives and living conditions of people. That we will muddle along somehow while a few of the best and the brightest create a miracle is not very satisfying. What kind of world do we work toward in the meantime? What do we do about poverty, about technology obsoleting skills faster than new ones can be acquired, about creating workable visions including ethics and so on? What is our attitude toward humanity?

The world we make along the way will shape the Singularity and may well determine whether it occurs at all.

I then presented a parable. Suppose you're stuck on a boat in the middle of the ocean with a bunch of people, and they're really hungry, and the boat is drifting away from the island that you know is to the east. Suppose you're the only person on the boat who can fish well, and also the only person who can paddle well. You may be helping the others most by ignoring their short-term needs (fish) in favor of their longer-term needs (getting to the island). If you get them to the island, they'll eventually have plenty of food, a much better situation than drifting along with a slightly full stomach.

If the other people don't realize the boat is drifting in the wrong direction, though, because they don't realize the island is there, then what? Then they'll just think you're a bit loopy for paddling so hard. And if they know you're a great fisherman, they'll be annoyed at you for not helping them get food....

What, I asked, is this little parable missing?

I answered my own question: Sociality. If you feed the other people, they'll be stronger, and maybe they'll be more help in paddling the boat. Furthermore, if you maintain a friendly relationship with them by helping them out in ways that they perceive as well as ways that they do not, then they're more likely to collaborate creatively with you in figuring out ways to save the situation. Maybe because of their friendship with you, they'll take your notion that there's an island to the east more seriously, and they'll think about ways to get there faster, and they'll notice a current that you didn't notice, floating on which will allow you to get there faster with less paddling.

The difference here, I posited, is between the following two attitudes:

1) Seeking to, as a lone and noble crusader, save the world in spite of itself

2) Seeking to cooperatively engage the world in the process of saving itself

To do 2, I pointed out, it's not enough to do things that you perceive are good for everyone in the long run. You have to gain the good will of others, and work with them together on things that both they, and you, feel are important.

Of course, it's impossible and undesirable to have a consensus among all humans as to what is good and what is bad. So like most things in the human world, the distinction between 1 and 2 is fuzzy rather than absolute.

So my argument to Eliezer, based on this parable, was:

I realize that you, Eli, are trying to cooperatively engage the world in the process of saving itself your way, by publishing your thoughts on Friendly AI. But I have an inkling that the way to cooperatively engage the world in the process of saving itself ISN'T to try to convince them to see things your way through rational argumentation. Rather, it's to try to enter into a real dialogue where each side (transhumanists vs. normal people in this case) makes a deep and genuine effort to understand the perspective of the other side.

His reply was both well-thought-out in its details, and predictable in its overall course:


If the other people don't **realize** the boat is drifting in the wrong direction, though, because they don't realize the island is there, then what? Then they'll just think you're a bit loopy for paddling so hard. And if they know you're a great fisherman, they'll be annoyed at you for not helping them get food....

Except I'm *not* a great fisherman. I am a far, far better paddler than I am a fisherman. There are *lots* and *lots* of people fishing, and nobody paddling. That is the situation we are currently in.

What is my answer missing? Sociality. Very well, then, let's look at the social aspects of this.

Your answer makes sense for a small boat. Your answer even scales for a hunter-gatherer tribe of 200 people. But we don't live in a hunter-gatherer tribe. We live in a world with six billion people. From a "logical" perspective, that means that it takes something like AI to get the leverage to benefit that many people. From a "social" perspective, it means that at least some of those people will always be ticked off, and hopefully some of them will sign on.

Plans can be divided into three types. There are plans like Bill Joy's, that work only if everyone on the planet signs on, and which get hosed if even 1% disagree. Such plans are unworkable. There are plans like the thirteen colonies' War for Independence, which work only if a *lot* of people - i.e., 30% or 70% or whatever - sign on. Such plans require tremendous effort, and pre-existing momentum, to build up to the requisite number of people.

And there are plans like building a seed AI, which require only a finite number of people to sign on, but which benefit the whole world. The third class of plan requires only that a majority *not* get ticked off enough to shut you down, which is a more achievable goal than proselytizing a majority of the entire planet.

Plans of the third type are far less tenuous than plans of the second type.

And the fact is that a majority of the world isn't about to knock on my door and complain that I'm doing all this useless paddling instead of fishing. The fall-off-the-edge-of-the-world types might knock and complain about my *evil* paddling, but *no way* is a *majority* going to complain about my paddling instead of fishing. Certainly not here in the US, where going your own way is a well-established tradition, and most people are justifiably impressed if you spend a majority of your time doing *anything* for the public benefit.

As Brian Atkins said:

"The moral of the story, when it comes to actually having a large effect on the world: the more advanced technology you have access to, the more likely that the "lone crusader" approach makes more sense to take compared to the traditional "start a whole movement" path. Advanced technologies like AI give huge power to the individual/small org, and it is an utter waste of time (and lives per day) to miss this fact."



Brian Atkins, another Extropian, was for many years Eliezer’s patron – meaning that he was the primary source of funding for the Singularity Institute, whose primary practical function was the financial support of Eliezer Yudkowsky. (In late 2002, for personal-finance reasons, Brian decreased his support of Eliezer's fellowship, and as I write this, around the start of 2003, Eliezer is seeking alternate financing.) Brian is not an AI wizard, but he does have a quick mind, a broad knowledge base, and a good sense for future technology. Not surprisingly, he is pretty close to a “true believer” in Eliezer – not that he’s sure Eliezer’s work will save the world, but he reckons there’s a chance significantly greater than zero, and for him that’s enough to merit some investment. He and his wife also obviously have a lot of personal affection for Eliezer, and have in some sense taken him under their wing. When Eliezer began to work for the Singularity Institute, funded by Brian, he moved to Atlanta, where the Atkinses live.

My reply to Eli’s response was:

Eli… here is my sense of things, which I know is different from yours.

There's the seed AI, and then there's the "global brain" -- the network of computing and communication systems and humans that increasingly acts as a whole system.

For the seed AI to be useful to humans rather than indifferent or hostile to them, what we need in my view is NOT an artificially-rigged Friendliness goal system, but rather, an organic integration of the seed AI with the global brain.

And this, I suspect, is a plan of the second type, according to your categorization....

And the fact is that a majority of the world isn't about to knock on my door and complain that I'm doing all this useless paddling instead of fishing. The fall-off-the-edge-of-the-world types might knock and complain about my *evil* paddling, but *no way* is a *majority* going to complain about my paddling instead of fishing. Certainly not here in the US, where going your own way is a well-established tradition, and most people are justifiably impressed if you spend a majority of your time doing *anything* for the public benefit.

My belief is that one will work toward Friendly AI better if one spends a bit of one's time actually engaged in directly Friendly (compassionate, helpful) activities toward humans in real-time. This is because such activities help give one a much richer intuition for the nuances of what helping people really means.

This is an age-old philosophical dispute, of course. Your lifestyle and approach to work are what Nietzsche called "ascetic", and he railed against asceticism mercilessly while practicing it himself. I'm fairly close to an ascetic by most standards -- I spend most of my time working on abstract stuff, and otherwise I don't do all that much else aside from play with my kids -- but, yes, I admit it, I spend some of my time indulging myself in the various pleasures of the real world ... and some of my time doing stuff like teaching in my kids' schools, which is fun and useful to the kids, but doesn't use my unique talents as fully as working on AI. I think my work is the better, not the worse, for these "diversions".... But perhaps it wouldn't be so for you.... Perhaps the philosophical dispute over the merits of asceticism just comes down to individual differences in personality.

All in all, neither Eliezer nor I really convinced the other, but we did make some headway in terms of understanding each other’s points of view. I am happy that Eliezer’s altruistic attitude exists; it’s a great counterbalance to the more draconian, elitist strains of Extropian thought. And I think it is important that such things be discussed, even if the discussions are at times rambling and silly. Just because we in the deep-futurist camp don’t sympathize much with the ethical concerns of the mainstream media (is human cloning somehow intrinsically immoral – give me a break!) doesn’t mean that ethical issues are irrelevant to our thinking and our work.

Like Eliezer Yudkowsky, I believe that the Singularity can be brought about in a way that benefits everyone, or nearly everyone. I’m not sure that the path to this conclusion is as simple as the creation of a human-friendly seed AI, however. I think this is a laudable goal, but I also think it’s important to bring as much of the world as possible into the process of creating the Singularity. The Global Brain idea may be critical here. If the first real AI achieves superintelligence not locked away in a box, but rather through ongoing interaction with humans in all nations across the world, then its mind stands a good chance of being intrinsically human-focused and human-friendly as a consequence of its upbringing. It will be both a separate being, an individual AI mind, and part of a symbiotic mind of sorts involving little bits of millions of people. As it invents new technologies, it will want to invent not only technologies to make itself smarter, but also technologies to improve the human component of the symbiotic AI-human-internet mind. Having a human-friendly goal system is fine, but in any system flexible enough to be an intelligence, goals and motivations are going to shift over time. To be really meaningful and stable, a human-friendly goal system must be allowed to evolve and mature through intensive, mutually rewarding interactions with the mass of human beings, and the Global Brain path to superintelligent AI would seem to have the potential to accomplish this.

If we can carry it off properly….