We're starting to brush up against real robots, real nanotech, and maybe even the first real artificial intelligence. But will emerging technologies destroy humankind — or will humankind be saved by an emerging transhumanism?
And which answer is more liberating?
If anybody knows, it's R.U. Sirius. The former editor-in-chief of Mondo 2000 (and a Timothy Leary expert) has teamed up with Better Humans LLC to produce a new transhumanist magazine called h+. (And R.U. is also one of the head monkeys at 10 Zen Monkeys.) But can he answer this ultimate question? Terminator Salvation played with questions about where technology ends and humanity begins.
But what will we do when we're confronting the same questions in real life?
10 Zen Monkeys: Isn't this whole idea of real transhumanism kind of scary?
RU SIRIUS: Everything's scary. Human beings weren't born to be wild so much as we were born to be scared, starting on a savanna in Africa as hunter-gatherers watching out for lions and tigers and bears (oh my... Okay, maybe just lions), subjected to the random cruelties of a Darwinian planet. I would say that the transhumanist project is probably an attempt to use human ingenuity to make living in this situation as not scary as possible, and in some theories, to actually change the situation, to create a post-Darwinian era.
For instance, we explored the very rapid development of robotic technologies for warfare during the web site's Terminator Week. That's viscerally scary. Logically, it can also mean fewer civilian casualties, less harm to soldiers, and so on. And on the other hand, it can also mean less hesitation to use violence against others, or a possibly objectionable system of total control in which revolution is permanently rendered impossible. And on the other hand... I can do the "on the one hand and on the other hand" until the Singularity, or at least until the Mayan apocalypse of 2012.
But seriously, what really scares the crap out of me is that we might not make radical technological problem-solving breakthroughs — that we might stop, or that the technologies might fall short of their promises. What scares me is the idea of a 6 billion-strong species finding itself with diminishing hopes, resource scarcities, insoluble deadly pandemics, and global depression based on the delusions of abstract capital flow, resulting in increases in violence and suffering and territoriality and xenophobia.
10Z: But how does transhumanism resolve these problems? How does a bunch of rich people living longer solve any of this?
RU: Let's take this one at a time. The technological paradigm that has grown out of transhumanist or radical technological progressive circles that I'm most fond of is NBIC. Nano-Bio-Info-Cogno. The promise of nanotechnology — which has become much more tangible just in the last few months (thanks to developments we recently covered on our site) — is basic control over the structure of matter. This should eventually solve most of our scarcity problems, with the possible exception of physical space. (And there are ways we might deal with that, but I'm trying to keep it short.)
Nanotechnology, of course, has enormous potential for health, as does biotechnology. People can find these details just about anywhere, so I won't go into it. Anyway, sickness is perhaps our greatest source of misery and our greatest resource sink... particularly if you contrast sickness not just with the absence of disease but with the possibilities of maintaining a high level of vitality.
Then... information technology allows us to organize the data for distributed problem solving and — to a great degree — democratizes it. (More eyes and more brains on the problem, working with and through more intelligent machines.) IT is at the heart of all the breakthroughs and potential breakthroughs in nano and bio — and all this is leaving aside the further out projections of hyper-intelligent AIs.
You know, getting back to what's scary, I agree with Vernor Vinge that the greatest existential threat is still nuclear warfare. But next in line is the possibility of a major plague... a rapidly spreading pandemic. And already we can see that the tools for dealing with that come down to intelligent systems and biotech. There are biotech medical solutions using intelligent systems, married to global mapping, communications, and organized distribution. Human behavior has a role too, of course... but not as much as romantics might wish.
Which perhaps brings us to cogno — getting control and better use out of the brain for greater intelligence, greater happiness, less misery... hell, maybe even cheaper thrills! Why not? A lot of our problems are self-created... or they're created by particularly unstable or irrational people. As a veteran of the psychedelic culture — and as a nonconformist who is suspicious of society's tendency to be hostile towards what we might call creative madness — I find the potentials and problems of cognition a particular area of fascination. So I do have some ambiguities, but it's just a huge area of intrigue as far as I'm concerned.
Now, all of this is just the prosaic stuff, without imagining Singularities, or say hyperintelligent humans who aren't needy... happily living on converted urine and nutrient pills while entertaining one another in ever-complexifying virtual spaces. Lots of energy savings there, Bub.
10Z: President Obama is reconstituting his bio-ethics panel. Just how high are the stakes, in the here and now, regarding U.S. political policy governing future research?
RU: You know, I think the bioconservatives who dominated Bush's bio-ethics panel and opposed stem cell research were just pissing in the wind... but that stuff can hit you in the face. Really though, I think that the discourse in opposition to embryonic stem cells will some day be seen as every bit as absurd as Monty Python's "every sperm is sacred."
More broadly, I don't think the stakes are very high because I don't think you can get the federal government today to be terribly functional... and I'm not a knee-jerk anti-government guy at the level of economics or investment in research. I just think there's a certain all-American "can't do" thing going on there and there's no effective strategy for changing it.
Sometimes I think that the people who really control America — the corporate oligarchs and finance kleptocrats, the national security apparatus and so forth — realize that the Titanic has already hit the iceberg. And laughing up their sleeves they said, "Quick! Put that charismatic black guy behind the wheel!"
10Z: I'm surprised to hear that you're not a knee-jerk anti-government sort of guy. I read that you were an anarchist.
RU: I've read that too. I have an anarchistic streak, but I can't even begin to believe in it. I do think that being an anarchist is an excellent choice though, because it's never going to be tried by any large group on a highly populated planet with advanced technology. So you never have to witness or experience the consequences of your belief system being enacted. It will remain forever romantic.
On the whole, though... I should try to be diplomatic. Let's just say that anarchists and pure libertarians are the most anti-authoritarian, and I like to be anti-authoritarian. It would be more convenient and more consistent to believe in one, but I don't think ideologies work in the real world.
10Z: Let's get back to those ambiguities you mentioned. That seems like a rare trait in the community represented by h+ magazine.
RU: Hardly — though I'm probably more richly ambiguous than most other human beings; my only ideology is uncertainty. You'll see the same ambiguity if you explore transhumanist-oriented discussion groups and blogs like Michael Anissimov's Accelerating Future, or the writings of Nick Bostrom, ad infinitum. They're rife with complexity and argumentation, and with concern about existential threats, inequalities in the distribution of positive results from scientific achievement, and on and on. The reality is there's a rich and varied discourse within the techno-progressive movement, just as there is between the progressives and the bio-conservatives.
10Z: It's hard to see where longevity and immortality fit into your vision of social responsibility.
RU: First of all, I emphasized problem solving to respond to your question about fear. And in essence my answer was I'm more afraid of standing still or going backwards than I am of moving forward. But man... and woman... cannot live by social responsibility alone. (We don't go around now asking people to die so we can spare resources or whatever.)
And I think that our humor columnist Joe Quirk had the best response to people who are against hyper-longevity: "Holy crap! These people want me to die!"
Can we allow people to be the owners and operators of their own experiences and decide for themselves how to answer the Shakespearean question — to be or not to be? I think it's doable. There's a very substantive discussion from Ramez Naam in our first issue about why hyper-longevity should not create big resource problems. It has to do with demographics and the tendency of educated, comfortable people to have fewer kids, plus a fairly high percentage of inevitable deaths even if we cure aging and most illnesses.
10Z: But won't this exacerbate already extreme class distinctions? Won't we have a wealthy race of immortals and then everybody else?
RU: That's plausible, but very unlikely. And it always surprises me that that's the first thing you usually hear, since a great portion of the human species already has access to universal health care. Even left to the market, the investment that's being made in this should eventually lead to a need to sell to a large consumer market. In our first issue, we have a chart that shows billionaires who are investing in revolutionary science projects... and a few of them are investing in longevity. Well, they're going to want to take their product to market and get a big consumer share. John Sperling isn't going to be sitting in some mountain retreat rubbing his hands together and saying, "Foolish mortals, I shall use this only for myself and my beautiful blonde cyborg bride Britney!" That's the movie version, not the reality.
The reality is actually sort of comical — the wealthy are the early adopters of new technologies, but those new technologies usually don't work very well at first... they tend to fuck up. Now, I think you can imagine that as a potential movie that can satisfy everybody's need for schadenfreude.
10Z: Francis Fukuyama wrote some critiques of the transhumanist vision. In one essay he writes: "Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome." How would you respond?
RU: This gets us to the cover story on so-called designer babies in the current Summer Edition of h+ magazine. There are hugely intriguing and potentially controversial issues about enhancement in this edition. That's not only about parents pre-selecting traits for their children; there's also a portrait of Andy Miah in the issue. He's a British professor who — for all intents and purposes — is pro-sports doping.
Before I go into this, I want to take a bit of a detour. When I wake up in the morning and start working on h+, I'm not thinking "How can I spread propaganda for the glories of transhumanism?" or anything like that. I'm thinking: "How can I do a totally cool-ass website and magazine with the transhumanist idea and sensibility at the center of it?" That's my charge, and I'm approaching it as a craftsman. So I'm looking at this first as a magazine writer and editor — I want it to be accessible, exciting and fun, and I want it to look great. I want it to ride along the boundary between being a pro-transhumanist magazine and being more of a balanced and very hip generalist geek culture magazine. That, for me, is the sweet spot in this, and I think, along with other contributors, we've pretty much nailed it.
So I'm first of all an editor and writer. And secondly, I'm a curious editor and writer. This isn't necessarily all good or all bad. It's interesting. And that's how I'd hope and expect most readers would approach it.
And there's one more thing coming in a very distant third. In the context of an overarching commitment to my philosophy of uncertainty — or meta-agnosticism — I'm an advocate of the radical technological vision. I've thought long and hard about politics — and about consciousness unassisted by radical technology — and I've concluded that radical technology is the only bet that has a chance of winning not just a sufferable but a generally positive and enjoyable human future. But I'm not a stoical defender of the cause or anything like that.
So what Fukuyama proposes is interesting — that altering a few alleles to create some characteristics could iterate into monstrous or unhappy consequences further down the road. And I think that the general consensus among geneticists is that this is very unlikely with the small kinds of changes that are being discussed now (for example, selections of eye and hair color). Beyond that point, I say... let the arguments rage on! One of the assumptions among advocates is that by the time we're able to make significant incursions into germ line engineering (to affect people's intelligence or make them more or less aggressive or sexier or whatever), we'll have significantly advanced measurement and predictive tools... plus a really good understanding of what we're doing.
And there's another argument: we change stuff all the time in the "natural" evolution of human beings — and we reap both positive and negative consequences. But generally we gain more than we lose by proceeding with technological advances. There's this idea called the "proactionary principle" which came from Max More, one of the originators of transhumanism. He basically argues that we measure the potential negative consequences of a technology, but we also need to measure the negative consequences of not developing a technology. What do we lose by its absence?
Anyway, I sort of want to punt — in the specific — on the issue around choosing traits for babies. I prefer to acknowledge that it's a controversial area, but I'm excited to present the articles that are favorable towards these activities and hope they generate lots of interest and discussion.
10Z: Before I let you go, let me ask you about the politics of h+ magazine and the transhumanist movement. Ronald Bailey, who writes for the libertarian magazine Reason, criticized another transhumanist — James Hughes — who apparently advocates democratic socialism. Where do you come down on all this, and what are the politics of h+?
RU: First of all, the magazine has no explicit politics. Having said that, I think we have an implicit politics that both Ron Bailey and James Hughes agree with. It's the idea that human beings have a right to a high degree of autonomy over their minds and bodies, and that the trend towards transhuman technologies makes those rights all the more important and poignant. So human beings would have the right not just to choose their sexual preferences, or to control their birth processes, or as consenting adults to take whatever substances they like, or to eat what they like. We would also have the right to control and change our biologies, to self-enhance, to alter our bodies through surgery and on and on. So let me be oh-so-diplomatic, by emphasizing our points of agreement.
I'll give a bit of my own perspective in terms of the great late-second-millennium debate that puts an unfettered market at one end of the spectrum and communism at the other; competition at one end and cooperation at the other; decentralization at one end and centralization at the other. I'd have to say I'm horribly centrist. I'm dead center. It's not a mainstream centrism, but without going into a long explication, I'm almost embarrassingly moderate.
But while I think these arguments are still lively and vital today — and I have my own cheers and jeers over each day's political issues — from a near-futurist transhumanist perspective, the debate seems really tired. For about a decade I've been arguing that the future I see emerging is witnessed by the open source culture, Wikipedia, and file sharing. And in another decade or two the dominant economic mode will not be the market or socialism or the mixed economy that we actually have pretty much everywhere — it will be voluntary collaboration. And yes, that's kind of an anarchist view... but I'm saying it will become the dominant mode, not the only mode. (The market and the state will continue to be factors.) I hear Kevin Kelly just figured this out. :)... although his use of loaded words like socialism and collectivism is somewhat unfortunate.
People sometimes wonder how wealth will get distributed in a future economy that will likely require close to zero human participation yet still presumably requires people to hustle up some proof of value. But I think there's a good chance that an advanced "file-sharing" culture hooked up to advanced production nanotechnology will render the question moot.
Free lunch for everybody!
See Also:
Latest issue of h+ magazine
Read the first issue
R.U. Sirius on "Terminator/Robot Week"
"Is the Future Cancelled?" Spring 2009 Edition
HPlus Magazine's main site
R.U. Sirius's editor's blog