The Singularity!


For some time now, futurists have been talking about a concept called the Singularity, a technological jump so big that society will be transformed. If they’re right, the Industrial Revolution, or even the development of agriculture or the harnessing of fire, might seem like a minor historical hiccup by comparison. The possibility now seems realistic enough that scientists and engineers are grappling with the implications, for good and ill.

When I spoke to technology pioneer and futurist Ray Kurzweil (who popularized the idea in his book The Singularity Is Near), he put it this way: “Within a quarter-century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it.”

Even before we reach that point, Kurzweil and his peers foresee breathtaking advances. Scientists in Israel have developed tiny robots that crawl through blood vessels to attack cancers, and labs in the United States are working on similar technology. These robots will grow smaller and more capable. One day, intelligent nanorobots may be integrated into our bodies to clear arteries and rebuild failing organs, communicating with each other and the outside world via a “cloud” network. Tiny bots might attach themselves to neurons in the brain and add their processing power, and that of other computers in the cloud, to ours, giving us mental resources that would dwarf anything available now. By stimulating the optic, auditory or tactile nerves, such nanobots might be able to simulate vision, hearing or touch, providing “augmented reality” overlays that identify street names, help with face recognition or tell us how to repair things we’ve never seen before.

Scientists in Japan are already producing rudimentary nanobot “brains.” Could it take decades for these technologies to come to fruition? Yes, but only decades, not centuries. The result may be what Kurzweil calls “an intimate merger between the technology-creating species and the technological evolutionary process it spawned.”

If scientists can integrate tiny robots into the human body, then they can build tiny robots into, well, everything, ushering in an era of “smart matter.” Nanobots may be able to build products molecule by molecule, making the material world look a lot like the computer world, with just about everything becoming smart, cheap and networked to pretty much everything else, including your brain.

It’s almost impossibly futuristic-sounding stuff. But even that scenario is just the precursor to the Singularity itself, the moment when, in Kurzweil’s words, “nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle.” Imagine computers so advanced that they can design and build new, even better computers, with subsequent generations emerging so quickly they soon leave human engineers the equivalent of centuries behind. That’s the Singularity, and given the exponential acceleration of technological change, it could come by midcentury.
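To get a rough feel for what “exponential acceleration” implies, here is a minimal back-of-the-envelope sketch in Python. The doubling period and the dates are my own illustrative assumptions (in the spirit of Moore’s law), not figures from Kurzweil:

    # Back-of-the-envelope sketch of compounding technological capability.
    # Assumption (mine, not the article's): capability doubles every two years.
    DOUBLING_PERIOD_YEARS = 2
    START_YEAR, END_YEAR = 2010, 2050

    doublings = (END_YEAR - START_YEAR) / DOUBLING_PERIOD_YEARS  # 20 doublings
    growth = 2 ** doublings  # roughly a million-fold increase

    print(f"{doublings:.0f} doublings between {START_YEAR} and {END_YEAR} "
          f"-> about {growth:,.0f}x more raw capability")

Whether or not that particular doubling period holds, compounding on that scale is what makes a midcentury date sound less far-fetched than it otherwise would.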


But Is It for Real?

It seems like a tall order, but lots of people think that such predictions are likely to come true. I asked science-fiction writer John Scalzi about Singularity issues and he pointed out that the Skype video we were using to chat would have seemed like witchcraft a few centuries earlier. Profound technological changes once took millennia, then centuries, and then decades. Now they occur every few years. The iPhone and pocket-size 12-megapixel digital cameras would have seemed amazing a decade ago. Web browsers are only about 15 years old. People (including my wife) have computers implanted in their bodies already, in the form of defibrillators, pacemakers and other devices.

Still, I’m describing a world in which nanotechnology makes us (nearly) immortal, in which robots can make almost any object from cheap raw materials (basically, dirt) and in which ordinary people are smarter than Einstein thanks to brain implants, but still nowhere near as smart as fully artificial intelligences. That’s a world that’s hard to imagine. And what we do imagine can sound either good or bad. On the upside, what’s not to like about being super-smart and healthy, with access to most products essentially for free? On the downside, could always-on links from our brains to the computing cloud lead to Star Trek’s über-totalitarian Borg collective or something equally scary? And what happens to those computer-brain interfaces and nanobots when they’re taken over by the descendants of the Conficker worm? Now there’s an argument for strong antivirus software.

Dramatically enhancing human capabilities for good, alas, also means enhancing human capabilities for evil. That’s something famed computer science professor and writer Vernor Vinge warns about: technology that could, as he wrote in his novel Rainbows End, “put world-killer weapons into the hands of anyone having a bad-hair day.” Then there’s the mind-control problem. Nanorobots floating around in your bloodstream could keep your coronary arteries from clogging, but they also could release drugs on command, making you, say, literally love Big Brother. Knowing what we know about human history, do such abuses seem terribly unlikely?

Of course, the problem may never come up. Vinge, who originated the Singularity idea, has written about why it may never arrive-though he’s betting the other way. So what can we do now to affect how things turn out? Some people are trying. The Foresight Institute has published guidelines for developing nanotechnology, such as a ban on self-replicating nanobots that function independently (potentially turning the whole world into more nanobots, something known in the trade as the gray-goo problem) and sharp limitations on weapons-related nanotech research. Researchers in artificial intelligence are working on guidelines for producing “friendly AI” that would be well-disposed toward humans as part of their programming, thus foreclosing any pesky robotic world-domination ambitions. NASA, Google and others have even started something called the Singularity University to study ways to avoid problems while still reaping the benefits. Some have suggested that we ought to go slow on the so-called GRAIN technologies (Genetics, Robotics, Artificial Intelligence and Nanotechnology). Sun Microsystems’ Bill Joy has even called for “relinquishing” some technologies he sees as dangerous.

But I wonder if that’s such a good idea. Destructive technologies generally seem to come along sooner than constructive ones: we got war rockets before missile interceptors, and biological warfare before antibiotics. This suggests that there will be a window of vulnerability between the time when we develop technologies that can do dangerous things and the time when we can protect against those dangers. The slower we move, the longer that window may remain open, leaving more time for the evil, the unscrupulous or the careless to wreak havoc. My conclusion? Faster, please.
