We make most everything by tearing stuff apart. To make paper, we plant trees, chop them down, and send the wood through our mills. To make spoons, we yank iron up out of the earth, drop it into blast furnaces to make steel, and then mold and shape it at extreme temperatures.
But what if we could work from the bottom up and construct paper from atoms, the smallest building blocks of life and matter? What if we could make a spoon by taking individual iron atoms and locking them together one by one with carbon atoms and whatever else we wanted? It’d sure be easier and cleaner. We could throw away those miners’ helmets, plant geraniums in our blast furnaces, and create almost anything out of our trash.
The idea has been percolating since 1959, when Richard Feynman, one of the century’s most admired physicists, gave a speech titled “There’s Plenty of Room at the Bottom” in which he argued: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom.” Of course, saying something is possible doesn’t mean it’s practical, and for years afterward the vision wavered, attracting more attention from science-fiction philosophers than actual scientists. Eric Drexler, the man who first laid out the complex potential of nanotechnology for a popular audience with his 1986 book Engines of Creation, was given only slightly more credit by the scientific community than the average street-corner chemist explaining how to fax your brain to Neptune.
But in the last few years, scientists have come up with one important innovation after another in nanotechnology (the term conveys the size of the objects to be manipulated: a nanometer is one billionth of a meter), and science fiction has gradually been rolling into science fact. New articles appear almost monthly in the leading scientific journals, and university research departments are starting to fill with enthusiasts. Chemist Paul Alivisatos at the University of California, Berkeley, says that, 10 years ago, he was the only one in his department working on nanotech; now about 30 percent of the department does. In his 2000 State of the Union address, President Clinton asked us to imagine some of the possibilities: “materials with ten times the strength of steel and only a small fraction of the weight, shrinking all of the information housed in the Library of Congress into a device the size of a sugar cube, detecting cancerous tumors when they are only a few cells in size.” This year, you can buy sunscreen whose development owes a crucial debt to nanotechnology; IBM is using nanotech processes to produce read heads (part of a computer’s hard drive); and samples of hybrid materials stronger than steel can be purchased online for $1,000 a gram.
It seems great, and a lot of it truly is. But we’re also pointing our raft down some pretty wild, unmapped waters. Nanotechnology is like biotechnology and genetic engineering on steroids. Together with almost magical possibilities, it portends brutal military applications and dystopic scenarios in which parts of the world turn into “gray goo,” and it moves us one giant step closer to playing with the very basis of life. We can’t and shouldn’t turn the clock backward, but there remain crucial steps that we need to take, and there is a critical role for government in the development of this technology. Clinton’s proposed National Nanotechnology Initiative should pass Congress in some form this fall; but there needs to be more. Deep government involvement in nanotechnology is more than a practical obligation from a research and national defense perspective. It’s close to becoming a moral imperative.
Nature has created wonderful things, but there are inefficiencies everywhere. Arrange carbon atoms one way and you’ll get diamond: a strong substance but not a terribly flexible one. Rearrange them and you’ll get rubber bands: flexible, but weak and unable to conduct electricity. There’s also another way to fit them together that nature hasn’t figured out on its own—but that scientists in a lab at Rice University have. Richard Smalley, a Nobel Prize-winning chemist there, has used nanotechnology to create molecules, called nanotubes, that are stronger than any known material yet still extremely flexible. If designed to be straight, these nanotubes conduct electricity better than gold; if given a slight twist, they can serve as transistors.
It is currently much too complicated and expensive to use these nanotubes to build original structures from the ground up, but we’re not far from being able to mix them in with, for example, the materials currently used to make airplane wings. In 10 or 15 years, we may well be able to use something like Smalley’s molecules to make entire airplane wings, or large-scale power systems based on molecular solar panels that take up only a tiny fraction of the space needed for such systems today.
On roughly the same timeline, there is strong hope that molecular manipulation could power computers. Computers work on a binary system, meaning that every problem is translated into a series of zeros and ones rapidly added together. (A computer is like a small child who counts on her thumbs, but does it at something approaching light speed.) Modern computers add these basic alternatives together through complicated constructions based on tiny transistors etched into silicon that either carry electrons (one) or don’t (zero). Programs translate whatever you input (the letters you type, for example) into this code of ones and zeros and send the data back to the transistors to be processed.
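To make that translation concrete, here is a toy Python sketch (ours, purely illustrative, with no connection to the research described below) showing how a word becomes the ones and zeros those transistors shuttle around:

    # A toy illustration of binary encoding: each character you type is
    # stored as a pattern of ones (current flows) and zeros (it doesn't).
    # Nothing nanotech-specific here, just ordinary 8-bit character codes.
    for char in "nano":
        bits = format(ord(char), "08b")  # the character's 8-bit binary code
        print(char, "->", bits)

    # Prints:
    # n -> 01101110
    # a -> 01100001
    # n -> 01101110
    # o -> 01101111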
Last summer, researchers at Hewlett-Packard and UCLA led by James Heath announced that they had built a single molecular layer that could serve as a switch, though it could carry out only a single operation, allowing electrons to flow just once. A few months later, researchers led by James Tour of Rice and Mark Reed of Yale announced that they had built one that could work repeatedly and, a month later, announced that they had built another that could also store its electrons after calculation (serving as memory). Size is perhaps the main limitation in computer design, and if one of the transistors that we etch onto silicon to power our modern computers were the size of this piece of paper, a molecular switch would be about the size of the period at the end of this sentence.
There are, of course, many major steps that must be taken before molecular computers sweep the world and we all embed them in our shoes. Long chains of molecules have to be built and integrated with the rest of a computer’s functions—faster processors are unquestionably great, but who wants a monitor the size of a fruit fly? Assembly lines and production factories would have to be turned inside out, and, like all technologies, this one would take a while to spread through society, if it ever did. Still, the advantages of nanotechnology are so extraordinary that there’s substantial reason to push ahead even if nothing’s inevitable. According to Neal Lane, chief scientific adviser to the president, developments in the field “are likely to change the way almost everything—from vaccines, to computers, to automobile tires, to objects not yet imagined—is designed and made.”
Individual companies, however, seem unlikely to start vigorously building these molecular computers or digging into nanotechnology’s other potential uses. We’re at a scientific halfway point. We know that the technology is plausible, and even likely; but we haven’t advanced enough for private industry to commit money to more than sporadic or targeted research. Scientists still have to trudge pretty deep into the jungle, and they are bound to step in quicksand along the way. As a result, companies seem likely to support niche research—Nanophase, the company making sunscreen, is researching nanocrystals; Hewlett-Packard is exploring how nanotechnology can improve computer memory—but funding for research into the big questions seems more likely to come from public sources. According to Alivisatos: “The government has got to be a major player. It won’t just happen in industry. You can’t say that this technology is going to make you $400 million next quarter.”
Nanotechnology also involves vast overlaps between traditionally defined fields. Every breakthrough by physicists pushes the chemistry a little further, which pushes the molecular biology, and so on. But the many arms of research haven’t cut a single path forward. To progress, there needs to be constant information sharing—an unlikely prospect among companies each hoping to patent the ultimate technique. In fact, much of the research that has already led to important advances in the field has been government-funded. James Tour and Mark Reed, for example, completed much of their research into molecular computing with support from DARPA, the Defense Advanced Research Projects Agency.
A good parallel comes from another project initially funded by DARPA: the Internet. Private industry has spurred incredible Internet development, but government took the first steps. The Arpanet, the backbone for what would eventually be transformed into the Internet, was designed in the late ’60s to facilitate communication during a nuclear war. The same pattern holds for invention after invention. Global positioning satellites were first created for military use and are now used by everyone from lost hikers to UPS. Government helped spur along the laser technology that we now use to etch circuits into silicon and correct vision.
The problem for scientists, and the principal reason for the murkiness of the road ahead, lies in how complicated the science gets at this scale. In large groups, atoms move predictably; left alone, individual atoms tend to skid around randomly in “Brownian motion.” Similar problems arise from quantum effects, such as the ability of a single electron to manifest in two places at the same time. According to Martin Moskovits of the University of Toronto, a machine trying to handle individual atoms could easily become confused and place them incorrectly “like a poorly handled marionette that places a forkful of food in its eye.” Other scientists describe what they call the “fat fingers” problem: any instrument that picks up and moves atoms is itself built from atoms, and so can never be much finer than the things it handles. Individual atoms also react to tiny increments of heat and motion. When researchers were first able to push them around with a device called a scanning tunneling microscope, the experiment was carried out in a vacuum at a temperature approaching absolute zero (-459 degrees Fahrenheit).
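To see why Brownian motion worries the skeptics, consider a crude random-walk simulation (our toy model, not anything from the researchers quoted above): an atom that takes one random unit step each time it is jostled ends up well away from where it started.

    import random

    # A crude one-dimensional model of Brownian motion: an atom buffeted
    # by random thermal kicks drifts away from wherever a nanomachine
    # puts it. Unit steps, no real physics constants; a sketch only.
    position = 0
    for _ in range(10_000):
        position += random.choice((-1, 1))  # one random thermal kick

    print("Net drift after 10,000 kicks:", position)
    # Typically on the order of +/-100 (roughly the square root of 10,000):
    # the atom wanders, and a nanomachine has to fight that drift.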
Proponents of nanotechnology invariably respond to these concerns with what they call the existence proof: Our bodies, and the bodies of sea anemones or plants for that matter, are essentially nanomachines manipulating single atoms. Our whole body works through a complicated system whereby DNA tells RNA to go and tell ribosomes to make the proteins that, in turn, create muscles, hormones, and everything else that keeps us running. If it’s impossible to move atoms around, how do humans do it? It sounds preposterous to even think that, as Eric Drexler described in the 1970s, we could create a machine that could rearrange the atoms in a bale of hay, a few shrubs, and some water, and make a slab of beef. But that isn’t so far away from what a cow already does—even if it does it in a way that we are miles from fully understanding.
Given recent advances and our rapidly expanding ability to understand how the body works and how we can mimic it in a laboratory, the optimists who say nanotechnology will eventually have a major impact are probably closer to right than the pessimists focused on Brownian motion. There have, of course, been plenty of forecast scientific revolutions that flopped or floundered—think of how much hype there was about robots 20 years ago. But as Smalley says, “when a scientist says something is possible, they’re probably underestimating how long it will take. But if they say it’s impossible, they’re probably wrong.”
Beyond funding, the government has an imperative to build an architecture to shape how nanotech develops. Although the government essentially invented the Internet, once the Net began to boil over into a societal force, the government kept away and let the technological wonks whose minds were absorbed by their modems essentially choose its regulatory scheme: nothing. Government doesn’t tax Web commerce, and it has a terribly hard time blocking child pornography. There are few ways to thwart viruses like “Melissa” or “I Love You.” With genetic engineering, regulations were crafted during the Reagan administration by corporations fascinated with the new technology and with economic incentives to make the market grow as quickly as possible. The result was a decision to mostly avoid new legislation and wrap existing laws around the new technology. That was good for the industry, but not great for safety.
Nanotechnology has one foot planted in academia, but another firmly planted in the swaggering techno-libertarian culture of Silicon Valley. Nanotech is seen as one of the triumphant gateways into computing’s future, and the Valley crowd makes it a hot discussion topic on favored technology Web sites like Slashdot and in magazines like Wired. Consequently, there’s a real danger that nanotechnology regulation will continue to swing further and further toward a world where the next gadget is the future and the government is the past. If, as the saying goes, war is too serious to be left to the generals, then nanotechnology is far too serious to be left just to scientists and the new technology firebrands. As former Wired editor Paulina Borsook writes in her new book Cyberselfish, “The Silicon Valley worldview contains within it all different colors of free-market/antiregulation/social Darwinistic/aphilanthropic/guerrilla/neo-pseudo-biological/atomistic threads.” Richard Hayes, coordinator of the Exploratory Initiative on the New Human Genetic Technologies, puts it more succinctly: “Silicon Valley types suffer from a sort of arrested development. They are enamored of their toys and just don’t see the dangers.”
So far, the more sober proponents of nanotechnology have been able to deflect the wild-eyed 20-somethings dreaming of unregulated billions. The discipline’s political center is the Foresight Institute, an organization founded by Eric Drexler to plan for a future that nanotechnology will help shape. Foresight’s directors are emphatic about the potential dangers, and they have been publishing papers for years about the possible consequences of nanotechnology and strategies to avoid them. Still, even at Foresight, there is an overriding optimism that, at the least, needs balancing when real decisions are made. Asked about concerns voiced by very serious scientists, such as the possibility that Brownian motion could undercut the dependability of nanomachines, Christine Peterson, president of the institute, grew exasperated, as though the debate had been settled beyond discussion for everyone: “I am ashamed that people will say these things. It has been dealt with in the literature. There is exhaustive research into this!”
The stakes are very high. All too many technological advances make possible more powerful weapons: from slingshots to sharper blades to hydrogen bombs to nasty new toxins. But nanotech has an even larger downside than most, both in the scale of potential disasters and in the opportunity for individuals to create them.
The main problem stems from the very real possibility of nanoassemblers. It would be impossibly time-consuming to make a piece of paper by hand from the ground up, bonding carbon atoms to one another. But what if we could build little robots (nanobots) to do it? And nanobots to build more little robots? Then we would start with one nanobot, soon have two, then four, eight, sixteen, and so on. If they reproduced every hour, you could fire one up on Friday afternoon and, because of the wonders of exponential growth, come back on Monday morning (some 64 doublings later) to find 18 quintillion. Drink a cup of coffee, check your email, and before you knew it they’d have doubled again. This wouldn’t be a bad thing if they were doing something like rendering nuclear waste harmless and there were some limit to their reproduction; it wouldn’t be so good if all 18 quintillion were chewing up the local foliage.
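The arithmetic is easy to check; here is a minimal Python sketch, assuming one doubling per hour and 64 hours between Friday afternoon and Monday morning (say, 5 p.m. to 9 a.m.):

    # One nanobot doubling every hour over a 64-hour weekend.
    hours = 64
    population = 1  # a single nanobot on Friday afternoon
    for _ in range(hours):
        population *= 2  # every nanobot builds one copy of itself each hour

    print(f"After {hours} hours: {population:,}")
    # After 64 hours: 18,446,744,073,709,551,616 -- about 18 quintillion,
    # matching the figure above. One more hour doubles it again.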
In a recent essay quickly seized on by NPR, The New York Times, and The Washington Post—along with, of course, Slashdot—Bill Joy, one of the founders of Sun Microsystems and a co-developer of the Java programming language, issued a dire warning. “A bomb is blown up only once—but one nanobot can become many, and quickly get out of control … Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction, this destructiveness hugely amplified by the power of self-replication. I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil.” To Joy, we should simply stop developing technologies so readily turned to evil or unforeseeable disaster: “The only realistic alternative I see is relinquishment.”
There’s a gaping hole in Joy’s proposed strategy, however: It’s impossible. We have never relinquished scientific advancement, even in the easiest cases, when research is extremely centralized, the dangers are clear, and the direct human benefits are few—as in nuclear development. With a dispersed technology like nanotech, it’s also impractical: eventually, it will be possible for scientists around the world to labor away in small hidden labs designing toxins that can reproduce themselves. If all of the goodhearted people in the world relinquished nanotechnology and the next Unabomber turned out to be a molecular biologist, not a mathematician, it might be time to dig Dr. Strangelove’s tunnels into the center of the earth. Furthermore, even if relinquishment were practical, there’s too much potential benefit. Nanotechnology might well allow us to get most of our energy from non-polluting sources and restore virtual vision to the blind and virtual hearing to the deaf. Mihail Roco, chair of the interagency group that advised the president on nanotech, predicts that, within 10 years, half of all pharmaceuticals will be created using nanotech. Is this something we want to relinquish?
The logical solution is controlled development. The United States needs to push the science forward, but we also need to lasso a thick rope around its neck. We need to make sure that, as much as possible, the main research bases for this technology develop either on our own soil or with close allies, and we need to support much of the early research so it can be closely tied into government regulation. The most obvious danger would come if the United States fell behind the rest of the world and found itself unable to control the technology. The genie isn’t close to out of the bottle yet—we are at least 10 years away from functional nanotechnology in our most advanced labs with some of the best scientists in the world; a small lab in Afghanistan is a great deal further away—but we’ve gotten off to a slow start. As Roco says, “this is the first major technology in 50 years where the United States does not have a commanding lead internationally.”
This is doubly important because nanotechnology, unlike nuclear weapons, can’t just be used for offensive purposes. The best way to fend off a hostile nanotech attack would be through counter-nanotechnology. If some evil scientist really does want to create the dystopia imagined by Bill Joy, he probably couldn’t be stopped by ICBMs; it would take predesigned nanotechnology safety systems, built in conjunction with other nations, that could reverse whatever damage was being done. The design of such world-based “active shields,” as nano-enthusiasts call them, is a topic for discussion far down the road. We have no idea how the technology will develop, and Bill Joy’s disaster scenarios are a little far-fetched: nano-chemist Susan Sinnott says she’s “as scared of that as the possibility that my laptop is going to jump off my desk in a minute and eat me.” But there are still numerous military implications, and we do need to start laying the groundwork for international cooperation and setting the terms of discussion, ideally by pushing the technology as hard as we reasonably can.
A second, more likely problem is that some private company, smelling profit, will cut a corner. Ralph Merkle, a leading scientist who recently left a research position at Xerox PARC to work for Zyvex, the first start-up trying to build nanoassemblers, doesn’t think that’s likely. For one, he argues, corporations want goodwill and probably would support any proposed guidelines. Zyvex, for example, has pledged to support guidelines proposed by Eric Drexler’s Foresight Institute that, among other provisions, require that nanoassemblers rely on broadcast transmissions or on fuel sources that don’t exist in nature, thus preventing them from running rampant. Furthermore, according to Merkle: “The corporation actually has an incentive to manufacture products but keep its replicators in-house. It’s not going to give away something that someone else can use to make the same products. It will want a system that it can create things with and sell.”
There’s certainly something to this, but it’s not entirely convincing in a world where corporations routinely dump effluent into the drinking-water supply or, to take a more recent example, conceal the fact that their automobile tires are likely to explode on rounding a sharp corner. Moreover, it ignores the fact that there are always corporations on the margins fighting for their lives, willing to sell out their final values for a nickel or an eighth-of-a-point stock price increase. If everyone in the business were making lots of money and diligently following every rule, everybody else would pile in to start new companies—and eventually, someone would end up on the margin. Yes, the private sector will be needed to move the technology, but there are going to have to be significant national and international constraints and oversight. There’s something a little unnerving in the explanation James Von Ehr, the president of Zyvex, gives for his company’s advantage over government research: “In the private sector we don’t need to justify our work to anyone.”
What’s needed is a government that can move and adapt quickly and that can inject some humility into nanotech’s development: Fairly soon, companies are going to be trying to make things by next Thursday on the atomic level that natural evolution, in all its wisdom, hasn’t made in four billion years. In short, the government needs to ensure that what ends up being good for Zyvex is good for America too.
Fortunately, there is some support in both major parties for scientific funding, necessarily tied to regulation. Moreover, with the amazing completion of the human genome project, there’s a spreading recognition that scientific forays based on understanding and mimicking living systems could be as important to the next 40 years as science on the nuclear scale was to the last 40. President Clinton proposed a $500 million nanotech funding initiative that targets five percent of its funding toward ethical research that could begin to create the framework for regulation. Al Gore, if elected, would almost certainly continue this trend: He was the first member of Congress to hold hearings on the subject, questioning Drexler among others in 1992 on nanotechnology’s potential to solve environmental problems. Ideally this would not be a partisan issue, but even with broad support, there’d still be a long way to go. The Department of Defense’s nanotech budget, even with Clinton’s proposed increase, would still be only $110 million, about the cost of running this summer’s single failed test of our Star Wars missile defense.
It’s possible that nanotechnology will go nowhere and the carcass of the idea will be dropped off somewhere in the vast pile of potential scientific revolutions that did not revolve. But that’s not a risk we should want to take. As Merkle says of a world where nanotechnology amounts to even half of what many people think it will: “If you’ve relinquished it, then you’re hosed.”