Saturday, April 11, 2009

“Are You Living in a Computer Simulation?”

SOURCE

No longer relegated to the domain of science fiction or the ravings of street corner lunatics, the “simulation argument” has increasingly become a serious theory amongst academics, one that has been best articulated by philosopher Nick Bostrom.
In his seminal paper “Are You Living in a Computer Simulation?” Bostrom starts from the assumption of substrate-independence, the idea that mental states can be realized on many types of physical substrate, including digital ones. He speculates that a computer running a suitable program could in fact be conscious. He also argues that future civilizations will very likely be able to pull off this trick, and that many of the technologies required to do so have already been shown to be compatible with known physical laws and engineering constraints.
Harnessing computational power

Similar to futurists Ray Kurzweil and Vernor Vinge, Bostrom believes that enormous amounts of computing power will be available in the future. Moore’s Law, which describes an eerily regular exponential increase in processing power, is showing no signs of waning, nor is it obvious that it ever will.

To build these kinds of simulations, a posthuman civilization would have to embark upon computational megaprojects. As Bostrom notes, determining an upper bound for computational power is difficult, but a number of thinkers have given it a shot. Eric Drexler has outlined a design for a system the size of a sugar cube that would perform 10^21 instructions per second. Robert Bradbury gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits; such a machine would likely be a quantum computer, or a computer built out of nuclear matter or plasma. More radically, John Barrow has demonstrated that, under a very strict set of cosmological conditions, indefinite information processing can exist in an ever-expanding universe.

At any rate, this extreme level of computational power is astounding, and it defies human comprehension. It’s like imagining a universe within a universe, and that’s precisely how it may be used.

Worlds within worlds

“Let us suppose for a moment that these predictions are correct,” writes Bostrom. “One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears.” And because their computers would be so powerful, notes Bostrom, they could run many such simulations.

This observation, that there could be many simulations, led Bostrom to a fascinating conclusion. It’s conceivable, he argues, that the vast majority of minds like ours belong not to the original species but rather to people simulated by the advanced descendants of the original species. If this were the case, “we would be rational to think that we are likely among the simulated minds rather than among the original biological ones.”

Moreover, there is also the possibility that simulated civilizations may become posthuman themselves. Bostrom writes:

"They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration...we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings".

Given this matrioshkan possibility, “real” minds should be vastly outnumbered by simulated minds across all of existence. Anyone insisting that we are not living in a simulation must therefore contend with this apparent gross improbability.

Again, all this presupposes that civilizations are capable of surviving to the point where it’s possible to run simulations of forebears, and that our descendants desire to do so. But as noted above, there doesn’t seem to be any reason to preclude such a technological feat.
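The counting argument can be made concrete with a back-of-the-envelope calculation. The numbers below are placeholders rather than estimates from Bostrom’s paper: suppose each unsimulated civilization that reaches posthumanity runs some number of ancestor-simulations, each containing roughly as many minds as its own history did.

```python
# A toy version of the counting argument: if each unsimulated civilization
# runs many ancestor-simulations of comparable population, what fraction
# of all minds are simulated? The inputs are illustrative placeholders.

def fraction_simulated(sims_per_real_civ, minds_per_history=1.0):
    """Fraction of all minds living inside a simulation, assuming each real
    civilization hosts `sims_per_real_civ` simulated histories, each with
    `minds_per_history` times as many minds as the real history."""
    simulated = sims_per_real_civ * minds_per_history
    return simulated / (simulated + 1.0)

for n in (1, 10, 1000, 10**6):
    print(f"{n:>8} simulations per real civilization -> "
          f"{fraction_simulated(n):.6f} of minds are simulated")

# Even modest numbers push the fraction close to 1, which is the force of
# the "we are probably among the simulated minds" conclusion, conditional
# on such simulations being run at all.
```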
Next: Kurzweil’s nano neural nets.

George Dvorsky serves on the Board of Directors for the Institute for Ethics and Emerging Technologies. George is the Director of Operations for Commune Media, an advertising and marketing firm that specializes in marketing science. George produces Sentient Developments blog and podcast.

Friday, April 10, 2009

Quantum Computers Will Require Complex Software To Manage Errors

ScienceDaily (Apr. 9, 2009) — Highlighting another challenge to the development of quantum computers, theorists at the National Institute of Standards and Technology (NIST) have shown that a type of software operation, proposed as a solution to fundamental problems with the computers’ hardware, will not function as some designers had hoped.
Quantum computers—if they can ever be realized—will employ effects associated with atomic physics to solve otherwise intractable problems. But the NIST team has proved that the software in question, widely studied due to its simplicity and robustness to noise, is insufficient for performing arbitrary computations. This means that any software the computers use will have to employ far more complex and resource-intensive solutions to ensure the devices function effectively.
Unlike a conventional computer’s binary on-off switches, the building blocks of quantum computers, known as quantum bits, or “qubits,” have the mind-bending ability to exist in both “on” and “off” states simultaneously due to the so-called “superposition” principle of quantum physics. Once harnessed, the superposition principle should allow quantum computers to extract patterns from the possible outputs of a huge number of computations without actually performing all of them. This ability to extract overall patterns makes the devices potentially valuable for tasks such as codebreaking.
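As a rough illustration of superposition, the sketch below classically simulates a single qubit with NumPy: a Hadamard gate puts the |0⟩ state into an equal superposition, and measurement outcomes are then sampled from the resulting probabilities. This is a classical simulation for intuition only; it does not capture how a quantum processor gains its computational advantage.

```python
# Classical simulation of a single qubit: a state vector of two complex
# amplitudes. A Hadamard gate puts |0> into an equal superposition of
# |0> and |1>; measuring then yields each outcome with probability 0.5.
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)                   # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                           # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2         # Born rule: |amplitude|^2

rng = np.random.default_rng(seed=0)
samples = rng.choice([0, 1], size=10_000, p=probabilities)
print("P(0), P(1) =", probabilities)                       # [0.5, 0.5]
print("measured frequencies:", np.bincount(samples) / len(samples))
```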
One issue, though, is that prototype quantum processors are prone to errors caused, for example, by noise from stray electric or magnetic fields. Conventional computers can guard against errors using techniques such as repetition, where the information in each bit is copied several times and the copies are checked against one another as the calculation proceeds. But this sort of redundancy is impossible in a quantum computer, where the laws of the quantum world forbid such information cloning.
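The classical redundancy described above can be shown in a few lines of Python: each bit is copied three times, a noisy channel may flip some copies, and a majority vote recovers the original value as long as no more than one copy is corrupted. This sketches the classical idea only; quantum codes must protect information without cloning it.

```python
# Classical repetition code: copy each bit three times and recover it by
# majority vote. A single flipped copy is corrected; two or more are not.
import random

def encode(bit):
    return [bit, bit, bit]                 # three redundant copies

def noisy_channel(copies, flip_prob=0.1):
    return [b ^ (random.random() < flip_prob) for b in copies]

def decode(copies):
    return int(sum(copies) >= 2)           # majority vote

random.seed(1)
errors = 0
trials = 100_000
for _ in range(trials):
    sent = random.randint(0, 1)
    received = decode(noisy_channel(encode(sent)))
    errors += (received != sent)
print(f"residual error rate: {errors / trials:.4f}")  # well below the 10% raw flip rate
```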
To improve the efficiency of error correction, researchers are designing quantum computing architectures so as to limit the spread of errors. One of the simplest and most effective ways of ensuring this is by creating software that never permits qubits to interact if their errors might compound one another. Quantum software operations with this property are called “transversal encoded quantum gates.” NIST information theorist Bryan Eastin describes these gates as a solution both simple to employ and resistant to the noise of error-prone quantum processors. But the NIST team has proved mathematically that transversal gates cannot be used exclusively, meaning that more complex solutions for error management and correction must be employed.
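To see schematically why qubit-wise gates limit the spread of errors (this is a generic two-qubit illustration, not the encoded-gate construction analyzed in the NIST paper), note that a transversal gate is a tensor product of single-qubit gates, so a fault on one qubit stays on that qubit when pushed through the gate. An entangling gate such as a CNOT, by contrast, can copy a single-qubit error onto a second qubit.

```python
# Schematic two-qubit illustration: a gate applied qubit-wise (a tensor
# product of single-qubit gates) keeps a single-qubit X error confined to
# one qubit, while an entangling CNOT copies the error onto a second qubit.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

qubitwise_gate = np.kron(X, X)          # acts on each qubit independently
error_on_qubit_0 = np.kron(X, I)        # a fault on the first qubit only

def propagate(gate, error):
    """How the error looks after commuting it through the gate: G E G^dagger."""
    return gate @ error @ gate.conj().T

after_qubitwise = propagate(qubitwise_gate, error_on_qubit_0)
after_cnot = propagate(CNOT, error_on_qubit_0)

print("still a single-qubit error after the qubit-wise gate:",
      np.allclose(after_qubitwise, np.kron(X, I)))          # True
print("spread to both qubits after the CNOT:",
      np.allclose(after_cnot, np.kron(X, X)))               # True
```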
Eastin says their result does not represent a setback to quantum computer development because researchers, unable to figure out how to employ transversal gates universally, have already developed other techniques for dealing with errors. “The findings could actually help move designers on to greener pastures,” he says. “There are some avenues of exploration that are less tempting now.”
Journal reference:
Eastin et al. Restrictions on Transversal Encoded Quantum Gate Sets. Physical Review Letters, 2009; 102 (11): 110502 DOI: 10.1103/PhysRevLett.102.110502
Adapted from materials provided by National Institute of Standards and Technology.