Friday, 15 May 2009

Is a room-temperature, solid-state quantum computer mere fantasy?

Marshall Stoneham, London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Published April 27, 2009
Creating a practical solid-state quantum computer is seriously hard. Getting such a computer to operate at room temperature is even more challenging. Is such a quantum computer possible at all? If so, which schemes might have a chance of success?

In his 2008 Newton Medal talk, Anton Zeilinger of the University of Vienna said: “We have to find ways to build quantum computers in the solid state at room temperature—that’s the challenge.” [1] This challenge spawns further challenges: Why do we need a quantum computer anyway? What would constitute a quantum computer? Why does the solid state seem essential? And would a cooled system, perhaps with thermoelectric cooling, be good enough?
Some will say the answer is obvious. But their answers vary from “It’s been done already” to “It can’t be done at all.” Some of the “not at all” group believe high temperatures just don’t agree with quantum mechanics. Others recognize that their favored systems cannot work at room temperature. Some scientists doubt that serious quantum computing is possible anyway. Are there methods that might just be able to meet Zeilinger’s challenge?
The questions that challenge
What is a computer? Standard classical computers use bits for encoding numbers, and the bits are manipulated by the classical gates that can execute AND and OR operations, for example. A classical bit has a value of 0 or 1, according to whether some small subunit is electrically charged or uncharged. Other forms are possible: the bits for a classical spintronic computer might be spins along or opposite to a magnetic field. Even the most modest computers on sale today incorporate complex networks of a few types of gates to control huge numbers of bits. If there are so few bits that you can count them on your fingers, it can’t seriously be considered a computer.
What do we mean by quantum? Being sure a phenomenon is “quantum” isn’t simple. Quantum ideas aren’t intuitive yet. Could you convince your banker that quantum physics could improve her bank’s security? Perhaps three questions identify the issues. First, how do you describe the state of a system? The usual descriptors, wave functions and density matrices, underlie wavelike interference and entanglement. Entanglement describes the correlations between local measurements on two particles, which I call their “quantum dance.” Entanglement is the resource that could make quantum computing worthwhile. The enemy of entanglement is decoherence, just as friction is the enemy of mechanical computers. Second, how does this quantum state change if it is not observed? It evolves deterministically, described by the Schrödinger equation. The probabilistic results of measurements emerge when one asks the third question: how to describe observations and their effects. Measurement modifies entanglement, often destroying it, as it singles out a specific state. This is one way that you can tell if an eavesdropper intercepted your message in a quantum communications system.
Proposed quantum computers have qubits manipulated by a few types of quantum gates, in a complex network. But the parallels are not complete [2]. Each classical bit has a definite value: it can only be 0 or 1; it can be copied without changing its value; it can be read without changing its value; and, when left alone, its value will not change significantly. Reading one classical bit does not affect other (unread) bits. You must run the computer to compute the result of a computation. Every one of those statements is false for qubits, even the last. There is a further difference. For a classical computer, the process is Load → Run → Read, whereas for a quantum computer, the steps are Prepare → Evolve → Measure, or, as in one case discussed later, merely Prepare → Measure.
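As a concrete illustration of the Prepare → Evolve → Measure cycle, here is a minimal sketch (an illustrative Python/NumPy simulation, not part of the original article) that prepares two qubits in |00>, evolves them through a Hadamard and a CNOT gate into an entangled Bell state, and then measures:

```python
# Minimal two-qubit state-vector sketch of Prepare -> Evolve -> Measure.
# Illustrative only; assumes NumPy. Basis ordering |q1 q0>.
import numpy as np

# Prepare: both qubits in |0>, i.e. the state |00>.
state = np.array([1, 0, 0, 0], dtype=complex)

# Evolve: Hadamard on qubit 0, then CNOT (control qubit 0, target qubit 1),
# producing the entangled Bell state (|00> + |11>)/sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
H0 = np.kron(I, H)                       # Hadamard acting on qubit 0
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])          # flips qubit 1 when qubit 0 is 1
state = CNOT @ (H0 @ state)

# Measure: Born-rule probabilities; the outcome is random, and after the
# measurement the superposition (and the entanglement) is gone.
probs = np.abs(state) ** 2
outcome = np.random.choice(4, p=probs)
print(f"P(|00>,|01>,|10>,|11>) = {probs.round(3)}, measured |{int(outcome):02b}>")
```

Run repeatedly, this gives |00> and |11> with equal probability and never |01> or |10>: the correlation that entanglement provides, and that a single measurement destroys.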
Why do we need a quantum computer? The major reasons stem from challenges to mainstream silicon technology. Markets demand enhanced power efficiency, miniaturization, and speed. These enhancements have their limits. Future technology scenarios developed for the semiconductor industry’s own roadmap [3] imply that the number of electrons needed to switch a transistor should fall to just one before 2020. Should we follow this innovative yet incremental roadmap and trust to new tricks, or should we seek a radical technology, with wholly novel quantum components operating alongside existing silicon and photonic technologies? Any device with nanoscale features inevitably displays some types of quantum behavior, so why not make a virtue of necessity and exploit quantum ideas? Quantum-based ideas may offer a major opportunity, just as the atom did for the chemical industry in the 19th century and the electron did for microelectronics in the 20th. Quantum sciences could transform 21st century technologies.
Why choose the solid state for quantum computing? Quantum devices nearly always mean nanoscale devices, ultimately because useful electronic wave functions are fairly compact [4]. Complex devices with controlled features at this scale need the incredible know-how we have acquired with silicon technology. Moreover, quantum computers will be operated by familiar silicon technology. Operation will be easier if classical controls can be integrated with the quantum device, and easiest if the quantum device is silicon compatible. And scaling up, the linking of many basic and extremely small units is a routine demand for silicon devices. With silicon technologies, there are also good ways to link electronics and photonics. So an ideal quantum device would not just meet quantum performance criteria, but would be based on silicon; it would use off-the-shelf techniques (even sophisticated ones) suitable for a near-future generation fabrication plant. A cloud on the horizon concerns decoherence: can entanglement be sustained long enough in a large enough system for a useful quantum calculation?
All the objections
It has been done already? Some beautiful work demonstrating critical steps, including initializing a spin system and transfer of quantum information, has been done at room temperature with nitrogen-vacancy (NV-) centers in diamond [5]. Very few qubits were involved, and scaling up to a useful computer seems unlikely without new ideas. But the combination of photons—intrinsically insensitive to temperature—with defects or dopants with long decoherence times leaves hope.
It can’t be done: serious quantum computing simply isn’t possible anyway. Could any quantum computer work at all? Is it credible that we can build a system big enough to be useful, yet one that isn’t defeated by loss of entanglement or degraded quantum coherence? Certainly there are doubters, who note how friction defeated 19th century mechanical computers. Others have given believable arguments that computing based on entanglement is possible [6]. It may turn out, of course, that some hybrid, a sort of quantum-assisted classical computing, will prove the crucial step.
It can’t be done: quantum behavior disappears at higher temperatures. Confusion can arise because quantum phenomena show up in two ways. In quantum statistics, the quantal ħ appears as ħω/kT. When statistics matter most, near equilibrium, high temperatures T oppose the quantum effects of ħ. However, in quantum dynamics, ħ can appear unassociated with T, opening new channels of behavior. Quantum information processing relies on staying away from equilibrium, so the rates of many individual processes compete in complex ways: dynamics dominate. Whatever the practical problems, there is no intrinsic problem with quantum computing at high temperatures.
It can’t be done: the right qubits don’t exist. True, some qubits are not available at room temperature. These include superconducting qubits and those based on Bose-Einstein condensates. In Kane’s seminal approach [7], the high polarizability needed for phosphorus-doped silicon (Si:P) corresponds to a low donor ionization energy, so the qubits disappear (or decohere) at room temperature. In what follows, I shall look at methods without such problems.
What needs to be done: Implementing quantum computing
David DiVincenzo at IBM Research Labs devised a checklist [8] that conveniently defines minimal (but seriously challenging) needs for a credible quantum computer. There must be a well-defined set of quantum states, such as electron spin states, to use as qubits. One needs scalability, so that enough qubits (let’s say 20, though 200 would be better) linked by entanglement are available to make a serious quantum computer. Operation demands a means to initialize and prepare suitable pure quantum states, a means to manipulate qubits to carry out a desired quantum evolution, and means to read out the results. Decoherence must be slow enough to allow these operations.
What does this checklist imply for solid-state quantum computing? Are there solid-state systems with decoherence mechanisms, key energies, and qubit control systems that might work at useful temperatures, ideally room temperature? Solid-state technologies have good prospects for scalability. There is a good chance that there are ingenious ways to link the many qubits and quantum gates needed for almost any serious application. However, decoherence might be fast. This may be less of a problem than imagined, for fast operating speeds go hand in hand with fast decoherence. Fast processing needs strong interactions, and such strong interactions will usually cause decoherence [9].
For spin-based solid-state quantum computing, most routes to initialization group into four categories. First, there are optical methods (including microwaves), based on selection rules, such as those used for NV- experiments. Then there are spintronic approaches, using a source (perhaps a ferromagnet) of spin-polarized electrons or excitons. (Note that spins have been transferred over distances of nearly a micron at room temperature [10].) Then there are brute force methods aiming for thermal equilibrium in a very large magnetic field, where the ratio of Zeeman splitting to thermal energy kBT is large. And finally there are tricks involving extra qubits that are not used in calculations. Of these methods, the optical and spintronic concepts seem most promising for room-temperature operation.
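A back-of-envelope estimate shows why brute-force initialization struggles at room temperature. The numbers below are an illustration (assuming a g = 2 electron spin and a very large 10 T laboratory field), not figures from the article:

```python
# Back-of-envelope estimate of equilibrium spin polarization in a large field.
# Illustrative assumptions: g = 2 electron spin, B = 10 T.
import math

mu_B = 9.274e-24   # Bohr magneton, J/T
k_B = 1.381e-23    # Boltzmann constant, J/K
g, B = 2.0, 10.0

def polarization(T):
    """Equilibrium polarization tanh(g*mu_B*B / (2*k_B*T)) for a spin-1/2."""
    return math.tanh(g * mu_B * B / (2 * k_B * T))

for T in (0.3, 4.2, 77, 300):
    ratio = g * mu_B * B / (k_B * T)
    print(f"T = {T:6.1f} K: Zeeman/kT = {ratio:7.3f}, polarization = {polarization(T):.3f}")
```

Even in a 10 T field, the equilibrium polarization at 300 K comes out at only a few percent, whereas it approaches unity below 1 K; this is one reason the optical and spintronic routes look more attractive for room-temperature initialization.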
For readout, there are two broad strategies. Most ideas for spin-based quantum information processing aim at the sequential readout of individual spins. However, there are other less-developed ideas in which the ensemble of relevant spins is looked at together, as in some neutron scattering studies of antiferromagnetic crystals. What methods are there for probing single spins, if the sequential strategy is chosen? First, there is direct frequency discrimination, including the use of Zeeman splitting, of hyperfine structure, and so on. Ideas from atom trap experiments suggest that one can continue to interrogate a spin with a sequence of photons that do not change the qubit [11]. Such methods might work at room temperature, at least if the relevant spectral lines remain sharp enough. Second, there are many ways to exploit spin-dependent rates of carrier scattering or trapping. One might examine how mobile polarized spins are scattered by a fixed spin that is to be measured. Or the spin of a mobile carrier might be measured by its propensity for capture or scattering by a fixed spin, or by some combination of polarized mobile spins and interferometry. At room temperature, the problem is practice rather than principle, and acceptable methods seem possible. A third way is to use relative tunnel rates, where one spin state can be blocked. Tunneling-based methods can become very hard at higher temperatures. There are then various ideas, all of which seem to be both tricky and relatively slow, but I may be being pessimistic. These include the use of circularly polarized light and magneto-optics, the direct detection of spin resonance with a scanning tunneling microscope, the exploitation of large spin-orbit coupling, or the direct measurement of a force with a scanning probe having a magnetic tip.
For the manipulations during operation, probably the most important ways use electromagnetic radiation, whether optical, microwave or radio frequency. Other controls, such as ultrasonics or surface acoustic waves, are less flexible. Electromagnetic methods might well operate at room temperature. Other suggestions invoke nanoscale electrodes. I do not know of any that look both credible and scalable.
Hopes for higher temperature operation
In what follows, I shall concentrate on two proposals as examples, with apologies to those whose suggestions I am omitting. Both of the proposals use optical methods to control spins, but do so in wholly different ways. The first is a scheme for optically controlled spintronics that I, Andrew Fisher, and Thornton Greenland proposed [11, 12]. The second route exploits entanglement of states of distant atoms by interference [13] in the context of measurement-based quantum computing [14]. A broader discussion of the materials needed is given in Ref. [15].
Optically controlled spintronics [11, 12]. Think of a thin film of silicon, perhaps 10 nm thick, isotopically pure to avoid nuclear spins, on top of an oxide substrate (Fig. 1). The simple architecture described is essentially two dimensional. Now imagine the film randomly doped with two species of deep donor—one species as qubits, the other to control the qubits. In their ground states, these species should have negligible interactions. When a control donor is excited, the electron’s wave function spreads out more, and its overlap with two of the qubit donors will create an entangling interaction between those two qubits (Fig. 2). Shaped pulses of optical excitation of chosen control donors guide the quantum dance (entanglement) of chosen qubit donors [16].
For controlling entanglement in this way, typical donor spacings in silicon must be of the order of tens of nanometers. Optically, one can only address regions of the order of a wavelength across, say 1000 nm. The limit of optical spatial resolution is a factor 100 larger than donor spacings needed for entanglement. How can one address chosen pairs of qubits? The smallest area on which we can focus light contains many spins. The answer is to exploit the randomness inevitable in standard fabrication and doping. Within a given patch of the film a wavelength across, the optical absorptions will be inhomogeneously broadened from dopant randomness. Even the steps at the silicon interfaces are helpful because the film thickness variations shift transition energies from one dopant site to another. Light of different wavelengths will excite different control donors in this patch, and so manipulate the entanglements of different qubits. Reasonable assumptions suggest one might make use of perhaps 20 gates or so per patch. Controlled links among 20 qubits would be very good by present standards, though further scale up—the linking of patches—would be needed for a serious computer (Fig. 3). The optically controlled spintronics strategy [11, 12] separates the two roles: qubit spins store quantum information, and controls manipulate quantum information. These roles require different figures of merit.
To operate at room temperature, qubits must stay in their ground states, and their decoherence—loss of quantum information—must be slow enough. Shallow donors like Si:P or Si:Bi thermally ionize too readily for room-temperature operations, though one could demonstrate principles at low temperatures with these materials. Double donors like Si:Mg+ or Si:Se+ have ionization energies of about half the silicon band gap and might be deep enough. Most defects in diamond are stable at room temperature, including substitutional N in diamond and the NV- center on which so many experiments have been done.
What about decoherence? First, whatever enables entanglement also causes decoherence. This is why fast switching means fast decoherence, and slow decoherence implies slow switching. Optical control involves manipulation of the qubits by stimulated absorption and emission in controlled optical excitation sequences, so spontaneous emission will cause decoherence. For shallow donors, like Si:P, the excitation energy is less than the maximum silicon phonon energy; even at low temperatures, one-phonon emission causes rapid decoherence. Second, spin-lattice relaxation in qubit ground states destroys quantum information. Large spin-orbit coupling is bad news, so avoiding high atomic number species helps. Spin lattice relaxation data at room temperature are not yet available for those Si donors (like Si:Se+) where one-phonon processes are eliminated because their first excited state lies more than the maximum phonon energy above the ground state. In diamond at room temperature, the spin-lattice relaxation time for substitutional nitrogen is very good (~1 ms) and a number of other centers have times ~0.1 ms. Third, excited state processes can be problems, and two-photon ionization puts constraints on wavelengths and optical intensities. Fourth, the qubits could lose quantum information to the control atoms. This can be sorted out by choosing the right form of excitation pulses [16]. Fifth, interactions with other spins, including nuclear spins, set limits, but there are helpful strategies, like using isotopically pure silicon [17].
The control dopants require different criteria. The wave functions of electronically excited controls overlap and interact with two or more qubits to manipulate entanglements between these qubits. The transiently excited state wave function of the control must have the right spatial extent and lifetime. While centers like Si:As could be used to show the ideas, for room-temperature operation one would choose perhaps a double donor in silicon, or substitutional phosphorus in diamond. The control dopant must have sharp optical absorption lines, since what determines the number of independent gates available in a patch is the ratio of the spread of excitation energies, inhomogeneously broadened, to the (homogeneous) linewidth. The spread of excitation energies—inhomogeneous broadening is beneficial in this optical spintronics approach [11, 12]—has several causes, some controllable. Randomness of relative control-qubit positions and orientations is important, and it seems possible to improve the distribution by using self-organization to eliminate unusable close encounters. Steps on the silicon interfaces are also helpful, provided there are no unpaired spins. Overall, various experimental data and theoretical analyses indicate likely inhomogeneous widths are a few percent of the excitation energy.
A checklist of interesting systems as qubits or controls shows some significant gaps in knowledge of defects in solids. Surprisingly little is known about electronic excited states in diamond or silicon, apart from energies and (sometimes) symmetries. Little is known about spin lattice relaxation and excited state kinetics at temperatures above liquid nitrogen, except for the shallow donors that are unlikely to be good choices for a serious quantum computer. There are few studies of stabilities of several species present at one time. Can we be sure to have isolated P in diamond? Would it lose an electron to substitutional N to yield the useless species P+ and N- ? Will most P be found as the irrelevant (spin S=0) PV- center?
What limits the number of gates in a patch is the number of control atoms that can be resolved spectroscopically one from another. As the temperature rises, the lines get broader, so this number falls and scaling becomes harder. Note the zero phonon linewidth need not be simply related to the fraction of the intensity in the sidebands. Above liquid nitrogen temperatures, these homogeneous optical widths increase fast. Thus we have two clear limits to room-temperature operation. The first is qubit decoherence, especially from spin lattice relaxation. The second is control linewidths becoming too large, reducing scalability, which may prove a more powerful limit.
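A crude way to see how this limit bites: the number of spectrally resolvable control dopants in a patch is roughly the inhomogeneous spread divided by the homogeneous linewidth. The numbers below are illustrative assumptions (an excitation energy of order 300 meV with a few-percent inhomogeneous spread, and guessed homogeneous widths), not values from the article:

```python
# Illustrative estimate of spectrally addressable gates per optical patch.
# Assumed numbers: ~300 meV control excitation, 3% inhomogeneous spread.
excitation_meV = 300.0
inhomogeneous_meV = 0.03 * excitation_meV      # spread across dopants in one patch

# Hypothetical homogeneous (zero-phonon) linewidths, broadening with temperature.
homogeneous_meV = {"4 K": 0.01, "77 K": 0.1, "300 K": 0.5}

for T, width in homogeneous_meV.items():
    gates = inhomogeneous_meV / width
    print(f"{T:>6}: ~{gates:.0f} resolvable control lines per patch")
```

With these assumed widths the count falls from hundreds at liquid-helium temperatures to of order twenty at room temperature, consistent with the "perhaps 20 gates per patch" quoted above.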
Entangled states of distant atoms or solid-state defects created by interference. A wholly different approach generates quantum entanglement between remote systems by performing measurements on them in a certain way [13]. The systems might be two diamonds, each containing a single NV- center prepared in specific electron spin states, the two centers tuned to have exactly the same optical energies (Fig. 4). The measurement involves “single shot” optical excitation. Both systems are exposed to a weak laser pulse that, on average, will achieve one excitation. The single system excited will emit a photon that, after passing through beam splitters and an interferometer, is detected without giving information as to which system was excited (Fig. 5). “Remote entanglement” is achieved, subject to some strong conditions. The electronic quantum information can be swapped to more robust nuclear states (a so-called brokering process). This brokered information can then be recovered when needed to implement a strategy of measurement-based quantum information processing [14].
The materials and equipment needs, while different from those of optically controlled spintronics, have features in common. For remote entanglement, a random distribution of centers is used, with one from each zone chosen because of their match to each other. The excitation energies of the two distant centers must stay equal very accurately, and this equality must be stable over time, but can be monitored. There are some challenges here, since there will be energy shifts when other defect species in any one of the systems change charge or spin state (the difficulty is present but less severe for the optical control approach). As for optically controlled spintronics [11, 12], scale-up requires narrow lines, and becomes harder at higher temperatures, though there are ways to reduce the problem. Remote entanglement needs interferometric stability, avoiding problems when there are different temperature fluctuations for the paths from the separate systems. Again, there are credible strategies to reduce the effects.
So is room-temperature quantum computing feasible?
Spectroscopy is a generic need for both optically controlled spintronics and remote entanglement approaches. Both need qubits (the electron qubit for the measurement-based approach) with decoherence times that are a significant multiple of the switching time. Both need sharp optical transitions with weak phonon sidebands to avoid loss of quantum information. A few zero phonon lines do indeed remain sharp at room temperature. The sharp lines should have frequencies stable over extended times. This mix of properties is hard to meet, but by no means impossible.
Perhaps the hardest conditions have yet to be mentioned. A quantum gate is no more a quantum computer than a transistor is a classical computer. Putting all the components of a quantum computer together could prove really hard. System integration may be the ultimate challenge. Quantum information processing (QIP) will need to exploit standard silicon technology to run the quantum system; and QIP must work alongside a feasible laser optics system. The optical systems are seriously complicated, though each feature seems manageable. It may be necessary to go to architectures even more complicated than those I have described. It might even prove useful to combine elements of remote entanglement and optical spin control, whether this is regarded as using remote entanglement to link spin patches, or as having spin patches instead of NV- centers as nodes for remote entanglements. A short article like this has to miss out many features of importance, not least questions of error correction, but a major message is that, even in the most rudimentary approaches, we have to think through all of the system when talking of a possible computer.
And what would you do with a quantum computer if you had one? Proposals that do not demand room temperature range from the probable, like decryption or directory searching, to the possible, like modeling quantum systems, and even to the difficult yet perhaps conceivable, like modeling turbulence. More frivolous applications, like the computer games that drive many of today’s developments, make much more sense if they work at ambient temperatures. And available quantum processing at room temperature would surely stimulate inventive new ideas, just as solid-state lasers led to compact disc technology.
Summing up, where do we stand? At liquid nitrogen temperatures, say 77 K, quantum computing is surely possible, if quantum computing is possible at all. At dry ice temperatures, say 195 K, quantum computing seems reasonably possible. At temperatures that can be reached by thermoelectric or thermomagnetic cooling, say 260 K, things are harder, but there is hope. Yet we know that small (say 2–3 qubit) quantum devices operate at room temperature. It seems likely, to me at least, that a quantum computer of say 20 qubits will operate at room temperature. I do not say it will be easy. Will such a QIP device be as portable as a laptop? I won’t rule that out, but the answer is not obvious on present designs.
Acknowledgments
This work was supported in part by EPSRC through its Basic Technologies program. I am especially grateful for input from Gabriel Aeppli, Polina Bayvel, Simon Benjamin, Ian Boyd, Andrea Del Duce, Andrew Fisher, Tony Harker, Andy Kerridge, Brendon Lovett, Stephen Lynch, Gavin Morley, Seb Savory, and Jason Smith. I am particularly grateful to Simon Benjamin and Stephen Lynch for preparing the initial versions of the figures.
References
[1] http://www.iop.org/activity/awards/International%20Award/page_31978.html.
[2] C. P. Williams and S. H. Clearwater, Ultimate Zero and One: Computing at the Quantum Frontier (Copernicus, New York, 2000).
[3] International Technology Roadmap for Semiconductors, http://www.itrs.net/.
[4] General discussions relevant here: R. W. Keyes, J. Phys. Condens. Matter 17, V9 (2005); R. W. Keyes, J. Phys. Condens. Matter 18, S703 (2006); T. P. Spiller and W. J. Munro, J. Phys. Condens. Matter 18, V1 (2006); R. Tsu, Int. J. High Speed Electronics and Systems 9, 145 (1998); R. W. Keyes, Appl. Phys. A 76, 737 (2003); M. I. Dyakonov, in Future Trends in Microelectronics: Up the Nano Creek, edited by S. Luryi, J. Xu, and A. Zaslavsky (Wiley, Hoboken, NJ, 2007).
[5] Examples include: E. van Oort, N. B. Manson, and M. Glasbeek, J. Phys. C 21, 4385 (1988); F. T. Charnock and T. A. Kennedy, Phys. Rev. B 64, 041201 (2001); J. Wrachtrup et al., Opt. Spectrosc. 91, 429 (2001); J. Wrachtrup and F. Jelezko, J. Phys. Condens. Matter 18, S807 (2006); R. Hanson, F. M. Mendoza, R. J. Epstein, and D. D. Awschalom, Phys. Rev. Lett. 97, 087601 (2006); A. D. Greentree, P. Olivero, M. Draganski, E. Trajkov, J. R. Rabeau, P. Reichart, B. C. Gibson, S. Rubanov, S. T. Huntington, D. N. Jamieson, and S. Prawer, J. Phys. Condens. Matter 18, S825 (2006).
[6] M. B. Plenio and P. L. Knight, Philos. Trans. R. Soc. London A 453, 2017 (1997).
[7] B. E. Kane, Nature 393, 133 (1998).
[8] D. P. DiVincenzo and D. Loss, Superlattices Microstruct. 23, 419 (1998).
[9] A. J. Fisher, Philos. Trans. R. Soc. London A 361, 1441 (2003); http://arxiv.org/abs/quant-ph/0211200v1.
[10] V. Dediu, M. Murgia, F. C. Matacotta, C. Taliani, and S. Barbanera, Solid State Commun. 122, 181 (2002).
[11] A. M. Stoneham, A. J. Fisher, and P. T. Greenland, J. Phys. Condens. Matter 15, L447 (2003).
[12] R. Rodriquez, A. J. Fisher, P. T. Greenland, and A. M. Stoneham, J. Phys. Condens. Matter 16, 2757 (2004).
[13] C. Cabrillo, J. I. Cirac, P. García-Fernández, and P. Zoller, Phys. Rev. A 58, 1025 (1999).
[14] S. C. Benjamin, B. W. Lovett, and J. M. Smith, Laser Photonics Rev. (to be published).
[15] A. M. Stoneham, Materials Today 11, 32 (2008).
[16] A. Kerridge, A. H. Harker, and A. M. Stoneham, J. Phys. Condens. Matter 19, 282201 (2007); E. M. Gauger et al., New J. Phys. 10, 073027 (2008).
[17] A. M. Tyryshkin, J. J. L. Morton, S. C. Benjamin, A. Ardavan, G. A. D. Briggs, J. W. Ager, and S. A. Lyon, J. Phys. Condens. Matter 18, S783 (2006).
About the Author
Marshall Stoneham
Marshall Stoneham is Emeritus Massey Professor of Physics at University College London. He is a Fellow of the Royal Society, and also of the American Physical Society and of the Institute of Physics. Before joining UCL in 1995, he was the Chief Scientist of the UK Atomic Energy Authority, which involved him in many areas of science and technology, from quantum diffusion to nuclear safety. He was awarded the Guthrie gold medal of the Institute of Physics in 2006, and the Royal Society’s Zeneca Prize in 1995. He is the author of over 500 papers, and of a number of books, including Theory of Defects in Solids, now an Oxford Classic, and The Wind Ensemble Sourcebook that won the 1997 Oldman Prize. Marshall Stoneham is based in the London Centre for Nanotechnology, where he finds the scope for new ideas especially stimulating. His scientific interests range from new routes to solid-state quantum computing through materials modeling to biological physics, where his work on the interaction of small scent molecules with receptors has attracted much attention. He is the co-founder of two physics-based firms.

The Origin of Artificial Species: Creating Artificial Personalities


(Left) Rity was developed to test the world’s first robot “chromosomes,” which allow it to have an artificial genome-based personality. (Right) A representation of Rity’s artificial genome. Darker shades represent higher gene values, and red represents negative values. Image credit: Jong-Hwan Kim, et al. ©2009 IEEE.
(PhysOrg.com) -- Does your robot seem to be acting a bit neurotic? Maybe it's just its personality. Recently, a team of researchers has designed computer-coded genomes for artificial creatures in which a specific personality is encoded. The ability to give artificial life forms their own individual personalities could not only improve the natural interactions between humans and artificial creatures, but also initiate the study of “The Origin of Artificial Species,” the researchers suggest.
The first artificial creature to receive the genomic personality is Rity, a dog-like software character that lives in a virtual 3D world in a PC. Rity’s genome is composed of 14 chromosomes, which together are composed of a total of 1,764 genes, each with its own value. Rather than manually assign the gene values, which would be difficult and time-consuming, the researchers proposed an evolutionary process that generates a genome with a specific personality desired by a user. The process is described in a recent study by authors Jong-Hwan Kim of KAIST in Daejeon, Korea; Chi-Ho Lee of the Samsung Economic Research Institute in Seoul, Korea; and Kang-Hee Lee of Samsung Electronics Company, Ltd., in Suwon-si, Korea.
“This is the first time that an artificial creature like a robot or software agent has been given a genome with a personality,” Kim told PhysOrg.com. “I proposed a new concept of an artificial chromosome as the essence to define the personality of an artificial creature and to pass on its traits to the next generation, like a genetic inheritance. It is critical to provide an impression that the robot is a living creature. With this respect, having emotions enhances natural interaction for human-robot symbiosis in the coming years.”
As the researchers explain, an autonomous artificial creature - whether a physical robot or agent - can behave, interact, and react to environmental stimuli. Rity, for example, can interact with humans in the physical world using information received through a mouse, a camera, or a microphone, giving it 47 distinct perceptions. For instance, a single click and a double click on Rity are perceived as “patted” and “hit,” respectively. Dragging Rity slowly and softly is perceived as “soothed,” and dragging it quickly and wildly as “shocked.”
To react to these stimuli in real time, Rity relies on its internal states which are composed of three units - motivation, homeostasis, and emotion - and controlled by its internal control architecture. The three units have a total of 14 states, which are the basis of the 14 chromosomes: the motivation unit includes six states (curiosity, intimacy, monotony, avoidance, greed, and the desire to control); the homeostasis unit includes three states (fatigue, hunger, and drowsiness); and the emotion unit has five states (happiness, sadness, anger, fear, and neutral).
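A minimal sketch of how those 14 internal states might be organized in code follows (the state names are from the article; the representation itself is an assumption for illustration):

```python
# Illustrative data structure for Rity's internal state. State names follow the
# article; the scalar-level representation is assumed, not the authors' code.
INTERNAL_STATES = {
    "motivation": ["curiosity", "intimacy", "monotony", "avoidance",
                   "greed", "desire_to_control"],
    "homeostasis": ["fatigue", "hunger", "drowsiness"],
    "emotion": ["happiness", "sadness", "anger", "fear", "neutral"],
}

# 14 states in total, one per chromosome in the artificial genome.
assert sum(len(v) for v in INTERNAL_STATES.values()) == 14

# Current levels, e.g. all initialized to a neutral midpoint and later
# updated by incoming perceptions.
state_levels = {name: 0.5 for unit in INTERNAL_STATES.values() for name in unit}
```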
“In Rity, internal states such as motivation, homeostasis and emotion change according to the incoming perception,” Kim said. “If Rity sees its master, its emotion becomes happy and its motivation may be ‘greeting and approaching’ him or her. It means the change of internal states and the activated behavior accordingly is internal and external responses to the incoming stimulus.”
The internal control architecture processes incoming sensor information, calculates each value of internal states as its response, and sends the calculated values to the behavior selection module to generate a proper behavior. Finally, the behavior selection module probabilistically selects a behavior through a voting mechanism, where each reasonable behavior has its own voting value. Unreasonable behaviors are prevented with matrix masks, while a reflexive behavior module, which imitates an animal’s instinct, deals with urgent situations such as running into a wall and enables a more immediate response.
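The probabilistic voting step could look something like the sketch below, which is only an illustration of the idea as described (the behavior names, voting values, and mask are assumptions, not the authors' implementation): each candidate behavior has a voting value, a mask zeroes out behaviors that are unreasonable in the current context, and one behavior is drawn with probability proportional to its masked vote.

```python
# Illustrative sketch of probabilistic behavior selection by voting with masks.
import random

votes = {"approach": 0.6, "wag_tail": 0.9, "bark": 0.3, "sleep": 0.1}
mask = {"approach": 1, "wag_tail": 1, "bark": 1, "sleep": 0}  # 0 = unreasonable now

def select_behavior(votes, mask):
    """Draw one behavior with probability proportional to its masked voting value."""
    masked = {b: v * mask[b] for b, v in votes.items()}
    total = sum(masked.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for behavior, value in masked.items():
        cumulative += value
        if r <= cumulative:
            return behavior
    return behavior  # fallback for floating-point edge cases

print(select_behavior(votes, mask))
```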
“Rity was developed to test the world's first robotic ‘chromosomes,’ which are a set of computerized DNA (Deoxyribonucleic acid) code for creating robots that can think, feel, reason, express desire or intention, and could ultimately reproduce their kind, and evolve as a distinct species in a virtual world,” Kim said. “Rity can express its feeling through facial expression and behavior just like a living creature.”
As the researchers explain, each of the 14 chromosomes in Rity’s genome is composed of three gene vectors: the fundamental gene vector, the internal-state-related gene vector, and the behavior-related gene vector. As each chromosome is represented by 2 F-genes, 47 I-genes, and 77 B-genes, Rity has 1,764 genes in total. Each gene can have a range of values represented by real numbers. While genes are inherited, mutations may also occur. The nature of the genetic coding is such that a single gene can influence multiple behaviors, and also a single behavior can be influenced by multiple genes.
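The gene counts add up as stated: 14 chromosomes, each with 2 fundamental, 47 internal-state-related and 77 behavior-related genes, gives 14 × (2 + 47 + 77) = 1,764 genes. One plausible (assumed, not the authors') array layout is:

```python
# Assumed array layout for the artificial genome; the counts come from the article,
# the uniform random initialization is an illustration only.
import numpy as np

N_CHROMOSOMES = 14         # one per internal state
N_F, N_I, N_B = 2, 47, 77  # fundamental, internal-state-related, behavior-related genes

genes_per_chromosome = N_F + N_I + N_B                 # 126
genome = np.random.uniform(-1.0, 1.0, size=(N_CHROMOSOMES, genes_per_chromosome))

assert genome.size == 1764                             # matches the article's total
f_genes, i_genes, b_genes = np.split(genome, [N_F, N_F + N_I], axis=1)
print(genome.shape, f_genes.shape, i_genes.shape, b_genes.shape)
```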
Depending on the values of the genes, the researchers specified five personalities (“the Big Five personality dimensions”) and their opposites to classify an artificial creature’s personality traits: extroverted/introverted, agreeable/antagonistic, conscientious/negligent, openness/closeness, and neurotic/emotionally stable.
To demonstrate an artificial genome, the researchers used their evolutionary algorithm to generate two contrasting personalities for Rity - agreeable and antagonistic - and compared Rity’s behavior in the two cases. Running the algorithm through 3,000 generations on a 2-GHz Pentium 4 processor took about 12 hours to generate a genome encoding a desired personality. For comparison, the researchers also used manual and random processes to generate genomes with agreeable and antagonistic personalities, though neither outperformed the evolutionary algorithm in terms of personality consistency and similarity to the desired personality. Finally, the researchers also verified the accuracy of the evolutionary genome encoding by observing how the artificial creature reacted to a series of stimuli.
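In outline, an evolutionary generative process of this kind works like the sketch below. This is a generic genetic-algorithm skeleton, not the authors' implementation; in particular the fitness function here is a stand-in, whereas the real system scores genomes by the personality the creature actually exhibits under test stimuli.

```python
# Generic genetic-algorithm skeleton for evolving a genome toward a target profile.
# Population size, mutation rate, and the fitness stand-in are assumptions.
import numpy as np

rng = np.random.default_rng(0)
POP, GENES, GENERATIONS, ELITE = 50, 1764, 3000, 5

target = rng.uniform(-1, 1, GENES)             # placeholder "desired personality"

def fitness(genome):
    return -np.mean((genome - target) ** 2)    # stand-in: closer to target = fitter

population = rng.uniform(-1, 1, (POP, GENES))
for gen in range(GENERATIONS):
    order = np.argsort([fitness(g) for g in population])[::-1]
    population = population[order]             # best genomes first
    children = []
    for _ in range(POP - ELITE):
        a = population[rng.integers(ELITE)]    # pick two elite parents
        b = population[rng.integers(ELITE)]
        cut = rng.integers(1, GENES)
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        mutate = rng.random(GENES) < 0.01                # 1% mutation rate (assumed)
        child[mutate] += rng.normal(0, 0.1, mutate.sum())
        children.append(child)
    population = np.vstack([population[:ELITE], children])

print("best fitness:", fitness(population[0]))
```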
“The genome is an essential one encoding a mechanism for growth, reproduction and evolution, which necessarily defines ‘The Origin of Artificial Species,’” Kim said. “It means the origin stems from a computerized genetic code, which defines the mechanism for growing, multiplying and evolving along with its propensity to ‘feel’ happy, sad, angry, sleepy, hungry, afraid, etc.”
As the researchers showed, a 2D representation of the genome can enable users to view the chromosomes of the three gene types and easily insert or delete certain chromosomes or genes related to an artificial creature’s personality.
In the future, the researchers plan to combine the genome-based personality with the artificial creature’s own experiences in order to influence the creature’s behavioral responses. They also plan to classify and standardize the different behaviors in order to generalize the artificial genome structure.
More information:
Robot Intelligence Technology Lab: http://rit.kaist.ac.kr/home/ArtificialCreatures
Jong-Hwan Kim, Chi-Ho Lee, and Kang-Hee Lee. “Evolutionary Generative Process for an Artificial Creature’s Personality.” IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, Vol. 39, No. 3, May 2009.
Copyright 2009 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.

Sunday, 10 May 2009

Faster Computers, Electronic Devices Possible After Scientists Create Large-area Graphene On Copper

ScienceDaily (May 11, 2009) — The creation of large-area graphene using copper may enable the manufacture of new graphene-based devices that meet the scaling requirements of the semiconductor industry, leading to faster computers and electronics, according to a team of scientists and engineers at The University of Texas at Austin.
"Graphene could lead to faster computers that use less power, and to other sorts of devices for communications such as very high-frequency (radio-frequency-millimeter wave) devices," said Professor and physical chemist Rod Ruoff, one of the corresponding authors on the Science article. "Graphene might also find use as optically transparent and electrically conductive films for image display technology and for use in solar photovoltaic electrical power generation."
Graphene, an atom-thick layer of carbon atoms bonded to one another in a "chickenwire" arrangement of hexagons, holds great potential for nanoelectronics, including memory, logic, analog, opto-electronic devices and potentially many others. It also shows promise for electrical energy storage for supercapacitors and batteries, for use in composites, for thermal management, in chemical-biological sensing and as a new sensing material for ultra-sensitive pressure sensors.
"There is a critical need to synthesize graphene on silicon wafers with methods that are compatible with the existing semiconductor industry processes," Ruoff said. "Doing so will enable nanoelectronic circuits to be made with the exceptional efficiencies that the semiconductor industry is well known for."
Graphene can show very high electron- and hole-mobility; as a result, the switching speed of nanoelectronic devices based on graphene can in principle be extremely high. Also, graphene is "flat" when placed on a substrate (or base material) such as a silicon wafer and, thus, is compatible with the wafer-processing approaches of the semiconductor industry. The exceptional mechanical properties of graphene may also enable it to be used as a membrane material in nanoelectromechanical systems, as a sensitive pressure sensor and as a detector for chemical or biological molecules or cells.
The university researchers, including post-doctoral fellow Xuesong Li, and Luigi Colombo, a TI Fellow from Texas Instruments, Inc., grew graphene on copper foils whose area is limited only by the furnace used. They demonstrated for the first time that centimeter-square areas could be covered almost entirely with mono-layer graphene, with a small percentage (less than five percent) of the area being bi-layer or tri-layer flakes. The team then created dual-gated field effect transistors with the top gate electrically isolated from the graphene by a very thin layer of alumina, to determine the carrier mobility. The devices showed that the mobility, a key metric for electronic devices, is significantly higher than that of silicon, the principal semiconductor of most electronic devices, and comparable to natural graphite.
"We used chemical-vapor deposition from a mixture of methane and hydrogen to grow graphene on the copper foils," said Ruoff. "The solubility of carbon in copper being very low, and the ability to achieve large grain size in the polycrystalline copper substrate are appealing factors for its use as a substrate --along with the fact that the semiconductor industry has extensive experience with the use of thin copper films on silicon wafers. By using a variety of characterization methods we were able to conclude that growth on copper shows significant promise as a potential path for high quality graphene on 300-millimeter silicon wafers."
The university's effort was funded in part by the state of Texas, the South West Academy for Nanoelectronics (SWAN) and the DARPA CERA Center. Electrical and computer engineering Professor Sanjay Banerjee, a co-author of the Science paper, directs both SWAN and the DARPA Center.
"By having a materials scientist of Colombo's caliber with such extensive knowledge about all aspects of semiconductor processing and now co-developing the materials science of graphene with us, I think our team exemplifies what collaboration between industrial scientists and engineers with university personnel can be," said Ruoff, who holds the Cockrell Family Regents Chair #7. "This industry-university collaboration supports both the understanding of the fundamental science as well its application."
Other co-authors of the work not previously mentioned include: research associate Richard Piner of the Department of Mechanical Engineering; Assistant Professor Emanuel Tutuc of the Department of Electrical and Computer Engineering; post-doctoral fellows Jinho An, Weiwei Cai, Inhwa Jung, Aruna Velamakanni and Dongxing Yang in the Department of Mechanical Engineering; and graduate students Seyoung Kim and Junghyo Nah in the Department of Electrical and Computer Engineering.
Journal reference:
Li et al. Large-Area Synthesis of High-Quality and Uniform Graphene Films on Copper Foils. Science, 2009; DOI: 10.1126/science.1171245
Adapted from materials provided by University of Texas at Austin, via EurekAlert!, a service of AAAS.

NIST demonstrates method for reducing errors in quantum computing


A team of researchers working at the National Institute of Standards and Technology in Boulder, Colo., has demonstrated the effectiveness of using microwave pulses to suppress errors in quantum bits, or qubits, the media for carrying and manipulating data in the still-experimental field of quantum computing.
The dynamical decoupling technique using microwave pulses they tested is not new, said John Bollinger, lead scientist on the project.
“It’s something we borrowed from the [magnetic resonance imaging] community that was developed in the ’50s and ’60s,” Bollinger said. “Our work is a validation of an idea that has been out there.”
But the experiments also advanced the theories, said Michael J. Biercuk, a NIST researcher who took part in the work. By using new pulse sequences, researchers demonstrated that the number of errors introduced into quantum computing through environmental noise could be reduced by an order of magnitude. This means the expected error rate can be brought down to well below the threshold for fault tolerance in quantum computing.
The ability to suppress errors before they accumulate is important because qubits are subject to the introduction of errors through stray electromagnetic “noise” in the environment. To date, there is no practical way to correct these qubit errors.
The work was described in the April 23 issue of Nature.
Quantum computing uses subatomic particles rather than binary bits to carry and manipulate information. While a traditional bit is either on or off, a 1 or a 0, a qubit can exist in both states simultaneously. Once harnessed, this superposition of states should let quantum computers extract patterns from possible outputs of huge computations without performing all of them, allowing them to crack complex problems not solvable by traditional binary computers.
The researchers used an array of about 1,000 ultracold beryllium ions held in a magnetic field as the qubits. Sequences of microwave pulses were used to reverse changes introduced into the quantum states. The pulses in effect decouple the qubits from electromagnetic noise in the environment.
Work on using the technique for suppressing quantum errors began a decade ago, Biercuk said. “Our work validated essentially all of the work” that had been done up to this point. It also introduced new ideas by moving the pulses relative to each other in the patterns, rather than increasing the number of pulses. The results showed an unexpectedly high rate of error suppression.
The novel pulse sequences are tailored to the specific noise environment. The effective sequences can be found quickly through an experimental feedback technique and were shown to significantly outperform other sequences. The researchers tested these sequences under realistic noise conditions for different qubit technologies, making their results broadly applicable.
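To illustrate what “moving the pulses relative to each other, rather than increasing the number of pulses” can mean, the sketch below compares two standard π-pulse schedules over a fixed evolution time: the evenly spaced CPMG sequence and the Uhrig (UDD) sequence, whose pulses crowd toward the ends of the interval. This is a textbook illustration of dynamical-decoupling timing, not the specific sequences reported in the Nature paper.

```python
# Pulse timings (as fractions of the total evolution time T) for two
# n-pulse dynamical-decoupling sequences applied to a single qubit.
import math

def cpmg_times(n):
    """Evenly spaced pi pulses: t_j = T * (j - 1/2) / n."""
    return [(j - 0.5) / n for j in range(1, n + 1)]

def udd_times(n):
    """Uhrig sequence: t_j = T * sin^2(pi * j / (2n + 2))."""
    return [math.sin(math.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)]

n = 6
print("CPMG:", [round(t, 3) for t in cpmg_times(n)])
print("UDD: ", [round(t, 3) for t in udd_times(n)])
```

Which schedule suppresses errors best depends on the noise spectrum, which is why tailoring the sequence to the measured noise environment, as the NIST team did, pays off.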
Announcement of the work comes a little more than a month after other NIST researchers showed that a promising technique for correcting quantum errors would not work. The technique, called transversal encoded quantum gates, seemed simple at first. “But after substantial effort, no one was able to find a quantum code to do that,” said information theorist Bryan Eastin. “We were able to show that a way doesn’t exist.”
The transversal operations used by Eastin were a “specific case” of error correction, Biercuk said, and the work does not mean that error correction cannot be done in quantum computers. Effective techniques for suppressing errors would mean that any error correction method would also be more effective, since there would be fewer errors to deal with.
But practical quantum computing is still some years away. Biercuk said that quantum computing has already been demonstrated with arrays of several coupled qubits. “That is wonderful from an experimental point of view, but it is not useful,” he said.
A quantum computer useful for doing complex simulations would require an array of about 100 qubits, he said. “That’s at least a decade away.” A computer capable of doing cryptographic factoring on a scale that cannot be done effectively by traditional computers still is 20 to 30 years off, he said.

Thursday, 7 May 2009

Neural Networks Used To Improve Wind Speed Forecasting

ScienceDaily (May 6, 2009) — A team of researchers from the University of Alcala (UAH) and the Complutense University in Madrid (UCM) have invented a new method for predicting the wind speed of wind farm aerogenerators. The system is based on combining the use of weather forecasting models and artificial neural networks and enables researchers to calculate the energy that wind farms will produce two days in advance.
"The aim of the hybrid method we have developed is to predict the wind speed in each of the aerogenerators in a wind farm", explained Sancho Salcedo, an engineer at the Escuela Politécnica Superior and co-author of the study, published on-line in the journal Renewable Energy.
In order to develop the new model, the scientists used information provided by the Global Forecasting System from the US National Centers for Environmental Prediction. The data from this system cover the entire planet with a resolution of approximately 100 kilometres and are available for free on the internet.
Researchers are able to make more detailed predictions by integrating the so-called ‘fifth-generation mesoscale model’ (MM5) from the US National Center for Atmospheric Research, which enhances the resolution to 15x15 kilometres.
"This information is still not enough to predict the wind speed of one particular aerogenerador, which is why we applied artificial neural networks," Salcedo clarified. These networks are automatic information learning and processing systems that simulate the workings of animal nervous systems. In this case, they use the temperature, atmospheric pressure and wind speed data provided by forecasting models, as well as the data gathered by the aerogenerators themselves.
With these data, once the system has been "trained", predictions regarding wind speed will be made between one and 48 hours in advance. Wind farms are obliged by law to supply these predictions to Red Eléctrica Española, the company that delivers electricity and runs the Spanish electricity system.
Salcedo says the method can be applied immediately: "If the wind speed of one aerogenerator can be predicted, then we can estimate how much energy it will produce. Therefore, by summing the predictions for each ‘aero', we can forecast the production of an entire wind farm." The method has already been used very successfully at the wind farm in Fuentasanta, in Albacete.
Millions of Euros could be saved
Researchers are continuing to improve the method and recently proposed the use of several global forecasting models instead of just one, according to an article published this year in Neurocomputing. As a result, several sets of observations are obtained, which are then applied to banks of neural networks to achieve a more accurate prediction of aerogenerator wind speeds.
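The “banks of neural networks” idea can be sketched as a simple ensemble: train one network per set of global-model inputs and combine their predictions. Again this is an illustration under assumed synthetic data, not the published system, and it assumes scikit-learn is available.

```python
# Illustrative bank of networks: one small regressor per global-model input set,
# with predictions combined by averaging. Synthetic data; assumes scikit-learn.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 1000
true_wind = rng.gamma(2.0, 3.0, n)            # "observed" turbine wind speed (synthetic)

# Three imperfect forecasts of the same wind, as if from three global models.
forecasts = [(true_wind + rng.normal(0, s, n)).reshape(-1, 1) for s in (0.8, 1.0, 1.2)]

bank = []
for X in forecasts:
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                     random_state=0))
    net.fit(X[:800], true_wind[:800])         # each network trains on its own input set
    bank.append(net)

# Combine the bank's held-out predictions by simple averaging.
ensemble = np.mean([net.predict(X[800:]) for net, X in zip(bank, forecasts)], axis=0)
rmse = np.sqrt(np.mean((ensemble - true_wind[800:]) ** 2))
print("ensemble RMSE (m/s):", round(float(rmse), 2))
```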
The results obtained reveal an improvement of 2% in predictions compared to the previous model. "Although this may seem like a small improvement, it is really substantial, as we are talking about an improvement in predicting energy production that could be worth millions of euros," Salcedo concluded.
Journal references:
Salcedo-Sanz et al. Hybridizing the fifth generation mesoscale model with artificial neural networks for short-term wind speed prediction. Renewable Energy, 2009; 34 (6): 1451 DOI: 10.1016/j.renene.2008.10.017
Salcedo-Sanz et al. Accurate short-term wind speed prediction by exploiting diversity in input data using banks of artificial neural networks. Neurocomputing, 2009; 72 (4-6): 1336 DOI: 10.1016/j.neucom.2008.09.010
Adapted from materials provided by Plataforma SINC.