giovedì 1 ottobre 2009

First Intelligent Financial Search Engine Developed.

ScienceDaily (Sep. 30, 2009) — Researchers from the Carlos III University of Madrid (UC3M) have completed the development of the first search engine designed to search for information from the financial and stock market sector based on semantic technology, which enables users to make more accurate thematic searches adapted to their needs.
Unlike conventional search engines, SONAR, so named by its creators, enables the user to perform structured searches which are not based solely on concordance with a series of key words. This corporate financial search engine based on semantic technology, as described on the project website (http://www.proyecto-sonar.org), was developed by researchers from the UC3M in partnership with the University of Murcia, the Instituto de Empresa business school and the company Indra.
According to its creators, it has two main advantages. First, its effectiveness in a concrete domain, that of finance, which is closely defined and has a very precise vocabulary. According to Juan Miguel Gómez Berbís, from the Computer Science Department of the UC3M, “This verticality distinguishes SONAR from other more generic search engines, such as Google or Bing.” Second, its capacity to establish relations between news, share valuations and prices via logical reasoning.
The first prototype works by making use of semantic web elements. Basically, the system collects data from both public information sources (Internet) and private, corporate ones (Intranet), adds them to a repository of semantically recorded data (labelled and structured) and allows intelligent access to this data. To achieve this, the platform incorporates an inference engine, a mechanism capable of performing reasoning tasks on the recorded information, as well as a natural language processor, which helps the user to perform the search in the simplest way possible. In this way, the results are matched to each request, eliminating, for example, the ambiguities of polysemic terms in searches carried out by users on the stored data. “SONAR enables us to establish relations between different sources of information and discover and expand our knowledge, while at the same time it allows us to classify them so that users can get much more benefit from the experience.”
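The semantic-repository idea can be made concrete with a toy example. The sketch below is not SONAR's code; all class names, URIs and figures are invented. It stores a few labelled financial facts as RDF triples with the rdflib library and answers a structured question with a SPARQL query instead of bare keyword matching, which is roughly the kind of access an inference-backed repository allows.

from rdflib import Graph, Literal, Namespace, RDF

FIN = Namespace("http://example.org/finance#")
g = Graph()

# facts harvested from public (Internet) and corporate (Intranet) sources
g.add((FIN.AcmeBank, RDF.type, FIN.Company))
g.add((FIN.AcmeBank, FIN.listedOn, FIN.IBEX35))
g.add((FIN.AcmeBank, FIN.sharePrice, Literal(12.4)))
g.add((FIN.news42, FIN.mentions, FIN.AcmeBank))
g.add((FIN.news42, FIN.headline, Literal("Acme Bank beats Q3 forecasts")))

# structured query: news about IBEX 35 companies whose shares trade above 10
q = """
PREFIX fin: <http://example.org/finance#>
SELECT ?headline ?price WHERE {
    ?co fin:listedOn fin:IBEX35 ;
        fin:sharePrice ?price .
    ?item fin:mentions ?co ;
          fin:headline ?headline .
    FILTER (?price > 10)
}
"""
for row in g.query(q):
    print(row.headline, row.price)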
Potential users
This search tool is designed for both private investors and large financial concerns. Its creators anticipate that it will be a very useful tool for analysts and stockbrokers. “It will be especially useful to the finance departments of banks and savings banks, or to give an existing search engine added value over its competitors,” Gómez Berbís points out. The search for accurate, reliable, relevant information in this business area has become essential in a domain where speed and quality of data are critical factors with an exceptional impact on business processes.
According to the researchers, this project aims to respond to a need from the financial sector, namely the analysis of a large volume of information in order to make decisions. The execution of this project will give the financial community access to a set of intelligent systems for the aggregated search of information in the financial domain and enable it to improve procedures for integrating company information and processes.
Researchers are currently incorporating new functions into the search tool and also receiving requests to adapt it to other domains, such as transport and biotechnology. In any case, the project is constantly evolving in order to enhance accuracy and reliability. “In SONAR2 we are working on two Intelligent Decision Support Systems for Financial Investments, one based on Fundamental Analysis and the other on Technical Chartist Analysis, which assists the work of the trader and average investor,” reveals professor Gómez Berbís.
Adapted from materials provided by Universidad Carlos III de Madrid.

lunedì 28 settembre 2009

Ants Vs. Worms: New Computer Security Mimics Nature.

ScienceDaily (Sep. 28, 2009) — In the never-ending battle to protect computer networks from intruders, security experts are deploying a new defense modeled after one of nature’s hardiest creatures — the ant.
Unlike traditional security devices, which are static, these “digital ants” wander through computer networks looking for threats, such as “computer worms” — self-replicating programs designed to steal information or facilitate unauthorized use of machines. When a digital ant detects a threat, it doesn’t take long for an army of ants to converge at that location, drawing the attention of human operators who step in to investigate.
The concept, called “swarm intelligence,” promises to transform cyber security because it adapts readily to changing threats.
“In nature, we know that ants defend against threats very successfully,” explains Professor of Computer Science Errin Fulp, an expert in security and computer networks. “They can ramp up their defense rapidly, and then resume routine behavior quickly after an intruder has been stopped. We were trying to achieve that same framework in a computer system.”
Current security devices are designed to defend against all known threats at all times, but the bad guys who write malware — software created for malicious purposes — keep introducing slight variations to evade computer defenses.
As new variations are discovered and updates issued, security programs gobble more resources, antivirus scans take longer and machines run slower — a familiar problem for most computer users.
Glenn Fink, a research scientist at Pacific Northwest National Laboratory (PNNL) in Richland, Wash., came up with the idea of copying ant behavior. PNNL, one of 10 Department of Energy laboratories, conducts cutting-edge research in cyber security.
Fink was familiar with Fulp’s expertise developing faster scans using parallel processing — dividing computer data into batches like lines of shoppers going through grocery store checkouts, where each lane is focused on certain threats. He invited Fulp and Wake Forest graduate students Wes Featherstun and Brian Williams to join a project there this summer that tested digital ants on a network of 64 computers.
Swarm intelligence, the approach developed by PNNL and Wake Forest, divides up the process of searching for specific threats.
“Our idea is to deploy 3,000 different types of digital ants, each looking for evidence of a threat,” Fulp says. “As they move about the network, they leave digital trails modeled after the scent trails ants in nature use to guide other ants. Each time a digital ant identifies some evidence, it is programmed to leave behind a stronger scent. Stronger scent trails attract more ants, producing the swarm that marks a potential computer infection.”
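The mechanism Fulp describes is easy to caricature in code. The toy simulation below is not the PNNL/Wake Forest software; the network, the "evidence" check and every number are invented. It only shows how a stronger scent left where evidence is found draws more visits until a swarm marks the suspect machine.

import random

machines = {i: {"scent": 0.0, "infected": i == 7} for i in range(16)}

def evidence(m):
    # stand-in for one ant type's narrow check, e.g. an odd process or traffic pattern
    return machines[m]["infected"] and random.random() < 0.6

def ant_step():
    # ants prefer machines with stronger scent trails
    weights = [1.0 + machines[m]["scent"] for m in machines]
    m = random.choices(list(machines), weights=weights)[0]
    if evidence(m):
        machines[m]["scent"] += 2.0      # leave a stronger scent when evidence is found
    machines[m]["scent"] *= 0.95         # scent slowly evaporates
    return m

for _ in range(2000):                    # many ant visits wandering the network
    ant_step()

hotspot = max(machines, key=lambda m: machines[m]["scent"])
print("ants are swarming on machine", hotspot)   # flagged for a human operator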
In the study this summer, Fulp introduced a worm into the network, and the digital ants successfully found it. PNNL has extended the project this semester, and Featherstun and Williams plan to incorporate the research into their master’s theses.
Fulp says the new security approach is best suited for large networks that share many identical machines, such as those found in governments, large corporations and universities.
Computer users need not worry that a swarm of digital ants will decide to take up residence in their machine by mistake. Digital ants cannot survive without software “sentinels” located at each machine, which in turn report to network “sergeants” monitored by humans, who supervise the colony and maintain ultimate control.
Adapted from materials provided by Wake Forest University. Original article written by Eric Frazier, Office of Communications and External Relations.

domenica 20 settembre 2009

Reconstruct Mars Automatically In Minutes.


ScienceDaily (Sep. 18, 2009) — A computer system is under development that can automatically combine images of the Martian surface, captured by landers or rovers, in order to reproduce a three dimensional view of the red planet. The resulting model can be viewed from any angle, giving astronomers a realistic and immersive impression of the landscape.
The new development has been presented at the European Planetary Science Congress in Potsdam by Dr Michal Havlena.
“The feeling of ‘being right there’ will give scientists a much better understanding of the images. The only input we need are the captured raw images and the internal camera calibration. After minutes of computation on a standard PC, a three dimensional model of the captured scene is obtained,” said Dr Havlena.
The growing amount of available imagery from Mars is nearly impossible to handle with the manual image-processing techniques used to date. The new automated method, which allows fast, high-quality image processing, was developed at the Center for Machine Perception of the Czech Technical University in Prague, under the supervision of Tomas Pajdla, as part of the EU FP7 project PRoVisG.
From a technical point of view, the image processing consists of three stages. The first step is determining the image order. If the input images are unordered, i.e. they do not form a sequence but are still somehow connected, a state-of-the-art image indexing technique is able to find images taken by cameras observing the same part of the scene. To start with, up to a thousand features on each image are detected and “translated” into visual words, according to a visual vocabulary trained on images from Mars. Then, starting from an arbitrary image, the next image is selected as the one that shares the highest number of visual words with the previous image.
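As a rough illustration of this first stage, the sketch below assumes each image has already been reduced to a set of visual-word IDs (the sets shown are invented) and greedily chains the images by the number of words they share; the real system's feature detection and vocabulary training are omitted.

def order_images(visual_words, start):
    """Greedily chain images, always picking the unused image that shares
    the most visual words with the one just placed."""
    order, remaining = [start], set(visual_words) - {start}
    while remaining:
        last = visual_words[order[-1]]
        nxt = max(remaining, key=lambda img: len(last & visual_words[img]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# hypothetical word sets for five unordered lander images
words = {
    "img_a": {3, 17, 42, 58}, "img_b": {3, 17, 99},
    "img_c": {42, 58, 77, 81}, "img_d": {99, 100, 101},
    "img_e": {77, 81, 200},
}
print(order_images(words, "img_a"))   # e.g. ['img_a', 'img_c', 'img_e', 'img_b', 'img_d']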
The second step of the pipeline, the so-called ‘structure-from-motion computation’, helps scientists determine the accurate camera positions and rotations in three dimensional space. Just five corresponding features are enough to obtain a relative camera pose between the two images that have been selected as sequential.
The last and most important step is the so-called ‘dense 3D model generation’ of the captured scene, which essentially creates and fuses the Martian surface depth maps. To do this, the model uses the disparities (parallaxes) present in images taken at two distinct camera positions, which were identified in the second step.
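The depth-from-disparity relation underlying this last stage can be written down directly. The sketch below assumes a simple rectified pinhole stereo model with invented focal length and baseline values, not the mission's actual camera calibration.

def depth_from_disparity(disparity_px, f_px=1000.0, baseline_m=0.15):
    # larger parallax between the two camera positions means a closer surface point
    return f_px * baseline_m / disparity_px

print(depth_from_disparity(25.0))   # about 6 m for these assumed values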
“The pipeline has already been used successfully to reconstruct a three dimensional model from nine images captured by the Phoenix Mars Lander, which were obtained just after performing some digging operation on the Mars surface,” said Dr Havlena.
“The challenge is now to reconstruct larger parts of the surface of the red planet, captured by the Mars Exploration Rovers Spirit and Opportunity,” concluded Dr Havlena.
Adapted from materials provided by Europlanet Media Centre, via AlphaGalileo.

giovedì 13 agosto 2009

Quantum Computing: From qubits to qudits, with five energy levels

ScienceDaily (Aug. 13, 2009) — Scientists at UC Santa Barbara have devised a new type of superconducting circuit that behaves quantum mechanically – but has up to five levels of energy instead of the usual two. The findings are published in the August 7 issue of Science.
These circuits act like artificial atoms in that they can only gain or lose energy in packets, or quanta, by jumping between discrete energy levels. "In our previous work, we focused on systems with just two energy levels, 'qubits,' because they are the quantum analog of 'bits,' which have two states, on and off," said Matthew Neeley, first author and a graduate student at UCSB.
He explained that in this work they operated a quantum circuit as a more complicated artificial atom with up to five energy levels. The generic term for such a system is "qudit," where 'd' refers to the number of energy levels –– in this case, 'd' equals five.
"This is the quantum analog of a switch that has several allowed positions, rather than just two," said Neeley. "Because it has more energy levels, the physics of a qudit is richer than for just a single qubit. This allows us to explore certain aspects of quantum mechanics that go beyond what can be observed with a qubit."
Just as bits are used as the fundamental building blocks of computers, qubits could one day be used as building blocks of a quantum computer, a device that exploits the laws of quantum mechanics to perform certain computations faster than can be done with classical bits alone. "Qudits can be used in quantum computers as well, and there are even cases where qudits could be used to speed up certain operations with a quantum computer," said Neeley. "Most research to date has focused on qubit systems, but we hope our experimental demonstration will motivate more effort on qudits, as an addition to the quantum information processing toolbox."
The senior co-author of the paper is John M. Martinis, professor of physics at UCSB. Other co-authors from UCSB are: Markus Ansmann, Radoslaw C. Bialczak, Max Hofheinz, Erik Lucero, Aaron D. O'Connell, Daniel Sank, Haohua Wang, James Wenner, and Andrew N. Cleland. Another co-author, Michael R. Geller, is from the University of Georgia.
Adapted from materials provided by University of California - Santa Barbara.

mercoledì 22 luglio 2009

This Article Will Self-destruct: Tool To Make Online Personal Data Vanish


ScienceDaily (July 22, 2009) — Computers have made it virtually impossible to leave the past behind. College Facebook posts or pictures can resurface during a job interview. A lost cell phone can expose personal photos or text messages. A legal investigation can subpoena the entire contents of a home or work computer, uncovering incriminating, inconvenient or just embarrassing details from the past.
Researchers at the University of Washington have developed a way to make such information expire. After a set time period, electronic communications such as e-mail, Facebook posts and chat messages would automatically self-destruct, becoming irretrievable from all Web sites, inboxes, outboxes, backup sites and home computers. Not even the sender could retrieve them.
"If you care about privacy, the Internet today is a very scary place," said UW computer scientist Tadayoshi Kohno. "If people understood the implications of where and how their e-mail is stored, they might be more careful or not use it as often."
The team of UW computer scientists developed a prototype system called Vanish that can place a time limit on text uploaded to any Web service through a Web browser. After a set time, text written using Vanish will, in essence, self-destruct. A paper about the project went public today and will be presented at the Usenix Security Symposium Aug. 10-14 in Montreal.
Co-authors on the paper are doctoral student Roxana Geambasu, assistant professor Tadayoshi Kohno, professor Hank Levy and undergraduate student Amit Levy, all with the UW's department of computer science and engineering. The research was funded by the National Science Foundation, the Alfred P. Sloan Foundation and Intel Corp.
"When you send out a sensitive e-mail to a few friends you have no idea where that e-mail is going to end up," Geambasu said. "For instance, your friend could lose her laptop or cell phone, her data could be exposed by malware or a hacker, or a subpoena could require your e-mail service to reveal your messages. If you want to ensure that your message never gets out, how do you do that?"
Many people believe that pressing the "delete" button will make their data go away.
"The reality is that many Web services archive data indefinitely, well after you've pressed delete," Geambasu said.
Simply encrypting the data can be risky in the long term, the researchers say. The data can be exposed years later, for example, by legal actions that force an individual or company to reveal the encryption key. Current trends in the computing and legal landscapes are making the problem more widespread.
"In today's world, private information is scattered all over the Internet, and we can't control the lifetime of that data," said Hank Levy. "And as we transition to a future based on cloud computing, where enormous, anonymous datacenters run the vast majority of our applications and store nearly all of our data, we will lose even more control."
The Vanish prototype washes away data using the natural turnover, called "churn," on large file-sharing systems known as peer-to-peer networks. For each message that it sends, Vanish creates a secret key, which it never reveals to the user, and then encrypts the message with that key. It then divides the key into dozens of pieces and sprinkles those pieces on random computers that belong to worldwide file-sharing networks, the same ones often used to share music or movie files. The file-sharing system constantly changes as computers join or leave the network, meaning that over time parts of the key become permanently inaccessible. Once enough key parts are lost, the original message can no longer be deciphered.
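The key-splitting step can be illustrated with a threshold secret-sharing sketch. This is not the Vanish implementation (Vanish pushes its shares into a running peer-to-peer index and also handles encrypting the message itself); the snippet below only shows, with invented parameters, how a key split into 20 Shamir shares is recoverable while at least 14 pieces remain reachable and is lost for good once churn erases the rest.

import os, random

PRIME = 2**521 - 1   # a prime field large enough to hold a 128-bit key

def split_secret(secret, n, k):
    # any k of the n shares reconstruct the secret; fewer reveal nothing
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover_secret(shares):
    # Lagrange interpolation at x = 0 over the shares that are still reachable
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = int.from_bytes(os.urandom(16), "big")   # the key the message was encrypted with
shares = split_secret(key, n=20, k=14)        # sprinkle 20 pieces across the network
assert recover_secret(random.sample(shares, 14)) == key   # enough pieces still reachable
# once churn erases more than 6 of the 20 pieces, the key, and the message, is gone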
In the current Vanish prototype, the network's computers purge their memories every eight hours. (An option on Vanish lets users keep their data for any multiple of eight hours.)
Unlike existing commercial encryption services, a message sent using Vanish is kept private by an inherent property of the decentralized file-sharing networks it uses.
"A major advantage of Vanish is that users don't need to trust us, or any service that we provide, to protect or delete the data," Geambasu says.
Researchers liken using Vanish to writing a message in the sand at low tide, where it can be read for only a few hours before the tide comes in and permanently washes it away. Erasing the data doesn't require any special action by the sender, the recipient or any third party service.
"Our goal was really to come up with a system where, through a property of nature, the message, or the data, disappears," Levy says.
Vanish was released today as a free, open-source tool that works with the Firefox browser. To work, both the sender and the recipient must have installed the tool. The sender then highlights any sensitive text entered into the browser and presses the "Vanish" button. The tool encrypts the information with a key unknown even to the sender.
That text can be read, for a limited time only, when the recipient highlights the text and presses the "Vanish" button to unscramble it. After eight hours the message will be impossible to unscramble and will remain gibberish forever.
Vanish works with any text entered into a Web browser: Web-based e-mail such as Hotmail, Yahoo and Gmail, Web chat, or the social networking sites MySpace and Facebook. The Vanish prototype now works only for text, but researchers said the same technique could work for any type of data, such as digital photos.
It is technically possible to save information sent with Vanish. A recipient could print e-mail and save it, or cut and paste unencrypted text into a word-processing document, or photograph an unscrambled message. Vanish is meant to protect communication between two trusted parties, researchers say.
"Today many people pick up the phone when they want to talk with a lawyer or have a private conversation," Kohno said. "But more and more communication is happening online. Vanish is designed to give people the same privacy for e-mail and the Web that they expect for a phone conversation."
The paper and research prototype are at http://vanish.cs.washington.edu.
Adapted from materials provided by University of Washington.

venerdì 17 luglio 2009

Program For Cyber Security 'Neighborhood Watch' Developed


ScienceDaily (July 16, 2009) — U.S. Department of Energy laboratories fight off millions of cyber attacks every year, but a near real-time dialog between these labs about this hostile activity has never existed – until now.
Scientists at DOE's Argonne National Laboratory have devised a program that allows cyber security defense systems to communicate when attacked and to transmit that information to cyber systems at other institutions, in the hope of strengthening the overall cyber security posture of the complex.
"The Federated Model for Cyber Security acts as a virtual neighborhood watch program. If one institution is attacked; secure and timely communication to others in the Federation will aide in protecting them from that same attack through active response," cyber security officer Michael Skwarek said.
Prior to the development of the Federated Model for Cyber Security, the exchange of hostile-activity information rested solely on the shoulders of human operators. In cyber attacks every second counts, and the more quickly such information can be securely shared, the better other institutions can be strengthened against similar attacks. With millions of cyber security probes a day, humans cannot keep up alone.
"This program addresses the need for the exchange of hostile activity information, with the goal of reducing the time to react across the complex. History has shown, hostile activity is often targeted at more than one location, and having our defenses ready and armed will assist greatly." Skwarek said.
Currently, the program is capable of transmitting information regarding hostile IP addresses and domain names, and it will soon be able to share hostile email addresses and web URLs with others in the Federation.
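The exchange itself can be pictured with a toy sketch. This is only an illustration of the "neighborhood watch" idea, not the Argonne system: the site names and IP address are invented, and the real Federation uses secure, authenticated transport rather than a shared in-memory list.

class Site:
    def __init__(self, name, federation):
        self.name, self.blocklist = name, set()
        federation.append(self)

    def observe_attack(self, indicator, federation):
        print(f"{self.name} saw hostile activity from {indicator}")
        for site in federation:              # near real-time exchange
            site.blocklist.add(indicator)    # every member arms its own defenses

federation = []
site_a, site_b, site_c = (Site(n, federation) for n in ("site-A", "site-B", "site-C"))
site_a.observe_attack("203.0.113.66", federation)
print(site_b.blocklist)   # {'203.0.113.66'}: site-B is protected before being probed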
The development of this program led to Skwarek along with Argonne's cyber security team members Matt Kwiatkowski, Tami Martin, Scott Pinkerton, Chris Poetzel, Gene Rackow and Conrad Zadlo winning the DOE's 2009 Cyber Security Innovation and Technology Achievement Award.
The Federated Model for Cyber Security has proved to be an important cyber security and communication tool. The private sector, as well as institutions with heavy collaborative efforts, could realize an operational gain by leveraging the power of sharing and learning from others about what they see and defend against on a daily basis.
Adapted from materials provided by DOE/Argonne National Laboratory.

martedì 14 luglio 2009

Tracking The Life And Death Of News


ScienceDaily (July 14, 2009) — As more and more news appears on the Internet as well as in print, it becomes possible to map the global flow of news by observing it online. Using this strategy, Cornell computer scientists have managed to track and analyze the "news cycle" -- the way stories rise and fall in popularity.
Jon Kleinberg, the Tisch University Professor of Computer Science at Cornell, postdoctoral researcher Jure Leskovec and graduate student Lars Backstrom tracked 1.6 million online news sites, including 20,000 mainstream media sites and a vast array of blogs, over the three-month period leading up to the 2008 presidential election -- a total of 90 million articles, one of the largest analyses anywhere of online news. They found a consistent rhythm as stories rose into prominence and then fell off over just a few days, with a "heartbeat" pattern of handoffs between blogs and mainstream media. In mainstream media, they found, a story rises to prominence slowly then dies quickly; in the blogosphere, stories rise in popularity very quickly but then stay around longer, as discussion goes back and forth. Eventually though, almost every story is pushed aside by something newer.
"The movement of news to the Internet makes it possible to quantify something that was otherwise very hard to measure -- the temporal dynamics of the news," said Kleinberg. "We want to understand the full news ecosystem, and online news is now an accurate enough reflection of the full ecosystem to make this possible. This is one [very early] step toward creating tools that would help people understand the news, where it's coming from and how it's arising from the confluence of many sources."
The researchers also say their work suggests an answer to a longstanding question: Is the "news cycle" just a way to describe our perception of what's going on in the media, or is it a real phenomenon that can be measured? They opt for the latter, and offer a mathematical explanation of how it works.
The research was presented at the Association for Computing Machinery's Special Interest Group on Knowledge Discovery and Data Mining (ACM SIGKDD) conference, held June 28-July 1 in Paris.
The ideal, Kleinberg said, would be to track "memes," or ideas, through cyberspace, but deciding what an article is about is still a major challenge for computing. The researchers sidestepped that obstacle by tracking quotations that appear in news stories, since quotes remain fairly consistent even though the overall story may be presented in very different ways by different writers.
Even quotes may change slightly or "mutate" as they pass from one article to another, so the researchers developed an algorithm that could identify and group similar but slightly different phrases. In simple terms, the computer identified short phrases that were part of longer phrases, using those connections to create "phrase clusters." Then they tracked the volume of posts in each phrase cluster over time. In the August and September data they found threads rising and falling on a more or less weekly basis, with major peaks corresponding to the Democratic and Republican conventions, the "lipstick on a pig" discussion, rising concern over the financial crisis and discussions of a bailout plan.
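In simplified form, the grouping works like the sketch below. The real algorithm builds a graph of phrase-containment links and partitions it, and the posts here are invented; the point is only that quotes which contain one another are pooled into one cluster whose volume is tallied per day.

from collections import defaultdict

posts = [
    ("2008-09-09", "you can put lipstick on a pig"),
    ("2008-09-09", "lipstick on a pig"),
    ("2008-09-10", "put lipstick on a pig"),
    ("2008-09-10", "fundamentals of our economy are strong"),
]

def find_cluster(phrase, clusters):
    # a short phrase that appears inside a longer one joins that cluster
    for rep in clusters:
        if phrase in rep or rep in phrase:
            return rep
    return phrase                      # otherwise it starts a new cluster

volume = defaultdict(lambda: defaultdict(int))   # cluster -> day -> number of posts
for day, phrase in posts:
    volume[find_cluster(phrase, volume)][day] += 1

for rep, per_day in volume.items():
    print(rep, dict(per_day))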
The slow rise of a new story in the mainstream, the researchers suggest, results from imitation -- as more sites carried a story, other sites were more likely to pick it up. But the life of a story is limited, as new stories quickly push out the old. A mathematical model based on the interaction of imitation and recency predicted the pattern fairly well, the researchers said, while predictions based on either imitation or recency alone couldn't come close.
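That interplay can be caricatured in a few lines. This is not the authors' model (their paper gives its precise form); the sketch below simply combines an imitation term, proportional to how many sites already carry a story, with a recency discount, and newer stories end up pushing out older ones.

import random
from collections import Counter

volume = Counter({"story-0": 1})        # how many sites carry each story
age = {"story-0": 0}
picks = []

for t in range(1, 200):
    if t % 25 == 0:                     # a fresh story appears now and then
        volume[f"story-{t}"], age[f"story-{t}"] = 1, 0
    # imitation: widely carried stories attract copies; recency: old ones fade
    weights = {s: volume[s] / (1 + age[s]) ** 2 for s in volume}
    pick = random.choices(list(weights), weights=list(weights.values()))[0]
    volume[pick] += 1
    picks.append(pick)
    for s in age:
        age[s] += 1

print("early attention:", Counter(picks[:25]).most_common(1))
print("late attention: ", Counter(picks[-25:]).most_common(1))   # a newer story has taken over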
Watching how stories moved between mainstream media and blogs revealed a sharp dip and rise the researchers described as a "heartbeat." When a story first appears, there is a small rise in activity in both spheres; as mainstream activity increases, the proportion blogs contribute becomes small; but soon the blog activity shoots up, peaking an average of 2.5 hours after the mainstream peak. Almost all stories started in the mainstream. Only 3.5 percent of the stories tracked appeared first dominantly in the blogosphere and then moved to the mainstream.
The mathematical model needs to be refined, the researchers said, and they suggested further study of how stories move between sites with opposing political orientation. "It will be useful to further understand the roles different participants play in the process," the researchers concluded, "as their collective behavior leads directly to the ways in which all of us experience news and its consequences."
Adapted from materials provided by Cornell University.

venerdì 10 luglio 2009

Quantum Computers And Tossing A Coin In The Microcosm


ScienceDaily (July 9, 2009) — When you toss a coin, you either get heads or tails. By contrast, things are not so definite at the microcosmic level. An atomic 'coin' can display a superposition of heads and tails when it has been thrown. However, this only happens if you do not look at the coin. If you do, it decides in favour of one of the two states. If you leave the decision of where a quantum particle should go to a coin like this, you get unusual effects. For the first time, physicists at the University of Bonn have demonstrated these effects in an experiment with caesium.
Let's assume we carried out the following experiment: we put a coin in the hand of a test person. We'll simply call this person Hans. Hans's task is now to toss the coin several times. Whenever the coin turns up 'heads', his task is to take a step to the right. By contrast, if it turns up 'tails', he takes a step to the left. After 10 throws we look where Hans is standing. Probably he won't have moved too far from his initial position, as 'heads' and 'tails' turn up more or less equally often. In order to walk 10 paces to the right, Hans would have to get 10 'heads' in succession. And that tends not to happen very often.
Now, we assume that Hans is a very patient person. He is so patient that he does this experiment 1000 times in succession. After each go, we record his position. When we display these results as a graph at the end, we get a typical bell curve. Hans very often ends up somewhere close to his starting position after 10 throws. By contrast, we seldom find him far to the left or right.
The experiment is called a 'random walk'. The phenomenon can be found in many areas of modern science, e.g. as Brownian motion. In the world of quantum physics, there is an analogy with intriguing new properties, the 'quantum walk'. Up to now, this was more or less a theoretical construct, but physicists at the University of Bonn have now actually carried out this kind of 'quantum walk'.
A single caesium atom held in a kind of tweezers composed of laser beams served as random walker and coin at the same time. Atoms can adopt different quantum mechanical states, similar to the heads and tails of a coin. Yet at the microcosmic level everything is a little more complicated, because quantum particles can exist in a superposition of different states. Basically, in that case 'a bit of heads' and 'a bit of tails' are facing upwards at the same time; physicists call this a superposition.
Using two conveyor belts made of laser beams, the Bonn physicists pulled their caesium atom in two opposite directions, the 'heads' part to the right, the 'tails' part to the left. 'This way we were able to move the two states apart by fractions of a thousandth of a millimetre,' Dr. Artur Widera from the Bonn Institute of Applied Physics explains. After that, the scientists 'threw the dice once more' and put each of the two components into a superposition of heads and tails again.
After several steps of this 'quantum walk', a caesium atom that has been stretched apart like this is basically everywhere. Only when you measure its position does it 'decide' at which position of the 'catwalk' it wants to turn up. The probability of where it turns up is predominantly determined by a second effect of quantum mechanics: the two parts of the atom can reinforce or cancel each other out. As in the case of light, physicists call this interference.
As in the example of Hans the coin thrower, you can now carry out this 'quantum walk' many times. You then also get a curve which reflects the atom's probability of presence. And that is precisely what the physicists from Bonn measured. 'Our curve is clearly different from the results obtained in classical random walks. It does not have its maximum at the centre, but at the edges,' Artur Widera's colleague Michal Karski points out. 'This is exactly what we expect from theoretical considerations and what makes the quantum walk so attractive for applications.' For comparison the scientists destroyed the quantum mechanical superposition after every single 'throw of the coin'. Then the 'quantum walk' becomes a 'random walk', and the caesium atom behaves like Hans. 'And that is exactly the effect we see,' Michal Karski says.
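The difference between the two curves is easy to reproduce numerically. The short simulation below is not the Bonn experiment but a textbook discrete-time walk: ten classical coin tosses give a bell curve centred on the starting point, while ten steps of a Hadamard 'quantum coin' walk pile the probability up away from the centre.

import numpy as np
from math import comb

steps = 10
positions = 2 * steps + 1                       # displacements from -steps to +steps

# classical walk: binomial distribution of Hans's final position
classical = np.zeros(positions)
for heads in range(steps + 1):
    classical[2 * heads] = comb(steps, heads) / 2**steps   # index = displacement + steps

# quantum walk: complex amplitudes indexed by (position, coin state)
psi = np.zeros((positions, 2), dtype=complex)
psi[steps, 0] = 1.0                             # start at the centre, coin showing 'heads'
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # the Hadamard 'coin toss'
for _ in range(steps):
    psi = psi @ H.T                             # superpose heads and tails
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]                # the 'heads' component steps right
    shifted[:-1, 1] = psi[1:, 1]                # the 'tails' component steps left
    psi = shifted
quantum = (np.abs(psi) ** 2).sum(axis=1)        # probability of finding the atom

print("classical curve peaks at displacement", int(classical.argmax()) - steps)
print("quantum curve peaks at displacement", int(quantum.argmax()) - steps)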
Professor Dieter Meschede's group has been working on the development of so-called quantum computers now for many years. With the 'quantum walk' the team has now achieved a further seminal step on this path. 'With the effect we have demonstrated, entirely new algorithms can be implemented,' Artur Widera explains. Search processes are one example. Today, if you want to trace a single one in a row of zeros, you have to check all the digits individually. The time taken therefore increases linearly with the number of digits. By contrast, using the 'quantum walk' algorithm the random walker can search in many different places simultaneously. The search for the proverbial needle in a haystack would thus be greatly speeded up.
Their research will be published in the July 10 issue of the scientific journal Science.
Adapted from materials provided by University of Bonn.

martedì 7 luglio 2009

DIY Production In 'Second Life' Factory


ScienceDaily (July 7, 2009) — Anyone who wants to can now produce their own vehicle in a factory on the “Second Life” Internet platform. They can program the industrial robots, and transport and assemble the individual parts themselves. Learning platforms provide relevant background information.
In the “transparent factory”, car enthusiasts can watch vehicles being assembled part by part, and a new system set up by researchers of the Fraunhofer Institute for Manufacturing Engineering and Automation IPA even enables users to try their own hand at producing a quad bike, a four-wheeled motorbike. They can switch on conveyor belts, program industrial robots, and paint the frame themselves. At the end, they can zoom out of the factory hall with their finished product without paying a single cent. How is this possible? Because the factory does not exist in the real world but on the Internet platform of “Second Life”, a virtual world through which users can move in the form of a virtual figure known as an “avatar”.
“With the ‘factory of eMotions’, we want to familiarize people with a modern, technically advanced factory. We also want to demonstrate how the latest media can set things in motion,” says IPA scientist Stefan Seitz. “Second Life has grown steadily: While in 2007, between 20,000 and 40,000 people were simultaneously online at any given time, this number has now risen to between 50,000 and 80,000.”
In the factory, users first of all indicate which quad model they would like to produce. Powerful or fuel-saving? Black, silver or red? What type of wheel rims? They can choose from a variety of models as they please. Once their avatar has made a choice, production can begin. The parts list is sent out, and all components are manufactured, assembled and subjected to a quality inspection. The avatar can watch the production process and interact at certain stages. Learning platforms located at various points in the factory hall provide users with relevant background information. How is the production process controlled? How does a press work?
“The main challenge lay in reproducing the control logic for production – in other words, teaching the system how to produce a part on Machine A, transport it to Machine B and mount it there. Until now, the ‘Second Life’ platform has offered no support for this,” says Seitz. The researchers have developed a modular system which also enables any other product to be made. Industrial companies and private users can use the building blocks to set up their own virtual factories. The scientists have even integrated a speech recognition system, so the machines and robots can also be controlled by telephone. The factory will be revealed to the public in early July on the occasion of the IPA’s 50th anniversary.
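The control logic Seitz describes amounts to chaining small, reusable production steps. The lines below are only an illustration of that modular idea, with invented machine and part names, not the Fraunhofer building blocks themselves.

def produce(part, machine):      print(f"{machine}: producing {part}")
def transport(part, src, dst):   print(f"conveyor: moving {part} from {src} to {dst}")
def mount(part, onto, machine):  print(f"{machine}: mounting {part} onto {onto}")

# one build sequence chained from the reusable blocks above
produce("quad frame", "Machine A")
transport("quad frame", "Machine A", "Machine B")
mount("quad frame", "chassis", "Machine B")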
Adapted from materials provided by Fraunhofer-Gesellschaft.

Physicists Find Way To Control Individual Bits In Quantum Computers.


ScienceDaily (July 7, 2009) — Physicists at the National Institute of Standards and Technology (NIST) have overcome a hurdle in quantum computer development, having devised a viable way to manipulate a single "bit" in a quantum processor without disturbing the information stored in its neighbors. The approach, which makes novel use of polarized light to create "effective" magnetic fields, could bring the long-sought computers a step closer to reality.
A great challenge in creating a working quantum computer is maintaining control over the carriers of information, the "switches" in a quantum processor while isolating them from the environment. These quantum bits, or "qubits," have the uncanny ability to exist in both "on" and "off" positions simultaneously, giving quantum computers the power to solve problems conventional computers find intractable – such as breaking complex cryptographic codes.
One approach to quantum computer development aims to use a single isolated rubidium atom as a qubit. Each such rubidium atom can take on any of eight different energy states, so the design goal is to choose two of these energy states to represent the on and off positions. Ideally, these two states should be completely insensitive to stray magnetic fields that can destroy the qubit's ability to be simultaneously on and off, ruining calculations. However, choosing such "field-insensitive" states also makes the qubits less sensitive to those magnetic fields used intentionally to select and manipulate them. "It's a bit of a catch-22," says NIST's Nathan Lundblad. "The more sensitive to individual control you make the qubits, the more difficult it becomes to make them work properly."
To solve the problem of using magnetic fields to control the individual atoms while keeping stray fields at bay, the NIST team used two pairs of energy states within the same atom. Each pair is best suited to a different task: One pair is used as a "memory" qubit for storing information, while the second "working" pair comprises a qubit to be used for computation. While each pair of states is field-insensitive, transitions between the memory and working states are sensitive, and amenable to field control. When a memory qubit needs to perform a computation, a magnetic field can make it change hats. And it can do this without disturbing nearby memory qubits.
The NIST team demonstrated this approach in an array of atoms grouped into pairs, using the technique to address one member of each pair individually. Grouping the atoms into pairs, Lundblad says, allows the team to simplify the problem from selecting one qubit out of many to selecting one out of two – which, as they show in their paper, can be done by creating an effective magnetic field, not with electric current as is ordinarily done, but with a beam of polarized light.
The polarized-light technique, which the NIST team developed, can be extended to select specific qubits out of a large group, making it useful for addressing individual qubits in a quantum processor without affecting those nearby. "If a working quantum computer is ever to be built," Lundblad says, "these problems need to be addressed, and we think we've made a good case for how to do it." But, he adds, the long-term challenge to quantum computing remains: integrating all of the required ingredients into a single apparatus with many qubits.
Journal reference:
N. Lundblad, J.M. Obrecht, I.B. Spielman, and J.V. Porto. Field-sensitive addressing and control of field-insensitive neutral-atom qubits. Nature Physics, July 5, 2009
Adapted from materials provided by National Institute of Standards and Technology (NIST).

venerdì 3 luglio 2009

Computer Scientists Develop Model For Studying Arrangements Of Tissue Networks By Cell Division


ScienceDaily (July 3, 2009) — Computer scientists at Harvard have developed a framework for studying the arrangement of tissue networks created by cell division across a diverse set of organisms, including fruit flies, tadpoles, and plants.
The finding, published in the June 2009 issue of PLoS Computational Biology, could lead to insights about how multicellular systems achieve (or fail to achieve) robustness from the seemingly random behavior of groups of cells and provide a roadmap for researchers seeking to artificially emulate complex biological behavior.
"We developed a model that allows us to study the topologies of tissues, or how cells connect to each other, and understand how that connectivity network is created through generations of cell division," says senior author Radhika Nagpal, Assistant Professor of Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and a core faculty member of the Wyss Institute for Biologically Inspired Engineering. "Given a cell division strategy, even if cells divide at random, very predictable 'signature' features emerge at the tissue level."
Using their computational model, Nagpal and her collaborators demonstrated that the regularity of the tissue, such as the percentage of hexagons and the overall cell shape distribution, can act as an indicator for inferring properties about the cell division mechanism itself. In the epithelial tissues of growing organisms, from fruit flies to humans, the ability to cope with often unpredictable variations (referred to as robustness) is critical for normal development. Rapid growth, entailing large amounts of cell division, must be balanced with the proper regulation of overall tissue and organ architecture.
"Even with modern imaging methods, we can rarely directly 'ask' the cell how it decided upon which way to divide. The computational tool allows us to generate and eliminate hypotheses about cell division. Looking at the final assembled tissue gives us a clue about what assembly process was used," explains Nagpal.
The model also sheds light on a prior discovery made by the team: that many proliferating epithelia, from plants to frogs, show a nearly identical cell shape distribution. While the reasons are not clear, the authors suggest that the high regularity observed in nature requires a strong correlation between how neighboring cells divide. While plants and fruit flies, for example, seem to have conserved cell shape distributions, the two organisms have, based on the computational and experimental evidence, evolved distinct ways of achieving such a pattern.
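To give a flavour of what such a topological model looks like, here is a deliberately crude sketch, not the published framework: the division rule and the neighbour bookkeeping are simplified assumptions. Each cell is reduced to its neighbour count, random divisions run for a few generations, and the resulting polygon-class distribution, including the fraction of hexagons, is read off.

import random
from collections import Counter

cells = [6] * 32                              # start from a patch of hexagonal cells

for _ in range(5):                            # generations of synchronous division
    next_gen = []
    for n in cells:
        pieces = n + 2                        # the two cut walls become four segments
        k = random.randint(2, pieces - 2)     # split the segments between the daughters
        next_gen += [k + 1, pieces - k + 1]   # +1 each for the new shared wall
    # crude stand-in for the neighbours whose walls were cut gaining one side each
    for _ in range(len(cells) * 2):
        next_gen[random.randrange(len(next_gen))] += 1
    cells = next_gen

dist = Counter(cells)
print({sides: round(count / len(cells), 2) for sides, count in sorted(dist.items())})
print("mean sides per cell:", round(sum(cells) / len(cells), 2))   # stays close to six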
"Ultimately, the work offers a beautiful example of the way biological development can take advantage of very local and often random processes to create large-scale robust systems. Cells react to local context but still create organisms with incredible global predictability," says Nagpal.
In the future, the team plans to use their approach to detect and study various mutations that adversely affect cell division process in epithelial tissues. Epithelial tissues are common throughout animals and form important structures in humans from skin to the inner lining of organs. Deviations from normal division can result in abnormal growth during early development and to the formation of cancers in adults.
"One day we may even be able to use our model to help researchers understand other kinds of natural cellular networks, from tissues to geological crack formations, and, by taking inspiration from biology, design more robust computer networks," adds Nagpal.
Nagpal's collaborators included Ankit B. Patel and William T. Gibson, both at Harvard, and Dr. Matthew C. Gibson at Stower's Institute.
Adapted from materials provided by Harvard University, via EurekAlert!, a service of AAAS.

giovedì 2 luglio 2009

Optical Computer Closer: Optical Transistor Made From Single Molecule


ScienceDaily (July 2, 2009) — ETH Zurich researchers have successfully created an optical transistor from a single molecule. This has brought them one step closer to an optical computer.
Internet connections and computers need to be ever faster and more powerful nowadays. However, conventional central processing units (CPUs) limit the performance of computers, for example because they produce an enormous amount of heat. The millions of transistors that switch and amplify the electronic signals in the CPUs are responsible for this. One square centimeter of CPU can emit up to 125 watts of heat, which is more than ten times as much as a square centimeter of an electric hotplate.
Photons instead of electrons
This is why scientists have been trying for some time to find ways to produce integrated circuits that operate on the basis of photons instead of electrons. The reason is that photons do not only generate much less heat than electrons, but they also enable considerably higher data transfer rates.
Although a large part of telecommunications engineering nowadays is based on optical signal transmission, the necessary encoding of the information is generated using electronically controlled switches. A compact optical transistor is still a long way off. Vahid Sandoghdar, Professor at the Laboratory of Physical Chemistry of ETH Zurich, explains that, “Comparing the current state of this technology with that of electronics, we are somewhat closer to the vacuum tube amplifiers that were around in the fifties than we are to today’s integrated circuits.”
However, his research group has now achieved a decisive breakthrough by successfully creating an optical transistor with a single molecule. For this, they have made use of the fact that a molecule’s energy is quantized: when laser light strikes a molecule that is in its ground state, the light is absorbed. As a result, the laser beam is quenched. Conversely, it is possible to release the absorbed energy again in a targeted way with a second light beam. This occurs because the beam changes the molecule’s quantum state, with the result that the light beam is amplified. This so-called stimulated emission, which Albert Einstein described over 90 years ago, also forms the basis for the principle of the laser.
Focusing on a nano scale
Jaesuk Hwang, first author of the study and a scientific member of Sandoghdar’s nano-optics group, explains that, “Amplification in a conventional laser is achieved by an enormous number of molecules.” By focusing a laser beam on only a single tiny molecule, the ETH Zurich scientists have now been able to generate stimulated emission using just one molecule. They were helped in this by the fact that, at low temperatures, molecules seem to increase their apparent surface area for interaction with light. The researchers therefore needed to cool the molecule down to minus 272 degrees Celsius (minus 457.6 degrees Fahrenheit), i.e. one degree above absolute zero. In this case, the enlarged surface area corresponded approximately to the diameter of the focused laser beam.
Switching light with light
By using one laser beam to prepare the quantum state of a single molecule in a controlled fashion, scientists could significantly attenuate or amplify a second laser beam. This mode of operation is identical to that of a conventional transistor, in which electrical potential can be used to modulate a second signal.
Thus component parts such as the new single molecule transistor may also pave the way for a quantum computer. Sandoghdar says, “Many more years of research will still be needed before photons replace electrons in transistors. In the meantime, scientists will learn to manipulate and control quantum systems in a targeted way, moving them closer to the dream of a quantum computer.”
Journal reference:
J. Hwang, M. Pototschnig, R. Lettow, G. Zumofen, A. Renn, S. Götzinger, V. Sandoghdar. A single-molecule optical transistor. Nature, 460, 76-80. DOI: 10.1038/nature08134
Adapted from materials provided by ETH Zurich.

mercoledì 1 luglio 2009

Quantum Communications One Step Closer: Novel Ion Trap For Sensing Force And Light Developed


ScienceDaily (July 1, 2009) — Miniature devices for trapping ions (electrically charged atoms) are common components in atomic clocks and quantum computing research. Now, a novel ion trap geometry demonstrated at the National Institute of Standards and Technology (NIST) could usher in a new generation of applications because the device holds promise as a stylus for sensing very small forces or as an interface for efficient transfer of individual light particles for quantum communications.
The "stylus trap," built by physicists from NIST and Germany's University of Erlangen-Nuremberg, is described in Nature Physics. It uses fairly standard techniques to cool ions with laser light and trap them with electromagnetic fields. But whereas in conventional ion traps, the ions are surrounded by the trapping electrodes, in the stylus trap a single ion is captured above the tip of a set of steel electrodes, forming a point-like probe. The open trap geometry allows unprecedented access to the trapped ion, and the electrodes can be maneuvered close to surfaces. The researchers theoretically modeled and then built several different versions of the trap and characterized them using single magnesium ions.
The new trap, if used to measure forces with the ion as a stylus probe tip, is about one million times more sensitive than an atomic force microscope using a cantilever as a sensor because the ion is lighter in mass and reacts more strongly to small forces. In addition, ions offer combined sensitivity to both electric and magnetic fields or other force fields, producing a more versatile sensor than, for example, neutral atoms or quantum dots. By either scanning the ion trap near a surface or moving a sample near the trap, a user could map out the near-surface electric and magnetic fields. The ion is extremely sensitive to electric fields oscillating at between approximately 100 kilohertz and 10 megahertz.
The new trap also might be placed in the focus of a parabolic (cone-shaped) mirror so that light beams could be focused directly on the ion. Under the right conditions, single photons, particles of light, could be transferred between an optical fiber and the single ion with close to 95 percent efficiency. Efficient atom-fiber interfaces are crucial in long-distance quantum key distribution (QKD), the best method known for protecting the privacy of a communications channel. In quantum computing research, fluorescent light emitted by ions could be collected with similar efficiency as a read-out signal. The new trap also could be used to compare heating rates of different electrode surfaces, a rapid approach to investigating a long-standing problem in the design of ion-trap quantum computers.
Research on the stylus trap was supported by the Intelligence Advanced Research Projects Activity.
Journal reference:
R. Maiwald, D. Leibfried, J. Britton, J.C. Bergquist, G. Leuchs, and D.J. Wineland. Stylus ion trap for enhanced access and sensing. Nature Physics, Online June 28
Adapted from materials provided by National Institute of Standards and Technology.

venerdì 19 giugno 2009

Sunspots Revealed In Striking Detail By Supercomputers


ScienceDaily (June 18, 2009) — In a breakthrough that will help scientists unlock mysteries of the Sun and its impacts on Earth, an international team of scientists led by the National Center for Atmospheric Research (NCAR) has created the first-ever comprehensive computer model of sunspots. The resulting visuals capture both scientific detail and remarkable beauty.
The high-resolution simulations of sunspot pairs open the way for researchers to learn more about the vast mysterious dark patches on the Sun's surface. Sunspots are associated with massive ejections of charged plasma that can cause geomagnetic storms and disrupt communications and navigational systems. They also contribute to variations in overall solar output, which can affect weather on Earth and exert a subtle influence on climate patterns.
The research, by scientists at NCAR and the Max Planck Institute for Solar System Research (MPS) in Germany, is being published June 18 in Science Express.
"This is the first time we have a model of an entire sunspot," says lead author Matthias Rempel, a scientist at NCAR's High Altitude Observatory. "If you want to understand all the drivers of Earth's atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the Sun as well as connections between solar output and Earth's atmosphere."
Ever since outward flows from the center of sunspots were discovered 100 years ago, scientists have worked toward explaining the complex structure of sunspots, whose number peaks and wanes during the 11-year solar cycle. Sunspots encompass intense magnetic activity that is associated with solar flares and massive ejections of plasma that can buffet Earth's atmosphere. The resulting damage to power grids, satellites, and other sensitive technological systems takes an economic toll on a rising number of industries.
Creating such detailed simulations would not have been possible even as recently as a few years ago, before the latest generation of supercomputers and a growing array of instruments to observe the Sun. The model enables scientists to capture the convective flow and movement of energy that underlie the sunspots, which is not directly detectable by instruments.
The work was supported by the National Science Foundation, NCAR's sponsor. The research team improved a computer model, developed at MPS, that built upon numerical codes for magnetized fluids that had been created at the University of Chicago.
Computer model provides a unified physical explanation
The new simulations capture pairs of sunspots with opposite polarity. In striking detail, they reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions.
The model suggests that the magnetic fields within sunspots need to be inclined in certain directions in order to create such complex structures. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that is the consequence of convection in a magnetic field with varying properties.
The simulations can help scientists decipher the mysterious, subsurface forces in the Sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.
Supercomputing at 76 trillion calculations per second
To create the model, the research team designed a virtual, three-dimensional domain that simulates an area on the Sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth - an expanse as long as eight times Earth's diameter and as deep as Earth's radius. The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR's new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second.
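A quick back-of-the-envelope check (assuming a roughly uniform spacing of about 16 miles, in the middle of the quoted 10-to-20-mile range) shows that the domain size and grid spacing quoted above are indeed consistent with roughly 1.8 billion points.

nx, ny, nz = 31_000 / 16, 62_000 / 16, 3_700 / 16
print(f"{nx * ny * nz:.1e} grid points")   # about 1.7e9, close to the quoted 1.8 billion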
The work drew on increasingly detailed observations from a network of ground- and space-based instruments to verify that the model captured sunspots realistically.
The new model is far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region. The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. They can refine the model by placing the grid points even closer together, but that would require more computing power than is currently available.
"Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the Sun," says Michael Knoelker, director of NCAR's High Altitude Observatory and a co-author of the paper. "With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the Sun's surface."
The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation.
Adapted from materials provided by National Center for Atmospheric Research/University Corporation for Atmospheric Research.

Human Eye Inspires Advance In Computer Vision From Boston College Researchers


ScienceDaily (June 18, 2009) — Inspired by the behavior of the human eye, Boston College computer scientists have developed a technique that lets computers see objects as fleeting as a butterfly or tropical fish with nearly double the accuracy and 10 times the speed of earlier methods.
The linear solution to one of the most vexing challenges to advancing computer vision has direct applications in the fields of action and object recognition, surveillance, wide-base stereo microscopy and three-dimensional shape reconstruction, according to the researchers, who will report on their advance at the upcoming annual IEEE meeting on computer vision.
BC computer scientists Hao Jiang and Stella X. Yu developed a novel solution based on linear algorithms to streamline the computer's work. Previously, computer vision relied on software that captured the live image and then hunted through millions of possible object configurations to find a match. Further compounding the challenge, even more images needed to be searched as objects moved, altering scale and orientation.
Rather than combing through the image bank – a time- and memory-consuming computing task – Jiang and Yu turned to the mechanics of the human eye to give computers better vision.
"When the human eye searches for an object it looks globally for the rough location, size and orientation of the object. Then it zeros in on the details," said Jiang, an assistant professor of computer science. "Our method behaves in a similar fashion, using a linear approximation to explore the search space globally and quickly; then it works to identify the moving object by frequently updating trust search regions."
Trust search regions act as visual touchstones the computer returns to again and again. Jiang and Yu's solution focuses on the mathematically generated template of an image, which looks like a constellation when lines are drawn to connect the stars. Using the researchers' new algorithms, computer software identifies an object using the template of a trust search region. The program then adjusts the trust search regions as the object moves and finds its mathematical matches, relaying that shifting image to a memory bank or a computer screen to record or display the object.
Jiang says using linear approximation in a sequence of trust regions enables the new program to maintain spatial consistency as an object moves and reduces the number of variables that need to be optimized from several million to just a few hundred. That increased the speed of image matching 10 times over compared with previous methods, he said.
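The toy sketch below is not the authors' algorithm, but it illustrates in one dimension the general idea the article describes: a cheap global scan to localize the object roughly, followed by repeatedly shrinking trust regions around the best match. All signals and parameters in it are invented for illustration.

import numpy as np

# Toy illustration (not the BC method, which works on 2-D deformable templates):
# find where a short template best matches a long noisy 1-D signal by scanning
# coarsely over the whole signal, then repeatedly shrinking a "trust region"
# around the best match and scanning it more finely.
rng = np.random.default_rng(0)
x = np.arange(200)
template = np.exp(-0.5 * ((x - 100) / 40.0) ** 2)   # a smooth bump to search for

signal = rng.normal(0.0, 0.3, 5000)
true_pos = 3210
signal[true_pos:true_pos + 200] += template          # hide the bump in noise

def cost(pos):
    """Sum of squared differences between the template and a signal window."""
    window = signal[pos:pos + len(template)]
    return float(np.sum((window - template) ** 2))

lo, hi = 0, len(signal) - len(template)              # trust region = whole signal
for _ in range(6):
    candidates = np.linspace(lo, hi, 40).astype(int) # coarse scan of the region
    best = min(candidates, key=cost)
    half = max((hi - lo) // 8, 3)                    # then shrink the region around it
    lo, hi = max(best - half, 0), min(best + half, len(signal) - len(template))

print("estimated position:", int(best), "true position:", true_pos)
# The coarse scan finds the rough location; the shrinking trust regions refine it
# to within a couple of samples, without ever searching every position exhaustively.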
The researchers tested the software on a variety of images and videos – from a butterfly to a stuffed Teddy Bear – and report achieving a 95 percent detection rate at a fraction of the complexity. Previous so-called "greedy" methods of search and match achieved a detection rate of approximately 50 percent, Jiang said.
Jiang will present the team's findings at the IEEE Conference on Computer Vision and Pattern Recognition 2009, which takes place June 20-25 in Miami.
Adapted from materials provided by Boston College, via EurekAlert!, a service of AAAS.

Hybrid System Of Human-Machine Interaction Created


ScienceDaily (June 17, 2009) — Scientists at FAU have created a "hybrid" system to examine real-time interactions between humans and machines (virtual partners). By pitting human against machine, they open up the possibility of exploring and understanding a wide variety of interactions between minds and machines, and establishing the first step toward a much friendlier union of man and machine, and perhaps even creating a different kind of machine altogether.
For more than 25 years, scientists in the Center for Complex Systems and Brain Sciences (CCSBS) in Florida Atlantic University’s Charles E. Schmidt College of Science, and others around the world, have been trying to decipher the laws of coordinated behavior called “coordination dynamics”.
Unlike the laws of motion of physical bodies, the equations of coordination dynamics describe how the coordination states of a system evolve over time, as observed through special quantities called collective variables. These collective variables typically span the interaction of organism and environment. Imagine a machine whose behavior is based on the very equations that are supposed to govern human coordination. Then imagine a human interacting with such a machine whereby the human can modify the behavior of the machine and the machine can modify the behavior of the human.
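The article does not write these equations out, but perhaps the best-known law of coordination dynamics from this research programme is the Haken-Kelso-Bunz (HKB) equation for the relative phase between two rhythmically moving fingers or limbs. The minimal sketch below integrates it numerically with illustrative parameter values (assumptions, not numbers from the study; a frequency-detuning term present in the full model is set to zero) to show the two coordination patterns, in-phase and anti-phase, that the finger task described below builds on.

import math

# Illustrative integration of the HKB equation of coordination dynamics:
#     dphi/dt = -a*sin(phi) - 2*b*sin(2*phi)
# where phi is the relative phase between two rhythmically moving fingers.
# Parameter values are assumptions chosen only to show the qualitative behaviour.
def settle(phi0, a, b, dt=0.01, steps=5000):
    """Integrate the HKB equation from initial relative phase phi0; return |phi| wrapped."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return round(abs((phi + math.pi) % (2 * math.pi) - math.pi), 2)

starts = [0.5, 1.5, 2.5, -2.0]
print("slow movement (a=1, b=1):  ", [settle(p, 1.0, 1.0) for p in starts])
print("fast movement (a=1, b=0.2):", [settle(p, 1.0, 0.2) for p in starts])
# With b/a large, both in-phase (phi ~ 0) and anti-phase (phi ~ 3.14) attract;
# with b/a small, only in-phase survives -- the classic HKB phase transition.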
In a groundbreaking study published in the June 3 issue of PLoS One and titled “Virtual Partner Interaction (VPI): exploring novel behaviors via coordination dynamics,” an interdisciplinary group of scientists in the CCSBS created VPI, a hybrid system of a human interacting with a machine. These scientists placed the equations of human coordination dynamics into the machine and studied real-time interactions between the human and virtual partners. Their findings open up the possibility of exploring and understanding a wide variety of interactions between minds and machines. VPI may be the first step toward establishing a much friendlier union of man and machine, and perhaps even creating a different kind of machine altogether.
“With VPI, a human and a ‘virtual partner’ are reciprocally coupled in real-time,” said Dr. J. A. Scott Kelso, the Glenwood and Martha Creech Eminent Scholar in Science at FAU and the lead author of the study. “The human acquires information about his partner’s behavior through perception, and the virtual partner continuously detects the human’s behavior through the input of sensors. Our approach is analogous to the dynamic clamp used to study the dynamics of interactions between neurons, but now scaled up to the level of behaving humans.”
In this first-ever study of VPI, machine and human behaviors were chosen to be quite simple. Both partners were tasked to coordinate finger movements with one another. The human executed the task with the intention of performing in-phase coordination with the machine, thereby trying to synchronize his or her flexion and extension movements with those of the virtual partner.
The machine, on the other hand, executed the task with the competing goal of performing anti-phase coordination with the human, thereby trying to extend its finger when the human flexed and vice versa. Pitting machine against human through opposing task demands was a way the scientists chose to enhance the formation of emergent behavior, and also allowed them to examine each partner’s individual contribution to the coupled behavior. An intriguing outcome of the experiments was that human subjects ascribed intentions to the machine, reporting that it was “messing” with them.
“The symmetry between the human and the machine, and the fact that they carry the same laws of coordination dynamics, is a key to this novel scientific framework,” said co-author Dr. Gonzalo de Guzman, a physicist and research associate professor at the FAU center. “The design of the virtual partner mirrors the equations of motion of the human neurobehavioral system. The laws obtained from accumulated studies describe how the parts of the human body and brain self-organize, and address the issue of self-reference, a condition leading to complexity.”
One ready application of VPI is the study of the dynamics of complex brain processes such as those involved in social behavior. The extended parameter range opens up the possibility of systematically driving functional process of the brain (neuromarkers) to better understand their roles. The scientists in this study anticipate that just as many human skills are acquired by observing other human beings; human and machine will learn novel patterns of behavior by interacting with each other.
“Interactions with ever proliferating technological devices often place high skill demands on users who have little time to develop these skills,” said Kelso. “The opportunity presented through VPI is that equally useful and informative new behaviors may be uncovered despite the built-in asymmetry of the human-machine interaction.”
While stable and intermittent coordination behaviors emerged that had previously been observed in ordinary human social interactions, the scientists also discovered novel behaviors or strategies that have never previously been observed in human social behavior. The emergence of such novel behaviors demonstrates the scientific potential of the VPI human-machine framework.
Modifying the dynamics of the virtual partner with the purpose of inducing a desired human behavior, such as learning a new skill, or as a tool for therapy and rehabilitation, is among several applications of VPI.
“The integration of complexity into the behavioral and neural sciences has just begun,” said Dr. Emmanuelle Tognoli, research assistant professor in FAU’s CCSBS and co-author of the study. “VPI is a move away from simple protocols in which systems are ‘poked’ by virtue of ‘stimuli’ to understanding more complex, reciprocally connected systems where meaningful interactions occur.”
Research for this study was supported by the National Science Foundation program “Human and Social Dynamics,” the National Institute of Mental Health’s “Innovations Award,” “Basic and Translational Research Opportunities in the Social Neuroscience of Mental Health,” and the Office of Naval Research Code 30. Kelso’s research is also supported by the Pierre de Fermat Chaire d’Excellence and Tognoli’s research is supported by the Davimos Family Endowment for Excellence in Science.
Adapted from materials provided by Florida Atlantic University, via Newswise.

Friday, June 5, 2009

Endless Original Music: Computer Program Creates Music Based On Emotions


ScienceDaily (June 2, 2009) — A group of researchers from the University of Granada (UGR) has developed Inmamusys, a software program that can create music in response to emotions that arise in the listener. By using artificial intelligence (AI) techniques, the program enables original, copyright-free and emotion-inspiring music to be played continuously.
UGR researchers Miguel Delgado, Waldo Fajardo and Miguel Molina decided to design a software program that would enable a person who knew nothing about composition to create music. The system they devised, using AI, is called Inmamusys, an acronym for Intelligent Multiagent Music System, and is able to compose and play music in real time.
If successful, this prototype, which was described recently in the journal Expert Systems with Applications, could bring about great changes to the intrusive and repetitive canned music played in public places.
Miguel Molina, lead author of the study, says that while the repertoire of such canned music is very limited, the new invention can be used to create a pleasant, non-repetitive musical environment for anyone who has to be within earshot throughout the day.
Everyone's ears have suffered the effects of repetitively-played canned music, be it in workplaces, hospital environments or during phone calls made to directory inquiries numbers. On this basis, the research team decided that it would be "very interesting to design and build an intelligent system able to generate music automatically, ensuring the correct degree of emotiveness (in order to manage the environment created) and originality (guaranteeing that the tunes composed are not repeated, and are original and endless)."
Inmamusys has the necessary knowledge to compose emotive music through the use of AI techniques. In designing and developing the system, the researchers worked on the abstract representation of the concepts necessary to deal with emotions and feelings. To achieve this, Molina says, "we designed a modular system that includes, among other things, a two-level multiagent architecture."
A survey was used to evaluate the system, and the results show that users are able to identify the type of music composed by the computer. A person with no musical knowledge whatsoever can use this artificial composer, because the user need do nothing more than decide on the type of music.
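Purely as a hypothetical toy, and not the Inmamusys architecture, the sketch below shows what "deciding on the type of music" and receiving an endless, non-repeating stream of notes could look like; every table, name, and parameter in it is invented for illustration.

import random

# Hypothetical toy, NOT the Inmamusys design: map a user-chosen emotion to a few
# musical parameters and generate an endless, non-repeating stream of notes.
EMOTION_PARAMS = {
    "calm":    {"scale": [60, 62, 64, 67, 69], "tempo_bpm": 70,  "leap": 1},  # pentatonic, slow
    "tension": {"scale": [60, 61, 63, 66, 68], "tempo_bpm": 140, "leap": 3},  # darker, wider leaps
}

def note_stream(emotion, seed=None):
    """Yield (midi_pitch, duration_seconds) pairs forever for the chosen emotion."""
    p = EMOTION_PARAMS[emotion]
    rng = random.Random(seed)
    beat = 60.0 / p["tempo_bpm"]
    idx = rng.randrange(len(p["scale"]))
    while True:
        idx = max(0, min(len(p["scale"]) - 1, idx + rng.randint(-p["leap"], p["leap"])))
        yield p["scale"][idx], beat * rng.choice([0.5, 1, 1, 2])

stream = note_stream("calm", seed=1)
print([next(stream) for _ in range(8)])   # the stream never has to repeat or end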
Beneath the system's ease of use, Miguel Molina explains, lies a complex framework that allows the computer to imitate a feature as human as creativity. Aside from creativity, composing music also requires specific knowledge.
According to Molina, this "is usually something done by human beings, although they do not understand how they do it. In reality, there are numerous processes involved in the creation of music and, unfortunately, we still do not understand many of them. Others are so complex that we cannot analyse them, despite the enormous power of current computing tools. Nowadays, thanks to the advances made in computer sciences, there are areas of research -- such as artificial intelligence -- that seek to reproduce human behaviour. One of the most difficult facets of all to reproduce is creativity."
Farewell to copyright payments
Commercial development of this prototype will change the way research into the relationship between computers and emotions is carried out, the means of interacting with music, and the structures by which music is composed in the future. It will also serve, say the study's authors, to reduce costs.
According to the researchers, "music is highly present in our leisure and working environments, and a large number of the places we visit have canned music systems. Playing these pieces of music involves copyright payments. Our system will make these music copyright payments a thing of the past."
Journal reference:
Miguel Delgado; Waldo Fajardo; Miguel Molina-Solana. Inmamusys: Intelligent multiagent music system. Expert Systems with Applications, 2009; 36 (3): 4574 DOI: 10.1016/j.eswa.2008.05.028
Adapted from materials provided by Plataforma SINC, via AlphaGalileo.

Computer Graphics Researchers Simulate The Sounds Of Water And Other Liquids


ScienceDaily (June 4, 2009) — Splash, splatter, babble, sploosh, drip, drop, bloop and ploop!
Those are some of the sounds that have been missing from computer graphic simulations of water and other fluids, according to researchers in Cornell's Department of Computer Science, who have come up with new algorithms to simulate such sounds to go with the images.
The work by Doug James, associate professor of computer science, and graduate student Changxi Zheng will be reported at the 2009 ACM SIGGRAPH conference Aug. 3-7 in New Orleans. It is the first step in a broader research program on sound synthesis supported by a $1.2 million grant from the Human Centered Computing Program of the National Science Foundation (NSF) to James, assistant professor Kavita Bala and associate professor Steve Marschner.
In computer-animated movies, sound can be added after the fact from recordings or by Foley artists. But as virtual worlds grow increasingly interactive and immersive, the researchers point out, sounds will need to be generated automatically to fit events that can't be predicted in advance. Recordings can be cued in, but can be repetitive and not always well matched to what's happening.
"We have no way to efficiently compute the sounds of water splashing, paper crumpling, hands clapping, wind in trees or a wine glass dropped onto the floor," the researchers said in their research proposal.
Along with fluid sounds, the research also will simulate sounds made by objects in contact, like a bin of Legos; the noisy vibrations of thin shells, like trash cans or cymbals; and the sounds of brittle fracture, like breaking glass and the clattering of the resulting debris.
All the simulations will be based on the physics of the objects being simulated in computer graphics, calculating how those objects would vibrate if they actually existed, and how those vibrations would produce acoustic waves in the air. Physics-based simulations also can be used in design, just as visual simulation is now, James said. "You can tell what it's going to sound like before you build it," he explained, noting that a lot of effort often goes into making things quieter.
In their SIGGRAPH paper, Zheng and James report that most of the sounds of water are created by tiny air bubbles that form as water pours and splashes. Moving water traps air bubbles on the scale of a millimeter or so. Surface tension contracts the bubbles, compressing the air inside until it pushes back and expands the bubble. The repeated expansion and contraction over milliseconds generates vibrations in the water that eventually make its surface vibrate, acting like a loudspeaker to create sound waves in the air.
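The natural "breathing" frequency of such a bubble is given by the classical Minnaert formula of bubble acoustics; the short calculation below, which uses textbook constants and is not taken from the Cornell paper, shows why millimetre-scale bubbles ring at a few kilohertz, squarely in the audible range.

import math

# Minnaert resonance frequency of an air bubble in water (standard acoustics,
# not from the Cornell paper): f = (1 / (2*pi*R)) * sqrt(3 * gamma * P0 / rho).
gamma = 1.4        # adiabatic index of air
P0 = 101_325.0     # ambient pressure, Pa
rho = 1000.0       # density of water, kg/m^3

for R_mm in (0.5, 1.0, 2.0):
    R = R_mm / 1000.0
    f = math.sqrt(3 * gamma * P0 / rho) / (2 * math.pi * R)
    print(f"R = {R_mm} mm  ->  f ~ {f / 1000:.1f} kHz")
# A 1 mm bubble rings at roughly 3 kHz, well within the audible range.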
The simulation method developed by the Cornell researchers starts with the geometry of the scene, figures out where the bubbles would be and how they're moving, computes the expected vibrations and finally the sounds they would produce. The simulation is done on a highly parallel computer, with each processor computing the effects of multiple bubbles. The researchers have fine-tuned the results by comparing their simulations with real water sounds.
Demonstration videos of simulations of falling, pouring, splashing and babbling water are available at http://www.cs.cornell.edu/projects/HarmonicFluids.
The current methods still require hours of offline computing time and work best on compact sound sources, the researchers noted, but they said further development should make possible the real-time performance needed for interactive virtual environments and allow the simulations to handle larger sound sources such as swimming pools or perhaps even Niagara Falls. They also plan to tackle the more complex collections of bubbles in foam or plumes.
The research reported in the SIGGRAPH paper was supported in part by an NSF Faculty Early Career Award to James, and by the Alfred P. Sloan Foundation, Pixar, Intel and Autodesk.
Adapted from materials provided by Cornell University. Original article written by Bill Steele.

Friday, May 15, 2009

Is a room-temperature, solid-state quantum computer mere fantasy?

Marshall Stoneham, London Centre for Nanotechnology and Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Published April 27, 2009
Creating a practical solid-state quantum computer is seriously hard. Getting such a computer to operate at room temperature is even more challenging. Is such a quantum computer possible at all? If so, which schemes might have a chance of success?

In his 2008 Newton Medal talk, Anton Zeilinger of the University of Vienna said: “We have to find ways to build quantum computers in the solid state at room temperature—that’s the challenge.” [1] This challenge spawns further challenges: Why do we need a quantum computer anyway? What would constitute a quantum computer? Why does the solid state seem essential? And would a cooled system, perhaps with thermoelectric cooling, be good enough?
Some will say the answer is obvious. But these answers vary from “It’s been done already” to “It can’t be done at all.” Some of the “not at all” group believe high temperatures just don’t agree with quantum mechanics. Others recognize that their favored systems cannot work at room temperature. Some scientists doubt that serious quantum computing is possible anyway. Are there methods that might just be able to meet Zeilinger’s challenge?
The questions that challenge
What is a computer? Standard classical computers use bits for encoding numbers, and the bits are manipulated by the classical gates that can execute AND and OR operations, for example. A classical bit has a value of 0 or 1, according to whether some small subunit is electrically charged or uncharged. Other forms are possible: the bits for a classical spintronic computer might be spins along or opposite to a magnetic field. Even the most modest computers on sale today incorporate complex networks of a few types of gates to control huge numbers of bits. If there are so few bits that you can count them on your fingers, it can’t seriously be considered a computer.
What do we mean by quantum? Being sure a phenomenon is “quantum” isn’t simple. Quantum ideas aren’t intuitive yet. Could you convince your banker that quantum physics could improve her bank’s security? Perhaps three questions identify the issues. First, how do you describe the state of a system? The usual descriptors, wave functions and density matrices, underlie wavelike interference and entanglement. Entanglement describes the correlations between local measurements on two particles, which I call their “quantum dance.” Entanglement is the resource that could make quantum computing worthwhile. The enemy of entanglement is decoherence, just as friction is the enemy of mechanical computers. Second, how does this quantum state change if it is not observed? It evolves deterministically, described by the Schrödinger equation. The probabilistic results of measurements emerge when one asks the third question: how to describe observations and their effects. Measurement modifies entanglement, often destroying it, as it singles out a specific state. This is one way that you can tell if an eavesdropper intercepted your message in a quantum communications system.
Proposed quantum computers have qubits manipulated by a few types of quantum gates, in a complex network. But the parallels are not complete [2]. Each classical bit has a definite value, it can only be 0 or 1, it can be copied without changing its value, it can be read without changing its value and, when left alone, its value will not change significantly. Reading one classical bit does not affect other (unread) bits. You must run the computer to compute the result of a computation. Every one of those statements is false for qubits, even that last statement. There is a further difference. For a classical computer, the process is Load → Run → Read, whereas for a quantum computer, the steps are Prepare → Evolve → Measure, or, as in one case discussed later, merely Prepare → Measure.
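A minimal numerical illustration of two of those differences, probabilistic readout and the correlations revealed when an entangled pair is measured, written with plain linear algebra rather than any particular quantum-computing library; the state vectors and sampling below are textbook quantum mechanics, not anything specific to the proposals discussed here.

import numpy as np

# (1) Reading a qubit prepared in a superposition gives probabilistic results.
# (2) Reading one qubit of an entangled pair constrains what the other will show.
rng = np.random.default_rng(0)

# A single qubit in the superposition (|0> + |1>)/sqrt(2):
qubit = np.array([1.0, 1.0]) / np.sqrt(2)
samples = rng.choice([0, 1], size=10, p=np.abs(qubit) ** 2)
print("ten readouts of the same prepared qubit:", samples)   # a random mix of 0s and 1s

# A two-qubit Bell state (|00> + |11>)/sqrt(2), amplitudes ordered |00>,|01>,|10>,|11>:
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
outcomes = rng.choice([0, 1, 2, 3], size=10, p=np.abs(bell) ** 2)
pairs = [(int(o) >> 1, int(o) & 1) for o in outcomes]
print("joint readouts of an entangled pair:", pairs)         # always (0,0) or (1,1)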
Why do we need a quantum computer? The major reasons stem from challenges to mainstream silicon technology. Markets demand enhanced power efficiency, miniaturization, and speed. These enhancements have their limits. Future technology scenarios developed for the semiconductor industry’s own roadmap [3] imply that the number of electrons needed to switch a transistor should fall to just 1 (one single electron) before 2020. Should we follow this innovative yet incremental roadmap, and trust to new tricks, or should we seek a radical technology, with wholly novel quantum components operating alongside existing silicon and photonic technologies? Any device with nanoscale features inevitably displays some types of quantum behavior, so why not make a virtue of necessity and exploit quantum ideas? Quantum-based ideas may offer a major opportunity, just as the atom gave the chemical industry in the 19th century, and the electron gave microelectronics in the 20th century. Quantum sciences could transform 21st century technologies.
Why choose the solid state for quantum computing? Quantum devices nearly always mean nanoscale devices, ultimately because useful electronic wave functions are fairly compact [4]. Complex devices with controlled features at this scale need the incredible know-how we have acquired with silicon technology. Moreover, quantum computers will be operated by familiar silicon technology. Operation will be easier if classical controls can be integrated with the quantum device, and easiest if the quantum device is silicon compatible. And scaling up, the linking of many basic and extremely small units is a routine demand for silicon devices. With silicon technologies, there are also good ways to link electronics and photonics. So an ideal quantum device would not just meet quantum performance criteria, but would be based on silicon; it would use off-the-shelf techniques (even sophisticated ones) suitable for a near-future generation fabrication plant. A cloud on the horizon concerns decoherence: can entanglement be sustained long enough in a large enough system for a useful quantum calculation?
All the objections
It has been done already? Some beautiful work demonstrating critical steps, including initializing a spin system and transfer of quantum information, has been done at room temperature with nitrogen-vacancy (NV-) centers in diamond [5]. Very few qubits were involved, and scaling up to a useful computer seems unlikely without new ideas. But the combination of photons—intrinsically insensitive to temperature—with defects or dopants with long decoherence times leaves hope.
It can’t be done: serious quantum computing simply isn’t possible anyway. Could any quantum computer work at all? Is it credible that we can build a system big enough to be useful, yet one that isn’t defeated by loss of entanglement or degraded quantum coherence? Certainly there are doubters, who note how friction defeated 19th century mechanical computers. Others have given believable arguments that computing based on entanglement is possible [6]. Of course, it may prove that some hybrid, a sort of quantum-assisted classical computing, will prove the crucial step.
It can’t be done: quantum behavior disappears at higher temperatures. Confusion can arise because quantum phenomena show up in two ways. In quantum statistics, the quantal ħ appears as ħω/kT. When statistics matter most, near equilibrium, high temperatures T oppose the quantum effects of ħ. However, in quantum dynamics, ħ can appear unassociated with T, opening new channels of behavior. Quantum information processing relies on staying away from equilibrium, so the rates of many individual processes compete in complex ways: dynamics dominate. Whatever the practical problems, there is no intrinsic problem with quantum computing at high temperatures.
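Some illustrative numbers for that ratio at room temperature, using textbook constants rather than anything from this article: for an optical transition the quantum of energy dwarfs kT, while for an electron spin in a laboratory magnetic field it is a tiny fraction of kT.

# Illustrative values of hbar*omega / k_B*T at 300 K (E = hbar*omega), textbook constants.
kB = 1.380649e-23        # J/K
eV = 1.602176634e-19     # J
T = 300.0                # room temperature, K

for label, energy_eV in [("optical transition (~1.5 eV)", 1.5),
                         ("electron Zeeman splitting at 1 T (~0.12 meV)", 1.16e-4)]:
    ratio = energy_eV * eV / (kB * T)
    print(f"{label}: hbar*omega / kB*T ~ {ratio:.3g}")
# Optical energies are ~58 times kB*T at 300 K, whereas spin splittings are a few
# thousandths of it -- so equilibrium spin polarization is tiny at room temperature,
# and it is the dynamics, not the statistics, that must do the work.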
It can’t be done: the right qubits don’t exist. True, some qubits are not available at room temperature. These include superconducting qubits and those based on Bose-Einstein condensates. In Kane’s seminal approach [7], the high polarizability needed for phosphorus-doped silicon (Si:P) corresponds to a low ionization donor energy, so the qubits disappear (or decohere) at room temperature. In what follows, I shall look at methods without such problems.
What needs to be done: Implementing quantum computing
David DiVincenzo at IBM Research Labs devised a checklist [8] that conveniently defines minimal (but seriously challenging) needs for a credible quantum computer. There must be a well-defined set of quantum states, such as electron spin states, to use as qubits. One needs scalability, so that enough qubits (let’s say 20, though 200 would be better) linked by entanglement are available to make a serious quantum computer. Operation demands a means to initialize and prepare suitable pure quantum states, a means to manipulate qubits to carry out a desired quantum evolution, and means to read out the results. Decoherence must be slow enough to allow these operations.
What does this checklist imply for solid-state quantum computing? Are there solid-state systems with decoherence mechanisms, key energies, and qubit control systems that might work at useful temperatures, ideally room temperature? Solid-state technologies have good prospects for scalability. There is a good chance that there are ingenious ways to link the many qubits and quantum gates needed for almost any serious application. However, decoherence might be fast. This may be less of a problem than imagined, for fast operating speeds go hand in hand with fast decoherence. Fast processing needs strong interactions, and such strong interactions will usually cause decoherence [9].
For spin-based solid-state quantum computing, most routes to initialization group into four categories. First, there are optical methods (including microwaves), based on selection rules, such as those used for NV- experiments. Then there are spintronic approaches, using a source (perhaps a ferromagnet) of spin-polarized electrons or excitons. (Note that spins have been transferred over distances of nearly a micron at room temperature [10].) Then there are brute force methods aiming for thermal equilibrium in a very large magnetic field, where the ratio of Zeeman splitting to thermal energy kBT is large. And finally there are tricks involving extra qubits that are not used in calculations. Of these methods, the optical and spintronic concepts seem most promising for room-temperature operation.
For readout, there are two broad strategies. Most ideas for spin-based quantum information processing aim at the sequential readout of individual spins. However, there are other less-developed ideas in which the ensemble of relevant spins is looked at together, as in some neutron scattering studies of antiferromagnetic crystals. What methods are there for probing single spins, if the sequential strategy is chosen? First, there is direct frequency discrimination, including the use of Zeeman splitting, of hyperfine structure, and so on. Ideas from atom trap experiments suggest that one can continue to interrogate a spin with a sequence of photons that do not change the qubit [11]. Such methods might work at room temperature, at least if the relevant spectral lines remain sharp enough. Second, there are many ways to exploit spin-dependent rates of carrier scatter or trapping. One might examine how mobile polarized spins are scattered by a fixed spin that is to be measured. Or the spin of a mobile carrier might be measured by its propensity for capture or scatter by fixed spin, or by some combination of polarized mobile spins and interferometry. At room temperature, the problem is practice rather than principle, and acceptable methods seem possible. A third way is to use relative tunnel rates, where one spin state can be blocked. Tunneling-based methods can become very hard at higher temperatures. There are then various ideas, all of which seem to be both tricky and relatively slow, but I may be being pessimistic. These include the use of circularly polarized light and magneto-optics, the direct detection of spin resonance with a scanning tunneling microscope, the exploitation of large spin-orbit coupling, or the direct measurement of a force with a scanning probe having a magnetic tip.
For the manipulations during operation, probably the most important ways use electromagnetic radiation, whether optical, microwave or radio frequency. Other controls, such as ultrasonics or surface acoustic waves, are less flexible. Electromagnetic methods might well operate at room temperature. Other suggestions invoke nanoscale electrodes. I do not know of any that look both credible and scalable.
Hopes for higher temperature operation
In what follows, I shall concentrate on two proposals as examples, with apologies to those whose suggestions I am omitting. Both of the proposals use optical methods to control spins, but do so in wholly different ways. The first is a scheme for optically controlled spintronics that I, Andrew Fisher, and Thornton Greenland proposed [11, 12]. The second route exploits entanglement of states of distant atoms by interference [13] in the context of measurement-based quantum computing [14]. A broader discussion of the materials needed is given in Ref. [15].
Optically controlled spintronics [11, 12]. Think of a thin film of silicon, perhaps 10 nm thick, isotopically pure to avoid nuclear spins, on top of an oxide substrate (Fig. 1). The simple architecture described is essentially two dimensional. Now imagine the film randomly doped with two species of deep donor—one species as qubits, the other to control the qubits. In their ground states, these species should have negligible interactions. When a control donor is excited, the electron’s wave function spreads out more, and its overlap with two of the qubit donors will create an entangling interaction between those two qubits (Fig. 2). Shaped pulses of optical excitation of chosen control donors guide the quantum dance (entanglement) of chosen qubit donors [16].
For controlling entanglement in this way, typical donor spacings in silicon must be of the order of tens of nanometers. Optically, one can only address regions of the order of a wavelength across, say 1000 nm. The limit of optical spatial resolution is a factor 100 larger than donor spacings needed for entanglement. How can one address chosen pairs of qubits? The smallest area on which we can focus light contains many spins. The answer is to exploit the randomness inevitable in standard fabrication and doping. Within a given patch of the film a wavelength across, the optical absorptions will be inhomogeneously broadened from dopant randomness. Even the steps at the silicon interfaces are helpful because the film thickness variations shift transition energies from one dopant site to another. Light of different wavelengths will excite different control donors in this patch, and so manipulate the entanglements of different qubits. Reasonable assumptions suggest one might make use of perhaps 20 gates or so per patch. Controlled links among 20 qubits would be very good by present standards, though further scale up—the linking of patches—would be needed for a serious computer (Fig. 3). The optically controlled spintronics strategy [11, 12] separates the two roles: qubit spins store quantum information, and controls manipulate quantum information. These roles require different figures of merit.
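A back-of-the-envelope version of that estimate; the excitation energy and linewidths below are assumptions chosen only to reproduce the scale of the argument, not measured values.

# Rough estimate behind the "perhaps 20 gates per patch" figure above; the
# energy and linewidth numbers are illustrative assumptions, not measured values.
patch_nm = 1000.0          # roughly one optical wavelength across
donor_spacing_nm = 30.0    # "tens of nanometers", as stated above
donors_per_patch = (patch_nm / donor_spacing_nm) ** 2
print(f"donors within one optical patch: ~{donors_per_patch:.0f}")   # on the order of a thousand

excitation_meV = 300.0                           # assumed control excitation energy
inhomogeneous_spread_meV = 0.02 * excitation_meV # "a few percent" spread from randomness
homogeneous_width_meV = 0.3                      # assumed single-line width at low temperature
resolvable_controls = inhomogeneous_spread_meV / homogeneous_width_meV
print(f"spectrally resolvable control donors per patch: ~{resolvable_controls:.0f}")
# ~20 controls can be addressed one at a time by wavelength, even though ~1000
# donors sit inside the smallest spot the laser can address spatially.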
To operate at room temperature, qubits must stay in their ground states, and their decoherence—loss of quantum information—must be slow enough. Shallow donors like Si:P or Si:Bi thermally ionize too readily for room-temperature operations, though one could demonstrate principles at low temperatures with these materials. Double donors like Si:Mg+ or Si:Se+ have ionization energies of about half the silicon band gap and might be deep enough. Most defects in diamond are stable at room temperature, including substitutional N in diamond and the NV- center on which so many experiments have been done.
What about decoherence? First, whatever enables entanglement also causes decoherence. This is why fast switching means fast decoherence, and slow decoherence implies slow switching. Optical control involves manipulation of the qubits by stimulated absorption and emission in controlled optical excitation sequences, so spontaneous emission will cause decoherence. For shallow donors, like Si:P, the excitation energy is less than the maximum silicon phonon energy; even at low temperatures, one-phonon emission causes rapid decoherence. Second, spin-lattice relaxation in qubit ground states destroys quantum information. Large spin-orbit coupling is bad news, so avoiding high atomic number species helps. Spin lattice relaxation data at room temperature are not yet available for those Si donors (like Si:Se+) where one-phonon processes are eliminated because their first excited state lies more than the maximum phonon energy above the ground state. In diamond at room temperature, the spin-lattice relaxation time for substitutional nitrogen is very good (~1 ms) and a number of other centers have times ~0.1 ms. Third, excited state processes can be problems, and two-photon ionization puts constraints on wavelengths and optical intensities. Fourth, the qubits could lose quantum information to the control atoms. This can be sorted out by choosing the right form of excitation pulses [16]. Fifth, interactions with other spins, including nuclear spins, set limits, but there are helpful strategies, like using isotopically pure silicon [17].
The control dopants require different criteria. The wave functions of electronically excited controls overlap and interact with two or more qubits to manipulate entanglements between these qubits. The transiently excited state wave function of the control must have the right spatial extent and lifetime. While centers like Si:As could be used to show the ideas, for room-temperature operation one would choose perhaps a double donor in silicon, or substitutional phosphorus in diamond. The control dopant must have sharp optical absorption lines, since what determines the number of independent gates available in a patch is the ratio of the spread of excitation energies, inhomogeneously broadened, to the (homogeneous) linewidth. The spread of excitation energies—inhomogeneous broadening is beneficial in this optical spintronics approach [11, 12]—has several causes, some controllable. Randomness of relative control-qubit positions and orientations is important, and it seems possible to improve the distribution by using self-organization to eliminate unusable close encounters. Steps on the silicon interfaces are also helpful, provided there are no unpaired spins. Overall, various experimental data and theoretical analyses indicate likely inhomogeneous widths are a few percent of the excitation energy.
A checklist of interesting systems as qubits or controls shows some significant gaps in knowledge of defects in solids. Surprisingly little is known about electronic excited states in diamond or silicon, apart from energies and (sometimes) symmetries. Little is known about spin lattice relaxation and excited state kinetics at temperatures above liquid nitrogen, except for the shallow donors that are unlikely to be good choices for a serious quantum computer. There are few studies of stabilities of several species present at one time. Can we be sure to have isolated P in diamond? Would it lose an electron to substitutional N to yield the useless species P+ and N- ? Will most P be found as the irrelevant (spin S=0) PV- center?
What limits the number of gates in a patch is the number of control atoms that can be resolved spectroscopically one from another. As the temperature rises, the lines get broader, so this number falls and scaling becomes harder. Note the zero phonon linewidth need not be simply related to the fraction of the intensity in the sidebands. Above liquid nitrogen temperatures, these homogeneous optical widths increase fast. Thus we have two clear limits to room-temperature operation. The first is qubit decoherence, especially from spin lattice relaxation. The second is control linewidths becoming too large, reducing scalability, which may prove a more powerful limit.
Entangled states of distant atoms or solid-state defects created by interference. A wholly different approach generates quantum entanglement between remote systems by performing measurements on them in a certain way [13]. The systems might be two diamonds, each containing a single NV- center prepared in specific electron spin states, the two centers tuned to have exactly the same optical energies (Fig. 4). The measurement involves “single shot” optical excitation. Both systems are exposed to a weak laser pulse that, on average, will achieve one excitation. The single system excited will emit a photon that, after passing though beam splitters and an interferometer, is detected without giving information as to which system was excited (Fig. 5). “Remote entanglement” is achieved, subject to some strong conditions. The electronic quantum information can be swapped to more robust nuclear states (a so-called brokering process). This brokered information can then be recovered when needed to implement a strategy of measurement-based quantum information processing [14].
The materials and equipment needs, while different from those of optically controlled spintronics, have features in common. For remote entanglement, a random distribution of centers is used, with one from each zone chosen because of their match to each other. The excitation energies of the two distant centers must stay equal very accurately, and this equality must be stable over time, but can be monitored. There are some challenges here, since there will be energy shifts when other defect species in any one of the systems change charge or spin state (the difficulty is present but less severe for the optical control approach). As for optically controlled spintronics [11, 12], scale-up requires narrow lines, and becomes harder at higher temperatures, though there are ways to reduce the problem. Remote entanglement needs interferometric stability, avoiding problems when there are different temperature fluctuations for the paths from the separate systems. Again, there are credible strategies to reduce the effects.
So is room-temperature quantum computing feasible?
Spectroscopy is a generic need for both optically controlled spintronics and remote entanglement approaches. Both need qubits (the electron qubit for the measurement-based approach) with slow decoherence, a significant multiple of switching times. Both need sharp optical transitions with weak phonon sidebands to avoid loss of quantum information. A few zero phonon lines do indeed remain sharp at room temperature. The sharp lines should have frequencies stable over extended times. This mix of properties is hard to meet, but by no means impossible.
Perhaps the hardest conditions have yet to be mentioned. A quantum gate is no more a quantum computer than a transistor is a classical computer. Putting all the components of a quantum computer together could prove really hard. System integration may be the ultimate challenge. Quantum information processing (QIP) will need to exploit standard silicon technology to run the quantum system; and QIP must work alongside a feasible laser optics system. The optical systems are seriously complicated, though each feature seems manageable. It may be necessary to go to architectures even more complicated than those I have described. It might even prove useful to combine elements of remote entanglement and optical spin control, whether this is regarded as using remote entanglement to link spin patches, or as having spin patches instead of NV- centers as nodes for remote entanglements. A short article like this has to miss out many features of importance, not least questions of error correction, but a major message is that, even in the most rudimentary approaches, we have to think through all of the system when talking of a possible computer.
And what would you do with a quantum computer if you had one? Proposals that do not demand room temperature range from probable, like decryption or directory searching, to the possible, like modeling quantum systems, and even to the difficult yet perhaps conceivable, like modeling turbulence. More frivolous applications, like the computer games that drive many of today’s developments, make much more sense if they work at ambient temperatures. And available quantum processing at room temperature would surely stimulate inventive new ideas, just as solid-state lasers led to compact disc technology.
Summing up, where do we stand? At liquid nitrogen temperatures, say 77 K, quantum computing is surely possible, if quantum computing is possible at all. At dry ice temperatures, say 195 K, quantum computing seems reasonably possible. At temperatures that can be reached by thermoelectric or thermomagnetic cooling, say 260 K, things are harder, but there is hope. Yet we know that small (say 2–3 qubit) quantum devices operate at room temperature. It seems likely, to me at least, that a quantum computer of say 20 qubits will operate at room temperature. I do not say it will be easy. Will such a QIP device be as portable as a laptop? I won’t rule that out, but the answer is not obvious on present designs.
Acknowledgments
This work was supported in part by EPSRC through its Basic Technologies program. I am especially grateful for input from Gabriel Aeppli, Polina Bayvell, Simon Benjamin, Ian Boyd, Andrea Del Duce, Andrew Fisher, Tony Harker, Andy Kerridge, Brendon Lovett, Stephen Lynch, Gavin Morley, Seb Savory, and Jason Smith. I am particularly grateful to Simon Benjamin and Stephen Lynch for preparing the initial versions of the figures.
References
[1] http://www.iop.org/activity/awards/International%20Award/page_31978.html
[2] C. P. Williams and S. H. Clearwater, Ultimate Zero and One: Computing at the Quantum Frontier (Copernicus, New York, 2000).
[3] International Technology Roadmap for Semiconductors, http://www.itrs.net/.
[4] General discussions relevant here: R. W. Keyes, J. Phys. Condens. Matter 17, V9 (2005); R. W. Keyes, J. Phys. Condens. Matter 18, S703 (2006); T. P. Spiller and W. J. Munro, J. Phys. Condens. Matter 18, V1 (2006); R. Tsu, Int. J. High Speed Electronics and Systems 9, 145 (1998); R. W. Keyes, Appl. Phys. A 76, 737 (2003); M. I. Dyakonov, in Future Trends in Microelectronics: Up the Nano Creek, edited by S. Luryi, J. Xu, and A. Zaslavsky (Wiley, Hoboken, NJ, 2007).
[5] Examples include: E. van Oort, N. B. Manson, and M. Glasbeek, J. Phys. C 21, 4385 (1988); F. T. Charnock and T. A. Kennedy, Phys. Rev. B 64, 041201 (2001); J. Wrachtrup et al., Opt. Spectrosc. 91, 429 (2001); J. Wrachtrup and F. Jelezko, J. Phys. Condens. Matter 18, S807 (2006); R. Hanson, F. M. Mendoza, R. J. Epstein, and D. D. Awschalom, Phys. Rev. Lett. 97, 087601 (2006); A. D. Greentree, P. Olivero, M. Draganski, E. Trajkov, J. R. Rabeau, P. Reichart, B. C. Gibson, S. Rubanov, S. T. Huntington, D. N. Jamieson, and S. Prawer, J. Phys. Condens. Matter 18, S825 (2006).
[6] M. B. Plenio and P. L. Knight, Philos. Trans. R. Soc. London A 453, 2017 (1997).
[7] B. E. Kane, Nature 393, 133 (1998).
[8] D. P. DiVincenzo and D. Loss, Superlattices Microstruct. 23, 419 (1998).
[9] A. J. Fisher, Philos. Trans. R. Soc. London A 361, 1441 (2003); http://arxiv.org/abs/quant-ph/0211200v1.
[10] V. Dediu, M. Murgia, F. C. Matacotta, C. Taliani, and S. Barbanera, Solid State Commun. 122, 181 (2002).
[11] A. M. Stoneham, A. J. Fisher, and P. T. Greenland, J. Phys. Condens. Matter 15, L447 (2003).
[12] R. Rodriquez, A. J. Fisher, P. T. Greenland, and A. M. Stoneham, J. Phys. Condens. Matter 16, 2757 (2004).
[13] C. Cabrillo, J. I. Cirac, P. García-Fernández, and P. Zoller, Phys. Rev. A 58, 1025 (1999).
[14] S. C. Benjamin, B. W. Lovett, and J. M. Smith, Laser Photonics Rev. (to be published).
[15] A. M. Stoneham, Materials Today 11, 32 (2008).
[16] A. Kerridge, A. H. Harker, and A. M. Stoneham, J. Phys. Condens. Matter 19, 282201 (2007); E. M. Gauger et al., New J. Phys. 10, 073027 (2008).
[17] A. M. Tyryshkin, J. J. L. Morton, S. C. Benjamin, A. Ardavan, G. A. D. Briggs, J. W. Ager, and S. A. Lyon, J. Phys. Condens. Matter 18, S783 (2006).
About the Author
Marshall Stoneham
Marshall Stoneham is Emeritus Massey Professor of Physics at University College London. He is a Fellow of the Royal Society, and also of the American Physical Society and of the Institute of Physics. Before joining UCL in 1995, he was the Chief Scientist of the UK Atomic Energy Authority, which involved him in many areas of science and technology, from quantum diffusion to nuclear safety. He was awarded the Guthrie gold medal of the Institute of Physics in 2006, and the Royal Society’s Zeneca Prize in 1995. He is the author of over 500 papers, and of a number of books, including Theory of Defects in Solids, now an Oxford Classic, and The Wind Ensemble Sourcebook that won the 1997 Oldman Prize. Marshall Stoneham is based in the London Centre for Nanotechnology, where he finds the scope for new ideas especially stimulating. His scientific interests range from new routes to solid-state quantum computing through materials modeling to biological physics, where his work on the interaction of small scent molecules with receptors has attracted much attention. He is the co-founder of two physics-based firms.