Sunday, June 8, 2008

Moving Mountains With the Brain, Not a Joystick


STILL using a mouse, keyboard, joystick or motion sensor to control the action in a video game? It may be time to try brain power instead.

A new headset system picks up electrical activity from the brain, as well as from facial muscles and other spots, and translates it into on-screen commands. This lets players vanquish villains not with a click, but with a thought.
Put on the headset, made by Emotiv Systems in San Francisco, and when a giant boulder blocks the path in a game you are playing, you can levitate it — not by something as crude as a keystroke, but just by concentrating on raising it, said Tan Le, Emotiv’s president. The headset captures electrical signals when you concentrate; then the computer processes these signals and pairs a screen action with them, like lifting a stone or repairing a falling bridge.
The headset is the consumer cousin of brain-computer interfaces developed in research labs and used, for example, by monkeys who manipulate prosthetic arms with thoughts. The monkeys’ intentions are detected by sensors, translated into machine language and used to move the arm. In general, some interfaces use sensors implanted directly in the brain; others use electrode-studded caps.
For humans, Emotiv plans to have its noninvasive, wireless EPOC headset ($299) on sale in time for Christmas, Ms. Le said. With 16 sensors that lightly touch the head, it uses a standard technology, electroencephalography, or EEG, to pick up electrical signals from the scalp’s surface and convert them to actions that control or enhance what happens on screen.
To help players master the art of moving on-screen objects solely through concentration, the headset will come bundled with a game, set on a magical mountain, that includes practice exercises, said Geoffrey Mackellar, Emotiv’s research and development manager. “You clear the mind,” he said, and then do 30 to 40 seconds of training, by concentrating, for instance, on visualizing a block lifting from the earth. “On the first or second attempt, you can lift it at will.”
Other, harder challenges follow. In constant feedback, he said, the machine learns more about how users think just as users grow more skillful at concentrating.
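Emotiv has not published the details of its signal processing, so the following is only a minimal sketch of how concentration-based control could work in principle: record a short "relaxed" baseline and a short "push" calibration window, compare the band power of each new chunk of EEG against the two, and emit a command when the new chunk looks more like the calibration than the baseline. The sampling rate, the 8-30 Hz band, the threshold rule and the function names are all illustrative assumptions, not Emotiv's method.

```python
import numpy as np

FS = 128            # assumed sampling rate in Hz (illustrative)
N_CHANNELS = 16     # sensor count reported for the EPOC headset

def band_power(eeg, fs=FS, lo=8.0, hi=30.0):
    """Mean power in the lo-hi Hz band, averaged over channels.

    eeg: array of shape (n_channels, n_samples).
    """
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[:, band].mean()

def calibrate(baseline, push):
    """Return a decision threshold halfway between the two calibration states."""
    return 0.5 * (band_power(baseline) + band_power(push))

def classify(chunk, threshold):
    """Map a one-second chunk of EEG to an on-screen command."""
    return "lift" if band_power(chunk) > threshold else "idle"

# Toy usage with random data standing in for ~30 seconds of calibration EEG.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(N_CHANNELS, 30 * FS))
push = 1.5 * rng.normal(size=(N_CHANNELS, 30 * FS))  # pretend concentration raises band power
threshold = calibrate(baseline, push)
print(classify(rng.normal(size=(N_CHANNELS, FS)), threshold))
```

In a real system the "constant feedback" Dr. Mackellar describes would amount to re-running this calibration continuously, so the threshold tracks the user as the user learns.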
Many game developers are incorporating the EPOC’s biofeedback abilities into their applications, Ms. Le said.
The system doesn’t just lift boulders. It can also detect some of a player’s facial expressions and emotional responses: smile, frown or wink, for instance, and an avatar on screen can do so, too. Grow bored during a battle, and the system can detect ennui and supply a few dragons, or change the music. The device tracks a total of about 30 responses.
A chip inside the headset collects the signals and sends them wirelessly to a receiver plugged into a U.S.B. port of the computer, where most of the processing occurs, Dr. Mackellar said.
The sleek Emotiv headset is a version of the EEG cap used for decades to record brain electrical activity, said Nathan Fox, a professor of human development at the University of Maryland.
“There can be as many as 256 electrodes at one time in a cap,” he said. “The placement corresponds in some rough approximation to brain areas that are underneath the scalp.”
Medical-grade EEG caps are used in research to eavesdrop on the brain as it plans motion and to translate these plans, for example, into cursor actions on a screen so paralyzed people can control a computer to write messages.
The Emotiv headset, too, taps the power of the mind, as well as using feedback from muscles, Dr. Mackellar said.
“We definitely read brain waves — no doubt about it — but we also read other things,” he said. “In classical EEG, movements of the face and muscles are regarded as noise. But we use some of it, rather than discard it.”
Anton Nijholt, a professor of computer science at the University of Twente in the Netherlands who does research on innovative interfaces for games, looks forward to the extra means of interaction that EEG headsets will provide. But he doesn’t think that all consumers will be able to use them to raise mountains.
“Not all people are able to display the mental activity necessary to move an object on a screen,” he said. “Some people may not be able to imagine movement in a way that EEG can detect.”
So far, Dr. Mackellar said, all 200 testers of the headset had indeed been able to move on-screen objects mentally.
ANOTHER headset, the Neural Impulse Actuator ($169), just released by the OCZ Technology Group in Sunnyvale, Calif., has three sensors in a headband that pick up electrical activity primarily from muscles and convert it into commands, said Michael Schuette, vice president for technology development. Players of shooting games, for instance, may use eye movement to trigger a shot, shaving milliseconds off their response time and sparing their hands.
The exact source of the electrical activity the headset is picking up may not be important, said Dr. Jonathan Wolpaw, chief of the laboratory for nervous system disorders at the Wadsworth Center of the New York State Department of Health in Albany. He uses EEG caps as part of brain-computer interfaces for severely paralyzed people. His systems record brain activity alone, but for a consumer game device, a cap that picks up a mixture of brain and muscle activity may be acceptable.
“In a lot of these commercial uses, people don’t care if the activity is coming from the brain or forehead muscles,” he said. “It doesn’t matter to them so long as they can play the game.”

Fausto Intilla - www.oloscience.com

Tuesday, May 27, 2008

New Image-recognition Software Could Let Computers 'See' Like Humans Do


Source:
ScienceDaily (May 26, 2008) — It takes surprisingly few pixels of information to be able to identify the subject of an image, a team led by an MIT researcher has found. The discovery could lead to great advances in the automated identification of online images and, ultimately, provide a basis for computers to see like humans do.
Antonio Torralba, assistant professor in MIT's Computer Science and Artificial Intelligence Laboratory, and colleagues have been trying to find the smallest amount of information--that is, the shortest numerical representation--that can be derived from an image and still provide a useful indication of its content.
Deriving such a short representation would be an important step toward making it possible to catalog the billions of images on the Internet automatically. At present, the only ways to search for images are based on text captions that people have entered by hand for each picture, and many images lack such information. Automatic identification would also provide a way to index pictures people download from digital cameras onto their computers, without having to go through and caption each one by hand. And ultimately it could lead to true machine vision, which could someday allow robots to make sense of the data coming from their cameras and figure out where they are.
"We're trying to find very short codes for images," says Torralba, "so that if two images have a similar sequence [of numbers], they are probably similar--composed of roughly the same object, in roughly the same configuration." If one image has been identified with a caption or title, then other images that match its numerical code would likely show the same object (such as a car, tree, or person) and so the name associated with one picture can be transferred to the others.
"With very large amounts of images, even relatively simple algorithms are able to perform fairly well" in identifying images this way, says Torralba. He will be presenting his latest findings this June in Alaska at a conference on Computer Vision and Pattern Recognition. The work was done in collaboration with Rob Fergus at the Courant Institute in New York University and Yair Weiss of Hebrew University in Jerusalem.
To find out how little image information is needed for people to recognize the subject of a picture, Torralba and his co-authors tried reducing images to lower and lower resolution, and seeing how many images at each level people could identify.
"We are able to recognize what is in images, even if the resolution is very low, because we know so much about images," he says. "The amount of information you need to identify most images is about 32 by 32." By contrast, even the small "thumbnail" images shown in a Google search are typically 100 by 100.
Even an inexpensive current digital camera produces images consisting of several megapixels of data--and each pixel is typically represented by 24 bits (each a zero or one) of data. But Torralba and his collaborators devised a mathematical system that can reduce the data from each picture even further, and it turns out that many images are recognizable even when coded into a numerical representation containing as little as 256 to 1024 bits of data.
Using such small amounts of data per image makes it possible to search for similar pictures through millions of images in a database, using an ordinary PC, in less than a second, Torralba says. And unlike other methods that require first breaking down an image into sections containing different objects, this method uses the entire image, making it simple to apply to large datasets without human intervention.
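The article doesn't describe the coding scheme itself, so here is only the simplest possible stand-in to make the idea concrete: shrink an image to 32 by 32 by sampling pixels, threshold against the image's own mean to get a 1,024-bit string, and compare strings by Hamming distance. The function names, the nearest-pixel downscaling and the thresholding are illustrative choices of mine, not the authors' actual method.

```python
import numpy as np

def tiny_code(image, size=32):
    """Reduce a grayscale image to a size x size binary code (size*size bits).

    image: 2D numpy array of pixel intensities. The crude nearest-pixel
    downscale and mean-thresholding are illustrative, not the paper's scheme.
    """
    h, w = image.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    small = image[np.ix_(ys, xs)].astype(float)   # pick one pixel per grid cell
    return (small > small.mean()).flatten()       # 1,024 bits for size=32

def hamming(code_a, code_b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(code_a != code_b))

def nearest(query, database):
    """Index of the database code with the smallest Hamming distance to the query."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))

# Toy usage: random "images" standing in for a photo collection.
rng = np.random.default_rng(1)
photos = [rng.integers(0, 256, size=(480, 640)) for _ in range(100)]
codes = [tiny_code(p) for p in photos]
print(nearest(tiny_code(photos[42]), codes))   # -> 42: an image matches itself exactly
```

Because the codes are so short, comparing a query against millions of stored codes reduces to simple bit operations, which is why an ordinary PC can scan a large collection in well under a second.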
For example, using the coding system they developed, Torralba and his colleagues were able to represent a set of 12.9 million images from the Internet with just 600 megabytes of data--small enough to fit in the RAM of most current PCs, and to be stored on a memory stick. The image database, along with software for searching it, is being made publicly available on the web.
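As a rough consistency check (my arithmetic, not a figure from the article), 600 megabytes spread over 12.9 million images works out to a few hundred bits per image, comfortably inside the 256-to-1,024-bit range quoted above:

```latex
\frac{600 \times 10^{6}\ \text{bytes} \times 8\ \text{bits/byte}}{12.9 \times 10^{6}\ \text{images}} \approx 372\ \text{bits per image}
```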
Of course, a system using drastically reduced amounts of information can't come close to perfect identification. At present, the matching works for the most common kinds of images. "Not all images are created equal," he says. The more complex or unusual an image is, the less likely it is to be correctly matched. But for the most common objects in pictures--people, cars, flowers, buildings--the results are quite impressive.
The work is part of research being carried out by hundreds of teams around the world, aimed at analyzing the content of visual information. Torralba has also collaborated on related work with other MIT researchers including William Freeman, a professor in the Department of Electrical Engineering and Computer Science; Aude Oliva, professor in the Department of Brain and Cognitive Sciences; and graduate students Bryan Russell and Ce Liu, in CSAIL. Torralba's work is supported in part by a grant from the National Science Foundation.
Torralba stresses that the research is still preliminary and that there will always be problems with identifying the more-unusual subjects. It's similar to the way we recognize language, Torralba says. "There are many words you hear very often, but no matter how long you have been living, there will always be one that you haven't heard before. You always need to be able to understand [something new] from one example."
Fausto Intilla - www.oloscience.com

Wednesday, May 21, 2008

Diamond-Like Crystals Discovered In Brazilian Beetle Solve Issue For Future Optical Computers


Source:
ScienceDaily (May 21, 2008) — Researchers have been unable to build an ideal "photonic crystal" to manipulate visible light, impeding the dream of ultrafast optical computers. But now, University of Utah chemists have discovered that nature already has designed photonic crystals with the ideal, diamond-like structure: They are found in the shimmering, iridescent green scales of a beetle from Brazil.
"It appears that a simple creature like a beetle provides us with one of the technologically most sought-after structures for the next generation of computing," says study leader Michael Bartl, an assistant professor of chemistry and adjunct assistant professor of physics at the University of Utah. "Nature has simple ways of making structures and materials that are still unobtainable with our million-dollar instruments and engineering strategies."
The study by Bartl, University of Utah chemistry doctoral student Jeremy Galusha and colleagues is set to be published in a forthcoming edition of the journal Physical Review E.
The beetle is an inch-long weevil named Lamprocyphus augustus. The discovery of its scales' crystal structure represents the first time scientists have been able to work with a material with the ideal or "champion" architecture for a photonic crystal.
"Nature uses very simple strategies to design structures to manipulate light -- structures that are beyond the reach of our current abilities," Galusha says.
Bartl and Galusha now are trying to design a synthetic version of the beetle's photonic crystals, using scale material as a mold to make the crystals from a transparent semiconductor.
The scales can't be used in technological devices because they are made of fingernail-like chitin, which is not stable enough for long-term use, is not semiconducting and doesn't bend light adequately.
The University of Utah chemists conducted the study with coauthors Lauren Richey, a former Springville High School student now attending Brigham Young University; BYU biology Professor John Gardner; and Jennifer Cha, of IBM's Almaden Research Center in San Jose, Calif.
Quest for the Ideal or 'Champion' Photonic Crystal
Researchers are seeking photonic crystals as they aim to develop optical computers that run on light (photons) instead of electricity (electrons). Right now, light in near-infrared and visible wavelengths can carry data and communications through fiberoptic cables, but the data must be converted from light back to electricity before being processed in a computer.
The goal -- still years away -- is an ultrahigh-speed computer with optical integrated circuits or chips that run on light instead of electricity.
"You would be able to solve certain problems that we are not able to solve now," Bartl says. "For certain problems, an optical computer could do in seconds what regular computers need years for."
Researchers also are seeking ideal photonic crystals to amplify light and thus make solar cells more efficient, to capture light that would catalyze chemical reactions, and to generate tiny laser beams that would serve as light sources on optical chips.
"Photonic crystals are a new type of optical materials that manipulate light in non-classic ways," Bartl says. Some colors of light can pass through a photonic crystal at various speeds, while other wavelengths are reflected as the crystal acts like a mirror.
Bartl says there are many proposals for how light could be manipulated and controlled in new ways by photonic crystals, "however we still lack the proper materials that would allow us to create ideal photonic crystals to manipulate visible light. A material like this doesn't exist artificially or synthetically."
The ideal photonic crystal -- dubbed the "champion" crystal -- was described by scientists elsewhere in 1990. They showed that the optimal photonic crystal -- one that could manipulate light most efficiently -- would have the same crystal structure as the lattice of carbon atoms in diamond. Diamonds cannot be used as photonic crystals because their atoms are packed too tightly together to manipulate visible light.
When made from an appropriate material, a diamond-like structure would create a large "photonic bandgap," meaning the crystalline structure prevents the propagation of light of a certain range of wavelengths. Materials with such bandgaps are necessary if researchers are to engineer optical circuits that can manipulate visible light.
On the Path of the Beetle: From BYU to Belgium and Brazil
The new study has its roots in Richey's science fair project on iridescence in biology when she was a student at Utah's Springville High School. Gardner's group at BYU was helping her at the same time Galusha was using an electron microscope there and learned of Richey's project.
Richey wanted to examine an iridescent beetle, but lacked a complete specimen. So the researchers ordered Brazil's Lamprocyphus augustus from a Belgian insect dealer.
The beetle's shiny, sparkling green color is produced by the crystal structure of its scales, not by any pigment, Bartl says. The scales are made of chitin, which forms the external skeleton, or exoskeleton, of most insects and is similar to fingernail material. The scales are affixed to the beetle's exoskeleton. Each measures 200 microns (millionths of a meter) long by 100 microns wide. A human hair is about 100 microns thick.
Green light -- which has a wavelength of about 500 to 550 nanometers, or billionths of a meter -- cannot penetrate the scales' crystal structure, which acts like a mirror to reflect the green light, making the beetle appear iridescent green.
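As a very rough, first-order way to see why a periodic structure singles out one band of colors (this is a simplification of mine; the authors' analysis requires computing the full photonic band structure of the diamond-like lattice), one can use a Bragg-type reflection condition at normal incidence, where d is the spacing between crystal planes and n_eff is the effective refractive index of the chitin-and-air composite:

```latex
\lambda_{\text{reflected}} \approx 2\, n_{\text{eff}}\, d
```

Rearranged for green light at 500 to 550 nanometers, with an effective index somewhere between that of air and solid chitin, this puts the lattice spacing on the order of a couple of hundred nanometers, far smaller than the 200-micron scale itself; what the diamond-like geometry adds is that the gap holds for light arriving from every direction at once.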
Bartl says the beetle was interesting because it was iridescent regardless of the angle from which it was viewed -- unlike most iridescent objects -- and because a preliminary electron microscope examination showed its scales did not have the structure typical of artificial photonic crystals.
"The color and structure looked interesting," Bartl says. "The question was: What was the exact three-dimensional structure that produces these unique optical properties?"
The Utah team's study is the first to show that "just as atoms are arranged in diamond crystals, so is the chitin structure of beetle scales," he says.
Galusha determined the 3-D structure of the scales using a scanning electron microscope. He cut a cross section of a scale, and then took an electron microscope image of it. Then he used a focused ion beam -- sort of a tiny sandblaster that shoots a beam of gallium ions -- to shave off the exposed end of the scale, and then took another image, doing so repeatedly until he had images of 150 cross-sections from the same scale.
Then the researchers "stacked" the images together in a computer, and determined the crystal structure of the scale material: a diamond-like or "champion" architecture, but with building blocks of chitin and air instead of the carbon atoms in diamond.
Next, Galusha and Bartl used optical studies and theory to predict optical properties of the scales' structure. The prediction matched reality: green iridescence.
Many iridescent objects appear that way only when viewed at certain angles, but the beetle remains iridescent from any angle. Bartl says the way the beetle does that is an "ingenious engineering strategy" that approximates a technology for controlling the propagation of visible light.
A single beetle scale is not a continuous crystal, but includes some 200 pieces of chitin, each with the diamond-based crystal structure but each oriented in a different direction. So each piece reflects a slightly different wavelength or shade of green.
"Each piece is too small to be seen individually by your eye, so what you see is a composite effect," with the beetle appearing green from any angle, Bartl explains.
Scientists don't know how the beetle uses its color, but "because it is an unnatural green, it's likely not for camouflage," Bartl says. "It could be to attract mates."
The study was funded by the National Science Foundation, American Chemical Society, the University of Utah and Brigham Young University.
Fausto Intilla - www.oloscience.com

Monday, May 19, 2008

Researchers teach 'Second Life' avatar to think


TROY, N.Y. (AP) - Edd Hifeng barely merits a second glance in 'Second Life.' A steel-gray robot with lanky limbs and linebacker shoulders, he looks like a typical avatar in the popular virtual world.

But Edd is different. His actions are animated not by a person at a keyboard but by a computer.

Edd is a creation of artificial intelligence, or AI, by researchers at Rensselaer Polytechnic Institute, who endowed him with a limited ability to converse and reason. It turns out 'Second Life' is more than a place where pixelated avatars chat, interact and fly about. It's also a frontier in AI research because it's a controllable environment where testing intelligent creations is easier.

'It's a very inexpensive way to test out our technologies right now,' said Selmer Bringsjord, director of the Rensselaer Artificial Intelligence and Reasoning Laboratory.

Bringsjord sees Edd as a forerunner to more sophisticated creations that could interact with people inside three-dimensional projections of settings like subway stops or city streets. He said the holographic illusions could be used to train emergency workers or solve mysteries.

But first, a virtual reality check. Edd is not running rampant through the cyber streets of 'Second Life.' He goes only where Bringsjord and his graduate students place him for tests. He can answer questions like 'Where are you from?' but understands only English that has previously been translated into mathematical logic.

'Second Life' is attractive to researchers in part because virtual reality is less messy than plain-old reality. Researchers don't have to worry about wind, rain or coffee spills. And virtual worlds can push along AI research without forcing scientists to solve the most difficult problems -- like, say, creating a virtual human -- right away, said Michael Mateas, a computer science professor at the University of California at Santa Cruz.

Researching in virtual realities has become increasingly popular the past couple years, said Mateas, leader of the school's Expressive Intelligence Studio for AI and gaming. 'It's a fantastic sweet spot -- not too simple, not too complicated, high cultural value,' he said.

Bringsjord is careful to point out that the computations for Edd's mental feats have been done on workstations and are not sapping 'Second Life' servers. The calculations will soon be performed on a supercomputer at Rensselaer with support from research co-sponsor IBM Corp.

Operators of 'Second Life' don't seem concerned about synthetic agents lurking in their world. John Lester, Boston operations manager for Linden Lab, said the San Francisco-based company sees a 'fascinating' opportunity for AI to evolve.

'I think the real future for this is when people take these AI-controlled avatars and let them free in 'Second Life,'' Lester said, '... let them randomly walk the grid.'

That is years off by most experts' estimations. Edd's most sophisticated cognitive feat so far -- played out in 'Second Life' and posted on the Web -- involves him witnessing a gun being switched from one briefcase to another. Edd was able to infer that another 'Second Life' character who left the room during the switch would incorrectly think the gun was still in the first suitcase.

This ability to make inferences about the thoughts of others is significant for an AI agent, though it puts Edd on par with a 4-year-old -- and the calculus required 'under the hood' to achieve this feat is mind-numbingly complex.

A computer program smart enough to fool someone into thinking they're interacting with another person -- the traditional Holy Grail for AI researchers -- has been elusive. One huge problem is getting computers to understand concepts imparted in language, said Jeremy Bailenson, director of the Virtual Human Interaction Lab at Stanford University. AI agents do best in tightly controlled environments: Think of automated phone programs that recognize your responses when you say 'operator' or 'repair.'

Bringsjord sees 'Second Life' as a way station. He eventually wants to create other environments where more sophisticated creations could display courage or deceive people, which would be the first step in developing technology to detect deception.

The avatars could be projected at RPI's $145 million Experimental Media and Performing Arts Center, opening in October, which will include spaces for holographic projections. Officials call them 'holodecks' in homage to the virtual reality room on the 'Star Trek' television series.

That sort of visual fidelity is many years down the line, just like complex AI. John Kolb, RPI's chief information officer, said the best three-dimensional effects still require viewers to wear special light-polarizing glasses.

'If you want to do texture mapping on a wall for instance, that's easy. We can do that today,' Kolb said. 'If you want to start to build cognitive abilities into avatars, well, that's going to take a bit more work.'
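The RPI system reaches the briefcase conclusion described above by running statements encoded in mathematical logic through automated reasoning, and the article stresses how complex that machinery is. The toy sketch below only captures the shape of the 'false belief' inference: it records what each character actually witnessed and answers 'where does this character think the gun is?' from that record. The class and method names are invented for illustration and have nothing to do with RPI's software.

```python
class Observer:
    """Tracks an observer's last-seen location for each object."""

    def __init__(self, name):
        self.name = name
        self.present = True
        self.beliefs = {}          # object -> location the observer last saw it in

    def witness(self, obj, location):
        if self.present:           # absent observers don't update their beliefs
            self.beliefs[obj] = location

    def thinks(self, obj):
        return self.beliefs.get(obj, "unknown")


edd = Observer("Edd")
other = Observer("Other avatar")

# Both characters see the gun placed in the first briefcase.
for obs in (edd, other):
    obs.witness("gun", "briefcase 1")

# The other avatar leaves the room; only Edd sees the switch.
other.present = False
for obs in (edd, other):
    obs.witness("gun", "briefcase 2")

print(edd.thinks("gun"))      # briefcase 2: what is actually true
print(other.thinks("gun"))    # briefcase 1: the false belief Edd can reason about
```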

Fausto Intilla - www.oloscience.com

Monday, May 12, 2008

Braille Converter Bridges The Information Gap


ScienceDaily (May 12, 2008) — A free, e-mail-based service that translates text into Braille and audio recordings is helping to bridge the information gap for blind and visually impaired people, giving them quick and easy access to books, news articles and web pages.
Developed by European researchers, the RoboBraille service offers a unique solution to the problem of converting text into Braille and audio without the need for users to operate complicated software.
“We started working in this field 20 years ago, developing software to translate text into Braille, but we discovered that users found the programs difficult to use – we therefore searched for a simpler solution,” explains project coordinator Lars Ballieu Christensen, who also works for Synscenter Refsnaes, a Danish centre for visually impaired children.
The result of the EU-funded project was RoboBraille, a service that requires no more skill with a computer than the ability to send an e-mail.
Users simply attach a text they want to translate in one of several recognised formats, from plain text and Word documents to HTML and XML. They then e-mail the text to the service’s server. Software agents then automatically begin the process of translating the text into Braille or converting it into an audio recording through a text-to-speech engine.
“The type of output and the language depend on the e-mail address the user sends the text to,” Christensen says. “A document sent to one address would be converted into spoken British English, while a text sent to another would be translated from Portuguese into six-dot Braille.”
The user then receives the translation back by e-mail, which can be read on a Braille printer or on a tactile display, a device connected to the computer with a series of pins that are raised or lowered to represent Braille characters.
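Because the whole interface is ordinary e-mail, any script that can send a message with an attachment can drive the service. The sketch below uses only Python's standard library; the service address, sender and SMTP host are placeholders of my own, since the article's actual RoboBraille addresses did not survive in this copy.

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def submit_for_conversion(document: Path, service_address: str,
                          sender: str, smtp_host: str) -> None:
    """E-mail a document to a conversion service; the result arrives by e-mail too."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = service_address
    msg["Subject"] = document.name
    msg.add_attachment(document.read_bytes(),
                       maintype="application", subtype="octet-stream",
                       filename=document.name)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# Hypothetical usage; the addresses below are placeholders, not real RoboBraille endpoints.
# submit_for_conversion(Path("chapter1.txt"),
#                       service_address="braille@example.org",
#                       sender="reader@example.com",
#                       smtp_host="smtp.example.com")
```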
RoboBraille can currently translate text written in English, Danish, Italian, Greek and Portuguese into Braille and speech. The service can also handle text-to-speech conversions in French and Lithuanian.
Christensen notes that the RoboBraille partners are constantly working on adding new languages to the service and plan to start providing Braille and audio translations for Russian, Spanish, German and Arabic. They are also working on making the service compatible with PDF documents and text scanned from images.
Up to 14,000 translations a day
At present, the service translates an average of 500 documents a day, although it could handle as many as 14,000. RoboBraille can return a simple text in Braille in under a minute while taking as long as 10 hours to provide an audio recording of a book.
As of January, the RoboBraille system had carried out 250,000 translations since it first went online.
The team have won widespread recognition for their work, receiving the 2007 Social Contribution Award from the British Computer Society in December and, in April, the 2008 award for technological innovation from Milan-based Well-Tech.
“We initially started offering the service only in Denmark but to make it viable commercially we needed to broaden our horizons. Hence the eTen project which allowed us to involve other organisations across Europe in developing and expanding the service, not only geographically but also in terms of users,” Christensen says.
In addition to the blind and visually impaired, the service can also help dyslexics, people with reading difficulties and the illiterate. The project partners plan to continue to offer the service for free to such users and other individuals, while in parallel developing commercial services for companies and public institutions.
“Pharmaceutical companies in Europe will soon be required to ensure all medicine packaging is labelled in Braille and we are currently working with three big firms to provide that service,” Christensen explains. “Banks and insurance companies are also interested in using it to provide statements in Braille as too is the Danish tax office. In Italy there is interest in using it in the tourism sector.”
The RoboBraille team, which recently received a €1.1 million grant over four years from the Danish government, expect the service to be profitable within four or five years.
And although they are not actively seeking investors, they are interested in partnerships with organisations interested in collaborating on specific social projects.
RoboBraille was funded under the EU's eTEN programme for market validation and implementation.

Fausto Intilla - www.oloscience.com

Quantum Cryptography Cracked?


29 April 2008—Quantum cryptography, touted by scientists as the ultimate unbreakable code, may turn out to be susceptible to eavesdropping after all when implemented practically, according to a Swedish duo.
“Quantum codes are supposed to guarantee 100 percent security,” says Jan-Ake Larsson, associate professor of mathematics at Linkoeping University, in Sweden. “If they don't live up to that promise, that's a problem.”
Larsson and his former graduate student Jorgen Cederlof, who now works for Google, say they have spotted a flaw in practical quantum codes. Their report on this flaw and a patch for the problem appear in the April issue of the IEEE Transactions on Information Theory.
The most secure codes currently in use rely on public-key cryptography, whose security stems from the fact that computers today cannot factor very large numbers within a useful time period. However, in theory, given sufficiently powerful computers, these codes can be cracked.
Quantum cryptography, in contrast, is supposed to be unbreakable, even in theory, because its security is based on a fundamental tenet of quantum mechanics. It turns out that the very act of measurement in quantum mechanics changes the nature of the quantum system being observed. Thus, if an eavesdropper listens in on a quantum message between two parties, he or she changes the message in a way that is detectable. Through a multistep process, quantum encryption systems—and there are at least three on the market now—use the security of quantum mechanics to generate cryptographic keys. These quantum keys are ciphers used to encode and decode messages.
The process of key generation, though based on quantum physics, also requires exchanging some information on a regular “classical” channel. Eavesdropping on the classical channel cannot be detected. One of the final steps in setting up a quantum key is to authenticate the communicating parties—determining that Bob is really talking to Alice, not some eavesdropper.
If there is no authentication, Alice and Bob will be open to a “man in the middle” attack, as it is termed by code breakers. The attack would work like this, Cederlof explains: “Now Eve comes along, buys a couple of [quantum encryption] devices identical to the ones Alice and Bob have, cuts the cables between Alice and Bob, and connects her devices at both ends. Now Alice will think she is talking to Bob, but in reality she is talking to Eve. Eve just acts as Bob would have, and after a while Alice and Eve have created a shared secret key. The same thing happens between Eve and Bob. When Alice tries to send an encrypted message to Bob, she will encrypt it with a key known only to Eve (but which Alice thinks only Bob knows). Eve intercepts the message, decrypts it, reads it, encrypts it with the key she shares with Bob, and sends it to Bob. Alice and Bob never suspect anything.”
The way around this is to communicate classically and make sure Alice is really talking to Bob. But that is exactly where the vulnerability lies.
“To our surprise, the authentication was not secure,” says Larsson. He and Cederlof say that it is difficult to eavesdrop, but the possibility does exist. In their paper they suggest a patch. “The modification we propose is basically an extra exchange of a small amount of random bits on the classical channel,” says Larsson.
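The specific authentication scheme analysed in the paper, and the extra exchange of random bits proposed as a patch, are beyond a short example; the sketch below only shows the general idea the man-in-the-middle attack turns on. Alice tags each message on the classical channel with a key known only to her and Bob, so Eve cannot forge or splice messages without the tag failing. HMAC-SHA-256 here is a stand-in of mine, not the primitive used in quantum key distribution systems.

```python
import hmac, hashlib, secrets

# A short secret Alice and Bob share in advance (in QKD this initial key is
# later replenished from the quantum-generated key stream).
shared_key = secrets.token_bytes(32)

def tag(message: bytes, key: bytes) -> bytes:
    """Authentication tag appended to every message on the classical channel."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes, key: bytes) -> bool:
    return hmac.compare_digest(tag(message, key), received_tag)

# Alice sends basis information over the public classical channel.
message = b"bases: + x + + x x + x"
t = tag(message, shared_key)

# Bob accepts the genuine message...
print(verify(message, t, shared_key))                       # True

# ...but a message re-created by Eve, who lacks the shared key, is rejected.
eves_key = secrets.token_bytes(32)
print(verify(message, tag(message, eves_key), shared_key))  # False
```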
According to Tassos Nakassis, a computer scientist at the National Institute of Standards and Technology (NIST), in Gaithersburg, Md., the error may have originated because quantum cryptography is an emerging interdisciplinary field that combines advanced quantum physics with traditional code making. Authentication and its weaknesses may have gotten lost in the conversation between quantum physicists and classical cryptographers.
The Swedes went looking in just the right place for a vulnerability, according to Bruce Schneier, an expert in cryptography and chief technology officer at BT Counterpane, in Santa Clara, Calif. “Authentication has always been a problem with quantum crypto,” he says.
Audrius Berzanskis, chief operating officer at the quantum cryptography systems firm MagiQ Technologies, in New York City, claims his firm's systems are immune to this kind of attack, because they are overly conservative with respect to how they treat errors in the quantum channel—whether or not the errors are caused by an eavesdropper. This conservatism comes at the cost of the rate at which quantum keys are generated. And Berzanskis adds that Larsson and Cederlof's patch might allow the key rate to increase. Experts from outside quantum cryptography companies agree that the vulnerability is real, but most think it would be impractical to exploit.
“This is an interesting issue and worthy of the awareness of the community,” says physicist Joshua Bienfang, who works on quantum cryptography at NIST. But he notes that Larsson and Cederlof correctly emphasize that the attack relies on Eve capitalizing on opportunities that occur with very low probability. In their worst-case scenario, with a computationally omnipotent Eve, they estimate it would take something on the order of nine months to break the system. And he says that the patch offered should “firmly shut the door on this type of attack.”
Norbert Lutkenhaus, a physicist at the Institute for Quantum Computing, in Canada, summed it up. “Practically, I don't think it is a threat of any kind,” he says. “But it is good to know about the vulnerability.”

Fausto Intilla - www.oloscience.com

Wednesday, April 16, 2008

Location Spoofing Possible With WiFi Devices: Positioning System Used By iPhone/iPod Breached


ScienceDaily (Apr. 16, 2008) — The Apple iPhone and iPod touch support a new self-localization feature that uses known locations of wireless access points as well as the device's own ability to detect access points. Now researchers at ETH Zurich/Swiss Federal Institute of Technology have demonstrated that positions displayed by devices using this system can be falsified, making the use of this self-localization system unsuitable in a number of security- and safety-critical applications.
In January, Skyhook Wireless Inc. announced that Apple would use Skyhook's WiFi Positioning System (WPS) for its popular Map applications. The WPS database contains information on access points throughout the world. Skyhook itself provides most of the data in the database, with users contributing via direct entries to the database, and requests for localization. ETH Zurich Professor Srdjan Capkun of the Department of Computer Science and his team of researchers analysed the security of Skyhook's positioning system. The team's results demonstrate the vulnerability of Skyhook's and similar public WLAN positioning systems to location spoofing attacks.

Impersonation and elimination

When an Apple iPod or iPhone wants to find its position, it detects its neighbouring access points, and sends this information to Skyhook servers. The servers then return the access point locations to the device. Based on this data, the device computes its location. To attack this localization process, Professor Capkun's team decided to use a dual approach. First, access points from a known remote location were impersonated. Second, signals sent by access points in the vicinity were eliminated by jamming. These actions created the illusion in localized devices that their locations were different from their actual physical locations.

Simple falsification

Skyhook's WPS works by requiring a device to report the Media Access Control (MAC) addresses that it detects. However, since MAC addresses can be forged by rogue access points, they can be easily impersonated. Furthermore, access point signals can be jammed and signals from access points in the vicinity of the device can thus be eliminated. These two actions make location spoofing attacks possible.

Compromised usage

Professor Capkun explained that by demonstrating these attacks, the team hoped to point out the limitations, despite guarantees, of public WLAN-based localization services as well as of applications for such services. He said, "Given the relative simplicity of the performed attacks, it is clear that the use of WLAN-based public localization systems, such as Skyhook's WPS, should be restricted in security and safety-critical applications."

Adapted from materials provided by ETH Zurich/Swiss Federal Institute of Technology, via EurekAlert!, a service of AAAS.
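Skyhook's actual estimator is proprietary; the toy model below only makes the attack described above concrete, under the deliberately simple assumption that the device averages the surveyed coordinates of the access points it reports. Feed it the MAC addresses really heard nearby and it lands near the true position; feed it MACs recorded at a remote site, with the local ones jammed away, and it lands there instead. The database entries and coordinates are made up for illustration.

```python
# Toy WLAN positioning: the device reports MAC addresses it hears, the server
# looks up their surveyed coordinates, and the position is their average.
# This centroid model is an illustrative simplification, not Skyhook's algorithm.

AP_DATABASE = {                      # MAC address -> (latitude, longitude), made up
    "00:11:22:33:44:01": (47.3763, 8.5476),     # near ETH Zurich
    "00:11:22:33:44:02": (47.3768, 8.5481),
    "aa:bb:cc:dd:ee:01": (37.3318, -122.0312),  # somewhere in Cupertino
    "aa:bb:cc:dd:ee:02": (37.3321, -122.0307),
}

def estimate_position(reported_macs):
    coords = [AP_DATABASE[m] for m in reported_macs if m in AP_DATABASE]
    lat = sum(c[0] for c in coords) / len(coords)
    lon = sum(c[1] for c in coords) / len(coords)
    return lat, lon

# Honest report: the access points actually audible in Zurich.
print(estimate_position(["00:11:22:33:44:01", "00:11:22:33:44:02"]))

# Spoofed report: local APs jammed, remote MACs impersonated by rogue hardware.
print(estimate_position(["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"]))
```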

Fausto Intilla - www.oloscience.com

Tuesday, April 15, 2008

Getting Wired For Terahertz Computing


Source:
ScienceDaily (Apr. 15, 2008) — University of Utah engineers took an early step toward building superfast computers that run on far-infrared light instead of electricity: They made the equivalent of wires that carried and bent this form of light, also known as terahertz radiation, which is the last unexploited portion of the electromagnetic spectrum.
"We have taken a first step to making circuits that can harness or guide terahertz radiation," says Ajay Nahata, study leader and associate professor of electrical and computer engineering. "Eventually -- in a minimum of 10 years -- this will allow the development of superfast circuits, computers and communications."
Electricity is carried through metal wires. Light used for communication is transmitted through fiberoptic cables and split into different colors or "channels" of information using devices called waveguides. In a study to be published April 18 in the online journal Optics Express, Nahata and colleagues report they designed stainless steel foil sheets with patterns of perforations that successfully served as wire-like waveguides to transmit, bend, split or combine terahertz radiation.
"A waveguide is something that allows you to transport electromagnetic radiation from one point to another point, or distribute it across a circuit," Nahata says.
If terahertz radiation is to be used in computing and communication, it not only must be transmitted from one device to another, "but you have to process it," he adds. "This is where terahertz circuits are important. The long-term goal is to develop capabilities to create circuits that run faster than modern-day electronic circuits so we can have faster computers and faster data transfer via the Internet."
Nahata conducted the study with two doctoral students in electrical and computer engineering: Wenqi Zhu and Amit Agrawal.
Developing Terahertz Technology
The electromagnetic spectrum, which ranges from high to low frequencies (or short to long wavelengths), includes: gamma rays, X-rays, ultraviolet light, visible light (violet, blue, green, yellow, orange and red), infrared light (including radiant heat and terahertz radiation), microwaves, FM radio waves, television, short wave and AM radio.
Fiberoptic phone and data lines now use near-infrared light and some visible light. The only part of the spectrum not now used for communications or other practical purposes is terahertz-frequency or far-infrared radiation -- also nicknamed T-rays -- located on the spectrum between mid-infrared and microwaves.
With so much of the spectrum clogged by existing communications, engineers would like to harness terahertz frequencies for communication, much faster computing and even for anti-terrorism scanners and sensors able to detect biological, chemical or other weapons. Nahata says the new study is relevant mainly to computers that would use terahertz radiation to run at speeds much faster than current computers.
In March 2007, Nahata, Agrawal and others published a study in the journal Nature showing it was possible to control a signal of terahertz radiation using thin stainless steel foils perforated with round holes arranged in semi-regular patterns.
This February, British researchers reported they used computer simulations and some experiments to show that indentations punched across an entire sheet of copper-clad polymer could hold terahertz radiation close to the sheet's surface. That led them to conclude the far-infrared light could be guided along such a material's surface.
But the London researchers did not actually manipulate the direction the terahertz radiation moved, such as by bending or splitting it.
"We have demonstrated the ability to do this, which is a necessary requirement for making terahertz guided-wave circuits," Nahata says.
Circuits: From Electrical to Optical to Terahertz
Wires act as waveguides for electricity. Wires connect active devices such as transistors, which switch or adjust the electric signal. That is the basis for how computers work today. An electronic integrated circuit is a computer processor made of wires, transistors, resistors and capacitors on a semiconductor chip made of silicon.
In optical communications, the waveguides carry laser-generated light in fiberoptic cables and lines etched or deposited on an insulator or semiconductor surface. Nahata says photonic integrated circuits now are used for phone and Internet communications, mainly for combining or "multiplexing" different colors or channels of light entering a fiber-optic cable and separating or "demultiplexing" the different wavelengths exiting the cable.
"Electronic circuits today work at gigahertz frequencies -- billions of cycles per second. Electronic devices like a computer chip can operate at gigahertz," Nahata says. "What people would like to do is develop capabilities to transport and manipulate data at terahertz frequencies [trillions of hertz.] It's a speed issue. People want to be able to transfer data at higher speeds. People would like to download a movie in a few seconds."
"In this study, we've demonstrated the first step toward making circuits that use terahertz radiation and ultimately might work at terahertz speeds," or a thousand times faster than today's gigahertz-speed computers, Nahata says.
Channeling, Bending, Splitting and Coupling T-Rays
"People have been working on terahertz waveguides for a decade," he says. "We've shown how to make these waveguides on a flat surface so that you can make circuits just like electronic circuits on silicon chips."
The researchers used pieces of stainless steel foil about 4 inches long, 1 inch wide and 625 microns thick, or 6.25 times the thickness of a human hair. They perforated the metal with rectangular holes, each measuring 500 microns (five human hair widths) by 50 microns (half a hair width). The rectangular holes were arranged side by side in three different patterns to form "wires" for terahertz radiation:
One line of rectangles that served as a "wire" and carried terahertz radiation.
A line that becomes two lines -- like the letter Y -- to split the far-infrared light, similar to a splitter used to route a home cable TV signal to separate television sets.
Two lines that curve close to each other in the middle -- like an X where the two lines come close but don't touch -- so the radiation could be "coupled," or moved from one line or "wire" to another.
The straight pattern successfully carried terahertz radiation in a straight line. The other two patterns "changed the direction the terahertz radiation was moving" by splitting it or coupling it, Nahata says. The study showed the terahertz radiation was closely confined both vertically (within 1.69 millimeters of the foil's surface) and horizontally (within 2 millimeters of the pattern of rectangles as it moved over them).
"All we've done is made the wires" for terahertz circuits, Nahata says. "Now the issue is how do we make devices [such as switches, transistors and modulators] at terahertz frequencies?"
When terahertz radiation is fed into the stainless steel waveguides, it spans a range of frequencies. One frequency is guided across the steel surface. That frequency is determined by the size of perforations in the foil. The engineers chose a frequency they could generate and measure: about 0.3 terahertz, or 300 gigahertz. Terahertz radiation is defined as ranging from 0.1 terahertz (or 100 gigahertz) to 10 terahertz.
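A quick check of the numbers (my arithmetic, not the paper's): at the 0.3-terahertz frequency the team worked with, the free-space wavelength is

```latex
\lambda = \frac{c}{f} = \frac{3 \times 10^{8}\ \text{m/s}}{0.3 \times 10^{12}\ \text{Hz}} = 1\ \text{mm},
```

so the 500-micron rectangles are roughly half a free-space wavelength long, which fits the statement above that the size of the perforations determines which frequency the foil guides.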
The design of the waveguide means that it carries terahertz radiation in the form of surface plasma waves -- also known as plasmons or plasmon polaritons -- which are analogous to electrons in electrical devices or photons of light in optical devices. The surface plasma waves are waves of electromagnetic radiation at a terahertz frequency that are bound to the surface of the steel foil because they are interacting with moving electrons in the metal, Nahata says.
Adapted from materials provided by University of Utah, via EurekAlert!, a service of AAAS.
Fausto Intilla

Monday, April 14, 2008

Supercomputers Simulating As Close As Possible To Reality


ScienceDaily (Apr. 14, 2008) — Supercomputers simulate products and manufacturing processes within minutes. In the Computer Aided Robust Design (CAROD) project, Fraunhofer researchers are developing new methods and software that significantly improve the quality of the virtual components.
Trucks drive thousands of kilometers through Europe every month, taking oranges from Greece to Scandinavia, delivering Spanish vegetables to German wholesalers, and collecting milk from farms in the region to take it to central dairies. To make sure the tires, wheel rims and other parts will survive the many kilometers without breaking down, the manufacturers test prototypes in test rigs to discover their service life.
Such a test often lasts several weeks, yet it can be rendered useless by malfunctions, such as when bearings or sensors wear out. In the Computer Aided Robust Design (CAROD) project, research scientists from seven Fraunhofer Institutes are devising methods with which malfunctions of this nature can be simulated ahead of time. The researchers are using the results to develop sturdy test rigs for life-cycle tests.
“Today the development and testing of prototypes – be they entire cars or individual components – takes place mainly in the computer,” says Andreas Burblies, spokesman for the Fraunhofer Numerical Simulation Alliance. But this simulation only reflects reality to a limited extent. “As a rule, there are no parts or manufacturing processes in which all product or process properties are identical. But the developers always get the same simulation results if they enter the same parameters.” This is where the researchers come in with their Computer Aided Robust Design. The goal is to develop new methods and software that make it possible to factor the real deviations into the simulation calculations. In this way mechatronic systems, crash tests or laser processing methods can be made even less vulnerable to errors and variations.
One of the pillars of the new technology is the Taguchi method. The Japanese scientist Genichi Taguchi developed a method of making products, processes and systems resistant to interference. It is already applied in quality management, enabling the industry to achieve the optimum product quality. The task of CAROD is to improve quality by taking faults, variations and breakdowns into account during the virtual design phase.
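The Taguchi approach quantifies robustness by asking how much a quality measure scatters when the inputs scatter; one common summary is the "larger-the-better" signal-to-noise ratio, SN = -10 log10(mean(1/y_i^2)). The sketch below applies that idea to a made-up process model under Monte Carlo parameter variation. It illustrates the general method only; the model, parameter values and tolerances are invented, and none of this reflects the CAROD software itself.

```python
import numpy as np

def process_strength(thickness, temperature):
    """Made-up process model: the quality measure to be made robust."""
    return 100.0 * thickness - 0.05 * (temperature - 200.0) ** 2

def taguchi_sn_larger_is_better(samples):
    """Taguchi larger-the-better signal-to-noise ratio, in decibels."""
    samples = np.asarray(samples, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / samples ** 2))

rng = np.random.default_rng(7)
N = 10_000

# Compare two nominal settings under the same manufacturing scatter.
for nominal_temp in (200.0, 230.0):
    thickness = rng.normal(2.0, 0.05, N)             # tolerance on sheet thickness
    temperature = rng.normal(nominal_temp, 5.0, N)   # furnace temperature variation
    y = process_strength(thickness, temperature)
    print(nominal_temp, round(taguchi_sn_larger_is_better(y), 2))
# The setting with the higher signal-to-noise ratio is the more robust choice.
```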
“We are aiming to get as close to the natural manufacturing conditions as possible with our simulations,” says Dr. Tanja Clees, project manager at the Fraunhofer Institute for Algorithms and Scientific Computing SCAI in Sankt Augustin. Right now it is still early days for the new simulation software, but the experts are confident of achieving good results very soon. CAROD can be seen at the Hannover Messe in Germany from April 21 through 25.
Adapted from materials provided by Fraunhofer-Gesellschaft.
Fausto Intilla - www.oloscience.com