Sunday, December 23, 2007

Next-generation RAM: Remembering The Future

Source:

ScienceDaily (Dec. 23, 2007) — As electronics designers cram more and more components onto each chip, current technologies for making random-access memory (RAM) are running out of room. European researchers have a strong position in a new technology known as resistive RAM (RRAM) that could soon be replacing flash RAM in USB drives and other portable gadgets.
On the ‘semiconductor road map’ setting out the future of the microchip industry, current memory technologies are nearing the end of the road. Future computers and electronic gadgets will need memory chips that are smaller, faster and cheaper than those of today – and that means going back to basics.
Today’s random-access memory (RAM) falls mainly into three classes: static RAM (SRAM), dynamic RAM (DRAM), and flash memory. Each has its advantages and drawbacks; flash, for instance, is the only one to retain data when the power is switched off, but is slower.
According to Professor Paul Heremans of the University of Leuven in Belgium, circuit designers looking for the best performance often have to combine several memory types on the same chip. This adds complexity and cost.
A more serious issue is scalability. As designers pack more components onto each chip, the width of the smallest features is shrinking, from 130 nanometers (nm) in 2000 to 45 nm today. Existing memory technologies are good for several more generations, Heremans says, but are unlikely to make the transition to 22 nm (scheduled for 2011) or 16 nm (2018).
So we need new memory technologies that can be made smaller than those of today, as well as preferably being faster, power saving and non-volatile. The runners in the global memory technology race form a veritable alphabet soup of acronyms including MRAM, RRAM, FeRAM, Z-RAM, SONOS, and nano-RAM.
No universal solution
Early in 2004, Heremans became the coordinator of an EU-supported project that included two of Europe’s biggest semiconductor manufacturers: STMicroelectronics of Italy and Philips of the Netherlands. Heremans’ own institution, IMEC, is a leading independent research centre in microelectronics and nanotechnology. The Polish Academy of Sciences was the fourth partner in the project.
The Nosce Memorias (Latin for ‘Know your memories’) project started out to develop a universal memory that was fast, non-volatile, and flexible enough to replace several existing types. It had to be compatible with CMOS, the current standard chip manufacturing technology, and scalable for several generations below 45 nm.
As the research progressed it became clear that a universal memory would require too many compromises, notes Heremans. Instead, the team targeted a non-volatile memory that would have better performance and scalability than current flash technology.
Flash memory, used for USB ‘key-ring’ drives and digital cameras, can store data for years using transistors to retain electric charge. The technology can be scaled down for several more generations, Heremans says, but sooner or later it will reach a limit. Flash memory is also slow to write and erase, and needs high voltages to operate.
Exploring resistive memory
The hopes of Nosce Memorias rested on a technology known as resistive RAM (RRAM). Instead of storing information in transistors (flash memory) or capacitors (DRAM), RRAM relies on the ability to alter the electrical resistance of certain materials by applying an external voltage or current. RRAM is non-volatile, and its simple structure is ideal for future generations of CMOS chips.
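For readers who think in code, here is a minimal sketch of the resistive-switching idea, modeling a cell as a two-state resistor toggled by voltage pulses. The threshold and resistance values are invented for illustration, not taken from the project's actual devices.

```python
# Toy model of a resistive RAM (RRAM) cell: the bit is encoded in the
# cell's resistance, switched by voltage pulses. All values here are
# illustrative assumptions, not measured device parameters.

class RRAMCell:
    HIGH_R = 1_000_000  # ohms, high-resistance state -> logical 0
    LOW_R = 1_000       # ohms, low-resistance state  -> logical 1
    SET_V = 2.0         # volts needed to switch to low resistance
    RESET_V = -2.0      # volts needed to switch back to high resistance

    def __init__(self):
        self.resistance = self.HIGH_R  # non-volatile: state persists

    def apply_pulse(self, volts):
        """Write by pulsing: a positive pulse sets, a negative pulse resets."""
        if volts >= self.SET_V:
            self.resistance = self.LOW_R
        elif volts <= self.RESET_V:
            self.resistance = self.HIGH_R
        # pulses between the thresholds leave the stored state untouched

    def read(self):
        """Read non-destructively with a small sense voltage."""
        return 1 if self.resistance == self.LOW_R else 0

cell = RRAMCell()
cell.apply_pulse(2.5)   # write a 1
assert cell.read() == 1
cell.apply_pulse(-2.5)  # erase back to 0
assert cell.read() == 0
```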
The project looked at three types of RRAM. The first, known as a ferroelectric Schottky diode, was abandoned when the researchers realised they were unlikely to be able to create starting materials with the required properties.
The second technology studied was a metal-organic charge-transfer material called CuTCNQ. Although CuTCNQ has been known for around 20 years, its precise mode of operation was unclear, Heremans says. The team learned a lot about how this material works, developed new ways of preparing it, and succeeded in creating the smallest organic memory cells ever made, at 100 nm across.
Lastly, the team looked at RRAM based on organic semiconductors. Because this work did not start until halfway through the project, the results did not reach the same level as those for CuTCNQ, but significant progress was made.
EMMA carries on
When Nosce Memorias ended in March 2007, plenty of work remained to be done to create a workable RRAM.
The challenge was taken up by EMMA (Emerging Materials for Mass-storage Architectures), another EU-supported project that runs until September 2009. Like Nosce Memorias, EMMA is coordinated by IMEC and has STMicroelectronics as a member, though the other partners are different.
EMMA is working on the CuTCNQ developed by Nosce Memorias, as well as on metal oxides. For CuTCNQ, Heremans explains, the goals are to make the material more durable through better control of the switching mechanism, now that this is understood.
Extended working life is also important for the polymer semiconductors pioneered by Nosce Memorias. Low-cost polymer memory could be important in RFID tags (also called ORFID) for the remote identification of goods, equipment and people.
Adapted from materials provided by ICT Results.

Fausto Intilla
www.oloscience.com

Tuesday, November 6, 2007

New Computer Program Automates Chip Debugging

Source:

ScienceDaily (Nov. 6, 2007) — Fixing design bugs and wrong wire connections in computer chips after they've been fabricated in silicon is a tedious, trial-and-error process that often costs companies millions of dollars and months of time-to-market.
Engineering researchers at the University of Michigan say it doesn't have to be that way. They've developed a new technology to automate "post-silicon debugging."
"Today's silicon technology has reached such levels of small-scale fabrication and of sheer complexity that it is almost impossible to produce computer chips that work correctly under all scenarios," said Valeria Bertacco, assistant professor of electrical engineering and computer science and co-investigator in the new technology. "Almost all manufacturers must produce several prototypes of a given design before they attain a working chip."
FogClear, as the new method is called, uses puzzle-solving search algorithms to diagnose problems early on and automatically adjust the blueprint for the chip. It reduces parts of the process from days to hours.
"Practically all complicated chips have bugs and finding all bugs is intractable," said Igor Markov, associate professor of computer science and electrical engineering and another of FogClear's developers. "It's a paradox. Today, manufacturers are producing chips that must work for almost all applications, from e-mail to chess, but they cannot be validated for every possible condition. It's physically impossible."
In the current system, a chip design is first validated in simulations. Then a draft is cast in silicon, and this first prototype undergoes additional verification with more realistic applications. If a bug is detected at this stage, an engineer must narrow down the cause of the problem and then craft a fix that does not disrupt the delicate balance of all other components of the system. This can take several days. Engineers then produce new prototypes incorporating all the fixes. This process repeats until they arrive at a prototype that is free of bugs. For modern chips, the process of making sure a chip is free of bugs takes as much time as production.
"Bugs found post-silicon are often very difficult to diagnose and repair because it is difficult to monitor and control the signals that are buried inside a silicon die, or chip. Up until now engineers have handled post-silicon debugging more as an art than a science," said Kai-Hui Chang, a recent doctoral graduate who will present a paper on FogClear at the upcoming International Conference on Computer-Aided Design.
FogClear automates this debugging process. The computer-aided design tool can catch subtle errors that several months of simulations would still miss. Some bugs might take days or weeks before causing any miscomputation, and they might only do so under very rare circumstances, such as operating at high temperature. The new application searches for and finds the simplest way to fix a bug, the one that has the least impact on the working parts of the chip. The solution usually requires reconnecting certain wires, and does not affect transistors.
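As an illustration of the "simplest fix first" idea, the sketch below searches candidate wire reconnections in order of increasing size and returns the first repair that passes every regression test. It is a hedged stand-in for the minimal-impact goal described above, not FogClear's actual algorithm; all names and data are hypothetical.

```python
from itertools import combinations

# Sketch of a minimal-impact repair search: try candidate wire
# reconnections in order of increasing size and return the first set
# after which all regression tests pass. Illustrative only.

def simplest_fix(candidate_rewires, passes_all_tests):
    for size in range(1, len(candidate_rewires) + 1):
        for fix in combinations(candidate_rewires, size):
            if passes_all_tests(fix):
                return fix  # the smallest passing repair is found first
    return None  # no combination of candidates repairs the chip

# Toy example: the tests pass only if net "n3" is rerouted to gate "g7".
rewires = [("n1", "g2"), ("n3", "g7"), ("n4", "g9")]
print(simplest_fix(rewires, lambda fix: ("n3", "g7") in fix))
# -> (('n3', 'g7'),)
```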
Chang, who received his doctorate in electrical engineering and computer science from U-M in August, will present Nov. 6 at the International Conference on Computer-Aided Design in San Jose, California. The paper is titled "Automating Post-Silicon Debugging and Repair." Markov and Bertacco are co-authors with Chang.
Adapted from materials provided by University of Michigan.

Fausto Intilla
www.oloscience.com

Friday, October 19, 2007

Computers With 'Common Sense'


Source:

ScienceDaily (Oct. 18, 2007) — Using a little-known Google Labs widget, computer scientists from UC San Diego and UCLA have brought common sense to an automated image labeling system. This common sense is the ability to use context to help identify objects in photographs.
For example, if a conventional automated object identifier has labeled a person, a tennis racket, a tennis court and a lemon in a photo, the new post-processing context check will re-label the lemon as a tennis ball.
“We think our paper is the first to bring external semantic context to the problem of object recognition,” said computer science professor Serge Belongie from UC San Diego.
The researchers show that the Google Labs tool called Google Sets can be used to provide external contextual information to automated object identifiers.
Google Sets generates lists of related items or objects from just a few examples. If you type in John, Paul and George, it will return the words Ringo, Beatles and John Lennon. If you type “neon” and “argon” it will give you the rest of the noble gases.
“In some ways, Google Sets is a proxy for common sense. In our paper, we showed that you can use this common sense to provide contextual information that improves the accuracy of automated image labeling systems,” said Belongie.
The image labeling system is a three-step process. First, an automated system splits the image up into different regions through the process of image segmentation. In the tennis example above, image segmentation separates the person, the court, the racket and the yellow sphere.
Next, an automated system provides a ranked list of probable labels for each of these image regions.
Finally, the system adds a dose of context by processing all the different possible combinations of labels within the image and maximizing the contextual agreement among the labeled objects within each picture.
It is during this step that Google Sets can be used as a source of context that helps the system turn a lemon into a tennis ball. In this case, these “semantic context constraints” helped the system disambiguate between visually similar objects.
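Here is a minimal sketch of that context-maximizing step, with invented scores: each region carries ranked label candidates, a co-occurrence table stands in for the Google Sets context, and the system picks the label combination with the best combined recognition and agreement score.

```python
from itertools import product

# Toy version of the context step: each segmented region has ranked
# candidate labels with recognition scores; a co-occurrence table
# (standing in for Google Sets / training-set context) scores how well
# labels fit together. All numbers are invented for illustration.

candidates = {
    "region1": [("tennis ball", 0.4), ("lemon", 0.6)],
    "region2": [("tennis racket", 0.9)],
    "region3": [("person", 0.95)],
}

cooccur = {
    frozenset(["tennis ball", "tennis racket"]): 1.0,
    frozenset(["lemon", "tennis racket"]): 0.05,
    frozenset(["tennis ball", "person"]): 0.5,
    frozenset(["lemon", "person"]): 0.3,
    frozenset(["tennis racket", "person"]): 0.8,
}

def agreement(labels):
    """Sum pairwise co-occurrence scores over all label pairs."""
    total = 0.0
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            total += cooccur.get(frozenset([a, b]), 0.0)
    return total

regions = list(candidates)
best = max(
    product(*[candidates[r] for r in regions]),
    key=lambda combo: sum(s for _, s in combo) + agreement([l for l, _ in combo]),
)
print(dict(zip(regions, [l for l, _ in best])))
# context pulls region1 from "lemon" over to "tennis ball"
```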
In another example, the researchers show that an object originally labeled as a cow is (correctly) re-labeled as a boat when the other objects in the image – sky, tree, building and water – are considered during the post-processing context step. In this case, the semantic context constraints helped to correct an entirely wrong image label. The context information came from the co-occurrence of object labels in the training sets rather than from Google Sets.
The computer scientists also highlight other advances they bring to automated object identification. First, instead of doing just one image segmentation, the researchers generated a collection of image segmentations and put together a shortlist of stable image segmentations. This increases the accuracy of the segmentation process and provides an implicit shape description for each of the image regions.
Second, the researchers ran their object categorization model on each of the segmentations, rather than on individual pixels. This dramatically reduced the computational demands on the object categorization model.
In the two sets of images that the researchers tested, the categorization results improved considerably with inclusion of context. For one image dataset, the average categorization accuracy increased more than 10 percent using the semantic context provided by Google Sets. In a second dataset, the average categorization accuracy improved by about 2 percent using the semantic context provided by Google Sets. The improvements were higher when the researchers gleaned context information from data on co-occurrence of object labels in the training data set for the object identifier.
Right now, the researchers are exploring ways to extend context beyond the presence of objects in the same image. For example, they want to make explicit use of absolute and relative geometric relationships between objects in an image – such as “above” or “inside” relationships. This would mean that if a person were sitting on top of an animal, the system would consider the animal to be more likely a horse than a dog.
Reference: “Objects in Context,” by Andrew Rabinovich, Carolina Galleguillos, Eric Wiewiora and Serge Belongie of the Department of Computer Science and Engineering at the UCSD Jacobs School of Engineering, and Andrea Vedaldi of the Department of Computer Science, UCLA.
The paper will be presented on Thursday 18 October 2007 at ICCV 2007 – the 11th IEEE International Conference on Computer Vision in Rio de Janeiro, Brazil.
Funders: National Science Foundation, Alfred P. Sloan Research Fellowship, Air Force Office of Scientific Research, Office of Naval Research.
Adapted from materials provided by University of California - San Diego.

Fausto Intilla

Tuesday, October 16, 2007

Thwarting The Growth Of Internet Black Markets

Source:
Science Daily — Carnegie Mellon University's Adrian Perrig and Jason Franklin, working in conjunction with Vern Paxson of the International Computer Science Institute and Stefan Savage of the University of California, San Diego, have designed new computer tools to better understand and potentially thwart the growth of Internet black markets, where attackers use well-developed business practices to hawk viruses, stolen data and attack services.
"These troublesome entrepreneurs even offer tech support and free updates for their malicious creations that run the gamut from denial of service attacks designed to overwhelm Web sites and servers to data stealing Trojan viruses," said Perrig, an associate professor of electrical and computer engineering and engineering and public policy.
In order to understand the millions of lines of data derived from monitoring the underground markets for more than seven months, Carnegie Mellon researchers developed automated techniques to measure and catalogue the activities of the shadowy online crooks who profit from spewed spam, virus-laden PCs and identity theft. The researchers estimate that the total value of the illegal materials available for sale in the seven-month period could total more than $37 million.
"Our research monitoring found that more than 80,000 potential credit card numbers were available through these illicit underground web economies," said Franklin, a Ph.D. student in computer science. However, the researchers warned that because checking the validity of the card numbers was not possible without credit card company assistance, the cards seen may not have been valid when they were observed.
Whatever the purchases, a buyer will typically contact the black market vendor privately using email, or in some cases, a private instant message. Money generally changes hands through non-bank payment services such as e-gold, making the criminals difficult to track.
To stem the flow of stolen credit cards and identity data, Carnegie Mellon researchers proposed two technical approaches to reduce the number of successful market transactions: a slander attack and a deceptive sales environment, both aimed at undercutting the cyber-crooks' verification, or reputation, systems.
"Just like you need to verify that individuals are honest on E-bay, online criminals need to verify that they are dealing with 'honest' criminals," Franklin said.
In a slander attack, an attacker eliminates the verified status of a buyer or seller through false defamation. "By eliminating the verified status of the honest individuals, an attacker establishes a lemon market where buyers are unable to distinguish the quality of the goods or services," Franklin said.
The researchers also propose to undercut the burgeoning black market activity by creating a deceptive sales environment.
Perrig's team developed a technique to establish fake verified-status identities that are difficult to distinguish from genuine ones, making it hard for buyers to tell honest verified-status sellers from dishonest ones.
"So, when the unwary buyer tries to collect the goods and services promised, the seller fails to provide the goods and services. Such behavior is known as 'ripping.' And it is the goal of all black market site's verification systems to minimize such behavior," said Franklin.
There have been successful takedowns against known black market sites, such as the U.S. Secret Service-run Operation Firewall three years ago. That operation against the notorious Shadowcrew resulted in 28 arrests around the globe, Carnegie Mellon researchers reported.
"The scary thing about all this is that you do not have to be in the know to find black markets, they are easy to find, easy to join and just a mouse click away," Franklin said.
"We believe these black markets are growing, so we will have even more incidents to monitor and study in the future," Perrig said.
That growth is also reflected in the latest Computer Security Institute (CSI) Computer Crime and Security Survey that shows average cyber-losses more than doubled after a five-year decline. The 2007 CSI survey reported that U.S. companies on average lost more than $300,000 to cyber crooks compared to $168,000 last year.
Note: This story has been adapted from material provided by Carnegie Mellon University.

Fausto Intilla
www.oloscience.com

Computer Security Can Double As Help For The Blind


Source:

Science Daily — Before you can post a comment to most blogs, you have to type in a series of distorted letters and numbers (a CAPTCHA) to prove that you are a person and not a computer attempting to add comment spam to the blog.
What if -- instead of wasting your time and energy typing something meaningless like SGO9DXG -- you could label an image or perform some other quick task that will help someone who is visually impaired do their grocery shopping?
In a position paper presented at Interactive Computer Vision (ICV) 2007 on October 15 in Rio de Janeiro, computer scientists from UC San Diego led by professor Serge Belongie outline a grid system that would allow CAPTCHAs to be used for this purpose -- and an endless number of other good causes.
"One of the application areas for my research is assistive technologyfor the blind. For example, there is an enormous amount of data that needs to be labeled for our grocery shopping aid to work. We are developing a wearable computer with a camera that can lead a visually impaired user to a desired product in a grocery store by analyzing the video stream. Our paper describes a way that people who are looking to prove that they are humans and not computers can help label still shots from video streams in real time," said Belongie.
The researchers call their system a "Soylent grid" which is a reference to the 1973 film Soylent Green (see more on this reference at the end of the article).
"The degree to which human beings could participate in the system (as remote sighted guides) ranges from none at all to virtually unlimited. If no human user is involved in the loop, only computer vision algorithms solve the identification problem. But in principle, if there were an unlimited number of humans in the loop, all the video frames could be submitted to a SOYLENT GRID, be solved immediately and sent back to the device to guide the user," the authors write in their paper.
From the front end, users who want to post a comment on a blog would be asked to perform a variety of tasks, instead of typing in a string of misshapen letters and numbers.
"You might be asked to click on the peanut butter jar or click the Cheetos bag in an image," said Belongie. "This would be one of the so called 'Where's Waldo' object detection tasks."
The task list also includes "Name that Thing" (object recognition), "Trace This" (image segmentation) and "Hot or Not" (choosing visually pleasing images).
"Our research on the personal shopper for the visually impaired -- called Grozi -- is a big motivation for this project. When we started the Grozi project, one of the students, Michele Merler -- who is now working on a Ph.D. at Columbia University -- captured 45 minutes of video footage from the campus grocery store and then endured weeks of manually intensive labor, drawing bounding boxes and identifying the 120 products we focused on. This is work the soylent grid could do," said Belongie.
From the back end, researchers and others who need images labeled would interact with clients (like a blog hosting company) that need to take advantage of the CAPTCHA and spam filtering capabilities of the grid.
"Getting this done is going to take an innovative collaboration between academia and industry. Calit2 could be uniquely instrumental in this project," said Belongie. "Right now we are working on a proposal that will outline exactly what we need -- access to X number of CAPTCHA requests in one week, for example. With this, we'll do a case study and demonstrate just how much data can be labeled with 99 percent reliability through the soylent grid. I'm hoping for people to say, 'Wow, I didn't know that kind of computation was available.'"
This work incorporates recent work from a variety of researchers, including computer scientist Luis von Ahn from Carnegie Mellon University. His reCAPTCHA project uses CAPTCHAs to digitize books.
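The article does not spell out the grid's verification protocol, but reCAPTCHA's known/unknown pairing suggests how it could work. Here is a hedged sketch in which one challenge with a known answer vouches for the user while the answer to an unlabeled item is banked as a label vote; all image names and answers are hypothetical.

```python
import random

# Sketch of the pairing trick a labeling CAPTCHA can use (the approach
# reCAPTCHA takes with words): serve one challenge whose answer is
# already known alongside one that still needs labeling. The user is
# verified against the known item, and the answer to the unknown item
# is recorded as a candidate label. Names and data are hypothetical.

KNOWN = {"img_017.jpg": "peanut butter jar"}   # ground truth exists
UNKNOWN_VOTES = {"frame_8841.jpg": []}         # needs labels

def serve_challenge():
    known = random.choice(list(KNOWN))
    unknown = random.choice(list(UNKNOWN_VOTES))
    return known, unknown

def check_response(known_img, unknown_img, known_answer, unknown_answer):
    """Pass the CAPTCHA iff the known item is answered correctly,
    and bank the unknown answer as a label vote."""
    if known_answer.strip().lower() != KNOWN[known_img]:
        return False
    UNKNOWN_VOTES[unknown_img].append(unknown_answer.strip().lower())
    return True

k, u = serve_challenge()
assert check_response(k, u, "Peanut Butter Jar", "cheetos bag")
print(UNKNOWN_VOTES)  # once enough votes agree, the frame gets its label
```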
Soylent Grid?
The researchers call their system a "Soylent grid" and titled their paper "Soylent Grid: it's Made of People! Both the grid name and paper name are references to the 1973 cult classic film Soylent Green, a dystopian science fiction film set in an overpopulated world in which the masses are reduced to eating different varieties of "soylent" -- a synthetic food that suggests both soybeans and lentils. The line from the movie that inspired the title of this paper comes is delivered when someone discovers that soylent green is actually made of cadavers from a government sponsored euthanasia program -- prompting the phrase "Soylent green, it's made of people!" The computer scientists are playing off this famous phrase with their title: "Soylent Grid: it's Made of People!" The idea being that people from all over the world need to jump through anti-spam hoops such as CAPTCHAs, and the power of these people can be harnessed through a grid structure to do some good in the world.
Article: "Soylent Grid: it's Made of People!" by Stephan Steinbach, Vincent Rabaud and Serge Belongie
Note: This story has been adapted from material provided by University of California - San Diego.

Fausto Intilla

Tuesday, October 9, 2007

Quantum Computing Possibilities Enhanced With New Material


Source:

Science Daily — Scientists at Florida State University's National High Magnetic Field Laboratory and the university's Department of Chemistry and Biochemistry have introduced a new material that could be to computers of the future what silicon is to the computers of today.
The material -- a compound made from the elements potassium, niobium and oxygen, along with chromium ions -- could provide a technological breakthrough that leads to the development of new quantum computing technologies. Quantum computers would harness the power of atoms and molecules to perform memory and processing tasks on a scale far beyond those of current computers.
"The field of quantum information technology is in its infancy, and our work is another step forward in this fascinating field," said Saritha Nellutla, a postdoctoral associate at the magnet lab and lead author of the paper published in Physical Review Letters.
Semiconductor technology is close to reaching its performance limit. Over the years, processors have shrunk to their current size, with the components of a computer chip more than 1,000 times smaller than the thickness of a human hair. At those very small scales, quantum effects -- behaviors in matter that occur at the atomic and subatomic levels -- can start playing a role. By exploiting those behaviors, scientists hope to take computing to the next level.
In current computers, the basic unit of information is the "bit," which can have a value of 0 or 1. In so-called quantum computers, which currently exist only in theory, the basic unit is the "qubit" (short for quantum bit). A qubit can have not only a value of 0 or 1, but also all kinds of combinations of 0 and 1 -- including 0 and 1 at the same time -- meaning quantum computers could perform certain kinds of calculations much more effectively than current ones.
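A quick way to see what "combinations of 0 and 1" means in practice is to simulate a single qubit: the state is a pair of amplitudes, and a measurement returns 0 or 1 with probabilities given by their squared magnitudes. This is a toy sketch, ignoring phases, gates and decoherence.

```python
import math
import random

# Minimal qubit sketch: a state is a pair of complex amplitudes
# (alpha, beta) with |alpha|^2 + |beta|^2 = 1. Measuring yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2 -- the sense in
# which a qubit holds "0 and 1 at the same time" until it is read.

def measure(alpha, beta):
    p0 = abs(alpha) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition: 0 and 1 each come up about half the time.
alpha = beta = 1 / math.sqrt(2)
samples = [measure(alpha, beta) for _ in range(10_000)]
print(sum(samples) / len(samples))  # ~0.5
```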
How scientists realize the promise of the theoretical qubit is not clear. Various designs and paths have been proposed, and one very promising idea is to use tiny magnetic fields, called "spins." Spins are associated with electrons and various atomic nuclei.
Magnet lab scientists used high magnetic fields and microwave radiation to "operate" on the spins in the new material they developed to get an indication of how long the spin could be controlled. Based on their experiments, the material could enable 500 operations in 10 microseconds before losing its ability to retain information, making it a good candidate for a qubit.
Putting this spin to work would usher in a technological revolution, because the spin state of an electron, in addition to its charge, could be used to carry, manipulate and store information.
"This material is very promising," said Naresh Dalal, a professor of chemistry and biochemistry at FSU and one of the paper's authors. "But additional synthetic and magnetic characterization work is needed before it could be made suitable for use in a device."
Dalal also serves as an adviser to FSU chemistry graduate student Mekhala Pati, who created the material.
Note: This story has been adapted from material provided by Florida State University.

Fausto Intilla

Thursday, October 4, 2007

Running Shipwreck Simulations Backwards Helps Identify Dangerous Waves

Source:
Science Daily — Big waves in fierce storms have long been the focus of ship designers in simulations testing new vessels.
But a new computer program and method of analysis by University of Michigan researchers makes it easy to see that a series of smaller waves—a situation much more likely to occur—could be just as dangerous.
"Like the Edmund Fitzgerald that sank in Michigan in 1975, many of the casualties that happen occur in circumstances that aren't completely understood, and therefore they are difficult to design for," said Armin Troesch, professor of naval architecture and marine engineering. "This analysis method and program gives ship designers a clearer picture of what they're up against."
Troesch and doctoral candidate Laura Alford will present a paper on their findings Oct. 2 at the International Symposium on Practical Design of Ships and Other Floating Structures, also known as PRADS 2007.
Today's ship design computer modeling programs are a lot like real life, in that they go from cause to effect. A scientist tells the computer what type of environmental conditions to simulate, asking, in essence, "What would waves like this do to this ship?" The computer answers with how the boat is likely to perform.
Alford and Troesch's method goes backwards, from effect to cause. To use their program, a scientist enters a particular ship response, perhaps the worst case scenario. The question this time is more like, "What are the possible wave configurations that could make this ship experience the worst case scenario?" The computer answers with a list of water conditions.
What struck the researchers when they performed their analysis was that quite often, the biggest ship response is not caused by the biggest waves. Wave height is only one contributing factor. Others are wave grouping, wave period (the amount of time between wave crests), and wave direction.
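To make the effect-to-cause idea concrete, here is a toy version of the backwards search: fix a target roll angle, sweep combinations of wave height, period and heading, and keep every combination whose predicted response reaches the target. The response formula is an invented stand-in, not the Michigan team's hydrodynamic model.

```python
import math
from itertools import product

# Toy effect-to-cause search: choose a worst-case roll angle, then
# enumerate wave parameter combinations and keep all that could cause
# it. The response formula is invented for illustration -- a resonance
# peak near a 10-second period and beam seas (90 degrees off the bow).

def roll_response(height_m, period_s, heading_deg):
    period_factor = 1.0 / (1.0 + abs(period_s - 10.0))
    heading_factor = abs(math.sin(math.radians(heading_deg)))
    return 10.0 * height_m * period_factor * heading_factor

TARGET_ROLL_DEG = 25.0
heights = [2, 4, 6, 8, 10]        # metres
periods = [6, 8, 10, 12, 14]      # seconds between crests
headings = [0, 45, 90, 135, 180]  # degrees off the bow

dangerous = [
    (h, t, d)
    for h, t, d in product(heights, periods, headings)
    if roll_response(h, t, d) >= TARGET_ROLL_DEG
]
print(dangerous)  # includes modest 4 m waves at the resonant period
```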
"In a lot of cases, you could have a rare response, but when we looked at just the wave heights that caused that response, we found they're not so rare," Alford said. "This is about operational conditions and what you can be safely sailing in. The safe wave height might be lower than we thought."
This new method is much faster than current simulations. Computational fluid dynamics modeling in use now works by subjecting the virtual ship to random waves. This method is extremely computationally intensive and a ship designer would have to go through months of data to pinpoint the worst case scenario.
Alford and Troesch's program and method of analysis takes about an hour. And it gives multiple possible wave configurations that could have statistically caused the end result.
There's an outcry in the shipping industry for advanced ship concepts, including designs with more than one hull, Troesch said. But because ships are so large and expensive to build, prototypes are uncommon. This new method is meant to be used in the early stages of design to rule out problematic architectures. And it is expected to help spur innovation.
A majority of international goods are still transported by ship, Troesch said.
The paper is called "A Methodology for Creating Design Ship Responses."
Note: This story has been adapted from material provided by University of Michigan.

Fausto Intilla
www.oloscience.com

Software 'Chipper' Speeds Debugging

Source:

Science Daily — Computer scientists at UC Davis have developed a technique to speed up program debugging by automatically "chipping" the software into smaller pieces so that bugs can be isolated more easily.
Computer programs consist of thousands, tens of thousands or even hundreds of thousands of lines of code. To isolate a bug in the code, programmers often break it into smaller pieces until they can pin down the error in a smaller stretch that is easier to manage. UC Davis graduate student Chad Sterling and Ron Olsson, professor of computer science, set out to automate that process.
"It's really tedious to go through thousands of lines of code," Olsson said.
The "Chipper" tools developed by Sterling and Olsson chip off pieces of software while preserving the program structure.
"The pieces have to work after they are cut down," Olsson said. "You can't just cut in mid-sentence."
In a recent paper in the journal "Software -- Practice and Experience," Olsson and Sterling describe ChipperJ, a version developed for the Java programming language. ChipperJ was able to reduce large programs to 20 to 35 percent of their former size in under an hour.
More information about automated program chipping is available on Olsson's Web site at http://www.cs.ucdavis.edu/~olsson/
Note: This story has been adapted from material provided by University of California, Davis.

Fausto Intilla
www.oloscience.com

Wednesday, October 3, 2007

'Dead Time' Limits Quantum Cryptography Speeds

Source:
Science Daily — Quantum cryptography is potentially the most secure method of sending encrypted information, but does it have a speed limit? According to a new paper* by researchers at the National Institute of Standards and Technology (NIST) and the Joint Quantum Institute** (JQI), technological and security issues will stall maximum transmission rates at levels comparable to that of a single broadband connection, such as a cable modem, unless researchers reduce "dead times" in the detectors that receive quantum-encrypted messages.
In quantum cryptography, a sender, usually designated Alice, transmits single photons, or particles of light, encoding 0s and 1s to a recipient, "Bob." The photons Bob receives and correctly measures make up the secret "key" that is used to decode a subsequent message. Because of the quantum rules, an eavesdropper, "Eve," cannot listen in on the key transmission without being detected, but she could monitor a more traditional communication (such as a phone call) that must take place between Alice and Bob to complete their communication.
Modern telecommunications hardware easily allows Alice to transmit photons at rates much faster than any Internet connection. But at least 90 percent (and more commonly 99.9 percent) of the photons do not make it to Bob's detectors, so that he receives only a small fraction of the photons sent by Alice. Alice can send more photons to Bob by cranking up the speed of her transmitter, but then they'll run into problems with the detector's "dead time," the period during which the detector needs to recover after it detects a photon. Commercially available single-photon detectors need about 50-100 nanoseconds to recover before they can detect another photon, much slower than the 1 nanosecond between photons in a 1-GHz transmission.
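The arithmetic behind that mismatch is short enough to sketch, using the figures quoted above (a 1-GHz source, 99 percent channel loss and a 50-nanosecond dead time):

```python
# Back-of-envelope limit: a detector that needs t_dead to recover can
# register at most 1/t_dead events per second, no matter how fast
# Alice's transmitter runs. Figures are those quoted in the article.

pulse_rate = 1e9      # photons per second sent (1-GHz transmission)
channel_loss = 0.99   # 99 percent of photons never reach Bob
t_dead = 50e-9        # 50-nanosecond detector recovery time

arrival_rate = pulse_rate * (1 - channel_loss)  # 1e7 photons/s reach Bob
detector_cap = 1 / t_dead                       # 2e7 detections/s at most

print(f"arrivals: {arrival_rate:.0e}/s, detector ceiling: {detector_cap:.0e}/s")
# The raw detection ceiling is tens of millions of events per second --
# comparable to a single broadband connection, as the paper argues.
```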
Not only does dead time limit the transmission rate of a message, but it also raises security issues for systems that use different detectors for 0s and 1s. In that important "phone call," Bob must report the time of each detection event. If he reports two detections occurring within the dead time of his detectors, then Eve can deduce that they could not have come from the same detector and correspond to opposite bit values.
Sure, Bob can choose not to report the second, closely spaced photon, but this further decreases the key production rate. And for the most secure type of encryption, known as a one-time pad, the key has to have as many bits of information as the message itself.
The speed limit would go up, says NIST physicist Joshua Bienfang, if researchers reduce the dead time in single-photon detectors, something that several groups are trying to do. According to Bienfang, higher speeds also would be useful for wireless cryptography between a ground station and a satellite in low-Earth orbit. Since the two only would be close enough to communicate for a small part of the day, it would be beneficial to send as much information as possible during a short time window.
* D.J. Rogers, J.C. Bienfang, A. Nakassis, H. Xu and C.W. Clark, "Detector dead-time effects and paralyzability in high-speed quantum key distribution," New Journal of Physics 9, 319 (September 2007).
**The JQI is a research partnership that includes NIST and the University of Maryland.
Note: This story has been adapted from material provided by National Institute of Standards and Technology.

Fausto Intilla
www.oloscience.com

Technology Could Enable Computers To 'Read The Minds' Of Users

Source:
Science Daily — Tufts University researchers are developing techniques that could allow computers to respond to users' thoughts of frustration -- too much work -- or boredom -- too little work. Applying non-invasive and easily portable imaging technology in new ways, they hope to gain real-time insight into the brain's more subtle emotional cues and help provide a more efficient way to get work done.
"New evaluation techniques that monitor user experiences while working with computers are increasingly necessary," said Robert Jacob, computer science professor and researcher. "One moment a user may be bored, and the next moment, the same user may be overwhelmed. Measuring mental workload, frustration and distraction is typically limited to qualitatively observing computer users or to administering surveys after completion of a task, potentially missing valuable insight into the users' changing experiences."
Sergio Fantini, biomedical engineering professor, in conjunction with Jacob's human-computer interaction (HCI) group, is studying functional near-infrared spectroscopy (fNIRS) technology that uses light to monitor brain blood flow as a proxy for workload stress a user may experience when performing an increasingly difficult task. A $445,000 grant from the National Science Foundation will allow the interdisciplinary team to incorporate real-time biomedical data with machine learning to produce a more in-tune computer user experience.
Lighting up the brain
"fNIRS is an emerging non-invasive, lightweight imaging tool which can measure blood oxygenation levels in the brain," said Fantini, also an associate dean for graduate education at Tufts' School of Engineering.
The fNIRS device, which looks like a futuristic headband, uses laser diodes to send near-infrared light through the forehead at a relatively shallow depth -- only two to three centimeters -- to interact with the brain's frontal lobe. Light usually passes through the body's tissues, except when it encounters oxygenated or deoxygenated hemoglobin in the blood. Light waves are absorbed by the active, blood-filled areas of the brain and any remaining light is diffusely reflected to the fNIRS detectors.
"fNIRS, like MRI, uses the idea that blood flow changes to compensate for the increased metabolic demands of the area of the brain that's being used," said Erin Solovey, a graduate researcher at the School of Engineering.
"We don't know how specific we can be about identifying users' different emotional states," said Fantini. "However, the particular area of the brain where the blood flow change occurs should provide indications of the brain metabolic changes and by extension workload, which could be a proxy for emotions like frustration."
In the initial experiments, Jacob and Fantini's groups determined how accurately fNIRS could register users' workload. While wearing the fNIRS device, test subjects viewed a multicolored cube consisting of eight smaller cubes with two, three or four different colors. As the cube rotated onscreen, subjects counted the number of colored squares in a series of 30 tasks. The fNIRS device and subsequent user surveys reflected greater difficulty as users kept track of increasing numbers of colors. The fNIRS data agreed with user surveys up to 83 percent of the time.
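As a toy illustration of how such readings might feed a classifier (not the Tufts group's actual analysis), one could average each trial's oxygenation trace and assign it to the nearest per-class centroid learned from labeled trials; every number below is invented.

```python
# Toy workload classifier: average each trial's oxygenation trace into
# one feature, learn per-class centroids from labeled trials, and
# assign new trials to the nearest centroid. Data are invented.

def centroid(values):
    return sum(values) / len(values)

training = {  # mean HbO change per trial, by workload label (made up)
    "low":  [0.11, 0.09, 0.13],
    "med":  [0.24, 0.27, 0.22],
    "high": [0.41, 0.38, 0.44],
}
centroids = {label: centroid(v) for label, v in training.items()}

def classify(trial_mean):
    return min(centroids, key=lambda lab: abs(centroids[lab] - trial_mean))

print(classify(0.26))  # -> 'med'
```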
The Tufts group will present its initial results on using fNIRS to detect the user workload experience at the Association for Computing Machinery (ACM) symposium on user interface software and technology, to be held Oct. 7 through 10 in Newport, R.I.
"It seems that we can predict, with relatively high confidence, whether the subject was experiencing no workload, low workload, or high workload," said Leanne Hirshfield, a graduate researcher and lead author on the poster paper to be presented at the ACM symposium.
Note: This story has been adapted from material provided by Tufts University.

Fausto Intilla
www.oloscience.com

Saturday, September 29, 2007

Any Digital Camera Can Take Multibillion-pixel Shots With New Device


Source:

Science Daily — Researchers at Carnegie Mellon University, in collaboration with scientists at NASA's Ames Research Center, have built a low-cost robotic device that enables any digital camera to produce breathtaking gigapixel (billions of pixels) panoramas, called GigaPans.
The technology gives people a new way to make and share images of their environment. It is being used by students to document their communities and by the Commonwealth of Pennsylvania to make Civil War sites accessible on the Web. To promote further sharing of this imagery, Carnegie Mellon has launched a public Web site, http://www.gigapan.org/, where people can upload and interactively explore panoramic images of any format.
In cooperation with Google, researchers also have created a GigaPan layer on Google Earth. Anyone using Google Earth can now fly into these GigaPan panoramas in the context of exploring the world.
Researchers have begun a public beta process with the GigaPan hardware, Web site, and software. The hardware technology enabling GigaPan images is a robotic camera mount, jointly designed and manufactured by Charmed Labs of Austin, Texas. The tripod-like mount makes it possible for a digital camera to take hundreds of overlapping images of landscapes, buildings or rooms. Then, using software developed by Carnegie Mellon and Ames, these images can be arranged in a grid and digitally stitched together into a single image that could consist of tens of billions of pixels.
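The grid planning such a mount performs can be sketched in a few lines: given the lens's field of view and a chosen overlap between neighboring shots, compute how many rows and columns cover the scene. The field-of-view and overlap values below are illustrative assumptions, not GigaPan's actual firmware parameters.

```python
import math

# Sketch of panorama grid planning: step the camera by slightly less
# than its field of view so neighboring shots overlap for stitching.
# All values here are illustrative assumptions.

def grid_shots(pan_deg, tilt_deg, fov_h_deg, fov_v_deg, overlap=0.3):
    step_h = fov_h_deg * (1 - overlap)  # horizontal step between shots
    step_v = fov_v_deg * (1 - overlap)  # vertical step between rows
    cols = math.ceil(pan_deg / step_h)
    rows = math.ceil(tilt_deg / step_v)
    return rows, cols, rows * cols

# A 360 x 90 degree panorama with a zoomed lens (4 x 3 degree view):
rows, cols, total = grid_shots(360, 90, 4.0, 3.0)
print(rows, cols, total)  # 43 rows x 129 columns = 5547 shots --
# hundreds to thousands of overlapping images, depending on zoom
```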
These huge image files can then be explored by zooming in on features of interest in a manner similar to Google Earth. "We have taken imagery and made it a new tool for exploration and for enhancing global understanding," said Illah Nourbakhsh, associate professor in the School of Computer Science's Robotics Institute. Nourbakhsh and Randy Sargent, senior systems scientist at Carnegie Mellon West in Moffett Field, Calif., led GigaPan's development. "An ordinary photo makes it possible to cross language barriers," Nourbakhsh explained. "But a GigaPan provides so much information that it leads to conversations between the person who took the panoramas and the people who are exploring it and discovering new details."
Last spring, the Pennsylvania Board of Tourism began to use GigaPan to enable people to virtually explore Civil War sites. The technology is also being used for Robot250, an arts-based robotics program in the Pittsburgh area. Robot250 will increase technical literacy by teaching students, artists and other members of the public how to build customized robots.
Nourbakhsh and his colleagues recently began to work with UNESCO's International Bureau of Education and its Associated Schools Network on a project that will link school children in different parts of the world in exploring issues of cultural identity through a classroom project. Middle school children from Pittsburgh to South Africa to Trinidad and Tobago will use the GigaPan camera to share images of their neighborhoods, lives and cultures. "This project will explore curriculum development from the local to the global level," said IBE Director Clementina Acedo.
"It is an extraordinary opportunity to link a school-community based educational practice with high-end technology in the service of children's innovative learning, personal development and world communication. Plans call for the experiences of these children from poorer and richer countries to be presented at the 48th session of the International Conference of Education scheduled to take place in Geneva in November 2008.
Besides being a tool for education, Nourbakhsh and Sargent see the GigaPan system as an important tool for ecologists, biologists and other scientists. They plan to foster this effort by making several dozen GigaPans available to leading scientists with support from the Fine Foundation of Pittsburgh.
Nourbakhsh hopes the non-commercial GigaPan site will help to develop a community of GigaPan producers and users. "We're not interested in becoming just another photo-sharing site," he said. "We want as many people as possible involved. GigaPan is not just about the vision of the person who makes the image. People who explore the image can make discoveries and gain insights in ways that may be just as important."
Sargent got the idea for GigaPan when he was a technical staff member at Ames Research Center, helping to develop software for combining images from NASA's Mars Exploration Rovers into panoramas. He became convinced that the same technology could open people's eyes to the diversity of their own planet. "It is increasingly important to give people a broad view of the world, particularly to help us understand different cultures and different environments," he said. "It's too easy to have blinders on and to only see and understand what is local."
The GigaPan camera system is part of a larger effort known as the Global Connection Project, led by Nourbakhsh and Sargent. Its purpose is to make people all over the world more aware of their neighbors.
Note: This story has been adapted from material provided by Carnegie Mellon University.

Fausto Intilla

Thursday, September 27, 2007

Superconducting Quantum Computing Cable Created


Source:

Science Daily — Physicists at the National Institute of Standards and Technology (NIST) have transferred information between two "artificial atoms" by way of electronic vibrations on a microfabricated aluminum cable, demonstrating a new component for potential ultra-powerful quantum computers of the future.
The setup resembles a miniature version of a cable-television transmission line, but with some powerful added features, including superconducting circuits with zero electrical resistance, and multi-tasking data bits that obey the unusual rules of quantum physics.
The resonant cable might someday be used in quantum computers, which would rely on quantum behavior to carry out certain functions, such as code-breaking and database searches, exponentially faster than today's most powerful computers.
Moreover, the superconducting components in the NIST demonstration offer the possibility of being easier to manufacture and scale up to a practical size than many competing candidates, such as individual atoms, for storing and transporting data in quantum computers.
Unlike traditional electronic devices, which store information in the form of digital bits that each possess a value of either 0 or 1, each superconducting circuit acts as a quantum bit, or qubit, which can hold values of 0 and 1 at the same time. Qubits in this "superposition" of both values may allow many more calculations to be performed simultaneously than is possible with traditional digital bits, offering the possibility of faster and more powerful computing devices. The resonant section of cable shuttling the information between the two superconducting circuits is known to engineers as a "quantum bus," and it could transport data between two or more qubits.
The NIST work is featured on the cover of the Sept. 27 issue of Nature. The scientists encoded information in one qubit, transferred this information as microwave energy to the resonant section of cable for a short storage time of 10 nanoseconds, and then successfully shuttled the information to a second qubit.
"We tested a new element for quantum information systems," says NIST physicist Ray Simmonds. "It's really significant because it means we can couple more qubits together and transfer information between them easily using one simple element."
The NIST work, together with another letter in the same issue of Nature by a Yale University group, is the first demonstration of a superconducting quantum bus. Whereas the NIST scientists used the bus to store and transfer information between independent qubits, the Yale group used it to enable an interaction of two qubits, creating a combined superposition state. These three actions, demonstrated collectively by the two groups, are essential for performing the basic functions needed in a superconductor-based quantum information processor of the future.
In addition to storing and transferring information, NIST's resonant cable also offers a means of "refreshing" superconducting qubits, which normally can maintain the same delicate quantum state for only half a microsecond. Disturbances such as electric or magnetic noise in the circuit can rapidly destroy a qubit's superposition state. With design improvements, the NIST technology might be used to repeatedly refresh the data and extend qubit lifetime more than 100-fold, sufficient to create a viable short-term quantum computer memory, Simmonds says. NIST's resonant cable might also be used to transfer quantum information between matter and light -- microwave energy is a low-frequency form of light -- and thus link quantum computers to ultra-secure quantum communications systems.
If they can be built, quantum computers -- harnessing the unusual rules of quantum mechanics, the principles governing nature's smallest particles -- might be used for applications such as fast and efficient code breaking, optimizing complex systems such as airline schedules, making counterfeit-proof money, and solving complex mathematical problems. Quantum information technology in general allows for custom-designed systems for fundamental tests of quantum physics and as-yet-unknown futuristic applications.
A superconducting qubit is about the width of a human hair. NIST researchers fabricate two qubits on a sapphire microchip, which sits in a shielded box about 8 cubic millimeters in size. The resonant section of cable is 7 millimeters long, similar to the coaxial wiring used in cable television but much thinner and flatter, zig-zagging around the 1.1 mm space between the two qubits. Like a guitar string, the resonant cable can be stimulated so that it hums or "resonates" at a particular tone or frequency in the microwave range. Quantum information is stored as energy in the form of microwave particles or photons.
The NIST research was supported in part by the Disruptive Technology Office.
*M.A. Sillanpää, J.I. Park, and R.W. Simmonds. 2007. Coherent quantum state storage and transfer between two phase qubits via a resonant cavity. Nature, Sept. 27.
Note: This story has been adapted from a news release issued by National Institute of Standards and Technology.

Fausto Intilla

Two Giant Steps In Advancement Of Quantum Computing Achieved


Source:

Science Daily — Two major steps toward putting quantum computers into real practice -- sending a photon signal on demand from a qubit onto wires and transmitting the signal to a second, distant qubit -- have been brought about by a team of scientists at Yale.
The accomplishments are reported in sequential issues of Nature on September 20 and September 27; the latter issue features the work on its cover, alongside complementary work from a group at the National Institute of Standards and Technology.
Over the past several years, the research team of Professors Robert Schoelkopf in applied physics and Steven Girvin in physics has explored the use of solid-state devices resembling microchips as the basic building blocks in the design of a quantum computer. Now, for the first time, they report that superconducting qubits, or artificial atoms, have been able to communicate information not only to their nearest neighbor, but also to a distant qubit on the chip.
This research now moves quantum computing from "having information" to "communicating information." In the past information had only been transferred directly from qubit to qubit in a superconducting system. Schoelkopf and Girvin's team has engineered a superconducting communication 'bus' to store and transfer information between distant quantum bits, or qubits, on a chip. This work, according to Schoelkopf, is the first step to making the fundamentals of quantum computing useful.
The first breakthrough reported is the ability to produce on demand -- and control -- single, discrete microwave photons as the carriers of encoded quantum information. While microwave energy is used in cell phones and ovens, their sources do not produce just one photon. This new system creates a certainty of producing individual photons.
"It is not very difficult to generate signals with one photon on average, but, it is quite difficult to generate exactly one photon each time. To encode quantum information on photons, you want there to be exactly one," according to postdoctoral associates Andrew Houck and David Schuster who are lead co-authors on the first paper.
"We are reporting the first such source for producing discrete microwave photons, and the first source to generate and guide photons entirely within an electrical circuit," said Schoelkopf.
In order to successfully perform these experiments, the researchers had to control electrical signals corresponding to one single photon. In comparison, a cell phone emits about 10^23 (100,000,000,000,000,000,000,000) photons per second. Further, the extremely low energy of microwave photons mandates the use of highly sensitive detectors and experiment temperatures just above absolute zero.
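That photon count follows from dividing transmit power by the energy of a single microwave photon, E = hf. A quick check, with rough assumed cell phone figures:

```python
# Sanity check on the 10^23 figure: photon rate = power / (h * f).
# The power and frequency below are rough, assumed handset values.

h = 6.626e-34   # Planck's constant, joule-seconds
f = 1e9         # carrier frequency, ~1 GHz
power = 0.1     # transmit power in watts, a typical handset figure

energy_per_photon = h * f            # ~6.6e-25 J per microwave photon --
                                     # why single photons are so hard to detect
photons_per_second = power / energy_per_photon
print(f"{photons_per_second:.1e}")   # ~1.5e+23, of order the article's 10^23
```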
"In this work we demonstrate only the first half of quantum communication on a chip -- quantum information efficiently transferred from a stationary quantum bit to a photon or 'flying qubit,'" says Schoelkopf. "However, for on-chip quantum communication to become a reality, we need to be able to transfer information from the photon back to a qubit."
This is exactly what the researchers go on to report in the second breakthrough. Postdoctoral associate Johannes Majer and graduate student Jerry Chow, lead co-authors of the second paper, added a second qubit and used the photon to transfer a quantum state from one qubit to another. This was possible because the microwave photon could be guided on wires -- similarly to the way fiber optics can guide visible light -- and carried directly to the target qubit. "A novel feature of this experiment is that the photon used is only virtual," said Majer and Chow, "winking into existence for only the briefest instant before disappearing."
To allow the crucial communication between the many elements of a conventional computer, engineers wire them all together to form a data "bus," which is a key element of any computing scheme. Together the new Yale research constitutes the first demonstration of a "quantum bus" for a solid-state electronic system. This approach can in principle be extended to multiple qubits, and to connecting the parts of a future, more complex quantum computer.
However, Schoelkopf likened the current stage of development of quantum computing to conventional computing in the 1950's, when individual transistors were first being built. Standard computer microprocessors are now made up of a billion transistors, but first it took decades for physicists and engineers to develop integrated circuits with transistors that could be mass produced.
Schoelkopf and Girvin are members of the newly formed Yale Institute for Nanoscience and Quantum Engineering (YINQE), a broad interdisciplinary activity among faculty and students from across the university.
Other Yale authors involved in the research are J.M. Gambetta, J.A. Schreier, J. Koch, B.R. Johnson, L. Frunzio, A. Wallraff, A. Blais and Michel Devoret. Funding for the research was from the National Security Agency under the Army Research Office, the National Science Foundation and Yale University.
Citation: Nature 449, 328-331 (20 September 2007) doi:10.1038/nature06126 , Nature 450, 443-447 (27 September 2007) doi:10.1038/nature06184
Note: This story has been adapted from a news release issued by Yale University.

Fausto Intilla

'Printers' That Can Make 3-D Solid Objects Soon To Enter Mainstream

Source:
Science Daily — It is a simple matter to print an e-book or other document directly from your computer, whether that document is on your hard drive, at a web site or in an email. But, imagine being able to 'print' solid objects, a piece of sports equipment, say, or a kitchen utensil, or even a prototype car design for wind tunnel tests. US researchers suggest such 3-D printer technology will soon enter the mainstream once a killer application emerges.
Such technology already exists and is maturing rapidly, so that high-tech designers and others can share solid designs almost as quickly as sending a fax. The systems available are based on a bath of liquid plastic that is solidified by laser light. The movements of the laser are controlled by a computer that reads a digitized 3-D map of the solid object or design.
Writing in the Inderscience publication International Journal of Technology Marketing, US researchers discuss how this technology might eventually move into the mainstream allowing work environments to 3-D print equipment, whether that is plastic paperclips, teacups, or components that can be joined to make sophisticated devices, perhaps bolted together with printed nuts and bolts.
Physicist Phil Anderson of the School of Theoretical and Applied Science, working with Cherie Ann Sherman of the Anisfield School of Business, both at Ramapo College of New Jersey, in Mahwah, New Jersey, explains how this technology, known formally as 'rapid prototyping', could revolutionize the way people buy goods.
It will allow them to buy or obtain a digital file representing a physical product electronically and then produce the object at a time and place convenient to them. The technology will be revolutionary in the same way that music downloads have shaken up the music industry. "This technology has the potential to generate a variety of new business models, which would enhance the average consumer's lifestyle," say the paper's authors.
The team discusses advanced applications of rapid prototyping that already exist in the military, where missing or damaged components can be produced at the site of action. Education, too, can make use of 3-D printing, letting students turn their experimental designs into solid objects.
Also, product developers can share tangible prototypes by transferring the digitized design without the delay of shipping a solid object between sites, which may be separated by thousands of miles. The possibilities for consumer goods, individualized custom products, replacement components, and quick fixes for broken objects, are almost unlimited, the authors suggest.
From the business perspective, e-commerce sites will essentially become digital download sites with physical stores, retail employees, and shipping eliminated. It is only a matter of time before the 'killer application,' the 3-D equivalent of the mp3 music file, one might say, arrives to make owning a 3-D printer as necessary to the modern lifestyle as owning a microwave oven, a TV, or indeed a personal computer.
Note: This story has been adapted from a news release issued by Inderscience Publishers.

Fausto Intilla
www.oloscience.com

mercoledì 19 settembre 2007

Computer Memory Designed In Nanoscale Can Retrieve Data 1,000 Times Faster

Source:

Science Daily — Scientists from the University of Pennsylvania have developed nanowires capable of storing computer data for 100,000 years and retrieving that data a thousand times faster than existing portable memory devices such as Flash memory and micro-drives, all using less power and space than current memory technologies.
Ritesh Agarwal, an assistant professor in the Department of Materials Science and Engineering, and colleagues developed a self-assembling nanowire of germanium antimony telluride, a phase-changing material that switches between amorphous and crystalline structures, the key to read/write computer memory. Fabrication of the nanoscale devices, roughly 100 atoms in diameter, was performed without conventional lithography, the blunt, top-down manufacturing process that employs strong chemicals and often produces unusable materials with space, size and efficiency limitations.
Instead, researchers used self-assembly, a process in which chemical reactants, mediated by nanoscale metal catalysts, crystallize at lower temperatures and spontaneously form nanowires 30-50 nanometers in diameter and 10 micrometers in length. They then fabricated memory devices on silicon substrates.
"We measured the resulting nanowires for write-current amplitude, switching speed between amorphous and crystalline phases, long-term durability and data retention time," Agarwal said.
Tests showed extremely low power consumption for data encoding (0.7 mW per bit). They also showed data writing, erasing and retrieval (50 nanoseconds) to be 1,000 times faster than in conventional Flash memory, and indicated that the device would not lose data even after approximately 100,000 years of use, all with the potential to realize terabit-level nonvolatile memory density.
"This new form of memory has the potential to revolutionize the way we share information, transfer data and even download entertainment as consumers," Agarwal said. "This represents a potential sea-change in the way we access and store data."
Phase-change memory in general features faster read/write, better durability and simpler construction than other memory technologies such as Flash. The challenge has been to shrink phase-change materials with conventional lithographic techniques without damaging their useful properties. Self-assembled phase-change nanowires, as created by the Penn researchers, operate with less power and are easier to scale, offering a strategy for an ideal memory with efficient, durable control of data several orders of magnitude beyond current technologies.
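As a rough picture of how such a cell looks to the surrounding circuitry, the toy model below switches between a high-resistance amorphous state and a low-resistance crystalline state and reads the stored bit back by sensing resistance. The state names follow the article, but the resistance values and threshold are invented placeholders, not measurements from the Penn devices.

```python
# Toy phase-change memory cell: a RESET pulse leaves the material
# amorphous (high resistance, bit 0); a SET pulse recrystallizes it
# (low resistance, bit 1). All values are illustrative placeholders.

class PCMCell:
    R_AMORPHOUS = 1e6    # ohms, assumed high-resistance state
    R_CRYSTALLINE = 1e4  # ohms, assumed low-resistance state
    R_THRESHOLD = 1e5    # ohms, assumed sensing threshold

    def __init__(self):
        self.state = "amorphous"

    def write(self, bit):
        # In a real device the write current's amplitude and duration
        # select the phase; here that is abstracted to a state change.
        self.state = "crystalline" if bit else "amorphous"

    def read(self):
        # Reading senses resistance without disturbing the phase,
        # which is why the data survive with the power off.
        r = self.R_CRYSTALLINE if self.state == "crystalline" else self.R_AMORPHOUS
        return 1 if r < self.R_THRESHOLD else 0

cell = PCMCell()
cell.write(1)
print(cell.read())  # -> 1; non-volatile, so no refresh current is needed
```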
"The atomic scale of the nanodevices may represent the ultimate size limit in current-induced phase transition systems for non-volatile memory applications," Agarwal said.
Current solid-state technology for products like memory cards, digital cameras and personal digital assistants traditionally uses Flash memory, a non-volatile and durable computer memory that can be erased and reprogrammed electronically. Flash provides most battery-powered devices with acceptable durability and moderately fast data access. Yet the technology's limits are apparent: digital cameras can't snap rapid-fire photos because it takes precious seconds to store the last photo to memory. And if a memory device is fast, as with the DRAM and SRAM used in computers, it is volatile: pull the plug on a desktop computer, and all recent data entry is lost.
Therefore, a universal memory device is sought: one that is scalable, fast, durable and nonvolatile, a difficult set of requirements that has now been demonstrated at Penn.
"Imagine being able to store hundreds of high-resolution movies in a small drive, downloading them and playing them without wasting time on data buffering, or imagine booting your laptop computer in a few seconds, as you wouldn't need to transfer the operating system to active memory," Agarwal said.
The research was performed by Agarwal, Se-Ho Lee and Yeonwoong Jung of the Department of Materials Science and Engineering in the School of Engineering and Applied Science at Penn. The findings appear online in the journal Nature Nanotechnology and in the October print edition.
The research was supported by the Materials Research Science and Engineering Center at Penn, the University of Pennsylvania Research Foundation award and a grant from the National Science Foundation.
Note: This story has been adapted from a news release issued by University of Pennsylvania.

Fausto Intilla
www.oloscience.com

martedì 11 settembre 2007

Getting There Faster With Virtual Reality


Source:

Science Daily — Is the navigation system too complex? Does it distract the driver’s attention from the traffic? To test electronic assistants, their developers have to build numerous prototypes – an expensive and time-consuming business. Tests in a virtual world make prototypes unnecessary.
The engineer stares intently at the display on the virtual dashboard. His task is to test the new driver assistance system from the user’s perspective. How seriously does it distract a driver to listen to a text message while negotiating a roundabout?
How does the driver perceive a collision warning in the fog? Developers of electronic assistants have to build large numbers of prototypes and test countless functions, so a great deal of time and money must be invested before the product is ready to go on the market. Tomorrow’s engineers will have a much easier time: they can simply create virtual prototypes and simulate all the functions in a virtual world.
Car manufacturers and suppliers will be the chief beneficiaries of Personal Immersion® in the future. Developed by the Fraunhofer Institute for Industrial Engineering IAO in Stuttgart, this virtual reality and stereoscopic interactive simulation system makes it possible to display tailored virtual environments for purposes such as the development of driver assistance systems.
“Our VR system not only simulates the instruments,” explains IAO project manager Manfred Dangelmaier. “Every level of this system is virtual. The user is seated in a virtual driving simulator, surrounded by a virtual world, facing a virtual dashboard with a virtual control system.” This allows the engineers to simulate every conceivable situation in order to test the man-machine interfaces. Whatever traffic situation is to be illustrated, and whatever demands the driver may make on the vehicle electronics, such as retrieving up-to-date traffic jam warnings – there are no limits to the imagination when testing these systems.
“Interactive simulation of this kind significantly cuts development time and costs,” says Dangelmaier. Virtual reality also facilitates communication within the interdisciplinary teams engaged in immersive design.
Up to now, a major problem in portraying virtual worlds has been projector resolution. “In technical terms, it is not easy to achieve a satisfactory portrayal of both the full-size surroundings and the close-up details at the same time in a virtual environment,” says Dangelmaier. But the researchers have solved the problem: instead of the two projectors customary in VR systems, theirs operates with four in a complex stereo projection setup. The scientists will be presenting potential applications at the International Motor Show (IAA) in Frankfurt from September 13 through 23.
Note: This story has been adapted from a news release issued by Fraunhofer-Gesellschaft.

Fausto Intilla

venerdì 7 settembre 2007

Computerized Treatment Of Manuscripts

Source:

Science Daily — Researchers at the UAB Computer Vision Centre working on the automatic recognition of manuscript documents have designed a new system that is more efficient and reliable than currently existing ones.
The BSM (short for "Blurred Shape Model") has been designed to work with ancient, damaged or hard-to-read manuscripts, handwritten scores and architectural drawings. At the same time, it serves as an effective human-machine interface, able to reproduce documents automatically while they are being written or drawn.
The researchers based their work on the human visual system and its ability to see and interpret all types of images (recognising shapes, structures, dimensions, etc.) in order to create description and classification models of handwritten symbols. This computerised system differs from others in that it can cope with the variations, elastic deformations and uneven distortions that appear whenever any type of symbol (letters, signs, drawings, etc.) is reproduced by hand. Another advantage is that it works in near real time, delivering results only a few seconds after the document is fed into the computer.
The BSM also differs from existing systems that apply one standard process to deciphering every type of symbol, an approach that makes the symbols harder to recognise once they have been entered. The methodology developed by the Computer Vision Centre, in contrast, can be adapted to each of the areas in which it is applied.
To analyse and recognise symbols, the system divides image regions into sub-regions with the help of a grid and saves the information from each grid square, registering even the smallest of differences (e.g. between p and b). Depending on the shape introduced, the system runs a process to distinguish the shape along with any possible deformations (the letter P, for example, would be registered as being rounder, or as having a shorter or longer stem, etc.). It then stores this information and classifies it automatically.
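In outline, that grid stage can be sketched as follows: divide a binary symbol image into an n-by-n grid, record the ink density of each cell, and classify new symbols by nearest neighbour over those vectors. The real BSM additionally spreads ("blurs") each pixel's contribution over neighbouring cells to absorb elastic deformation; the sketch below omits that step and is not the CVC implementation.

```python
# Grid-descriptor sketch in the spirit of the BSM (simplified: no
# blurring across neighbouring cells, so less tolerant of deformation).

import numpy as np

def grid_descriptor(img, n=8):
    """Flatten a 2-D binary image into an n*n vector of per-cell ink densities."""
    h, w = img.shape
    desc = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            cell = img[i * h // n:(i + 1) * h // n,
                       j * w // n:(j + 1) * w // n]
            desc[i, j] = cell.mean()  # fraction of "ink" pixels in this cell
    return desc.ravel()

def classify(img, labelled_examples):
    """Nearest-neighbour label; labelled_examples is a list of (image, label)."""
    d = grid_descriptor(img)
    return min(labelled_examples,
               key=lambda ex: np.linalg.norm(grid_descriptor(ex[0]) - d))[1]
```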
Researchers decided to test the efficiency of the system by experimenting with two application areas. They created a database of musical notes and a database of architectural symbols. The first was created from a collection of modern and ancient musical scores (from the 18th and 19th centuries) from the archives of the Barcelona Seminary, which included a total of 2,128 examples of three types of musical notes drawn by 24 different people. The second database included 2,762 examples of handwritten architectural symbols belonging to 14 different groups. Each group contained approximately 200 types of symbols drawn by 13 different people.
In order to compare the performance and reliability of the BSM, the same data were introduced into other similar systems. The BSM recognised musical notes with an accuracy of over 98% and architectural symbols with an accuracy of 90%.
The researchers at the Computer Vision Centre who developed the BSM were awarded first prize at the third edition of the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA), which took place last June.
Note: This story has been adapted from a news release issued by Universitat Autonoma de Barcelona.

Fausto Intilla
www.oloscience.com

Computer Scientists Take The 'Why' Out Of WiFi


Source:

Science Daily — “People expect WiFi to work, but there is also a general understanding that it’s just kind of flakey,” said Stefan Savage, one of the UCSD computer science professors who led development of an automated, enterprise-scale WiFi troubleshooting system for UCSD’s computer science building. The system is described in a paper presented in August in Kyoto, Japan at ACM SIGCOMM, one of the world’s premier networking conferences.
“If you have a wireless problem in our building, our system automatically analyzes the behavior of your connection – each wireless protocol, each wired network service and the many interactions between them. In the end, we can say ‘it’s because of this that your wireless is slow or has stopped working’ – and we can tell you immediately,” said Savage.
For humans, diagnosing problems in the now ubiquitous 802.11-based wireless access networks requires a huge amount of data, expertise and time. In addition to the myriad complexities of the wired network, wireless networks face the additional challenges of shared spectrum, user mobility and authentication management. Finally, the interaction between wired and wireless networks is itself a source of many problems.
“Wireless networks are hooked onto the wired part of the Internet with a bunch of ‘Scotch tape and baling wire’ – protocols that really weren’t designed for WiFi,” explained Savage. “If one of these components has a glitch, you may not be able to use the Internet even though the network itself is working fine.”
There are so many moving pieces, so many things you cannot see, and within this soup everything has to work just right. When it doesn’t, identifying which piece failed is tough and requires sifting through a lot of data. For example, someone using a microwave oven two rooms away may cause enough interference to disrupt your connection.
“Today, if you ask your network administrator why it takes minutes to connect to the network or why your WiFi connection is slow, they’re unlikely to know the answer,” explained Yu-Chung Cheng, a computer science Ph.D. student at UCSD and lead author on the paper. “Many problems are transient – they’re gone before you can even get an admin to look at them – and the number of possible reasons is huge,” explained Cheng, who recently defended his dissertation and will join Google this fall.
“Few organizations have the expertise, data or tools to decompose the underlying problems and interactions responsible for transient outages or performance degradations,” the authors write in their SIGCOMM paper.
The computer scientists from UCSD’s Jacobs School of Engineering presented a set of modeling techniques for automatically characterizing the source of such problems. In particular, they focus on data transfer delays unique to 802.11 networks – media access dynamics and mobility management latency.
The UCSD system runs 24 hours a day, constantly churning through the flood of data relevant to the wireless network and catching transient problems.
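To give a flavour of what automated triage means, the toy classifier below maps a handful of observed metrics to a likely cause. The metric names, thresholds and rules are invented for this illustration; the actual UCSD system instead builds detailed statistical models of 802.11 media access and mobility management across both the wired and wireless layers.

```python
# Invented rule-based triage, illustrating the idea of automatically
# pinning a slow WiFi connection on a specific layer. Not the UCSD system.

def diagnose(metrics):
    """Map a dict of observed connection metrics to a likely cause."""
    if metrics.get("dhcp_delay_ms", 0) > 2000:
        return "wired-side service: slow DHCP response"
    if metrics.get("handoffs_per_min", 0) > 4:
        return "mobility: client is bouncing between access points"
    if metrics.get("retransmit_rate", 0.0) > 0.3:
        return "media access: heavy retransmissions (contention or interference)"
    if metrics.get("signal_dbm", 0) < -80:
        return "physical layer: weak signal"
    return "no obvious fault in the monitored layers"

print(diagnose({"retransmit_rate": 0.45, "signal_dbm": -55}))
# -> media access: heavy retransmissions (contention or interference)
```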
“We’ve created a virtual wireless expert who is always at work,” said Cheng.
Within the UCSD Computer Science building, all wireless help-desk issues now go through the new automated system, which has been running for about nine months; data collection has been going on for almost two years.
One of the big take-away lessons is that no single factor governs wireless network performance. Instead, many small things interact and go wrong in ways you might not anticipate.
“I look at this as an engineering effort. In the future, I think that enterprise wireless networks will have sophisticated diagnostics and repair capabilities built in. How much these will draw from our work is hard to tell today. You never know the impact you are going to have when you do the work,” said Savage. “In the meantime, our system is the ultimate laboratory for testing new wireless gadgets and new approaches to building wireless systems. We just started looking at WiFi-based Voice-Over-IP (VOIP) phones. We learn something new every week.”
Paper citation: "Automating Cross-Layer Diagnosis of Enterprise Wireless Networks," by Yu-Chung Cheng, Mikhail Afanasyev, Patrick Verkaik, Jennifer Chiang, Alex C. Snoeren, Stefan Savage, and Geoffrey M. Voelker of the Department of Computer Science and Engineering at UCSD's Jacobs School of Engineering, and Péter Benkö of the Traffic Analysis and Network Performance Laboratory (TrafficLab) at Ericsson Research, Budapest, Hungary.
Funding was provided by the UCSD Center for Networked Systems (CNS), Ericsson, the National Science Foundation (NSF), and a UC Discovery Grant.
Note: This story has been adapted from a news release issued by University of California - San Diego.

Fausto Intilla
www.oloscience.com

martedì 4 settembre 2007

Internet Map Looks Like A Digital Dandelion


Source:

Science Daily — What looks like the head of a digital dandelion is a map of the Internet generated by new algorithms from computer scientists at UC San Diego. This map features Internet nodes – the red dots – and linkages – the green lines.
But it is no ordinary map. It is a (mostly) randomly generated graph that retains the essential characteristics of a specific corner of the Internet but doubles the number of nodes.
On August 30 in Kyoto, Japan at ACM SIGCOMM, the premier computer networking conference, UCSD computer scientists presented techniques for producing annotated, Internet router graphs of different sizes – based on observations of Internet characteristics.
The graph annotations include information about the relevant peer-to-peer business relationships that help to determine the paths that packets of information take as they travel across the Internet. Generating these kinds of graphs is critical for a wide range of computer science research.
“Defending against denial of service attacks and large-scale worm outbreaks depends on network topology. Our work allows computer scientists to experiment with a range of random graphs that match Internet characteristics. This work is also useful for determining the sensitivity of particular techniques – like routing protocols and congestion controls – to network topology and to variations in network topology,” said Priya Mahadevan, the first author on the SIGCOMM 2007 paper. Mahadevan just completed her computer science Ph.D. at UCSD’s Jacobs School of Engineering. In October, she will join Hewlett Packard Laboratories in Palo Alto, CA.
“We’re saying, ‘here is what the Internet looks like, and here is our recreation of it on a larger scale.’ Our algorithm produces random graphs that maintain the important interconnectivity characteristics of the original. The goal is to produce a topology generator capable of outputting a range of annotated Internet topologies of varying sizes based on available measurements of network connectivity and characteristics,” said Amin Vahdat, the senior author on the paper, a computer science professor at UCSD and the Director of UCSD’s Center for Networked Systems (CNS) – an industrial/academic collaboration investigating emerging issues in computing systems that are both very large (planetary scale) and very small (the scale of wireless sensor networks).
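As a rough sketch of the rescaling idea (though not of Orbis itself, whose algorithms also preserve degree correlations and annotations), the snippet below resamples an observed graph's degree sequence to build a random graph with twice as many nodes, using the networkx library's configuration model.

```python
# Degree-preserving rescaling sketch: sample a larger degree sequence
# from an observed topology and wire it up at random. A stand-in for the
# general idea only; Orbis preserves richer structure than this.

import random
import networkx as nx

def rescale_topology(observed, scale=2):
    degrees = [d for _, d in observed.degree()]
    sample = [random.choice(degrees)
              for _ in range(scale * observed.number_of_nodes())]
    if sum(sample) % 2:  # the configuration model needs an even degree sum
        sample[0] += 1
    g = nx.Graph(nx.configuration_model(sample))  # collapse parallel edges
    g.remove_edges_from(nx.selfloop_edges(g))
    return g

observed = nx.barabasi_albert_graph(100, 2)  # stand-in for a measured map
bigger = rescale_topology(observed, scale=2)
print(bigger.number_of_nodes(), bigger.number_of_edges())
```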
The authors are making the source code for their topology generator publicly available and hope that it will benefit a range of studies.
“The techniques we have developed for characterizing and recreating Internet characteristics are generally applicable to a broad range of disciplines that consider networks, including physics, biology, chemistry, neuroscience and sociology,” said Vahdat.
Citation: "Orbis: Rescaling Degree Correlations to Generate Annotated Internet Topologies," Priya Mahadevan, Calvin Hubble, Bradley Huffaker, Dimitri Krioukov, and Amin Vahdat, Proceedings of the ACM SIGCOMM Conference, Kyoto, Japan, August 2007.
Funding was provided by the National Science Foundation (NSF) and UCSD’s Center for Networked Systems (CNS).
Note: This story has been adapted from a news release issued by University of California - San Diego.

Fausto Intilla

lunedì 3 settembre 2007

Making Internet Bandwidth A Global Currency


Source:

Science Daily — Computer scientists at Harvard's School of Engineering and Applied Sciences, in collaboration with colleagues from the Netherlands, are using a novel peer-to-peer video sharing application to explore a next-generation model for safe and legal electronic commerce that uses Internet bandwidth as a global currency.
The application is an enhanced version of a program called Tribler, originally created by scientists at the Delft University of Technology and Vrije Universiteit, Amsterdam, to study video file sharing. The software exploits the power of peer-to-peer technology, which is based on forming networks among individual users.
“Successful peer-to-peer systems rely on designing rules that promote fair sharing of resources amongst users. Thus, they are both efficient and powerful computational and economic systems,” says David Parkes, John L. Loeb Associate Professor of the Natural Sciences at Harvard. "Peer-to-peer has received a bad rap, however, because of its frequent association with illegal music or software downloads.”
Unlike traditional, centralized approaches, peer-to-peer systems are remarkably robust: they scale smoothly because the software adapts to the number and behavior of individual users. That flexibility, speed, and reliability is what inspired the researchers to use a version of the Tribler video-sharing software as the model for an e-commerce system.
“Our platform will provide fast downloads by ensuring sufficient uploads,” explains Johan Pouwelse, an assistant professor at Delft University of Technology and the technical director of Tribler. “The next generation of peer-to-peer systems will provide an ideal marketplace not just for content, but for bandwidth in general.”
The researchers envision an e-commerce model that connects users to a single global market, without any controlling company, network, or bank, and they see bandwidth as the first true Internet “currency” for such a market. For example, the more a user uploads now (i.e. earns), and the higher the quality of those contributions, the more he or she can download later (i.e. spend), and at faster speeds. More broadly, this paradigm empowers individuals or groups of users to run their own “marketplace” for any computer resource or service.
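A toy version of such bandwidth accounting might look like the ledger below, in which uploads earn credit (weighted by contribution quality) and downloads spend it. This is purely illustrative; Tribler's real incentive and accounting mechanisms are more involved.

```python
# Illustrative bandwidth-as-currency ledger: earn by uploading,
# spend by downloading. Not Tribler's actual mechanism.

class BandwidthLedger:
    def __init__(self):
        self.balances = {}  # peer id -> megabytes of credit

    def record_upload(self, peer, megabytes, quality=1.0):
        """Uploading earns credit, scaled by contribution quality in [0, 1]."""
        self.balances[peer] = self.balances.get(peer, 0.0) + megabytes * quality

    def request_download(self, peer, megabytes):
        """Downloading spends credit; refused if the balance is too low."""
        if self.balances.get(peer, 0.0) < megabytes:
            return False  # must earn (upload) more before spending
        self.balances[peer] -= megabytes
        return True

ledger = BandwidthLedger()
ledger.record_upload("alice", 500, quality=0.9)  # alice seeds 500 MB
print(ledger.request_download("alice", 300))     # True: spends 300 of 450
print(ledger.request_download("bob", 10))        # False: bob hasn't contributed
```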
Another idea the researchers believe has enormous but untapped potential is the combination of social network technology with peer-to-peer systems. “In the case of sharing and playing video, our network-based system already allows a group of ‘friends’ to pool their collective upload ‘reserve’ to slash download times. For Internet-based television this means a true instant, on-demand video experience,” explains Pouwelse.
The researchers concede that the greatest challenge to any peer-to-peer backed e-commerce system is implementing proper regulation in a decentralized environment. To keep an eye on the virtual economy, Parkes and Pouwelse envision creating a “web of trust,” or a network between friends used to evaluate the trustworthiness of fellow users and aimed at preventing content theft, counterfeiting, and cyber attacks.
To do so, they will use a feature already included in the enhanced version of the Tribler software: the ability for users to “gossip,” or report, on the behavior of other peers. The eventual goal is to create accurate personal assessments, or trust metrics, as a form of internal regulation.
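A minimal sketch of such a trust metric appears below: gossiped reports about a peer are averaged, each weighted by how much the reporter is itself trusted, so that a known bad actor's opinion counts for little. This is a generic illustration, not Tribler's actual scheme.

```python
# Generic gossip-weighted trust score: average what peers report about a
# target, weighting each report by the reporter's own trust. Illustrative only.

def trust_score(reports, reporter_trust, default_trust=0.1):
    """reports: list of (reporter, score in [0, 1]) about one target peer."""
    weighted = [(reporter_trust.get(r, default_trust) * s,
                 reporter_trust.get(r, default_trust))
                for r, s in reports]
    total_weight = sum(w for _, w in weighted)
    if total_weight == 0:
        return 0.5  # no information: fall back to a neutral prior
    return sum(ws for ws, _ in weighted) / total_weight

reports = [("alice", 0.9), ("bob", 0.8), ("mallory", 0.0)]
print(round(trust_score(reports, {"alice": 1.0, "bob": 0.8, "mallory": 0.05}), 2))
# -> 0.83: mallory's smear barely moves the weighted average
```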
“This idea is not new, but previous implementations have been costly and are dependent on a company and/or website being the enforcer. Addressing the ‘trust issue’ within open peer-to-peer technology could lead to future online economies that are legal, dynamic and scalable, have very low start-up costs, and minimal downtime,” says Parkes.
By studying user behavior within an operational “Internet currency” system, with a particular focus on understanding how and why attacks, fraud, and abuse occur and how trust can be established and maintained, the researchers imagine future improvements to everything from on-demand television to online auctions to open content encyclopedias.
The application is available for free download at http://tv.seas.harvard.edu/.
Note: This story has been adapted from a news release issued by Harvard University.

Fausto Intilla