Saturday, September 29, 2007

Any Digital Camera Can Take Multibillion-pixel Shots With New Device


Source:

Science Daily — Researchers at Carnegie Mellon University, in collaboration with scientists at NASA's Ames Research Center, have built a low-cost robotic device that enables any digital camera to produce breathtaking gigapixel (billions of pixels) panoramas, called GigaPans.
The technology gives people a new way to make and share images of their environment. It is being used by students to document their communities and by the Commonwealth of Pennsylvania to make Civil War sites accessible on the Web. To promote further sharing of this imagery, Carnegie Mellon has launched a public Web site, http://www.gigapan.org/, where people can upload and interactively explore panoramic images of any format.
In cooperation with Google, researchers also have created a GigaPan layer on Google Earth. Anyone using Google Earth can now fly into these GigaPan panoramas in the context of exploring the world.
Researchers have begun a public beta process with the GigaPan hardware, Web site, and software. The hardware technology enabling GigaPan images is a robotic camera mount, jointly designed and manufactured by Charmed Labs of Austin, Texas. The tripod-like mount makes it possible for a digital camera to take hundreds of overlapping images of landscapes, buildings or rooms. Then, using software developed by Carnegie Mellon and Ames, these images can be arranged in a grid and digitally stitched together into a single image that could consist of tens of billions of pixels.
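The mount's job can be sketched in a few lines: step the camera across the scene by slightly less than its field of view, so that neighboring frames overlap enough for the stitching software. The field-of-view and overlap values below are made-up illustrations, not Charmed Labs' actual firmware parameters:

```python
import math

def gigapan_grid(pan_range_deg, tilt_range_deg, fov_h_deg, fov_v_deg, overlap=0.3):
    """Compute (pan, tilt) angles for a grid of overlapping shots.

    Each shot advances by the field of view minus the desired overlap,
    so neighboring images share enough content to be stitched together.
    """
    step_h = fov_h_deg * (1.0 - overlap)
    step_v = fov_v_deg * (1.0 - overlap)
    cols = max(1, math.ceil(pan_range_deg / step_h))
    rows = max(1, math.ceil(tilt_range_deg / step_v))
    return [(c * step_h, r * step_v) for r in range(rows) for c in range(cols)]

# A 180° x 60° panorama with a telephoto lens (~6.5° x 4.9° per frame)
shots = gigapan_grid(180, 60, 6.5, 4.9)
print(len(shots))  # 720 -- "hundreds of overlapping images"
```

With a narrow lens, even a modest panorama quickly requires hundreds of frames, which is why an automated mount matters.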
These huge image files can then be explored by zooming in on features of interest in a manner similar to Google Earth. "We have taken imagery and made it a new tool for exploration and for enhancing global understanding," said Illah Nourbakhsh, associate professor in the School of Computer Science's Robotics Institute. Nourbakhsh and Randy Sargent, senior systems scientist at Carnegie Mellon West in Moffett Field, Calif., led GigaPan's development. "An ordinary photo makes it possible to cross language barriers," Nourbakhsh explained. "But a GigaPan provides so much information that it leads to conversations between the person who took the panorama and the people who are exploring it and discovering new details."
Last spring, the Pennsylvania Board of Tourism began to use GigaPan to enable people to virtually explore Civil War sites. The technology is also being used for Robot250, an arts-based robotics program in the Pittsburgh area. Robot250 will increase technical literacy by teaching students, artists and other members of the public how to build customized robots.
Nourbakhsh and his colleagues recently began to work with UNESCO's International Bureau of Education and its Associated Schools Network on a project that will link school children in different parts of the world in exploring issues of cultural identity through a classroom project. Middle school children from Pittsburgh to South Africa to Trinidad and Tobago will use the GigaPan camera to share images of their neighborhoods, lives and cultures. "This project will explore curriculum development from the local to the global level," said IBE Director Clementina Acedo.
"It is an extraordinary opportunity to link a school-community based educational practice with high-end technology in the service of children's innovative learning, personal development and world communication." Plans call for the experiences of these children from poorer and richer countries to be presented at the 48th session of the International Conference on Education, scheduled to take place in Geneva in November 2008.
Besides being a tool for education, Nourbakhsh and Sargent see the GigaPan system as an important tool for ecologists, biologists and other scientists. They plan to foster this effort by making several dozen GigaPans available to leading scientists with support from the Fine Foundation of Pittsburgh.
Nourbakhsh hopes the non-commercial GigaPan site will help to develop a community of GigaPan producers and users. "We're not interested in becoming just another photo-sharing site," he said. "We want as many people as possible involved. GigaPan is not just about the vision of the person who makes the image. People who explore the image can make discoveries and gain insights in ways that may be just as important."
Sargent got the idea for GigaPan when he was a technical staff member at Ames Research Center, helping to develop software for combining images from NASA's Mars Exploration Rovers into panoramas. He became convinced that the same technology could open people's eyes to the diversity of their own planet. "It is increasingly important to give people a broad view of the world, particularly to help us understand different cultures and different environments," he said. "It's too easy to have blinders on and to only see and understand what is local."
The GigaPan camera system is part of a larger effort known as the Global Connection Project, led by Nourbakhsh and Sargent. Its purpose is to make people all over the world more aware of their neighbors.
Note: This story has been adapted from material provided by Carnegie Mellon University.

Fausto Intilla

Thursday, September 27, 2007

Superconducting Quantum Computing Cable Created


Source:

Science Daily — Physicists at the National Institute of Standards and Technology (NIST) have transferred information between two "artificial atoms" by way of electronic vibrations on a microfabricated aluminum cable, demonstrating a new component for potential ultra-powerful quantum computers of the future.
The setup resembles a miniature version of a cable-television transmission line, but with some powerful added features, including superconducting circuits with zero electrical resistance, and multi-tasking data bits that obey the unusual rules of quantum physics.
The resonant cable might someday be used in quantum computers, which would rely on quantum behavior to carry out certain functions, such as code-breaking and database searches, exponentially faster than today's most powerful computers.
Moreover, the superconducting components in the NIST demonstration offer the possibility of being easier to manufacture and scale up to a practical size than many competing candidates, such as individual atoms, for storing and transporting data in quantum computers.
Unlike traditional electronic devices, which store information in the form of digital bits that each possess a value of either 0 or 1, each superconducting circuit acts as a quantum bit, or qubit, which can hold values of 0 and 1 at the same time. Qubits in this "superposition" of both values may allow many more calculations to be performed simultaneously than is possible with traditional digital bits, offering the possibility of faster and more powerful computing devices. The resonant section of cable shuttling the information between the two superconducting circuits is known to engineers as a "quantum bus," and it could transport data between two or more qubits.
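The "superposition" idea can be illustrated with a few lines of linear algebra: a qubit's state is a normalized complex 2-vector, and measurement probabilities are the squared magnitudes of its components. This sketch models only the mathematics, not NIST's superconducting hardware:

```python
import numpy as np

# A qubit's state is a complex 2-vector (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
ket0 = np.array([1.0, 0.0], dtype=complex)  # the classical value 0
ket1 = np.array([0.0, 1.0], dtype=complex)  # the classical value 1

# An equal superposition: the qubit holds "0 and 1 at the same time".
plus = (ket0 + ket1) / np.sqrt(2)

def measure_probs(state):
    """Probability of reading out 0 or 1 when the qubit is measured."""
    return np.abs(state) ** 2

print(measure_probs(plus))  # ~[0.5 0.5]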
The NIST work is featured on the cover of the Sept. 27 issue of Nature. The scientists encoded information in one qubit, transferred this information as microwave energy to the resonant section of cable for a short storage time of 10 nanoseconds, and then successfully shuttled the information to a second qubit.
"We tested a new element for quantum information systems," says NIST physicist Ray Simmonds. "It's really significant because it means we can couple more qubits together and transfer information between them easily using one simple element."
The NIST work, together with another letter in the same issue of Nature by a Yale University group, is the first demonstration of a superconducting quantum bus. Whereas the NIST scientists used the bus to store and transfer information between independent qubits, the Yale group used it to enable an interaction of two qubits, creating a combined superposition state. These three actions, demonstrated collectively by the two groups, are essential for performing the basic functions needed in a superconductor-based quantum information processor of the future.
In addition to storing and transferring information, NIST's resonant cable also offers a means of "refreshing" superconducting qubits, which normally can maintain the same delicate quantum state for only half a microsecond. Disturbances such as electric or magnetic noise in the circuit can rapidly destroy a qubit's superposition state. With design improvements, the NIST technology might be used to repeatedly refresh the data and extend qubit lifetime more than 100-fold, sufficient to create a viable short-term quantum computer memory, Simmonds says. NIST's resonant cable might also be used to transfer quantum information between matter and light -- microwave energy is a low-frequency form of light -- and thus link quantum computers to ultra-secure quantum communications systems.
If they can be built, quantum computers -- harnessing the unusual rules of quantum mechanics, the principles governing nature's smallest particles -- might be used for applications such as fast and efficient code breaking, optimizing complex systems such as airline schedules, making counterfeit-proof money, and solving complex mathematical problems. Quantum information technology in general allows for custom-designed systems for fundamental tests of quantum physics and as-yet-unknown futuristic applications.
A superconducting qubit is about the width of a human hair. NIST researchers fabricate two qubits on a sapphire microchip, which sits in a shielded box about 8 cubic millimeters in size. The resonant section of cable is 7 millimeters long, similar to the coaxial wiring used in cable television but much thinner and flatter, zig-zagging around the 1.1 mm space between the two qubits. Like a guitar string, the resonant cable can be stimulated so that it hums or "resonates" at a particular tone or frequency in the microwave range. Quantum information is stored as energy in the form of microwave particles or photons.
The NIST research was supported in part by the Disruptive Technology Office.
M.A. Sillanpää, J.I. Park and R.W. Simmonds, "Coherent quantum state storage and transfer between two phase qubits via a resonant cavity," Nature, Sept. 27, 2007.
Note: This story has been adapted from a news release issued by National Institute of Standards and Technology.

Fausto Intilla

Two Giant Steps In Advancement Of Quantum Computing Achieved


Source:

Science Daily — Two major steps toward putting quantum computers into real practice -- sending a photon signal on demand from a qubit onto wires and transmitting the signal to a second, distant qubit -- have been brought about by a team of scientists at Yale.
The accomplishments are reported in sequential issues of Nature, on September 20 and September 27; the work is highlighted on the cover of the latter issue, along with complementary work from a group at the National Institute of Standards and Technology.
Over the past several years, the research team of Professors Robert Schoelkopf in applied physics and Steven Girvin in physics has explored the use of solid-state devices resembling microchips as the basic building blocks in the design of a quantum computer. Now, for the first time, they report that superconducting qubits, or artificial atoms, have been able to communicate information not only to their nearest neighbor, but also to a distant qubit on the chip.
This research now moves quantum computing from "having information" to "communicating information." In the past information had only been transferred directly from qubit to qubit in a superconducting system. Schoelkopf and Girvin's team has engineered a superconducting communication 'bus' to store and transfer information between distant quantum bits, or qubits, on a chip. This work, according to Schoelkopf, is the first step to making the fundamentals of quantum computing useful.
The first breakthrough reported is the ability to produce on demand -- and control -- single, discrete microwave photons as the carriers of encoded quantum information. While microwave energy is used in cell phones and ovens, those sources do not produce just one photon. The new system reliably produces exactly one photon at a time.
"It is not very difficult to generate signals with one photon on average, but, it is quite difficult to generate exactly one photon each time. To encode quantum information on photons, you want there to be exactly one," according to postdoctoral associates Andrew Houck and David Schuster who are lead co-authors on the first paper.
"We are reporting the first such source for producing discrete microwave photons, and the first source to generate and guide photons entirely within an electrical circuit," said Schoelkopf.
In order to successfully perform these experiments, the researchers had to control electrical signals corresponding to one single photon. In comparison, a cell phone emits about 10^23 (100,000,000,000,000,000,000,000) photons per second. Further, the extremely low energy of microwave photons mandates the use of highly sensitive detectors and experiment temperatures just above absolute zero.
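The 10^23 figure is easy to sanity-check from the photon energy E = hf. A back-of-the-envelope sketch, assuming a transmit power of roughly 1 W at a carrier frequency around 2 GHz (both values are rough assumptions for illustration, not from the article):

```python
h = 6.626e-34   # Planck constant, joule-seconds
f = 2e9         # cell-phone carrier frequency, ~2 GHz (assumed)
power = 1.0     # transmit power, ~1 W (assumed)

energy_per_photon = h * f                 # E = h*f, roughly 1.3e-24 J
photons_per_second = power / energy_per_photon
print(f"{photons_per_second:.1e}")        # on the order of 10^23
```

Because each microwave photon carries so little energy, a watt of radio power corresponds to an astronomical photon flux, and isolating a single photon demands cryogenic temperatures and extremely sensitive detectors.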
"In this work we demonstrate only the first half of quantum communication on a chip -- quantum information efficiently transferred from a stationary quantum bit to a photon or 'flying qubit,'" says Schoelkopf. "However, for on-chip quantum communication to become a reality, we need to be able to transfer information from the photon back to a qubit."
This is exactly what the researchers go on to report in the second breakthrough. Postdoctoral associate Johannes Majer and graduate student Jerry Chow, lead co-authors of the second paper, added a second qubit and used the photon to transfer a quantum state from one qubit to another. This was possible because the microwave photon could be guided on wires -- similarly to the way fiber optics can guide visible light -- and carried directly to the target qubit. "A novel feature of this experiment is that the photon used is only virtual," said Majer and Chow, "winking into existence for only the briefest instant before disappearing."
To allow the crucial communication between the many elements of a conventional computer, engineers wire them all together to form a data "bus," which is a key element of any computing scheme. Together the new Yale research constitutes the first demonstration of a "quantum bus" for a solid-state electronic system. This approach can in principle be extended to multiple qubits, and to connecting the parts of a future, more complex quantum computer.
However, Schoelkopf likened the current stage of development of quantum computing to conventional computing in the 1950s, when individual transistors were first being built. Standard computer microprocessors now contain a billion transistors, but it took decades for physicists and engineers to develop integrated circuits with transistors that could be mass-produced.
Schoelkopf and Girvin are members of the newly formed Yale Institute for Nanoscience and Quantum Engineering (YINQE), a broad interdisciplinary activity among faculty and students from across the university.
Other Yale authors involved in the research are J.M. Gambetta, J.A. Schreier, J. Koch, B.R. Johnson, L. Frunzio, A. Wallraff, A. Blais and Michel Devoret. Funding for the research was from the National Security Agency under the Army Research Office, the National Science Foundation and Yale University.
Citation: Nature 449, 328-331 (20 September 2007) doi:10.1038/nature06126 , Nature 450, 443-447 (27 September 2007) doi:10.1038/nature06184
Note: This story has been adapted from a news release issued by Yale University.

Fausto Intilla

'Printers' That Can Make 3-D Solid Objects Soon To Enter Mainstream

Source:
Science Daily — It is a simple matter to print an E-book or other document directly from your computer, whether that document is on your hard drive, at a web site or in an email. But, imagine being able to 'print' solid objects, a piece of sports equipment, say, or a kitchen utensil, or even a prototype car design for wind tunnel tests. US researchers suggest such 3-D printer technology will soon enter the mainstream once a killer application emerges.
Such technology already exists and is maturing rapidly, so that high-tech designers and others can share solid designs almost as quickly as sending a fax. The systems available are based on a bath of liquid plastic that is solidified by laser light. The movements of the laser are controlled by a computer that reads a digitized 3-D map of the solid object or design.
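The computer's role in such a system is essentially geometric: slice the digitized 3-D model into horizontal layers and trace each layer's outline with the laser. Here is a minimal sketch of the slicing step, intersecting one triangle of a 3-D mesh with a layer plane (a toy illustration, not any vendor's actual algorithm):

```python
def slice_triangle(tri, z):
    """Return the segment where a triangle crosses the plane at height z, or None.

    tri is three (x, y, z) vertices. A slicer runs this over every triangle
    of the mesh at each layer height; the laser then traces the resulting
    segments to solidify the liquid plastic one layer at a time.
    """
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:          # this edge crosses the layer plane
            t = (z - z1) / (z2 - z1)         # fraction along the edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

tri = ((0, 0, 0), (10, 0, 10), (0, 10, 10))
print(slice_triangle(tri, 5.0))  # ((5.0, 0.0), (0.0, 5.0))
```

Repeating this for every layer turns a 3-D file into the stack of 2-D laser paths the printer actually executes.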
Writing in the Inderscience publication International Journal of Technology Marketing, US researchers discuss how this technology might eventually move into the mainstream allowing work environments to 3-D print equipment, whether that is plastic paperclips, teacups, or components that can be joined to make sophisticated devices, perhaps bolted together with printed nuts and bolts.
Physicist Phil Anderson of the School of Theoretical and Applied Science and Cherie Ann Sherman of the Anisfield School of Business, both at Ramapo College of New Jersey in Mahwah, New Jersey, explain how this technology, known formally as 'rapid prototyping,' could revolutionize the way people buy goods.
It will allow them to buy or obtain a digital file representing a physical product electronically and then produce the object at a time and place convenient to them. The technology will be revolutionary in the same way that music downloads have shaken up the music industry. "This technology has the potential to generate a variety of new business models, which would enhance the average consumer's lifestyle," say the paper's authors.
The team discusses the current advanced applications of rapid prototyping which exist in the military where missing and damaged components can be produced at the site of action. Education too can make use of 3-D printing to allow students to make solid their experimental designs.
Also, product developers can share tangible prototypes by transferring the digitized design without the delay of shipping a solid object between sites, which may be separated by thousands of miles. The possibilities for consumer goods, individualized custom products, replacement components, and quick fixes for broken objects, are almost unlimited, the authors suggest.
From the business perspective, e-commerce sites will essentially become digital download sites, with physical stores, retail employees, and shipping eliminated. It is only a matter of time before the 'killer application,' the 3-D equivalent of the MP3 music file, one might say, arrives to make owning a 3-D printer as necessary to the modern lifestyle as owning a microwave oven, a TV, or indeed a personal computer.
Note: This story has been adapted from a news release issued by Inderscience Publishers.

Fausto Intilla
www.oloscience.com

Wednesday, September 19, 2007

Computer Memory Designed In Nanoscale Can Retrieve Data 1,000 Times Faster

Source:

Science Daily — Scientists from the University of Pennsylvania have developed nanowires capable of storing computer data for 100,000 years and retrieving that data a thousand times faster than existing portable memory devices such as Flash memory and micro-drives, all using less power and space than current memory technologies.
Ritesh Agarwal, an assistant professor in the Department of Materials Science and Engineering, and colleagues developed a self-assembling nanowire of germanium antimony telluride, a phase-changing material that switches between amorphous and crystalline structures, the key to read/write computer memory. Fabrication of the nanoscale devices, roughly 100 atoms in diameter, was performed without conventional lithography, the blunt, top-down manufacturing process that employs strong chemicals and often produces unusable materials with space, size and efficiency limitations.
Instead, researchers used self-assembly, a process by which chemical reactants crystallize at lower temperatures, mediated by nanoscale metal catalysts, to spontaneously form nanowires 30-50 nanometers in diameter and 10 micrometers in length; they then fabricated memory devices on silicon substrates.
"We measured the resulting nanowires for write-current amplitude, switching speed between amorphous and crystalline phases, long-term durability and data retention time," Agarwal said.
Tests showed extremely low power consumption for data encoding (0.7 mW per bit). They also showed that data writing, erasing and retrieval (50 nanoseconds) is 1,000 times faster than in conventional Flash memory, and indicated that the device would not lose data even after approximately 100,000 years of use, all with the potential to realize terabit-level nonvolatile memory device density.
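The read/write mechanism described above can be caricatured in a few lines: the bit lives in the material's resistance, which differs by orders of magnitude between the amorphous and crystalline phases. The resistance values below are illustrative placeholders, not measurements from the Penn devices:

```python
class PhaseChangeCell:
    """Toy model of a phase-change memory bit (not the actual Penn device).

    A short, intense current pulse melts and quenches the material into the
    high-resistance amorphous state; a longer, gentler pulse lets it
    recrystallize into the low-resistance state. A read just compares the
    resistance to a threshold, so it does not disturb the stored bit --
    which is why the memory is nonvolatile.
    """
    R_AMORPHOUS = 1e6     # ohms (illustrative)
    R_CRYSTALLINE = 1e4   # ohms (illustrative)
    R_THRESHOLD = 1e5

    def __init__(self):
        self.resistance = self.R_CRYSTALLINE

    def write(self, bit):
        # RESET pulse -> amorphous (logical 0); SET pulse -> crystalline (1)
        self.resistance = self.R_CRYSTALLINE if bit else self.R_AMORPHOUS

    def read(self):
        return 1 if self.resistance < self.R_THRESHOLD else 0

cell = PhaseChangeCell()
cell.write(0)
print(cell.read())  # 0
cell.write(1)
print(cell.read())  # 1
```

The large resistance contrast is what makes the readout robust; the engineering challenge the article describes is shrinking the switching volume without losing that contrast.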
"This new form of memory has the potential to revolutionize the way we share information, transfer data and even download entertainment as consumers," Agarwal said. "This represents a potential sea-change in the way we access and store data."
Phase-change memory in general features faster read/write speeds, better durability and simpler construction than other memory technologies such as Flash. The challenge has been to reduce the size of phase-change materials by conventional lithographic techniques without damaging their useful properties. Self-assembled phase-change nanowires, as created by the Penn researchers, operate with less power and are easier to scale, offering a strategy for an ideal memory with efficient, durable control several orders of magnitude beyond current technologies.
"The atomic scale of the nanodevices may represent the ultimate size limit in current-induced phase transition systems for non-volatile memory applications," Agarwal said.
Current solid-state technology for products like memory cards, digital cameras and personal digital assistants traditionally utilizes Flash memory, a non-volatile and durable computer memory that can be erased and reprogrammed electronically. Flash provides most battery-powered devices with acceptable durability and moderately fast data access. Yet the technology's limits are apparent: digital cameras can't snap rapid-fire photos because it takes precious seconds to store the last photo to memory. And if a memory device is fast, as with the DRAM and SRAM used in computers, then it is volatile; if the plug on a desktop computer is pulled, all recent data entry is lost.
Therefore, a universal memory device is desired that is scalable, fast, durable and nonvolatile, a difficult set of requirements that has now been demonstrated at Penn.
"Imagine being able to store hundreds of high-resolution movies in a small drive, downloading them and playing them without wasting time on data buffering, or imagine booting your laptop computer in a few seconds as you wouldn't need to transfer the operating system to active memory," Agarwal said.
The research was performed by Agarwal, Se-Ho Lee and Yeonwoong Jung of the Department of Materials Science and Engineering in the School of Engineering and Applied Science at Penn. The findings appear online in the journal Nature Nanotechnology and in the October print edition.
The research was supported by the Materials Research Science and Engineering Center at Penn, the University of Pennsylvania Research Foundation award and a grant from the National Science Foundation.
Note: This story has been adapted from a news release issued by University of Pennsylvania.

Fausto Intilla
www.oloscience.com

Tuesday, September 11, 2007

Getting There Faster With Virtual Reality


Source:

Science Daily — Is the navigation system too complex? Does it distract the driver’s attention from the traffic? To test electronic assistants, their developers have to build numerous prototypes – an expensive and time-consuming business. Tests in a virtual world make prototypes unnecessary.
The engineer stares intently at the display on the virtual dashboard. His task is to test the new driver assistance system from the user’s perspective. How seriously does it distract a driver to listen to a text message while negotiating a roundabout?
How does the driver perceive a collision warning in the fog? Developers of electronic assistants have to build large numbers of prototypes and test countless functions. A great deal of time and money must therefore be invested before the product is ready to go on the market. Tomorrow's engineers will have a much easier time: They can simply create virtual prototypes and simulate all the functions in a virtual world.
Car manufacturers and suppliers will be the chief beneficiaries of Personal Immersion® in future. Developed by the Fraunhofer Institute for Industrial Engineering IAO in Stuttgart, this virtual reality and stereoscopic interactive simulation system makes it possible to display tailored virtual environments for purposes such as the development of driver assistance systems.
“Our VR system not only simulates the instruments,” explains IAO project manager Manfred Dangelmaier. “Every level of this system is virtual. The user is seated in a virtual driving simulator, surrounded by a virtual world, facing a virtual dashboard with a virtual control system.” This allows the engineers to simulate every conceivable situation in order to test the man-machine interfaces. Whatever traffic situation is to be illustrated, and whatever demands the driver may make on the vehicle electronics, such as retrieving up-to-date traffic jam warnings – there are no limits to the imagination when testing these systems.
“Interactive simulation of this kind significantly cuts development time and costs,” says Dangelmaier. Virtual reality also facilitates communication within the interdisciplinary teams engaged in immersive design.
Up to now, a major problem in portraying virtual worlds was the projector resolution. "In technical terms, it is not easy to achieve a satisfactory portrayal of both the full-size surroundings and the close-up details at the same time in a virtual environment," says Dangelmaier. But the researchers have solved the problem: Instead of the two projectors customary in VR systems, their system operates with four projectors in a complex stereo projection setup. The scientists will be presenting potential applications at the International Motor Show (IAA) in Frankfurt on September 13 through 23.
Note: This story has been adapted from a news release issued by Fraunhofer-Gesellschaft.

Fausto Intilla

Friday, September 7, 2007

Computerized Treatment Of Manuscripts

Source:

Science Daily — Researchers at the UAB Computer Vision Centre working on the automatic recognition of manuscript documents have designed a new system that is more efficient and reliable than currently existing ones.
The BSM (short for "Blurred Shape Model") has been designed to work with ancient, damaged or difficult-to-read manuscripts, handwritten scores and architectural drawings. At the same time, it serves as an effective human-machine interface, able to automatically recognise documents while they are being written or drawn.
Researchers based their work on the biological process of the human mind and its ability to see and interpret all types of images (recognition of shapes, structures, dimensions, etc.) to create description and classification models of handwritten symbols. However, this computerised system differs from others since it can detect variations, elastic deformations and uneven distortions that can appear when manually reproducing any type of symbol (letters, signs, drawings, etc.). Another advantage is the possibility to work in real time, only a few seconds after the document has been introduced into the computer.
The BSM also differs from existing systems that apply one standard process to every type of symbol, an approach that makes symbols harder to recognise once they have been introduced. In contrast, the methodology developed by the Computer Vision Centre can be adapted to each of the areas it is applied to.
To analyse and recognise symbols, the system divides the image into subregions with the help of a grid and saves the information from each grid square, registering even the smallest of differences (e.g. between p and b). Depending on the shape introduced, the system runs a process to distinguish the shape and any possible deformations (the letter p, for example, would be registered as being rounder or as having a shorter or longer stem). It then stores this information and classifies it automatically.
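The grid step described here is a classic "zoning" descriptor. A simplified sketch follows; the real BSM additionally spreads each pixel's contribution across neighboring grid cells to tolerate elastic deformations, and the toy symbols and grid size below are made up for illustration:

```python
import numpy as np

def grid_descriptor(img, grid=4):
    """Divide a binary symbol image into grid x grid subregions and return
    the fraction of 'ink' pixels in each, a simplified zoning descriptor
    in the spirit of (but not identical to) BSM.
    """
    h, w = img.shape
    desc = np.zeros((grid, grid))
    for r in range(grid):
        for c in range(grid):
            cell = img[r * h // grid:(r + 1) * h // grid,
                       c * w // grid:(c + 1) * w // grid]
            desc[r, c] = cell.mean()   # ink density of this subregion
    return desc.ravel()

# Two symbols that differ only in a small region (roughly 'p' vs 'b'):
p = np.zeros((8, 8)); p[:, 0] = 1; p[0:4, 1:4] = 1   # stem + upper bowl
b = np.zeros((8, 8)); b[:, 0] = 1; b[4:8, 1:4] = 1   # stem + lower bowl
print(np.abs(grid_descriptor(p) - grid_descriptor(b)).sum() > 0)  # True
```

A classifier can then compare these descriptor vectors (e.g. by nearest neighbor) to decide which stored symbol class a new drawing belongs to.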
Researchers decided to test the efficiency of the system by experimenting with two application areas. They created a database of musical notes and a database of architectural symbols. The first was created from a collection of modern and ancient musical scores (from the 18th and 19th centuries) from the archives of the Barcelona Seminary, which included a total of 2,128 examples of three types of musical notes drawn by 24 different people. The second database included 2,762 examples of handwritten architectural symbols belonging to 14 different groups. Each group contained approximately 200 types of symbols drawn by 13 different people.
In order to compare the performance and reliability of the BSM, the same data was introduced into other similar systems. The BSM was capable of recognising musical notes with an accuracy of over 98% and architectural symbols with an accuracy of 90%.
Researchers at the Computer Vision Centre who developed the BSM were awarded the first prize in the third edition of the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA) which took place last June.
Note: This story has been adapted from a news release issued by Universitat Autonoma de Barcelona.

Fausto Intilla
www.oloscience.com

Computer Scientists Take The 'Why' Out Of WiFi


Source:

Science Daily — “People expect WiFi to work, but there is also a general understanding that it’s just kind of flaky,” said Stefan Savage, one of the UCSD computer science professors who led development of an automated, enterprise-scale WiFi troubleshooting system for UCSD’s computer science building. The system is described in a paper presented in August in Kyoto, Japan at ACM SIGCOMM, one of the world’s premier networking conferences.
“If you have a wireless problem in our building, our system automatically analyzes the behavior of your connection – each wireless protocol, each wired network service and the many interactions between them. In the end, we can say ‘it’s because of this that your wireless is slow or has stopped working’ – and we can tell you immediately,” said Savage.
For humans, diagnosing problems in the now ubiquitous 802.11-based wireless access networks requires a huge amount of data, expertise and time. In addition to the myriad complexities of the wired network, wireless networks face the additional challenges of shared spectrum, user mobility and authentication management. Finally, the interaction between wired and wireless networks is itself a source of many problems.
“Wireless networks are hooked on to the wired part of the Internet with a bunch of ‘Scotch tape and baling wire’ – protocols that really weren’t designed for WiFi,” explained Savage. “If one of these components has a glitch, you may not be able to use the Internet even though the network itself is working fine.”
There are so many moving pieces, so many things you cannot see. Within this soup, everything has to work just right. When it doesn’t, identifying which piece isn’t working is tough and requires sifting through a lot of data. For example, someone using a microwave oven two rooms away may cause enough interference to disrupt your connection.
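To make the triage problem concrete, here is a toy rule-based diagnosis over a few hypothetical measurements. This is illustrative only; the UCSD system infers causes from statistical models built over live traces, not from hand-written thresholds like these:

```python
def diagnose(assoc_ms, dhcp_ms, loss_rate, noise_db):
    """Toy triage for a slow WiFi connection (illustrative thresholds).

    assoc_ms  -- time to associate/authenticate with the access point
    dhcp_ms   -- time for the wired-side DHCP service to answer
    loss_rate -- fraction of frames lost on the wireless link
    noise_db  -- background noise level in the 2.4 GHz band (dBm)
    """
    if noise_db > -70:
        return "non-WiFi interference (e.g. a microwave oven nearby)"
    if loss_rate > 0.2:
        return "poor link quality or contention on the shared spectrum"
    if assoc_ms > 1000:
        return "slow 802.11 association or authentication"
    if dhcp_ms > 2000:
        return "wired-side service delay (e.g. DHCP)"
    return "no obvious problem in these measurements"

print(diagnose(assoc_ms=120, dhcp_ms=300, loss_rate=0.02, noise_db=-60))
```

Even this caricature shows why automation helps: the culprit can sit at any layer, from the radio spectrum up to wired services the wireless user never sees.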
“Today, if you ask your network administrator why it takes minutes to connect to the network or why your WiFi connection is slow, they’re unlikely to know the answer,” explained Yu-Chung Cheng, a computer science Ph.D. student at UCSD and lead author on the paper. “Many problems are transient – they’re gone before you can even get an admin to look at them – and the number of possible reasons is huge,” explained Cheng, who recently defended his dissertation and will join Google this fall.
“Few organizations have the expertise, data or tools to decompose the underlying problems and interactions responsible for transient outages or performance degradations,” the authors write in their SIGCOMM paper.
The computer scientists from UCSD’s Jacobs School of Engineering presented a set of modeling techniques for automatically characterizing the source of such problems. In particular, they focus on data transfer delays unique to 802.11 networks – media access dynamics and mobility management latency.
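The paper's actual models are far more sophisticated, but the basic idea of automatically pinpointing which stage of a wireless connection is responsible for a delay can be sketched in a few lines. The sketch below is illustrative only, not the UCSD system; the phase names and threshold are invented for the example.

```python
def diagnose(timings_ms, threshold_ms=1000):
    """Flag the connection-setup phase dominating the delay, if any."""
    slow = {phase: t for phase, t in timings_ms.items() if t > threshold_ms}
    if not slow:
        return "connection setup looks healthy"
    worst = max(slow, key=slow.get)
    return f"dominant delay: {worst} ({slow[worst]} ms)"

# Per-phase timings (milliseconds) for one hypothetical connection attempt.
attempt = {"scan": 230, "802.11 auth": 12, "association": 8,
           "802.1X auth": 40, "dhcp": 5200}
print(diagnose(attempt))  # → dominant delay: dhcp (5200 ms)
```

A real diagnostic system would, as the paper describes, model interactions between layers rather than inspecting each phase in isolation, but even this toy version shows why automation helps: the timings are gone before a human administrator could ever look at them.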
The UCSD system runs 24 hours a day, constantly churning through the flood of data relevant to the wireless network and catching transient problems.
“We’ve created a virtual wireless expert who is always at work,” said Cheng.
Within the UCSD Computer Science building, all the wireless help-desk issues go through the new automated system, which has been running for about nine months. The data collection has been going on for almost two years.
One of the big take-away lessons is that there is no one thing that affects wireless network performance. Instead, there are a lot of little things that interact and go wrong in ways you might not anticipate.
“I look at this as an engineering effort. In the future, I think that enterprise wireless networks will have sophisticated diagnostics and repair capabilities built in. How much these will draw from our work is hard to tell today. You never know the impact you are going to have when you do the work,” said Savage. “In the meantime, our system is the ultimate laboratory for testing new wireless gadgets and new approaches to building wireless systems. We just started looking at WiFi-based Voice-Over-IP (VOIP) phones. We learn something new every week.”
Paper citation: "Automating Cross-Layer Diagnosis of Enterprise Wireless Networks," by Yu-Chung Cheng, Mikhail Afanasyev, Patrick Verkaik, Jennifer Chiang, Alex C. Snoeren, Stefan Savage, and Geoffrey M. Voelker from the Department of Computer Science and Engineering at UCSD's Jacobs School of Engineering; Péter Benkö from the Traffic Analysis and Network Performance Laboratory (TrafficLab) at Ericsson Research, Budapest, Hungary
Funding was provided by UCSD Center for Networked Systems (CNS), Ericsson, National Science Foundation (NSF), and a UC Discovery Grant.
Note: This story has been adapted from a news release issued by University of California - San Diego.

Fausto Intilla
www.oloscience.com

martedì 4 settembre 2007

Internet Map Looks Like A Digital Dandelion


Source:

Science Daily — What looks like the head of a digital dandelion is a map of the Internet generated by new algorithms from computer scientists at UC San Diego. This map features Internet nodes – the red dots – and linkages – the green lines.
But it is no ordinary map. It is a (mostly) randomly generated graph that retains the essential characteristics of a specific corner of the Internet but doubles the number of nodes.
On August 30 in Kyoto, Japan at ACM SIGCOMM, the premier computer networking conference, UCSD computer scientists presented techniques for producing annotated, Internet router graphs of different sizes – based on observations of Internet characteristics.
The graph annotations include information about the peering business relationships that help to determine the paths that packets of information take as they travel across the Internet. Generating these kinds of graphs is critical for a wide range of computer science research.
“Defending against denial of service attacks and large-scale worm outbreaks depends on network topology. Our work allows computer scientists to experiment with a range of random graphs that match Internet characteristics. This work is also useful for determining the sensitivity of particular techniques – like routing protocols and congestion controls – to network topology and to variations in network topology,” said Priya Mahadevan, the first author on the SIGCOMM 2007 paper. Mahadevan just completed her computer science Ph.D. at UCSD’s Jacobs School of Engineering. In October, she will join Hewlett Packard Laboratories in Palo Alto, CA.
“We’re saying, ‘here is what the Internet looks like, and here is our recreation of it on a larger scale.’ Our algorithm produces random graphs that maintain the important interconnectivity characteristics of the original. The goal is to produce a topology generator capable of outputting a range of annotated Internet topologies of varying sizes based on available measurements of network connectivity and characteristics,” said Amin Vahdat, the senior author on the paper, a computer science professor at UCSD and the Director of UCSD’s Center for Networked Systems (CNS) – an industrial/academic collaboration investigating emerging issues in computing systems that are both very large (planetary scale) and very small (the scale of wireless sensor networks).
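The released topology generator implements the paper's full degree-correlation rescaling; the much simpler sketch below is not the authors' algorithm, only an illustration of the underlying idea of growing a random graph that preserves an observed degree distribution. It doubles the node count, then pairs edge "stubs" at random in the style of a configuration model; all names are invented.

```python
import random

def rescale_degrees(degrees, factor=2):
    """Sample a degree sequence 'factor' times larger from the original."""
    return [random.choice(degrees) for _ in range(factor * len(degrees))]

def configuration_graph(degrees):
    """Randomly pair edge stubs; self-loops and duplicates are dropped."""
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(stubs)
    edges = set()
    for u, v in zip(stubs[::2], stubs[1::2]):
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return edges

random.seed(0)
original = [3, 2, 2, 1, 1, 1]       # degree sequence of a small measured graph
bigger = rescale_degrees(original)  # twice as many nodes, same distribution
print(len(bigger))                  # → 12
edges = configuration_graph(bigger)
```

The paper's contribution is precisely what this sketch omits: preserving not just the degree distribution but the *correlations* between degrees of connected nodes, which is what makes the rescaled graph behave like the real Internet.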
The authors are making the source code for their topology generator publicly available and hope that it will benefit a range of studies.
“The techniques we have developed for characterizing and recreating Internet characteristics are generally applicable to a broad range of disciplines that consider networks, including physics, biology, chemistry, neuroscience and sociology,” said Vahdat.
Citation: Orbis: Rescaling Degree Correlations to Generate Annotated Internet Topologies, Priya Mahadevan, Calvin Hubble, Bradley Huffaker, Dimitri Krioukov, and Amin Vahdat, Proceedings of the ACM SIGCOMM Conference, Kyoto, Japan, August 2007.
Funding was provided by the National Science Foundation (NSF) and UCSD’s Center for Networked Systems (CNS).
Note: This story has been adapted from a news release issued by University of California - San Diego.

Fausto Intilla

lunedì 3 settembre 2007

Making Internet Bandwidth A Global Currency


Source:

Science Daily — Computer scientists at Harvard's School of Engineering and Applied Sciences, in collaboration with colleagues from the Netherlands, are using a novel peer-to-peer video sharing application to explore a next-generation model for safe and legal electronic commerce that uses Internet bandwidth as a global currency.
The application is an enhanced version of a program called Tribler, originally created by scientists at the Delft University of Technology and Vrije Universiteit, Amsterdam to study video file sharing. The software exploits the power of peer-to-peer technology, which is based on forming networks among individual users.
“Successful peer-to-peer systems rely on designing rules that promote fair sharing of resources amongst users. Thus, they are both efficient and powerful computational and economic systems,” says David Parkes, John L. Loeb Associate Professor of the Natural Sciences at Harvard. "Peer-to-peer has received a bad rap, however, because of its frequent association with illegal music or software downloads.”
Unlike traditional, centralized approaches, peer-to-peer systems are remarkably robust: they scale smoothly because the software adjusts to the number and behavior of individual users. This flexibility, speed, and reliability inspired the researchers to use a version of the Tribler video sharing software as a model for an e-commerce system.
“Our platform will provide fast downloads by ensuring sufficient uploads,” explains Johan Pouwelse, an assistant professor at Delft University of Technology and the technical director of Tribler. “The next generation of peer-to-peer systems will provide an ideal marketplace not just for content, but for bandwidth in general.”
The researchers envision an e-commerce model that connects users to a single global market, without any controlling company, network, or bank. They see bandwidth as the first true Internet “currency” for such a market. For example, the more a user uploads now (i.e., earns) and the higher the quality of the contributions, the more they would be able to download later (i.e., spend) and the faster the download speed. More broadly, this paradigm empowers individuals or groups of users to run their own “marketplace” for any computer resource or service.
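The earn-now, spend-later mechanic described above can be sketched as a simple credit ledger. This is a hypothetical illustration (class and method names invented), not Tribler's accounting scheme: peers accumulate credit by uploading and draw it down by downloading, so sustained free-riding eventually blocks further downloads.

```python
class BandwidthLedger:
    """Toy bandwidth-as-currency ledger: upload to earn, download to spend."""

    def __init__(self):
        self.credit = {}  # peer id -> megabytes of earned credit

    def record_upload(self, peer, mb):
        self.credit[peer] = self.credit.get(peer, 0) + mb

    def request_download(self, peer, mb):
        """Allow the download only if the peer has earned enough credit."""
        if self.credit.get(peer, 0) >= mb:
            self.credit[peer] -= mb
            return True
        return False

ledger = BandwidthLedger()
ledger.record_upload("alice", 500)
print(ledger.request_download("alice", 200))  # → True: 500 MB earned
print(ledger.request_download("alice", 400))  # → False: only 300 MB left
```

A real decentralized system cannot keep a single ledger like this, which is exactly why the researchers turn to gossip and webs of trust for distributed accounting.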
Another idea the researchers believe has enormous but untapped potential is the combination of social network technology with peer-to-peer systems. “In the case of sharing and playing video, our network-based system already allows a group of ‘friends’ to pool their collective upload ‘reserve’ to slash download times. For Internet-based television this means a true instant, on-demand video experience,” explains Pouwelse.
The researchers concede that the greatest challenge to any peer-to-peer backed e-commerce system is implementing proper regulation in a decentralized environment. To keep an eye on the virtual economy, Parkes and Pouwelse envision creating a “web of trust,” or a network between friends used to evaluate the trustworthiness of fellow users and aimed at preventing content theft, counterfeiting, and cyber attacks.
To do so they will use a feature already included in the enhanced version of the Tribler software, the ability for users to “gossip” or report on the behavior of other peers. Their eventual goal is to find a way to create accurate personal assessments or trust metrics as a form of internal regulation.
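One way such gossip can be turned into a trust metric is to weight each peer's report by the reporter's own current trust, so that untrusted newcomers cannot easily poison another peer's score. The sketch below is an invented minimal example of that idea, not Tribler's actual mechanism.

```python
def update_trust(trust, reports):
    """Recompute trust scores from gossiped reports.

    trust:   dict of peer -> current trust in [0, 1]
    reports: list of (reporter, subject, score in [0, 1])
    """
    weighted, weights = {}, {}
    for reporter, subject, score in reports:
        w = trust.get(reporter, 0.1)  # strangers carry little weight
        weighted[subject] = weighted.get(subject, 0.0) + w * score
        weights[subject] = weights.get(subject, 0.0) + w
    new = dict(trust)
    for subject in weighted:
        new[subject] = weighted[subject] / weights[subject]
    return new

trust = {"alice": 0.9, "bob": 0.2}
reports = [("alice", "carol", 1.0), ("bob", "carol", 0.0)]
print(round(update_trust(trust, reports)["carol"], 2))  # → 0.82
```

Because Alice is far more trusted than Bob, her positive report on Carol dominates, which is the property the researchers want: internal regulation without any central enforcer.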
“This idea is not new, but previous implementations have been costly and are dependent on a company and/or website being the enforcer. Addressing the ‘trust issue’ within open peer-to-peer technology could lead to future online economies that are legal, dynamic and scalable, have very low start-up costs, and minimal downtime,” says Parkes.
By studying user behavior within an operational “Internet currency” system, with a particular focus on understanding how and why attacks, fraud, and abuse occur and how trust can be established and maintained, the researchers imagine future improvements to everything from on-demand television to online auctions to open content encyclopedias.
The application is available for free download at http://tv.seas.harvard.edu/.
Note: This story has been adapted from a news release issued by Harvard University.

Fausto Intilla

domenica 2 settembre 2007

I.B.M. Researchers Advancing Computer Processing Ability


By JOHN MARKOFF
Published: August 31, 2007

SAN FRANCISCO, Aug. 30 — Researchers at I.B.M. laboratories say they have made progress toward storing information and computing at the level of individual atoms.
The scientists documented their work in two papers appearing on Friday in the journal Science. Both papers are focused on new understanding of the behavior of magnetism at the tiny scale of nanotechnology, where scientists hope to develop electronics made from components that are far smaller than today’s transistors and wires.
In one paper the researchers describe a technique for reading and writing digital ones and zeroes onto a handful of atoms, or even individual atoms. The second paper describes the ability to use a single molecule as a switch, replicating the behavior of today’s transistors.
The papers are the latest indication that computing technologies capable of replacing today’s microelectronics materials in the next decade are beginning to emerge.
R. Stanley Williams, a Hewlett-Packard physicist, said this week that his group had begun manufacturing prototypes of a silicon chip that combines both conventional microelectronics and molecular scale components. Their first hybrid device is a circuit called a field programmable gate array, or F.P.G.A., using molecular-scale components as the configuration circuitry, an approach that will save tremendous space in the chip design.
A team of I.B.M. researchers at the company’s Almaden Research Center in San Jose, Calif., was able to use a scanning tunneling microscope to observe the magnetic orientation of iron and manganese atoms at low temperatures. Controlling magnetic direction is a crucial technique that is used in reading and writing digital information on magnetic storage disks like standard hard drives.
In addition to the potential storage applications, the researchers noted that atomic-scale magnetic structures are also of scientific interest because they may be harnessed for quantum computing, a technology that would be far faster than current computers for some specialized uses.
A second group of I.B.M. scientists in Zurich was able to place two hydrogen atoms in an ultrathin insulating film and switch them back and forth between two states, creating the equivalent of the ones and zeroes used in standard chips. They were also able to use the same switching process to inject an electric charge into one molecule and link the effect to a neighboring molecule. That suggests it might be possible to extend the effect into a fabric of trillions of atom-size switches in the future.
The laboratory advances are far from being ready to commercialize, but they provide hope for the electronics industry, which has grown steadily because of the continuous shrinking in size and falling cost of components for more than four decades.


Fausto Intilla