venerdì 19 ottobre 2007

Computers With 'Common Sense'


Source:

ScienceDaily (Oct. 18, 2007) — Using a little-known Google Labs widget, computer scientists from UC San Diego and UCLA have brought common sense to an automated image labeling system. This common sense is the ability to use context to help identify objects in photographs.
For example, if a conventional automated object identifier has labeled a person, a tennis racket, a tennis court and a lemon in a photo, the new post-processing context check will re-label the lemon as a tennis ball.
“We think our paper is the first to bring external semantic context to the problem of object recognition,” said computer science professor Serge Belongie from UC San Diego.
The researchers show that the Google Labs tool called Google Sets can be used to provide external contextual information to automated object identifiers.
Google Sets generates lists of related items or objects from just a few examples. If you type in John, Paul and George, it will return the words Ringo, Beatles and John Lennon. If you type “neon” and “argon” it will give you the rest of the noble gases.
“In some ways, Google Sets is a proxy for common sense. In our paper, we showed that you can use this common sense to provide contextual information that improves the accuracy of automated image labeling systems,” said Belongie.
The image labeling system is a three step process. First, an automated system splits the image up into different regions through the process of image segmentation. In the photo above, image segmentation separates the person, the court, the racket and the yellow sphere.
Next, an automated system provides a ranked list of probable labels for each of these image regions.
Finally, the system adds a dose of context by processing all the different possible combinations of labels within the image and maximizing the contextual agreement among the labeled objects within each picture.
It is during this step that Google Sets can be used as a source of context that helps the system turn a lemon into a tennis ball. In this case, these “semantic context constraints” helped the system disambiguate between visually similar objects.
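As a rough illustration of this third step, the context check can be cast as picking the combination of labels that maximizes recognizer confidence plus pairwise contextual agreement. The labels, confidence scores and co-occurrence values below are invented for illustration; this is a sketch of the idea, not the authors' actual model:

```python
from itertools import product

# Candidate labels per segmented region, with recognizer confidences (hypothetical).
candidates = {
    "region1": [("person", 0.9)],
    "region2": [("tennis racket", 0.8)],
    "region3": [("tennis court", 0.85)],
    "region4": [("lemon", 0.6), ("tennis ball", 0.5)],
}

# Pairwise semantic-context scores, e.g. derived from Google Sets or from
# label co-occurrence in training data (values made up for this example).
cooccur = {
    frozenset(["tennis racket", "tennis ball"]): 1.0,
    frozenset(["tennis court", "tennis ball"]): 1.0,
    frozenset(["person", "tennis ball"]): 0.5,
    frozenset(["person", "lemon"]): 0.1,
}

def context_score(labels):
    """Sum pairwise agreement over all label pairs in the image."""
    score = 0.0
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            score += cooccur.get(frozenset([a, b]), 0.0)
    return score

def best_labeling(candidates, alpha=1.0):
    """Pick the label combination maximizing confidence plus context."""
    regions = list(candidates)
    best, best_score = None, float("-inf")
    for combo in product(*(candidates[r] for r in regions)):
        labels = [lab for lab, _ in combo]
        score = sum(conf for _, conf in combo) + alpha * context_score(labels)
        if score > best_score:
            best, best_score = dict(zip(regions, labels)), score
    return best

print(best_labeling(candidates))  # region4 comes out as "tennis ball", not "lemon"
```

Even though "lemon" has the higher standalone confidence, the contextual agreement with the racket and court tips the combined score toward "tennis ball."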
In another example, the researchers show that an object originally labeled as a cow is (correctly) re-labeled as a boat when the other objects in the image – sky, tree, building and water – are considered during the post-processing context step. In this case, the semantic context constraints helped to correct an entirely wrong image label. The context information came from the co-occurrence of object labels in the training sets rather than from Google Sets.
The computer scientists also highlight other advances they bring to automated object identification. First, instead of doing just one image segmentation, the researchers generated a collection of image segmentations and put together a shortlist of stable image segmentations. This increases the accuracy of the segmentation process and provides an implicit shape description for each of the image regions.
Second, the researchers ran their object categorization model on each of the segmentations, rather than on individual pixels. This dramatically reduced the computational demands on the object categorization model.
In the two sets of images that the researchers tested, the categorization results improved considerably with inclusion of context. For one image dataset, the average categorization accuracy increased more than 10 percent using the semantic context provided by Google Sets. In a second dataset, the average categorization accuracy improved by about 2 percent using the semantic context provided by Google Sets. The improvements were higher when the researchers gleaned context information from data on co-occurrence of object labels in the training data set for the object identifier.
Right now, the researchers are exploring ways to extend context beyond the presence of objects in the same image. For example, they want to make explicit use of absolute and relative geometric relationships between objects in an image – such as “above” or “inside” relationships. This would mean that if a person were sitting on top of an animal, the system would consider the animal to be more likely a horse than a dog.
Reference: “Objects in Context,” by Andrew Rabinovich, Carolina Galleguillos, Eric Wiewiora and Serge Belongie from the Department of Computer Science and Engineering at the UCSD Jacobs School of Engineering. Andrea Vedaldi from the Department of Computer Science, UCLA.
The paper will be presented on Thursday 18 October 2007 at ICCV 2007 – the 11th IEEE International Conference on Computer Vision in Rio de Janeiro, Brazil.
Funders: National Science Foundation, Alfred P. Sloan Research Fellowship, Air Force Office of Scientific Research, Office of Naval Research.
Adapted from materials provided by University of California - San Diego.

Fausto Intilla

martedì 16 ottobre 2007

Thwarting The Growth Of Internet Black Markets

Source:
Science Daily — Carnegie Mellon University's Adrian Perrig and Jason Franklin, working in conjunction with Vern Paxson of the International Computer Science Institute and Stefan Savage of the University of California, San Diego, have designed new computer tools to better understand and potentially thwart the growth of Internet black markets, where attackers use well-developed business practices to hawk viruses, stolen data and attack services.
"These troublesome entrepreneurs even offer tech support and free updates for their malicious creations, which run the gamut from denial-of-service attacks designed to overwhelm Web sites and servers to data-stealing Trojan viruses," said Perrig, an associate professor of electrical and computer engineering and of engineering and public policy.
In order to understand the millions of lines of data derived from monitoring the underground markets for more than seven months, Carnegie Mellon researchers developed automated techniques to measure and catalogue the activities of the shadowy online crooks who profit from spewed spam, virus-laden PCs and identity theft. The researchers estimate that the total value of the illegal materials available for sale in the seven-month period could total more than $37 million.
"Our research monitoring found that more than 80,000 potential credit card numbers were available through these illicit underground web economies," said Franklin, a Ph.D. student in computer science. However, the researchers warned that because checking the validity of the card numbers was not possible without credit card company assistance, the cards seen may not have been valid when they were observed.
Whatever the purchases, a buyer will typically contact the black market vendor privately using email, or in some cases, a private instant message. Money generally changes hands through non-bank payment services such as e-gold, making the criminals difficult to track.
To stem the flow of stolen credit cards and identity data, the Carnegie Mellon researchers proposed two technical approaches to reduce the number of successful market transactions, a slander attack and a second technique, both aimed at undercutting the cyber-crooks' verification, or reputation, systems.
"Just like you need to verify that individuals are honest on eBay, online criminals need to verify that they are dealing with 'honest' criminals," Franklin said.
In a slander attack, an attacker eliminates the verified status of a buyer or seller through false defamation. "By eliminating the verified status of the honest individuals, an attacker establishes a lemon market where buyers are unable to distinguish the quality of the goods or services," Franklin said.
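A toy simulation makes the mechanism concrete: a seller's verified status depends on an average rating, and enough false negative reviews push it below the verification threshold. The threshold and rating scheme here are illustrative assumptions, not details from the researchers' work:

```python
# Toy model of a reputation ("verified status") system and a slander attack.
# The 0/1 review scale and 0.8 threshold are invented for illustration.

class Market:
    def __init__(self, verify_threshold=0.8):
        self.ratings = {}            # seller -> list of 1 (good) / 0 (bad) reviews
        self.verify_threshold = verify_threshold

    def review(self, seller, positive):
        self.ratings.setdefault(seller, []).append(1 if positive else 0)

    def is_verified(self, seller):
        r = self.ratings.get(seller, [])
        return bool(r) and sum(r) / len(r) >= self.verify_threshold

m = Market()
for _ in range(10):                  # an "honest" criminal builds up reputation
    m.review("honest_seller", positive=True)
assert m.is_verified("honest_seller")

# Slander attack: flood the seller with false negative reviews until the
# average rating falls below the verification threshold.
for _ in range(5):
    m.review("honest_seller", positive=False)
print(m.is_verified("honest_seller"))  # False: verified status destroyed
```

With verified status gone, buyers can no longer tell good sellers from bad ones, which is exactly the "lemon market" condition Franklin describes.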
The researchers also propose to undercut the burgeoning black market activity by creating a deceptive sales environment.
Perrig's team developed a technique to establish fake verified-status identities that are difficult to distinguish from genuine verified-status sellers, making it hard for buyers to tell honest verified-status sellers from dishonest ones.
"So, when the unwary buyer tries to collect the goods and services promised, the seller fails to provide the goods and services. Such behavior is known as 'ripping.' And it is the goal of every black market site's verification system to minimize such behavior," said Franklin.
There have been successful takedowns against known black market sites, such as the U.S. Secret Service-run Operation Firewall three years ago. That operation against the notorious Shadowcrew resulted in 28 arrests around the globe, Carnegie Mellon researchers reported.
"The scary thing about all this is that you do not have to be in the know to find black markets, they are easy to find, easy to join and just a mouse click away," Franklin said.
"We believe these black markets are growing, so we will have even more incidents to monitor and study in the future," Perrig said.
That growth is also reflected in the latest Computer Security Institute (CSI) Computer Crime and Security Survey that shows average cyber-losses more than doubled after a five-year decline. The 2007 CSI survey reported that U.S. companies on average lost more than $300,000 to cyber crooks compared to $168,000 last year.
Note: This story has been adapted from material provided by Carnegie Mellon University.

Fausto Intilla
www.oloscience.com

Computer Security Can Double As Help For The Blind


Source:

Science Daily — Before you can post a comment to most blogs, you have to type in a series of distorted letters and numbers (a CAPTCHA) to prove that you are a person and not a computer attempting to add comment spam to the blog.
What if -- instead of wasting your time and energy typing something meaningless like SGO9DXG -- you could label an image or perform some other quick task that will help someone who is visually impaired do their grocery shopping?
In a position paper presented at Interactive Computer Vision (ICV) 2007 on October 15 in Rio de Janeiro, computer scientists from UC San Diego led by professor Serge Belongie outline a grid system that would allow CAPTCHAs to be used for this purpose -- and an endless number of other good causes.
"One of the application areas for my research is assistive technology for the blind. For example, there is an enormous amount of data that needs to be labeled for our grocery shopping aid to work. We are developing a wearable computer with a camera that can lead a visually impaired user to a desired product in a grocery store by analyzing the video stream. Our paper describes a way that people who are looking to prove that they are humans and not computers can help label still shots from video streams in real time," said Belongie.
The researchers call their system a "Soylent grid" which is a reference to the 1973 film Soylent Green (see more on this reference at the end of the article).
"The degree to which human beings could participate in the system (as remote sighted guides) ranges from none at all to virtually unlimited. If no human user is involved in the loop, only computer vision algorithms solve the identification problem. But in principle, if there were an unlimited number of humans in the loop, all the video frames could be submitted to a SOYLENT GRID, be solved immediately and sent back to the device to guide the user," the authors write in their paper.
From the front end, users who want to post a comment on a blog would be asked to perform a variety of tasks, instead of typing in a string of misshapen letters and numbers.
"You might be asked to click on the peanut butter jar or click the Cheetos bag in an image," said Belongie. "This would be one of the so-called 'Where's Waldo' object detection tasks."
The task list also includes "Name that Thing" (object recognition), "Trace This" (image segmentation) and "Hot or Not" (choosing visually pleasing images).
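One plausible way such a grid could validate answers is to pair an image whose label is already known with one that still needs labeling, in the spirit of von Ahn's reCAPTCHA. The pairing scheme below is an assumption for illustration, not a detail from the Soylent Grid paper:

```python
import random

# Each challenge pairs a "control" image with a known label against an
# "unknown" image that needs labeling (a reCAPTCHA-style trick; this is an
# illustrative assumption, not a mechanism described in the paper).
known = {"img_001.jpg": "peanut butter jar"}
unknown_votes = {"img_042.jpg": []}

def serve_challenge():
    control = random.choice(list(known))
    target = random.choice(list(unknown_votes))
    return control, target

def submit(control, target, control_answer, target_answer):
    """Accept the human iff the control image is labeled correctly; the
    answer for the unknown image then counts as a labeling vote."""
    if control_answer != known[control]:
        return False                     # failed the CAPTCHA
    unknown_votes[target].append(target_answer)
    return True

control, target = serve_challenge()
ok = submit(control, target, "peanut butter jar", "Cheetos bag")
print(ok, unknown_votes[target])
```

Once enough independent votes agree on the unknown image, its label could be promoted into the known set, so the grid keeps generating fresh control images.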
"Our research on the personal shopper for the visually impaired -- called Grozi -- is a big motivation for this project. When we started the Grozi project, one of the students, Michele Merler -- who is now working on a Ph.D. at Columbia University -- captured 45 minutes of video footage from the campus grocery store and then endured weeks of manually intensive labor, drawing bounding boxes and identifying the 120 products we focused on. This is work the soylent grid could do," said Belongie.
From the back end, researchers and others who need images labeled would interact with clients (like a blog hosting company) that need to take advantage of the CAPTCHA and spam filtering capabilities of the grid.
"Getting this done is going to take an innovative collaboration between academia and industry. Calit2 could be uniquely instrumental in this project," said Belongie. "Right now we are working on a proposal that will outline exactly what we need -- access to X number of CAPTCHA requests in one week, for example. With this, we'll do a case study and demonstrate just how much data can be labeled with 99 percent reliability through the soylent grid. I'm hoping for people to say, 'Wow, I didn't know that kind of computation was available.'"
This work incorporates recent work from a variety of researchers, including computer scientist Luis von Ahn from Carnegie Mellon University. His reCAPTCHA project uses CAPTCHAs to digitize books.
Soylent Grid?
The researchers call their system a "Soylent grid" and titled their paper "Soylent Grid: it's Made of People!" Both the grid name and the paper name are references to the 1973 cult classic film Soylent Green, a dystopian science fiction film set in an overpopulated world in which the masses are reduced to eating different varieties of "soylent," a synthetic food whose name suggests both soybeans and lentils. The line from the movie that inspired the title is delivered when a character discovers that soylent green is actually made of cadavers from a government-sponsored euthanasia program, prompting the cry "Soylent green is made of people!" The computer scientists are playing off this famous phrase: people all over the world need to jump through anti-spam hoops such as CAPTCHAs, and the power of those people can be harnessed through a grid structure to do some good in the world.
Article: "Soylent Grid: it's Made of People!" by Stephan Steinbach, Vincent Rabaud and Serge Belongie
Note: This story has been adapted from material provided by University of California - San Diego.

Fausto Intilla

martedì 9 ottobre 2007

Quantum Computing Possibilities Enhanced With New Material


Source:

Science Daily — Scientists at Florida State University's National High Magnetic Field Laboratory and the university's Department of Chemistry and Biochemistry have introduced a new material that could be to computers of the future what silicon is to the computers of today.
The material -- a compound made from the elements potassium, niobium and oxygen, along with chromium ions -- could provide a technological breakthrough that leads to the development of new quantum computing technologies. Quantum computers would harness the power of atoms and molecules to perform memory and processing tasks on a scale far beyond those of current computers.
"The field of quantum information technology is in its infancy, and our work is another step forward in this fascinating field," said Saritha Nellutla, a postdoctoral associate at the magnet lab and lead author of the paper published in Physical Review Letters.
Semiconductor technology is close to reaching its performance limit. Over the years, processors have shrunk to their current size, with the components of a computer chip more than 1,000 times smaller than the thickness of a human hair. At those very small scales, quantum effects -- behaviors in matter that occur at the atomic and subatomic levels -- can start playing a role. By exploiting those behaviors, scientists hope to take computing to the next level.
In current computers, the basic unit of information is the "bit," which can have a value of 0 or 1. In so-called quantum computers, which currently exist only in theory, the basic unit is the "qubit" (short for quantum bit). A qubit can have not only a value of 0 or 1, but also all kinds of combinations of 0 and 1 -- including 0 and 1 at the same time -- meaning quantum computers could perform certain kinds of calculations much more effectively than current ones.
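A qubit's superposition can be made concrete with a little arithmetic: the state is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. A minimal sketch:

```python
import math

# A qubit state is a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measuring yields 0 with probability |a|^2 and 1 with probability |b|^2.
def probabilities(a, b):
    return abs(a) ** 2, abs(b) ** 2

# Classical-like states:
print(probabilities(1, 0))            # definite 0
print(probabilities(0, 1))            # definite 1

# An equal superposition -- "0 and 1 at the same time":
s = 1 / math.sqrt(2)
p0, p1 = probabilities(s, s)
print(round(p0, 3), round(p1, 3))     # 0.5 0.5
```

A register of n qubits holds 2^n such amplitudes at once, which is where the potential speedup over classical bits comes from.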
How scientists realize the promise of the theoretical qubit is not clear. Various designs and paths have been proposed, and one very promising idea is to use tiny magnetic fields, called "spins." Spins are associated with electrons and various atomic nuclei.
Magnet lab scientists used high magnetic fields and microwave radiation to "operate" on the spins in the new material they developed to get an indication of how long the spin could be controlled. Based on their experiments, the material could enable 500 operations in 10 microseconds before losing its ability to retain information, making it a good candidate for a qubit.
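The figure of 500 operations in 10 microseconds implies roughly 20 nanoseconds per spin operation; the back-of-envelope arithmetic:

```python
coherence_time_us = 10      # window before the spin loses its information
operations = 500            # operations achievable within that window

time_per_op_ns = coherence_time_us * 1000 / operations
print(time_per_op_ns)       # 20.0 ns per operation
```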
Putting this spin to work would usher in a technological revolution, because the spin state of an electron, in addition to its charge, could be used to carry, manipulate and store information.
"This material is very promising," said Naresh Dalal, a professor of chemistry and biochemistry at FSU and one of the paper's authors. "But additional synthetic and magnetic characterization work is needed before it could be made suitable for use in a device."
Dalal also serves as an adviser to FSU chemistry graduate student Mekhala Pati, who created the material.
Note: This story has been adapted from material provided by Florida State University.

Fausto Intilla

giovedì 4 ottobre 2007

Running Shipwreck Simulations Backwards Helps Identify Dangerous Waves

Source:
Science Daily — Big waves in fierce storms have long been the focus of ship designers in simulations testing new vessels.
But a new computer program and method of analysis by University of Michigan researchers makes it easy to see that a series of smaller waves—a situation much more likely to occur—could be just as dangerous.
"Like the Edmund Fitzgerald, which sank in Lake Superior in 1975, many of the casualties that happen occur in circumstances that aren't completely understood, and therefore they are difficult to design for," said Armin Troesch, professor of naval architecture and marine engineering. "This analysis method and program gives ship designers a clearer picture of what they're up against."
Troesch and doctoral candidate Laura Alford will present a paper on their findings Oct. 2 at the International Symposium on Practical Design of Ships and Other Floating Structures, also known as PRADS 2007.
Today's ship design computer modeling programs are a lot like real life, in that they go from cause to effect. A scientist tells the computer what type of environmental conditions to simulate, asking, in essence, "What would waves like this do to this ship?" The computer answers with how the boat is likely to perform.
Alford and Troesch's method goes backwards, from effect to cause. To use their program, a scientist enters a particular ship response, perhaps the worst case scenario. The question this time is more like, "What are the possible wave configurations that could make this ship experience the worst case scenario?" The computer answers with a list of water conditions.
What struck the researchers when they performed their analysis was that quite often, the biggest ship response is not caused by the biggest waves. Wave height is only one contributing factor. Others are wave grouping, wave period (the amount of time between wave crests), and wave direction.
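The "effect to cause" idea can be sketched as a search over wave parameters against a response model. The response function below is entirely invented; the real method inverts a proper ship-response model, so this only illustrates the shape of the computation:

```python
from itertools import product

# Toy inverse search: enumerate wave configurations and keep those whose
# (hypothetical) response exceeds the worst-case threshold.
heights = [2.0, 4.0, 6.0, 8.0]       # wave height, m
periods = [6.0, 8.0, 10.0, 12.0]     # wave period, s
groupings = [1, 2, 3]                # consecutive waves in a group

def roll_response(h, t, n):
    """Invented response model: grouping near an assumed 8 s resonant
    period matters more than raw wave height."""
    resonance = 1.0 / (1.0 + (t - 8.0) ** 2)
    return h * resonance * n

def dangerous_configs(threshold):
    return [(h, t, n)
            for h, t, n in product(heights, periods, groupings)
            if roll_response(h, t, n) >= threshold]

for cfg in dangerous_configs(threshold=10.0):
    print(cfg)   # includes groups of modest waves, not just the biggest ones
```

Notice that a single 8-meter wave never makes the list, while a group of three 4-meter waves at the resonant period does, which is the counterintuitive result the researchers describe.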
"In a lot of cases, you could have a rare response, but when we looked at just the wave heights that caused that response, we found they're not so rare," Alford said. "This is about operational conditions and what you can be safely sailing in. The safe wave height might be lower than we thought."
This new method is much faster than current simulations. Computational fluid dynamics modeling in use now works by subjecting the virtual ship to random waves. This method is extremely computationally intensive and a ship designer would have to go through months of data to pinpoint the worst case scenario.
Alford and Troesch's program and method of analysis takes about an hour. And it gives multiple possible wave configurations that could have statistically caused the end result.
There's an outcry in the shipping industry for advanced ship concepts, including designs with more than one hull, Troesch said. But because ships are so large and expensive to build, prototypes are uncommon. This new method is meant to be used in the early stages of design to rule out problematic architectures. And it is expected to help spur innovation.
A majority of international goods are still transported by ship, Troesch said.
The paper is called "A Methodology for Creating Design Ship Responses."
Note: This story has been adapted from material provided by University of Michigan.

Fausto Intilla
www.oloscience.com

Software 'Chipper' Speeds Debugging

Source:

Science Daily — Computer scientists at UC Davis have developed a technique to speed up program debugging by automatically "chipping" the software into smaller pieces so that bugs can be isolated more easily.
Computer programs consist of thousands, tens of thousands, or even hundreds of thousands of lines of code. To isolate a bug in the code, programmers often break it into smaller pieces until they can pin down the error in a smaller stretch that is easier to manage. UC Davis graduate student Chad Sterling and Ron Olsson, professor of computer science, set out to automate that process.
"It's really tedious to go through thousands of lines of code," Olsson said.
The "Chipper" tools developed by Sterling and Olsson chip off pieces of software while preserving the program structure.
"The pieces have to work after they are cut down," Olsson said. "You can't just cut in mid-sentence."
In a recent paper in the journal "Software -- Practice and Experience," Olsson and Sterling describe ChipperJ, a version developed for the Java programming language. ChipperJ was able to reduce large programs to 20 to 35 percent of their former size in under an hour.
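Chipping in this structure-preserving sense resembles hierarchical delta debugging: try removing whole subtrees, and keep a cut only if the bug still reproduces. ChipperJ works on Java parse trees; the toy nested-list "AST" and bug predicate below are stand-ins to show the idea:

```python
# Structure-aware "chipping" sketch: repeatedly try removing whole subtrees
# (never partial ones), keeping a removal only if the bug still reproduces.

def chip(tree, still_buggy):
    """Greedily drop child subtrees while still_buggy(tree) holds."""
    if not isinstance(tree, list):
        return tree
    i = 0
    while i < len(tree):
        trial = tree[:i] + tree[i + 1:]        # remove one whole child
        if still_buggy(trial):
            tree = trial                       # bug persists: keep the cut
        else:
            tree[i] = chip(tree[i], lambda sub: still_buggy(
                tree[:i] + [sub] + tree[i + 1:]))
            i += 1
    return tree

# Hypothetical bug: the program fails whenever the token "div_by_zero"
# appears anywhere in the tree.
def still_buggy(tree):
    return "div_by_zero" in repr(tree)

program = ["init", ["loop", "div_by_zero", "print"], ["cleanup", "exit"]]
print(chip(program, still_buggy))   # only the structure around the bug survives
```

Because only whole subtrees are removed, every intermediate result is still well-formed, which is the "you can't just cut in mid-sentence" constraint Olsson describes.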
More information about automated program chipping is available on Olsson's Web site at http://www.cs.ucdavis.edu/~olsson/
Note: This story has been adapted from material provided by University of California, Davis.

Fausto Intilla
www.oloscience.com

mercoledì 3 ottobre 2007

'Dead Time' Limits Quantum Cryptography Speeds

Source:
Science Daily — Quantum cryptography is potentially the most secure method of sending encrypted information, but does it have a speed limit? According to a new paper* by researchers at the National Institute of Standards and Technology (NIST) and the Joint Quantum Institute** (JQI), technological and security issues will stall maximum transmission rates at levels comparable to that of a single broadband connection, such as a cable modem, unless researchers reduce "dead times" in the detectors that receive quantum-encrypted messages.
In quantum cryptography, a sender, usually designated Alice, transmits single photons, or particles of light, encoding 0s and 1s to a recipient, "Bob." The photons Bob receives and correctly measures make up the secret "key" that is used to decode a subsequent message. Because of the quantum rules, an eavesdropper, "Eve," cannot listen in on the key transmission without being detected, but she could monitor a more traditional communication (such as a phone call) that must take place between Alice and Bob to complete their communication.
Modern telecommunications hardware easily allows Alice to transmit photons at rates much faster than any Internet connection. But at least 90 percent (and more commonly 99.9 percent) of the photons do not make it to Bob's detectors, so that he receives only a small fraction of the photons sent by Alice. Alice can send more photons to Bob by cranking up the speed of her transmitter, but then they'll run into problems with the detector's "dead time," the period during which the detector needs to recover after it detects a photon. Commercially available single-photon detectors need about 50-100 nanoseconds to recover before they can detect another photon, much slower than the 1 nanosecond between photons in a 1-GHz transmission.
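The arithmetic behind the bottleneck can be captured with a simple saturation model: the click rate is capped both by the photons that survive channel loss and by the detector's recovery rate. The numbers come from the article; the model itself is a deliberate simplification:

```python
def detected_rate(pulse_rate_hz, transmittance, dead_time_s):
    """Detected clicks/s: capped by surviving arrivals and by detector recovery."""
    arrival = pulse_rate_hz * transmittance
    return min(arrival, 1.0 / dead_time_s)

# Alice's 1 GHz transmitter with a 50 ns detector dead time:
print(detected_rate(1e9, 0.1, 50e-9))    # 90% loss: dead time caps clicks at 2e7/s
print(detected_rate(1e9, 0.001, 50e-9))  # 99.9% loss: channel loss dominates, ~1e6/s
```

At 90 percent loss the detector, not the channel, is the bottleneck, which is why shrinking the dead time raises the speed limit.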
Not only does dead time limit the transmission rate of a message, but it also raises security issues for systems that use different detectors for 0s and 1s. In that important "phone call," Bob must report the time of each detection event. If he reports two detections occurring within the dead time of his detectors, then Eve can deduce that they could not have come from the same detector and correspond to opposite bit values.
Sure, Bob can choose not to report the second, closely spaced photon, but this further decreases the key production rate. And for the most secure type of encryption, known as a one-time pad, the key has to have as many bits of information as the message itself.
The speed limit would go up, says NIST physicist Joshua Bienfang, if researchers reduce the dead time in single-photon detectors, something that several groups are trying to do. According to Bienfang, higher speeds also would be useful for wireless cryptography between a ground station and a satellite in low-Earth orbit. Since the two only would be close enough to communicate for a small part of the day, it would be beneficial to send as much information as possible during a short time window.
* D.J. Rogers, J.C. Bienfang, A. Nakassis, H. Xu and C.W. Clark, "Detector dead-time effects and paralyzability in high-speed quantum key distribution," New Journal of Physics 9, 319 (September 2007).
**The JQI is a research partnership that includes NIST and the University of Maryland.
Note: This story has been adapted from material provided by National Institute of Standards and Technology.

Fausto Intilla
www.oloscience.com

Technology Could Enable Computers To 'Read The Minds' Of Users

Source:
Science Daily — Tufts University researchers are developing techniques that could allow computers to respond to users' thoughts of frustration -- too much work -- or boredom -- too little work. Applying non-invasive and easily portable imaging technology in new ways, they hope to gain real-time insight into the brain's more subtle emotional cues and help provide a more efficient way to get work done.
"New evaluation techniques that monitor user experiences while working with computers are increasingly necessary," said Robert Jacob, computer science professor and researcher. "One moment a user may be bored, and the next moment, the same user may be overwhelmed. Measuring mental workload, frustration and distraction is typically limited to qualitatively observing computer users or to administering surveys after completion of a task, potentially missing valuable insight into the users' changing experiences."
Sergio Fantini, biomedical engineering professor, in conjunction with Jacob's human-computer interaction (HCI) group, is studying functional near-infrared spectroscopy (fNIRS) technology that uses light to monitor brain blood flow as a proxy for workload stress a user may experience when performing an increasingly difficult task. A $445,000 grant from the National Science Foundation will allow the interdisciplinary team to incorporate real-time biomedical data with machine learning to produce a more in-tune computer user experience.
Lighting up the brain
"fNIRS is an emerging non-invasive, lightweight imaging tool which can measure blood oxygenation levels in the brain," said Fantini, also an associate dean for graduate education at Tufts' School of Engineering.
The fNIRS device, which looks like a futuristic headband, uses laser diodes to send near-infrared light through the forehead at a relatively shallow depth--only two to three centimeters--to interact with the brain's frontal lobe. Light usually passes through the body's tissues, except when it encounters oxygenated or deoxygenated hemoglobin in the blood. Light waves are absorbed by the active, blood-filled areas of the brain and any remaining light is diffusely reflected to the fNIRS detectors.
"fNIRS, like MRI, uses the idea that blood flow changes to compensate for the increased metabolic demands of the area of the brain that's being used," said Erin Solovey, a graduate researcher at the School of Engineering.
"We don't know how specific we can be about identifying users' different emotional states," said Fantini. "However, the particular area of the brain where the blood flow change occurs should provide indications of the brain metabolic changes and by extension workload, which could be a proxy for emotions like frustration."
In the initial experiments, Jacob and Fantini's groups determined how accurately fNIRS could register users' workload. While wearing the fNIRS device, test subjects viewed a multicolored cube consisting of eight smaller cubes with two, three or four different colors. As the cube rotated onscreen, subjects counted the number of colored squares in a series of 30 tasks. The fNIRS device and subsequent user surveys reflected greater difficulty as users kept track of increasing numbers of colors. The fNIRS data agreed with user surveys up to 83 percent of the time.
The Tufts group will present its initial results on using fNIRS to detect the user workload experience at the Association for Computing Machinery (ACM) symposium on user interface software and technology, to be held Oct. 7 through 10 in Newport, R.I.
"It seems that we can predict, with relatively high confidence, whether the subject was experiencing no workload, low workload, or high workload," said Leanne Hirshfield, a graduate researcher and lead author on the poster paper to be presented at the ACM symposium.
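The paper does not detail the classifier, but the no/low/high prediction step can be sketched as something as simple as a nearest-centroid rule over a window of readings. The feature values and centroids below are invented for illustration:

```python
# Toy workload classifier in the spirit of the study's machine-learning step:
# assign a window of (hypothetical) oxygenation readings to the nearest of
# three class centroids. All numbers are illustrative assumptions.
centroids = {
    "no workload":   0.0,
    "low workload":  0.5,
    "high workload": 1.0,   # mean oxygenation change, arbitrary units
}

def classify(readings):
    mean = sum(readings) / len(readings)
    return min(centroids, key=lambda label: abs(centroids[label] - mean))

print(classify([0.9, 1.1, 1.0]))    # high workload
print(classify([0.1, -0.1, 0.0]))   # no workload
```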
Note: This story has been adapted from material provided by Tufts University.

Fausto Intilla
www.oloscience.com