
Friday, May 15, 2009

The Origin of Artificial Species: Creating Artificial Personalities


(Left) Rity was developed to test the world’s first robot “chromosomes,” which allow it to have an artificial genome-based personality. (Right) A representation of Rity’s artificial genome. Darker shades represent higher gene values, and red represents negative values. Image credit: Jong-Hwan Kim, et al. ©2009 IEEE.
(PhysOrg.com) -- Does your robot seem to be acting a bit neurotic? Maybe it's just its personality. Recently, a team of researchers has designed computer-coded genomes for artificial creatures in which a specific personality is encoded. The ability to give artificial life forms their own individual personalities could not only improve natural interaction between humans and artificial creatures, but also initiate the study of “The Origin of Artificial Species,” the researchers suggest.
The first artificial creature to receive the genomic personality is Rity, a dog-like software character that lives in a virtual 3D world in a PC. Rity’s genome is composed of 14 chromosomes, which together contain a total of 1,764 genes, each with its own value. Rather than manually assign the gene values, which would be difficult and time-consuming, the researchers proposed an evolutionary process that generates a genome with a specific personality desired by a user. The process is described in a recent study by authors Jong-Hwan Kim of KAIST in Daejeon, Korea; Chi-Ho Lee of the Samsung Economic Research Institute in Seoul, Korea; and Kang-Hee Lee of Samsung Electronics Company, Ltd., in Suwon-si, Korea.
“This is the first time that an artificial creature like a robot or software agent has been given a genome with a personality,” Kim told PhysOrg.com. “I proposed a new concept of an artificial chromosome as the essence to define the personality of an artificial creature and to pass on its traits to the next generation, like a genetic inheritance. It is critical to provide the impression that the robot is a living creature. In this respect, having emotions enhances natural interaction for human-robot symbiosis in the coming years.”
As the researchers explain, an autonomous artificial creature - whether a physical robot or a software agent - can behave, interact, and react to environmental stimuli. Rity, for example, can interact with humans in the physical world using input from a mouse, a camera, or a microphone, which it maps to 47 distinct perceptions. For instance, a single click and a double click on Rity are perceived as “patted” and “hit,” respectively. Dragging Rity slowly and softly is perceived as “soothed,” and dragging it quickly and wildly as “shocked.”
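The stimulus-to-perception mapping is simple enough to sketch in code. The Python sketch below is purely illustrative: the event names, speed threshold, and `perceive` function are inventions, since the article only states which gestures map to which perceptions.

```python
# Hypothetical sketch of mapping raw mouse events to Rity-style perceptions.
# Event names and the speed threshold are assumptions for illustration.

DRAG_SPEED_THRESHOLD = 200.0  # pixels/second; assumed cutoff between gentle and wild

def perceive(event_type: str, drag_speed: float = 0.0) -> str:
    """Translate a raw mouse event into a symbolic perception."""
    if event_type == "single_click":
        return "patted"
    if event_type == "double_click":
        return "hit"
    if event_type == "drag":
        return "soothed" if drag_speed < DRAG_SPEED_THRESHOLD else "shocked"
    return "none"

print(perceive("drag", drag_speed=350.0))  # -> "shocked"
```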
To react to these stimuli in real time, Rity relies on its internal states which are composed of three units - motivation, homeostasis, and emotion - and controlled by its internal control architecture. The three units have a total of 14 states, which are the basis of the 14 chromosomes: the motivation unit includes six states (curiosity, intimacy, monotony, avoidance, greed, and the desire to control); the homeostasis unit includes three states (fatigue, hunger, and drowsiness); and the emotion unit has five states (happiness, sadness, anger, fear, and neutral).
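The 14 states group naturally into three units, which can be written down as a simple data structure. The state names below come from the article; representing each state as a single floating-point value is an assumption.

```python
# A minimal sketch of the three internal-state units and their 14 states.
# One value per state; each of the 14 states corresponds to one chromosome.

INTERNAL_STATES = {
    "motivation":  ["curiosity", "intimacy", "monotony",
                    "avoidance", "greed", "desire_to_control"],
    "homeostasis": ["fatigue", "hunger", "drowsiness"],
    "emotion":     ["happiness", "sadness", "anger", "fear", "neutral"],
}

state_values = {name: 0.0 for unit in INTERNAL_STATES.values() for name in unit}
assert len(state_values) == 14
```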
“In Rity, internal states such as motivation, homeostasis and emotion change according to the incoming perception,” Kim said. “If Rity sees its master, its emotion becomes happy and its motivation may be ‘greeting and approaching’ him or her. That is, the change of internal states, and the behavior activated accordingly, are the internal and external responses to the incoming stimulus.”
The internal control architecture processes incoming sensor information, calculates each value of internal states as its response, and sends the calculated values to the behavior selection module to generate a proper behavior. Finally, the behavior selection module probabilistically selects a behavior through a voting mechanism, where each reasonable behavior has its own voting value. Unreasonable behaviors are prevented with matrix masks, while a reflexive behavior module, which imitates an animal’s instinct, deals with urgent situations such as running into a wall and enables a more immediate response.
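A behavior-selection step of this kind can be sketched as masked, vote-proportional sampling. The voting values, mask, and behavior names below are invented for illustration; the article describes the mechanism only at this level of detail.

```python
import random

# Probabilistic behavior selection by voting: each reasonable behavior has a
# voting value, and a binary mask zeroes out unreasonable behaviors.

def select_behavior(votes: dict[str, float], mask: dict[str, int]) -> str:
    """Sample a behavior with probability proportional to its (masked) vote."""
    masked = {b: v * mask.get(b, 1) for b, v in votes.items()}
    total = sum(masked.values())
    r = random.uniform(0, total)
    cum = 0.0
    for behavior, vote in masked.items():
        cum += vote
        if r <= cum:
            return behavior
    return max(masked, key=masked.get)  # numerical fallback

votes = {"approach": 0.6, "wag_tail": 0.3, "bite": 0.1}
mask = {"bite": 0}  # "bite" deemed unreasonable in this context
print(select_behavior(votes, mask))
```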
“Rity was developed to test the world's first robotic ‘chromosomes,’ which are a set of computerized DNA (Deoxyribonucleic acid) code for creating robots that can think, feel, reason, express desire or intention, and could ultimately reproduce their kind, and evolve as a distinct species in a virtual world,” Kim said. “Rity can express its feeling through facial expression and behavior just like a living creature.”
As the researchers explain, each of the 14 chromosomes in Rity’s genome is composed of three gene vectors: the fundamental gene vector, the internal-state-related gene vector, and the behavior-related gene vector. Since each chromosome carries 2 F-genes, 47 I-genes, and 77 B-genes, Rity has 14 × (2 + 47 + 77) = 1,764 genes in total. Each gene can take a range of values represented by real numbers. While genes are inherited, mutations may also occur. The genetic coding is such that a single gene can influence multiple behaviors, and a single behavior can be influenced by multiple genes.
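That layout can be written down directly. Below is a minimal sketch; the random initialization of gene values in [-1, 1] is an assumption, as the article says only that genes take real values.

```python
import random

# Genome layout from the article: 14 chromosomes, each holding 2 F-genes,
# 47 I-genes, and 77 B-genes, for 14 * (2 + 47 + 77) = 1,764 genes.

N_CHROMOSOMES, N_F, N_I, N_B = 14, 2, 47, 77

def random_genome():
    return [{
        "F": [random.uniform(-1, 1) for _ in range(N_F)],
        "I": [random.uniform(-1, 1) for _ in range(N_I)],
        "B": [random.uniform(-1, 1) for _ in range(N_B)],
    } for _ in range(N_CHROMOSOMES)]

genome = random_genome()
total = sum(len(c["F"]) + len(c["I"]) + len(c["B"]) for c in genome)
assert total == 1764
```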
Depending on the values of the genes, the researchers specified five personality dimensions (“the Big Five”) and their opposites to classify an artificial creature’s personality traits: extroverted/introverted, agreeable/antagonistic, conscientious/negligent, open/closed, and neurotic/emotionally stable.
To demonstrate the artificial genome, the researchers used their evolutionary algorithm to generate two contrasting personalities for Rity - agreeable and antagonistic - and compared Rity’s behavior in the two cases. Running the algorithm for 3,000 generations on a 2-GHz Pentium 4 processor took about 12 hours to produce a genome encoding the desired personality. For comparison, the researchers also used manual and random processes to generate genomes with agreeable and antagonistic personalities, though neither matched the evolutionary algorithm in personality consistency or similarity to the desired personality. Finally, the researchers verified the accuracy of the evolved genome by observing how the artificial creature reacted to a series of stimuli.
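The article does not give the algorithm's operators, but a generic evolutionary loop of the kind described - mutate, select, and repeat for thousands of generations against a personality-matching fitness function - might look like the following sketch. The population size, selection scheme, mutation, and toy fitness function are all assumptions.

```python
import random

# Generic evolutionary loop: truncation selection plus mutation, repeated
# for many generations, returning the fittest genome found.

def evolve(fitness, make_genome, mutate, pop_size=50, generations=3000):
    population = [make_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # best first
        survivors = population[: pop_size // 2]     # truncation selection
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

# Toy usage: evolve a 5-gene vector toward an all-ones "target personality".
target = [1.0] * 5
fitness = lambda g: -sum((x - t) ** 2 for x, t in zip(g, target))
make = lambda: [random.uniform(-1, 1) for _ in range(5)]
mutate = lambda g: [x + random.gauss(0, 0.1) for x in g]
print(evolve(fitness, make, mutate, pop_size=20, generations=200))
```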
“The genome is an essential one encoding a mechanism for growth, reproduction and evolution, which necessarily defines ‘The Origin of Artificial Species,’” Kim said. “It means the origin stems from a computerized genetic code, which defines the mechanism for growing, multiplying and evolving along with its propensity to ‘feel’ happy, sad, angry, sleepy, hungry, afraid, etc.”
As the researchers showed, a 2D representation of the genome can enable users to view the chromosomes of the three gene types and easily insert or delete certain chromosomes or genes related to an artificial creature’s personality.
In the future, the researchers plan to combine the genome-based personality with the artificial creature’s own experiences in order to influence the creature’s behavioral responses. They also plan to classify and standardize the different behaviors in order to generalize the artificial genome structure.
More information:
Robot Intelligence Technology Lab: http://rit.kaist.ac.kr/home/ArtificialCreatures
Jong-Hwan Kim, Chi-Ho Lee, and Kang-Hee Lee. “Evolutionary Generative Process for an Artificial Creature’s Personality.” IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, Vol. 39, No. 3, May 2009.
Copyright 2009 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.

Friday, October 19, 2007

Computers With 'Common Sense'


Source:

ScienceDaily (Oct. 18, 2007) — Using a little-known Google Labs widget, computer scientists from UC San Diego and UCLA have brought common sense to an automated image labeling system. This common sense is the ability to use context to help identify objects in photographs.
For example, if a conventional automated object identifier has labeled a person, a tennis racket, a tennis court and a lemon in a photo, the new post-processing context check will re-label the lemon as a tennis ball.
“We think our paper is the first to bring external semantic context to the problem of object recognition,” said computer science professor Serge Belongie from UC San Diego.
The researchers show that the Google Labs tool called Google Sets can be used to provide external contextual information to automated object identifiers.
Google Sets generates lists of related items or objects from just a few examples. If you type in John, Paul and George, it will return the words Ringo, Beatles and John Lennon. If you type “neon” and “argon” it will give you the rest of the noble gases.
“In some ways, Google Sets is a proxy for common sense. In our paper, we showed that you can use this common sense to provide contextual information that improves the accuracy of automated image labeling systems,” said Belongie.
The image labeling system is a three-step process. First, an automated system splits the image into different regions through image segmentation. In the tennis example above, image segmentation separates the person, the court, the racket and the yellow sphere.
Next, an automated system provides a ranked list of probable labels for each of these image regions.
Finally, the system adds a dose of context by processing all the different possible combinations of labels within the image and maximizing the contextual agreement among the labeled objects within each picture.
It is during this step that Google Sets can be used as a source of context that helps the system turn a lemon into a tennis ball. In this case, these “semantic context constraints” helped the system disambiguate between visually similar objects.
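The context step can be sketched as a search over label combinations that maximizes appearance scores plus pairwise semantic agreement. In the sketch below, the candidate scores and co-occurrence values are invented for illustration; in the actual system they would come from the object identifier and from Google Sets or training-data co-occurrence.

```python
from itertools import product

# Pick one label per region so that label scores plus pairwise semantic
# agreement are maximized across the whole image.

def rerank(candidates, cooccur):
    """candidates: {region: [(label, score), ...]}; cooccur: {(a, b): float}."""
    regions = list(candidates)
    best, best_score = None, float("-inf")
    for combo in product(*(candidates[r] for r in regions)):
        labels = [lab for lab, _ in combo]
        score = sum(s for _, s in combo)  # appearance evidence
        score += sum(cooccur.get((a, b), 0) + cooccur.get((b, a), 0)
                     for i, a in enumerate(labels) for b in labels[i + 1:])
        if score > best_score:
            best, best_score = dict(zip(regions, labels)), score
    return best

candidates = {"r1": [("tennis racket", 0.9)],
              "r2": [("lemon", 0.6), ("tennis ball", 0.5)]}
cooccur = {("tennis racket", "tennis ball"): 1.0}
print(rerank(candidates, cooccur))  # context turns the lemon into a tennis ball
```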
In another example, the researchers show that an object originally labeled as a cow is (correctly) re-labeled as a boat when the other objects in the image – sky, tree, building and water – are considered during the post-processing context step. In this case, the semantic context constraints helped to correct an entirely wrong image label. The context information came from the co-occurrence of object labels in the training sets rather than from Google Sets.
The computer scientists also highlight other advances they bring to automated object identification. First, instead of doing just one image segmentation, the researchers generated a collection of image segmentations and put together a shortlist of stable image segmentations. This increases the accuracy of the segmentation process and provides an implicit shape description for each of the image regions.
Second, the researchers ran their object categorization model on each of the segmentations, rather than on individual pixels. This dramatically reduced the computational demands on the object categorization model.
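A back-of-the-envelope comparison shows why per-segment categorization is so much cheaper: the classifier runs once per region instead of once per pixel. The image size, region count, and shortlist size below are illustrative assumptions, not figures from the paper.

```python
# Rough cost comparison: classifier calls per image under each scheme.

pixels = 640 * 480     # classifier calls if run per pixel (assumed image size)
segments = 25          # regions per segmentation (assumed)
segmentations = 10     # size of the stable-segmentation shortlist (assumed)

per_pixel_calls = pixels
per_segment_calls = segments * segmentations
print(f"per-pixel: {per_pixel_calls} calls; per-segment: {per_segment_calls} calls "
      f"({per_pixel_calls / per_segment_calls:.0f}x fewer)")
```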
In the two sets of images that the researchers tested, the categorization results improved considerably with inclusion of context. For one image dataset, the average categorization accuracy increased more than 10 percent using the semantic context provided by Google Sets. In a second dataset, the average categorization accuracy improved by about 2 percent using the semantic context provided by Google Sets. The improvements were higher when the researchers gleaned context information from data on co-occurrence of object labels in the training data set for the object identifier.
Right now, the researchers are exploring ways to extend context beyond the presence of objects in the same image. For example, they want to make explicit use of absolute and relative geometric relationships between objects in an image – such as “above” or “inside” relationships. This would mean that if a person were sitting on top of an animal, the system would consider the animal to be more likely a horse than a dog.
Reference: “Objects in Context,” by Andrew Rabinovich, Carolina Galleguillos, Eric Wiewiora and Serge Belongie from the Department of Computer Science and Engineering at the UCSD Jacobs School of Engineering. Andrea Vedaldi from the Department of Computer Science, UCLA.
The paper will be presented on Thursday 18 October 2007 at ICCV 2007 – the 11th IEEE International Conference on Computer Vision in Rio de Janeiro, Brazil.
Funders: National Science Foundation, Alfred P. Sloan Research Fellowship, Air Force Office of Scientific Research, Office of Naval Research.
Adapted from materials provided by University of California - San Diego.

Fausto Intilla

Thursday, October 4, 2007

Running Shipwreck Simulations Backwards Helps Identify Dangerous Waves

Source:
Science Daily — Big waves in fierce storms have long been the focus of ship designers in simulations testing new vessels.
But a new computer program and method of analysis by University of Michigan researchers makes it easy to see that a series of smaller waves—a situation much more likely to occur—could be just as dangerous.
"Like the Edmund Fitzgerald that sank in Michigan in 1975, many of the casualties that happen occur in circumstances that aren't completely understood, and therefore they are difficult to design for," said Armin Troesch, professor of naval architecture and marine engineering. "This analysis method and program gives ship designers a clearer picture of what they're up against."
Troesch and doctoral candidate Laura Alford will present a paper on their findings Oct. 2 at the International Symposium on Practical Design of Ships and Other Floating Structures, also known as PRADS 2007.
Today's ship design computer modeling programs are a lot like real life, in that they go from cause to effect. A scientist tells the computer what type of environmental conditions to simulate, asking, in essence, "What would waves like this do to this ship?" The computer answers with how the boat is likely to perform.
Alford and Troesch's method goes backwards, from effect to cause. To use their program, a scientist enters a particular ship response, perhaps the worst case scenario. The question this time is more like, "What are the possible wave configurations that could make this ship experience the worst case scenario?" The computer answers with a list of water conditions.
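In spirit, this backwards approach is an inverse problem: fix a target response and search for wave parameters that produce it. The toy sketch below illustrates the idea with an invented response model and brute-force random sampling; the researchers' actual program is far more efficient than this illustration.

```python
import random

# Illustrative inverse search: fix a target (worst-case) response and
# collect wave configurations that reach it.

def toy_response(height, period, direction):
    # Invented stand-in for a ship-motion model: responses peak near an
    # assumed 9-second resonant wave period, so modest waves at the wrong
    # period can rival big waves at a benign one.
    resonance = 1.0 / (1.0 + (period - 9.0) ** 2)
    return height * (1.0 + 4.0 * resonance) * direction

def find_wave_configs(target, trials=100_000):
    """Return (height, period, heading) triples whose response >= target."""
    hits = []
    for _ in range(trials):
        h = random.uniform(0.5, 12.0)  # wave height, meters
        p = random.uniform(4.0, 16.0)  # wave period, seconds
        d = random.uniform(0.1, 1.0)   # heading severity factor (unitless)
        if toy_response(h, p, d) >= target:
            hits.append((round(h, 1), round(p, 1), round(d, 2)))
    return hits

configs = find_wave_configs(target=40.0)
print(f"{len(configs)} candidate wave configurations found")
```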
What struck the researchers when they performed their analysis was that quite often, the biggest ship response is not caused by the biggest waves. Wave height is only one contributing factor. Others are wave grouping, wave period (the amount of time between wave crests), and wave direction.
"In a lot of cases, you could have a rare response, but when we looked at just the wave heights that caused that response, we found they're not so rare," Alford said. "This is about operational conditions and what you can be safely sailing in. The safe wave height might be lower than we thought."
This new method is much faster than current simulations. Computational fluid dynamics modeling in use now works by subjecting the virtual ship to random waves. This method is extremely computationally intensive and a ship designer would have to go through months of data to pinpoint the worst case scenario.
Alford and Troesch's program and method of analysis takes about an hour. And it gives multiple possible wave configurations that, statistically, could have caused the end result.
There's an outcry in the shipping industry for advanced ship concepts, including designs with more than one hull, Troesch said. But because ships are so large and expensive to build, prototypes are uncommon. This new method is meant to be used in the early stages of design to rule out problematic architectures. And it is expected to help spur innovation.
A majority of international goods are still transported by ship, Troesch said.
The paper is called "A Methodology for Creating Design Ship Responses."
Note: This story has been adapted from material provided by University of Michigan.

Fausto Intilla
www.oloscience.com