Friday, June 19, 2009

Sunspots Revealed In Striking Detail By Supercomputers


ScienceDaily (June 18, 2009) — In a breakthrough that will help scientists unlock mysteries of the Sun and its impacts on Earth, an international team of scientists led by the National Center for Atmospheric Research (NCAR) has created the first-ever comprehensive computer model of sunspots. The resulting visuals capture both scientific detail and remarkable beauty.
The high-resolution simulations of sunspot pairs open the way for researchers to learn more about the vast mysterious dark patches on the Sun's surface. Sunspots are associated with massive ejections of charged plasma that can cause geomagnetic storms and disrupt communications and navigational systems. They also contribute to variations in overall solar output, which can affect weather on Earth and exert a subtle influence on climate patterns.
The research, by scientists at NCAR and the Max Planck Institute for Solar System Research (MPS) in Germany, is being published June 18 in Science Express.
"This is the first time we have a model of an entire sunspot," says lead author Matthias Rempel, a scientist at NCAR's High Altitude Observatory. "If you want to understand all the drivers of Earth's atmospheric system, you have to understand how sunspots emerge and evolve. Our simulations will advance research into the inner workings of the Sun as well as connections between solar output and Earth's atmosphere."
Ever since outward flows from the center of sunspots were discovered 100 years ago, scientists have worked toward explaining the complex structure of sunspots, whose number peaks and wanes during the 11-year solar cycle. Sunspots encompass intense magnetic activity that is associated with solar flares and massive ejections of plasma that can buffet Earth's atmosphere. The resulting damage to power grids, satellites, and other sensitive technological systems takes an economic toll on a rising number of industries.
Creating such detailed simulations would not have been possible even as recently as a few years ago, before the latest generation of supercomputers and a growing array of instruments to observe the Sun. The model enables scientists to capture the convective flows and movement of energy that underlie sunspots, which are not directly detectable by instruments.
The work was supported by the National Science Foundation, NCAR's sponsor. The research team improved a computer model, developed at MPS, that built upon numerical codes for magnetized fluids that had been created at the University of Chicago.
Computer model provides a unified physical explanation
The new simulations capture pairs of sunspots with opposite polarity. In striking detail, they reveal the dark central region, or umbra, with brighter umbral dots, as well as webs of elongated narrow filaments with flows of mass streaming away from the spots in the outer penumbral regions.
The model suggests that the magnetic fields within sunspots need to be inclined in certain directions in order to create such complex structures. The authors conclude that there is a unified physical explanation for the structure of sunspots in umbra and penumbra that is the consequence of convection in a magnetic field with varying properties.
The simulations can help scientists decipher the mysterious, subsurface forces in the Sun that cause sunspots. Such work may lead to an improved understanding of variations in solar output and their impacts on Earth.
Supercomputing at 76 trillion calculations per second
To create the model, the research team designed a virtual, three-dimensional domain that simulates an area on the Sun measuring about 31,000 miles by 62,000 miles and about 3,700 miles in depth - an expanse as long as eight times Earth's diameter and as deep as Earth's radius. The scientists then used a series of equations involving fundamental physical laws of energy transfer, fluid dynamics, magnetic induction and feedback, and other phenomena to simulate sunspot dynamics at 1.8 billion points within the virtual expanse, each spaced about 10 to 20 miles apart. For weeks, they solved the equations on NCAR's new bluefire supercomputer, an IBM machine that can perform 76 trillion calculations per second.
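As a rough, back-of-envelope illustration (not the researchers' actual code), the short Python sketch below checks the numbers quoted above. The grid dimensions of 1536 × 3072 × 384 are an assumption, chosen only because they multiply to about 1.8 billion points; they imply point spacings of roughly 10 to 20 miles, consistent with the figures in the release.

```python
# Back-of-envelope check of the simulation grid described above.
# The grid dimensions below are an assumption (they simply multiply
# to roughly 1.8 billion points); the researchers' actual setup may differ.

domain_miles = (31_000, 62_000, 3_700)   # width, length, depth of the modeled region
grid_points  = (1_536, 3_072, 384)       # assumed number of points along each dimension

total_points = 1
for n in grid_points:
    total_points *= n
print(f"total grid points: {total_points:,}")        # about 1.8 billion

for extent, n in zip(domain_miles, grid_points):
    print(f"spacing: {extent / n:.1f} miles")         # roughly 10 to 20 miles apart
```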
The work drew on increasingly detailed observations from a network of ground- and space-based instruments to verify that the model captured sunspots realistically.
The new model is far more detailed and realistic than previous simulations that failed to capture the complexities of the outer penumbral region. The researchers noted, however, that even their new model does not accurately capture the lengths of the filaments in parts of the penumbra. They can refine the model by placing the grid points even closer together, but that would require more computing power than is currently available.
"Advances in supercomputing power are enabling us to close in on some of the most fundamental processes of the Sun," says Michael Knoelker, director of NCAR's High Altitude Observatory and a co-author of the paper. "With this breakthrough simulation, an overall comprehensive physical picture is emerging for everything that observers have associated with the appearance, formation, dynamics, and the decay of sunspots on the Sun's surface."
The University Corporation for Atmospheric Research manages the National Center for Atmospheric Research under sponsorship by the National Science Foundation.
Adapted from materials provided by National Center for Atmospheric Research/University Corporation for Atmospheric Research.

Human Eye Inspires Advance In Computer Vision From Boston College Researchers


ScienceDaily (June 18, 2009) — Inspired by the behavior of the human eye, Boston College computer scientists have developed a technique that lets computers see objects as fleeting as a butterfly or tropical fish with nearly double the accuracy and 10 times the speed of earlier methods.
The linear solution to one of the most vexing challenges to advancing computer vision has direct applications in the fields of action and object recognition, surveillance, wide-base stereo microscopy and three-dimensional shape reconstruction, according to the researchers, who will report on their advance at the upcoming annual IEEE meeting on computer vision.
BC computer scientists Hao Jiang and Stella X. Yu developed novel linear algorithms to streamline the computer's work. Previously, computer vision relied on software that captured the live image and then hunted through millions of possible object configurations to find a match. Further compounding the challenge, even more configurations needed to be searched as objects moved, changing scale and orientation.
Rather than combing through the image bank – a time- and memory-consuming computing task – Jiang and Yu turned to the mechanics of the human eye to give computers better vision.
"When the human eye searches for an object it looks globally for the rough location, size and orientation of the object. Then it zeros in on the details," said Jiang, an assistant professor of computer science. "Our method behaves in a similar fashion, using a linear approximation to explore the search space globally and quickly; then it works to identify the moving object by frequently updating trust search regions."
Trust search regions act as visual touchstones the computer returns to again and again. Jiang and Yu's solution focuses on the mathematically generated template of an image, which looks like a constellation when lines are drawn to connect the stars. Using the researchers' new algorithms, computer software identifies an object using the template of a trust search region. The program then adjusts the trust search regions as the object moves and finds its mathematical matches, relaying that shifting image to a memory bank or a computer screen to record or display the object.
Jiang says using linear approximation in a sequence of trust regions enables the new program to maintain spatial consistency as an object moves and reduces the number of variables that need to be optimized from several million to just a few hundred. That increased the speed of image matching 10 times over compared with previous methods, he said.
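The press release does not spell out Jiang and Yu's actual linear algorithms, but the general idea of a coarse global search that repeatedly shrinks a trust region around the best candidate can be sketched with a toy, translation-only matcher. Every function name and parameter below is an illustrative assumption, not the published method.

```python
import numpy as np

def match_template(image_points, template, n_iters=5):
    """Toy coarse-to-fine search for the translation that best aligns a point
    'constellation' template with observed image points, illustrating the idea
    of repeatedly shrinking a trust search region around the current best guess.
    (Hypothetical illustration only, not the authors' algorithm.)"""
    center = image_points.mean(axis=0)              # initial guess: global centroid
    half_width = np.ptp(image_points, axis=0) / 2   # trust region spans the whole scene
    for _ in range(n_iters):
        # sample candidate translations on a coarse grid inside the trust region
        xs = np.linspace(center[0] - half_width[0], center[0] + half_width[0], 9)
        ys = np.linspace(center[1] - half_width[1], center[1] + half_width[1], 9)
        best_cost, best_t = np.inf, center
        for x in xs:
            for y in ys:
                t = np.array([x, y])
                placed = template + t
                # cost: summed distance from each placed template point
                # to its nearest observed image point
                d = np.linalg.norm(placed[:, None, :] - image_points[None, :, :], axis=2)
                cost = d.min(axis=1).sum()
                if cost < best_cost:
                    best_cost, best_t = cost, t
        center = best_t
        half_width = half_width / 2    # shrink the trust region and refine
    return center, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.uniform(-5, 5, size=(20, 2))        # template "constellation"
    true_shift = np.array([40.0, -15.0])
    image_points = template + true_shift + rng.normal(0, 0.1, size=(20, 2))
    estimate, cost = match_template(image_points, template)
    print(estimate, cost)    # estimate should land near (40, -15)
```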
The researchers tested the software on a variety of images and videos – from a butterfly to a stuffed Teddy Bear – and report achieving a 95 percent detection rate at a fraction of the complexity. Previous so-called "greedy" methods of search and match achieved a detection rate of approximately 50 percent, Jiang said.
Jiang will present the team's findings at the IEEE Conference on Computer Vision and Pattern Recognition 2009, which takes place June 20-25 in Miami.
Adapted from materials provided by Boston College, via EurekAlert!, a service of AAAS.

Hybrid System Of Human-Machine Interaction Created


ScienceDaily (June 17, 2009) — Scientists at Florida Atlantic University (FAU) have created a "hybrid" system to examine real-time interactions between humans and machines (virtual partners). By pitting human against machine, they open up the possibility of exploring and understanding a wide variety of interactions between minds and machines, taking a first step toward a much friendlier union of man and machine, and perhaps even toward creating a different kind of machine altogether.
For more than 25 years, scientists in the Center for Complex Systems and Brain Sciences (CCSBS) in Florida Atlantic University’s Charles E. Schmidt College of Science, and others around the world, have been trying to decipher the laws of coordinated behavior called “coordination dynamics”.
Unlike the laws of motion of physical bodies, the equations of coordination dynamics describe how the coordination states of a system evolve over time, as observed through special quantities called collective variables. These collective variables typically span the interaction of organism and environment. Imagine a machine whose behavior is based on the very equations that are supposed to govern human coordination. Then imagine a human interacting with such a machine whereby the human can modify the behavior of the machine and the machine can modify the behavior of the human.
In a groundbreaking study published in the June 3 issue of PLoS One and titled “Virtual Partner Interaction (VPI): exploring novel behaviors via coordination dynamics,” an interdisciplinary group of scientists in the CCSBS created VPI, a hybrid system of a human interacting with a machine. These scientists placed the equations of human coordination dynamics into the machine and studied real-time interactions between the human and virtual partners. Their findings open up the possibility of exploring and understanding a wide variety of interactions between minds and machines. VPI may be the first step toward establishing a much friendlier union of man and machine, and perhaps even creating a different kind of machine altogether.
“With VPI, a human and a ‘virtual partner’ are reciprocally coupled in real-time,” said Dr. J. A. Scott Kelso, the Glenwood and Martha Creech Eminent Scholar in Science at FAU and the lead author of the study. “The human acquires information about his partner’s behavior through perception, and the virtual partner continuously detects the human’s behavior through the input of sensors. Our approach is analogous to the dynamic clamp used to study the dynamics of interactions between neurons, but now scaled up to the level of behaving humans.”
In this first ever study of VPI, machine and human behaviors were chosen to be quite simple. Both partners were tasked to coordinate finger movements with one another. The human executed the task with the intention of performing in-phase coordination with the machine, thereby trying to synchronize his or her flexion and extension movements with those of the virtual partner.
The machine, on the other hand, executed the task with the competing goal of performing anti-phase coordination with the human, thereby trying to extend its finger when the human flexed and vice versa. Pitting machine against human through opposing task demands was a way the scientists chose to enhance the formation of emergent behavior, and also allowed them to examine each partner’s individual contribution to the coupled behavior. An intriguing outcome of the experiments was that human subjects ascribed intentions to the machine, reporting that it was “messing” with them.
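The article does not reproduce the coordination-dynamics equations placed inside the virtual partner, but the opposing task goals can be illustrated with two reciprocally coupled phase oscillators: one pulled toward in-phase coordination, the other toward anti-phase. The equations, frequencies, and coupling strengths below are illustrative assumptions, not the published VPI model.

```python
import numpy as np

# Toy sketch of two reciprocally coupled phase oscillators with opposing goals:
# the "human" oscillator is attracted toward in-phase coordination (phase difference 0),
# the "machine" oscillator toward anti-phase (phase difference pi).
# All parameters are illustrative assumptions, not the published VPI equations.

dt, steps = 0.01, 5000
omega_h, omega_m = 2 * np.pi * 1.0, 2 * np.pi * 1.1   # natural movement frequencies (rad/s)
k_h, k_m = 2.0, 2.0                                    # coupling strengths

theta_h, theta_m = 0.0, 1.0        # initial phases
rel_phase = np.empty(steps)

for t in range(steps):
    # human flexes/extends trying to match the machine (in-phase attraction)
    dtheta_h = omega_h + k_h * np.sin(theta_m - theta_h)
    # machine tries to stay opposite the human (anti-phase attraction)
    dtheta_m = omega_m + k_m * np.sin(theta_h - theta_m + np.pi)
    theta_h += dt * dtheta_h
    theta_m += dt * dtheta_m
    rel_phase[t] = (theta_h - theta_m) % (2 * np.pi)

# Inspecting rel_phase over time shows the tug-of-war between the two goals:
# the relative phase wanders or settles depending on the coupling strengths.
print(rel_phase[-10:])
```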
“The symmetry between the human and the machine, and the fact that they carry the same laws of coordination dynamics, is a key to this novel scientific framework,” said co-author Dr. Gonzalo de Guzman, a physicist and research associate professor at the FAU center. “The design of the virtual partner mirrors the equations of motion of the human neurobehavioral system. The laws obtained from accumulated studies describe how the parts of the human body and brain self-organize, and address the issue of self-reference, a condition leading to complexity.”
One ready application of VPI is the study of the dynamics of complex brain processes such as those involved in social behavior. The extended parameter range opens up the possibility of systematically driving functional processes of the brain (neuromarkers) to better understand their roles. The scientists in this study anticipate that, just as many human skills are acquired by observing other human beings, human and machine will learn novel patterns of behavior by interacting with each other.
“Interactions with ever proliferating technological devices often place high skill demands on users who have little time to develop these skills,” said Kelso. “The opportunity presented through VPI is that equally useful and informative new behaviors may be uncovered despite the built-in asymmetry of the human-machine interaction.”
While stable and intermittent coordination behaviors emerged that had previously been observed in ordinary human social interactions, the scientists also discovered novel behaviors or strategies that have never previously been observed in human social behavior. The emergence of such novel behaviors demonstrates the scientific potential of the VPI human-machine framework.
Modifying the dynamics of the virtual partner with the purpose of inducing a desired human behavior, such as learning a new skill or as a tool for therapy and rehabilitation, are among several applications of VPI.
“The integration of complexity into the behavioral and neural sciences has just begun,” said Dr. Emmanuelle Tognoli, research assistant professor in FAU’s CCSBS and co-author of the study. “VPI is a move away from simple protocols in which systems are ‘poked’ by virtue of ‘stimuli’ to understanding more complex, reciprocally connected systems where meaningful interactions occur.”
Research for this study was supported by the National Science Foundation program “Human and Social Dynamics,” the National Institute of Mental Health’s “Innovations Award,” “Basic and Translational Research Opportunities in the Social Neuroscience of Mental Health,” and the Office of Naval Research Code 30. Kelso’s research is also supported by the Pierre de Fermat Chaire d’Excellence and Tognoli’s research is supported by the Davimos Family Endowment for Excellence in Science.
Adapted from materials provided by Florida Atlantic University, via Newswise.

Friday, June 5, 2009

Endless Original Music: Computer Program Creates Music Based On Emotions


ScienceDaily (June 2, 2009) — A group of researchers from the University of Granada (UGR) has developed Inmamusys, a software program that can create music in response to emotions that arise in the listener. By using artificial intelligence (AI) techniques, the program enables original, copyright-free and emotion-inspiring music to be played continuously.
UGR researchers Miguel Delgado, Waldo Fajardo and Miguel Molina decided to design a software program that would enable a person who knew nothing about composition to create music. The system they devised, using AI, is called Inmamusys, an acronym for Intelligent Multiagent Music System, and is able to compose and play music in real time.
If successful, this prototype, which was described recently in the journal Expert Systems with Applications, looks likely to bring great changes to the intrusive and repetitive canned music played in public places.
Miguel Molina, lead author of the study, says that while the repertoire of such canned music is very limited, the new invention can be used to create a pleasant, non-repetitive musical environment for anyone who has to be within earshot throughout the day.
Everyone's ears have suffered the effects of repetitively played canned music, be it in workplaces, hospital environments or during phone calls made to directory inquiries numbers. On this basis, the research team decided that it would be "very interesting to design and build an intelligent system able to generate music automatically, ensuring the correct degree of emotiveness (in order to manage the environment created) and originality (guaranteeing that the tunes composed are not repeated, and are original and endless)."
Inmamusys has the necessary knowledge to compose emotive music through the use of AI techniques. In designing and developing the system, the researchers worked on the abstract representation of the concepts necessary to deal with emotions and feelings. To achieve this, Molina says, "we designed a modular system that includes, among other things, a two-level multiagent architecture."
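The release gives no implementation details beyond the mention of a modular, two-level multiagent design, so the sketch below is purely hypothetical: a coordinator agent maps a requested emotion to musical parameters, and a composer agent improvises a fresh phrase from them. None of these names or mappings come from the actual Inmamusys system.

```python
import random

# Purely hypothetical sketch of a two-level multiagent design: a coordinator maps
# an emotion to musical parameters, and a composer improvises notes from them.
# Nothing below reflects the actual Inmamusys implementation.

EMOTION_PARAMS = {             # assumed mapping, for illustration only
    "calm":   {"scale": [60, 62, 64, 67, 69], "tempo_bpm": 70},              # C pentatonic
    "joyful": {"scale": [60, 62, 64, 65, 67, 69, 71], "tempo_bpm": 120},     # C major
}

def coordinator(emotion):
    """Top-level agent: choose composition parameters for the requested emotion."""
    return EMOTION_PARAMS[emotion]

def composer(params, n_notes=16, seed=None):
    """Lower-level agent: generate a fresh, non-repeating phrase on each call."""
    rng = random.Random(seed)
    return [(rng.choice(params["scale"]), 60.0 / params["tempo_bpm"])
            for _ in range(n_notes)]       # (MIDI pitch, duration in seconds)

if __name__ == "__main__":
    params = coordinator("calm")
    print(composer(params, n_notes=8))
```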
A survey was used to evaluate the system, with the results showing that users are able to identify the type of music composed by the computer. A person with no musical knowledge whatsoever can use this artificial musical composer, because the user need do nothing more than decide on the type of music.
Beneath the system's ease of use, Miguel Molina reveals that a complex framework is at work to allow the computer to imitate a feature as human as creativity. Aside from creativity, music also requires specific knowledge.
According to Molina, this "is usually something done by human beings, although they do not understand how they do it. In reality, there are numerous processes involved in the creation of music and, unfortunately, we still do not understand many of them. Others are so complex that we cannot analyse them, despite the enormous power of current computing tools. Nowadays, thanks to the advances made in computer sciences, there are areas of research -- such as artificial intelligence -- that seek to reproduce human behaviour. One of the most difficult facets of all to reproduce is creativity."
Farewell to copyright payments
Commercial development of this prototype will not only change the way research is carried out into the relationship between computers and emotions, the means of interacting with music, and the structures by which music is composed in the future. It will also serve, say the study's authors, to reduce costs.
According to the researchers, "music is highly present in our leisure and working environments, and a large number of the places we visit have canned music systems. Playing these pieces of music involves copyright payments. Our system will make these music copyright payments a thing of the past."
Journal reference:
Miguel Delgado; Waldo Fajardo; Miguel Molina-Solana. Inmamusys: Intelligent multiagent music system. Expert Systems with Applications, 2009; 36 (3): 4574 DOI: 10.1016/j.eswa.2008.05.028
Adapted from materials provided by Plataforma SINC, via AlphaGalileo.

Computer Graphics Researchers Simulate The Sounds Of Water And Other Liquids


ScienceDaily (June 4, 2009) — Splash, splatter, babble, sploosh, drip, drop, bloop and ploop!
Those are some of the sounds that have been missing from computer graphic simulations of water and other fluids, according to researchers in Cornell's Department of Computer Science, who have come up with new algorithms to simulate such sounds to go with the images.
The work by Doug James, associate professor of computer science, and graduate student Changxi Zheng will be reported at the 2009 ACM SIGGRAPH conference Aug. 3-7 in New Orleans. It is the first step in a broader research program on sound synthesis supported by a $1.2 million grant from the Human Centered Computing Program of the National Science Foundation (NSF) to James, assistant professor Kavita Bala and associate professor Steve Marschner.
In computer-animated movies, sound can be added after the fact from recordings or by Foley artists. But as virtual worlds grow increasingly interactive and immersive, the researchers point out, sounds will need to be generated automatically to fit events that can't be predicted in advance. Recordings can be cued in, but can be repetitive and not always well matched to what's happening.
"We have no way to efficiently compute the sounds of water splashing, paper crumpling, hands clapping, wind in trees or a wine glass dropped onto the floor," the researchers said in their research proposal.
Along with fluid sounds, the research also will simulate sounds made by objects in contact, like a bin of Legos; the noisy vibrations of thin shells, like trash cans or cymbals; and the sounds of brittle fracture, like breaking glass and the clattering of the resulting debris.
All the simulations will be based on the physics of the objects being simulated in computer graphics, calculating how those objects would vibrate if they actually existed, and how those vibrations would produce acoustic waves in the air. Physics-based simulations also can be used in design, just as visual simulation is now, James said. "You can tell what it's going to sound like before you build it," he explained, noting that a lot of effort often goes into making things quieter.
In their SIGGRAPH paper, Zheng and James report that most of the sounds of water are created by tiny air bubbles that form as water pours and splashes. Moving water traps air bubbles on the scale of a millimeter or so. Surface tension contracts the bubbles, compressing the air inside until it pushes back and expands the bubble. The repeated expansion and contraction over milliseconds generates vibrations in the water that eventually make its surface vibrate, acting like a loudspeaker to create sound waves in the air.
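The resonance of such a millimetre-scale bubble is commonly estimated with the Minnaert formula, f = (1/(2πr))·sqrt(3γp₀/ρ), which gives roughly 3 kHz for a 1 mm bubble. As a minimal sketch (and not the synthesis model from the SIGGRAPH paper), one can render a single "bloop" as an exponentially damped sinusoid at that frequency; the decay constant and envelope below are illustrative assumptions.

```python
import numpy as np

# Minnaert resonance of a spherical air bubble in water:
#   f = (1 / (2*pi*r)) * sqrt(3 * gamma * p0 / rho)
# The damped-sine envelope below is an illustrative assumption,
# not the synthesis model described in the SIGGRAPH paper.

def minnaert_frequency(radius_m, gamma=1.4, p0=101_325.0, rho=998.0):
    """Resonance frequency (Hz) of an air bubble of the given radius in water."""
    return np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * radius_m)

def bubble_bloop(radius_m=1e-3, duration_s=0.05, sample_rate=44_100, decay=60.0):
    """Synthesize one bubble 'bloop' as an exponentially damped sinusoid."""
    f0 = minnaert_frequency(radius_m)
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2.0 * np.pi * f0 * t) * np.exp(-decay * t)

if __name__ == "__main__":
    print(f"1 mm bubble resonates near {minnaert_frequency(1e-3):.0f} Hz")   # about 3.3 kHz
    samples = bubble_bloop()
    print(samples.shape)    # write these samples to a WAV file to listen
```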
The simulation method developed by the Cornell researchers starts with the geometry of the scene, figures out where the bubbles would be and how they're moving, computes the expected vibrations and finally the sounds they would produce. The simulation is done on a highly parallel computer, with each processor computing the effects of multiple bubbles. The researchers have fine-tuned the results by comparing their simulations with real water sounds.
Demonstration videos of simulations of falling, pouring, splashing and babbling water are available at http://www.cs.cornell.edu/projects/HarmonicFluids.
The current methods still require hours of offline computing time, and work best on compact sound sources, the researchers noted, but they said further development should make possible the real-time performance needed for interactive virtual environments and deal with larger sound sources such as swimming pools or perhaps even Niagara Falls. They also plan to approach the more complex collections of bubbles in foam or plumes.
The research reported in the SIGGRAPH paper was supported in part by an NSF Faculty Early Career Award to James, and by the Alfred P. Sloan Foundation, Pixar, Intel and Autodesk.
Adapted from materials provided by Cornell University. Original article written by Bill Steele.