Monday, August 27, 2007

World's Highest-Resolution Computer Display Reaches 220 Million Pixels


Source:

Science Daily — Engineers at the University of California, San Diego have constructed the highest-resolution computer display in the world, with a screen resolution of up to 220 million pixels.
The system located at the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2) is also linked via optical fiber to Calit2’s building at UC Irvine, which boasts the previous record holder. The combination – known as the Highly Interactive Parallelized Display Space (HIPerSpace) – can deliver real-time rendered graphics simultaneously across 420 million pixels to audiences in Irvine and San Diego.
“We don’t intend to stop there,” said Falko Kuester, Calit2 professor for visualization and virtual reality and associate professor of structural engineering in UCSD’s Jacobs School of Engineering. “HIPerSpace provides a unique environment for visual analytics and cyberinfrastructure research and we are now seeking funding to double the size of the system at UC San Diego alone to reach half a billion pixels with a one gigapixel distributed display in sight.”
Kuester is the chief architect of the systems deployed in both Calit2 buildings. Until last week, UC Irvine’s Highly Interactive Parallelized Display Wall (HIPerWall) – built in 2005 with funding from the National Science Foundation (NSF) – held the record of 200 million pixels for a tiled display system. It is located in the Calit2 Center of Graphics, Visualization and Imaging Technology (GRAVITY), which Kuester directs. When Kuester’s group moved to UCSD in 2006 they began work on the next generation of massively tiled display walls, which now serve as a prototype for ultra-high resolution OptIPortal tiled displays developed by the NSF-funded OptIPuter project (led by Calit2 director Larry Smarr).
The new HIPerSpace system between Irvine and San Diego is joined together via high-performance, dedicated optical networking that clocks in at up to two gigabits per second (2 Gbps). The systems use the same type of graphics rendering technology, from industry partner NVIDIA. The “graphics super cluster” being developed at UCSD consists of 80 NVIDIA Quadro FX 5600 graphics processing units (GPUs). “The graphics and computational performance of these cards is quite astounding, putting the theoretical computational performance of the cluster at almost 40 teraflops,” said Kuester. “To put that into context, the top-rated supercomputer in the world five years ago was operating at that same speed. While these are purely theoretical numbers, the comparison clearly hints at capabilities of this new cluster that go far beyond generating impressive visual information.”
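As a quick back-of-the-envelope check of that figure, dividing the quoted aggregate peak by the number of cards gives the implied per-GPU peak. This sketch uses only the numbers in the article; no independent per-card specification is assumed:

```python
# Back-of-the-envelope check of the cluster's quoted peak (figures assumed
# from the article: 80 GPUs, roughly 40 teraflops aggregate).
num_gpus = 80
aggregate_tflops = 40.0          # theoretical peak quoted for the whole cluster

per_gpu_tflops = aggregate_tflops / num_gpus
print(f"Implied peak per GPU: {per_gpu_tflops:.2f} TFLOPS")  # ~0.50 TFLOPS
```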
The processing power will come in handy for the kinds of large-scale applications that are likely to make use of the HIPerSpace system. Calit2 will make the displays available to teams of scientists or engineers dealing with very large data sets, from multiple gigabytes to terabytes, notably in the Earth sciences, climate prediction, biomedical engineering, genomics, and brain imaging. “The higher-resolution displays allow researchers to take in both the broad view of the data and the minutest details, all at the same time,” said Kuester. “HIPerSpace also allows us to experiment on the two campuses with distributed teams that can collaborate and share insights derived from a better understanding of complex results. This capability will allow researchers at two UC campuses to collaborate more intensively with each other, and eventually with other campuses, thanks to the rapid rollout of OptIPortals outside of California.”
In San Diego, the OptIPortal is deployed on the second floor of Atkinson Hall, next to the offices of the NEES Cyberinfrastructure Center (NEESit), which supports the NSF-funded George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) and its 15 sites around the country. “Structural engineering simulations require a massive amount of data processing and visualization, especially if you need to crunch data coming in from all of the NEES participating sites,” said Kuester. “We are also using the system for a large-scale, global seismicity visualization using data collected over the past thirty years.”
“I am excited that UC Irvine’s HIPerWall is now networked to its larger sibling,” said Stephen Jenks, professor of electrical engineering and computer science at UC Irvine and a participant in Calit2 at UCI. “With the high-speed OptIPuter network between the two Calit2 buildings, we will be able to virtually join the display walls at a distance of nearly 100 miles, so they can work together to show different parts of a huge data set or each can replicate parts of the other. We look forward to exploring remote collaboration technology and how the two systems can help researchers understand data better than ever before.”
UCSD’s HIPerSpace OptIPortal is similar to the HIPerWall in that both are tiled display systems, but the two are built on different hardware. Irvine’s version is constructed with 50 Apple 30-inch Cinema Displays, powered by 25 Power Mac G5s running the Mac OS X operating system. UCSD’s Linux-based OptIPortal consists of 55 Dell displays driven by 18 Dell XPS personal computers. The system at UCSD uses the San Diego Supercomputer Center’s new 64-bit version of the grid-computing middleware known as ROCKS, released in early August, and Calit2’s Cluster GL for heterogeneous systems (CGLX) framework, which is capable of supporting both systems concurrently.
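The tile counts above are enough to roughly recover the headline pixel figures, assuming each 30-inch panel runs at the Apple Cinema Display's native 2560 x 1600 resolution (the article does not state the Dell panels' per-tile resolution, so that is an assumption; the UCSD result lands slightly above the quoted 220 million, a gap presumably due to rounding or how usable screen area was counted):

```python
# Total pixel count of a tiled display wall, assuming 2560x1600 panels
# (the Apple 30-inch Cinema Display's native resolution; the Dell panels
# are assumed here to be comparable 30-inch 2560x1600 units).
def wall_pixels(num_tiles, width=2560, height=1600):
    """Return the total number of pixels across all tiles."""
    return num_tiles * width * height

print(f"HIPerWall (50 tiles):  {wall_pixels(50) / 1e6:.0f} Mpixels")  # ~205
print(f"HIPerSpace (55 tiles): {wall_pixels(55) / 1e6:.0f} Mpixels")  # ~225
```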
"The usability of high-performance visualization clusters such as HiPerSpace is bound tightly to the accessibility of its resources, so cumbersome script configuration and specially-written software are no longer viable,” said Calit2 postdoctoral researcher Kai-Uwe Doerr. “The visualization software developed here at Cailt2 was designed to provide an efficient and transparent mechanism to grant access to available graphics resources and make the transition of a desktop application to a cluster seamless and uncomplicated – with minimal or no changes to the original code." Doerr and Kuester are part of a large team making HIPerSpace a reality. Others at UC San Diego include So Yamaoka; Daniel Knoblauch; Jason Kimball; Kevin Ponto; Tung-Ju Hsieh; Andres Fernandez Munuera; Tom DeFanti; Greg Dawe; Stephen Jenks; and Duy-Quoc Lai, who will start graduate school at UC Irvine this fall. Other UCI researchers on the HIPerWall end of the project include Harry Mangalam, Frank Wessel, Charlie Zender, Soroosh Sorooshian, Jean-Luc Gaudiot and Sung-Jin Kim.
Note: This story has been adapted from a news release issued by University of California, San Diego.

Fausto Intilla

Saturday, August 25, 2007

MIT aims to optimize chip designs (Model could reduce fabrication costs)


Source:

Anne Trafton,
News Office, August 16, 2007

The computer chips inside high-speed communication devices have become so small that tiny variations that appear during chip fabrication can make a big difference in performance.
Those variations can cause fluctuations in circuit speed and power, so the chips don't meet their original design specifications, says MIT Professor Duane Boning, whose research team is working to predict the variation in circuit performance and to maximize the number of chips that work within specifications.
The researchers have recently developed a model to characterize the variation in one kind of chip. The model could be used to estimate the ability to manufacture a circuit early in the development stages, helping to optimize chip designs and reduce costs.
"We're getting closer and closer to some of the limits on size, and variations are increasing in importance," says Boning, a professor of electrical engineering and computer science (EECS) and associate head of the department. "It's becoming much more difficult to reduce variation in the manufacturing process, so we need to be able to deal with variation and compensate for it or correct it in the design."
Boning and EECS graduate student Daihyun Lim's model characterizes variation in radio frequency integrated circuits (RFICs), which are used in devices that transfer large amounts of data very rapidly, such as high-definition TV receivers.
The researchers published their results in two papers in February and June. They also presented a paper on the modeling of variation in integrated circuits at this year's International Symposium on Quality Electronic Design.
RFIC chips are essential in many of today's high-speed communication and imaging devices. Shrinking a chip's transistors to extremely small dimensions (65 nanometers, or 65 billionths of a meter) improves the speed and power consumption of RFIC chips, but the small size also makes them more sensitive to the small, inevitable variations produced during manufacturing.
"The extremely high speeds of these circuits make them very sensitive to both device and interconnect parameters," said Boning, who is also affiliated with MIT's Microsystems Technology Laboratories. "The circuit may still work, but with the nanometer-scale deviations in geometry, capacitance or other material properties of the interconnect, these carefully tuned circuits don't operate together at the speed they're supposed to achieve."
Every step of chip manufacturing can be a source of variation in performance, said Lim. One source that has become more pronounced as chips have shrunk is the length of transistor channels, which are imprinted on chips using lithography.
"Lithography of very small devices has its optical limitation in terms of resolution, so the variation of transistor channel length is inevitable in nano-scale lithography," said Lim.
The researchers' model looks at how variation affects three different properties of circuits--capacitance, resistance and transistor turn-on voltage. Those variations cannot be measured directly, so Lim took an indirect approach: He measured the speed of the chip's circuits under different amounts of applied current and then used a mathematical model to estimate the electrical parameters of the circuits.
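The general idea of estimating parameters indirectly from a set of measurements can be sketched with a toy linearized model. The sensitivity matrix, noise level and deviation values below are invented for illustration and this is not the MIT group's actual model:

```python
# Hypothetical illustration of indirect parameter extraction: recover small
# deviations in (C, R, Vth) from delay shifts measured at several operating
# points, via an invented linear sensitivity model and least squares.
import numpy as np

rng = np.random.default_rng(2)

true_dev = np.array([0.04, -0.02, 0.01])        # relative (dC, dR, dVth), invented
S = rng.uniform(0.5, 2.0, size=(8, 3))          # delay sensitivities at 8 operating points
measured = S @ true_dev + rng.normal(0, 1e-3, size=8)   # noisy delay shifts

estimated, *_ = np.linalg.lstsq(S, measured, rcond=None)
print("Estimated deviations (dC, dR, dVth):", np.round(estimated, 3))
```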
To the researchers' surprise, they found correlations between some of the variations in each of the three properties, but not in others. For example, when capacitance was high, resistance was low. However, the transistor threshold voltage was nearly independent of the parasitic capacitance and resistance. The different degrees of correlation should be considered in the statistical simulation of the circuit performance during design for more accurate prediction of manufacturing yield, said Lim.
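A minimal Monte Carlo sketch shows why that correlation structure matters for yield prediction. The delay model, spread values and specification below are hypothetical; only the qualitative structure (capacitance and resistance negatively correlated, threshold voltage independent) follows the finding described above:

```python
# Hypothetical Monte Carlo yield estimate with correlated process variations.
# All numbers are invented; the correlation pattern mirrors the article.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Correlation of normalized variations in (capacitance, resistance, Vth).
corr = np.array([[ 1.0, -0.6, 0.0],
                 [-0.6,  1.0, 0.0],
                 [ 0.0,  0.0, 1.0]])
sigma = np.array([0.05, 0.05, 0.03])            # relative standard deviations
cov = corr * np.outer(sigma, sigma)

dC, dR, dVth = rng.multivariate_normal(np.zeros(3), cov, size=n).T

# Toy delay model: delay grows with R*C and with threshold voltage.
delay = (1 + dR) * (1 + dC) * (1 + 0.5 * dVth)

spec = 1.10                                     # allow up to +10% over nominal delay
yield_est = np.mean(delay <= spec)
print(f"Estimated yield: {yield_est:.1%}")
```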
The research was funded by the MARCO/DARPA Focus Center Research Program's Interconnect Focus Center and Center for Circuits and Systems Solutions, and by IBM, National Semiconductor and Samsung Electronics.

Fausto Intilla

Wednesday, August 15, 2007

An Interactive, 3D Voyage into Human Anatomy

Source:
Science Daily — Anatomists and biochemists have created a detailed virtual view of vital organs in the human body, down to the level of tissues and cells. The software builds the visualization from a combination of illustrations, knowledge of molecular cell structures, and an understanding of the body. So far researchers have modeled the liver, kidneys and heart, and they plan to continue building images of the entire body and then to model diseases in a virtual environment.
ROCHESTER, N.Y. -- We all know what we look like on the outside, but what about inside our bodies? Virtual reality usually flies us through imaginary worlds. Now a new one flies through the real world of the human body. Anatomists, along with biochemists and medical illustration students, built the new detailed images to create a never-before-seen virtual view of the body. "I think it's really exciting to see what we had in our head come to life," Jillian Scott, a medical illustration student at the University at Buffalo, N.Y., tells DBIS. The voyage goes deep into vital organs to reveal microscopic views of cells and tissues, providing a powerful tool for understanding the human body. Anatomist Richard Doolittle, of Rochester Institute of Technology in Rochester, N.Y., says, "Going with something like a 3D approach allows the student, allows the user, to see the structures from all different angles." The images are built through a combination of illustrations, knowledge of molecular cell structures, and an understanding of the body. Then computer software creates the images. The result is a virtual library of the human body. "Our real goal here is to provide the most reliable science we can find and the most graphically appealing way that we can," Paul Craig, a biochemist at Rochester Institute of Technology, tells DBIS. It's also an interactive way to navigate through the body and learn more information from virtually every angle. So far, researchers have created images of the pancreas, liver, kidneys and heart, and they plan to continue building images of the entire body and then to model diseases in a virtual environment.

BACKGROUND: With the help of a team of students, two scientists at the Rochester Institute of Technology, in New York state, created never-before-seen 3D virtual images of the pancreas, detailed images of the human skull, and DNA-level images of protein molecules. Viewers feel as if they are actually inside the body, taking a tour of a specific organ. The images will help them better understand human development, as well as improve the diagnosis and treatment of numerous diseases.

HOW THE IMAGES ARE MADE: The students first set up a pipeline of three different software tools which, taken together, enabled them to create true 3D images. One software program creates virtual trips through the body at the microscopic level, for example, while another sends images through polarized filters to a dual-projector system to create the 3D effect. The prototype system requires a pair of 3D red-and-blue glasses to view the images, but eventually the team hopes to create a fully interactive version that can be used with any computer monitor. A user would be able to zoom in or out and observe a given organ at all angles.

ABOUT COMPUTER MODELING: Computer modeling is used to simulate the structure and appearance both of static objects, such as building architecture, and of dynamic situations, such as a football game. Computer models can enable the user to test the consequences of choices and decisions. They can provide cutaway views that let you see aspects of an object that would be invisible in the real artifact, as well as visualization tools that offer many different perspectives. Physical models that reproduce behavior are limited by the physics of the world, while computer models have much looser bounds. Physical models of living things can reproduce very few behaviors compared to simulation models, and physical models simply cannot capture the sorts of species-level and conceptual-level phenomena that artificial life and artificial intelligence models do. Computer models enable you to run companies and civilizations, fight battles, play football games and evolve new species.

WHAT IS VIRTUAL REALITY: The term "virtual reality" is often used to describe interactive software programs in which the user responds to visual and hearing cues as he or she navigates a 3D environment on a graphics monitor. But originally, it referred to total virtual environments, in which the user would be immersed in an artificial, three-dimensional computer-generated world involving not just sight and sound, but touch as well. Devices that simulate the touch experience are called haptic devices. The user has a variety of input devices to navigate that world and interact with virtual objects, all of which must be linked together with the rest of the system to produce a fully immersive experience.

The Optical Society of America contributed to the information contained in the TV portion of this report. For more information about this story, contact: Optical Society of America, 2010 Massachusetts Ave., N.W., Washington, DC 20036-1023. Tel: 202-223-8130. E-mail: info@osa.org
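Since the prototype viewer described above relies on red-and-blue glasses, the final display step amounts to compositing a left-eye and a right-eye rendering into a single anaglyph frame. The sketch below shows that step in isolation, with synthetic frames standing in for real organ renders; it is not the team's actual software:

```python
# Minimal red-cyan anaglyph compositor: take the red channel from the
# left-eye view and the green/blue channels from the right-eye view.
# Assumes two pre-rendered RGB frames of equal size (hypothetical inputs).
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Combine left/right RGB images (H x W x 3, uint8) into one anaglyph frame."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red channel from the left eye
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green and blue from the right eye
    return anaglyph

# Example with synthetic frames standing in for two renders of an organ model.
left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
frame = make_anaglyph(left, right)
```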
Note: This story and accompanying video were originally produced for the American Institute of Physics series Discoveries and Breakthroughs in Science by Ivanhoe Broadcast News and are protected by copyright law. All rights reserved.

Fausto Intilla's web site:
http://www.oloscience.com/

Computers Expose The Physics Of NASCAR


Source:

Science Daily — It's an odd combination of Navier-Stokes equations and NASCAR driving. Computer scientists at the University of Washington have developed software that is incorporated in new technology allowing television audiences to instantaneously see how air flows around speeding cars. The algorithm, first presented at a computer graphics conference last August, has since been used by sports network ESPN and sporting-technology company Sportvision Inc. to create a new effect for racing coverage.
The fast-paced innovation hit prime time in late July when ESPN used the Draft Track technology to visualize the air flow behind cars in the Allstate 400 at the Brickyard, a NASCAR race at the Indianapolis Motor Speedway. Zoran Popović, an associate professor in the UW's department of computer science and engineering, and two students wrote the code that dramatically speeds up real-time fluid dynamics simulations. Working with ESPN, a Chicago-based company named Sportvision developed the application for NASCAR competition.

The Draft Track application calculates air flow over the cars and then displays it as colors trailing behind the car. Green, blue, yellow and red correspond to different speeds and directions of air flow when two or more cars approach one another while driving at speeds upward of 200 miles per hour. "What ESPN wanted to do is tell the story for the viewer of how drafting works, because it's such a big part of the event," said Rick Cavallaro, chief scientist at Sportvision. "How the drivers use drafting to save gas, pick up speed, et cetera."

The UW researchers' breakthrough was figuring out how to simulate and display complex systems very quickly. Studios such as Pixar already use physical laws, such as the Navier-Stokes fluids equations, in their animations. This allows the studios to create realistic pictures of how smoke curls, how a fire's flames lick, and even how hair or fabric blows in the wind. But these calculations take hours to run on many high-performance computers. And increasing the speed of the image is only one challenge of moving to a real-time setting. "The studios shoot a two-second special effect and if it doesn't work they just change the parameters and try again," Popović said. "But in a real-time context the simulation has to run indefinitely, and for an unforeseen set of inputs." To make the simulation work in real time and be interactive, "you kind of need to rethink the math problem," he said. "The method that ended up being used is drastically different from what people have done before."

The new algorithm first simulates all the ways that smoke, fire -- or in this case, modified stock cars -- can behave. Then it runs the simulation for a reduced number of physically possible parameters. This allows the model to run a million times faster than before. The researchers presented the work at the SIGGRAPH computer graphics conference in August 2006.

Popović imagined that the first applications would be introducing interactive simulations in video games that would allow players to drive through a smoky fire, interact with the weather in a flight simulator, or drive racecars in a virtual wind tunnel. Other research results from his lab were licensed to the game industry and then adopted in video games. But in March, Sportvision approached the researchers to see whether it could license the software for use in NASCAR visualizations. The two parties agreed to a nonexclusive, open-source agreement under which the company would be allowed to use the technique. "What's interesting is how the flow from the car in front is affecting the cars behind," Popović said. "When there are two cars behind, then the interaction becomes very complex."

Sportvision creates technology to enhance sports coverage. It introduced the glowing puck for National Hockey League telecasts in the mid-1990s and later came up with the yellow lines that drag a virtual highlighter over the first-down line in football.
The company has already developed add-ons for ESPN's NASCAR coverage, placing Global Positioning System receivers, inertial measurement systems and telemetry on each car that can determine each car's speed and position several times a second. Now company engineers will use data from those sensors to model and display the air flowing over the cars. "This is certainly not an application that had occurred to me," admitted Adrien Treuille, a doctoral student who co-authored the software. He, like Popović, said he had not previously been a NASCAR fan. The group hoped the work might be used for realistic training simulations, such as firefighters entering a smoke-filled building. "But once [Sportvision] called us and started describing what they wanted to do," Treuille recalled, "we said, 'Yes, that would totally work.'" Article: "Model reduction for real-time fluids".
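The snapshot-plus-reduced-basis idea behind that kind of work can be sketched in a few lines: gather velocity fields from offline simulations, extract a low-dimensional basis with an SVD, and work with a handful of coefficients at run time. The sketch below covers only basis extraction and projection on synthetic data; the published method additionally projects the fluid dynamics themselves onto the reduced basis, which is what makes real-time simulation possible:

```python
# Sketch of snapshot-based model reduction: build a low-dimensional basis
# from precomputed velocity fields, then represent new fields with a few
# coefficients. Synthetic data stands in for real simulation snapshots.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_snapshots, n_modes = 10_000, 200, 16

# Each column is one flattened velocity field from an offline simulation.
snapshots = rng.standard_normal((n_cells, n_snapshots))

# Proper orthogonal decomposition via thin SVD; keep the leading modes.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :n_modes]                      # n_cells x n_modes

# A new field is reduced to n_modes coefficients and reconstructed cheaply.
field = snapshots[:, 0]
coeffs = basis.T @ field                    # projection (the "reduced state")
approx = basis @ coeffs                     # reconstruction for display
print("Reduced state size:", coeffs.shape)  # 16 numbers instead of 10,000
```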
Note: This story has been adapted from a news release issued by University of Washington.
Fausto Intilla

NCAR Adds Resources To TeraGrid


Source:

Science Daily — Researchers who use the TeraGrid, the nation's most comprehensive and advanced infrastructure for open scientific research, can now leverage the computing resources of a powerful, 2048-processor BlueGene/L system at the National Center for Atmospheric Research (NCAR).
NCAR plans to provide up to 4.5 million processor-hours of BlueGene/L computing annually to researchers who have received computing grants from the National Science Foundation (NSF).

The operational integration of TeraGrid with the BlueGene/L system, nicknamed "frost," involved extensive preparation by NCAR's Computational and Information Systems Laboratory (CISL). Engineers deployed the necessary networking infrastructure, then established connectivity to NCAR's data storage systems and merged the local resource accounting system with the TeraGrid. "We are excited to be at a point where all our hard work and preparation pays off, and to provide the TeraGrid community with access to this valuable collaborative resource," says Richard Loft, NCAR TeraGrid principal investigator.

NCAR is also testing experimental systems and services on the TeraGrid. These include the wide-area versions of general parallel file systems from IBM and Cluster File Systems, as well as a remote data visualization capability based on the VAPOR tool, an open-source application developed by NCAR, the University of California, Davis, and Ohio State University under the sponsorship of NSF.

NCAR's frost system, which is operated in partnership with the University of Colorado, will be the second BlueGene/L system on the TeraGrid, joining the San Diego Supercomputer Center's 6,144-processor system. With the addition of frost, the TeraGrid has more than 250 teraflops of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks.

About the TeraGrid

The TeraGrid, sponsored by the National Science Foundation Office of Cyberinfrastructure, is a partnership of people, resources, and services that enables discovery in U.S. science and engineering. Through coordinated policy, grid software, and high-performance network connections, the TeraGrid integrates a distributed set of high-capability computational, data-management and visualization resources to make research more productive.
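For scale, the 4.5 million processor-hours mentioned above come to roughly a quarter of frost's theoretical annual capacity, assuming all 2048 processors could run around the clock (a back-of-the-envelope figure, not an NCAR allocation policy):

```python
# Rough scale check: what fraction of frost's theoretical annual capacity
# do 4.5 million allocated processor-hours represent? (Assumes all 2048
# processors available around the clock; purely illustrative.)
processors = 2048
hours_per_year = 365 * 24
capacity = processors * hours_per_year          # ~17.9 million processor-hours
allocated = 4.5e6

print(f"Theoretical capacity: {capacity / 1e6:.1f} M processor-hours/year")
print(f"Allocated share:      {allocated / capacity:.0%}")   # ~25%
```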
Note: This story has been adapted from a news release issued by National Center for Atmospheric Research.

Fausto Intilla