IBM's future Power7 chip may be just about done as far as the engineering is concerned, and its server designs may be more or less complete as well. But there is plenty of time yet to tweak the boxes, and I doubt very much that the final packaging and pricing for the future Power7 machinery is anywhere close to being set. Which is a pity, really.
Philip, a new supercomputer at LSU, is named after chemistry professor Philip W. West, one of the university's first Boyd Professors (a Boyd Professorship is the highest and most prestigious academic rank LSU can confer on a professor). It is a 37-node cluster with a peak performance of 3.5 TFlops, running the Red Hat Enterprise Linux 5 operating system. Each node contains two of the latest quad-core Nehalem Xeon 64-bit processors operating at a core frequency of 2.93 GHz. Philip was delivered to LSU in May 2009 and is to be opened for general use by LSU users.
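For readers curious how the quoted 3.5 TFlops peak squares with the node count and clock speed, here is a quick back-of-the-envelope check in Python; the figure of 4 double-precision FLOPs per core per cycle is an assumption about Nehalem's SSE units, not a number taken from the announcement.

    # Rough check of Philip's quoted peak performance from the figures in the text.
    nodes = 37
    sockets_per_node = 2
    cores_per_socket = 4
    clock_ghz = 2.93
    flops_per_cycle = 4  # assumption: 2 adds + 2 multiplies per cycle via SSE

    peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
    print(f"{peak_gflops / 1000:.2f} TFlops peak")  # ~3.47, consistent with the quoted 3.5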
The San Diego Supercomputer Center has taken a significant step forward for scientific processing by developing a first-of-its-kind High-Performance Computing (HPC) system that uses flash memory. Commonly used in household electronics such as digital cameras and cell phones, flash is generally a faster storage medium than traditional hard drives because it has no moving parts; a traditional drive stores information on magnetic platters that a read head must physically seek across to access.
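The random-access advantage is easy to demonstrate. Below is a minimal sketch that times many small reads at random offsets in a large existing file: on a spinning disk each read pays a head seek, while flash has no such mechanical penalty. The file path is a placeholder for a real file on the drive being measured.

    import os
    import random
    import time

    PATH = "testfile.bin"   # hypothetical: a large file already on the drive under test
    BLOCK = 4096            # read size in bytes
    READS = 1000

    size = os.path.getsize(PATH)
    start = time.time()
    with open(PATH, "rb", buffering=0) as f:
        for _ in range(READS):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)
    elapsed = time.time() - start
    print(f"{READS} random {BLOCK}-byte reads: {elapsed * 1000 / READS:.2f} ms each")

In practice the file should be larger than memory, or the page cache dropped first, so that cached reads do not mask the difference between the two media.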
As computational scientists are confronted with increasingly massive datasets from supercomputing simulations and experiments, one of the biggest challenges is having the right tools to gain scientific insight from the data. One common method for gaining insight is scientific visualization, which transforms abstract data into more readily comprehensible images using advanced computer software and computer graphics. But the ever-growing size of scientific datasets presents a significant challenge to modern scientific visualization tools. As a result, there is a great deal of motivation to explore the use of large, parallel resources, such as those at the U.S. Department of Energy's (DOE) supercomputing centers, to take advantage of their vast computational processing power, I/O bandwidth and large memory footprint.
I was doing some work and thought, "Wouldn't it be nice to have my own cluster?" I'm guessing not many people have that kind of revelation, and probably fewer who decide they should go ahead and solve the problem. I wanted a cheap, small, easy-to-pack, light, quiet, low-power cluster that I could sit on my desk and not even have to think about.
Multi-million dollar supercomputers take up most of the headlines, but many organizations are now considering the addition of smaller, personal supercomputers to their desktop fleet. Despite some strong global sales, find out why the idea still hasn't taken off at most companies.
Traditional supercomputer vendors pushing miniaturized versions of their big machines, like Cray with its CX1 or NEC with its SX-9, have definitely been embraced by pockets of life sciences researchers, but certainly not on any widespread scale. GPU chip maker NVIDIA even has its own personal supercomputer offering in an effort to capitalize on the growing use of graphics chips in scientific computing. And according to market researchers like IDC's High Performance Computing group, personal supercomputers that cluster together GPUs and CPUs are a definite boon to pharmaceutical research shops. And with sales of workgroup systems priced under $100,000 climbing steadily, what's the problem?
Virtual Machines and Types of Service for TeraGrid Computing: Foundational capabilities we provide in TeraGrid, such as "roaming" access and a "coordinated" software environment, open new possibilities for more specialized services and allow the TeraGrid, as a system, to respond to supply and demand. For example, a resource provider might elect to increase the "price" of a queue in order to improve turnaround time by reducing demand, or decrease the price to increase demand (and thus utilization).
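To make the supply-and-demand idea concrete, here is a toy sketch of the feedback rule such a provider might apply; the target wait time, step size, and the notion of pricing in service units per CPU-hour are illustrative assumptions, not TeraGrid policy.

    def adjust_price(price, avg_wait_hours, target_wait_hours=4.0, step=0.10):
        """Nudge a queue's price based on how observed turnaround compares to a target."""
        if avg_wait_hours > target_wait_hours:
            return price * (1 + step)   # demand too high: raise the price to shed load
        if avg_wait_hours < 0.5 * target_wait_hours:
            return price * (1 - step)   # queue underused: lower the price to attract jobs
        return price

    price = 1.0  # hypothetical starting price, in service units per CPU-hour
    for wait in [6.0, 5.5, 3.0, 1.0, 0.8]:   # hypothetical weekly average waits (hours)
        price = adjust_price(price, wait)
        print(f"avg wait {wait:4.1f} h -> new price {price:.2f} SU/CPU-hour")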
Before specialized graphics-processing chips existed, pioneers in the field of visualization used multicore supercomputers to render data in three dimensions. Today, however, the speed at which supercomputers can process data is rapidly outstripping the speed at which they can input and output that data, so moving results off the machine for separate post-processing is increasingly impractical. Graphics-processing clusters are becoming obsolete.
Personal supercomputers that lash together a stack of graphics processing units and can sit on a desktop are becoming popular with researchers. These machines can be used to run calculations at the desk instead of waiting for time on one of the national supercomputers.
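For a sense of what "calculations at the desk" looks like in practice, here is a minimal sketch of offloading a large matrix multiply to a local GPU with CuPy; the library choice, matrix size, and the assumption of an installed CUDA-capable card are mine, not details from the article.

    import cupy as cp  # assumption: CuPy and an NVIDIA GPU are available

    n = 4096
    a = cp.random.random((n, n), dtype=cp.float32)
    b = cp.random.random((n, n), dtype=cp.float32)

    c = a @ b                              # runs on the GPU under the desk
    cp.cuda.Stream.null.synchronize()      # wait for the kernel to finish
    print(float(c.sum()))                  # pull a single number back to the host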
Linux Magazine HPC Editor Douglas Eadline recently had a chance to discuss the current state of HPC clusters with Beowulf pioneer Don Becker, founder and Chief Technical Officer of Scyld Software (now part of Penguin Computing). For those who may have come to the HPC party late, Don was a co-founder of the original Beowulf project, which is the cornerstone of commodity-based high-performance cluster computing. Don's work in parallel and distributed computing began in 1983 at MIT's Real Time Systems group. He is known throughout the international community of operating system developers for his contributions to networking software and as the driving force behind beowulf.org.
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, has officially launched the Triton Resource, an integrated, data-intensive computing system primarily designed to support UC San Diego and UC researchers. The Triton Resource -- which features some of the most extensive data analysis power available commercially or at any research institution in the country because of its unique large-memory nodes -- also will be available to researchers throughout the larger academic community, as well as private industry and government-funded organizations. Plans for the new system were first announced last fall, as SDSC formally opened a new building and data center that doubled the size of the existing supercomputer center to 160,000 square feet.
In a sharply worded speech to the Security Council this week, Russian President Dmitry Medvedev warned that Russia is "significantly behind" other countries in producing powerful supercomputers, and said the lag hurts Russia's competitiveness and its ability to defend itself.
Last year, researchers at Indiana University's Cryo-Transmission Electron Microscopy Facility (cryoEM) acquired a powerful new microscope capable of electron cryomicroscopy, a method of analyzing the structure of proteins at very low temperatures. However, the process often damages samples, so researchers have to image a large number of them to ensure accurate results. This in turn means multiple images from hundreds of thousands of protein particles, which then need to be combined into composite images, requiring thousands of hours of compute time. So the analysis, movement, and management of all these image files became an IT headache almost as soon as they flipped the on switch.
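The composite-image step itself is conceptually simple, even if doing it for hundreds of thousands of particles is not. Below is a minimal sketch of averaging a stack of noisy particle images to improve signal-to-noise; real single-particle workflows also align and classify the particles first, and the file paths here are hypothetical.

    import glob
    import numpy as np

    paths = glob.glob("particles/*.npy")            # hypothetical pre-extracted particle images
    stack = np.stack([np.load(p) for p in paths])   # shape: (num_particles, height, width)
    composite = stack.mean(axis=0)                  # noise falls off roughly as 1/sqrt(N)
    np.save("composite.npy", composite)
    print(f"averaged {len(paths)} particle images into one composite")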
"High-performance computing is transforming physics research," said Ralph Roskies, co-scientific director of the Pittsburgh Supercomputing Center (PSC), during a presentation on Friday, March 20, at the American Physical Society Meeting, held in Pittsburgh, March 16-20. "The Impact of NSF's TeraGrid on Physics Research" was the topic of his talk, which led off a panel of physicists who have made major strides in their work through the TeraGrid, the National Science Foundation's cyberinfrastructure program. "These world-class facilities," said Roskies, "on a much larger scale than ever before, present major new opportunities for physics researchers to carry out computations that would have been infeasible just a few years ago."
LexisNexis has built its business on bringing together billions of different records from many different sources. Its data and tools allow customers to query those data to find out everything from which middle-aged soccer moms bought white wine last month, to the names of everyone who registered a car last week in San Diego with a license plate that has an "O" and an "H" in it. Recently the company has been working with Sandia National Laboratories to understand whether the LexisNexis data tools might help researchers manage and understand the flood of data coming from the supercomputers and high-resolution scientific instruments that drive discovery today.
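To give a flavor of the license-plate example as a query, here is a toy pandas sketch over a made-up registrations table; LexisNexis's platform uses its own data-processing language rather than pandas, and every column and value below is invented for illustration.

    import pandas as pd

    # Hypothetical vehicle-registration records
    registrations = pd.DataFrame({
        "name":  ["A. Smith", "B. Jones", "C. Lee"],
        "city":  ["San Diego", "San Diego", "Phoenix"],
        "date":  pd.to_datetime(["2009-06-01", "2009-06-03", "2009-06-02"]),
        "plate": ["4OH1234", "7ABC890", "2HO5678"],
    })

    last_week = registrations["date"] >= "2009-05-28"
    in_san_diego = registrations["city"] == "San Diego"
    has_o_and_h = (registrations["plate"].str.contains("O")
                   & registrations["plate"].str.contains("H"))

    print(registrations[last_week & in_san_diego & has_o_and_h]["name"].tolist())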
"This project," said Sergiu Sanielevici, PSC director of scientific applications and user support, who also leads user support and services for the TeraGrid, "exemplifies how powerful systems like Pople can open doors to data-mining and data-centric research in fields not traditionally associated with HPC, such as the social sciences, and make it possible to get answers that would otherwise be impractical or impossible." PSC supported this project through the NSF TeraGrid program, which allocates large-scale computing resources free to researchers at U.S. universities on a peer-review proposal basis.