Before specialized graphics-processing chips existed, pioneers in the field of visualization used multicore supercomputers to render data in three dimensions. Today, however, the speed at which supercomputers can process data is rapidly outstripping the speed at which they can input and output that data. Graphics-processing clusters are becoming obsolete.
Virtual Machines and Types of Service for TeraGrid Computing

Foundational capabilities we provide in TeraGrid, such as "roaming" access and a "coordinated" software environment, open new possibilities for more specialized services and allow the TeraGrid, as a system, to respond to supply and demand. For example, a resource provider might elect to increase the "price" of a queue in order to improve turnaround time by reducing demand, or decrease the price to increase demand (and thus utilization).
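The supply/demand idea above can be sketched with a toy model. This is purely illustrative and not an actual TeraGrid pricing mechanism: the demand curve, its sensitivity parameter, and the queue capacity are all assumed values, and the wait-time proxy is a simple utilization ratio rather than a real scheduler model.

```python
# Toy model: raising a queue's "price" reduces demand, which in turn
# reduces queue wait time (turnaround). All numbers are assumptions.

def expected_demand(price, base_demand=100.0, sensitivity=0.5):
    """Jobs submitted per hour; assumed linear response to price."""
    return max(base_demand - sensitivity * price * base_demand / 100, 0.0)

def expected_wait(price, capacity=80.0):
    """M/M/1-style wait proxy: utilization / (1 - utilization)."""
    rho = min(expected_demand(price) / capacity, 0.99)
    return rho / (1 - rho)

# A higher price thins the queue and improves turnaround:
assert expected_wait(price=50) < expected_wait(price=10)
```

The same model run in reverse shows the other half of the trade-off: dropping the price raises demand, pushing utilization (and wait time) back up.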
Traditional supercomputer vendors pushing miniaturized versions of their big machines, like Cray with its CX1 or NEC with its SX-9, have definitely been endorsed by pockets of life sciences researchers, but certainly not on any widespread scale. GPU chip maker NVIDIA even has its own personal supercomputer offering in an effort to capitalize on the growing use of graphics chips in scientific computing. And according to market research firms like IDC and its High Performance Computing group, personal supercomputers that cluster together GPUs and CPUs are a definite boon to pharmaceutical research shops. And with steadily climbing sales of workgroup systems priced under $100,000, what's the problem?
Multi-million dollar supercomputers take up most of the headlines, but many organizations are now considering the addition of smaller, personal supercomputers to their desktop fleet. Despite some strong global sales, find out why the idea still hasn't taken off at most companies.
I was doing some work and thought, "Wouldn't it be nice to have my own cluster?" I'm guessing not many people have those types of revelations, and probably fewer who decide they should go ahead and solve the problem. I wanted a cheap, small, easy-to-pack, light, quiet, low-power cluster that could sit on my desk without my even thinking about it.
As computational scientists are confronted with increasingly massive datasets from supercomputing simulations and experiments, one of the biggest challenges is having the right tools to gain scientific insight from the data. One common method for gaining insight is scientific visualization, which transforms abstract data into more readily comprehensible images using advanced computer software and computer graphics. But the ever-growing size of scientific datasets presents a significant challenge to modern scientific visualization tools. As a result, there is a great deal of motivation to explore the use of large, parallel resources, such as those at the U.S. Department of Energy's (DOE) supercomputing centers, to take advantage of their vast computational processing power, I/O bandwidth, and large memory footprint.
The San Diego Supercomputer Center has taken a significant step forward for scientific processing by developing the first high-performance computing (HPC) system of its kind to use flash memory. Commonly found in household electronics such as digital cameras and cell phones, flash is generally a faster storage medium than a traditional hard drive because it has no moving parts, whereas a traditional drive stores information on spinning magnetic platters that a read head must physically seek to before each access.
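The "no moving parts" advantage can be made concrete with a rough latency model. The numbers below are generic illustrative assumptions (a typical 7200 RPM disk and a ballpark flash read latency), not SDSC measurements: a disk pays a head seek plus, on average, half a platter rotation per random read, while flash access time is roughly constant.

```python
# Rough model of why flash beats a spinning disk on random reads.
# All figures are illustrative assumptions, not measured values.

def hdd_random_read_ms(seek_ms=8.0, rpm=7200):
    # The head must seek to the track, then wait on average half a
    # rotation before the target sector passes under it.
    half_rotation_ms = 0.5 * 60_000 / rpm
    return seek_ms + half_rotation_ms

def flash_random_read_ms(latency_ms=0.1):
    # No mechanical motion: access time is roughly constant.
    return latency_ms

speedup = hdd_random_read_ms() / flash_random_read_ms()
print(f"flash ~{speedup:.0f}x faster per random read")
```

Under these assumptions the disk spends about 12 ms per random read against roughly 0.1 ms for flash, a two-orders-of-magnitude gap, which is why flash pays off most for random-access workloads rather than large sequential streams.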
Philip, a new supercomputer named after chemistry professor Philip W. West, one of the first Boyd Professors at LSU (a Boyd Professorship is the highest and most prestigious academic rank LSU can confer on a professor), is a 37-compute-node cluster with a peak performance of 3.5 TFlops, running the Red Hat Enterprise Linux 5 operating system. Each node contains two of the latest quad-core Nehalem Xeon 64-bit processors operating at a core frequency of 2.93 GHz. Philip was delivered to LSU in May 2009 and is to be open for general use by LSU users.
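The quoted 3.5 TFlops figure can be checked from the node count and clock speed given above. One assumption not stated in the text: each Nehalem core retires 4 double-precision flops per cycle (128-bit SSE, 2 adds plus 2 multiplies), which was the standard peak figure for that generation.

```python
# Back-of-the-envelope check of Philip's quoted 3.5 TFlops peak.
nodes = 37
sockets_per_node = 2          # two Nehalem Xeons per node
cores_per_socket = 4          # quad-core
clock_ghz = 2.93
flops_per_cycle = 4           # assumed: 128-bit SSE, 2 adds + 2 muls

cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(f"{cores} cores, {peak_tflops:.2f} TFlops peak")
# 296 cores, 3.47 TFlops peak -- consistent with the quoted 3.5 TFlops
```

The small gap between 3.47 and 3.5 is just rounding in the press figure; the arithmetic confirms the spec sheet is internally consistent.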
IBM's future Power7 chip may be just about done as far as the engineering is concerned, and its server designs might be more or less completed as well. But there is plenty of time yet to tweak the boxes, and I doubt very much that the final packaging and pricing for the future Power7 machinery is anywhere close to being set. Which is a pity, really.