IBM's future Power7 chip may be just about done as far as the engineering is concerned, and its server designs may be largely complete as well. But there is plenty of time yet to tweak the boxes, and I doubt very much that the final packaging and pricing for the Power7 machinery is anywhere close to being set. Which is a pity, really.
Philip, a new supercomputer at LSU, is named after chemistry professor Philip W. West, one of the university's first Boyd Professors (a Boyd Professorship is the highest and most prestigious academic rank LSU can confer). It is a 37-compute-node cluster with a peak performance of 3.5 TFlops, running the Red Hat Enterprise Linux 5 operating system. Each node contains two of the latest quad-core Nehalem Xeon 64-bit processors operating at a core frequency of 2.93 GHz. Philip was delivered to LSU in May 2009 and will be open for general use by LSU users.
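The quoted 3.5 TFlops figure is consistent with the node count and clock speed above, if one assumes 4 double-precision flops per core per cycle (the usual figure for Nehalem's 128-bit SSE units, with one add and one multiply per cycle; this per-cycle throughput is my assumption, not a number from the announcement). A quick sanity check:

```python
# Back-of-the-envelope check of Philip's quoted 3.5 TFlops peak.
# flops_per_cycle = 4 is an assumed value (Nehalem SSE: 1 add + 1 mul,
# each on a 2-wide double-precision vector), not from the announcement.
nodes = 37
sockets_per_node = 2
cores_per_socket = 4
ghz = 2.93
flops_per_cycle = 4  # assumed per-core double-precision throughput

peak_gflops = nodes * sockets_per_node * cores_per_socket * ghz * flops_per_cycle
print(f"Peak: {peak_gflops / 1000:.2f} TFlops")  # ~3.47 TFlops, i.e. the quoted 3.5
```

That works out to 296 cores at 11.72 GFlops each, or about 3.47 TFlops, which rounds to the advertised 3.5.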
The San Diego Supercomputer Center has taken a significant step forward for scientific computing by developing a first-of-its-kind High-Performance Computing (HPC) system that uses flash memory. Commonly found in household electronics such as digital cameras and cell phones, flash is generally a faster storage medium than traditional hard drives because it has no moving parts; a traditional drive stores data on spinning magnetic platters, and a mechanical head must seek to the right location before each access.
As computational scientists are confronted with increasingly massive datasets from supercomputing simulations and experiments, one of the biggest challenges is having the right tools to gain scientific insight from the data. One common approach is scientific visualization, which uses advanced software and computer graphics to transform abstract data into more readily comprehensible images. But the ever-growing size of scientific datasets presents a significant challenge to modern visualization tools. As a result, there is strong motivation to explore the use of large, parallel resources, such as those at the U.S. Department of Energy's (DOE) supercomputing centers, to take advantage of their vast processing power, I/O bandwidth, and large memory footprint.