Welcome to the PS3cluster Guide. This community guide shows you how to set up your own MPI (Message Passing Interface) based supercomputer cluster using the PlayStation 3. It was co-written by Gaurav Khanna, drawing on his earlier work on the Gravity Grid, and serves as the current run-time environment for the research of co-author Chris Poulin on distributed pattern recognition. We use Fedora for this infrastructure and walk through the setup below. NOTE: We focus on the Fedora 8 distribution because of Fedora's prevalence and its compatibility with the Cell SDK (3.0). Finally, this content should be considered open source, and here is the license.
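MPI programs on such a cluster follow a message-passing model: independent processes (ranks) exchange data explicitly rather than sharing memory. As a conceptual sketch only, using the Python standard library as a stand-in (a real PS3 cluster would run an actual MPI implementation such as MPICH or Open MPI on Fedora 8), the pattern looks like this:

```python
# Conceptual sketch of the message-passing model that MPI provides:
# independent worker processes exchange data over explicit channels
# instead of sharing memory. This is NOT MPI itself -- just a stdlib
# illustration of the scatter/compute/gather pattern.
from multiprocessing import Process, Queue

def worker(rank, work_queue, result_queue):
    """Each 'rank' squares the numbers it receives, like an MPI task."""
    while True:
        item = work_queue.get()
        if item is None:            # sentinel: no more work
            break
        result_queue.put((rank, item * item))

if __name__ == "__main__":
    work, results = Queue(), Queue()
    procs = [Process(target=worker, args=(r, work, results)) for r in range(4)]
    for p in procs:
        p.start()
    for n in range(8):              # "scatter" work to the ranks
        work.put(n)
    for _ in procs:                 # one sentinel per worker
        work.put(None)
    gathered = [results.get() for _ in range(8)]   # "gather" the results
    for p in procs:
        p.join()
    print(sorted(sq for _, sq in gathered))        # squares of 0..7
```

In real MPI code the queues are replaced by calls like `MPI_Send`/`MPI_Recv` (or collectives such as `MPI_Scatter` and `MPI_Gather`), and the ranks run on separate PS3 nodes rather than as local processes.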
The Ohio Supercomputer Center provides supercomputing, research and educational resources to a diverse state and national community, including education, academic research, industry and state government. At the Ohio Supercomputer Center, our duty is to empower our clients, partner strategically to develop new research and business opportunities, and lead Ohio's knowledge economy.
As computational scientists are confronted with increasingly massive datasets from supercomputing simulations and experiments, one of the biggest challenges is having the right tools to gain scientific insight from the data. One common method for gaining insight is to use scientific visualization, which transforms abstract data into more readily comprehensible images using advanced computer software and computer graphics. But the ever-growing size of scientific datasets presents a significant challenge to modern scientific visualization tools. As a result, there is a great deal of motivation to explore use of large, parallel resources, such as those at the U.S. Department of Energy's (DOE) supercomputing centers, to take advantage of their vast computational processing power, I/O bandwidth and large memory footprint.
Multi-million dollar supercomputers take up most of the headlines, but many organizations are now considering the addition of smaller, personal supercomputers to their desktop fleet. Despite some strong global sales, find out why the idea still hasn't taken off at most companies.
Traditional supercomputer vendors pushing miniaturized versions of their big machines, like Cray with its CX1, or NEC with its SX-9, have definitely been endorsed by pockets of life sciences researchers, but certainly not on any widespread scale. GPU chip maker NVIDIA even has its own personal supercomputer offering in an effort to capitalize on the growing use of graphics chips in scientific computing. And according to market research firms like IDC's High Performance Computing group, personal supercomputers that cluster together GPUs and CPUs are a definite boon to pharmaceutical research shops. And with the steadily climbing growth of workgroup systems selling for less than $100,000, what's the problem?
Last week I moderated a webinar entitled Optimizing Performance for HPC: Part 2 - Interconnect with InfiniBand. It was a great presentation with a lot of practical information and good questions. If you missed it, it will be available for a few months, so you still have a chance to check it out. As part of the webinar, Vallard Benincosa of IBM mentioned that the speed of light was becoming an issue in network design. In engineering terms, that is referred to as a hard limit.
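To see why the speed of light is a hard limit, a back-of-envelope calculation helps: light covers roughly 0.3 m per nanosecond in vacuum, and signals in fiber or copper travel at roughly two thirds of that (the 0.66 velocity factor below is an illustrative assumption, not a figure from the webinar):

```python
# Signal propagation delay as a hard limit on interconnect latency.
# Light in vacuum covers ~0.3 m per nanosecond; in fiber or copper the
# effective speed is roughly 2/3 of that (illustrative velocity factor).

C_VACUUM_M_PER_NS = 0.2998   # speed of light, metres per nanosecond
MEDIUM_FACTOR = 0.66         # assumed velocity factor for fiber/copper

def propagation_delay_ns(cable_length_m: float) -> float:
    """One-way signal delay over a cable of the given length, in ns."""
    return cable_length_m / (C_VACUUM_M_PER_NS * MEDIUM_FACTOR)

for length in (1, 10, 100):
    print(f"{length:>4} m cable: ~{propagation_delay_ns(length):.1f} ns one-way")
```

With modern InfiniBand switch and adapter latencies around a microsecond, a 100 m cable run already contributes a delay of the same order, and no amount of hardware improvement can engineer it away.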
Before specialized graphics-processing chips existed, pioneers in the field of visualization used multicore supercomputers to render data in three dimensions. Today, however, the speed at which supercomputers can process data is rapidly outstripping the speed at which they can input and output that data; shipping massive datasets to a separate machine for rendering is becoming impractical, and dedicated graphics-processing clusters are becoming obsolete.
Linux magazine HPC Editor Douglas Eadline had a chance recently to discuss the current state of HPC clusters with Beowulf pioneer Don Becker, Founder and Chief Technical Officer of Scyld Software (now part of Penguin Computing). For those who may have come to the HPC party late, Don was a co-founder of the original Beowulf project, which is the cornerstone for commodity-based high-performance cluster computing. Don's work in parallel and distributed computing began in 1983 at MIT's Real Time Systems group. He is known throughout the international community of operating system developers for his contributions to networking software and as the driving force behind beowulf.org.
In a sharply worded speech to the Security Council this week, Russian President Dmitry Medvedev warned that Russia is "significantly behind" other countries in producing powerful supercomputers, and said the lag hurts Russia's competitiveness and its ability to defend itself.
"This project," said Sergiu Sanielevici, PSC director of scientific applications and user support, who also leads user support and services for the TeraGrid, "exemplifies how powerful systems like Pople can open doors to data-mining and data-centric research in fields not traditionally associated with HPC, such as the social sciences, and make it possible to get answers that would otherwise be impractical or impossible." PSC supported this project through the NSF TeraGrid program, which allocates large-scale computing resources free to researchers at U.S. universities on a peer-review proposal basis.
Ramdisks - Now We Are Talking Hyperspace! Ramdisks can offer a level of performance that is simply amazing. They are more than just a tool for benchmarking: new devices now use ramdisks to deliver this kind of ultra-high performance.
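You can get a feel for the gap yourself on most Linux systems, where `/dev/shm` is a RAM-backed tmpfs mount. The sketch below uses a deliberately naive sequential-write timing (no cache control or repeated trials, so treat the numbers as rough):

```python
# Rough comparison of sequential write throughput on a RAM-backed
# filesystem (/dev/shm, a tmpfs mount on most Linux systems) versus
# the ordinary temp directory. A naive benchmark, for illustration only.
import os
import tempfile
import time

def write_throughput(path, size_mb=32):
    """Write size_mb of random data to a file under `path`; return MB/s."""
    data = os.urandom(1024 * 1024)          # one 1 MB chunk, reused
    fname = os.path.join(path, "ramdisk_test.bin")
    start = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                # force data out of page cache
    elapsed = time.perf_counter() - start
    os.remove(fname)
    return size_mb / elapsed

if os.path.isdir("/dev/shm"):
    print(f"ramdisk: ~{write_throughput('/dev/shm'):.0f} MB/s")
print(f"disk:    ~{write_throughput(tempfile.gettempdir()):.0f} MB/s")
```

On a machine with a conventional hard disk the tmpfs figure is typically an order of magnitude or more higher, which is exactly the gap the new ramdisk-based devices are built to exploit.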
DMTCP: Transparent Checkpointing for Cluster Computations and the Desktop Authors: Jason Ansel, Kapil Arya, Gene Cooperman (Submitted on 6 Jan 2007 (v1), last revised 24 Feb 2009 (this version, v3)) Abstract: DMTCP (Distributed MultiThreaded CheckPointing) is a transparent user-level checkpointing package for distributed applications. Checkpointing and restart are demonstrated for a wide range of over 20 well known applications, including MATLAB, Python, TightVNC, MPICH2, OpenMPI, and runCMS. RunCMS runs as a 680 MB image in memory that includes 540 dynamic libraries, and is used for the CMS experiment of the Large Hadron Collider at CERN. DMTCP transparently checkpoints general cluster computations consisting of many nodes, processes, and threads.
EUCALYPTUS - Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems - is an open-source software infrastructure for implementing "cloud computing" on clusters. The current interface to EUCALYPTUS is compatible with Amazon's EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. EUCALYPTUS is implemented using commonly available Linux tools and basic Web-service technologies, making it easy to install and maintain.
Sandia and Oak Ridge national laboratories said they have begun developing a concept for the next generation of supercomputers - supercomputers that will be able to analyze an enormous number of particles in real time to examine and predict real-world conditions.
The semantic grid uses metadata to describe information in the grid. Turning information into something more than just a collection of data means understanding the context, format, and significance of the data. The semantic Web follows this model by providing machine-readable descriptions of Web resources.