Building and Promoting a Linux-based Operating System to Support Virtual Organizations for Next Generation Grids (2006-2010). The emergence of Grids enables the sharing of a wide range of resources to solve large-scale computational and data-intensive problems in science, engineering and commerce. While much has been done to build Grid middleware on top of existing operating systems, little has been done to extend the underlying operating systems themselves to enable and facilitate Grid computing, for example by embedding important functionality directly into the operating system kernel.
"For a while now, IBM has had multiple and competing tools for managing AIX and Linux clusters for its supercomputer customers and yet another set of tools that were used for other HPC setups with a slightly more commercial bent to them. But Big Blue has now cleaned house, killing off its closed-source Cluster Systems Management (CSM) tool and tapping its own open source Extreme Cluster Administration Toolkit (known as xCAT) as its replacement."
The Ohio Supercomputer Center provides supercomputing, research and educational resources to a diverse state and national community, including education, academic research, industry and state government. At the Ohio Supercomputer Center, our duty is to empower our clients, partner strategically to develop new research and business opportunities, and lead Ohio's knowledge economy.
openMosix is a Linux kernel extension for single-system image clustering. This kernel extension turns a network of ordinary computers into a supercomputer for Linux applications.
PelicanHPC is a distribution of GNU/Linux that runs as a "live CD" (or it can be put on a USB device, or it can be used as a virtualized OS). If the ISO image file is burnt to a CD, the resulting CD can be used to boot a computer. The computer on which PelicanHPC is booted is referred to as the "frontend node". It is the computer with which the user interacts. Once PelicanHPC is running, a script - "pelican_setup" - may be run. This script configures the frontend node as a netboot server. After this has been done, other computers can boot copies of PelicanHPC over the network. These other computers are referred to as "compute nodes". PelicanHPC configures the cluster made up of the frontend node and the compute nodes so that MPI-based parallel computing may be done.
Last week I moderated a webinar entitled "Optimizing Performance for HPC: Part 2 - Interconnect with InfiniBand." It was a great presentation with a lot of practical information and good questions. If you missed it, it will be available for a few months, so you still have a chance to check it out. As part of the webinar, Vallard Benincosa of IBM mentioned that the speed of light was becoming an issue in network design. In engineering terms, that is referred to as a hard limit.
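To see why the speed of light counts as a hard limit, it helps to put numbers on it. The sketch below is a back-of-the-envelope calculation, not material from the webinar; the ~0.67 velocity factor for signal propagation in optical fiber is an assumed typical value.

```python
# Back-of-the-envelope: propagation delay imposed by the speed of light.
# Illustrative figures only; the 0.67 fiber velocity factor is an
# assumed typical value, not a number from the webinar.

C_VACUUM_M_PER_S = 299_792_458   # speed of light in vacuum
FIBER_FACTOR = 0.67              # signals in optical fiber travel at roughly 2/3 c

def propagation_delay_ns(cable_length_m, velocity_factor=FIBER_FACTOR):
    """One-way propagation delay in nanoseconds over a cable of the given length."""
    return cable_length_m / (C_VACUUM_M_PER_S * velocity_factor) * 1e9

# A 10 m cable run costs ~50 ns one way from physics alone,
# before any switch, NIC, or software latency is added.
print(round(propagation_delay_ns(10)))  # → 50
```

No amount of engineering removes that ~5 ns per meter; it can only be hidden or the cables made shorter, which is why physical layout matters in large cluster designs.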
InfiniBand is a switched fabric communications link primarily used in high-performance computing. Its features include quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices. InfiniBand forms a superset of the Virtual Interface Architecture.
I was doing some work and thought, "Wouldn't it be nice to have my own cluster?" I'm guessing not many people have those types of revelations, and probably fewer who decide they should go ahead and solve the problem. I wanted a cheap, small, easy-to-pack, light, quiet, low-power cluster that I could sit on my desk and not even think about.
Philip, a new supercomputer named after LSU chemistry professor Philip W. West, one of the university's first Boyd Professors (a Boyd Professorship is the highest and most prestigious academic rank LSU can confer on a professor), is a 37-compute-node cluster with a peak performance of 3.5 TFlops, running the Red Hat Enterprise Linux 5 operating system. Each node contains two of the latest quad-core Nehalem Xeon 64-bit processors operating at a core frequency of 2.93 GHz. Philip was delivered to LSU in May 2009 and will be open for general use by LSU users.
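The quoted 3.5 TFlops peak figure can be sanity-checked from the node count and clock speed given above. The sketch assumes 4 double-precision floating-point operations per core per cycle, which is typical for Nehalem-era SSE units; that factor is an assumption, not stated in the announcement.

```python
# Sanity check of Philip's peak performance from the published specs.
nodes = 37
sockets_per_node = 2          # two processors per node
cores_per_socket = 4          # quad-core Nehalem Xeon
ghz = 2.93                    # core frequency
flops_per_cycle = 4           # assumption: SSE double precision on Nehalem

cores = nodes * sockets_per_node * cores_per_socket   # 296 cores total
peak_tflops = cores * ghz * flops_per_cycle / 1000
print(f"{peak_tflops:.2f} TFlops")  # → 3.47 TFlops, matching the quoted 3.5 peak
```

The result, about 3.47 TFlops, rounds to the 3.5 TFlops figure in the announcement, confirming the numbers are internally consistent.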
Ian Miller joined Cray in February 2008 and currently heads up Cray's Productivity Solutions group, with the recently introduced Cray CX1 deskside supercomputer, an Intel Cluster Ready product. Mr. Miller also leads Cray's corporate marketing organization. Prior to joining Cray, he served as Vice President of PolyServe Software at HP, and as Vice President of Worldwide Sales for PolyServe prior to its acquisition by HP. Before joining PolyServe, Mr. Miller was Vice President of Worldwide Sales for IBM High End xSeries Servers, where he worked for both the IBM xSeries and pSeries organizations, with a particular focus on marketing and sales for high-end Intel-based systems. Prior to IBM, he was Vice President of Global Marketing for Sequent Computer Systems, and Vice President Asia Pacific. Miller has also worked for Software AG as Senior Vice President Asia Pacific, and for Unisys in many capacities, ending as General Manager for Asia South. Mr. Miller is a graduate of London University.
Rocks adds a vastly expanded solutions layer (Rocks HPC, Rocks Cloud, Rocks Rolls) and enterprise-class support, which transforms the leading open source cluster distribution into a production-ready cluster operating environment suitable for data centers of all shapes and sizes. Clustercorp also partners with the industry's leading workload management providers to offer Rocks MOAB, Rocks LSF, and Rocks SGE. Turnkey Rocks clusters can be purchased from a long list of reliable hardware partners, including HP, Dell, Cray, Silicon Mechanics, and more.