The purpose of this list is to provide a ranking of the most energy-efficient supercomputers in the world and serve as a complementary view to the TOP500 List.
The Ohio Supercomputer Center provides supercomputing, research and educational resources to a diverse state and national community, including education, academic research, industry and state government. At the Ohio Supercomputer Center, our duty is to empower our clients, partner strategically to develop new research and business opportunities, and lead Ohio's knowledge economy.
This one-day symposium will explore the use of GPUs and Cell processors for scientific and high performance computing. SHARCNET has deployed high-performance clusters containing both architectures, and this symposium will give researchers the chance to learn about these new technologies from keynote speakers who are at the forefront of research in this field.
The aim of the DIET project is to develop a set of tools to build computational servers. Huge problems can now be computed over the Internet thanks to Grid Computing Environments like Globus or Legion. Because most current applications are numerical, the use of libraries like BLAS, LAPACK, ScaLAPACK or PETSc is mandatory. The integration of such libraries into high-level applications written in languages like Fortran or C is far from easy. Moreover, the computational power and memory these applications need may not be available on every workstation. Thus, RPC seems to be a good candidate for building Problem Solving Environments on the Grid. Several tools following this approach exist, like Netsolve, NINF, NEOS, or RCS.
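The RPC model behind these Problem Solving Environments can be sketched with Python's standard-library XML-RPC: a computational server wraps a numerical routine, and a thin client submits problems to it remotely. This is a minimal illustration of the pattern only; the function and names here are hypothetical, not the actual DIET or Netsolve API.

```python
# Sketch of the RPC model used by Grid Problem Solving Environments:
# a computational server exposes a numerical routine, and clients call
# it remotely without linking the underlying library themselves.
# Names are illustrative, not the DIET/Netsolve API.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def solve_dot(x, y):
    """Stand-in for a library routine (think: a BLAS dot product)."""
    return sum(a * b for a, b in zip(x, y))

# The server plays the role of the computational server on the Grid.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
port = server.server_address[1]
server.register_function(solve_dot, "solve_dot")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client sees only a remote procedure, not the library behind it.
client = ServerProxy(f"http://localhost:{port}")
result = client.solve_dot([1, 2, 3], [4, 5, 6])
print(result)  # 1*4 + 2*5 + 3*6 = 32
```

The point of the design is the same as in the text: the numerical library lives on the server, so the workstation issuing the call needs neither the library nor the memory to run it.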
Building and Promoting a Linux-based Operating System to Support Virtual Organizations for Next Generation Grids (2006-2010). The emergence of Grids enables the sharing of a wide range of resources to solve large-scale computational and data intensive problems in science, engineering and commerce. While much has been done to build Grid middleware on top of existing operating systems, little has been done to extend the underlying operating systems to enable and facilitate Grid computing, for example by embedding important functionalities directly into the operating system kernel.
"For a while now, IBM has had multiple and competing tools for managing AIX and Linux clusters for its supercomputer customers and yet another set of tools that were used for other HPC setups with a slightly more commercial bent to them. But Big Blue has now cleaned house, killing off its closed-source Cluster Systems Management (CSM) tool and tapping its own open source Extreme Cluster Administration Toolkit (known as xCAT) as its replacement."
As computational scientists are confronted with increasingly massive datasets from supercomputing simulations and experiments, one of the biggest challenges is having the right tools to gain scientific insight from the data. One common method for gaining insight is to use scientific visualization, which transforms abstract data into more readily comprehensible images using advanced computer software and computer graphics. But the ever-growing size of scientific datasets presents a significant challenge to modern scientific visualization tools. As a result, there is a great deal of motivation to explore use of large, parallel resources, such as those at the U.S. Department of Energy's (DOE) supercomputing centers, to take advantage of their vast computational processing power, I/O bandwidth and large memory footprint.
"This project," said Sergiu Sanielevici, PSC director of scientific applications and user support, who also leads user support and services for the TeraGrid, "exemplifies how powerful systems like Pople can open doors to data-mining and data-centric research in fields not traditionally associated with HPC, such as the social sciences, and make it possible to get answers that would otherwise be impractical or impossible." PSC supported this project through the NSF TeraGrid program, which allocates large-scale computing resources free to researchers at U.S. universities on a peer-review proposal basis.
Personal supercomputers that lash together a stack of graphics processing units and can sit on a desktop are becoming popular with researchers. These machines can be used to run calculations by the desk instead of waiting for time on one of the national supercomputers.
Before specialized graphics-processing chips existed, pioneers in the field of visualization used multicore supercomputers to realize data in three dimensions. Today, however, the speed at which supercomputers can process data is rapidly outstripping the speed at which they can input and output that data. Graphics-processing clusters are becoming obsolete.
EUCALYPTUS - Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems - is an open-source software infrastructure for implementing "cloud computing" on clusters. The current interface to EUCALYPTUS is compatible with Amazon's EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. EUCALYPTUS is implemented using commonly available Linux tools and basic Web-service technologies, making it easy to install and maintain.
Jay Boisseau, widely credited with bringing the University of Texas from Nowheresville to near the front of the pack in the world of supercomputing, "is a force of nature," says UT Earth scientist Omar Ghattas.
One of the ways TeraGrid benefits researchers is by providing centralized accounting services, making it easier for them to use different sets of computational resources. This requires synchronized account and allocation data among the TeraGrid resource providers.
NVIDIA Corporation, the world leader in visual computing technologies and the inventor of the GPU, today announced that the Korea Institute of Science and Technology Information (KISTI) Supercomputing Center has selected NVIDIA Quadro® FX 5600 graphics cards.
openMosix is a Linux kernel extension for single-system image clustering. This kernel extension turns a network of ordinary computers into a supercomputer for Linux applications.
Last week I moderated a webinar entitled Optimizing Performance for HPC: Part 2 - Interconnect with InfiniBand. It was a great presentation with a lot of practical information and good questions. If you missed it, it will be available for a few months, so you still have a chance to check it out. As part of the webinar, Vallard Benincosa of IBM mentioned that the speed of light was becoming an issue in network design. In engineering terms, that is referred to as a hard limit.
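To see why the speed of light is a hard limit, it helps to run the numbers. Light covers roughly 30 cm per nanosecond in vacuum, and signals in copper or fiber typically propagate at only about two-thirds of that. The figures below are approximations chosen for illustration, not vendor specifications.

```python
# Back-of-the-envelope view of the speed of light as a hard limit in
# interconnect design. Figures are approximate: signals in cable or
# fiber travel at roughly 2/3 the vacuum speed of light.
C = 3.0e8                 # speed of light in vacuum, m/s
SIGNAL_FRACTION = 0.66    # typical propagation factor in cable/fiber

def propagation_delay_ns(cable_length_m):
    """Minimum one-way wire delay, in nanoseconds, for a given cable."""
    return cable_length_m / (C * SIGNAL_FRACTION) * 1e9

delay_10m = propagation_delay_ns(10.0)
print(f"10 m cable: {delay_10m:.1f} ns one-way minimum")
```

A 10 m cable therefore carries an irreducible delay of roughly 50 ns each way, which is on the same order as the latency of the switch silicon itself. No amount of engineering can buy that time back, which is exactly what makes it a hard limit.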
PelicanHPC is a distribution of GNU/Linux that runs as a "live CD" (or it can be put on a USB device, or it can be used as a virtualized OS). If the ISO image file is burnt to a CD, the resulting CD can be used to boot a computer. The computer on which PelicanHPC is booted is referred to as the "frontend node". It is the computer with which the user interacts. Once PelicanHPC is running, a script - "pelican_setup" - may be run. This script configures the frontend node as a netboot server. After this has been done, other computers can boot copies of PelicanHPC over the network. These other computers are referred to as "compute nodes". PelicanHPC configures the cluster made up of the frontend node and the compute nodes so that MPI-based parallel computing may be done.
Computerworld - High-performance computing (HPC) has almost always required a supercomputer — one of those room-size monoliths you find at government research labs and universities. And while those systems aren’t going away, some of the applications traditionally handled by the biggest of Big Iron are heading to the desktop. One reason is that processing that took an hour on a standard PC about eight years ago now takes six seconds, according to Ed Martin, a manager in the automotive unit at computer-aided design software maker Autodesk Inc. Monumental improvements in desktop processing power, graphics processing unit (GPU) performance, network bandwidth and solid-state drive speed combined with 64-bit throughput have made the desktop increasingly viable for large-scale computing projects.
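The hour-to-six-seconds figure quoted above is worth spelling out as a speedup factor:

```python
# Martin's figure: a job that took an hour on a standard PC about
# eight years ago now finishes in six seconds on the desktop.
old_seconds = 60 * 60   # one hour
new_seconds = 6
speedup = old_seconds / new_seconds
print(speedup)  # 600.0
```

That is a 600x improvement, which is the scale of change that makes desktop machines credible for workloads that once demanded Big Iron.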
FUSION1200® is a scalable 8- to 32-processor SMP system for the High Performance Technical Computing (HPTC) market. Available in both deskside and 19-inch rack-mount designs, the FUSION1200® is a scalable alternative to traditional RISC-based servers. The FUSION1200® Series is an enterprise-class system for an IT department looking to leverage the benefits of Intel® standards in a data center. With the flexibility to grow from 8 to 32 Intel® Xeon® processors (quad or dual core), the FUSION1200® Series scales beyond conventional Intel® based platforms while delivering superior price-performance compared to traditional high-end servers. The SMP operational model of the FUSION1200® provides reduced management costs compared to clusters. This Intel® Xeon® processor based server, supporting Intel® Extended Memory 64 Technology and the ScaleMP® vSMP architecture, is the ideal platform for clients with applications that require a high processor count and large shared memory.
Engineers, scientists, researchers and other workstation users are continually challenged with more complex problems and shorter deadlines in which to solve them. The Cray CX1-iWS™ solution addresses both of these issues by allowing larger models and simulations to be worked on within the workstation environment, eliminating the need to move the problem to an external shared cluster, and by being easy to set up.
<blockquote>Before he even took the podium, Ed Seidel was one of the buzz makers at the TeraGrid '09 conference. The day before his keynote, it was announced that he was stepping in as acting assistant director of the National Science Foundation's math and physical sciences directorate. For his talk at the conference, however, Seidel focused on the issues and efforts within his home at NSF, the Office of Cyberinfrastructure.</blockquote>
<blockquote>Paul Avery, a recognized leader in advanced grid and networking for science, delivered the first keynote address at the recent TeraGrid '09 conference in Arlington, Va. A professor of physics at the University of Florida, Avery is co-principal investigator and founding member of the Open Science Grid (OSG). Avery talked about the history of OSG, some of the projects that leverage its resources, and OSG's relationship with TeraGrid.</blockquote>
Everyone knows applications drive the HPC boat. It is one thing to run benchmarks and burn-in programs, but when it is time for production work, applications take over. Fortunately, there are many applications that can take advantage of clusters. These applications can be divided into three oversimplified categories.
Ramdisks - Now We Are Talking Hyperspace! Ramdisks can offer a level of performance that is simply amazing. More than just a tool for benchmarking, ramdisks are now at the heart of new devices built for ultra-performance.
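On Linux, the easiest way to get a feel for ramdisk performance is /dev/shm, a tmpfs (RAM-backed) filesystem mounted by default on most distributions. The sketch below times the same write there and in the current directory; it assumes a Linux system with /dev/shm, and actual numbers will vary widely by machine.

```python
# Quick taste of ramdisk speed on Linux: /dev/shm is tmpfs, so writes
# land in memory rather than on disk. We compare against the current
# directory instead of /tmp, because /tmp is itself tmpfs on many
# systems. Assumes Linux; timings vary by machine.
import os
import tempfile
import time

def time_write(directory, size=64 * 1024 * 1024):
    """Write `size` bytes to a temp file in `directory`; return seconds."""
    payload = b"\0" * size
    with tempfile.NamedTemporaryFile(dir=directory) as f:
        start = time.perf_counter()
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force past the page cache for a fair test
        return time.perf_counter() - start

ram = time_write("/dev/shm")
disk = time_write(".")
print(f"ramdisk: {ram:.3f}s  disk: {disk:.3f}s")
```

Without the fsync, the on-disk write can look deceptively fast because it only reaches the page cache; forcing it out is what exposes the gap that ramdisks (and the devices built on them) exploit.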