bookmarks  5

  •  Now that IBM's RoadRunner supercomputer has broken the petaflop barrier, sustaining more than one thousand trillion floating-point operations per second, supercomputer developers say the next step is an exascale system capable of a million trillion calculations per second, a thousand times faster than a petaflop machine. At the upcoming International Supercomputing Conference in Dresden, Germany, University of Tennessee professor Jack Dongarra will give a presentation on exaflop systems in the year 2019. Dongarra says performance gains are following a predictable path: the first gigaflop system was built 22 years ago, exaflop computing should arrive in 11 years, and by then every system on the Top500 list will be at least a petaflop machine. He says the greatest achievement of the RoadRunner system is the programming that allows it to utilize different processor technologies. To achieve exascale systems, Dongarra says developers will have to create new programming languages and algorithms that operate at high degrees of concurrency to complete calculations quickly. The difficulty of reaching that level of programming, and of changing to new methods, could be the roadblock that prevents exaflop computing from arriving on a similar timeline, he says. (A rough extrapolation of that trajectory is sketched below.)
    16 years ago by @gwpl
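
    As a sanity check on that timeline, here is a small back-of-the-envelope sketch (not from the article) that extrapolates the growth rate implied by the gigaflop-to-petaflop jump:

    ```python
    import math

    # Milestones from the summary above: first sustained gigaflop system
    # roughly 22 years before RoadRunner's 2008 petaflop run.
    gigaflop, petaflop, exaflop = 1e9, 1e15, 1e18
    years_giga_to_peta = 22.0

    # Growth rate implied by the giga -> peta jump (a factor of 1e6 in 22 years).
    factor_per_year = (petaflop / gigaflop) ** (1.0 / years_giga_to_peta)
    print(f"implied growth: ~{factor_per_year:.2f}x per year")  # ~1.87x

    # Years needed for the further 1000x gain from petaflop to exaflop.
    years_peta_to_exa = math.log(exaflop / petaflop) / math.log(factor_per_year)
    print(f"petaflop -> exaflop in ~{years_peta_to_exa:.0f} years")  # ~11 years
    ```

    The same arithmetic, counted from 2008, lands the exaflop milestone around 2019, matching the talk's title.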
  •  Steve Jobs' presentation at the opening session of Apple's Worldwide Developers Conference included a description of the next version of the Mac OS X operating system, dubbed Snow Leopard, which will be designed for use with parallel processors. Jobs says Apple will find a solution to the problem of efficiently programming the new generation of parallel chips. He says Apple will focus on "foundational features" that will be the basis for a future version of the Mac operating system. At the core of Snow Leopard will be a parallel-programming technology code-named Grand Central. Snow Leopard will utilize the computing power inherent in the graphics processors that now work in tandem with microprocessors in almost all personal and mobile computers. Jobs also described a new processing standard Apple is proposing, called Open Computing Language (OpenCL), which is intended to refocus graphics processors on standard computing functions. "Basically it lets you use graphics processors to do computation," Jobs says. "It's way beyond what Nvidia or anyone else has, and it's really simple." (A conceptual sketch of the task-dispatch idea appears below.)
    16 years ago by @gwpl
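
    Grand Central's API was not shown in the keynote, so the following is only a conceptual sketch, in Python rather than any Apple technology, of the idea such a dispatcher targets: expressing work as many small independent tasks handed to a runtime-managed pool instead of hand-managed threads. The render_tile function is a hypothetical stand-in workload.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import math

    def render_tile(tile_id: int) -> float:
        # Stand-in for an independent chunk of work (e.g. one image tile).
        return sum(math.sin(i) for i in range(tile_id * 10_000, (tile_id + 1) * 10_000))

    if __name__ == "__main__":
        # The executor plays the role of the central dispatcher: it spreads
        # the queued tasks across however many cores the machine has.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(render_tile, range(64)))
        print(f"processed {len(results)} tiles")
    ```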
  •  Cryptography has long been an arms race, with codemakers and codebreakers constantly updating their arsenals, but quantum cryptography could give codemakers a decisive upper hand. Even the strongest classical encryption can, in principle, be cracked with enough brute-force computing power; quantum key distribution (QKD), by contrast, promises theoretically uncrackable keys. Modern cryptography relies on digital keys to encrypt data before sending it over a network so the recipient can decrypt it, and QKD offers a key that can be easily distributed and still remain secure. Moreover, the nature of quantum mechanics guarantees that if an eavesdropper tries to intercept or observe the transmission, both the sender and the receiver will know, allowing them to discard the compromised key and securely generate a new one. QKD had its first real-world application in Geneva, where quantum cryptography was used in the electronic voting system. Not only did QKD guarantee that the poll was secure, it also ensured that no votes were lost in transmission, because the absence of quantum disturbances established that the transmitted data was unchanged. The SECOQC project, which did the work for the voting system, says the goal is to establish network-wide quantum encryption that can work over longer distances between multiple parties. (A toy simulation of eavesdropper detection is sketched below.)
    16 years ago by @gwpl
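
    The article does not name the protocol used in Geneva; the classic QKD scheme is BB84, and the toy simulation below (an illustration, not SECOQC's implementation) shows the detection property described above: an eavesdropper who measures photons in randomly chosen bases corrupts roughly a quarter of the sifted key, which the parties detect by comparing a sample.

    ```python
    import random

    def bb84_error_rate(n_photons: int, eavesdrop: bool) -> float:
        errors = kept = 0
        for _ in range(n_photons):
            bit = random.randint(0, 1)
            alice_basis = random.randint(0, 1)
            photon_bit, photon_basis = bit, alice_basis
            if eavesdrop:
                eve_basis = random.randint(0, 1)
                # Measuring in the wrong basis yields a random result and
                # re-emits the photon in Eve's basis.
                if eve_basis != photon_basis:
                    photon_bit = random.randint(0, 1)
                photon_basis = eve_basis
            bob_basis = random.randint(0, 1)
            bob_bit = photon_bit if bob_basis == photon_basis else random.randint(0, 1)
            if bob_basis == alice_basis:  # positions kept after public sifting
                kept += 1
                errors += (bob_bit != bit)
        return errors / kept

    print(f"no spy:   {bb84_error_rate(100_000, False):.1%} errors")  # ~0%
    print(f"with spy: {bb84_error_rate(100_000, True):.1%} errors")   # ~25%
    ```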
  •  Three competing teams of computer researchers are working on new types of software for multicore processors. Stanford University and six computer and chip makers (Sun Microsystems, Advanced Micro Devices, Nvidia, IBM, Hewlett-Packard, and Intel) are creating the Pervasive Parallelism Lab. Previously, Microsoft and Intel helped finance new labs at the University of California, Berkeley and the University of Illinois at Urbana-Champaign. The research efforts respond to a growing awareness that the software industry is not ready for the coming availability of microprocessors with many cores on a single chip. Computer and chip manufacturers are concerned that if software cannot keep up with hardware improvements, consumers will see no need to upgrade their systems. Current operating system software can work with the most advanced server microprocessors and video game console processors, which have up to eight cores, but software engineers say most applications are not designed to make efficient use of the dozens or hundreds of cores that will be available in future computers. The university efforts will share some approaches but will try different experiments, programming languages, and hardware innovations, and will also rethink operating systems and compilers. The Berkeley researchers have divided parallel computing problems into seven classes, each approached in a different way. The Stanford researchers say they are looking for new ways to hide the complexity of parallel computing from programmers, and will use virtual worlds and robotic vehicles to test their efforts. (Amdahl's law, sketched below, quantifies why this matters.)
    16 years ago by @gwpl
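
    The gap the labs are attacking can be quantified with Amdahl's law, the standard bound on parallel speedup. The parallel fractions below are hypothetical, but the formula shows why an application with even a small serial portion wastes most of dozens or hundreds of cores:

    ```python
    # Amdahl's law: speedup on n cores given the fraction p of the
    # program that can actually run in parallel.
    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    for cores in (8, 64, 256):
        s95 = amdahl_speedup(0.95, cores)  # 95%-parallelizable program
        s99 = amdahl_speedup(0.99, cores)  # 99%-parallelizable program
        print(f"{cores:>3} cores: 95%-parallel -> {s95:5.1f}x, "
              f"99%-parallel -> {s99:5.1f}x")
    ```

    Even at 95% parallelism, 256 cores deliver under a 19x speedup, which is the kind of ceiling the new programming models aim to lift.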
  •  The Defense Advanced Research Projects Agency has issued a call for research proposals to design compilers that can dynamically optimize programs for specific environments. As the Defense Department runs programs across a wider range of systems, it faces the lengthy, manual task of tuning programs for each environment, a process DARPA wants to automate. "The goal of DARPA's envisioned Architecture-Aware Compiler Environment (AACE) Program is to develop computationally efficient compilers that incorporate learning and reasoning methods to drive compiler optimizations for a broad spectrum of computing system configurations," says DARPA's Broad Agency Announcement (BAA). The compilers are to handle programs written in the C and Fortran programming languages, but the BAA encourages work on languages that support techniques for the parallelization of programs. The quality of the proposals will determine how much DARPA spends on the project, which will run at least through 2011. Proposals are due by June 2. (The benchmark-and-select idea behind such tuning is sketched below.)
    16 years ago by @gwpl
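
    The BAA excerpt does not describe AACE's internals; the sketch below illustrates only the general empirical-tuning idea behind architecture-aware compilation, i.e. benchmarking candidate configurations of the same computation on the machine at hand and keeping the fastest. The kernel and the block sizes are hypothetical stand-ins, not anything specified by DARPA.

    ```python
    import timeit

    def summed(block: int, n: int = 1_000_000) -> float:
        # Same computation, different traversal granularity ("tiling").
        total = 0.0
        for start in range(0, n, block):
            total += sum(range(start, min(start + block, n)))
        return total

    # Time each candidate configuration on this machine and keep the winner,
    # the decision an architecture-aware compiler would make automatically.
    candidates = (256, 4_096, 65_536)
    timings = {b: timeit.timeit(lambda b=b: summed(b), number=3) for b in candidates}
    best = min(timings, key=timings.get)
    print(f"selected block size {best} ({timings[best]:.3f}s for 3 runs)")
    ```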
