bookmarks (34)

  •  Optimizing the capabilities of multicore processors in all sorts of products requires bridging the gap between what the processors can do and what software is written to exploit, and industry sources say the long-term focus should be on figuring out how to write code for parallel computing. "We don't even know for sure what we should be teaching, but we know we should be changing what we're teaching," says University of California, Berkeley professor David Patterson, a former president of ACM. UC Berkeley and the University of Illinois at Urbana-Champaign will split $20 million from Intel and Microsoft to underwrite Universal Parallel Computing Research Centers over the next five years, with Berkeley's share going toward extending research already done by the school's Parallel Computing Laboratory and hiring 50 researchers to focus on the problem of writing software for parallelism. Patterson says Berkeley has started introducing freshmen to parallel computing through classes focusing on the "map-reduce" method, while upperclassmen are given a grounding in "sticky" parallelism issues such as load balancing and synchronization. Patterson acknowledges that an entirely new programming language may need to be invented to tackle the challenge of parallel computing. Brown University professor Maurice Herlihy says a more likely outcome is that existing languages will evolve parallel programming features--a view endorsed by AMD's Margaret Lewis, who cites the need for interim solutions that adapt legacy software written for unicore processors as well as software still under development. Lewis says AMD is trying to instill parallel coding methods via compilers and code analyzers, noting that with these interim solutions "programmers aren't getting the full benefits of parallelism ... but it runs better in a multicore environment."

     16 years ago by @gwpl
     acm_technews
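
     A minimal sketch of the "map-reduce" pattern Patterson's freshman classes reportedly cover, assuming a toy word-count task (the function names and data are mine, not from the article). The map step runs independently over each document, which is what makes the pattern straightforward to spread across cores:

     ```python
     from collections import defaultdict

     def map_phase(documents):
         """Map: emit (word, 1) pairs independently per document,
         so the work can be distributed across cores or machines."""
         for doc in documents:
             for word in doc.split():
                 yield word, 1

     def reduce_phase(pairs):
         """Reduce: combine all counts that share the same key."""
         counts = defaultdict(int)
         for word, n in pairs:
             counts[word] += n
         return dict(counts)

     docs = ["the cat sat", "the dog sat"]
     print(reduce_phase(map_phase(docs)))
     # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
     ```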
       
       
  •  University of California, Berkeley professor of electrical engineering and computer sciences Richard Karp has been named a laureate of the 2008 Kyoto Prize, Japan's equivalent of the Nobel Prize, awarded by the Inamori Foundation. Karp is being recognized for his lifetime achievements in computer theory. A senior research scientist at the International Computer Science Institute in Berkeley, he is considered one of the world's leading computer theorists. Karp's work significantly advanced the theory of NP-completeness, conceived in 1971 by former UC Berkeley math professor Stephen Cook. Karp developed a standard method for classifying combinatorial problems and identifying their level of intractability; combinatorial problems that are NP-complete are the most difficult to solve. "Karp's theory streamlined algorithm design for problem-solving, accelerated algorithm engineering, and brought computational complexity within the scope of scientific research," says the Inamori Foundation. NP-completeness theory has become a cornerstone of modern theoretical computer science, and in the 1980s Cook and Karp each received the ACM A.M. Turing Award for their contributions to the concept of NP-completeness. Karp has recently focused on bioinformatics and computational biology, including the development of algorithms for constructing various kinds of physical maps of DNA targets and methods for classifying biological samples on the basis of gene expression data.

     16 years ago by @gwpl
     acm_technews
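
     As a rough illustration of the kind of polynomial-time reduction Karp's method standardized (a textbook example of my own choosing, not taken from the article): a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so one NP-complete question transforms into another:

     ```python
     def independent_set_to_vertex_cover(n_vertices, edges, k):
         """Karp-style reduction: (G, k) is in INDEPENDENT-SET
         iff (G, n - k) is in VERTEX-COVER, because the complement
         of an independent set is a vertex cover."""
         return n_vertices, edges, n_vertices - k

     # Triangle graph: the largest independent set has size 1, so the
     # reduced instance asks for a vertex cover of size 2 (which exists).
     print(independent_set_to_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], 1))
     ```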
         
         
  •  At the International World Wide Web Conference in Beijing, two Google researchers unveiled VisualRank, software they say will advance digital image search on the Web the way Google's PageRank software advanced Web page search. VisualRank is an algorithm that blends image-recognition techniques with methods that weigh and rank the images that look most similar. Most image searches rely on cues from the text associated with each image, not on the actual content of the image itself; image analysis remains a largely unsolved problem in computer science, the Google researchers say. "We wanted to incorporate all of the stuff that is happening in computer vision and put it in a Web framework," says Google's Shumeet Baluja, who made the presentation along with Google researcher Yushi Jing. Their paper, "PageRank for Product Image Search," focuses on a subset of the images Google has cataloged. The researchers concentrated on the 2,000 most popular product queries on Google's product search and sorted the top 10 images from both their ranking system and the standard Google Image Search results, using a team of 150 Google employees to create a scoring system for image "relevance." The researchers say VisualRank returned 83 percent fewer irrelevant images than the standard results.

     16 years ago by @gwpl
     acm_technews
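
     The summary doesn't give the algorithm's details, but the idea it describes (a PageRank-style random walk over a graph whose edge weights are visual-similarity scores) might be sketched like this; the similarity matrix and parameters are invented for illustration:

     ```python
     import numpy as np

     def visual_rank(similarity, damping=0.85, iters=50):
         """Power iteration over a visual-similarity graph: images similar
         to many highly ranked images rank highly, analogous to PageRank
         over hyperlinks."""
         # Column-normalize so each image distributes its score
         # among its visual neighbors.
         S = similarity / similarity.sum(axis=0, keepdims=True)
         n = S.shape[0]
         rank = np.full(n, 1.0 / n)
         for _ in range(iters):
             rank = (1 - damping) / n + damping * S @ rank
         return rank

     # Toy 3-image example: images 0 and 1 look alike, image 2 is an outlier.
     sim = np.array([[0.0, 0.9, 0.1],
                     [0.9, 0.0, 0.1],
                     [0.1, 0.1, 0.0]])
     print(visual_rank(sim))  # images 0 and 1 outrank the outlier
     ```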
           
           
  •  Now that IBM's RoadRunner supercomputer has broken the petaflop barrier, sustaining more than one thousand trillion floating-point operations per second, supercomputer developers say the next step is an exascale system capable of a million trillion calculations per second, a thousand times faster than a petaflop. At the upcoming International Supercomputing Conference in Dresden, Germany, University of Tennessee professor Jack Dongarra will give a presentation on exaflop systems in the year 2019. Dongarra says performance gains are following a predictable path, with the first gigaflop system built 22 years ago; he expects exaflop computing within 11 years, by which time every system on the Top500 list will be at least a petaflop. He says the greatest achievement of the RoadRunner system is the programming that lets it use different processor technologies. To reach exascale, Dongarra says developers will have to create new programming languages and algorithms that can run at extreme degrees of concurrency to complete calculations quickly. The difficulty of reaching that level of programming, and of changing to new methods, could be the roadblock that keeps exaflop computing from arriving on that timeline, he says.

     16 years ago by @gwpl
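
     A quick check of the arithmetic implied by the summary (milestones and dates from the text; the extrapolation step is spelled out by me):

     ```python
     GIGA, PETA, EXA = 1e9, 1e15, 1e18  # sustained FLOPS milestones

     assert EXA / PETA == 1_000       # exaflop: "a thousand times faster" than a petaflop
     assert PETA / GIGA == 1_000_000  # two thousand-fold steps since the first gigaflop system

     # The first gigaflop system came ~22 years before RoadRunner (2008):
     # two thousand-fold steps in 22 years is one step per 11 years, which
     # is why Dongarra projects exaflop computing around 2008 + 11.
     print(2008 + 11)  # 2019
     ```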
           
           