bookmarks  34

  •  

    Getting the most out of multicore processors in all sorts of products requires bridging the chasm between what the processors can do and what software can exploit, and industry sources say the long-term focus should be on figuring out how to write code for parallel computing. "We don't even know for sure what we should be teaching, but we know we should be changing what we're teaching," says University of California, Berkeley professor David Patterson, a former president of ACM. UC Berkeley and the University of Illinois at Urbana-Champaign will split $20 million from Intel and Microsoft to underwrite Universal Parallel Computing Research Centers over the next five years, with Berkeley's share going toward expanding research already under way at the school's Parallel Computing Laboratory and hiring 50 researchers to focus on the problem of writing software for parallelism. Patterson says Berkeley has started introducing freshmen to parallel computing through classes focusing on the "map-reduce" method, while upperclassmen are grounded in "sticky" parallelism issues such as load balancing and synchronization. Patterson acknowledges that an entirely new programming language may need to be invented to tackle the challenge of parallel computing. Brown University professor Maurice Herlihy says a likelier outcome is that existing languages will evolve parallel programming features--a view endorsed by AMD's Margaret Lewis, who cites the need for interim solutions that amend legacy software written for unicore processors as well as software now under development. Lewis says AMD is trying to infuse parallel coding methods via compilers and code analyzers, noting that with these interim solutions "programmers aren't getting the full benefits of parallelism ... but it runs better in a multicore environment."
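A minimal sketch of the "map-reduce" style the Berkeley freshman classes reportedly focus on. The word-count task and all names here are illustrative, not from the article; the point is that independent map tasks can run in parallel and their results merge in an associative reduce step:

```python
# Minimal map-reduce sketch: count word frequencies in parallel.
# Task and names are illustrative assumptions, not from the article.
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def map_phase(chunk):
    # Each worker counts words in its own chunk, independently.
    return Counter(chunk.split())

def reduce_phase(a, b):
    # Merging partial counts is associative, so merge order doesn't matter.
    return a + b

if __name__ == "__main__":
    chunks = ["the quick brown fox", "the lazy dog", "the fox jumps"]
    with Pool() as pool:
        partials = pool.map(map_phase, chunks)   # parallel map
    totals = reduce(reduce_phase, partials)      # sequential reduce
    print(totals.most_common(3))
```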
    16 years ago by @gwpl
     
      acm_technews
       
       
    •  

      University of California, Berkeley professor of electrical engineering and computer sciences Richard Karp has been named a laureate of the 2008 Kyoto Prize, Japan's equivalent of the Nobel Prize, awarded by the Inamori Foundation. Karp is being recognized for his lifetime achievements in computer theory. A senior research scientist at the International Computer Science Institute in Berkeley, he is considered one of the world's leading computer theorists. Karp's work significantly advanced the theory of NP-completeness, conceived in 1971 by former UC Berkeley math professor Stephen Cook. Karp developed a standard method for classifying combinatorial problems into classes and identifying their level of intractability; combinatorial problems that are NP-complete are the most difficult to solve. "Karp's theory streamlined algorithm design for problem-solving, accelerated algorithm engineering, and brought computational complexity within the scope of scientific research," says the Inamori Foundation. NP-completeness theory has become a cornerstone of modern theoretical computer science, and in the 1980s Cook and Karp each received the ACM A.M. Turing Award, in part for their contributions to the concept. Karp has recently focused on bioinformatics and computational biology, including developing algorithms for constructing various kinds of physical maps of DNA targets and methods for classifying biological samples on the basis of gene expression data.
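For context, the standard textbook definitions behind Karp's framework (these are not quoted from the article): a Karp reduction is a polynomial-time many-one reduction, and a problem is NP-complete when it is in NP and every NP problem Karp-reduces to it:

```latex
% Karp (polynomial-time many-one) reduction:
A \le_p B \;\iff\; \exists f \text{ computable in polynomial time s.t. }
    \forall x:\; x \in A \Leftrightarrow f(x) \in B

% NP-completeness:
L \text{ is NP-complete} \;\iff\;
    L \in \mathrm{NP} \;\wedge\; \forall A \in \mathrm{NP}:\; A \le_p L
```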
      16 years ago by @gwpl
       
        acm_technews
         
         
      •  

        At the International World Wide Web Conference in Beijing, two Google researchers unveiled VisualRank, software they say will advance digital image searching on the Web the way Google's PageRank software advanced Web page searches. VisualRank is an algorithm that blends image-recognition techniques with methods that weigh and rank the images that look most similar. Most image searches rely on cues from the text associated with each image, not on the actual content of the image itself; image analysis remains a largely unsolved problem in computer science, the Google researchers say. "We wanted to incorporate all of the stuff that is happening in computer vision and put it in a Web framework," says Google's Shumeet Baluja, who made the presentation along with Google researcher Yushi Jing. Their paper, "PageRank for Product Image Search," focuses on a subset of the images Google has cataloged. The researchers concentrated on the 2,000 most popular product queries on Google's product search, sorted the top 10 images from both their ranking system and the standard Google Image Search results, and then used a team of 150 Google employees to create a scoring system for image "relevance." The researchers say VisualRank returned 83 percent fewer irrelevant images.
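The summary gives no implementation details, but the general idea it describes, a PageRank-style random walk over a graph whose edge weights are visual-similarity scores, can be sketched in a few lines. The similarity matrix, damping factor, and all names below are illustrative assumptions, not Google's actual system:

```python
import numpy as np

def visual_rank_sketch(similarity, damping=0.85, iters=50):
    """Rank images via a PageRank-style walk on a similarity graph.

    similarity[i][j] is an assumed visual-similarity score between
    images i and j; higher means the images look more alike.
    """
    n = similarity.shape[0]
    # Column-normalize so each column is a probability distribution.
    cols = similarity.sum(axis=0)
    P = similarity / np.where(cols == 0, 1, cols)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (P @ rank)
    return rank

# Toy example: images 0 and 1 look alike; image 2 is an outlier.
S = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
print(visual_rank_sketch(S))  # images 0 and 1 rank highest
```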
        16 years ago by @gwpl
         
          acm_technews
           
           
        •  

          Now that IBM's RoadRunner supercomputer has broken the petaflop barrier, sustaining more than one thousand trillion floating-point operations per second, supercomputer developers say the next step is an exascale system capable of a million trillion calculations per second, a thousand times faster than a petaflop. At the upcoming International Supercomputing Conference in Dresden, Germany, University of Tennessee professor Jack Dongarra will give a presentation on exaflop systems in the year 2019. Dongarra says performance gains are following a predictable path, the first gigaflop system having been built 22 years ago. He predicts exaflop computing within 11 years, by which time every system on the Top500 list will be at least a petaflop. He says the greatest achievement of the RoadRunner system is the programming that allows it to use different processor technologies. To achieve exascale systems, Dongarra says, developers will have to create new programming languages and algorithms that can exploit very high degrees of concurrency to complete calculations quickly. The difficulty of reaching that level of programming, and of changing to new methods, could be the roadblock that prevents exaflop computing from arriving on that timeline, he says.
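Dongarra's dates imply a remarkably steady trend of roughly a thousandfold gain every eleven years. A back-of-the-envelope extrapolation makes this concrete; the fitted formula is my own assumption, anchored only to the gigaflop (~1986), petaflop (2008), and projected exaflop (2019) milestones in the summary:

```python
# Back-of-the-envelope: peak performance grows ~1000x every 11 years.
# Anchors assumed from the article: first gigaflop system ~1986
# (22 years before 2008), first petaflop 2008, projected exaflop 2019.
def projected_flops(year, base_year=1986, base_flops=1e9,
                    factor=1000, period=11):
    return base_flops * factor ** ((year - base_year) / period)

for year in (1986, 1997, 2008, 2019):
    print(f"{year}: ~{projected_flops(year):.0e} FLOP/s")
# 1986: ~1e+09, 1997: ~1e+12, 2008: ~1e+15, 2019: ~1e+18
```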
          16 years ago by @gwpl
           
           
        •  

          An Interview With Bjarne Stroustrup - Dr. Dobb's Journal (03/27/08) Buchanan, James

          C++ creator Bjarne Stroustrup says in an interview that next-generation programmers need a thorough education covering algorithms, data structures, machine architecture, operating systems, and networking. "I think what should give is the idea that four years is enough to produce a well-rounded software developer: Let's aim to make a five- or six-year masters the first degree considered sufficient," he says. Before writing a software program, Stroustrup recommends that a programmer consult with peers and potential users to get a clear perspective on the problem domain, and then attempt to build a streamlined system to test the design's basic ideas. Stroustrup says he was inspired to create a first programming course to address what he perceived as a lack of basic skills among computer science students for designing and implementing quality software, such as organizing code to ensure it is correct. "In my course I heavily emphasize structure, correctness, and define the purpose of the course as 'becoming able to produce code good enough for the use of others,'" he says. Stroustrup thinks programming can be vastly improved, especially by never losing sight of how important it is to produce correct, practical, and well-performing code. He describes a four-year undergraduate computer science curriculum he helped design as fairly classical, with a larger-than-usual software development project component in the first two years of study. Courses would cover hardware and software, discrete math, algorithms and data structures, operating and network systems, and programming languages, while a "programming studio" would expose students to group projects and project management.
          16 years ago by @gwpl
           
           
        •  

          Steve Jobs' presentation at the opening session of Apple's Worldwide Developers Conference included a description of the next version of the Mac OS X operating system, dubbed Snow Leopard, which will be designed for use with parallel processors. Jobs says Apple will find a solution to the problem of efficiently programming the new generation of parallel chips. He says Apple will focus on "foundational features" that will be the basis for a future version of the Mac operating system. At the core of Snow Leopard will be a parallel-programming technology code-named Grand Central. Snow Leopard will also tap the computing power of the graphics processors that now work in tandem with microprocessors in almost all personal and mobile computers. Jobs also described a new processing standard Apple is proposing, called Open Computing Language (OpenCL), which is intended to refocus graphics processors on standard computing functions. "Basically it lets you use graphics processors to do computation," Jobs says. "It's way beyond what Nvidia or anyone else has, and it's really simple."
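To make "use graphics processors to do computation" concrete: in the standard that OpenCL became, a data-parallel kernel is written in a C dialect and launched across many GPU work-items at once. A minimal sketch using the third-party pyopencl bindings (the bindings, the kernel, and the vector-add task are all illustrative assumptions; Apple's announcement predates any published API):

```python
# Illustrative OpenCL sketch: add two vectors on whatever device exists.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()          # pick a GPU (or CPU) device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel body runs once per element, in parallel on the device.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```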
          16 years ago by @gwpl
           
           
        •  

          Cryptography has long been an arms race, with codemakers and codebreakers constantly updating their arsenals, but quantum cryptography could theoretically give codemakers the upper hand. Even the strongest classical encryption can, in principle, be cracked given enough brute-force computing power. Quantum cryptography, however, could make uncrackable codes possible through quantum key distribution (QKD). Modern cryptography relies on digital keys to encrypt data before sending it over a network so the recipient can decrypt it. QKD promises a theoretically uncrackable code, one that can be easily distributed and still be transparent. Moreover, the nature of quantum mechanics ensures that if an eavesdropper tries to intercept or spy on the transmission, both the sender and the receiver will know: any attempt to read the transmission alerts them, allowing them to generate a new key and send it securely. QKD had its first real-world application in Geneva, where quantum cryptography was used in the electronic voting system. Not only did QKD guarantee that the poll was secure, but it also ensured that no votes were lost in transmission, because the uncertainty principle guaranteed that the transmitted data had not been altered. The SECOQC project, which did the work for the voting system, says the goal is to establish network-wide quantum encryption that can work over longer distances and between multiple parties.
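The summary names no specific protocol, but the canonical QKD scheme, BB84, shows where the eavesdropper-detection property comes from: bits are encoded in randomly chosen bases, and measuring in the wrong basis disturbs the outcome. A toy simulation of the classical bookkeeping only (the security itself comes from quantum measurement, which plain code cannot model):

```python
# Toy BB84 key-sifting simulation: classical bookkeeping only.
import random

N = 32
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]   # encoding bases
bob_bases   = [random.choice("+x") for _ in range(N)]   # measuring bases

# Bob's result matches Alice's bit only when the bases agree;
# in real QKD a wrong-basis measurement yields a random outcome.
bob_bits = [bit if ab == bb else random.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases, keep positions where they match.
key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
       if ab == bb]
print(f"sifted key length: {len(key)} of {N}")
# An eavesdropper measuring in random bases corrupts ~25% of the sifted
# bits, which Alice and Bob detect by publicly comparing a sample.
```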
          16 years ago by @gwpl
           
           
        •  

          Who are the best spreaders of information in a social network? The answer may surprise you.
          15 years ago by @gwpl
           
           
        •  

          Non-repudiation is a scheme in which sensitive data sent over the Internet is digitally signed at the source with a signature that can be traced to the user's computer, as a safeguard against fraud. But Len Sassaman of the Catholic University of Leuven warns that making this the default for all traffic on a network would enable authorities to trace the source of any online activity and take away users' anonymity. Worse still, Sassaman and University of Ireland colleague Meredith Patterson say that the One Laptop per Child (OLPC) foundation is unintentionally establishing such a system throughout the Third World by supplying inexperienced users with Internet-ready laptops. Theft of the laptops is discouraged with a security model called Bitfrost, in which each laptop automatically phones an anti-theft server once a day and sends its serial number to obtain an activation key; any machine reported stolen is refused activation. Sassaman and Patterson caution that the security model's use of non-repudiable digital signatures could be exploited by oppressive regimes to identify and silence dissidents. "They may not intend for the signatures to be used for non-repudiation, but it's possible to use them for this purpose," Sassaman says. Although the OLPC laptops are primarily intended for educational purposes, which some people claim would preclude government monitoring, Sassaman says it is unlikely the systems will be used solely by children, and conditions in some developing nations might actually encourage children to act as whistleblowers. Sassaman and Patterson are modifying the Bitfrost security model, using existing cryptographic methods that cannot be employed for non-repudiation, so that the laptops can identify each other without compromising their users' privacy.
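The crux of the privacy concern is that an asymmetric signature can be verified by anyone holding the public key, so it proves to third parties who signed. A hedged sketch with the pyca/cryptography library; everything here (the serial number, the message format) is illustrative, not Bitfrost's actual mechanism:

```python
# Why digital signatures are non-repudiable: anyone with the public key
# can verify, and only the private-key holder could have signed.
# Illustrative sketch only -- not the actual Bitfrost protocol.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

laptop_key = Ed25519PrivateKey.generate()      # held by one laptop
public_key = laptop_key.public_key()           # can be published widely

message = b"serial=XO-12345; request activation lease"  # hypothetical
signature = laptop_key.sign(message)

# Any third party (including an unfriendly authority) can verify that
# this exact laptop produced the message -- the laptop cannot deny it.
try:
    public_key.verify(signature, message)
    print("valid: message provably came from this laptop's key")
except InvalidSignature:
    print("signature invalid")
```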
          16 years ago by @gwpl
           
            acm_technews
             
             
