bookmarks  34

  •  

    The MoGo artificial intelligence engine defeated professional 5th-dan player Catalin Taranu in a 9x9 game of Go during the Go Tournament in Paris in late March. The victory, the first officially sanctioned "non-blitz" victory for a machine over a Go master, is considered a significant achievement because Go is modeled more closely on human thought than chess and its possible board configurations exceed the number of particles in the universe (a quick sanity check of that claim follows this entry). Taranu says the system's performance was close to dan level. The computer did lose to Taranu on a 19x19 board with a nine-stone handicap. The French National Institute for Research in Computer Science and Control (INRIA) developed the artificial intelligence engine. "The software used in this victory--the result of a collaboration between INRIA, the CNRS, LRI, and CMAP--is based on innovative technologies that can be used in numerous different areas, particularly in the conservation of resources which is such a vital issue when it comes to tackling environmental problems," says INRIA researcher Olivier Teytaud, who led the MoGo team.
    16 years ago by @gwpl
     
     
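    That combinatorics claim is easy to sanity-check. A back-of-the-envelope sketch in Python: each of the 361 points on a 19x19 board can be empty, black, or white, giving 3^361 raw configurations (a loose upper bound, since most of them are illegal positions), against the commonly cited rough estimate of 10^80 particles in the observable universe.

      # Loose upper bound on 19x19 Go configurations: 361 points, each
      # empty, black, or white. Most are illegal positions, so the true
      # count of legal positions is smaller (~10^170), but the
      # comparison with the particle estimate already holds.
      import math

      raw_configurations = 3 ** 361
      particles_in_universe = 10 ** 80   # commonly cited rough estimate

      print(f"3^361 ~ 10^{math.log10(raw_configurations):.0f}")   # ~ 10^172
      print(raw_configurations > particles_in_universe)           # True
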
  •  

    The European Union-funded RobotCub project will send an iCub robot to six European research labs, where researchers will train iCub to act independently by learning from its own experiences. The project at Imperial College London will examine how "mirror neurons," which fire in humans to trigger memories of previous experiences when humans are trying to understand the physical actions of others, can be translated into a digital application. The team at UPMC in Paris will explore the dynamics needed to achieve full-body control for iCub, and the researchers at TUM Munich will work on developing iCub's manipulation skills. A project team at the University of Lyons will explore internal simulation techniques, which occur in the brain when planning actions or trying to understand the actions of others. In Turkey, a team at METU in Ankara will focus on language acquisition and iCub's ability to link objects with verbal utterances. The iCub robots are about the size of three-year-old children and are equipped with highly dexterous hands and fully articulated heads and eyes. The robots have hearing and touch capabilities and are designed to be able to crawl and sit up. Researchers expect to enable iCub to learn by doing, including tracking objects visually or by sound, and navigating based on landmarks and a sense of its own position.
    16 years ago by @gwpl
     
      acm_technews
       
       
  •  

    Cryptography has long been an arms race, with codemakers and codebreakers constantly updating their arsenals, but quantum cryptography could theoretically give codemakers the upper hand for good. Even the strongest classical encryption can, in principle, be cracked with enough brute-force computing power, whereas quantum key distribution (QKD) could make theoretically uncrackable codes possible. Modern cryptography relies on digital keys to encrypt data before sending it over a network so it can be decrypted by the recipient. QKD promises a key that is theoretically uncrackable yet can still be distributed easily. Moreover, the nature of quantum mechanics ensures that if an eavesdropper tries to intercept or observe the transmission, both the sender and the receiver are alerted and can generate a new key and send it securely (a minimal simulation of this detection idea follows this entry). QKD had its first real-world application in Geneva, where quantum cryptography was used in the electronic voting system. Not only did QKD guarantee that the poll was secure, it also ensured that no votes were lost in transmission, because the same quantum effects that expose eavesdroppers confirmed that the transmitted data arrived unchanged. The SECOQC project, which did the work for the voting system, says the goal is to establish network-wide quantum encryption that can work over longer distances between multiple parties.
    16 years ago by @gwpl
       
       
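    To make the detection mechanism concrete, here is a minimal BB84-style simulation in Python. It illustrates the general QKD idea, not the SECOQC system: the sender encodes random bits in random bases, the receiver measures in random bases, both keep only the positions where their bases matched, and an eavesdropper who must measure and resend each qubit introduces roughly 25% errors into that sifted key.

      # Minimal BB84-style sketch (illustrative, not the SECOQC system).
      # An intercept-and-resend eavesdropper corrupts ~25% of the sifted
      # key, so comparing a sample of bits exposes her presence.
      import random

      N = 20000
      alice_bits  = [random.randint(0, 1) for _ in range(N)]
      alice_bases = [random.randint(0, 1) for _ in range(N)]
      eve_bases   = [random.randint(0, 1) for _ in range(N)]
      bob_bases   = [random.randint(0, 1) for _ in range(N)]

      bob_bits = []
      for bit, a, e, b in zip(alice_bits, alice_bases, eve_bases, bob_bases):
          # Eve measures in a random basis; a wrong basis randomizes her
          # outcome, and she resends what she saw in her own basis.
          seen = bit if e == a else random.randint(0, 1)
          # Bob measures the resent qubit; a basis mismatch with Eve
          # randomizes his outcome too.
          bob_bits.append(seen if b == e else random.randint(0, 1))

      # Sifting: keep only positions where Alice's and Bob's bases agree.
      sifted = [(x, y) for x, y, a, b in
                zip(alice_bits, bob_bits, alice_bases, bob_bases) if a == b]
      error_rate = sum(x != y for x, y in sifted) / len(sifted)
      print(f"errors in sifted key: {error_rate:.1%}")  # ~25%; ~0% without Eve
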
  •  

    A European FP6 project on a topic similar to mTeam; it does not cover mobile devices, but has a strong focus on knowledge management for collaboration.
    16 years ago by @adamw
       
       
  •  

    According to a recent survey from Merrill Lynch, 16% of the Baby Boomer workforce is looking for part-time work, and 42% will only take jobs that allow them time off for leisure. Similar findings across all demographics are forcing companies to re-evaluate the flexibility options they offer employees, especially as the so-called war for talent intensifies. While organizations recognize that inflexible work arrangements are a primary reason top talent leaves, flexible arrangements can be difficult to put into practice. As a guide, the article reviews flexible work arrangements at six different companies. A number of conditions prompt organizations to reconfigure their work plans: the company could be losing market share, experiencing a deteriorating bottom line, or facing a chronic shortage of talent. Whatever the reason for embracing more flexible work situations, common arrangements include flex scheduling that accommodates doctor appointments or school visits; telecommuting one or more days per week; compressing workweeks from five days to four or three; and job sharing.
    16 years ago by @gwpl
       
      acm_technews
         
         
  •  

    In an interview with Dr. Dobb's Journal (03/27/08, by James Buchanan), C++ creator Bjarne Stroustrup says that next-generation programmers need a thorough education covering algorithms, data structures, machine architecture, operating systems, and networking. "I think what should give is the idea that four years is enough to produce a well-rounded software developer: Let's aim to make a five- or six-year masters the first degree considered sufficient," he says. Before writing a software program, Stroustrup recommends that a programmer consult with peers and potential users to get a clear perspective on the problem domain, and then build a streamlined system to test the design's basic ideas. Stroustrup says he was inspired to create a first programming course to address what he perceived as a lack of basic skills among computer science students for designing and implementing quality software, such as organizing code to ensure it is correct. "In my course I heavily emphasize structure, correctness, and define the purpose of the course as 'becoming able to produce code good enough for the use of others,'" he says. Stroustrup thinks programming can be vastly improved, especially by never losing sight of how important it is to produce correct, practical, and well-performing code. He describes a four-year undergraduate computer science program he helped design as fairly classical, with a larger-than-usual software development project component in the first two years of study. Courses would cover hardware and software, discrete math, algorithms and data structures, operating and network systems, and programming languages, while a "programming studio" would expose students to group projects and project management.
    16 years ago by @gwpl
         
         
  •  

    MIT researcher Seth Lloyd believes that a new architecture for quantum random access memory (QRAM) could be used both to reduce the energy wasted by random access memory (RAM) and to enable completely anonymous Internet searches. Classical computing requires RAM to retrieve information, but RAM design is wasteful and subject to interference, Lloyd says. Lloyd worked with Vittorio Giovannetti at the NEST-CNR-INFM in Pisa, Italy, and Lorenzo Maccone at the University of Pavia, Italy, to create a system that works as QRAM. Lloyd says their QRAM architecture was discovered while he and his colleagues were researching how to make QRAM work on classical RAM design. He calls QRAM a "sneakier" way of accessing RAM: in traditional RAM, the first bit of an address throws two switches, the second throws four, and so on, while with QRAM "all the bits of the address only interact with two switches" (a quick comparison of the resulting switch counts follows this entry). The energy saved using QRAM is not enough to offset the larger energy problems associated with classical computing, and Lloyd says QRAM is slower than RAM. However, he says QRAM's benefits can be applied to quantum Internet searches. "If you had a quantum Internet, then this would be useful," he says. "This offers a huge decrease in energy used and an increase in robustness." For this to work, Lloyd says, "dark fiber" is needed; although some is already being used for classical communications, a quantum Internet would need more.
    16 years ago by @gwpl
         
      acm_technews
           
           
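    The switch-counting contrast above is easy to tabulate. A small sketch of my own arithmetic, following Lloyd's description rather than any code from the paper: in the conventional fanout design, address bit k throws 2^(k+1) switches, so an n-bit address activates about 2*2^n switches per access, while in the quantum scheme each address bit interacts with only two, for about 2n activations.

      # Switch activations per memory access, following the description
      # above (illustrative arithmetic, not code from the QRAM paper).
      def fanout_switches(n_bits):
          # classical fanout: bit 0 throws 2 switches, bit 1 throws 4, ...
          return sum(2 ** (k + 1) for k in range(n_bits))  # = 2**(n+1) - 2

      def qram_switches(n_bits):
          # "all the bits of the address only interact with two switches"
          return 2 * n_bits

      for n in (10, 20, 30):   # ~1 thousand, ~1 million, ~1 billion cells
          print(n, fanout_switches(n), qram_switches(n))
      # a 30-bit address: ~2.1 billion activations vs. just 60
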
  •  

    University of California, Berkeley professor of electrical engineering and computer sciences Richard Karp has been named a laureate of the 2008 Kyoto Prize, Japan's equivalent of the Nobel Prize, awarded by the Inamori Foundation. Karp is being recognized for his lifetime achievements in computer theory. A senior research scientist at the International Computer Science Institute in Berkeley, he is considered one of the world's leading computer theorists. Karp's work significantly advanced the theory of NP-completeness, conceived in 1971 by former UC Berkeley math professor Stephen Cook. Karp developed a standard method for sorting combinatorial problems into classes and identifying their level of intractability, based on polynomial-time reductions between problems (a toy example of such a reduction follows this entry). Combinatorial problems that are NP-complete are the most difficult to solve. "Karp's theory streamlined algorithm design for problem-solving, accelerated algorithm engineering, and brought computational complexity within the scope of scientific research," says the Inamori Foundation. NP-completeness theory has become a cornerstone of modern theoretical computer science, and in the 1980s Cook (1982) and Karp (1985) each received the ACM A.M. Turing Award, in part for their contributions to NP-completeness. Karp has recently focused on bioinformatics and computational biology, including developing algorithms for constructing various kinds of physical maps of DNA targets and methods for classifying biological samples on the basis of gene expression data.
    16 years ago by @gwpl
           
      acm_technews
             
             
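    For readers unfamiliar with the idea, the sketch below shows the flavor of the polynomial-time reductions behind Karp's classification. It is a standard textbook example, not Karp's own construction: a graph has a vertex cover of size at most k exactly when the remaining vertices form an independent set of size at least |V| - k, so one problem reduces to the other with constant extra work.

      # Toy Karp-style reduction: Vertex Cover reduces to Independent Set.
      # C is a vertex cover of G iff V - C is an independent set, so
      # "vertex cover of size <= k?" becomes "independent set of size
      # >= |V| - k?". Only the reduction needs to run in polynomial time;
      # the solver below is brute force, purely for illustration.
      from itertools import combinations

      def has_independent_set(vertices, edges, size):
          return any(all(not (u in s and v in s) for u, v in edges)
                     for s in map(set, combinations(vertices, size)))

      def has_vertex_cover(vertices, edges, k):
          # the reduction itself: just complement the size bound
          return has_independent_set(vertices, edges, len(vertices) - k)

      V = ["a", "b", "c", "d"]
      E = [("a", "b"), ("b", "c"), ("c", "d")]
      print(has_vertex_cover(V, E, 2))   # True: {"b", "c"} covers all edges
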
  •  

    Most people today are only users of the information technology systems they are given, making changes only when prompted, using "creativity" tools that stifle innovation, and accepting failures, disappointments, and crashes as inevitable, writes Bill Thompson. In general, he says, users accept the lack of programming tools and of encouragement to write code, possibly because of the increasing complexity of modern computer systems. With so many users completely ignorant of how to program, it becomes difficult to have a serious debate about the core technical issues that affect the development and deployment of IT systems in our lives. The applications that support every aspect of society are built by programmers, yet there is a startling shortage of programmers entering the software industry. Universities have seen applications for computer science degrees drop, and computing is considered a non-essential subject in high school. Thompson says children need to see that programming is a useful skill that can be applied to a variety of careers. If more children were given suitable languages and tools for programming at school or at home, he says, there would be at least some chance that those with an aptitude for coding would discover it early enough to become interested in the field.
    16 years ago by @gwpl
             
             
  •  

    The use of organic and chemical materials to perform digital signal processing without electrical currents could be the next major technological revolution, say Northwestern professors Sotirios Tsaftaris and Aggelos Katsaggelos. Their research includes studying the use of DNA for digital signal processing, as DNA strands can serve as input and processing elements, and DNA is an excellent medium for data storage. Digital samples can be recorded in DNA, which can be kept in liquid form in test tubes to save space. DNA can also be easily replicated using common laboratory techniques, creating a database that could be searched easily no matter how large it grows. Over the past 10 years, scientists and engineers have experimented with different materials for performing signal processing, possibly leading to a "not-so-electric future" for digital signal processing, according to Tsaftaris and Katsaggelos. For example, artist and scientist Cameron Jones discovered that fungi grown on CDs distort the optically recorded sound, and that the fungi's growth patterns depend on the optical grooves recorded on the CD. Meanwhile, in 2005, E. coli cells were modified to react to light and used to perform edge detection on an image, a basic image-processing task (a conventional software version of that task is sketched after this entry).
    16 years ago by @gwpl
             
             
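    To ground that last example: edge detection, the task the engineered E. coli performed with light, is conventionally a tiny gradient computation. A minimal NumPy version of the task itself (illustrating the computation only, nothing to do with the biological implementation):

      # Minimal edge detection: flag pixels whose brightness differs from
      # a neighbor. This is the image-processing task the engineered
      # E. coli colony performed, done here the conventional way.
      import numpy as np

      image = np.array([[0, 0, 0, 0],
                        [0, 9, 9, 0],
                        [0, 9, 9, 0],
                        [0, 0, 0, 0]], dtype=float)

      # horizontal and vertical brightness differences (simple gradient)
      gx = np.abs(np.diff(image, axis=1, prepend=0))
      gy = np.abs(np.diff(image, axis=0, prepend=0))
      edges = (gx + gy) > 0
      print(edges.astype(int))   # 1s outline the bright square
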
