bookmarks  34

  •  

    Who are the best spreaders of information in a social network? The answer may surprise you.
    14 years ago by @gwpl
     
     
  •  

Non-repudiation is a scheme in which sensitive data sent over the Internet is digitally signed at the source with a signature that can be traced to the user's computer, as a safeguard against fraud. Len Sassaman of the Catholic University of Leuven warns, however, that making this scheme the default for all traffic on a network would let authorities trace the source of any online activity and take away users' anonymity. Worse still, Sassaman and University of Ireland colleague Meredith Patterson say that the One Laptop per Child (OLPC) foundation is unintentionally establishing such a system throughout the Third World by supplying inexperienced users with Internet-ready laptops. Theft of the laptops is discouraged with a security model called Bitfrost, in which each laptop automatically phones an anti-theft server once a day and sends its serial number to obtain an activation key; any machine reported stolen is refused activation. Sassaman and Patterson caution that the security model's use of non-repudiable digital signatures could be exploited by oppressive regimes to identify and silence dissidents. "They may not intend for the signatures to be used for non-repudiation, but it's possible to use them for this purpose," Sassaman says. Although the OLPC laptops are intended primarily for education, which some people claim would preclude government monitoring, Sassaman says it is unlikely that the systems will be used solely by children, and conditions in some developing nations might actually encourage children to act as whistleblowers. Sassaman and Patterson are modifying the Bitfrost security model to let the laptops identify each other without compromising their users' privacy, based on existing cryptographic methods that cannot be used for non-repudiation (the sketch after this entry contrasts the two kinds of authentication).
    16 years ago by @gwpl
     
      acm_technews
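A minimal sketch of the distinction at issue, using only Python's standard library (the key and message format are invented): a shared-key MAC authenticates a message but is repudiable, because the verifier could have forged it itself, whereas a public-key signature can be checked by any third party and therefore pins the message to the signer.

```python
# Sketch (not OLPC's actual protocol): why signatures are
# non-repudiable while shared-key MACs are not.
import hmac, hashlib

key = b"shared-secret"               # known to laptop AND server (invented)
msg = b"serial=SHN00012345;day=142"  # hypothetical phone-home message

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# The server can verify the MAC...
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())

# ...but it could equally well have FORGED it, since it holds the same
# key, so a third party cannot prove the laptop sent the message: the
# MAC is repudiable. A public-key signature is different: only the
# laptop's private key can produce it, so anyone with the public key,
# including an authority, can later prove origin. That verifiable link
# is exactly the non-repudiation property Sassaman and Patterson warn about.
forged_by_server = hmac.new(key, msg, hashlib.sha256).hexdigest()
print(tag == forged_by_server)  # True: sender and verifier are indistinguishable
```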
       
       
    •  

University of California, Berkeley professor of electrical engineering and computer sciences Richard Karp has been named a laureate of the 2008 Kyoto Prize, Japan's equivalent of the Nobel Prize, awarded by the Inamori Foundation. Karp is being recognized for his lifetime achievements in computer theory. A senior research scientist at the International Computer Science Institute in Berkeley, he is considered one of the world's leading computer theorists. Karp's work significantly advanced the theory of NP-completeness, conceived in 1971 by former UC Berkeley math professor Stephen Cook. Karp developed a standard method for sorting combinatorial problems into classes and identifying their level of intractability; combinatorial problems that are NP-complete are the most difficult to solve (a small example follows this entry). "Karp's theory streamlined algorithm design for problem-solving, accelerated algorithm engineering, and brought computational complexity within the scope of scientific research," says the Inamori Foundation. NP-completeness theory has become a cornerstone of modern theoretical computer science, and in the 1980s Cook and Karp each received the ACM A.M. Turing Award (in 1982 and 1985, respectively) for their contributions to the concept. Karp has recently focused on bioinformatics and computational biology, including the development of algorithms for constructing various kinds of physical maps of DNA targets, and methods for classifying biological samples on the basis of gene expression data.
      16 years ago by @gwpl
       
        acm_technews
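A small illustration of what NP membership means in practice (the graph is invented): for VERTEX COVER, one of the 21 problems Karp proved NP-complete in 1972, a proposed solution can be checked in time linear in the number of edges, even though no polynomial-time method is known for finding one.

```python
# Illustrative only: what "NP" means operationally. A certificate for
# an NP-complete problem can be CHECKED in polynomial time, even though
# no polynomial-time algorithm is known for FINDING one.
def is_vertex_cover(edges, cover):
    """Check that every edge has at least one endpoint in `cover` -- O(|E|)."""
    return all(u in cover or v in cover for u, v in edges)

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]   # a 4-cycle
print(is_vertex_cover(edges, {1, 3}))      # True: {1, 3} covers every edge
print(is_vertex_cover(edges, {1}))         # False: edge (2, 3) is uncovered
```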
         
         
      •  

        Now that IBM's RoadRunner supercomputer has broken the petaflop barrier, reaching more than one thousand trillion sustained floating-point operations per second, supercomputer developers say the next step is an exascale system capable of a million trillion calculations per second, a thousand times faster than a petaflop. At the upcoming International Supercomputing Conference in Dresden, Germany, University of Tennessee professor Jack Dongarra will give a presentation on exaflop systems in the year 2019. Dongarra says performance gains are following a predictable path, with the first gigaflop system being built 22 years ago. Dongarra says there will be exaflop computing in 11 years, and that by then every system on the Top500 computing list will be at least a petaflop. He says the greatest achievement with the RoadRunner system is the programming that allows the system to utilize different processor technologies. To achieve exascale systems, Dongarra says developers will have to create new programming languages and algorithms that can calculate at high degrees of concurrency to complete calculations quickly. The difficulty in reaching that level of programming, and changing to new methods, could be the roadblock that prevents exaflop computing from being realized in a similar timeline, he says.
        16 years ago by @gwpl
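The timeline is easy to sanity-check against the milestones Dongarra alludes to; a hedged back-of-the-envelope in Python, assuming the commonly cited dates of 1986 (gigaflop), 1997 (teraflop), and 2008 (petaflop):

```python
# Roughly a 1000x performance jump every 11 years.
milestones = {1986: 1e9, 1997: 1e12, 2008: 1e15}  # giga-, tera-, petaflop
for year, flops in milestones.items():
    print(year, f"{flops:.0e} flop/s")

# Extrapolating one more 1000x step lands on the exaflop system the
# article projects: 2008 + 11 -> 2019 at 1e18 flop/s.
print(2008 + 11, f"{1e15 * 1000:.0e} flop/s (projected exascale)")
```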
         
         
      •  

        Steve Jobs' presentation at the opening session of Apple's Worldwide Developers Conference included a description of the next version of the Mac OS X operating system, dubbed Snow Leopard, which will be designed for use with parallel processors. Jobs says Apple will find a solution to the problem of programming the new generation of parallel chips efficiently. He says Apple will focus on "foundational features" that will be the basis for a future version of the Mac operating system. At the core of Snow Leopard will be a parallel-programming technology code-named Grand Central. Snow Leopard will utilize the computer power inherent in graphics processors that are now used in tandem with microprocessors in almost all personal and mobile computers. Jobs also described a new processing standard that Apple is proposing called Open Computing Language (OpenCL), which is intended to refocus graphics processors on standard computing functions. "Basically it lets you use graphics processors to do computation," Jobs says. "It's way beyond what Nvidia or anyone else has, and it's really simple."
        16 years ago by @gwpl
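The keynote did not detail the OpenCL programming model, but the data-parallel style it implies can be sketched in plain Python (a stand-in, not Apple's API): one small "kernel" function runs once per index of the problem domain, and the GPU executes those invocations concurrently.

```python
# Simulated data parallelism: each kernel invocation handles one index.
def saxpy_kernel(gid, a, x, y, out):
    """One work-item: computes a single element of a*x + y."""
    out[gid] = a * x[gid] + y[gid]

n = 8
x, y = list(range(n)), [10.0] * n
out = [0.0] * n
for gid in range(n):            # on a GPU, these would run concurrently
    saxpy_kernel(gid, 2.0, x, y, out)
print(out)                      # [10.0, 12.0, 14.0, ...]
```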
         
         
      •  

Researchers led by Carnegie Mellon University professor David Brumley have found that software patches can be as harmful as they are helpful, because attackers can use a patch to automatically generate, in as little as 30 seconds, software that attacks the very vulnerabilities the patch is supposed to fix. The malicious software can then be used against computers that have not yet received and installed the patch. Microsoft Research's Christos Gkantsidis says it takes about 24 hours to distribute a patch through Windows Update to 80 percent of the systems that need it. "The problem is that the infrastructure capacity that exists is not enough to serve all the users immediately," Gkantsidis says. "We currently don't have enough technologies that can distribute patches as fast as the worms." This distribution delay gives attackers time to receive a patch, determine what it fixes, and create and distribute an exploit that infects computers that have not yet received the patch (a toy illustration follows this entry). The researchers say new methods of distributing patches are needed to make them more secure. Brumley suggests hiding the changes that a patch makes, releasing encrypted patches that cannot be decrypted until the majority of users have downloaded them, or using peer-to-peer distribution to release patches in a single wave.
        16 years ago by @gwpl
         
          acm_technews
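A toy illustration of why a patch is a roadmap for attackers (the vulnerable function is invented, and Brumley's system works on binaries rather than source): diffing pre- and post-patch code highlights the added check, and hence the exact input condition an exploit must violate on unpatched machines.

```python
# Conceptual sketch, not Brumley's system: the diff exposes the fix.
import difflib

before = """def parse(pkt):
    length = pkt[0]
    return pkt[1:1 + length]
""".splitlines()

after = """def parse(pkt):
    length = pkt[0]
    if length > len(pkt) - 1:      # the fix: reject bad lengths
        raise ValueError("overflow")
    return pkt[1:1 + length]
""".splitlines()

for line in difflib.unified_diff(before, after, lineterm=""):
    print(line)
# The added bounds check tells the attacker exactly which input field
# to oversize when targeting machines that have not yet patched.
```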
           
           
        •  

According to a recent survey from Merrill Lynch, 16% of the Baby Boomer workforce is looking for part-time work, and 42% will only take jobs that allow them time off for leisure. Similar findings across all demographics are forcing companies to re-evaluate the flexibility options they offer employees, especially as the so-called war for talent intensifies. While organizations recognize that inflexible work arrangements are a primary reason top talent leaves, flexible work arrangements can be difficult to implement. As a guide, the article reviews flexible work arrangements at six different companies. A number of conditions prompt organizations to reconfigure their work plans: the company could be losing market share, experiencing a deteriorating bottom line, or facing a chronic shortage of talent. While there may be many reasons for an organization to embrace more flexible work situations, common arrangements include flex scheduling that accommodates doctor appointments or school visits; telecommuting one or more days per week; compressing workweeks from five days to four or three; and job sharing.
          16 years ago by @gwpl
           
            acm_technews
             
             
          •  

            A veteran programmer outlines the key differences between natural programmers and career programmers. While both types of programmers possess the same amount of talent and drive, they have vastly different approaches to completing their work. While some programmers are better at researching problems and developing cost-effective solutions, others have a natural instinct for arriving at innovative solutions. Some programmers love what they do, while others are more interested in the bottom line of the business. Natural programmers are able to make quick associations between very different topics. As a result, they are able to make the jump from code to real life application quickly. Natural programmers realize that there are many ways to do things correctly and several different ways to solve the same problem. While natural programmers understand the need for a system of rules within the workplace, they tend to treat authority with less respect than their career programmer peers. Moreover, they can be difficult to manage since they consider many office conventions (e.g. arriving at 9 am) to be arbitrary. Employers need to understand the motivations of the natural programmer and the type of office environment in which they are likely to thrive. They are not driven primarily by monetary compensation, but by the ability to work with interesting technologies and challenging projects. They tend to respect individuals within the organization who can teach them new technologies. Finally, they thrive when they can downplay the significance of status reports, QA forms, documentation, and timesheets.
            16 years ago by @gwpl
             
              acm_technews
               
               
            •  

Nintendo set to launch "Wii Fit" exercise game
For years, video games have been blamed for turning kids into idle layabouts who only venture off the couch to fill up on potato chips and soda. Nintendo Co Ltd now aims to shatter that image with a game designed to get players off the couch and lead them to stretch, shake, and sweat their way to a healthy life.
              16 years ago by @gwpl
               
                acm_technews
                 
                 
              •  

Getting the most out of multicore processors in all sorts of products requires bridging the chasm between what the chips can do and what software can exploit, and industry sources say the long-term focus should be on figuring out how to write code for parallel computing. "We don't even know for sure what we should be teaching, but we know we should be changing what we're teaching," says University of California, Berkeley professor David Patterson, a former president of ACM. UC Berkeley and the University of Illinois at Urbana-Champaign will split $20 million from Intel and Microsoft to underwrite Universal Parallel Computing Research Centers over the next five years, with Berkeley's share going toward enhancing research already done by the school's Parallel Computing Laboratory and hiring 50 researchers to focus on the problem of writing software for parallelism. Patterson says Berkeley has started introducing freshmen to parallel computing through classes focusing on the "map-reduce" method (a toy example follows this entry), while upperclassmen are given a grounding in "sticky" parallelism issues such as load balancing and synchronization. Patterson acknowledges that an entirely new programming language may need to be invented to tackle the challenge of parallel computing. Brown University professor Maurice Herlihy says a more likely possibility is that existing languages will evolve parallel programming features--a view endorsed by AMD's Margaret Lewis, who cites the necessity of interim solutions to amend legacy software written for unicore processors along with software under development. Lewis says AMD is trying to infuse parallel coding methods via compilers and code analyzers, noting that with these interim solutions "programmers aren't getting the full benefits of parallelism ... but it runs better in a multicore environment."
                16 years ago by @gwpl
                 
                  acm_technews
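A toy word count in the map-reduce style the Berkeley course reportedly teaches (plain Python, no parallel runtime): because each map step touches only its own document and the reduce step merges independent partial results, a runtime can farm the work out across many cores.

```python
# Minimal map-reduce sketch: per-document counting, then merging.
from collections import Counter
from functools import reduce

docs = ["the quick fox", "the lazy dog", "the fox"]

mapped = [Counter(d.split()) for d in docs]            # map: local counts
total = reduce(lambda a, b: a + b, mapped, Counter())  # reduce: merge
print(total)   # Counter({'the': 3, 'fox': 2, ...})
```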
                   
                   
                •  

MIT researcher Seth Lloyd believes that a new architecture for quantum random access memory (QRAM) could be used to reduce the energy wasted by random access memory (RAM) as well as to enable completely anonymous Internet searches. Classical computing requires the use of RAM to retrieve information, but RAM design is wasteful and subject to interference, Lloyd says. Lloyd worked with Vittorio Giovannetti at the NEST-CNR-INFM in Pisa, Italy, and Lorenzo Maccone at the University of Pavia, Italy, to create a system that works as QRAM. Lloyd says their QRAM architecture was discovered when he and his colleagues were researching how to make QRAM work on classical RAM design. He says QRAM is a "sneakier" way of accessing RAM. In traditional RAM, the first bit of an address throws two switches, the second throws four, and so on, Lloyd says. With QRAM, "all the bits of the address only interact with two switches," Lloyd says (the sketch after this entry tallies the difference). The energy saved using QRAM is not enough to offset the larger energy problems associated with classical computing, and Lloyd says QRAM is slower than RAM. However, he says QRAM's benefits can be applied to quantum Internet searches. "If you had a quantum Internet, then this would be useful," he says. "This offers a huge decrease in energy used and an increase in robustness." For this to work, Lloyd says "dark fiber" is needed, and although it is already being used for some classical communications, a quantum Internet would need more.
                  16 years ago by @gwpl
                   
                    acm_technews
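A quick tally of the switch counts implied by Lloyd's description, assuming bit k of a classical address fan-out throws 2^k switches as described: the classical total, 2 + 4 + ... + 2^n = 2^(n+1) - 2, grows exponentially in the address width, while the QRAM scheme grows linearly.

```python
# Switch-count comparison for an n-bit address.
for n in (10, 20, 30):
    classical = 2 ** (n + 1) - 2   # 2 + 4 + ... + 2^n switches thrown
    qram = 2 * n                   # each address bit touches two switches
    print(f"n={n:2d}  classical={classical:>13,}  qram={qram}")
```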
                     
                     
                  •  

                    Many women in IT credit their mothers for making them believe they could succeed in any career. IT and service manager Priscilla Milam says when she got into computer science there were no other women in the program, and it was her mother who told her to learn to live in a man's world, encouraging her to read the headlines in the financial pages, sports pages, and general news, and not to get emotional. "Still, IT in general is a man's world, and by keeping up with the news and sports, when the pre/post meetings end up in discussions around whether the Astros won or lost or who the Texans drafted, I can participate; and suddenly they see me as part of the group and not an outsider," Milam says. Catalyst says the percentage of women holding computer and mathematics positions has declined since 2000, from 30 percent to 27 percent in 2006. Milam and other women in high-tech positions say a passion for technology begins early in life and a few encouraging words from their mothers helped them realize they could overcome the challenges that exist when entering an industry dominated by men. CSC lead solution architect Debbie Joy says the key to succeeding in IT is to put gender aside at work and learn to regard colleagues as peers, and soon they will do the same.
                    16 years ago by @gwpl
                     
                      acm_technews
                       
                       
                    •  

                      Both young men and women are avoiding high school courses that could lead to careers in IT, but young women are dropping those courses faster than young men, says Australia's Charles Sturt University Faculty of Education dean Toni Downes. Downes was a senior member of a research project that examined the interest of male and female high school students in particular high school subjects. The study of 1,334 male and female students found that only 13 percent of girls said they would study IT-related subjects in their senior years, and both boys and girls shied away from high school computing and IT subjects between 2002 and 2007. Downes believes that a shift in computer curriculum from a combination of computer literacy and foundational studies to computing and IT as an academic discipline has contributed to the decline in enrollments, particularly among females. "The reasons are complex, but the reasons that girls give are often the same reasons that disinterested boys give," Downes says. "Sometimes they are making their judgments on careers based on stereotypes, sometimes the girls are making their decisions based on self-limiting identities like 'it's not cool for me to be a nerd' because they think the career is nerdy." Downes says part of the problem is that girls do not engage with technology in ways that allow them to use it playfully, instead of just functionally, so they are not attracted to thinking creatively or critically about how and why technology works.
                      16 years ago by @gwpl
                       
                        acm_technews
                         
                         
                      •  

Cryptography has long been an arms race, with codemakers and hackers constantly updating their arsenals, but quantum cryptography could theoretically give codemakers the upper hand. Even strong classical encryption schemes such as RSA can in principle be cracked given enough computing power. Quantum cryptography, however, could make uncrackable codes possible through quantum key distribution (QKD). Modern cryptography relies on digital keys to encrypt data before sending it over a network so it can be decrypted by the recipient. QKD promises a theoretically uncrackable key, one that can be easily distributed while remaining tamper-evident: the nature of quantum mechanics ensures that if an eavesdropper tries to intercept or spy on the transmission, both the sender and the receiver will know. Any attempt to read the transmission alerts both parties, allowing them to generate a new key and send it securely (a toy BB84 sketch follows this entry). QKD had its first real-world application in Geneva, where quantum cryptography was used in the electronic voting system. Not only did QKD guarantee that the poll was secure, but it also ensured that no votes were lost in transmission, because quantum mechanics guarantees that any disturbance of the transmitted data would have been detected. The SECOQC project, which did the work for the voting system, says the goal is to establish network-wide quantum encryption that can work over longer distances between multiple parties.
                        16 years ago by @gwpl
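A toy sketch of the sifting step in BB84, the textbook QKD protocol (a classical simulation, not the SECOQC implementation): sender and receiver choose measurement bases at random and keep only the bits where the bases happen to match; an eavesdropper forced to guess bases would corrupt some of those bits and be exposed by a spot check.

```python
# BB84 sifting, simulated classically with no eavesdropper present.
import random

n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

# With no eavesdropper, Bob's measurement matches Alice's bit whenever
# their bases agree; those positions form the shared key.
sifted = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
          if ab == bb]
print("sifted key:", sifted)   # about half the bits survive, on average
```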
                         
                         
                      •  

Three competing teams of computer researchers are working on new types of software for use with multicore processors. Stanford University and six computer and chip makers--Sun Microsystems, Advanced Micro Devices, Nvidia, IBM, Hewlett-Packard, and Intel--are creating the Pervasive Parallelism Lab. Previously, Microsoft and Intel helped finance new labs at the University of California, Berkeley and the University of Illinois at Urbana-Champaign. The research efforts are in response to a growing awareness that the software industry is not ready for the coming availability of microprocessors with multiple cores on a single chip. Computer and chip manufacturers are concerned that if software cannot keep up with hardware improvements, consumers will not feel the need to upgrade their systems. Current operating system software can work with the most advanced server microprocessors and processors for video game machines, which have up to eight cores, but software engineers say most applications are not designed to make efficient use of the dozens or hundreds of processors that will be available in future computers. The university efforts will share some approaches but will try different experiments, programming languages, and hardware innovations. The efforts will also rethink operating systems and compilers. The Berkeley researchers have divided parallel computing problems into seven classes, with each class being approached in different ways. The Stanford researchers say they are looking for new ways to hide the complexity of parallel computing from programmers, and will use virtual worlds and robotic vehicles to test their efforts.
                        16 years ago by @gwpl
                         
                         
                      •  

At the International World Wide Web Conference in Beijing, two Google researchers unveiled VisualRank, software they say will advance digital image searching on the Web the way Google's PageRank software advanced Web page searches. VisualRank is an algorithm that blends image-recognition methods with techniques that weigh and rank the images that look most similar (a sketch of the ranking step follows this entry). Most image searches are based on cues from the text associated with each image, not on the actual content of the image itself; image analysis remains a largely unsolved problem in computer science, the Google researchers say. "We wanted to incorporate all of the stuff that is happening in computer vision and put it in a Web framework," says Google's Shumeet Baluja, who made the presentation along with Google researcher Yushi Jing. Their paper, "PageRank for Product Image Search," focuses on a subset of the images that Google has cataloged. The researchers concentrated on the 2,000 most popular product queries on Google's product search and sorted the top 10 images from both their ranking system and the standard Google Image Search results. The research effort then used a team of 150 Google employees to create a scoring system for image "relevance." The researchers say VisualRank returned 83 percent fewer irrelevant images.
                        16 years ago by @gwpl
                         
                          acm_technews
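Setting the paper's details aside, the PageRank-style core of such a ranking can be sketched as a damped power iteration over an image-similarity matrix (the similarity values here are invented): images visually similar to other highly ranked images accumulate rank.

```python
# PageRank-style iteration over a toy visual-similarity graph.
import numpy as np

S = np.array([[0.0, 0.9, 0.1],    # pairwise visual-similarity scores
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
P = S / S.sum(axis=0)             # column-normalize into transition weights

rank = np.full(3, 1 / 3)
for _ in range(50):
    rank = 0.85 * P @ rank + 0.15 / 3   # damped update, as in PageRank
print(rank / rank.sum())          # images 0 and 1 outrank image 2
```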
                           
                           
                        •  

                          Computer scientist Donald E. Knuth, winner of ACM's A.M. Turing Award in 1974, says in an interview that open-source code has yet to reach its full potential, and he anticipates that open-source programs will start to be totally dominant as the economy makes a migration from products to services, and as increasing numbers of volunteers come forward to tweak the code. Knuth admits that he is unhappy about the current movement toward multicore architecture, complaining that "it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law to the software writers by giving us machines that work faster only on a few key benchmarks!" He acknowledges the existence of important parallelism applications but cautions that they need dedicated code and special-purpose methods that will have to be significantly revised every several years. Knuth maintains that software produced via literate programming was "significantly better" than software whose development followed more traditional methodologies, and he speculates that "if people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming." Knuth cautions that software developers should be careful when it comes to adopting trendy methods, and expresses strong reservations about extreme programming and reusable code. He says the only truly valuable thing he gets out of extreme programming is the concept of working in teams and reviewing each other's code. Knuth deems reusable code to be "mostly a menace," and says that "to me, 're-editable code' is much, much better than an untouchable black box or toolkit."
                          16 years ago by @gwpl
                           
                            acm_technews
                             
                             
                          •  

                            The Defense Advanced Research Projects Agency has issued a call for research proposals to design compilers that can dynamically optimize programs for specific environments. As the Defense Department runs programs across a wider range of systems, it is facing the lengthy and manual task of tuning programs to run under different environments, a process DARPA wants to automate. "The goal of DARPA's envisioned Architecture-Aware Compiler Environment (AACE) Program is to develop computationally efficient compilers that incorporate learning and reasoning methods to drive compiler optimizations for a broad spectrum of computing system configurations," says DARPA's broad area announcement. The compilers can be written in the C and Fortran programming languages, but the BAA encourages work in languages that support techniques for the parallelization of programs. The quality of the proposals will determine how much DARPA spends on the project, which will run at least through 2011. Proposals are due by June 2.
                            16 years ago by @gwpl
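A hedged sketch of the core idea of architecture-aware tuning (a script, not a compiler, and not DARPA's design): measure candidate configurations on the machine at hand and keep the fastest, instead of hard-coding one choice for all environments.

```python
# Micro-autotuner: empirically pick a block size for a toy workload.
import timeit

def blocked_sum(data, block):
    total = 0
    for i in range(0, len(data), block):
        total += sum(data[i:i + block])
    return total

data = list(range(100_000))
timings = {b: timeit.timeit(lambda: blocked_sum(data, b), number=20)
           for b in (64, 256, 1024, 4096)}
best = min(timings, key=timings.get)
print(f"fastest block size on this machine: {best}")
```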
                             
                             
                          •  

The European Union-funded RobotCub project will send an iCub robot to six European research labs, where researchers will train iCub to act independently by learning from its own experiences. The project at Imperial College London will examine how "mirror neurons," which fire in humans to trigger memories of previous experiences when humans are trying to understand the physical actions of others, can be translated into a digital application. The team at UPMC in Paris will explore the dynamics needed to achieve full body control for iCub, and the researchers at TUM Munich will work on developing iCub's manipulation skills. A project team at the University of Lyons will explore internal simulation techniques, which occur in our brains when planning actions or trying to understand the actions of others. In Turkey, a team at METU in Ankara will focus on language acquisition and iCub's ability to link objects with verbal utterances. The iCub robots are about the size of three-year-old children and are equipped with highly dexterous hands and fully articulated heads and eyes. The robots have hearing and touch capabilities and are designed to be able to crawl and to sit up. Researchers expect to enable iCub to learn by doing, including the ability to track objects visually or by sound, and to navigate based on landmarks and a sense of its own position.
                            16 years ago by @gwpl
                             
                              acm_technews
                               
                               
                            •  

                              University of Arizona researchers are developing hybrid hardware/software systems that could eventually use machine intelligence to allow spacecraft to fix themselves. Arizona professor Ali Akoglu is using field programmable gate arrays (FPGA) to build self-healing systems that can be reconfigured as needed to emulate different types of hardware. Akoglu says general-purpose computers can run a variety of systems but they are extremely slow compared to hard-wired systems designed to perform specific tasks. What is needed, Akoglu says, are systems that combine the speed of hard-wired systems with the flexibility of general-purpose computers, which is what he is trying to accomplish using FPGAs. The researchers are testing five wirelessly-linked hardware units that could represent a combination of five landers and rovers on Mars. Akoglu says the system tries to recover from a malfunction in two ways. First, the unit tries to fix itself at the node level by reprogramming malfunctioning circuits. If that fails, the unit tries to recover by employing redundant circuitry. If the unit's onboard resources cannot fix the problem, the network-level intelligence is alerted and another unit takes over functions that were done by the broken unit. If two units go down, the three remaining units divide the tasks. "Our objective is to go beyond predicting a fault to using a self-healing system to fix the predicted fault before it occurs," he says.
                              16 years ago by @gwpl
                               
                                acm_technews
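The recovery ladder described above can be sketched schematically (all names are invented; the real system reconfigures FPGA circuits, not Python objects): try node-level reprogramming first, fall back to spare circuitry, and only then escalate to the network level.

```python
# Schematic of the two-level self-healing ladder.
class Unit:
    def __init__(self, can_reprogram, has_spares):
        self.can_reprogram, self.has_spares = can_reprogram, has_spares
    def reprogram_faulty_circuits(self):   # node level, attempt 1
        return self.can_reprogram
    def switch_to_spare_circuits(self):    # node level, attempt 2
        return self.has_spares

class Network:
    def reassign_tasks(self, unit):        # network level: peers take over
        print("peers absorb the failed unit's tasks")

def recover(unit, network):
    if unit.reprogram_faulty_circuits():
        return "healed by reprogramming"
    if unit.switch_to_spare_circuits():
        return "healed with redundant circuitry"
    network.reassign_tasks(unit)
    return "tasks redistributed to remaining units"

print(recover(Unit(False, True), Network()))  # healed with redundant circuitry
```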
                                 
                                 
                              •  

Experts at FutureNet, an annual conference held to address communications services, say the Internet architecture will face severe challenges over the next few years that could significantly strain the Web's effectiveness. One of the most prominent issues is the impending shortage of IP addresses, which some forecasters say could occur within the next few years. IPv4 offers about 4.3 billion possible IP addresses, and it is running out of capacity. Juniper's Ron Bonica says there are three likely solutions to this problem. The first is to stick with IPv4, which would create some immediate problems given the impending shortage but would also lead to the creation of an IP address trading system through which companies and individuals that own an excessive number of addresses could sell them at market value. Another possibility is a rapid deployment of IPv6, the next-generation Internet Protocol, whose address space is vastly larger than IPv4's (the arithmetic follows this entry). Bonica says many companies and organizations are reluctant to make the switch because it will require significant investments on the part of end users and ISPs, and transition mechanisms to help make the switch have not been deployed yet. Bonica says the third option is a compromise between these two solutions that involves a gradual shift from IPv4 to IPv6. Another issue FutureNet addressed was the strain more IP addresses will place on routing tables, which are not scalable and cannot adapt to exponential increases in IP addresses. "The basic, fundamental problems of scaling a network haven't been addressed in any innovative manner," says American Registry of Internet Numbers Chairman John Curran.
                                16 years ago by @gwpl
                                 
                                  acm_technews
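The address arithmetic behind the squeeze, for reference: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits.

```python
# Address-space comparison: 32-bit IPv4 vs. 128-bit IPv6.
print(f"IPv4: 2**32  = {2**32:,} addresses")      # ~4.3 billion
print(f"IPv6: 2**128 = {2**128:.3e} addresses")   # ~3.4e38
```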
                                   
                                   
                                •  

                                  Music professors Clifton Callender at Florida State University, Ian Quinn at Yale University, and Dmitri Tymoczko at Princeton University have developed a new way of analyzing and categorizing music using the complex mathematics found in music. The new method, called "geometrical music theory," looks at sequences of notes, chords, rhythms, and scales, and categorizes them so they can be grouped into "families." The families can be given a mathematical structure that can be represented by points in complex geometrical spaces, similar to x-y graphing used in algebra. Different categorizations produce unique geometrical spaces, reflecting the various ways musicians in different times understood music. The researchers say that having tools for conceptualizing music could lead to a variety of applications, such as creating new instruments, new musical toys, and new visualization tools. Tymoczko says the most satisfying part for him is being able to see the logical structure that links many different musical concepts. "To some extent, we can represent the history of music as a long process of exploring different symmetries and different geometries," he says.
                                  16 years ago by @gwpl
                                   
                                    acm_technews
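A toy version of the grouping-into-families idea (my simplification, not the authors' construction): represent a chord as pitch classes mod 12 and quotient out transposition, so transpositionally equivalent chords map to the same point.

```python
# Map a chord (MIDI note numbers) to a transposition-invariant "family".
def family(chord):
    pcs = sorted(p % 12 for p in chord)            # pitch classes, ordered
    return tuple((p - pcs[0]) % 12 for p in pcs)   # normalize lowest to 0

print(family([60, 64, 67]))  # C major -> (0, 4, 7)
print(family([62, 66, 69]))  # D major -> (0, 4, 7): same point in the space
print(family([60, 63, 67]))  # C minor -> (0, 3, 7): a different point
```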
                                     
                                     
                                  •  

                                    Without significant new investment, the Internet's current network architecture will reach the limits of its capacity by 2010, warned AT&T's Jim Cicconi at the Westminster eForum on Web 2.0 in London. "The surge in online content is at the center of the most dramatic changes affecting the Internet today," Cicconi says. "In three years' time, 20 typical households will generate more traffic than the entire Internet today." Cicconi says at least $55 billion in investments are needed in new infrastructure over the next three years in the United States alone, and $130 billion worldwide. The "unprecedented new wave of broadband traffic" will increase fifty-fold by 2015, Cicconi predicts, adding that AT&T will invest $19 billion to maintain its network and upgrade the core of its network. Cicconi adds that more demand for high-definition video will put an increasing strain on the Internet's infrastructure, noting that eight hours of video is loaded onto YouTube every minute, and that HD video consumes seven to 10 times more bandwidth than normal video. "Video will be 80 percent of all traffic by 2010, up from 30 percent today," he says.
                                    16 years ago by @gwpl
                                     
                                      acm_technews
                                       
                                       
                                    •  

A European FP6 project on a theme similar to mTeam, without covering mobile devices, but with a great deal on knowledge management for collaboration.
                                      16 years ago by @adamw
                                       
                                       
                                    •  

The Replicating Rapid-prototyper printer (RepRap) is an open source, self-copying 3D printer that works by building objects in layers of plastic, primarily polylactic acid, a biodegradable polymer made from lactic acid. Unlike existing prototyping printers, RepRap can replicate and update itself, including printing its own parts, says RepRap software developer Vik Olliver. The RepRap development team is spread throughout New Zealand, the United Kingdom, and the United States. By making the project open source, the team hopes to keep improving the machine until it can do what people want it to do. Improvements received by the team are then sent back to users, allowing RepRap to evolve as a whole. A recent addition to RepRap is interchangeable heads for different kinds of plastic. Olliver says a head that deposits low melting-point metal is in development, which means low melting-point metal could be put inside higher melting-point plastic, allowing for the production of structures such as motors. RepRap could also allow people to build circuits in 3D and in various shapes. Having the machine be able to copy itself is the most useful feature the team can give it and is the primary goal of the project, Olliver says.
                                      16 years ago by @gwpl
                                       
                                       
                                    •  

Robots could fill 3.5 million jobs in Japan by 2025, concludes a new Machine Industry Memorial Foundation report. The report says robots have the potential to save $21 billion on health care costs for the elderly by 2025. Robots could help caregivers with children or older people by reading books out loud or helping bathe the elderly, and they also could do some housework. Such assistance would free people to focus on more important things; caregivers, for example, could gain more than an extra hour a day. The robots could range from micro-sized capsules that detect lesions to high-tech vacuum cleaners, but it could take more time before they have a big impact in Japan. "There's the expensive price tag, the functions of the robots still need to improve, and then there are the mindsets of people," says Takao Kobayashi, who worked on the study. "People need to have the will to use the robots."
                                      16 years ago by @gwpl
                                       
                                       
                                    •  

The MoGo artificial intelligence engine defeated professional 5th dan player Catalin Taranu in a 9x9 game of Go during the Go Tournament in Paris in late March. The victory, the first officially sanctioned "non blitz" victory for a machine over a Go master, is considered a significant achievement because Go is thought to demand something closer to human intuition than chess does, and its possible combinations exceed the number of particles in the universe (a quick order-of-magnitude check follows this entry). Taranu says the system was close to reaching dan-level performance. The computer did lose to Taranu in a 19x19 configuration with a nine-stone handicap. The French National Institute for Research in Computer Science and Control (INRIA) developed the artificial intelligence engine. "The software used in this victory--the result of a collaboration between INRIA, the CNRS, LRI, and CMAP--is based on innovative technologies that can be used in numerous different areas, particularly in the conservation of resources which is such a vital issue when it comes to tackling environmental problems," says INRIA researcher Olivier Teytaud, who led the MoGo team.
                                      16 years ago by @gwpl
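The particles-in-the-universe comparison holds up as an order-of-magnitude claim; a quick overcount, assuming each of the 361 points of a 19x19 board is empty, black, or white:

```python
# Board configurations (an overcount of legal positions, but the right
# order of magnitude) vs. a common rough estimate of particle count.
positions = 3 ** 361
particles = 10 ** 80
print(f"3^361 ~ 10^{len(str(positions)) - 1}")   # ~10^172
print(positions > particles)                     # True, by ~92 orders of magnitude
```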
                                       
                                       
                                    •  

The use of organic and chemical materials to perform digital signal processing without electrical currents could be the next major technological revolution, say Northwestern professors Sotirios Tsaftaris and Aggelos Katsaggelos. Their research includes studying the use of DNA for digital signal processing, as DNA strands can be used to input and process elements, and DNA is an excellent medium for data storage. Digital samples can be recorded in DNA, which can be kept in liquid form in test tubes to save space. DNA can also be easily replicated using common laboratory techniques, creating a database that could be easily searched, no matter how large. Over the past 10 years scientists and engineers have experimented with different materials for performing signal processing, possibly leading to a "not-so-electric future" for digital signal processing, according to Tsaftaris and Katsaggelos. For example, artist and scientist Cameron Jones discovered that fungi grown on CDs distort the optically recorded sound, and that the fungi's growth patterns depend on the optical grooves recorded on the CD. Meanwhile, in 2005, E. coli cells were modified to react to light and were able to perform edge detection of an image, a basic signal-processing task (a minimal illustration follows this entry).
                                      16 years ago by @gwpl
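For reference, edge detection is one of the simplest signal-processing operations: flag the places where a signal changes sharply. A minimal, generic illustration (nothing here models the biology of the E. coli experiment):

```python
# Mark pixels whose brightness jumps relative to the neighbor on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edges = [[1 if x + 1 < len(row) and abs(row[x] - row[x + 1]) > 4 else 0
          for x in range(len(row))] for row in image]
for row in edges:
    print(row)   # the vertical edge shows up between columns 1 and 2
```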
                                       
                                       
                                    •  

Quantum computers would be able to process information in ways that standard computers cannot by tapping the unusual properties of quantum mechanics, but an analysis suggests that quantum computers would outclass conventional machines only modestly for most computing problems, writes MIT professor Scott Aaronson. Evidence now indicates that quantum machines would be susceptible to many of the same algorithmic restrictions as classical computers, and these restrictions are totally independent of the practical problems of constructing quantum computers. A solid quantum computer algorithm would guarantee that computational paths leading to an incorrect answer cancel out while paths leading to a right answer reinforce (a toy example follows this entry), Aaronson says. The discovery of an efficient quantum algorithm to solve NP-complete problems remains elusive despite much effort, but one definite finding is that such an algorithm would have to efficiently exploit the problems' structure in a manner beyond the capabilities of present-day methods. Aaronson points out that physicists have yet to come up with a final theory of physics, which leaves open the possibility that a physical way to efficiently solve NP-complete problems may one day be revealed by a future theory. "People speculate about yet more powerful kinds of computers, some of which would make quantum computers look as pedestrian as vending machines," he notes. "All of them, however, would rely on speculative changes to the laws of physics." Aaronson projects that the difficulty of NP-complete problems will someday be seen as a basic principle describing part of the universe's fundamental nature.
                                      16 years ago by @gwpl
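Aaronson's point about interference can be made concrete with toy amplitude bookkeeping (two computational paths per outcome, amplitudes invented): a quantum algorithm succeeds by arranging opposite-signed amplitudes on wrong answers and aligned amplitudes on right ones, since an outcome's probability is the squared magnitude of its summed amplitudes.

```python
# Destructive vs. constructive interference of path amplitudes.
amps_wrong = [+0.5, -0.5]          # opposite signs: paths cancel
amps_right = [+0.5, +0.5]          # same sign: paths reinforce
p_wrong = abs(sum(amps_wrong)) ** 2
p_right = abs(sum(amps_right)) ** 2
print(p_wrong, p_right)            # 0.0 and 1.0: only the right answer survives
```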
                                       
                                       
                                    •  

                                      In "Augmenting Human Intellect: A Conceptual Framework," Doug Engelbart, head of the Augmentation Research Center at Stanford Research Institute, presented a philosophy that favored efficiency over ease-of-use in human-computer interaction, notes Richard Monson-Haefel. In essence, Engelbart felt that basing computer interactions on the most efficient systems was the best way to achieve an optimal human-computer symbiosis. Monson-Haefel thinks the best embodiment of Engelbart's views is his five-finger keyboard, which is designed for use with one hand and carries out very rapid data entry and computer interactions when combined with a computer mouse, which Engelbart also conceived of. The keyboard-mouse combination was very tough to learn, which points to the crux of Engelbart's dilemma: More efficient and potentially more powerful human-computer interfaces have a very steep learning curve. Monson-Haefel says the modern approach to human-computer interaction stresses ease-of-use and usability without training, which runs counter to Engelbart's philosophy, which led to some of the most exceptional computer technologies in use today. The author does not think Engelbart's preference for efficiency is a completely unsound notion, and he reasons that "perhaps, like the violin, people could reach a new level of synergy with computers if they followed Engelbart's philosophy and focused on efficiency over ease-of-use."
                                      16 years ago by @gwpl
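The efficiency argument for the five-finger keyboard is easy to see with a little combinatorics (an illustration, not Engelbart's actual encoding): five keys pressed in combination give 2^5 - 1 = 31 distinct non-empty chords, enough for a full alphabet without the hand ever leaving position.

```python
# Count the chords available on a five-key keyset.
from itertools import combinations

keys = "12345"
chords = [c for n in range(1, 6) for c in combinations(keys, n)]
print(len(chords))   # 31 = 2**5 - 1
print(chords[:5])    # the five single-key "chords"
```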
                                       
                                       
                                    •  

An Interview With Bjarne Stroustrup - Dr. Dobb's Journal (03/27/08), Buchanan, James
C++ creator Bjarne Stroustrup says in an interview that next-generation programmers need a thorough education that covers training and understanding of algorithms, data structures, machine architecture, operating systems, and networking. "I think what should give is the idea that four years is enough to produce a well-rounded software developer: Let's aim to make a five- or six-year masters the first degree considered sufficient," he says. Before writing a software program, Stroustrup recommends that a programmer consult with peers and potential users to get a clear perspective of the problem domain, and then attempt to build a streamlined system to test the design's basic ideas. Stroustrup says he was inspired to create a first programming course to address what he perceived as a lack of basic skills for designing and implementing quality software among computer science students, such as the organization of code to ensure it is correct. "In my course I heavily emphasize structure, correctness, and define the purpose of the course as 'becoming able to produce code good enough for the use of others,'" he says. Stroustrup thinks programming can be vastly improved, especially by never losing sight of how important it is to produce correct, practical, and well-performing code. He describes a four-year undergraduate university course in computer science he helped design as having a fairly classical CS program with a slightly larger than usual software development project component in the first two years of study. Courses would cover hardware and software, discrete math, algorithms and data structures, operating and network systems, and programming languages, while a "programming studio" would be set up to expose students to group projects and project management.
                                      16 years ago by @gwpl
                                       
                                       
                                    •  

Most people today are only users of the information technology systems provided, making changes only when prompted, using "creativity" tools that stifle innovation, and accepting failures, disappointments, and crashes as inevitable and expected, writes Bill Thompson. In general, he says, users accept the lack of programming tools or encouragement to engage in writing code, possibly because of the increasing complexity of modern computer systems. With so many users completely ignorant of how to program, it becomes difficult to have a serious debate about the core technical issues that affect the development and deployment of IT systems in our lives. The applications that support all aspects of society are all built by programmers, yet there is a startling lack of programmers entering the software industry. Universities have seen applications for computer science degrees drop off, and computing is considered a non-essential subject in high school. Thompson says children need to see that programming is a useful skill that can be applied to a variety of careers, and that if more children were given suitable languages and tools for programming at school or at home, there would be at least some chance that those with an aptitude for coding would discover it early enough to become interested in the field.
                                      16 years ago by @gwpl
                                       
                                       
