BibSonomy :: group :: utrust

bookmarks  (34)


publications  (228)

  • preview
    Who are the best spreaders of information in a social network? The answer may surprise you. · http://www.technologyreview.com/blog/arxiv/24748/?a=f
    4 years and 7 months ago
    by gwpl
    3
    (0)
  • preview
    Non-repudiation is a system whereby sensitive data sent over the Internet is digitally signed at the source with a signature that can be traced to the user's computer as a safeguard against fraud, but Len Sassaman of the Catholic University of Leuven warns that making this system the default setting for all traffic on a network would enable authorities to trace the source of any online activity and take away users' anonymity. Worse still, Sassaman and University of Ireland colleague Meredith Patterson say that the One Laptop per Child (OLPC) foundation is unintentionally engaged in establishing such a system throughout the Third World by supplying inexperienced users with Internet-ready laptops. Theft of the laptops is discouraged with a security model called Bitfrost in which each laptop automatically phones an anti-theft server and sends its serial number once a day so that it can get an activation key, and any machine reported stolen is refused activation. Sassaman and Patterson caution that the security model's use of non-repudiable digital signatures could be exploited by oppressive regimes to identify and silence dissidents. "They may not intend for the signatures to be used for non-repudiation, but it's possible to use them for this purpose," Sassaman says. Although the OLPC laptops are primarily intended to be used for educational purposes, which some people claim would preclude government monitoring, Sassaman says it is unlikely that the systems will be used solely by children, and that conditions in some developing nations might actually encourage children to act as whistleblowers. Sassaman and Patterson are modifying the Bitfrost security model to enable the laptops to identify each other without compromising their users' privacy, based on existing cryptographic methods that cannot be employed for non-repudiation. · http://technology.newscientist.com/channel/tech/mg19826596.100-laptops-could-betray-users-in-the-developing-world.html
    6 years and 2 months ago
    by gwpl
    1
    (0)
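The daily phone-home activation scheme described in the summary above can be sketched roughly as follows. This is only an illustrative toy (the serial numbers, server secret, and key derivation are all made up, not Bitfrost's actual protocol), but it shows both the anti-theft check and the privacy cost: every request ties a serial number to a time and network location.

```python
import hashlib

STOLEN_SERIALS = {"SN-1234"}        # serials reported stolen (hypothetical)
SERVER_SECRET = b"server-secret"    # placeholder key material

def request_activation(serial: str, day: int):
    """Daily phone-home: return a one-day activation key, or None if stolen.

    Note the traceability concern raised above: the server necessarily
    learns which machine asked, and when.
    """
    if serial in STOLEN_SERIALS:
        return None
    msg = f"{serial}:{day}".encode()
    return hashlib.sha256(SERVER_SECRET + msg).hexdigest()

assert request_activation("SN-5678", day=17852) is not None
assert request_activation("SN-1234", day=17852) is None   # stolen: no key
```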
  • preview
    University of California, Berkeley professor of electrical engineering and computer sciences Richard Karp has been named a laureate of the 2008 Kyoto Prize, Japan's equivalent of the Nobel Prize, awarded by the Inamori Foundation. Karp is being recognized for his lifetime achievements in computer theory. A senior research scientist at the International Computer Science Institute in Berkeley, he is considered one of the world's leading computer theorists. Karp's work has significantly advanced the theory of NP-completeness, conceived in 1971 by former UC Berkeley math professor Stephen Cook. Karp developed a standard method for classifying combinatorial problems and identifying their level of intractability. Combinatorial problems that are NP-complete are the most difficult to solve. "Karp's theory streamlined algorithm design for problem-solving, accelerated algorithm engineering, and brought computational complexity within the scope of scientific research," says the Inamori Foundation. NP-completeness theory has become a cornerstone in modern theoretical computer science, and in the 1980s Cook and Karp received an ACM A.M. Turing Award for their contributions to the concept of NP-completeness. Karp has recently focused on bioinformatics and computational biology, including the development of algorithms for constructing various kinds of physical maps of DNA targets, and methods for classifying biological samples on the basis of gene expression data. · http://www.berkeley.edu/news/media/releases/2008/06/20_kyotoprize.shtml
    6 years and 2 months ago
    by gwpl
    1
    (0)
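Karp's classification method rests on reductions: one problem is translated into another so that a solver for the second answers the first. A classic tiny example, sketched below on a toy graph (brute force, illustration only): a graph has a k-independent-set exactly when its complement graph has a k-clique.

```python
from itertools import combinations

def has_clique(edges, nodes, k):
    """Brute-force check for a k-clique (fine only for toy instances)."""
    es = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in es for p in combinations(c, 2))
               for c in combinations(nodes, k))

def complement(edges, nodes):
    """Karp-style reduction: INDEPENDENT-SET(G, k) -> CLIQUE(complement(G), k)."""
    es = {frozenset(e) for e in edges}
    return [p for p in combinations(nodes, 2) if frozenset(p) not in es]

# In G below, {1, 3} is an independent set of size 2, so the complement
# graph must contain a 2-clique.
nodes = [1, 2, 3]
edges = [(1, 2), (2, 3)]
assert has_clique(complement(edges, nodes), nodes, 2)
```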
  • preview
    Now that IBM's RoadRunner supercomputer has broken the petaflop barrier, reaching more than one thousand trillion sustained floating-point operations per second, supercomputer developers say the next step is an exascale system capable of a million trillion calculations per second, a thousand times faster than a petaflop. At the upcoming International Supercomputing Conference in Dresden, Germany, University of Tennessee professor Jack Dongarra will give a presentation on exaflop systems in the year 2019. Dongarra says performance gains are following a predictable path, with the first gigaflop system being built 22 years ago. Dongarra says there will be exaflop computing in 11 years, and that by then every system on the Top500 computing list will be at least a petaflop. He says the greatest achievement with the RoadRunner system is the programming that allows the system to utilize different processor technologies. To achieve exascale systems, Dongarra says developers will have to create new programming languages and algorithms that can calculate at high degrees of concurrency to complete calculations quickly. The difficulty in reaching that level of programming, and changing to new methods, could be the roadblock that prevents exaflop computing from being realized in a similar timeline, he says. · http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9095279
    6 years and 3 months ago
    by gwpl
    1
    (0)
  • preview
    Steve Jobs' presentation at the opening session of Apple's Worldwide Developers Conference included a description of the next version of the Mac OS X operating system, dubbed Snow Leopard, which will be designed for use with parallel processors. Jobs says Apple will find a solution to the problem of programming the new generation of parallel chips efficiently. He says Apple will focus on "foundational features" that will be the basis for a future version of the Mac operating system. At the core of Snow Leopard will be a parallel-programming technology code-named Grand Central. Snow Leopard will utilize the computer power inherent in graphics processors that are now used in tandem with microprocessors in almost all personal and mobile computers. Jobs also described a new processing standard that Apple is proposing called Open Computing Language (OpenCL), which is intended to refocus graphics processors on standard computing functions. "Basically it lets you use graphics processors to do computation," Jobs says. "It's way beyond what Nvidia or anyone else has, and it's really simple." · http://bits.blogs.nytimes.com/2008/06/10/apple-in-parallel-turning-the-pc-world-upside-down/
    6 years and 3 months ago
    by gwpl
    1
    (0)
  • preview
    Researchers led by Carnegie Mellon University professor David Brumley have found that software patches could be just as harmful as they are helpful because attackers could use the patches to automatically generate, in as little as 30 seconds, software that attacks the very vulnerabilities the patch is supposed to fix. The malicious software could then be used to attack computers that had not received and installed the patch. Microsoft Research's Christos Gkantsidis says it takes about 24 hours to distribute a patch through Windows Update to 80 percent of the systems that need it. "The problem is that the infrastructure capacity that exists is not enough to serve all the users immediately," Gkantsidis says. "We currently don't have enough technologies that can distribute patches as fast as the worms." This distribution delay gives attackers time to receive a patch, find out what it is fixing, and create and distribute an exploit that will infect computers that have not yet received the patch. The researchers say new methods for distributing patches are needed to make them more secure. Brumley suggests taking steps to hide the changes that a patch makes, releasing encrypted patches that cannot be decrypted until the majority of users have downloaded them, or using peer-to-peer distribution methods to release patches in a single wave. · http://www.technologyreview.com/Infotech/20839/?a=f
    6 years and 3 months ago
    by gwpl
    1
    (0)
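The "encrypted patch" idea Brumley suggests can be sketched in two phases: everyone downloads an opaque ciphertext first, and only once most clients hold the blob is the key released, so attackers cannot diff the patch early. The sketch below uses a deliberately simple hash-counter stream cipher for illustration only; it is not the researchers' actual scheme and is not production-grade cryptography.

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key via SHA-256 in counter mode."""
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR data with the keystream (self-inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

patch = b"fix: bounds-check input length in parser"   # hypothetical patch body
key = b"released-after-majority-download"             # withheld until phase 2
blob = xor(patch, key)           # phase 1: everyone fetches the ciphertext
assert blob != patch             # blob reveals nothing diffable
assert xor(blob, key) == patch   # phase 2: key released, all apply at once
```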
  • preview
    According to a recent survey from Merrill Lynch, 16% of the Baby Boomer workforce is looking for part-time work, and 42% will only take jobs that will allow them time off for leisure. Similar types of findings across all demographics are forcing companies to re-evaluate the flexibility options that they offer employees, especially as the so-called war for talent intensifies. While organizations recognize that inflexible work arrangements are a primary reason top talent leaves an organization, flexible work arrangements can be difficult to implement in practice. As a guide, the article provides a review of flexible work arrangements at six different companies. When it comes to implementing a flexible work arrangement, a number of conditions prompt organizations to reconfigure their work plans. For example, the company could be losing market share, experiencing a deteriorating bottom line or facing a chronic shortage of talent. While there may be many reasons for an organization to embrace more flexible work situations for employees, common arrangements include flex scheduling that accommodates doctor appointments or school visits. Other arrangements include telecommuting one or more days per week; compressing workweeks from five days to four or three days per week; and job sharing. · http://www.workforce.com/section/06/feature/25/51/84/
    6 years and 3 months ago
    by gwpl
    1
    (0)
  • preview
    A veteran programmer outlines the key differences between natural programmers and career programmers. While both types of programmers possess the same amount of talent and drive, they have vastly different approaches to completing their work. While some programmers are better at researching problems and developing cost-effective solutions, others have a natural instinct for arriving at innovative solutions. Some programmers love what they do, while others are more interested in the bottom line of the business. Natural programmers are able to make quick associations between very different topics. As a result, they are able to make the jump from code to real life application quickly. Natural programmers realize that there are many ways to do things correctly and several different ways to solve the same problem. While natural programmers understand the need for a system of rules within the workplace, they tend to treat authority with less respect than their career programmer peers. Moreover, they can be difficult to manage since they consider many office conventions (e.g. arriving at 9 am) to be arbitrary. Employers need to understand the motivations of the natural programmer and the type of office environment in which they are likely to thrive. They are not driven primarily by monetary compensation, but by the ability to work with interesting technologies and challenging projects. They tend to respect individuals within the organization who can teach them new technologies. Finally, they thrive when they can downplay the significance of status reports, QA forms, documentation, and timesheets. · http://itmanagement.earthweb.com/features/article.php/3749841/Natural+Programmers+(Code+Monkeys)+vs.+Career+Programmers+(Geeks+in+Suits).htm
    6 years and 3 months ago
    by gwpl
    1
    (0)
  • preview
    Nintendo set to launch "Wii Fit" exercise game. For years, video games have been blamed for turning kids into idle layabouts who only venture off the couch to fill up on potato chips and soda. Nintendo Co Ltd now aims to shatter that image with a game that aims to get players off the couch and lead them to stretch, shake and sweat their way to a healthy life. · http://uk.reuters.com/article/technology-media-telco-SP/idUKN1641507620080519
    6 years and 4 months ago
    by gwpl
    1
    (0)
  • preview
    Optimizing the capabilities of multicore processors in all sorts of products requires bridging the chasm between processors' and software's capability, and industry sources say the long-term focus should be on figuring out a way to write code for parallel computing. "We don't even know for sure what we should be teaching, but we know we should be changing what we're teaching," says University of California, Berkeley professor David Patterson, a former president of ACM. UC Berkeley and the University of Illinois at Urbana-Champaign will split $20 million from Intel and Microsoft to underwrite Universal Parallel Computing Research Centers over the next five years, with Berkeley's share going toward the enhancement of research already done by the school's Parallel Computing Laboratory and the hiring of 50 researchers to focus on the problem of writing software for parallelism. Patterson says Berkeley has started introducing freshmen to parallel computing through classes focusing on the "map-reduce" method, while upperclassmen are being given a grounding in "sticky" parallelism issues such as load balancing and synchronization. Patterson acknowledges that an entirely new programming language may need to be invented in order to tackle the challenge of parallel computing. Brown University professor Maurice Herlihy says a more likely possibility is the evolution of parallel programming features by existing languages--a view endorsed by AMD's Margaret Lewis, who cites the necessity of interim solutions to amend legacy software written for unicore processors along with software under development. Lewis says AMD is trying to infuse parallel coding methods via compilers and code analyzers, noting that with these interim solutions "programmers aren't getting the full benefits of parallelism ... but it runs better in a multicore environment." · http://www.sysmannews.com/content/article.aspx?ArticleID=32043
    6 years and 4 months ago
    by gwpl
    1
    (0)
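The "map-reduce" method Patterson says Berkeley teaches to freshmen can be shown in a few lines. A minimal single-machine word-count sketch: mappers emit (key, value) pairs independently (and therefore could run in parallel), a shuffle groups pairs by key, and reducers fold each group. The example data is made up.

```python
from collections import defaultdict
from functools import reduce

docs = ["parallel code is hard", "parallel hardware is here"]

# Map: each document independently emits (word, 1) pairs.
mapped = [(w, 1) for d in docs for w in d.split()]

# Shuffle: group values by key.
groups = defaultdict(list)
for k, v in mapped:
    groups[k].append(v)

# Reduce: fold each group's values into a count.
counts = {k: reduce(lambda a, b: a + b, vs) for k, vs in groups.items()}

assert counts["parallel"] == 2 and counts["hard"] == 1
```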
  • preview
    MIT researcher Seth Lloyd believes that a new architecture for quantum random access memory (QRAM) could be used to reduce the energy wasted by random access memory (RAM) as well as for completely anonymous Internet searches. Classical computing requires the use of RAM to retrieve information, but RAM design is wasteful and subject to interference, Lloyd says. Lloyd worked with Vittorio Giovannetti at the NEST-CNR-INFM in Pisa, Italy, and Lorenzo Maccone at the University of Pavia, Italy, to create a system that works as QRAM. Lloyd says their QRAM architecture was discovered when he and his colleagues were researching how to make QRAM work on classical RAM design. He says QRAM is a "sneakier" way of accessing RAM. In traditional RAM, the first bit of an address throws two switches, the second throws four, and so on, Lloyd says. With QRAM, "all the bits of the address only interact with two switches," Lloyd says. The energy saved using QRAM is not enough to offset the larger energy problems associated with classical computing, and Lloyd says QRAM is slower than RAM. However, he says QRAM's benefits can be applied to quantum Internet searches. "If you had a quantum Internet, then this would be useful," he says. "This offers a huge decrease in energy used and an increase in robustness." For this to work, Lloyd says "dark fiber" is needed, and although it is already being used for some classical communications, a quantum Internet would need more. · http://www.physorg.com/news129289258.html
    6 years and 4 months ago
    by gwpl
    1
    (0)
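Lloyd's switch-count comparison is easy to make concrete. In the conventional fanout tree he describes, address bit k throws 2^k switches, so an n-bit address throws 2 + 4 + ... + 2^n = 2^(n+1) - 2 in total; in the QRAM design each bit touches only two switches, roughly 2n total. A back-of-envelope sketch (the function names are ours, not from the paper):

```python
def classical_switches(n_bits: int) -> int:
    """Fanout-tree RAM: bit k throws 2**k switches; total 2**(n+1) - 2."""
    return sum(2 ** k for k in range(1, n_bits + 1))

def qram_switches(n_bits: int) -> int:
    """QRAM per the quote above: each address bit interacts with 2 switches."""
    return 2 * n_bits

assert classical_switches(3) == 2 + 4 + 8 == 14
# A 30-bit address (a gigacell memory): ~2 billion vs. 60 switch events.
assert classical_switches(30) == 2**31 - 2
assert qram_switches(30) == 60
```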
  • preview
    Many women in IT credit their mothers for making them believe they could succeed in any career. IT and service manager Priscilla Milam says when she got into computer science there were no other women in the program, and it was her mother who told her to learn to live in a man's world, encouraging her to read the headlines in the financial pages, sports pages, and general news, and not to get emotional. "Still, IT in general is a man's world, and by keeping up with the news and sports, when the pre/post meetings end up in discussions around whether the Astros won or lost or who the Texans drafted, I can participate; and suddenly they see me as part of the group and not an outsider," Milam says. Catalyst says the percentage of women holding computer and mathematics positions has declined since 2000, from 30 percent to 27 percent in 2006. Milam and other women in high-tech positions say a passion for technology begins early in life and a few encouraging words from their mothers helped them realize they could overcome the challenges that exist when entering an industry dominated by men. CSC lead solution architect Debbie Joy says the key to succeeding in IT is to put gender aside at work and learn to regard colleagues as peers, and soon they will do the same. · http://www.networkworld.com/news/2008/050808-mom-knows-best-side.html
    6 years and 4 months ago
    by gwpl
    1
    (0)
  • preview
    Both young men and women are avoiding high school courses that could lead to careers in IT, but young women are dropping those courses faster than young men, says Australia's Charles Sturt University Faculty of Education dean Toni Downes. Downes was a senior member of a research project that examined the interest of male and female high school students in particular high school subjects. The study of 1,334 male and female students found that only 13 percent of girls said they would study IT-related subjects in their senior years, and both boys and girls shied away from high school computing and IT subjects between 2002 and 2007. Downes believes that a shift in computer curriculum from a combination of computer literacy and foundational studies to computing and IT as an academic discipline has contributed to the decline in enrollments, particularly among females. "The reasons are complex, but the reasons that girls give are often the same reasons that disinterested boys give," Downes says. "Sometimes they are making their judgments on careers based on stereotypes, sometimes the girls are making their decisions based on self-limiting identities like 'it's not cool for me to be a nerd' because they think the career is nerdy." Downes says part of the problem is that girls do not engage with technology in ways that allow them to use it playfully, instead of just functionally, so they are not attracted to thinking creatively or critically about how and why technology works. · http://www.computerworld.com.au/index.php/id;566180830
    6 years and 4 months ago
    by gwpl
    1
    (0)
  • preview
    Cryptography has been an arms race, with codemakers and hackers constantly updating their arsenals, but quantum cryptography could theoretically give codemakers the upper hand. Even the strongest classical encryption schemes, such as RSA, can in principle be cracked with enough brute-force computing power. However, quantum cryptography could make possible uncrackable code using quantum key distribution (QKD). Modern cryptography relies on the use of digital keys to encrypt data before sending it over a network so it can be decrypted by the recipient. QKD promises a theoretically uncrackable code, one that can be easily distributed and still be transparent. Additionally, the nature of quantum mechanics makes it so that if an eavesdropper tries to intercept or spy on the transmission, both the sender and the receiver will know. Any attempt to read the transmission will alert the sender and the receiver, allowing them to generate a new key to send securely. QKD had its first real-world application in Geneva, where quantum cryptography was used in the electronic voting system. Not only did QKD guarantee that the poll was secure, but it also ensured that no votes were lost in transmission, because the uncertainty principle established that there were no changes in the transmitted data. The SECOQC project, which did the work for the voting system, says the goal is to establish network-wide quantum encryption that can work over longer distances between multiple parties. · http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89694
    6 years and 4 months ago
    by gwpl
    1
    (0)
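The eavesdropper-detection guarantee described above can be simulated classically in the style of the BB84 protocol: measuring a qubit in the wrong basis randomizes it, so an interceptor who measures and re-sends introduces roughly 25% errors in the positions where sender and receiver happened to choose the same basis. A clean channel would show essentially 0%. This is a toy statistical sketch, not a real quantum simulation.

```python
import random

random.seed(1)
N = 2000
bits  = [random.randint(0, 1) for _ in range(N)]   # Alice's raw key bits
a_bas = [random.choice("+x") for _ in range(N)]    # Alice's preparation bases
b_bas = [random.choice("+x") for _ in range(N)]    # Bob's measurement bases

def measure(bit, prep_basis, meas_basis):
    """Same basis: faithful readout. Wrong basis: result is random."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

# Eve intercepts every qubit, measures in her own random basis, re-sends.
e_bas = [random.choice("+x") for _ in range(N)]
eve   = [measure(b, p, e) for b, p, e in zip(bits, a_bas, e_bas)]
bob   = [measure(b, e, m) for b, e, m in zip(eve, e_bas, b_bas)]

# Sifting: keep only positions where Alice's and Bob's bases agreed.
sifted = [(x, y) for x, y, p, m in zip(bits, bob, a_bas, b_bas) if p == m]
errors = sum(x != y for x, y in sifted) / len(sifted)
assert errors > 0.15   # ~25% expected with Eve present; ~0% without her
```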
  • preview
    Three competing teams of computer researchers are working on new types of software for use with multicore processors. Stanford University and six computer and chip makers--Sun Microsystems, Advanced Micro Devices, Nvidia, IBM, Hewlett-Packard, and Intel--are creating the Pervasive Parallelism Lab. Previously, Microsoft and Intel helped finance new labs at the University of California, Berkeley and the University of Illinois at Urbana-Champaign. The research efforts are in response to a growing awareness that the software industry is not ready for the coming availability of microprocessors with multiple cores on a single chip. Computer and chip manufacturers are concerned that if software cannot keep up with hardware improvements, consumers will not feel the need to upgrade their systems. Current operating system software can work with the most advanced server microprocessors and processors for video game machines, which have up to eight cores. But software engineers say that most applications are not designed for efficient use of the dozens or hundreds of processors that will be available in future computers. The university efforts will share some approaches, but will try different experiments, programming languages, and hardware innovations. The efforts will also rethink operating systems and compilers. The Berkeley researchers have divided parallel computing problems into seven classes, with each class being approached in different ways. The Stanford researchers say they are looking for new ways to hide the complexity of parallel computing from programmers, and will use virtual worlds and robotic vehicles to test their efforts. · http://www.nytimes.com/2008/04/30/technology/30lab.html?_r=1&oref=slogin
    6 years and 4 months ago
    by gwpl
    1
    (0)
  • preview
    At the International World Wide Web Conference in Beijing, two Google researchers unveiled VisualRank, software they say will advance digital image searching on the Web the same way Google's PageRank software advanced Web page searches. VisualRank is an algorithm that blends image-recognition software methods with techniques that weigh and rank images that look the most similar. Most image searches are based on cues from the text associated with each image, and not on the actual content of the image itself. Image analysis is a largely unsolved problem in computer science, the Google researchers say. "We wanted to incorporate all of the stuff that is happening in computer vision and put it in a Web framework," says Google's Shumeet Baluja, who made the presentation along with Google researcher Yushi Jing. Their paper, "PageRank for Product Image Search," focuses on a subset of the images that Google has cataloged. The researchers concentrated on the 2,000 most popular product queries on Google's product search, and sorted the top 10 images from both its ranking system and the standard Google Image Search results. The research effort then used a team of 150 Google employees to create a scoring system for image "relevance." The researchers say VisualRank returned 83 percent fewer irrelevant images. · http://www.nytimes.com/2008/04/28/technology/28google.html?_r=1&oref=slogin
    6 years and 4 months ago
    by gwpl
    1
    (0)
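The core of the VisualRank idea as described above is the PageRank recipe applied to a visual-similarity graph: images are nodes, edge weights are similarity scores, and images are ranked by the stationary distribution of a damped random walk. A minimal power-iteration sketch (the 3x3 similarity matrix is invented for illustration; real VisualRank derives it from image features):

```python
# Hypothetical pairwise visual-similarity scores: images 0 and 1 look alike.
sim = [
    [0.0, 0.9, 0.1],
    [0.9, 0.0, 0.2],
    [0.1, 0.2, 0.0],
]
n, d = len(sim), 0.85            # d is the usual PageRank damping factor

rank = [1.0 / n] * n             # start uniform
for _ in range(100):             # power iteration to the stationary vector
    rank = [
        (1 - d) / n
        + d * sum(rank[j] * sim[j][i] / sum(sim[j]) for j in range(n))
        for i in range(n)
    ]

# The two mutually similar images reinforce each other and outrank image 2.
assert rank[0] > rank[2] and rank[1] > rank[2]
```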
  • preview
    Computer scientist Donald E. Knuth, winner of ACM's A.M. Turing Award in 1974, says in an interview that open-source code has yet to reach its full potential, and he anticipates that open-source programs will start to be totally dominant as the economy makes a migration from products to services, and as increasing numbers of volunteers come forward to tweak the code. Knuth admits that he is unhappy about the current movement toward multicore architecture, complaining that "it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law to the software writers by giving us machines that work faster only on a few key benchmarks!" He acknowledges the existence of important parallelism applications but cautions that they need dedicated code and special-purpose methods that will have to be significantly revised every several years. Knuth maintains that software produced via literate programming was "significantly better" than software whose development followed more traditional methodologies, and he speculates that "if people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming." Knuth cautions that software developers should be careful when it comes to adopting trendy methods, and expresses strong reservations about extreme programming and reusable code. He says the only truly valuable thing he gets out of extreme programming is the concept of working in teams and reviewing each other's code. Knuth deems reusable code to be "mostly a menace," and says that "to me, 're-editable code' is much, much better than an untouchable black box or toolkit." · http://www.informit.com/articles/article.aspx?p=1193856
    6 years and 4 months ago
    by gwpl
    9
    (0)
  • preview
    The Defense Advanced Research Projects Agency has issued a call for research proposals to design compilers that can dynamically optimize programs for specific environments. As the Defense Department runs programs across a wider range of systems, it is facing the lengthy and manual task of tuning programs to run under different environments, a process DARPA wants to automate. "The goal of DARPA's envisioned Architecture-Aware Compiler Environment (AACE) Program is to develop computationally efficient compilers that incorporate learning and reasoning methods to drive compiler optimizations for a broad spectrum of computing system configurations," says DARPA's broad area announcement. The compilers can be written in the C and Fortran programming languages, but the BAA encourages work in languages that support techniques for the parallelization of programs. The quality of the proposals will determine how much DARPA spends on the project, which will run at least through 2011. Proposals are due by June 2. · http://www.gcn.com/online/vol1_no1/46142-1.html
    6 years and 4 months ago
    by gwpl
    1
    (0)
  • preview
    The European Union-funded RobotCub project will send an iCub robot to six European research labs, where researchers will train iCub to learn and act independently by learning from its own experiences. The project at Imperial College London will examine how "mirror neurons," which fire in humans to trigger memories of previous experiences when humans are trying to understand the physical actions of others, can be translated into a digital application. The team at UPMC in Paris will explore the dynamics needed to achieve full body control for iCub, and the researchers at TUM Munich will work on developing iCub's manipulation skills. A project team at the University of Lyons will explore internal simulation techniques, which occur in our brains when planning actions or trying to understand the actions of others. In Turkey, a team at METU in Ankara will focus on language acquisition and the iCub's ability to link objects with verbal utterances. The iCub robots are about the size of three-year-old children and are equipped with highly dexterous hands and fully articulated heads and eyes. The robots have hearing and touch capabilities and are designed to be able to crawl and to sit up. Researchers expect to enable iCub to learn by doing, including the ability to track objects visually or by sound, and to be able to navigate based on landmarks and a sense of its own position. · http://cordis.europa.eu/ictresults/index.cfm/section/news/tpl/article/BrowsingType/Features/ID/89673
    6 years and 4 months ago
    by gwpl
    1
    (0)