eBox Platform eBox Platform is a unified network server that offers easy and efficient computer network administration for SMBs. It can act as a Gateway, an Infrastructure Manager, a Unified Threat Manager, an Office Server, a Unified Communication Server or a combination of them. These functionalities are tightly integrated, automating most tasks, avoiding mistakes and saving time for system administrators. eBox Platform is released under the GNU General Public License (GPL) and runs on top of Ubuntu GNU/Linux. eBox Technologies S.L. is the company behind eBox Platform and all the technologies and services related to it, providing a comprehensive set of deployment, support and managed services for the Global eBox Partner Network.
Wolfram Alpha to open data feeds Wolfram Alpha, a project from the makers of the math software Mathematica, will soon be opening up its data sets, opening up new possibilities for data mash-ups.
NxTop delivers secure, controlled desktops to any PC in the form of a virtual desktop or a virtual laptop, greatly reducing the cost and complexity of desktop management and patch management while improving desktop environment security. NxTop provides numerous advantages over traditional VDI solutions.
Future of the Screen: After the CRT, a Display Deluge By Jon Stokes | 09.02.09 For the seven decades following the debut of television at the 1933 Chicago World's Fair, the term "cathode ray tube" (CRT) was virtually synonymous with "display." Shortly after the turn of the millennium, liquid crystal display (LCD) technology began to replace the venerable CRT in desktop-computer applications, and by the middle of the decade LCD was rapidly squeezing the CRT out of the television market that the latter had invented. Just two years ago, it seemed obvious that the display space was in the final stages of a relatively straightforward evolutionary shift, with LCD replacing the CRT in the same way that the gas-powered automobile had replaced the horse and buggy.
At Backblaze, we provide unlimited storage to our customers for only $5 per month, so we had to figure out how to store hundreds of petabytes of customer data in a reliable, scalable way—and keep our costs low. After looking at several overpriced commercial solutions, we decided to build our own custom Backblaze Storage Pods: 67-terabyte 4U servers for $7,867. In this post, we’ll share how to make one of these storage pods, and you’re welcome to use this design. Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us. Evolving and lowering costs is critical to our continuing success at Backblaze. Below is a video that shows a 3-D model of the Backblaze Storage Pod. Continue reading to learn the exact details of the design.
The purpose of our centre is to provide a national focus for research and development into curation issues and to promote expertise and good practice, both national and international, for the management of all research outputs in digital format. Find out more about the DCC.
Tinderbox stores and organizes your notes, plans, and ideas. It can help you analyze and understand them. And Tinderbox helps you share ideas through web journals and weblogs.
AFNI (which might be an acronym for Analysis of Functional NeuroImages) is a set of C programs for processing, analyzing, and displaying functional MRI (FMRI) data - a technique for mapping human brain activity. It runs on Unix X11 Motif systems, including
It costs more than three times as much to publish an article in a humanities or social-science journal as it does to publish one in a science, technical, or medical (STM) journal, and the prevailing model used by many publishers of STM journals will not work for their humanities and social-sciences counterparts. Those are some of the eye-opening conclusions released today in a report on an in-depth study of eight flagship journals in the humanities and social sciences.
In my last project at work, I had to replace NFS with GFS2 and Clustering. So in this tutorial I will show you how to create a Red Hat or CentOS cluster with GFS2. I will also show you how to optimize GFS2 performance in the next HowTo, because you will quickly notice some loss of performance until you do a little optimization first. I will first show you how to build a cluster with GFS2 on the command line, and in the next tutorial I will show you how to do the same thing using Conga.
For around seventy years, 800 rolls of early nitrate film sat in sealed barrels in the basement of a shop. Now miraculously rediscovered and restored, the Mitchell
VMware to Xen migration (translated from the German original: http://www.eisxen.org/49.html) To convert a Linux VMware image to a normal image, these are the most important commands. First install qemu, then convert the VMware image to a raw image:

    qemu-img convert -f vmdk /var/vmware/vm/vm01.vmdk \
        -O raw /tmp/vmimage.raw

A VMware image is a container file, so its partitions have to be split up into individual images for Xen. To find out what is in the VMware image, take a look with:

    fdisk -l -u /home/xen/vmimage.raw

    Device  Boot    Start      End   Blocks  Id  System
    linux1  *          63   208844   104391  83  Linux
    linux2         208845  7550549  3670852  83  Linux
    linux3        7550550  8193149   321300  82  Linux swap

I presume there are already some LVM volumes available. Now use the command below a couple of times to extract the partitions, for example for the second one:

    dd if=vmimage.raw of=/dev/vg01/xendomU01slash bs=512 skip=208845 count=7341705

Of course, check the result (with a (loop) mount).
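The dd parameters above come straight from the fdisk output: skip is the partition's start sector and count is its sector length. A minimal sketch of that arithmetic, using the linux2 row from the example (the command is only printed here, not run, since it writes to a real device):

```shell
# Derive dd extraction parameters for the linux2 partition from the
# fdisk -l -u listing above. Sectors are 512 bytes (hence bs=512).
START=208845                 # "Start" column for linux2
END=7550549                  # "End" column for linux2
COUNT=$((END - START + 1))   # inclusive sector range -> 7341705

# Show the resulting extraction command:
echo "dd if=vmimage.raw of=/dev/vg01/xendomU01slash bs=512 skip=$START count=$COUNT"
```

Repeating this for each partition's Start/End pair produces the per-partition images that Xen needs.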
Re: How to migrate a VMware guest to Xen Hi, it's possible with the qemu tools. Install the qemu package (which is available from the OpenSuSE Build Service) and use the following command:

    qemu-img convert -f vmdk /path/to/vm01.vmdk -O raw /tmp/xenimage.raw

This creates, from the VMware vmdk disk, a raw disk image that can then be used with Xen.
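To illustrate how the resulting raw image might then be attached to a Xen guest, here is a sketch of a domU configuration fragment; the domain name, memory size, and device name are assumptions for illustration, not from the original post:

```
# /etc/xen/xenimage.cfg -- illustrative fragment only
name   = "xenimage"
memory = 512
disk   = ['file:/tmp/xenimage.raw,xvda,w']
```

The 'file:' backend points Xen at the raw image produced by qemu-img, exposed to the guest as a writable disk.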
The United States increasingly relies on information networks for the conduct of vital business. These networks are potentially subject to major disruptions from a variety of external sources. To date, there has been no clear statement of the magnitude of this threat or the ability of the various networks to withstand or respond to such disruptions. This project examines the national communications and information infrastructure. The research was conducted for the Office of Science and Technology Policy with task funding from the National Science Foundation.
As computational scientists are confronted with increasingly massive datasets from supercomputing simulations and experiments, one of the biggest challenges is having the right tools to gain scientific insight from the data. One common method for gaining insight is to use scientific visualization, which transforms abstract data into more readily comprehensible images using advanced computer software and computer graphics. But the ever-growing size of scientific datasets presents a significant challenge to modern scientific visualization tools. As a result, there is a great deal of motivation to explore use of large, parallel resources, such as those at the U.S. Department of Energy's (DOE) supercomputing centers, to take advantage of their vast computational processing power, I/O bandwidth and large memory footprint.