Amazon Elastic Block Store (EBS) enables a single Amazon EC2 instance to attach one or more highly available, highly reliable storage volumes of up to 1 TB each. Once attached, applications on the instance can read from and write to an Amazon EBS volume much as they would a local disk drive. With Amazon EBS, an Amazon EC2 instance can now be terminated without losing the data that resides on the volume. One use case is running a relational database within an Amazon EC2 instance while keeping its data on an Amazon EBS volume.
In the months prior to leaving Heavy, I led an exciting project to build a hosting platform for our online products on top of Amazon’s Elastic Compute Cloud (EC2). We eventually launched our newest product at Heavy using EC2 as the primary hosting platform. I’ve been following a lot of what other people have been doing with EC2 for data processing and handling big encoding or rendering jobs. We set out to build a fairly standard LAMP hosting infrastructure where we could easily and quickly add additional capacity. In fact, we can add new servers to our production pool in under 20 minutes, from the time we call the “run instance” API at EC2, to the time when public traffic begins hitting the new server. This includes machine startup time, adding custom server config files and cron jobs, rolling out application code, running smoke tests, and adding the machine to public DNS. What follows is a general outline of how we do this.
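The pipeline described above can be sketched as a simple orchestration script. Everything here is hypothetical — the function names and the `run_instance` stub stand in for Heavy's internal tooling and the EC2 API call, neither of which the post shows:

```python
# Hypothetical sketch of the under-20-minute provisioning pipeline.
# Each stub stands in for one real step (EC2 API call, config push,
# code rollout, smoke tests, DNS update); none of this is Heavy's code.

def run_instance():
    """Stand-in for the EC2 'run instance' API call; returns a fake id."""
    return "i-0000demo"

def provision(instance_id, log):
    """Walk a new server through every step before it takes public traffic."""
    for step in ("wait for machine startup",
                 "push custom server config files and cron jobs",
                 "roll out application code",
                 "run smoke tests",
                 "add machine to public DNS"):
        log.append((instance_id, step))  # real code would do the work here

log = []
provision(run_instance(), log)
for instance_id, step in log:
    print(instance_id, step)
```

The useful property of structuring it this way is that every step is explicit and ordered, so adding capacity is a single repeatable call rather than a manual checklist.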
Disco is an open-source implementation of the Map-Reduce framework for distributed computing. Disco supports parallel computations over large data sets on an unreliable cluster of computers. The Disco core is written in Erlang. Users of Disco typically write jobs in Python, which makes it possible to express even complex algorithms or data processing tasks in often only tens of lines of code. This means that you can quickly write scripts to process massive amounts of data. Disco was started at Nokia Research Center as a lightweight framework for rapid scripting of distributed data processing tasks. So far Disco has been successfully used, for instance, in parsing and reformatting data, data clustering, probabilistic modelling, data mining, full-text indexing, and log analysis with hundreds of gigabytes of real-world data. Linux is the only supported platform, but you can also run Disco on Amazon's Elastic Compute Cloud.
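Disco's own job API isn't shown here, but as a rough illustration of the map-reduce model it implements, here is a word count in plain Python — the same map/shuffle/reduce shape a Disco job would have, with no Disco imports and nothing distributed:

```python
# Plain-Python sketch of the map-reduce pattern (word count).
# A real Disco job would define map and reduce functions like these
# and let the framework distribute the work across the cluster.
from collections import defaultdict

def map_fn(line):
    # Map phase: emit a (word, 1) pair for every word in an input line.
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Reduce phase: sum all counts emitted under one key.
    return word, sum(counts)

def mapreduce(lines):
    # Shuffle phase: group intermediate pairs by key, then reduce each group.
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(mapreduce(["to be or not to be"]))
# → {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

Because the map and reduce functions are pure and per-key, the framework can run them on many machines and retry failed pieces — which is what lets Disco tolerate an unreliable cluster.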
Stax provides easy deployment of test and production environments, a local development model, and strong integration with existing development tools, frameworks, and processes.

Get Started
* Jump start your development with built-in application templates for popular Java technologies, including Struts, GWT, Wicket, JRuby on Rails, Jython, Adobe Flex, and ColdFusion.
* Create the MySQL databases required for your application.
* Deploy to multiple environments such as test, staging, and production to support your application's lifecycle.

Develop
* Develop on your workstation using the Stax SDK, which provides a local reproduction of the server production environment.
* Use any editor or IDE (Eclipse projects are automatically generated for new applications).
* Stax Ant plug-ins let you integrate Stax with your existing Java projects.

Deploy
* Publish applications from your local environment to the Stax cloud on Amazon EC2.