MapReduce: simplified data processing on large clusters
Jeffrey Dean and Sanjay Ghemawat.
Communications of the ACM 51 (1): 107--113 (January 2008)

MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a <i>map</i> and a <i>reduce</i> function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.