Saturday, July 19, 2008

Today's Paper: Follow-ups for Google's MapReduce

Ralf Lämmel: Google's MapReduce programming model - Revisited. Sci. Comput. Program. 70(1): 1-30 (2008)
http://dx.doi.org/10.1016/j.scico.2007.07.001

Abstract:
Google’s MapReduce programming model serves for processing large data sets in a massively parallel manner. We deliver the first rigorous description of the model including its advancement as Google’s domain-specific language Sawzall. To this end, we reverse-engineer the seminal papers on MapReduce and Sawzall, and we capture our findings as an executable specification. We also identify and resolve some obscurities in the informal presentation given in the seminal papers. We use typed functional programming (specifically Haskell) as a tool for design recovery and executable specification. Our development comprises three components: (i) the basic program skeleton that underlies MapReduce computations; (ii) the opportunities for parallelism in executing MapReduce computations; (iii) the fundamental characteristics of Sawzall’s aggregators as an advancement of the MapReduce approach. Our development does not formalize the more implementational aspects of an actual, distributed execution of MapReduce computations.
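For a feel of what component (i), the basic program skeleton, looks like, here is a minimal Haskell sketch in the spirit of the paper (the names, types, and the word-count example below are illustrative assumptions, not the paper's actual specification): the mapper fans each input pair out into intermediate key/value pairs, a grouping stage collects the values per intermediate key, and the reducer condenses each group.

  -- A minimal sketch of the MapReduce program skeleton (illustrative, not the
  -- paper's actual definitions).
  import qualified Data.Map as Map
  import Data.Map (Map)

  -- The skeleton: map over every input key/value pair, group the intermediate
  -- pairs by key, then reduce each group independently.
  mapReduce :: Ord k2
            => (k1 -> v1 -> [(k2, v2)])   -- mapper
            -> (k2 -> [v2] -> Maybe v3)   -- reducer
            -> Map k1 v1                  -- input data
            -> Map k2 v3                  -- output data
  mapReduce mapper reducer input =
      Map.mapMaybeWithKey reducer         -- 3. reduce each group of values
    . Map.fromListWith (flip (++))        -- 2. group intermediate values by key
    . map (fmap (:[]))                    --    (wrap each value in a singleton list)
    . concatMap (uncurry mapper)          -- 1. map over all input pairs
    . Map.toList
    $ input

  -- Example: word counting over (document name, contents) pairs.
  wordCount :: Map String String -> Map String Int
  wordCount = mapReduce mapper reducer
    where
      mapper _doc contents = [ (w, 1) | w <- words contents ]
      reducer _word counts = Just (sum counts)

  main :: IO ()
  main = print (wordCount (Map.fromList [("d1", "a b a"), ("d2", "b c")]))

The signatures already hint at the obscurities the paper resolves, e.g. that the reducer's output need not have the same type as the intermediate values, and that a reducer may drop a key entirely (hence the Maybe).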

Jeffrey Dean's MapReduce paper and talk:
  • Jeffrey Dean, Sanjay Ghemawat: MapReduce: simplified data processing on large clusters. Commun. ACM 51(1): 107-113 (2008)
    http://doi.acm.org/10.1145/1327452.1327492
  • Jeffrey Dean: MapReduce and Other Building Blocks for Large-Scale Distributed Systems at Google. Invited Talk at USENIX Annual Technical Conference 2007
    http://www.usenix.org/media/events/usenix07/tech/mp3/dean.mp3
