Apache's Hadoop project aims to solve these problems by providing a framework for running large data processing applications on clusters of commodity hardware. Combined with Amazon EC2 for running the application and Amazon S3 for storing the data, we can run large jobs very economically. This paper describes how to use Amazon Web Services and Hadoop to run an ad hoc analysis of a large collection of web access logs that would otherwise have cost a prohibitive amount of either time or money.
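The kind of ad hoc log analysis described here follows Hadoop's map/reduce pattern: a map step emits a key/value pair per log line, and a reduce step aggregates by key. The sketch below illustrates that pattern in plain Python (not the Hadoop Java API), counting hits per URL; the sample log lines are hypothetical stand-ins for data that a real job would read from Amazon S3.

```python
from collections import defaultdict

def map_line(line):
    # Map step: emit (url, 1) for each access-log entry. The request
    # path is the 7th whitespace-separated field in the common
    # combined log format.
    fields = line.split()
    yield fields[6], 1

def reduce_counts(pairs):
    # Shuffle and reduce: group the emitted pairs by key, then sum
    # the counts for each URL.
    totals = defaultdict(int)
    for url, count in pairs:
        totals[url] += count
    return dict(totals)

# Hypothetical sample data; a Hadoop job would stream such lines
# from files stored in S3 across many mappers.
logs = [
    '10.0.0.1 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326',
    '10.0.0.2 - - [10/Oct/2024:13:55:37 +0000] "GET /about.html HTTP/1.1" 200 512',
    '10.0.0.3 - - [10/Oct/2024:13:55:38 +0000] "GET /index.html HTTP/1.1" 200 2326',
]

pairs = [kv for line in logs for kv in map_line(line)]
hits = reduce_counts(pairs)
# hits maps each URL to its request count
```

In a real Hadoop deployment the map and reduce functions run in parallel across the cluster, which is what makes the economics of renting EC2 capacity on demand attractive for one-off analyses.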