Apache's Hadoop project aims to solve these problems by providing a framework for running large data processing applications on clusters of commodity hardware. Combined with Amazon EC2 for running the application and Amazon S3 for storing the data, we can run large jobs very economically. This paper describes how to use Amazon Web Services and Hadoop to run an ad hoc analysis on a large collection of web access logs that otherwise would have cost a prohibitive amount in either time or money.
This paper explores the challenges of constructing a distributed e-business architecture based on the concept of the Request-Based Virtual Organization (RBVO) and presents a solution based on ebXML and Open Source e-business components.
Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant, elastic distributed systems to be built easily and run effectively.
C. Ghidini and L. Serafini. Modelling and Using Context: Proceedings of the 2nd International and Interdisciplinary Conference, CONTEXT'99, vol. 1688 of Lecture Notes in Artificial Intelligence, pp. 159-172. Springer-Verlag, Heidelberg, 1999.
G. Pirro, C. Mastroianni, and D. Talia. Future Generation Computer Systems: The International Journal of Grid Computing - Theory, Methods and Applications, 26(1): 38-49, January 2010.
L. Kagal, S. Cost, T. Finin, and Y. Peng. Proceedings of the IJCAI-01 Workshop on Autonomy, Delegation and Control, 2001. http://citeseer.nj.nec.com/kagal01framework.html