JPPF enables applications with large processing power requirements to be run on any number of computers, in order to dramatically reduce their processing time. This is done by splitting an application into smaller parts that can be executed simultaneously on different machines.
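The split-and-distribute idea JPPF describes can be sketched in plain Python (JPPF itself is a Java framework; this is not its API, just an illustration of dividing one job into independent tasks executed concurrently, with the hypothetical `partial_sum` standing in for an application's work unit):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Hypothetical work unit: sum one slice of a large range."""
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, n_tasks=4):
    """Split [0, n) into n_tasks chunks and run them simultaneously,
    then combine the partial results -- the same pattern JPPF applies
    across multiple machines instead of local processes."""
    step = n // n_tasks
    chunks = [(i * step, (i + 1) * step if i < n_tasks - 1 else n)
              for i in range(n_tasks)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(1000))
```

Each chunk is independent, so adding workers (or machines, in JPPF's case) shortens the wall-clock time without changing the result.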
Hama (Korean for "hippopotamus") is a parallel matrix computation package currently in incubation with Apache. It is a library of matrix operations for large-scale processing and development environments, as well as a Map/Reduce framework for large-scale numerical analysis and data mining tasks that need intensive matrix computation (e.g., inversion), such as linear regression, PCA, and SVM. It will be useful for many scientific applications, e.g., physics computations, linear algebra, computational fluid dynamics, statistics, graphics rendering, and many more.
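The principle behind such Map/Reduce matrix libraries can be sketched without Hama's own (Java/Hadoop) API: a large matrix product decomposes into independent block products (the map step) whose per-block sums are then combined (the reduce step). A minimal plain-Python illustration, assuming square matrices whose size is a multiple of the block size:

```python
def matmul(a, b):
    """Naive dense product of two lists-of-lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def madd(a, b):
    """Element-wise sum of two equally sized matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def block(m, r0, r1, c0, c1):
    """Extract the sub-matrix m[r0:r1, c0:c1]."""
    return [row[c0:c1] for row in m[r0:r1]]

def blocked_matmul(a, b, bs):
    """Compute A @ B via bs x bs blocks. Each block product is an
    independent task that could run on a different machine; summing
    the products over k is the combine (reduce) step."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            acc = [[0] * bs for _ in range(bs)]
            for k in range(0, n, bs):
                acc = madd(acc, matmul(block(a, i, i + bs, k, k + bs),
                                       block(b, k, k + bs, j, j + bs)))
            for di in range(bs):
                c[i + di][j:j + bs] = acc[di]
    return c
```

Because no block product depends on another, the work parallelizes naturally; a real framework adds data distribution, fault tolerance, and out-of-core storage on top of this decomposition.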
C. Ma, V. Smith, M. Jaggi, M. Jordan, P. Richtárik, and M. Takáč. Adding vs. Averaging in Distributed Primal-Dual Optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), JMLR W&CP volume 37, pp. 1973-1982, 2015. arXiv:1502.03508.
D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. Published as a conference paper at the 3rd International Conference on Learning Representations (ICLR), San Diego, 2015. arXiv:1412.6980.
D. Povey, X. Zhang, and S. Khudanpur. Parallel Training of DNNs with Natural Gradient and Parameter Averaging. International Conference on Learning Representations (ICLR): Workshop track, 2015. arXiv:1410.7455.
O. Yadan, K. Adams, Y. Taigman, and M. Ranzato. Multi-GPU Training of ConvNets. 2013. arXiv:1312.5853.