TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
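The node/edge model described above can be illustrated with a minimal plain-Python sketch. This is a conceptual illustration only, not TensorFlow's actual API; the class names `Node` and `Constant` are hypothetical:

```python
# Conceptual sketch of a data flow graph, the model TensorFlow is built on.
# Nodes are operations; edges carry the values flowing between them.
# Plain Python, hypothetical names -- NOT TensorFlow's real API.

class Node:
    """A graph node: a mathematical operation over its incoming values."""
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the input values
        self.inputs = inputs  # edges: nodes whose outputs flow into this one

    def evaluate(self):
        # Pull values along the incoming edges, then apply this node's op.
        return self.op(*(n.evaluate() for n in self.inputs))

class Constant(Node):
    """A source node that emits a fixed value (a 0-d 'tensor')."""
    def __init__(self, value):
        super().__init__(lambda: value)

# Build the graph for (a + b) * c, then evaluate it.
a, b, c = Constant(2.0), Constant(3.0), Constant(4.0)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)

result = mul.evaluate()  # → 20.0
```

In TensorFlow itself the edges carry multidimensional arrays rather than scalars, and the runtime can place different parts of the graph on different CPUs or GPUs.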
Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
COBOSLAB (Cognitive Bodyspaces: Learning and Behavior):
A laboratory that investigates and models the self-organized learning of, and behavior within, integrated multimodal, multimodular bodyspace representations.
J. Lin, R. Nogueira, and A. Yates. (2020). arXiv:2010.06467. Comment: Final preproduction version of a volume in Synthesis Lectures on Human Language Technologies by Morgan & Claypool.
Q. Le, and T. Mikolov. Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1188--1196. Beijing, China, PMLR, (June 2014)
S. Wang, L. Hu, Y. Wang, X. He, Q. Sheng, M. Orgun, L. Cao, F. Ricci, and P. Yu. (2021). arXiv:2105.06339. Comment: Accepted by the IJCAI 2021 Survey Track; copyright is owned by IJCAI. The first systematic survey on graph-learning-based recommender systems. arXiv admin note: text overlap with arXiv:2004.11718.
M. Paris, and R. Jäschke. Proceedings of the 14th International Conference on Knowledge Science, Engineering and Management, volume 12816 of Lecture Notes in Artificial Intelligence, pages 1--14. Springer, (2021)
M. Dacrema, P. Cremonesi, and D. Jannach. (2019). arXiv:1907.06902. Comment: Source code available at: https://github.com/MaurizioFD/RecSys2019_DeepLearning_Evaluation.
P. Xia, S. Wu, and B. Van Durme. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7516--7533. Association for Computational Linguistics, (November 2020)
X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T. Chua. Proceedings of the 26th International Conference on World Wide Web, pages 173--182. Republic and Canton of Geneva, CHE, International World Wide Web Conferences Steering Committee, (2017)
C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann. (2019). arXiv:1902.08142. Comment: We find that a random policy in NAS works amazingly well and propose an evaluation framework for fair comparison. 8 pages.
G. Neto. Universidade Federal do Maranhão (UFMA), Programa de Pós-Graduação em Ciência da Computação/CCET, Departamento de Informática/CCET. Dissertation, (Jul 26, 2018)