Proceedings of the Second International Workshop on Multimodal Immersive Learning Systems (MILeS 2022)
At the Seventeenth European Conference on Technology Enhanced Learning (EC-TEL 2022)
Toulouse, France, September 12th-16th, 2022.
The 1st Workshop on Philosophy of Learning Analytics (POLA) at the 11th International Conference on Learning Analytics and Knowledge (LAK21). Online, April 12-13, 2021
Method kit for online events and workshops: make webinars interactive and motivating with these trust cards and take-a-break cards!
- Aug. 19 – Aug. 28, 2020
- Nike Sun (Massachusetts Institute of Technology; chair), Jian Ding (University of Pennsylvania), Ronen Eldan (Weizmann Institute), Elchanan Mossel (Massachusetts Institute of Technology), Joe Neeman (University of Texas at Austin), Jelani Nelson (UC Berkeley), Tselil Schramm (Stanford University; Microsoft Research Fellow)
- Sep. 28 – Oct. 2, 2020
- Lihong Li (Google Brain; chair), Marc G. Bellemare (Google Brain)
- The success of deep neural networks in modeling complicated functions has recently been harnessed by the reinforcement learning community, resulting in algorithms that can learn in environments previously thought to be far too large. Successful applications span domains from robotics to health care. However, this success is not well understood from a theoretical perspective. Which modeling choices are necessary for good performance, and how does the flexibility of deep neural nets help learning? This workshop will connect practitioners with theoreticians, with the goal of understanding the most impactful modeling decisions and the properties of deep neural networks that make them so successful. Specifically, we will study the approximation ability of deep neural nets in the context of reinforcement learning.
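The idea sketched above, using a neural network as a flexible function approximator for reinforcement learning, can be illustrated with a minimal example. This is a hypothetical sketch, not material from the workshop: the 5-state chain environment, the two-layer network, and all hyperparameters are invented for illustration, and the update is plain semi-gradient Q-learning with hand-coded gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2  # actions: 0 = left, 1 = right

def step(s, a):
    """One transition of a toy chain: reward 1 only for 'right' at the last state."""
    if a == 1:
        if s == N_STATES - 1:
            return 0, 1.0  # reach the goal, collect reward, reset to start
        return s + 1, 0.0
    return max(s - 1, 0), 0.0

# One-hidden-layer network mapping a one-hot state to a Q-value per action.
W1 = rng.normal(scale=0.5, size=(N_STATES, 16))
W2 = rng.normal(scale=0.5, size=(16, N_ACTIONS))

def q_values(s):
    x = np.eye(N_STATES)[s]
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return h, h @ W2

gamma, lr, eps = 0.9, 0.05, 0.2  # discount, step size, exploration rate
s = 0
for _ in range(5000):
    h, q = q_values(s)
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q))
    s2, r = step(s, a)
    _, q2 = q_values(s2)
    td_error = (r + gamma * q2.max()) - q[a]
    # Semi-gradient of 0.5 * td_error**2 with respect to both weight matrices.
    grad_q = np.zeros(N_ACTIONS)
    grad_q[a] = -td_error
    x = np.eye(N_STATES)[s]
    dW2 = np.outer(h, grad_q)
    dh = (W2 @ grad_q) * (h > 0.0)  # backprop through the ReLU
    dW1 = np.outer(x, dh)
    W2 -= lr * dW2
    W1 -= lr * dW1
    s = s2

# Greedy policy induced by the learned Q-function.
greedy = [int(np.argmax(q_values(st)[1])) for st in range(N_STATES)]
```

With a tabular representation the same algorithm is classical Q-learning; swapping in the network is exactly the modeling choice whose theoretical consequences the workshop examines.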
- Aug. 31 – Sep. 4, 2020
- Csaba Szepesvari (University of Alberta, Google DeepMind; chair), Emma Brunskill (Stanford University), Sébastien Bubeck (MSR), Alan Malek (DeepMind), Sean Meyn (University of Florida), Ambuj Tewari (University of Michigan), Mengdi Wang (Princeton)
This program aims to bring together researchers across the disciplines that have contributed to the theory of reinforcement learning. It will review past developments and identify promising directions of research, with an emphasis on existing open problems, ranging from the design of efficient, scalable exploration algorithms to the control of learning and planning. It also aims to deepen the understanding of model-free versus model-based learning and control, and of efficient methods that exploit structure and adapt to easier environments.
The program focused on the following four themes:
- Optimization: How and why can deep models be fit to observed (training) data?
- Generalization: Why do these trained models work well on similar but unobserved (test) data?
- Robustness: How can we analyze and improve the performance of these models when applied outside their intended conditions?
- Generative methods: How can deep learning be used to model probability distributions?
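The first two themes, optimization (fitting the training data) and generalization (performing well on unseen data), can be illustrated on a toy regression task. This sketch is not from the program; the task, model family, and sample sizes are invented, and polynomial fitting stands in for deep-model training to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
# Noisy samples of sin(3x): half for training, half held out for testing.
x = rng.uniform(-1, 1, size=40)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.size)
x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]

errs = {}
for degree in (1, 3, 15):
    coeffs = np.polyfit(x_tr, y_tr, degree)  # optimization: fit the training data
    train_err = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)   # generalization
    errs[degree] = (train_err, test_err)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Higher-capacity models drive the training error down, while the test error reveals how well the fit generalizes; the puzzle named in the themes above is why heavily overparameterized deep networks often generalize well anyway.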
Machine Learning Summer School (MLSS) is a course about modern methods of statistical machine learning and inference. It presents topics which are at the cor...
F. Mitzlaff, S. Doerfel, A. Hotho, R. Jäschke, and J. Mueller. 15th Discovery Challenge of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2013, Prague, Czech Republic, September 27, 2013. Proceedings, 1120, pages 7--24. Aachen, Germany, CEUR-WS, (2014)
G. Schreiber, A. Stemmer, and R. Bischoff. IEEE Workshop on Innovative Robot Control Architectures for Demanding (Research) Applications: How to Modify and Enhance Commercial Controllers (ICRA 2010), pages 15--21. Citeseer, (2010)