Dr. Marzano, a nationally known educational researcher and developer of the Marzano Teacher Evaluation Model and the Marzano School Leadership Evaluation Model, discusses how districts may use teacher evaluation models primarily either as measurement systems, which provide a static picture of a teacher's performance at a given point, or as growth systems, which track improvements in teacher pedagogy over time. See more at: http://www.marzanoevaluation.com/news/teacher-evaluation-whats-fair-whats-effective/
I'm including this link because the idea of player and team assessment in professional sports has begun to change. I find it a fascinating topic: our society is seeing a shift in how we evaluate in general, including in the realm of professional sports. In the past, player evaluation was done by experts who would watch and make a decision, a process that is very subjective. Analytics provide ways to quantify in numbers what we see happen on the ice or field. The same goes for teams. While at the end of the day the score is what matters, analysts have found metrics that identify keys to long-term success for teams as well.
Presentation used by Tinne De Laet, KU Leuven, for a keynote during an event organised by Leiden University, Erasmus University Rotterdam, and Delft University of Technology.
The presentation presents the results of two case studies from the Erasmus+ projects ABLE and STELA, and provides 9 recommendations regarding learning analytics.
Recommender systems provide users with content they might be interested in. Conventionally, recommender systems are evaluated mostly using prediction accuracy metrics. However, the ultimate goal of a recommender system is to increase user satisfaction.
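As a minimal illustration of the accuracy-only evaluation this abstract critiques, here is a sketch (with hypothetical ratings and function names, not taken from the cited paper) computing RMSE between predicted and held-out ratings:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and observed ratings."""
    assert len(predicted) == len(actual) and actual
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )

# Hypothetical held-out ratings on a 1-5 scale.
actual = [4.0, 3.0, 5.0, 2.0]
predicted = [3.5, 3.0, 4.5, 2.5]
print(rmse(predicted, actual))
```

A low RMSE here only says the model predicts ratings well; it says nothing about whether users are actually satisfied with the recommendations they receive, which is the gap the abstract points at.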
Now that "the only constant is change" in society, our capacity to engage with novel challenges is of first-order importance. What are the personal dispositions that authentic learning needs to cultivate, and can we make these assessable and visible to learners and educators?
We introduce Vicuna-13B, an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90%* of cases. The cost of training Vicuna-13B is around $300. The code and weights, along with an online demo, are publicly available for non-commercial use.
A. Said, E. Zangerle, and C. Bauer. Proceedings of the 17th ACM Conference on Recommender Systems, pages 1221–1222, New York, NY, USA, ACM, September 2023.