The TAO framework is an open-source project that provides a general, open architecture for computer-assisted test development and delivery. Because upcoming evaluation needs will require collaboration among many stakeholders at different institutional levels, each with very different requirements for assessment tools, TAO aims to offer a modular, versatile framework for collaborative, distributed test development and delivery that can be extended and adapted to virtually any evaluation purpose amenable to computer-based assessment.
This article presents, step by step, the evaluation of websites according to formal criteria such as authorship, content, and relevance, along with their verifiability and credibility.
A. Said, E. Zangerle, and C. Bauer. Proceedings of the 17th ACM Conference on Recommender Systems, pages 1221-1222. New York, NY, USA, ACM, (September 2023)
M. Straesser, S. Eismann, J. von Kistowski, A. Bauer, and S. Kounev. Proceedings of the 2023 ACM/SPEC International Conference on Performance Engineering, pages 31-41. New York, NY, USA, Association for Computing Machinery, (2023)
N. Hazrati and F. Ricci. Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, pages 95-98. ACM, (July 2022)
M. Falis, H. Dong, A. Birch, and B. Alex. Proceedings of the 21st Workshop on Biomedical Language Processing, pages 389-401. Dublin, Ireland, Association for Computational Linguistics, (May 2022)
E. Zangerle, C. Bauer, and A. Said. Proceedings of the Fifteenth ACM Conference on Recommender Systems, pages 794-795. New York, NY, USA, ACM, (September 2021)
N. Felicioni, M. Dacrema, and P. Cremonesi. Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, pages 10-15. ACM, (June 2021)
S. Abass, S. Ahmed, S. Makki, and N. Osman. International Journal of Computer Science and Information Technology (IJCSIT), 10(1/2):1-15, (June 2021)
S. Wu and Y. Yang. (2021). arXiv:2105.04090. Comment: Accepted for publication at IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP). Online supplemental materials are attached to the end of the arXiv version.