The Analytics Workbench is a tool for performing different kinds of analyses. It combines a web-based frontend for designing analysis workflows with server-side computation of the designed processes. Workflows are represented in a visual language.
The workbench was designed as an extensible analysis framework. Extensibility covers both connecting different frontends to the computational backend and extending the set of available analysis features. As the workbench is still in development, new analysis features are added regularly.
The version offered here is a demo, restricted to a selection of analysis features from the field of Social Network Analysis. Please be aware that it is not intended for productive use: created analysis workflows and results may be deleted from time to time without further warning!
The ASSISTments platform ASSISTS students in learning while giving teachers assessMENT of their students' progress. ASSISTments is a generic system for any subject, from math to English to science. Different research teams have funding to build libraries of content in ASSISTments. Currently ASSISTments is best known for its mathematics content, but increasingly individual teachers use ASSISTments to write their own content, which they can share with other teachers. More than half of the questions in ASSISTments have been built by teachers, and that number is growing fast.
UXCam is an experience analytics solution for mobile apps. Session Replay, Heatmaps, Funnel Analytics and Quantitative Analytics make UXCam a complete enterprise analytics solution for deeply understanding user behavior. Book a short demo today.
In the Developmental Intelligence Laboratory, we are interested in understanding the fundamental cognitive mechanisms of human intelligence, learning, interaction, and communication in everyday activities. To do so, we collect and analyze micro-level multimodal behavioral data using state-of-the-art sensing and computational techniques.

One of our primary research aims is to understand human learning and early development. How do young children acquire fundamental knowledge of the world? How do they select and process the information around them and learn from scratch? How do they learn to move their bodies and to communicate and interact with others? Learning this kind of knowledge and these skills is the core of human intelligence. To understand how human learners achieve these goals, our primary approach is to attach GoPro-like cameras to the heads of young children to record egocentric video from their point of view. Using this innovative approach, we've been collecting video data of children's everyday activities, such as playing with their parents and peers, reading books with parents and caregivers, and playing outside. We've been using state-of-the-art machine learning and data mining approaches to analyze this high-density behavioral data. This research line will ultimately address the mystery of why human children are such efficient learners. Moreover, the findings from our research will be used to help improve learning for children with developmental deficits.

A complementary research line explores what human learning can teach us about how machines can learn. Can we model and simulate how a human child learns and develops? To this end, our research aims to bridge developmental science in psychology with machine learning and computer vision in computer science.
M-Lab provides the largest collection of open Internet performance data on the planet. As a consortium of research, industry, and public-interest partners, M-Lab is dedicated to providing an ecosystem for the open, verifiable measurement of global network performance. Real science requires verifiable processes, and M-Lab welcomes scientific collaboration and scrutiny. This is why all of the data collected by M-Lab’s global measurement platform are made openly available, and all of the measurement tools hosted by M-Lab are open source. Anyone with time and skill can review and improve the underlying methodologies and assumptions on which M-Lab’s platform, tools, and data rely. Transparency and review are key to good science, and good science is key to good measurement.
E. Hakami and D. Hernandez-Leo. LAK21: 11th International Learning Analytics and Knowledge Conference, pp. 269–279. New York, NY, USA: Association for Computing Machinery (2021).