Marius Wehner and Lynn Schmodde from the Faculty of Business Administration and Economics at Heinrich Heine University Düsseldorf report on their research into learning analytics. In the joint project LADi, they investigated potential discrimination and bias in the algorithms, as well as learners' perceptions of being assessed through learning analytics. The interviewer in episode 11 of the DINItus podcast is Erik Reidt from the ZIM/Multimediazentrum at HHU Düsseldorf.
This episode is all about bias. Our hosts Maren Scheffel and Nia Dowel talk to Shamya Karumbaiah and Rene Kizilcec about bias in learning analytics and some of the work they are doing in that area.
The course provides a basic understanding of machine learning and of working with algorithms. After an introductory part based on content-oriented knowledge transfer, you will have ample opportunity to develop competencies through research-based learning and real-world scenarios.
Certain words are like sparks in a puddle of gasoline. “Bias” is definitely one of those words—and for good reason. If there is something that we are doing, that we are unaware of, that is causing harm to others, then we definitely should be taking it seriously.
How strongly can learning analytics influence teachers' assessments of students? Which discriminatory, but also inequality-reducing, effects do algorithms have? In this article the authors present the potential and the risks of learning analytics and evaluate the results of a conjoint experiment.
Minister Yolanda Díaz reopens the negotiating table with unions and employers following the agreement reached to recognize 'riders' as employees of delivery platforms.
The text recognizes the employment relationship between delivery workers and the companies, in line with the Supreme Court ruling, and obliges companies to inform unions about how the application's algorithms work.
The latest social agreement puts Spain at the forefront of the EU in recognizing the labor rights of people who work in digital-platform delivery.
Breanne K. Litts, Kristin A. Searle, Bryan M. J. Brayboy, Yasmin B. Kafai, British Journal of Educational Technology, Feb 21, 2021
Commentary by Stephen Downes
(Washington, DC) Today, Congressman Tom Malinowski (NJ-7) and Congresswoman Anna G. Eshoo (CA-18) introduced the Protecting Americans from Dangerous Algorithms Act, legislation to hold large social media platforms accountable for their algorithmic amplification of harmful, radicalizing content that leads to offline violence. The bill narrowly amends Section 230 of the
SHA-2 (Secure Hash Algorithm 2), of which SHA-256 is a part, is one of the most popular hashing algorithms out there. In this article, we are going to break down each step of the algorithm as simply as we can and work through a real-life example by hand.
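Before working through the rounds by hand, it can help to see the end result the article is aiming for. A minimal sketch, using Python's standard-library `hashlib` rather than a hand-rolled implementation: whatever the input length, SHA-256 always produces a 256-bit digest, conventionally printed as 64 hex characters.

```python
import hashlib

# Hash a short message with SHA-256 using the standard library.
message = b"hello world"
digest = hashlib.sha256(message).hexdigest()
print(digest)

# The digest is always 256 bits = 32 bytes = 64 hex characters,
# regardless of how long the input message is.
assert len(digest) == 64
```

Changing even a single bit of the input produces a completely different digest (the avalanche effect), which is what makes hand-computing a single example so instructive.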
This essay will be somewhat longer but let me put the main point forward first: It is time we #defundAI. Millions upon millions are thrown towards researchers and businesses promising science fiction narratives while the world is burning to the ground. It’s time to stop.
Digital learning platforms make it increasingly possible to analyze data about learners, learning content, and the learning situation. This algorithmic analysis is called learning analytics. It enables individualized learning processes as well as early detection of learning difficulties. However, learning analytics also come with some drawbacks.
Smart equipment: To shape the learning of the future digitally, schools need suitable technical equipment. The range of options, however, is vast.
COVID-19 Exit through the App Store? A rapid evidence review of the technical considerations and societal implications of using technology to transition from the COVID-19 crisis was undertaken...
In 1687, Sir Isaac Newton published his seminal work “Philosophiae Naturalis Principia Mathematica,” in which he described the motion of celestial bodies (Newton, 1687).
This edited volume includes a collection of expanded papers from the 2019 Sino-German Symposium on AI-supported educational technologies, which was held in Wuhan, China, March, 2019. The contributors are distinguished researchers from computer science and learning science.
I am an AI researcher, and I’m worried about some of the societal impacts that we’re already seeing. In particular, these 5 things scare me about AI: 1. Algorithms are often implemented without ways to address mistakes. 2. AI makes it easier to not feel responsible. 3. AI encodes & magnifies bias. 4. Optimizing metrics above all else leads to negative outcomes. 5. There is no accountability for big tech companies.
Here at Trail of Bits we review a lot of code. From major open source projects to exciting new proprietary software, we’ve seen it all. But one common denominator in all of these systems is that for some inexplicable reason people still seem to think RSA is a good cryptosystem to use. Let me save…
Scalable learning is a key differentiator for modern enterprise business. The theory states that the institutions most likely to thrive in today’s changing economic environments will be those that provide opportunities not only to learn faster as a whole organization, but also to learn from other individuals and organizations to create new knowledge.
Experts warn about EU law that could change the architecture of the internet, forcing websites to install flawed and expensive filters that would block satirical content like memes and lead to digital monopolization.
It recently came to my attention that I was waging a war across multiple fronts and fatigue had struck — they were winning. For months I had battled, fighting their persistence with my propensity to click x.
First article in a series on algorithms and their use by public authorities. For the sociologist Dominique Cardon, the algorithm accompanies the evolution of a society marked by the individualization of relationships and a drift toward meritocracy.
What’s easy for a computer to do, and what’s almost impossible? Those questions form the core of computational complexity. We present a map of the landscape: P, NP, etc.
IDEA is a series of nonverbal algorithm assembly instructions by Sándor P. Fekete, Sebastian Morr, and Sebastian Stiller. They were originally created for Sándor's algorithms and data structures lecture at TU Braunschweig, but we hope they will be useful in all sorts of contexts. We publish them here so that they can be used by teachers, students, and curious people alike.
At some point, you can’t get any further with linked lists, selection sort, and voodoo Big O, and you have to go get a real algorithms textbook and learn all that horrible math, at least a little. But which book? There are tons of them. I haven’t read every algorithms book out there, but I…
From David Mount!
Alternate lecture notes at:
- https://www.cs.umd.edu/users/meesh/cmsc351/mount/lectures/
- https://www.cs.umd.edu/~mount/251/Lects/251lects.pdf
As algorithms gain ground with uses such as credit scoring and predictive policing systems, how can we make sure that automated decision-making works for the public good? An interview with Matthias Spielkamp of AlgorithmWatch by Aaron Sterniczky.
An objective function measures how close a predicted value is to the actual value. Usually, we look for the set of parameters that yields the smallest possible cost, which implies that the algorithm will perform well.
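The idea above can be sketched in a few lines. This is a minimal, hypothetical example (the data and the one-parameter model `y = w * x` are assumptions, not from the source) using mean squared error as the objective: the cost is zero exactly at the parameter that generated the data, and larger everywhere else.

```python
# Hypothetical data generated by the "true" parameter w = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

def mse(w):
    """Mean squared error: average squared gap between
    predictions w * x and actual values y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The objective is minimized at the true parameter,
# so an optimizer searching over w would settle near 2.0.
print(mse(1.0))  # large cost: predictions are far off
print(mse(2.0))  # zero cost: predictions match exactly
```

In practice the parameter set is high-dimensional and the minimum is found numerically (e.g. by gradient descent) rather than by inspection, but the role of the objective function is the same.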