Algorithmic hiring is the use of tools based on Artificial Intelligence (AI) to find and select job candidates. Like other applications of AI, it is vulnerable to perpetuating discrimination. Considering technological, legal, and ethical aspects, the EU-funded FINDHR project will facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation.
Sociological research on inequality has increasingly moved beyond the examination of inequalities as they presumably exist to explore the generic narrative processes that perpetuate that inequality. However, this research remains concentrated on either individual or ideological grand narratives and ignores the fact that the work narratives do, including the production and structuring of inequality, occurs at multiple levels (cultural, structural, organizational, and personal) and never exclusively at just one of these. In this study, we use Somali origin narratives to describe conceptually the ways in which narratives produced at different personal and societal levels, whether cultural, institutional, or organizational, dialectically structure the generic processes that produce and perpetuate social inequality.
Welcoming emotions and identifying needs to support adherence to health measures. This tool is built from proven resources developed by physicians working on an empathetic communication approach to support patients suffering from serious illnesses, as well as their caregivers. These resources are based on shared key principles, briefly recalled in the first part of this tool.
How strongly can Learning Analytics influence instructors in their assessment of students? What discriminatory, but also inequality-reducing, effects do algorithms produce? In this contribution, the authors present the potential and the dangers of Learning Analytics and analyze the findings of a conjoint experiment.
While classifying AI systems used at work as high-risk is appropriate, the Proposed Regulation is far from sufficient to protect workers adequately.
How European Union non-discrimination laws are interpreted and enforced varies by context and by state definitions of key terms, such as “gender” or “religion.” Non-discrimination laws become even more…