The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.
Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants
I find that one of the most frustrating kinds of AI hype is when people who are actually in a position to use their own expertise to push back instead give in to the FOMO and do the hype for tech companies. Today's case in point is a recent article in The Chronicle of Higher Education...
This essay is based in part on presentations given in the Spring and Summer of 2018 at the Creative AI Meetup at the Photographer’s Gallery in London, the University of Chicago’s Franke Institute for the Humanities, the Aarhus Institute of Advanced Studies in Denmark, INRS in Quebec, and the University of Warwick Centre for Interdisciplinary Methodologies Research Forum. It is the second part of a longer discussion about deep learning, the first part of which is in the essay, “Deep Learning as an Epistemic Ensemble”.
I wrote this essay for the printed magazine of the Elevate Festival 2024. On Friday, March 1st, at 2pm I will participate in a panel discussion there on the issue of “AI vs. Democracy”, which people can attend live, follow on stream, or watch as a recording later.
The prolific use of Artificial Intelligence Large Language Models (LLMs) presents new challenges we must address and new questions we must answer. For instance, what do we do when AI is wrong?
Yann Le Cun’s book “Quand la machine apprend” helps decode some of the mysteries of artificial intelligence by examining how the neurons of the brain work... the human brain, that is.
Silicon Valley utopians imagine AI solutions to ecological crisis, while being oblivious to the real material and ecological harms their fantasies wreak.
Algorithmic hiring is the use of tools based on Artificial Intelligence (AI) for finding and selecting job candidates. Like other applications of AI, it risks perpetuating discrimination. Considering technological, legal, and ethical aspects, the EU-funded FINDHR project will facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation.
Tl;dr: The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
Well-known managers and experts such as Elon Musk warn of the risks of artificial intelligence. They want to halt its development, arguing that the technology must serve people.
William Eden forecasts an AI winter. He argues that AI systems (1) are too unreliable and too inscrutable, (2) won’t get that much better (mostly due to hardware limitations) and/or (3) won’t be that profitable.
How do systems like ChatGPT work? Are they really “intelligent”? What happens when they are deployed at scale? What would that mean for libraries?
Here I collect a selected set of critical lenses on so-called ‘AI’, including the recently hyped ChatGPT. I hope these resources are useful for others as well, and help make clear why we need to remain vigilant and resist the AI hype. I expect to keep updating this blog as time passes. If you have…
Europe can become a global leader in artificial intelligence, but only if it protects its citizens and involves workers in the regulatory and deployment process. In that regard, the European Commission’s recent draft regulation leaves much to be desired.
What do Timnit Gebru’s firing and the recent papers coming out of Google tell us about the state of research at the world’s biggest AI research department? The high point for Google’s research in…
Employers using software to monitor workers’ every movement are likely to be in breach of EU privacy laws, trade unions warn today as they launch a new report on artificial intelligence at work.
While classifying AI systems used at work as high-risk is appropriate, the Proposed Regulation falls far short of protecting workers adequately.
The Commission is proposing the first ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.
How European Union non-discrimination laws are interpreted and enforced varies by context and by state definitions of key terms, like “gender” or “religion.” Non-discrimination laws become even more…
This essay will be somewhat longer but let me put the main point forward first: It is time we #defundAI. Millions upon millions are thrown towards researchers and businesses promising science fiction narratives while the world is burning to the ground. It’s time to stop.
The digital transformation holds many promises to spur innovation, generate efficiencies and improve services while boosting more inclusive and sustainable growth and enhancing well-being.
The ETUC is convinced that the precautionary principle, enshrined in the Treaty, means that the strategy should be inclusive and ambitious while restricting its actions to high risks.
The European Commission puts forward a European approach to Artificial Intelligence and Robotics. It deals with technological, ethical, legal and socio-economic aspects to boost EU's research and industrial capacity and to put AI at the service of European citizens and economy.
Despite its commitment to ‘trustworthy’ artificial intelligence, the EU is bankrolling AI projects that are questionable, write Fieke Jansen and Daniel Leufer.
The European Parliament's internal market committee (IMCO) insists humans must remain in control of automated decision-making processes, ensuring that people are responsible for, and able to overrule, the outcome of decisions made by computer algorithms.
I am an AI researcher, and I’m worried about some of the societal impacts that we’re already seeing. In particular, these 5 things scare me about AI: 1. Algorithms are often implemented without ways to address mistakes. 2. AI makes it easier to not feel responsible. 3. AI encodes & magnifies bias. 4. Optimizing metrics above all else leads to negative outcomes. 5. There is no accountability for big tech companies.
Law Professor Jeremias Adams-Prassl explores the rise of the “algorithmic boss” and how artificial intelligence and the development of new technology has and will continue to impact the labour market.
Artificial intelligence (AI) and face recognition technology are being used for the first time in job interviews in the UK to identify the best candidates.
J. Hennrich, E. Ritz, P. Hofmann, and N. Urbach. Capturing artificial intelligence applications’ value proposition in healthcare – a qualitative research study, (2024)
P. Plackis-Cheng, T. Chalasani, and S. Palme. FINDHR Expert Reports, Fairness and Intersectional Non-Discrimination in Human Recommendation (FINDHR), Barcelona, (December 2023)