One of the use cases I thought it was reasonable to expect from ChatGPT and friends (LLMs) was summarising. It turns out I was wrong. ChatGPT isn't summarising at all; it only looks like it. What it does is something else, and that something else only becomes summarising under very specific circumstances.
Well-known executives and experts such as Elon Musk are warning about the risks of artificial intelligence. They want to halt its development, arguing that the technology must serve people.
William Eden forecasts an AI winter. He argues that AI systems (1) are too unreliable and too inscrutable, (2) won’t get that much better (mostly due to hardware limitations) and/or (3) won’t be that profitable.
How do systems like ChatGPT work? Are they really "intelligent"? What happens when they are deployed at scale? What effects would that have on libraries?
Here I collect a selected set of critical lenses on so-called 'AI', including the recently hyped ChatGPT. I hope these resources are useful for others as well, and help make clear why we need to remain vigilant and resist the AI hype. I expect to update this post over time. If you have…