
AI Risks
2023-10-09 11:03

It's widely believed that Jaan Tallinn, the wealthy longtermist who co-founded the most prominent centers for the study of AI safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from AI. The technology historian David C. Brock calls these fears "wishful worries": that is, "problems that it would be nice to have, in contrast to the actual agonies of the present."

A signal moment came when Timnit Gebru, a co-leader of Google's AI ethics team, was dismissed for pointing out the risks of developing ever-larger AI language models.

Integrity experts call for the development of responsible AI, for civic education to ensure AI literacy, and for keeping humans front and center in AI systems.

Other groups of prognosticators cast the rise of AI in the language of competitiveness and national security.

In the lobbying battles over Europe's trailblazing AI regulatory framework, US megacompanies pleaded to exempt their general-purpose AI from the tightest regulations, and whether and how to apply high-risk compliance expectations to noncorporate open-source models emerged as a key point of debate.

A publicly directed AI development project would serve as a counterbalance to for-profit corporate AI and help ensure a level playing field for access to the twenty-first century's key technology, while offering a platform for the ethical development and use of AI. We should also embrace the humanity behind AI. We can hold founders and corporations accountable by mandating greater AI transparency during the development stage and by applying legal standards to actions associated with AI. Remarkably, this is something that both the left and the right can agree on.


News URL

https://www.schneier.com/blog/archives/2023/10/ai-risks.html

#AI