AI Daily News

Who’s to Blame When AI Goes Rogue? The UN’s Quiet Warning That Got Very Loud

From Silicon Valley to the U.N., the question of how to assign blame when AI goes wrong is no longer an esoteric regulatory issue, but a matter of geopolitical significance.

This week, the United Nations Secretary-General posed that question, highlighting an issue that is central to discussions about AI ethics and regulation. He questioned who should be held responsible when AI systems cause harm, discriminate, or spiral beyond human intent.

The comments were a clear warning to national leaders and tech-industry executives alike that AI's capabilities are outpacing regulation.

But it wasn’t just the warning that was remarkable. So too was the tone: a sense of exasperation, even desperation.

If AI-driven systems are being used to make decisions about life and death, livelihoods, borders, and security, then no one can shrug off responsibility by saying it’s all too complicated.

The Secretary-General said the responsibility “must be shared, among developers, deployers and regulators.”

The notion resonates with a long-held wariness in the UN about unbridled technological power, one that has been percolating through its deliberations on digital governance and human rights.

The timing is important. As governments try to draft AI regulations at a moment when the technology is changing rapidly, Europe has already taken the lead, passing ambitious laws that apply to high-risk AI products and establishing a regulatory standard likely to serve as a beacon, or a cautionary tale, for other countries.

But, honestly: laws on a page won’t shift the power dynamics by themselves. The Secretary-General’s words arrive as AI systems are already being used in immigration vetting, predictive policing, credit scoring, and military decision-making.

Civil society has long warned about the dangers of AI without accountability: it becomes the perfect scapegoat for human decision-making with very human repercussions. “The algorithm made me do it.”

There is also a geopolitical problem that is barely discussed: what happens if AI explainability regulations in one country are incompatible with those of a neighboring country?

What happens when AI crosses borders? Should there be rules on exporting AI? António Guterres, the UN Secretary-General, spoke of the need for universal guidelines for developing and using AI, much as the world has done with nuclear and climate agreements.

And this is no easy task in a world where international relations and agreements are fraying, drifting toward wholesale deregulation.

My interpretation? This wasn’t diplomacy speaking. This was a draw-the-line speech. It wasn’t a complicated message, even if it’s a complicated problem to solve: AI is not excused from accountability just because it’s clever or quick or lucrative.

There must be an entity that is accountable for its results. And the longer the world takes to decide who that will be, the more painful and complex the decision becomes.

Mark Borg
Mark specialises in robotics engineering. With a background in both engineering and AI, he is driven to create cutting-edge technology. In his free time, he enjoys playing chess and honing his strategy.
