AI Daily News

Here’s What Happens When AI Messes Up: Chatbots Are Taking Advantage of the Needy, According to New MIT Research

A new MIT study has thrown a rather large stone into the AI pond, raising an uncomfortable question: what if the people who need quality information the most are being served the worst by AI?

The study concluded that widely used AI chatbots often give less accurate or less helpful information to users who are deemed more vulnerable – including non-native English speakers and those with lower levels of formal education.

The study, published as an MIT report, paints a picture that is rather more nuanced than the shiny AI brochures would suggest.

The researchers put a handful of popular chatbots through a stress test of sorts, altering the way questions were asked – grammatically, linguistically, with hints about the user's education level – and voilà!

The chatbots responded with poorer answers when the grammar and language were poorer. A digital version of judging a book by its cover.
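To make the methodology concrete, here is a minimal sketch of a perturbation-style fairness check like the one described above. Everything here is illustrative: `ask_chatbot` is a hypothetical stand-in for a real chatbot API, and the scoring is a crude keyword proxy, not the MIT team's actual metric.

```python
def ask_chatbot(prompt: str) -> str:
    # Hypothetical stand-in for a real chatbot API call.
    # It returns a canned answer so the sketch runs offline.
    return "Drink fluids and rest; see a doctor if symptoms persist."

def answer_quality(answer: str, key_facts: list[str]) -> float:
    # Crude proxy for answer quality: the fraction of expected
    # key facts that the answer actually mentions.
    hits = sum(1 for fact in key_facts if fact.lower() in answer.lower())
    return hits / len(key_facts)

def fairness_gap(phrasings: list[str], key_facts: list[str]) -> float:
    # Ask the same question in several phrasings and report the gap
    # between the best and worst answer-quality score. A fair system
    # should keep this gap near zero.
    scores = [answer_quality(ask_chatbot(q), key_facts) for q in phrasings]
    return max(scores) - min(scores)

phrasings = [
    "What should I do if I have a fever?",  # polished academic English
    "what i do if fever",                   # non-native-style phrasing
    "i has fever what to do pls",           # grammatical errors
]
gap = fairness_gap(phrasings, ["fluids", "rest", "doctor"])
print(gap)  # 0.0 with this stub, since every phrasing gets the same answer
```

With a real chatbot behind `ask_chatbot`, a persistently positive gap across many questions would be exactly the kind of unequal service the study reports.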

I can almost hear a frustrated user thinking: “But I asked the same thing! Why did I just get a worse response?” Not a minor bug, then. A major fairness problem.

Obviously, the issue of biased AI isn’t new – the National Institute of Standards and Technology has already said that AI can “exacerbate societal biases present in the data used to train these systems” if not properly managed and mitigated, per NIST’s AI Risk Management Framework – but it’s another thing to quantify it.

It’s part of a global conversation right now about algorithmic bias, really – the World Economic Forum has called out fairness and inclusion as key challenges to trustworthy AI, saying that “the need for equitable results from AI-driven decision-making is one of the most significant” issues facing AI trustworthiness.

That makes sense: when AI chatbots become the primary gateway to information about everything from health to law to education, unequal service isn’t just frustrating. It’s potentially damaging.

And this isn’t some theoretical problem – AI use is expanding rapidly. Per a recent Associated Press report, both governments and private companies are rushing to incorporate AI into classrooms and government services and workplaces.

So if AI chatbots can’t seem to handle equity in a lab, what happens when they’re everywhere? It’s not a rhetorical question. It’s a future headline.

Of course, there’s a real-life aspect to this that makes it even more complicated – and human. I’ve spoken with enough teachers and students and small-business owners at this point to know that they’re not interacting with AI in a lab.

They’re coming home exhausted and typing into phones in the dark. Sometimes English is their second or third language. And if an AI chatbot silently offers them poorer information because of it, it will only serve to deepen the sorts of inequalities that technology is supposed to erase.

Which is a bit of a painful irony. The MIT researchers aren’t calling for widespread panic here. They’re calling for tweaks – for better testing, more inclusive training data, and more accountability from developers.

In other words, get it right before you scale. Some companies have already committed to dealing more with AI bias – but commitments are easy. Actually doing it is hard. So where does that leave us?

In a place of measured sobriety, I think. AI is a powerful tool, yes. It is often helpful, yes. But it is not fair by design, and to assume that it is may be a form of wishful thinking.

If these tools are going to become the way billions of people interact with information, they need to work just as well for the person typing flawless academic English as they do for the teenager typing a mangled, misspelled question at 2 am.

Because ultimately, AI fairness isn’t a policy abstraction. It’s about who gets a microphone – and who gets left in the dark.

Mark Borg
Mark is specialising in robotics engineering. With a background in both engineering and AI, he is driven to create cutting-edge technology. In his free time, he enjoys playing chess and practicing his strategy.
