Microsoft-Backed Tech Group Calls for AI Regulation
BSA, a tech advocacy group partly backed by Microsoft, is pushing for rules governing the use of artificial intelligence (AI) to be included in national privacy legislation. The group, which represents business software companies such as Adobe, IBM, and Oracle, released a document on Monday outlining its recommendations.
Microsoft’s recent investment in OpenAI, the creator of the generative AI chatbot ChatGPT, has made it one of the leading companies in AI. Google, the other significant US player in advanced AI, is not a BSA member.
Many members of Congress, including Senate Majority Leader Chuck Schumer, have expressed interest and urgency in ensuring that regulation keeps pace with the rapid development of AI technology. The push for AI regulation has accelerated with the introduction of accessible advanced AI tools like ChatGPT. While the US has created a voluntary risk management framework, many advocates have pushed for even stronger protections. Meanwhile, Europe is working to finalize its AI Act, which creates protections around high-risk AI.
To address this issue, BSA is advocating for four key protections:
- Congress should set clear requirements for when companies must evaluate the design or impact of AI.
- Those requirements should kick in when AI is used to make “consequential decisions,” which Congress should also define.
- Congress should designate an existing federal agency to review company certifications of compliance with the rules.
- Companies should be required to develop risk-management programs for high-risk AI.
“We’re an industry group that wants Congress to pass this legislation,” said Craig Albright, Vice President of US Government Relations at BSA. “So we’re trying to bring more attention to this opportunity. We feel it just hasn’t gotten as much attention as it could or should.”
BSA suggested that the American Data Privacy and Protection Act (ADPPA), the bipartisan privacy bill that passed out of the House Energy and Commerce Committee last Congress, is the right vehicle for new AI rules. Although the bill still faces a steep road ahead to becoming law, BSA said it already has the right framework for the sort of national AI guardrails the government should put in place.
BSA hopes that when the ADPPA is reintroduced, as many anticipate, it will contain new language to regulate AI. Albright said the group has been in contact with the House Energy and Commerce Committee about its suggestions, and that the committee has had an “open door” to many different voices.
Albright added that passing any piece of legislation involves a heavy lift. “What we’re saying is, this is available. This is something that can reach agreement, that can be bipartisan,” he said. “And so our hope is that however they’re going to legislate, this will be a part of it.”
Conclusion
As the use of AI in business operations continues to grow, regulating its impact on society becomes increasingly important. BSA’s advocacy is a significant step toward that goal: its proposal to fold AI rules into national privacy legislation could help ensure the responsible development and deployment of AI technology. If Congress passes such legislation, it would set a national agenda for digital transformation that covers AI rules, national privacy standards, and robust cybersecurity policy.