Excitement over futuristic artificial intelligence technologies, like OpenAI’s ChatGPT chatbot, has already given way to fear of the risks they could pose.
On Monday, researcher Geoffrey Hinton, known as “The Godfather of AI,” said he’d left his post at Google, citing concerns over potential threats from AI development. Google CEO Sundar Pichai talked last month about AI’s “black box” problem, where even its developers don’t always understand how the technology actually works.
Among the other concerns: AI systems, left unchecked, can spread disinformation, allow companies to hoard users' personal data without their knowledge, exhibit discriminatory bias or cede countless human jobs to machines.
The fears are justified, says Suresh Venkatasubramanian, a Brown University computer science professor who researches fairness and bias in tech systems.
“These are not potential [risks], because these harms have already been documented over the years,” Venkatasubramanian, who recently served as an advisor to the White House Office of Science and Technology Policy and co-authored the Biden Administration’s “Blueprint for an AI Bill of Rights,” tells CNBC Make It.
In the “Blueprint for an AI Bill of Rights,” Venkatasubramanian helped lay out proposals for “ethical guardrails” that could safely govern and regulate the AI industry. With them in place, most people would barely notice the difference while using AI systems, he says.
“I see these guardrails, not just as protections against harm, but as ways to actually build a much more pleasant and beneficial future, which is what we really all want,” Venkatasubramanian says.
Here are the guardrails that he and other experts suggest, and what they’d actually look like in practice.
5 guardrails recommended by AI experts
AI needs oversight in five specific areas, Venkatasubramanian says:
- Rigorous and independent testing of products using AI
- Protections from discriminatory bias in algorithms
- Data privacy
- A requirement that users be notified when they're using an automated product, and told how its decisions may affect them
- A similar requirement that users be allowed to opt out of AI systems in favor of human alternatives
Even as some AI systems approach a human-like level of intelligence, they're often still imperfect, Venkatasubramanian says: “They claim to solve a problem they don’t actually solve, and they make mistakes.”
But tech companies may not be trustworthy enough to test their own products for safety concerns, he says. One potential solution: a federal agency similar to the U.S. Food and Drug Administration.
“I’m not saying this is the only model, but when the FDA approves drugs, they have the companies do their own internal testing [and] an expert at the FDA will look at that result to decide whether that drug should be approved or not,” Venkatasubramanian says. “That’s one example that we already have in place.”
Third-party oversight could help for most of his proposed guardrails, he says. There’s one exception, a guardrail that isn’t technologically possible yet and may never be: protections from discriminatory bias.
Every AI system is created by a human or group of humans, and trained on data sets. Every human has some form of inherent bias, and even gigantic data sets can’t represent the entirety of the internet or the totality of human experience, Venkatasubramanian says.
His partial solution, at least for now, is to mitigate bias by building AIs “with inputs and understanding from everyone who’s going to be affected,” submitting them to independent testing and relying on third-party oversight to hold developers accountable and fix obvious biases when they’re discovered.
“I don’t think there’s any way for any system — human- or AI-based — to be completely free from bias,” Venkatasubramanian says. “That’s not the goal. The goal is to have accountability.”
The best-case scenario for AI ‘is not that far off’
At first glance, much of this seems doable — especially since some tech leaders are showing an openness to AI regulations.
Earlier this year, Jensen Huang, CEO of chipmaker Nvidia, specifically called for new government regulations in response to AI. Google and Pichai made similar statements in April, and Pichai joined other AI tech executives — including Microsoft CEO Satya Nadella — at the White House this week to hear President Joe Biden’s concerns about the need for safe AI products.
There’s a reason these companies can’t simply adopt these guardrails on their own, without government involvement: Implementing new rules and requirements is “expensive,” Venkatasubramanian says. Some tech businesses won’t ever fully commit, even if they express a desire for change, he adds.
That’s why he expects it’ll take “a mix of enforcement actions, legislation, as well as voluntary action.” On Thursday, the White House introduced plans to invest in ethical AI initiatives, and the U.K. government launched an investigation into risks associated with AI technology.
Functionally, those are baby steps. More action will likely come down the road, Venkatasubramanian says.
“The best-case scenario is not that far off from where we are right now,” he says. “We have AI to help generate new ideas that scientists can then explore, to help cure disease, to help improve crop yields, to help understand the cosmos.”