Microsoft is one of the key players in the artificial intelligence revolution. So how does the software giant, which has been integrating ChatGPT technology into the products most of us use, go about making its AI products responsible and ethical?
In the second episode of the World Economic Forum’s podcast series on generative AI, Microsoft’s Chief Responsible AI Officer Natasha Crampton talks responsibility and regulation.
And we hear from the Co-founder of Silicon Valley start-up Hugging Face on how it feels to launch an AI chatbot to the world.
Here are some of the key quotes:
Developing responsible AI
“My job is to put into practice, across the company, the six AI principles that we’ve adopted at Microsoft,” says Natasha Crampton. “Our six principles that form our north star are fairness, privacy and security, reliability and safety, inclusiveness, accountability and transparency.”
“Microsoft has long taken the view that we need both responsible organizations like ourselves to exercise self-restraint and put in place the best practices that we can to make sure that AI systems are safe, trustworthy and reliable,” she adds.
“We also recognize that we need regulation. There’s no question that we will need new laws and norms and standards to build confidence in this new technology and also to give everyone the protection under the law.”
“While we would love it to be the case that all companies decide to adopt the most responsible practices they can, that is not a realistic assumption,” she says. “We think it’s important there is this baseline protection for everyone and that will help to build trust in the technology.”
Transparency is key to building trust in AI
Thomas Wolf, Co-founder of Hugging Face, says his company’s open-source approach is one important way to achieve responsible AI.
“Having an open model is super important,” says Wolf. “People need to understand where AI might fail and where it can be trusted to work. They need to be aware of biases that you may have in these tools, or how they could be misused.”
Wolf says allowing everyone access to the source code of his platform is the ultimate expression of transparency. “If the model is open, you don’t have to believe just the people who made it. You can audit the model yourself, read how it was made, dive into training data and raise issues,” he adds. “We have a lot of discussion pages on our models where people can flag models and raise questions.”
Should we pause the development of AI until we know it is safe?
A recent open letter signed by thousands of AI engineers and business leaders called for a six-month pause in the development of AI models more powerful than ChatGPT. The idea was to give regulators time to catch up with the pace of development, and to give tech companies time to gain more understanding of the potential risks. Crampton recognizes the concerns but says pausing development would be counterproductive.
“Rather than pausing important research and development work underway right now, including into the safety of these models, I think we should focus on a plan of action,” she says. “We should be bringing our best ideas to the table about the practices that are effective today to make sure we’re identifying and measuring and mitigating risks. And we should also be bringing our best ideas about the new laws and norms and standards that we need in this space.”
Sector-specific AI regulation
Wolf believes the pace of AI development will only accelerate and that no blanket regulatory framework will work. He prefers the idea of sector-specific regulation, akin to the governance of commercial aviation or the nuclear industry.

“If it’s in specific fields, that would make sense. When you are talking about airlines, it’s a very specific field, it’s commercial airlines. You don’t have the same for other types of aviation,” Wolf says.
“I think once you have nailed down a specific sector where there is a specific danger we want to prevent, then it makes sense. I think regulation at this level would be super positive and super interesting. But something that just generally covers AI feels to me like it would just be too wide-ranging to be effective.”
AI and the workplace
The impact of AI on the workplace and the jobs we all do has been central to the discussion on responsible AI. Questions have been raised about its use in recruitment, to shortlist and even select candidates. It’s recognized that AI systems can introduce bias into selection processes as a result of the data they have been trained on.
Perhaps the biggest question of all is who will do the work – human or AI? The Forum’s Future of Jobs Report 2023 has identified a trend where machines are increasingly performing tasks once done by people.
The balance of work tasks performed by humans and machines is changing rapidly. Image: World Economic Forum
This trend is likely to accelerate in the years to come, but Crampton doesn’t see AI replacing humans, even though she’s been impressed with what ChatGPT can do.
“I prompted an early version of GPT-4 to produce a bill that could regulate AI based on an impact assessment methodology. I got an output that was a very decent first draft.
“Of course, it’s very, very important to be judicious about it and as a trained lawyer I certainly picked up some errors. It’s important to understand where the technology is good and to recognize its flaws.
“We need to strike a balance that combines the best of AI with the best of humans. This technology is essentially a co-pilot for doing these tasks and enhancing human ability.”