Artificial intelligence (AI) does not pose an immediate threat to the existence of humanity, says the president of Microsoft, but he believes governments and companies need to act more quickly to address the technology’s risks by implementing what he calls “safety brakes.”
“We don’t see any risk in the next few years, over the next decade, that artificial intelligence will pose an existential threat to humanity, but… let’s solve this problem before it happens,” Smith said in an interview with The Canadian Press.
Smith, a longtime Microsoft executive who joined the company in 1993, emphasizes that it is important to address the problems posed by the technology so that the world is not “constantly worried and talking about it.”
He believes the solution to potential problems lies in “safety brakes,” which could function like the emergency mechanisms built into elevators, school buses and high-speed trains.
These brakes would be built into high-risk artificial intelligence systems that control critical infrastructure such as power grids, water systems and traffic.
“Let’s learn from art,” says Smith.
“All the films in which technology poses an existential threat end the same way: humans turn the technology off. So we must provide a switch, a safety brake, and ensure the technology remains under human control. Let’s embrace that and do it now.”
Smith’s comments come as a race to adopt and innovate with AI has taken hold across the technology sector and beyond, following the launch of ChatGPT, a conversational bot designed to generate human-like responses to text prompts.
Microsoft has invested billions in San Francisco-based OpenAI, the creator of ChatGPT, and also has its own AI-based technology, Copilot, which helps users create draft content, suggests different ways to phrase text they’ve written, and helps them build PowerPoint presentations from Word documents.
But many are concerned about the pace of progress in AI. Geoffrey Hinton, for example, the British-Canadian pioneer of deep learning often called the “godfather of AI,” has said he believes the technology could lead to bias and discrimination, unemployment, echo chambers, fake news, combat robots and other risks.
Several governments, including Canada, have begun to develop safeguards around AI.
In a 48-page report released Wednesday by Microsoft, Smith said his company supports Canada’s efforts to regulate AI.
These efforts include a voluntary code of conduct launched in September, whose signatories – including Cohere, OpenText, BlackBerry and Telus – pledge to assess and mitigate the risks of their AI-based systems, monitor them for incidents, and act on problems that develop.
While the code has critics, such as Shopify founder Tobi Lütke, who sees it as an example of how the country has too many “referees” when it needs more “builders,” Smith noted in the report that by developing a code, Canada “demonstrated early leadership” and is helping the entire world work toward a common set of shared principles.
The voluntary code is expected to be followed by Canada’s forthcoming Artificial Intelligence and Data Act, which would create new criminal provisions to prohibit uses of AI that could cause serious harm.
The legislation, known as Bill C-27, passed first and second readings but is still being considered in committee. Ottawa has said it will not come into force until 2025.
Asked why he thinks governments need to move faster on AI, Smith says the world has had an “extraordinary year” since ChatGPT was launched.
“When we say go faster, that’s frankly not a criticism,” he says.
“It’s about recognizing the current reality, where innovation has advanced at a faster rate than most people expected.”
However, he sees Canada as one of the countries best prepared to keep pace with AI, as universities have long focused on the technology and cities such as Montreal, Toronto and Vancouver have been hotbeds of innovation in the area.
“If there’s any government that I think has a tradition it can draw on to pass something like this, I think it’s Canada. I hope it’s the first,” Smith says.

“It won’t be the last if it’s the first.”
An international approach needed
However, while Canada’s AI legislation undergoes “careful consideration,” Smith says the country should consider what additional safeguards it can adopt in the meantime.
For example, in the procurement process for high-risk AI systems, he believes vendors seeking contracts could be required to undergo third-party audits certifying that they comply with relevant international AI standards.
In the report, Smith also backs an approach to AI that would be “developed and used across borders” and that “ensures that an AI system certified as safe in one jurisdiction can also be described as safe in another.”
He compared this approach to that of the International Civil Aviation Organization, which uses uniform standards to ensure that a plane does not need to be modified mid-flight between Brussels and New York to meet each country’s differing requirements.
An international code would help AI creators certify the safety of their systems and improve compliance globally, since they could work to internationally recognized standards.
“The voluntary code model offers Canada, the European Union, the United States, the other G7 members, as well as India, Brazil and Indonesia, the opportunity to move forward together based on a set of common values and principles,” he said in the report.
“If we can work with others on a voluntary basis, we will all move forward faster and with more attention and focus. This is not just good news for the technology world, but for the entire world.”