Does AI need to be regulated? Views from the World Economic Forum
Artificial intelligence (AI) is already being used in a variety of applications, including automated analysis of video captured by security cameras and quality assessment of items moving down a manufacturing line. But with converging technological trends, particularly 5G and the pervasive internet of things (IoT), AI is poised to play an increasingly large role in our day-to-day lives. And leaders gathered in Davos, Switzerland, for the World Economic Forum are taking a moment to consider the implications of AI.
In an interview with World Economic Forum Founder Klaus Schwab, Alphabet CEO Sundar Pichai used the example of automated facial recognition to highlight important AI considerations.
“When you think about technologies like facial recognition, it can be used to benefit, it can be used to find missing people, but it can be used for mass surveillance,” Pichai said, calling AI “one of the most profound things we are working on as humanity; it’s more profound than fire or electricity or any of the other bigger things we have worked on. It has tremendous positive sides to it, but it has real negative consequences.”
As such, Pichai called for a “common framework” that reflects “a shared set of values…We need to build on those values and make sure when we approach AI, we are doing it in a way that serves society. And that means making sure AI doesn’t have bias, that we build it and test it for safety. We make sure that there is human agency, that it’s ultimately accountable to people.”
The AI question came up again in an interview between Huawei CEO Ren Zhengfei and Yuval Noah Harari, a historian and author of the excellent “Sapiens: A Brief History of Humankind.”
Again on the surveillance angle, Harari said, “The point is when you gather enough data on people, you get to know people better than they know themselves. Are we at the point where companies or governments can hack millions of people, that means they know my medical history, personal weaknesses?”
He said technology is the linchpin of “the current arms race…You don’t need to send the soldiers in if you have all the data on a country.”
Zhengfei, who leads the Chinese ICT behemoth, took a less alarmist view on the proliferation of AI. He said that, as new technologies become mainstream, “Humanity will be able to use them to benefit us. Most people aspire to a good life, not a miserable life.”
On an unrelated note, the Huawei founder also touched on the company’s tense relationship with the United States, which has moved to ban Huawei from telecom networks and taken steps to curb the flow of American electronics and componentry to Huawei.
“Huawei used to be an admirer of the U.S., we learned a lot from them. We hired dozens of American consulting firms to learn how to manage our business. The U.S. should feel proud of [Huawei], they have the U.S. management system exported and implemented. They should not be overly concerned about Huawei and our position in the world.”
In a WEF blog post, Workday’s Chief Privacy Officer Barbara Cosgrove laid out eight steps businesses can take to ensure the ethicality of AI. Those include defining “AI ethics,” looking to customers in the design, development and deployment of solutions using AI, and committing to transparency.
To that last point, Cosgrove wrote, “Because [machine learning] can be so complex, companies should go above and beyond in explaining what data is being used, how it’s being used, and for what purpose. Explain to customers how your ML technologies work and the benefits they offer, and describe the data content needed to power any ML solutions you offer. Demonstrate accountability in your ML solutions to your customers.”
Indeed, as businesses of all sorts embrace AI, representatives from Singapore announced during the World Economic Forum the second iteration of the nation-state’s ethical AI framework, including exemplary use cases.
The Straits Times quoted Minister for Communications and Information S. Iswaran: “There are concerns about how [AI] will be used, and whether people can have trust in AI when it is used.”