“Humans should be arbiters of AI decisions” – Bosch sets down ‘red lines’ on industrial AI
German industrialist Bosch has set out a ‘code of ethics’ for the use of artificial intelligence (AI) in its connected products, which says humans should be the arbiters of any AI-based decisions.
The firm has issued a set of guidelines, a number of ‘red lines’, which make clear that AI decisions “that affect people” will not be entirely automated, and will require human approval of the final course of action. Humans will be able to override AI processes in its equipment.
Bosch wants its AI-based products to be “safe, robust, and explainable”, it said. It sees AI as a key tool in its own digital change strategy. It quoted management consultants PwC, which projects AI will boost GDP in the period to 2030 by 26 percent in China, by 14 percent in North America, and by around 10 percent in Europe.
The company, one of the manufacturing sector’s most outspoken champions of industrial 5G, said at CES in January it is seeking to mainline data in both its products and its factories. By 2025, every product out of its 270 factories will either contain AI or else have been developed or manufactured with its help, it declared.
Pre-empting the introduction of binding EU standards on AI, Bosch has proposed three scenarios whereby humans retain control over any decisions the technology makes, as follows:
“AI is purely an aid – for example, in decision-supporting applications, where AI can help people classify items such as objects or organisms.”
“An intelligent system autonomously makes decisions that humans can, however, override at any time. Examples of this include partially automated driving, where the human driver can directly intervene in the decisions of, say, a parking assistance system.”
“[This] concerns intelligent technology such as emergency braking systems. Here, engineers define certain parameters during the development process. Here, there is no scope for human intervention in the decision-making process itself.”
Bosch said these approaches determine whether or not AI decides to activate a system, based on data collected by its products. AI-based processes should be kept under constant review by engineers, it said.
Volkmar Denner, chief executive at Bosch, commented: “AI should serve people. Our AI code of ethics provides our associates with clear guidance regarding the development of intelligent products. Our goal is that people should trust our AI-based products.”
Michael Bolle, chief digital officer and chief technology officer at Bosch, said: “If AI is a black box, then people won’t trust it. In a connected world, however, trust will be essential.”
Bosch said it wants “to actively engage with the ethical questions that the use of this technology raises”. “The moral foundation for this process is provided by the values enshrined in the Universal Declaration of Human Rights,” it said.
Bosch said it will train 20,000 “associates” in the use of AI during the next two years. Its AI code of ethics will be part of its training programme. Bosch is investing €100 million in a new AI campus, part of Germany’s so-called Cyber Valley, where 700 of its own staff will work with external researchers and startup associates.
The company has also signed up to the European Commission’s High-Level Expert Group on AI, appointed to examine issues such as the ethical dimension of AI. Bosch’s own AI research operation has seven locations, and ties with the University of Amsterdam in the Netherlands and Carnegie Mellon University in the US, among others.
Bosch has consolidated its IoT activities into a new subsidiary, called Bosch.IO, with 900 associates and a wide remit, covering consulting services, implementation and operation.