
What do AI and ML promise for industrial control?

Modern manufacturing generates a flood of data that holds the promise of significantly improving many areas, from supply chain management to sensor readings of process variables and internal diagnostics to in-process quality data. The problem is that it is just that: a flood of data, and only computers can keep up with it. What’s more, the internal relationships in the data, and thus the possibility of making real use of it, are not always obvious a priori. While certain approaches can be set up in advance, in practice the only way to make use of this universe of data is to experience it, which is another way of saying that our data processing systems must learn what the data means and what to do with it. Hence the need for machine learning (ML), which is mostly an iterative process of optimization.

“Many of the algorithms being developed are iterative, designed to learn continually and seek optimized outcomes,” says Louis Columbus. “These algorithms iterate in milliseconds, enabling manufacturers to seek optimized outcomes in minutes versus months.”
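As a concrete, deliberately tiny illustration of that iterative loop, the sketch below (in Python) fits a simple linear process model to logged sensor data with repeated gradient steps; the data, the model form, and the learning rate are hypothetical placeholders rather than anything drawn from a real plant.

    # A deliberately tiny illustration of learning as iterative optimization:
    # fit a simple process model y ~ w*x + b to logged sensor data by repeated
    # gradient steps. The data, model form, and learning rate are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, 200)                   # e.g., a load or setpoint variable
    y = 2.5 * x + 1.0 + rng.normal(0.0, 0.3, x.size)  # noisy process response

    w, b, lr = 0.0, 0.0, 0.01
    for _ in range(2000):                # each iteration refines the estimate a little
        err = (w * x + b) - y            # prediction error on the logged data
        w -= lr * np.mean(err * x)       # gradient step on the slope
        b -= lr * np.mean(err)           # gradient step on the offset

    print(f"learned model: y ~ {w:.2f}*x + {b:.2f}")

Each pass nudges the parameters a little; production ML systems do the same basic thing at far larger scale and speed.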

If the analysis can be done and applied in real time, or near real time, the benefits could be enormous. While machine learning is increasingly being applied to such activities as asset management and supply chain management and optimization, this article will concentrate on ML’s use in control.

The need for simplicity and directness

I have a saying: You can do almost anything if you put in enough microprocessors, but true ingenuity is doing the same job with one large rock and one small rock. In other words, the more simply one can do something, the better the result is likely to be. It will also generally be both more robust and less expensive. David A. Sanders and Alexander Gegov echoed this view in 2015: “Whenever people do better than computers on some task, or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently. Or perhaps the task can be done better in a different way.”

One recent example of what appears to be excessive complexity comes from researchers at UC Berkeley’s Automation Sciences Lab, who have built a robot called Dex-Net 2.0. The robot can pick up objects of any shape with a 99 percent success rate, which seems impressive (robots have traditionally had to be programmed specifically to pick up specific objects). But it does this not as a human would, by applying a few general principles such as “how big is the object” and “how would I hold my hand to grasp the shape I see”; instead, it was trained on a cloud database of thousands of 3D objects, from which it collected 6.7 million data points. Graham Templeton, however, suggests that the robot learns by experience: “Each time it successfully analyzes an all-new shape and picks it up, it gets a bit better at picking up the next shape even if that shape is entirely new as well.”

After all, a human’s suppositions about the world are based on millions of previous observations and experiences, and that image of reality gradually changes as new experiences and observations accumulate.

On the machine side, Sanders and Gegov point out that the challenges of industrial control are much simpler than those facing general artificial intelligence: “Control engineering tools and methods tend to have less computational complexity than some other AI applications, and they can often be implemented with low-capability microcontrollers.” So industrial control turns out to be a good fit for computers.

There is some interesting work going on in concurrent learning adaptive model predictive control for aeronautical applications; the idea is to allow the use of adaptive control in rapid-response situations where stability must be maintained despite modeling uncertainty. But what about industrial (i.e., process) control?
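The core idea transfers. As a rough, hypothetical sketch (and emphatically not the method used in the aeronautical work), the fragment below applies one-step-ahead predictive control to a toy scalar plant while concurrently updating its plant model from the data it observes; the plant, the one-step horizon, and the gradient-style update are simplifications chosen for brevity.

    # Toy sketch of adaptive/learning model predictive control: the controller
    # keeps correcting its plant model from observed data while it computes its
    # control moves. All numbers here are illustrative assumptions.
    import numpy as np

    a_true, b_true = 0.9, 0.5      # "real" plant: x[k+1] = a*x[k] + b*u[k]
    a_hat, b_hat = 0.5, 1.0        # deliberately wrong initial model
    x, setpoint, lr = 0.0, 1.0, 0.1

    for k in range(50):
        # Predictive step: pick u so the *model* predicts the setpoint next sample.
        u = float(np.clip((setpoint - a_hat * x) / b_hat, -5.0, 5.0))
        x_next = a_true * x + b_true * u              # the plant responds

        # Concurrent learning step: shrink the one-step prediction error.
        pred_err = (a_hat * x + b_hat * u) - x_next
        a_hat -= lr * pred_err * x
        b_hat -= lr * pred_err * u
        x = x_next

    # Without persistent excitation the estimates need not match a_true and b_true,
    # but tracking improves as the model stops being badly wrong.
    print(f"a_hat={a_hat:.2f}, b_hat={b_hat:.2f}, final x={x:.2f}")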

Machine learning and adaptive industrial control

Programmed adaptive control has been around for many decades, and its on-the-fly modification of control system parameters in response to changing process conditions certainly does not merit the name machine learning. However, self-adaptive controllers, which use a control loop’s own performance as an input (e.g., the self-tuning regulator, the model reference controller and the pattern recognition adaptive controller), may be considered the beginning of machine learning.
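A minimal sketch of that kind of self-adaptive behavior, assuming a hypothetical first-order plant and an intentionally crude tuning rule, looks something like this: the loop’s own recent performance (overshoot after a setpoint step) is fed back to retune the controller gain.

    # Minimal sketch of a self-adaptive controller: the loop's own recent
    # performance (here, overshoot after a setpoint step) is used to retune
    # the gain. The first-order plant and tuning rule are illustrative only.
    def overshoot(kp, a=0.8, b=0.3, setpoint=1.0, steps=60):
        """Simulate a proportional loop on x[k+1] = a*x[k] + b*u[k]; return overshoot."""
        x, peak = 0.0, 0.0
        for _ in range(steps):
            u = kp * (setpoint - x)
            x = a * x + b * u
            peak = max(peak, x)
        return max(0.0, peak - setpoint)

    kp = 5.0                              # deliberately aggressive starting gain
    for trial in range(8):
        os = overshoot(kp)
        if os > 0.05:                     # too oscillatory: back the gain off
            kp *= 0.8
        elif os < 0.01:                   # sluggish: nudge the gain up
            kp *= 1.05
        print(f"trial {trial}: overshoot={os:.3f}, new kp={kp:.2f}")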

Tuomas J. Lukka pointed out in 2007 that an adaptive controller “depends on very recent history.” It has no memory and reacts to the system’s current state only. A learning controller, on the other hand, “depends on long-term history. It has a memory, and it remembers previous states and the appropriate responses.”
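The sketch below caricatures that difference: the controller keeps a memory mapping previously seen operating conditions to the control settings that worked best for them, so past experience shapes future responses. The load bins, the toy plant, and the cost measure are assumptions made purely for illustration.

    # Caricature of a learning controller: remember, per operating condition,
    # which control setting performed best in past episodes. Everything here
    # (the plant, the load bins, the cost) is an illustrative assumption.
    import random

    def run_loop(gain, load):
        """Return a crude tracking cost (lower is better) for one episode."""
        a, b = 0.8, 0.3 / (1.0 + load)     # heavier load weakens the actuator
        x, cost = 0.0, 0.0
        for _ in range(40):
            u = gain * (1.0 - x)           # proportional control toward setpoint 1.0
            x = a * x + b * u
            cost += abs(1.0 - x)
        return cost

    random.seed(1)
    memory = {}                            # load condition -> (best gain, its cost)
    for episode in range(300):
        load = random.choice([0.0, 0.5, 1.0])  # the condition observed this episode
        gain = random.uniform(1.0, 6.0)        # candidate response to try
        cost = run_loop(gain, load)
        if load not in memory or cost < memory[load][1]:
            memory[load] = (gain, cost)        # remember what worked best so far

    for load in sorted(memory):
        gain, cost = memory[load]
        print(f"load={load}: remembered gain {gain:.2f} (cost {cost:.1f})")

An adaptive controller in Lukka’s sense would recompute its response from the current error alone; the learning version consults (and keeps improving) its accumulated record.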

It is useful to remember, however, as Liptak[1] points out, that a self-adaptive control system cannot respond as quickly to changes in its environment (as contrasted with changes in input or process variable) as a programmed one, because it must have several output samples to make a decision. Other types of adaptive control include model reference and pattern recognition; one could say that the latter embodies a form of machine learning.

With neural network control, genetic-algorithm-based control, genetic programming control, reinforcement learning control, and more all being thrown at the problem, it’s a pretty good bet that we’ll see accelerating progress over the next few years.

Sounds exciting.

[1] Bela Liptak and Kriszta Venczel, Instrument Engineers’ Handbook, Vol. 2: Process Control, Radnor, PA, 1985, pp. 59-63.
