State of things | Smart manufacturing (part 3): Use case modelling
This article is the third instalment in a series taken from a longer report, Smart manufacturing: asset management, predictive maintenance, dynamic scheduling and other use cases, from August 2018.
The report tells the story of smart manufacturing in four chapters. The first two examine the expectations surrounding the IIoT movement (‘The hype and the glory’), and its application in the manufacturing sector to date (‘An alternative truth’).
The second two chapters consider practical advice for manufacturers starting down the road to digital transformation, by turns offering guidance about how to approach the technology (‘Use case modelling’, see below), and, how to fund and recover investments in digital change projects (‘Buying digital change’).
The full report features additional information and use cases – as well as the article in its entirety. Click here to download the full report. See links at bottom for all instalments.
How should the manufacturing sector move forward? How should manufacturers define their needs, and vendors make use cases digestible for a conservative and secretive bunch? Telit neatly deconstructs its proposition according to the benefits it produces. “Industrial IoT should focus on three goals: to make money, save money, and stay compliant,” explains Buranello. The company says it is helping semiconductor makers offer new and better services, car makers get more out of the same kit, and pharmaceutical companies guarantee the quality and security of their drugs.
Hitachi has an even neater, and more instructive, way to categorise smart manufacturing use cases. It sets about the task of industrial transformation by posing three simple, but pointed, questions. “What if you could predict and prevent production downtime? What if you could predict and prevent production bottlenecks? What if you could predict and prevent production defects?”
The point is to eliminate, or at least manage and reduce, downtime, bottlenecks, and defects. The first describes the availability of a factory; its uptime might be impacted by machine breakages, staff absences, or missed deliveries. Kinsey offers up a version of ‘predictive maintenance’ as a basis for operational excellence. He does not care for the term, however. “It’s over-used, and mis-used,” he says. Instead, Hitachi deals in ‘predictive downtime’, which describes the challenge, rather than the solution.
“If a machine is broken, or people don’t show up to work, or the trucks don’t deliver, then you have downtime,” explains Kinsey.
The second question, about bottlenecks, covers how factories use their capacity most efficiently, avoiding production delays from mis-managed workflow. Like GE, Hitachi developed its smart manufacturing techniques for its own purposes first. It hit upon the technique for ‘dynamic scheduling’ at its site in Ōmika, in Japan. “We developed an algorithm to predict bottlenecks before they happen,” says Kinsey. Dynamic scheduling works on the same big-data principles as the classic analytics-based prescription for quality control, as described in the Aurelia gold mining case (see full report).
“It looks at a number of factors – incoming orders and outgoing materials; the flow through the factory. The more you know, the more you can anticipate change.”
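The principle Kinsey describes – watch the flow through the factory, and anticipate the pinch points before they form – can be sketched crudely in a few lines of Python. This is a toy illustration, not Hitachi’s algorithm: the stations, rates and buffer capacities are invented, and a real scheduler would work from far richer order and materials data.

```python
# Toy bottleneck prediction: project each station's queue forward from
# its in/out rates and flag the first expected to exceed its buffer.
# Station names, rates and capacities are illustrative only.

def predict_bottleneck(stations, horizon_hours=8):
    """Return (hours-until-full, station) pairs, soonest first."""
    warnings = []
    for name, (queue, inflow, outflow, capacity) in stations.items():
        growth = inflow - outflow            # net jobs added per hour
        if growth <= 0:
            continue                         # queue stable or shrinking
        hours_to_full = (capacity - queue) / growth
        if hours_to_full <= horizon_hours:
            warnings.append((hours_to_full, name))
    return sorted(warnings)

stations = {
    # name: (current queue, jobs in/hr, jobs out/hr, buffer capacity)
    "press":    (10, 30, 32, 50),
    "weld":     (20, 32, 28, 40),
    "assembly": (5,  28, 28, 60),
}
print(predict_bottleneck(stations))  # → [(5.0, 'weld')]
```

Here the weld station, gaining four jobs an hour against a buffer of 40, is flagged five hours before it chokes – time enough to re-sequence work upstream.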
Hitachi’s third gambit deals with the headache of quality control, and seeks ways to stop defective products from disrupting batches and yields, and ultimately revenues and reputations. The more a manufacturer is forced to re-work or chuck out, the flakier their production processes become. Defects are dealt with under the header ‘predictive quality’. “The data from the production line is used to drive the quality equation – where Y is a function of many Xs. The challenge is to understand the right combination of all those Xs. When the variables are correct, the process runs fine; as soon as there is a drop in speed, or temperature, or tool sharpness, then the mix changes,” explains Kinsey.
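Kinsey’s “quality equation” can be illustrated with a toy regression: fit a quality metric Y against the process variables X, and read off which shifts push the process out of spec. The variables and data below are invented for demonstration – a real predictive-quality model would use live line data and, typically, non-linear methods.

```python
import numpy as np

# Toy version of the quality equation Y = f(X1..Xn): regress a defect
# rate against simulated process variables. All values are made up.
rng = np.random.default_rng(0)
n = 200
speed = rng.normal(100, 5, n)        # line speed
temp  = rng.normal(180, 3, n)        # process temperature
wear  = rng.uniform(0, 1, n)         # tool wear index

# Assumed ground truth: defects rise with wear and with deviation
# from nominal speed and temperature, plus measurement noise.
defect_rate = (0.5 + 0.02 * (speed - 100) + 0.03 * (temp - 180)
               + 2.0 * wear + rng.normal(0, 0.1, n))

# Ordinary least squares recovers each X's contribution to Y.
X = np.column_stack([np.ones(n), speed - 100, temp - 180, wear])
coef, *_ = np.linalg.lstsq(X, defect_rate, rcond=None)
print(dict(zip(["baseline", "speed", "temp", "wear"], coef.round(2))))
```

The fitted coefficients identify tool wear as the dominant driver – the “right combination of all those Xs” Kinsey describes, reduced to its simplest linear form.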
PTC describes the same set-ups, talking about managing bottlenecks and quality. Incredibly, one German car maker has reduced downtime losses by €40,000 per minute since applying PTC’s predictive maintenance tools, the company says.
Industry benchmarks in North America say the cost of unplanned downtime for automobile manufacturers is $1.3 million per hour. “That’s $361 per second,” comments Justin Hester, senior researcher at Japanese auto parts maker HIROTEC (see full report), a PTC client. “If it takes a three-minute phone call to report an issue, you can lose $70,000 just to tell someone you have a problem.”
Gallant at PTC says: “There are hundreds more use cases.” But the slide (see full report) PTC presents for discussion of use cases is oriented around ‘personas’, or manufacturing departments; the use cases are addressed directly to the manufacturing operation itself. This way, fault detection and predictive analytics fall under ‘maintenance and service’, machine calibration appears in ‘quality and test’, and de-bottlenecking is variously categorised under ‘production execution’, ‘supply chain and logistics’, and ‘operations management’ (see full report).
“Our conversations tend to be with different individuals. So we’ve grouped use cases in the way plant engineering teams think about things, say, or lean six-sigma ‘black-belts’ think about things,” explains Provencher. “Predictive maintenance is typically a conversation with the maintenance team, even if executives know what it means. And if a customer wants to apply analytics to predict the quality of the end product, instead of the failure of the equipment, then that shows up under quality and test.”
PTC shows another slide, a value matrix, showing the relevant technologies for different manufacturing sectors (see full report). “We have done a lot of work to understand this – what your biggest value drivers are, given the industry you’re in.”
Gallant rejoins. “It is really driven by the industry,” he says. “For a factory making soap or cola or potato chips, it’s less about the product, and more about managing volumes and machinery, and detecting anomalies on conveyor belts and pumps, or whatever. For more highly-complex, low-volume manufacturing, the use cases go deeper.”
It has another visual, showing the manufacturing journey (see full report). It is in line with Hitachi’s collaboration model, articulating a process of digital enlightenment: from a first position of ‘understanding’, after the sales team has engaged, through ‘advancement’, where the technology is applied, to ‘out-performance’, where it is reviewed, tweaked, and extended.
Most of manufacturing is still some way short of this first step to ‘understanding’, and will only proceed by joining up assets. Connectivity is the gateway technology for digital transformation in any space. The whole revolution starts with asset performance, says Ruh.
“Just the first phase – just the connection of machines – is going to be considerable. The whole of the industrial space will go through this, and we will see all business models change, and the winners and losers will change as well.”
The “idea of connected machines”, of monitoring asset performance from afar, had been made real in just five per cent of industrial enterprises five years ago; the figure is now close to 50 per cent, he says, and will climb to 75 per cent within a “few years.”
Just the act of putting sensors on equipment affords transformative gains, says Amos. “You might have two pumps coming off the line, one serial number apart from each other. One is in Minneapolis–St Paul, in the upper Continental 48, and the other is way down in Baytown, Texas, which is right on the Gulf of Mexico. And one is failing every six months,” he explains.
“By putting sensors on them, and bringing in atmospheric data, you can determine pretty quickly it’s the salinity level in the air that’s causing problems. And you can notify the whole supply chain, and check the seals on other pumps. By taking the data back into the MES systems, we find ways to shore them up so we never have that problem again. Bringing in all that data, and looking for patterns, is incredible.”
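The pattern hunt Amos describes amounts to joining failure records with environmental data and testing for correlation. A minimal sketch, with invented sites and figures – a real analysis would run over sensor time series inside the MES, not four hand-written rows:

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# site: (air-salinity index, pump failures per year) — invented values
sites = {
    "minneapolis": (0.10, 0.4),
    "denver":      (0.05, 0.3),
    "baytown":     (0.90, 2.1),
    "tampa":       (0.75, 1.8),
}
salinity, failure_rate = zip(*sites.values())
r = pearson(salinity, failure_rate)
print(f"salinity vs failure rate: r = {r:.2f}")
if r > 0.8:
    print("flag: check seals on pumps at high-salinity sites")
```

A strong correlation is the cue to notify the supply chain and inspect seals across the fleet – the same data loop, in miniature, that Amos credits with eliminating repeat failures.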
He gives another example, held up by GE Digital as a definitive work in the industrial IoT space. Brazilian firm Gerdau is the largest producer of steel in the Americas. Eighteen months ago, it deployed GE Digital’s asset management and reliability software to predict equipment failures. “Downtime is a bad thing. Gerdau was spending $305 million annually on maintenance. When an asset failed, and its plant went down, it was the equivalent of $12,000 per hour – per asset,” says Amos.
GE connected 600 assets across 11 plants – “everything from conveyor belts to smelters to cooling” – to provide actionable warnings of impending failures. “They have insights they never had before,” says Amos. At least two major incidents, equivalent to 30 hours and $1.5 million of downtime, were avoided in the first phase, he says. “This is a company that took digital transformation by the horns, and said, ‘we want to be world class’. It had the existing components in place already, but wasn’t satisfied. So they added our asset and field management services on top. It’s an example of a company taking it to the next level.”
OSIsoft partners and competes with the likes of GE. Its PI System underpins GE Digital’s work with the New York Power Authority, listed by Sherry in her London round-up (“we are capturing all of the data, prepping it and serving it up”), as well as the Aurelia Metals programme devised by Convergio (see full report). It has been skimming data off MES systems for years, says Perry Zalevsky, director of industry at OSIsoft.
He suggests the rush to ‘sensor-ise’ industrial plants is hasty. “Sometimes new hardware is needed, but often it is not,” he says, giving parallel examples of water companies in Tennessee, in the US, and Tasmania, in Australia, where the first installed 70 water meters to identify leaks, and the second just looked again at the data.
The impact was the same, he says. White House Utility District, producing $10 million of water per year, and billing for just $6.8 million, found $400,000 of leaks in the first year and $900,000 in the second. “Within two years, it had paid for the whole thing – software, labour, everything.” Tasmania Water, whose sewer overflows had poisoned local oyster beds, took a closer look at the data coming off PI, and put the fixes in place.
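The arithmetic behind the utility example is simple enough to sketch. The production and billing figures are from the article; the function itself is an illustrative assumption of how a ‘non-revenue water’ check might be framed:

```python
# Compare water produced with water billed to estimate non-revenue
# (leaked or unbilled) water. Figures as reported in the article.

def non_revenue_water(produced_value, billed_value):
    """Return (dollar value unaccounted for, share of production)."""
    lost = produced_value - billed_value
    return lost, lost / produced_value

lost, share = non_revenue_water(10_000_000, 6_800_000)
print(f"${lost:,.0f} unaccounted for ({share:.0%} of production)")
# → $3,200,000 unaccounted for (32% of production)
```

Against a gap that size, the $400,000 recovered in year one and $900,000 in year two explain the two-year payback on the whole project.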
“They’re both examples of IoT-like things taking machine data, interpreting it, and solving a problem. One needed extra hardware, and one didn’t,” says Zalevsky.
Much of PTC’s modelling work around use cases has been to guide manufacturers’ first steps. Provencher explains: “Before you start to optimise production, you have to understand current performance – and that means connecting your data sources, and gaining real-time visibility of performance. That’s the first step.”
Adjacent to him, Gallant remarks: “Most factories have new and old equipment right next to each other. Some machines are 50 years old, and still running. We get those online right away, so data starts to wash through. In manufacturing, it’s not about the tech, but the business problem – to keep those machines running at their very best.”
Amos says the same. “We go in and understand their most critical assets – it could be a smelter, it could be a truck. These are the things we monitor like a laser, so if operations go down, the impact is minimal. Every industry has its critical assets. We work out what they are, and build efficiencies around them,” he explains.
Again, OSIsoft resists the idea industrial connectivity is new, or urgent. Its PI System, like other platform equivalents, has worked as a mixing pot for operational data for a generation. “I remember, working in aluminium, when they switched from manual control to process control on the rolling mills,” says Zalevsky. “I remember the guy, reluctantly ceding control.”
He explains: “We’ve been doing this for 35 years; most big industrial companies have always taken data from their machines, and put it into systems like ours. There is some equipment, like smoke stacks, that hasn’t had capacity yet, and that is changing with these cheap devices, with intelligence on board.”
For most manufacturers, just starting out on the road to digital transformation, the data is too hot to handle. It is complex, relentless, and frequently useless. “We call it the data witch hunt,” he says. “You can pursue it, but unless you know what you’re doing, you might just spend lots of time collecting data – when you could have solved the problem already. Asset management and energy management are great – they’re really good use cases – and we will help get that data. But let’s start with one thing, only. Understanding and managing data is a skill. No one wants to collect data just for the sake of it.”
The modern magic, says Zalevsky, comes from crossing the IT/OT divide, with lightning connectivity and edge computing for rapid decisions, and cloud processing for deeper analytics.
“You’re looking at the vibration analysis of your motors, because it sure looks like the rotors are failing faster than expected. The operational data shows when and why, and how fast they failed. But combining it with business data reveals where they came from. That’s a better analysis.”
Provencher agrees, essentially. New industrial IoT platforms (see full report) combine these powers, and make them easily available and usable. They represent the springboard for digital change. Think of them like cakes, he says, with the sponge of industrial OT systems topped with the jam of IT systems, and role-based analytics and simulation tools layered on the top like icing.
The difference is manufacturers are not picking layers, anymore; they’re taking slices. “They’ll select a use case that leverages all the layers of the platform. We deliver in six to eight weeks, and then they go to the next use cases, and then to five, 10, 25, 50 factories.”
He adds: “They deploy what makes sense, and what delivers most value. They are developing solutions for specific use cases, and specific factories. You can add analytics and AR to further digitise your operations, and then apply the model to all your factories to benchmark performance, and expand to suppliers to gain supply-chain visibility. That’s the highest level, in terms of simple steps to industrial IoT.”
At the higher end, Predix offers three “swim lanes” of analytics, says Amos: one for self-service, one for existing run-times, and one for analytics and machine learning. “We have apps taking time series data from industrial machinery, and going through the learning every day, and making recommendations to customers in seconds – where there used to be three or four people just sitting there, crunching through the data all the time, and taking days, weeks and months.”
OSIsoft does not do machine learning; it leaves that to the likes of GE. But it is not convinced the hype is justified; most manufacturers could just do with a clearer view of the data available to them. “In a way, we obviate the need for AI. If you are doing this the right way, you probably don’t need AI. There might be cases where you do, but most times you’re looking at brand new data, and seeing trends that get you the majority of the value,” says Zalevsky.
A full version of the article, including additional information and use cases, is available for download. Click here to download the full report. A new report and webinar on edge computing in industrial IoT setups, called AI and IoT at the cutting edge – when to move intelligence out of the cloud and closer to the action, is also available; go here for the webinar; go here for the report.