
Computer vision in manufacturing: definition, history and use cases

In terms of digital technology, computer vision is as old as the hills. In popular terms, its theory of expanded machine consciousness emerged just as Bob Dylan and The Beatles took a more experimental musical turn.

It has its roots in a 1966 summer holiday project at MIT, where university staff – at a loose end between semesters, and a full 12 months before post-war social consciousness took the trip of a lifetime at Haight-Ashbury – sought to attach a camera to a computer in order to have it “describe what it saw”.

Computer vision differed from digital image processing, as the field then stood, in its ambition to map scenes in three dimensions. It seeks to equip computers with the ability to discern, recreate, and render landscapes and objects from all sides, with a total depth of optical field, or ‘deep focus’.

The British Machine Vision Association puts it simply. “Humans use their eyes and their brains to see and sense the world around them. Computer vision is the science that aims to give a similar, if not better, capability to a machine or computer.”

Seminal early studies developed algorithms for visual processes such as extracting edges, labelling lines, and polyhedral modelling from images and video. These form the basis of computer vision today. But the science has developed.
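To make the idea of edge extraction concrete, here is a minimal sketch using OpenCV’s Python bindings and the Canny detector. The filename and thresholds are illustrative assumptions, not taken from the early studies mentioned above.

import cv2

# Illustrative only: load a hypothetical image in greyscale.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

# Smooth first to suppress sensor noise before differentiation.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# Canny edge detection; the two values are hysteresis thresholds.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edges.png", edges)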

In the past few decades, more rigorous mathematics and more sophisticated technology have seen the theory and the practice move faster. Algorithms now consider shading, texture and focus to create 3D models; images can be paired and compared, and cameras calibrated, to enhance reconstructions.
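As a rough illustration of how paired images feed reconstruction, the sketch below computes a disparity map from a rectified stereo pair with OpenCV’s block-matching algorithm. The filenames and parameters are assumptions; a real pipeline would calibrate and rectify the cameras first.

import cv2

# Hypothetical rectified left/right views of the same scene.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: a larger disparity range resolves closer objects.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Nearer surfaces produce larger disparities, giving a coarse depth cue.
norm = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("disparity.png", norm.astype("uint8"))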

More than this, statistical learning techniques, backed by the rise of cloud computing and compute power, have brought new capabilities, including facial recognition, behavioural analysis, and new artificial intelligence and machine learning techniques.
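One simple illustration of such statistically learned detectors is the classic Haar-cascade face detector, trained by boosting over large image sets. The sketch below uses the pretrained cascade that ships with OpenCV’s Python package; the input image is an assumption.

import cv2

# The cascade file ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical input image.
img = cv2.imread("crowd.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces and draw bounding boxes around them.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.png", img)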

Higher-definition imaging, from 4K and 10K cameras (increasingly deployed as the default resolution in smart-city surveillance), is enabling greater accuracy. Drones, or unmanned aerial vehicles (UAVs), are providing a more expansive, and hitherto unattainable, field of vision.

Today, computer vision is standard in a range of fields. It underpins medical image processing in the diagnosis of patients, achieved by scanning the body for malignant changes. It is used by the military to detect enemy soldiers or vehicles, as well as in adaptive vision-based missile guidance.

Computer vision is essential for autonomous vehicles, too, including submersibles, land-based robots and moon-based rovers, UAVs, and military vehicles and weaponry. It is crucial for driverless cars, trucks, trains, and boats, as they are tested and readied on starting grids during the next decade.

Along with a dozen ultrasonic sensors, Tesla equips its cars with eight surround cameras that “provide 360 degrees of visibility around the car, at up to 250 metres of range”. The point is to teach cars to ‘see’: to detect obstacles, find lanes, navigate journeys, and read traffic lights and road ‘furniture’.
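Lane finding is one task where classical computer vision still does useful work. The sketch below is a simplified, assumed pipeline (edge detection followed by a probabilistic Hough transform), not anything Tesla has published; the filename and tuning values are placeholders.

import cv2
import numpy as np

# Hypothetical dash-camera frame.
frame = cv2.imread("road.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Find edges first, then search for straight line segments.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                        minLineLength=40, maxLineGap=20)

# Draw candidate lane markings back onto the frame.
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("lanes.png", frame)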

But computer vision has found a particularly productive niche in industrial settings. Notably, analytical imaging tools, in cameras attached to UAVs, are being used for remote inspections of far-off rigs, pipelines, plants, and fields.

More crucially, computer vision is being used to optimise production lines, and digitise processes and workers. In manufacturing, the application of ‘machine vision’, which automates image analysis and directs the robot workforce on the shop floor, is a growth area.

On the production line, the most prominent use cases are for inspecting parts and products, controlling processes and equipment, and flagging ‘events’ and inconsistencies. In an insightful blog piece, DevTeam.Space presents 10 general examples of machine vision in manufacturing. These are: predictive maintenance, package inspection, reading barcodes, product assembly, defect reduction, 3D inspection, health and safety, tracking and tracing, text analysis, and deep learning.
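By way of a hedged example of the inspection and defect-flagging use cases above, the sketch below segments a part against a plain backdrop and flags any blob whose area falls outside an expected band. The filename and tolerances are invented for illustration; real systems are calibrated to the specific part, optics and lighting.

import cv2

# Hypothetical backlit image of a part on the line.
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Otsu thresholding separates the part from the background automatically.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# OpenCV 4.x returns (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Flag anything whose area falls outside the expected tolerance band.
EXPECTED_MIN, EXPECTED_MAX = 5000, 6000  # illustrative limits, in pixels
for c in contours:
    area = cv2.contourArea(c)
    if not (EXPECTED_MIN <= area <= EXPECTED_MAX):
        print("inspection flag: blob area", area, "outside tolerance")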

But as a starter kit, the best overview is from UK-based Industrial Vision Systems, which presents a trove of case studies detailing the impact of machine vision tools on quality, traceability, and downtime, covering manufacturing disciplines as diverse as vehicles, injection moulding, circuit boards, drugs, medical devices, and food and beverages.

It is worth a visit.

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.