Manufacturing involves sourcing the right parts, finding a suitable factory, getting your designs to the assembly line in time, and navigating compliance and security challenges required to scale globally.

Manufacturers increasingly employ automation technology to improve production lines and the resulting products with high efficiency. Visual inputs are the richest source of sensor information. Vision developments in manufacturing improve reliability and product quality and enable new production processes. We will discuss how machine vision is used in industrial automation to ease the manufacturing process.

In factory automation, sensors are used to gather data for inspection or to trigger other devices. These sensors fall into multiple categories; photoelectric, fiber optic, proximity, ultrasonic, and vision are the most common. Conventional sensors cannot distinguish between patterns or colors, and with their rigid mounting setups they cannot handle misalignment or variability. Vision sensors stand apart: they offer greater flexibility, perform multiple inspection types within a single image, and generate additional rich data for quality and process improvement.

Sensor fusion

Automation developers require a diverse array of sensors providing raw data and feedback, control systems, programmable logic, and connected embedded devices. Because the data is analyzed by an algorithm rather than a person, there is an opportunity to introduce different sensing modalities into machine vision systems. These may include modalities other than image sensors, such as time of flight, radar, and lidar. With this comes the need for sensor fusion. Sensor fusion is the process of taking data from multiple sources and combining it into a single data set that can be fed into a neural network. It is a crucial stage, because the way each sensor's data is weighted determines its influence on the final data set.
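
To make this concrete, below is a minimal sketch of weighted early fusion: each modality is normalized, weighted, and stacked into one array that a neural network could consume. The array shapes and weight values are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def fuse(frame_rgb, depth_tof, weights=(0.7, 0.3)):
    """Combine a camera frame and a time-of-flight depth map into one tensor."""
    # Normalize each modality to [0, 1] so no sensor dominates by scale alone.
    rgb = frame_rgb.astype(np.float32) / 255.0              # H x W x 3
    depth = depth_tof.astype(np.float32)
    depth = (depth - depth.min()) / (np.ptp(depth) + 1e-6)  # H x W

    # Per-modality weights decide how strongly each sensor influences
    # the fused data set fed to the network.
    rgb *= weights[0]
    depth = depth[..., np.newaxis] * weights[1]

    return np.concatenate([rgb, depth], axis=-1)             # H x W x 4

# Example with synthetic data standing in for real sensor frames.
fused = fuse(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8),
             np.random.rand(480, 640).astype(np.float32))
print(fused.shape)  # (480, 640, 4)
```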

Machine vision systems

Computer vision is the field of artificial intelligence (AI) that enables machines to "see"; combining embedded systems with computer vision results in embedded/machine vision systems. Machine vision (MV) technology is well positioned in the industrial Internet of Things (IIoT), where machines of all types are constantly connected. Machine vision is being adopted quickly for IIoT applications because of the increasing affordability of machine vision components and systems, a wider range of solutions, better hardware, and AI-based software with deep learning capabilities. A machine vision system can process a large amount of information in a fraction of a second.

Broadly speaking, the different types of vision systems include 1D vision systems, 2D vision systems (line scan or area scan), and 3D vision systems. The main functional blocks of a typical vision system (Figure 1) are an image acquisition unit (image sensor/camera module), a processing unit, segmentation, software and vision algorithms/pattern recognition, and connectivity.

Figure 1: Building blocks of MV
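
As a rough illustration of how the blocks in Figure 1 map to software, the sketch below walks through acquisition, processing, segmentation, pattern recognition, and connectivity using OpenCV. The camera index, filter parameters, and the stubbed connectivity step are assumptions for the example, not part of any specific product.

```python
import cv2

# 1. Image acquisition: grab a frame from a camera (index 0 is an assumption).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

# 2. Processing: convert to grayscale and suppress noise.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Segmentation: separate parts from the background with an Otsu threshold.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Vision algorithm / pattern recognition: extract blobs and measure them.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
areas = [cv2.contourArea(c) for c in contours]

# 5. Connectivity: hand the result to a PLC, MES or cloud endpoint (stubbed here).
print(f"Detected {len(contours)} objects, largest area: {max(areas, default=0):.0f} px")
```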

A machine vision system uses cameras as its eyes to capture visual information from the surrounding environment. Resolution and sensitivity are two important aspects of any MV system. Resolution determines the system's ability to differentiate between objects, whereas sensitivity is its ability to detect objects or weak impulses in dim lighting or at nonvisible wavelengths.
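
A common rule of thumb for translating a resolution requirement into a sensor choice is to place a minimum number of pixels across the smallest feature that must be distinguished. The sketch below applies that rule; the field of view, feature size, and pixels-per-feature figure are illustrative assumptions.

```python
def required_pixels(fov_mm, smallest_feature_mm, pixels_per_feature=3):
    """Minimum pixel count along one axis of the field of view."""
    return (fov_mm / smallest_feature_mm) * pixels_per_feature

# Example: a 200 mm wide field of view and 0.5 mm defects need
# roughly 1200 pixels horizontally.
print(required_pixels(fov_mm=200, smallest_feature_mm=0.5))  # 1200.0
```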

The major components of the system comprise lighting, a lens, image sensors, vision processing, and communication devices, and it often uses specialized optics to acquire images. MV technology is a combination of software and hardware that provides operational control to devices that execute functions such as capturing and processing images and measuring the various characteristics required for decision-making. The hardware components comprise cameras, sensors, processors, frame grabbers, LED lighting, and optics. The software offering is divided into application-specific MV software and deep learning MV software sub-categories.

Machine vision providers offer mainly two types of services: integration and solution management. Machine vision system integrators build inspection, testing, assembly, and gauging applications and help customers meet their product specifications. Solution management covers single-step debug operations, inspection control (start and stop), and opening and saving solutions.

Choosing the right hardware

Many options exist when deciding upon the hardware that will be running your machine vision AI application. Field programmable gate arrays (FPGAs), graphics processing units (GPUs) and even microcontrollers (MCUs) each have their own benefits.

FPGAs are very powerful processing units that can be configured to meet the requirements of almost any application. Tailor-made FPGA architectures can be created for specific applications, and an FPGA can achieve higher performance, lower cost, and better power efficiency than alternatives such as GPUs and CPUs. GPUs are specialized processors designed primarily to process images and video. Compared to CPUs, they are built from simpler processing units but host a much larger number of cores. This makes GPUs excellent for applications in which large amounts of data need to be processed in parallel, such as image pixels or video codecs. CPUs have a limited core count, which inhibits their ability to quickly process the large amounts of data needed for AI.

Image sensor and lighting

When developing a machine vision system, selecting the right image sensor can be one of the most important design decisions. The design requires high-resolution image capture, fast data transfer with minimal noise, and enough processing power to crunch the data and generate outputs. Advancements in front-side illumination (FSI) and back-side illumination (BSI) in CMOS sensor technology allow for higher-resolution images in low light.

Proper lighting is also an important consideration. The basis for all lighting performance comes down to three main image sensor characteristics: quantum efficiency (QE), dark current, and saturation capacity. When a sensor is implemented within a camera, the maximum QE of the camera will be lower than that of the sensor because of external optical and electronic effects.

Dark current and saturation capacity are also important design considerations in machine vision systems. Dark current measures the variation in the number of electrons that are thermally generated within the CMOS imager and can add noise. Saturation capacity denotes the number of electrons an individual pixel can store. Together with QE measurements, these values can be used to derive the maximum signal-to-noise ratio (S/N), absolute sensitivity, and dynamic range for an application.
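
The sketch below shows, in simplified EMVA-1288-style form, how QE, dark current, and saturation capacity combine into these figures of merit. The example values are assumptions, not data for any specific sensor, and real characterization should follow the sensor's data sheet.

```python
import math

def sensor_figures(qe, sat_capacity_e, dark_current_e_per_s, read_noise_e, exposure_s):
    # Dark signal accumulated during the exposure contributes shot noise.
    dark_noise_e = math.sqrt(dark_current_e_per_s * exposure_s)
    total_dark_noise = math.sqrt(read_noise_e**2 + dark_noise_e**2)

    # Shot-noise-limited maximum SNR and dynamic range, both in dB.
    max_snr_db = 20 * math.log10(math.sqrt(sat_capacity_e))
    dynamic_range_db = 20 * math.log10(sat_capacity_e / total_dark_noise)

    # Rough absolute sensitivity: photons needed to reach SNR = 1.
    abs_sensitivity_photons = total_dark_noise / qe
    return max_snr_db, dynamic_range_db, abs_sensitivity_photons

# Example: QE 65%, 10,000 e- full well, 5 e-/s dark current, 2.5 e- read noise, 10 ms exposure.
print(sensor_figures(qe=0.65, sat_capacity_e=10_000,
                     dark_current_e_per_s=5, read_noise_e=2.5, exposure_s=0.01))
```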

The right lighting will help increase the accuracy and efficiency of a machine vision application. Other factors to consider include the wavelength (such as infrared), fixed lighting, and lighting placement. Light sources and reflections that shine directly into the cameras of machine vision systems have been shown to decrease object detection accuracy.

Choosing the right machine vision camera

Recent advancements in machine vision technology let cameras transfer high-megapixel images at extremely fast frame rates. Selecting the best camera requires a review of several considerations: sensor type (CMOS or CCD), color versus monochrome, output interface (GigE, Camera Link, CoaXPress, USB3, HD-SDI), and frame rate. CCDs offer higher image quality, better light sensitivity, better noise performance, and an ideal global shutter. CMOS sensors are known for their high speed, on-chip system integration, and low manufacturing cost.
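
One quick sanity check when matching resolution and frame rate to an output interface is whether the raw data rate fits within the link's bandwidth. The sketch below does that arithmetic; the nominal throughput figures and the example camera settings are assumptions for illustration.

```python
def data_rate_gbps(width, height, bit_depth, fps):
    """Raw pixel data rate in gigabits per second."""
    return width * height * bit_depth * fps / 1e9

INTERFACES_GBPS = {            # nominal raw link rates (assumed round numbers)
    "GigE Vision": 1.0,
    "USB3 Vision": 5.0,
    "Camera Link (Full)": 6.8,
    "CoaXPress 2.0 (4x CXP-12)": 50.0,
}

rate = data_rate_gbps(width=4096, height=3000, bit_depth=8, fps=60)
print(f"Required: {rate:.1f} Gbps")
for name, capacity in INTERFACES_GBPS.items():
    print(f"  {name:<28} {'OK' if capacity > rate else 'insufficient'}")
```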

Camera manufacturers leverage the latest sensor developments and improvements in camera design, helping machine vision system developers and integrators create faster, more flexible, and more capable imaging systems. With higher camera resolutions comes the need for higher-quality, larger-format optics, which are readily available, with options including embedded liquid lenses for auto-focusing systems. Optics for nonvisible wavelengths enable new detection capabilities, with specialized imaging at wavelengths ranging from UV through IR.

LED illumination products, critical to all machine vision applications, now come in a wide variety of wavelengths and form factors. They feature increased flexibility, with tunable angles, additional wavelengths, more consistent spectral response, and even programmable sources with embedded controls. Another important enabler is the emergence of interfaces of up to 100 Gbps, as well as the recently updated CoaXPress 2.0 interface and even PCI-based interfaces.

Picking a machine vision lens

Deciding on the right lens for a machine vision application calls for a review of the required specs, some math, and a consideration of how the lens will integrate with the camera setup. When choosing the lens used in a machine vision application, one must consider the sensor that will be used. Sensor size and pixel size are of extreme importance in the selection process. The lens must be able to properly illuminate the complete sensor area to avoid shading and vignetting.

Ideal lenses produce images that perfectly match the object being captured, including all details and brightness variations. Standard lenses typically resolve around a megapixel and come in fixed focal lengths of roughly 4.5 mm to 100 mm, while macro lenses are optimized for close-up focus. When selecting the correct lens for an application, designers calculate the needed working distance from three factors: the focal length, the length of the inspected object, and the sensor size.
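
As a first-order illustration of that calculation, the sketch below uses the thin-lens approximation to estimate working distance from those three factors. The example values are assumptions, and the final choice should always be confirmed against the lens data sheet.

```python
def working_distance_mm(focal_length_mm, object_length_mm, sensor_size_mm):
    """Thin-lens estimate of lens-to-object distance."""
    magnification = sensor_size_mm / object_length_mm
    # Thin-lens object distance: f * (1 + 1/m)
    return focal_length_mm * (1 + 1 / magnification)

# Example (assumed values): a 25 mm lens imaging a 160 mm part onto an
# 8 mm wide sensor sits roughly 525 mm from the part.
print(working_distance_mm(25, 160, 8))  # 525.0
```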

Use cases and application scenarios

MV systems in the food and beverage industry are prominently used in packaging and bottling operations. Machine vision systems are likely to witness significant growth in pharmaceuticals and chemicals, printing and labeling, and other industry verticals, including agriculture, rubber and plastic processing, solar paneling, machinery and equipment, and security and surveillance. By application, the market is segmented into quality assurance and inspection, positioning and guidance, measurement, and identification. The systems are extensively used for scanning and identifying labels, barcodes, and text, especially in the packaging sector. This automates packaging activities, saving time, avoiding human errors, and increasing efficiency.
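
As a small example of the identification use case, the sketch below decodes a code on a captured label image using OpenCV's built-in QR code detector; 1D barcodes and printed text would need additional modules, and the image path is a hypothetical placeholder.

```python
import cv2

img = cv2.imread("package_label.png")          # hypothetical captured frame
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)

if data:
    print(f"Decoded label: {data}")            # e.g. a lot number or product ID
else:
    print("No code found - flag package for manual inspection")
```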

Machine vision solutions make manufacturing processes more efficient and competitive. MV cameras are built to perform reliably in a wide range of manufacturing environments. A typical application block diagram is shown in Figure 2 below.

Figure 2: Typical application block for machine vision

Avnet Integrated's intelligent vision platform

Avnet has developed modular development platforms specific to embedded vision. They combine smart cameras, edge- or cloud-based computing technologies, software, and artificial intelligence for the development of systems that can detect and identify people and objects. Avnet Integrated's intelligent vision technology platform (Figure 3) is designed for deep learning-supported video analytics at the edge. The design was developed in collaboration with leading processor, image sensor, and software tool suppliers to integrate their products into a cohesive solution.

Figure 3: Avnet AI technology platform vision - Infinity® AI Cube

For these high-performance applications, the intelligent vision platform is equipped with the extremely powerful COM Express™ Type 6 module of the MSC C6B-CFLR family from Avnet Integrated, which is based on an Intel® Core i7 or Intel® Xeon™ processor.

The technology platform can integrate various accelerator technologies for fast data processing at the edge. Depending on the specific application, the number of video channels and the required latency play a role. The solutions range from AI inferencing on powerful CPUs and graphics processing units to optimized Intel® Movidius Vision Processing Units (VPUs).
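
For a sense of what edge inferencing on such hardware looks like in code, here is a hedged sketch assuming OpenVINO's Python runtime; the model file, input shape, and device name ("CPU" here, which could be swapped for a GPU or a Movidius VPU device) are illustrative assumptions rather than details of the Avnet platform.

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_classifier.xml")          # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")   # target device is an assumption

# Stand-in for a preprocessed camera frame (batch, channels, height, width).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

result = compiled([frame])[compiled.output(0)]
print("Predicted class:", int(result.argmax()))
```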
