This article takes an in-depth look at machine vision systems.
Read on to learn more about their components, camera types, functions, and applications.
Machine vision systems are assemblies of integrated electronic components, computer hardware, and software algorithms that offer operational guidance by processing and analyzing the images captured from their environment. The data acquired from the vision system are used to control and automate a process or inspect a product or material.
Many manufacturing industries adopt machine vision systems to perform tasks that are mundane, repetitive, tiring, and time-consuming for workers, resulting in increased productivity and reduced operational cost. For instance, a machine vision system in a production line can inspect hundreds or even thousands of parts per minute. The same inspection can be performed manually by human workers; however, manual inspection is much slower, more expensive, and prone to error, and not all running parts can be quality-checked offline due to time limitations.
Machine vision systems also promote high product quality and production yield by providing accurate, consistent, and repeatable detection, verification, and measurement. They help detect defects earlier in the process, which prevents the production and escape of defective parts, and they improve traceability and compliance with regulations and specifications in industrial processes.
Machine vision systems are typically composed of five elements (or components), as discussed below. These components are common and may be seen in other systems. However, when these components work together by playing their distinct roles, they create a vision system capable of sophisticated functions.
Lighting is responsible for illuminating the object and highlighting its distinct features to be viewed by the camera. It is one of the critical aspects of machine vision systems; the camera cannot inspect objects that it cannot see. Therefore, lighting parameters such as distance of the light source from the camera and object, angle, intensity, brightness, shape, size, and color of lighting must be optimized to highlight the features being inspected. In addition, the object must be seen clearly by the camera when it is struck by light; hence, the object's surface properties must also be considered during lighting optimization.
Lighting can be provided by LED, quartz halogen, fluorescent, and xenon strobe light sources. It can be directional or diffusive. Lighting techniques in machine vision systems are classified as follows:
Back lighting illuminates the target from behind. It creates contrast as dark silhouettes appear against a bright background. Back lighting is used to detect holes, gaps, cracks, bubbles, and scratches on clear parts. It is suitable for measuring, placing, and positioning parts. It is advisable to use monochrome light with light control polarization if very precise (subpixel) edge detection is necessary.
Diffuse (or full bright) lighting is used to illuminate shiny specular and mixed reflective targets, requiring even and multi-directional lighting. There are three types of diffuse lighting:
In partial bright field or directional lighting, the light rays from an angled directional light source strike the material directly. The camera and the object are in a co-axial position with each other. Partial bright field lighting is good in generating contrast and emphasizing topographical features of the surface. However, this lighting arrangement is less effective with specular surfaces as it creates lighting hotspot reflections.
In dark field lighting, the light rays from a directional light source (e.g., bar, spot, or ring light) strike the object at a low angle (10–15°) from the surface. This lighting arrangement makes surface flaws such as scratches, imprints, and notches appear bright by reflecting light to the camera, while the rest of the surface appears dark.
Devices such as color filters and polarizers may be used in machine vision lighting. Color filters are used to lighten or darken targeted features on the surface. Polarizers are installed in cameras to reduce lighting noises such as glares and hotspots and increase the contrast.
The lens captures the image and relays it to the image sensor inside the camera in the form of light. The lens of a machine vision camera can be an interchangeable lens (C-mount or CS-mount) or a fixed lens. Lenses are characterized by the following properties, which describe the image quality they can capture:
The image sensor inside the machine vision camera converts light captured by the lens into a digital image. It typically utilizes charged coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology to translate photons into electrical signals. The output of image sensors is a digital image composed of pixels that shows the presence of light in the areas that the lens has observed.
Resolution and sensitivity are critical specifications of image sensors. Resolution is the number of pixels the sensor produces in the digital image. Sensors with higher resolution produce higher-quality images, meaning more detail can be observed in the object being inspected and more accurate measurements can be obtained; resolution also determines the system's ability to perceive small changes. Sensitivity, on the other hand, refers to the minimum amount of light required to produce a distinguishable change in the image. Resolution and sensitivity are inversely related: increasing resolution decreases sensitivity.
The vision processing unit of a machine vision system uses algorithms to analyze the digital image produced by the sensor. Vision processing involves a series of steps, performed externally (by a computer) or internally (in stand-alone machine vision systems). First, the digital image is extracted from the image sensor and relayed to the processor. Next, the image is prepared for analysis by making the necessary features stand out. The image is then analyzed to locate the specific features that need to be observed and measured. Once observations and measurements of the features are completed, they are compared to pre-programmed specifications and criteria. Finally, a decision is made, and the results are communicated.
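The sequence of steps above can be sketched as a minimal processing loop. The threshold value, the area-based "feature", and the specification limits below are hypothetical placeholders for illustration, not any particular vendor's algorithm.

```python
import numpy as np

def inspect(image, min_area=50, max_area=500):
    """Toy vision-processing pipeline: binarize the image, measure the
    bright feature, compare against pre-programmed limits, and decide.
    The threshold and area limits are hypothetical placeholders."""
    # 1. Pre-process: binarize so the feature of interest stands out.
    binary = image > 128
    # 2. Locate/measure: here the 'feature' is simply the bright region's area.
    area = int(binary.sum())
    # 3. Compare to specification and make the pass/fail decision.
    passed = min_area <= area <= max_area
    return {"area": area, "pass": passed}

# Usage: a synthetic 20x20 image with a bright 10x10 square (area 100).
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 255
result = inspect(img)
```

A real system would replace the simple area measurement with the feature-location and measurement algorithms appropriate to the inspection task.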
The communication system quickly passes the decision made by the vision processing unit to specific machine elements. Once the machine elements have received the information (or signal), the machine elements will intervene on and control the process based on the output of the vision processing unit. This mechanism is accomplished by discrete I/O signals or data communication by a serial connection in the form of RS-232 serial output or Ethernet.
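As a minimal illustration of the communication step, the decision can be serialized into a simple ASCII message of the kind often carried over an RS-232 serial output or an Ethernet socket. The comma-separated message layout and the part identifier are hypothetical examples, not a standard protocol.

```python
def format_result_message(part_id, passed):
    """Encode a pass/fail decision as a simple ASCII message, as might be
    written to a serial port or Ethernet socket. The comma-separated
    layout is a hypothetical example, not an industry standard."""
    status = "PASS" if passed else "FAIL"
    return f"{part_id},{status}\r\n".encode("ascii")

# A rejected part triggers a FAIL message to the downstream machine element.
msg = format_result_message("PART-0042", passed=False)
```

A real installation would write these bytes to the configured serial or network interface, or map the decision onto discrete I/O lines.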
The types of machine vision cameras are the following:
A line scan camera precisely and quickly captures digital images one pixel line at a time. Although the sensor captures only a single line at a time, the whole object is still imaged: the complete image is constructed in software, pixel line by pixel line. For this reason, either the part or the camera must be moving during the inspection.
Line scan cameras can inspect multiple objects in a single line. They are ideal for high-speed conveying systems and continuous processes, and are well suited to continuous webs of material (such as paper, metal, and textiles), large parts, and cylinders.
Area scan cameras use rectangular image sensors to capture images in a single frame. The resulting digital image has a height and width based on the number of pixels on the sensor. The vision processing unit analyzes the scene image by image. Area scan cameras can perform almost all common industrial tasks and are easier to set up and align. Unlike line scan cameras, area scan cameras are preferred for inspecting stationary objects; objects can be paused momentarily in front of the camera to allow inspection.
3D scan cameras can perform inspections in the X, Y, and Z planes and calculate the object's position and orientation in space. They utilize single or multiple cameras together with laser displacement sensors. In a single-camera setup, the camera must be moved to generate a height map derived from the displacement of the laser line on the object; the object's height and surface planarity can be calculated using a calibrated offset laser. In a multi-camera setup, laser triangulation is deployed to generate a digitized model of the object's shape and location.
3D scan cameras are ideal for inspecting 3D-formed parts and robotic guidance applications. This type of machine vision camera can tolerate slight environmental disruptions (e.g., light, contrast, and color variations) while providing precise information. Hence, they are widely used in metrology, factory automation, and defect analysis of parts.
Hyperspectral imaging is like other spectral imaging techniques except that it gathers data across a much larger number of wavelengths of light. A conventional multispectral imager identifies a handful of bands such as red, green, blue, and near infrared; a hyperspectral imager can distinguish hundreds of narrow bands. This capability makes it possible for hyperspectral machine vision systems to detect differences and impurities inside an object, not just surface deformities or inaccuracies.
The rapid growth in the use of hyperspectral imaging machine vision systems is due to their ability to provide faster and more accurate data. Hyperspectral imaging has become an essential part of sorting processes due to its low cost and highly reliable quality control. It is capable of classifying substances that have no visual differences but differ chemically, such as plastics, and it is ideal for categorizing and gathering data on substances that are solid and not transparent to visible light.
A hyperspectral imaging system is able to do inspections that are not possible with typical cameras that assess the surface of an object. It is a highly advanced technology that will continue to become an industry standard for product quality control.
Industries that use hyperspectral imaging as part of their machine vision system are:
All pills in a batch look the same to a camera or the naked eye. In the batch, there may be impurities and defects beneath the surface that are undetectable. Hyperspectral imaging detects and marks them for removal.
Hyperspectral machine vision systems can detect contaminants such as maggots and identify non-food objects like rocks or branches in batches of vegetables. They can also detect impurities or contamination in factory-produced foods like cheese or sausage.
Hyperspectral machine vision is effective for detecting impurities, damp spots, knotholes, or resin pockets in wood.
Hyperspectral imaging is used to detect and identify hidden cancer cells.
Hyperspectral cameras separate a signal into its spectral components and project each component onto a single pixel, so the amount of energy that falls on any one pixel is quite low. The technology for hyperspectral imaging is rapidly advancing and opening new spectral ranges: a typical spectral range runs from 930 nm up to 1700 nm, and advancements in hyperspectral cameras have extended that range up to 2500 nm for all types of materials.
Machine vision systems can provide innovative and quick solutions by automating tasks commonly performed in industrial processes:
Presence inspection is the process of confirming the quantity and the presence or absence of parts. It is one of the basic operations performed by machine vision systems and one of the most widely performed tasks across industries. Practical applications of presence inspection include counting products (e.g., bottles, screws) and checking for the presence of labels on food packaging, electronic components on PCBs, adhesive application, and screws or washers in fastened parts.
The image processing methods employed by machine vision systems are the following:
In binary processing, the image captured by a monochrome camera is converted into pixels with two shade levels, black and white, making vision processing and decision-making easier. The conversion of each pixel is based on a specific threshold.
The digitized image produced by binary processing is further analyzed using blob analysis. A blob refers to a "lump": a cluster of pixels having the same shade. The digitized image is plotted on a coordinate system, and the X and Y coordinates of each blob are determined and analyzed.
Blob analysis is used in a variety of tasks such as counting (based on area), measuring length and area, locating the target’s position in space, distinguishing the orientation of targets, inspecting defects, and others.
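Blob analysis on a binarized image can be sketched as follows, assuming 4-connectivity between pixels. The flood-fill labeling below is a simple illustration of the idea, not a production algorithm; real systems use optimized connected-component routines.

```python
import numpy as np
from collections import deque

def find_blobs(binary):
    """Label 4-connected clusters of True pixels and return each blob's
    pixel count (area) and centroid (X, Y). Illustrative sketch only."""
    visited = np.zeros_like(binary, dtype=bool)
    blobs = []
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not visited[r, c]:
                # Flood-fill one blob starting from this unvisited pixel.
                queue, pixels = deque([(r, c)]), []
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                blobs.append({"area": len(pixels),
                              "centroid": (sum(xs) / len(xs),
                                           sum(ys) / len(ys))})
    return blobs

# Two separate bright squares -> two blobs with areas 4 and 9.
img = np.zeros((10, 10), dtype=bool)
img[1:3, 1:3] = True   # 2x2 blob
img[6:9, 6:9] = True   # 3x3 blob
blobs = find_blobs(img)
```

The per-blob area supports counting and size checks, while the centroid supports the positioning and orientation tasks described above.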
Other image processing and analysis techniques are:
Positioning is the process of comparing the location and orientation of the part to a specified spatial tolerance. The location and orientation of the part in 2D or 3D space are communicated to a robot or a machine element for it to align or place the target in its proper position or orientation. Machine vision positioning systems offer more accuracy and speed than manual inspection, alignment, and positioning. Practical positioning applications include robotic pick-up and placement of parts on and off the conveyor belt, positioning of glass substrates, checking of barcode and label alignment, checking of IC placement in PCB, and arrangement of parts packed in a pallet.
Machine vision identification scans and reads barcodes, 2D codes, direct part marks, and characters printed on parts, labels, and packages. These markings contain product name, manufacturer, date code, lot number, and expiration date. Identification is useful in improving the traceability of parts, inventory control, and verification system of products. Identification is accomplished by either an optical character recognition (OCR) or an optical character verification (OCV) system. In OCR systems, the machine vision reads the printed alphanumeric characters on the target without prior knowledge of the characters to look for. In OCV systems, the machine vision verifies the presence of the character strings.
Flaw detection is one of the most fundamental quality control tasks in manufacturing industries and the most utilized function of machine vision systems. In flaw detection, the machine vision searches for defects such as cracks, scratches, blemishes, gaps, contaminants, discoloration, and other irregularities present on the part's surface, which can affect the product functionality and reliability. Those defects appear randomly, so the machine vision algorithm looks for pattern changes, changes in color or texture, discontinuities, or connected structures. Next, the presence of these defects is monitored. The machine vision system can categorize the defects by type, color, texture, and size and sort out the defective parts failing the criteria. Machine vision systems can quickly and effectively detect small and microscopic flaws, which can be invisible to the human eye; these systems can work or operate for long periods of time, unlike human inspectors.
Flaw detection is widely used to inspect semiconductor and electronic components, appliances, tooling conditions, food products and their packaging, materials produced in continuous webs (e.g., paper, plastics, metals), and others. Flaw detection is useful in online inspections; once a failing part is detected coming from a process, the process is halted immediately and corrected, and the failing part is separated from its lot. Flaw detection is typically incorporated in machine vision systems together with presence inspection, measurement, and positioning functions.
Measurement is the checking of dimensional accuracy and geometric tolerances of parts. The machine vision system calculates the distances between two or more points and the location of the targeted features on the object to determine whether the measurement is within specifications. The lighting and optical system of the machine vision system must be optimized in order to obtain highly accurate, precise, and repeatable measurements.
The measurement function of machine vision systems can measure features as small as 25.4 microns. It typically comes together with flaw detection to measure the irregularities detected in parts. It is also used in calculating the volume of parts.
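The measurement task reduces to computing pixel-space distances between located features and converting them to physical units through a calibration factor. The calibration value, point coordinates, and tolerance below are hypothetical examples.

```python
import math

def measure_distance(p1, p2, microns_per_pixel):
    """Convert the pixel-space distance between two located features into
    microns using a calibration factor. The calibration value used in the
    example below is hypothetical."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) * microns_per_pixel

def within_spec(value, nominal, tol):
    """Check a measurement against a nominal dimension and tolerance."""
    return abs(value - nominal) <= tol

# Example: two edge points 300 pixels apart at 25.4 um/pixel calibration.
length_um = measure_distance((100, 200), (400, 200), microns_per_pixel=25.4)
ok = within_spec(length_um, nominal=7620.0, tol=50.0)
```

In practice the calibration factor comes from imaging a reference target of known size, and the optics must be optimized as described above for the calibration to hold.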
Machine vision systems have countless practical applications. The industry-specific applications of these systems are the following:
Packaging and identifying food, beverage, pharmaceutical, and consumer products require a reliable and robust inspection system.
Flaw detection and positioning are critical in semiconductor quality control; hence, machine vision systems are extremely helpful in this industry.
The difficulties of machine vision system design are the challenges of bringing together a variety of unrelated components and subsystems and engineering them to function as one uniform and streamlined unit. The initial steps in the process are determining what the system must do and how it will operate.
All aspects of the design project are carefully evaluated and stipulated such that the machine vision parts, components, and concepts will meet the desired requirements and outcomes of the application.
The term camera is a descriptor for the image acquisition aspect of a machine vision system. The requirements to establish are the task (feature detection, identification, location, or measurement) and the rate of the process. Once the requirements are established, the spatial resolution, image resolution, and framing rate for the application can be determined.
Spatial resolution refers to the number of pixels spanning the smallest feature to be processed, or to the precision and repeatability that must be met. Very small features, such as holes or bolts, can be resolved with very few pixels, but the result will not be reliable; more pixels are needed to improve spatial resolution.
For measurement applications, fractions of pixels can be used, down to a practical lower limit of about one tenth of a pixel. The allowable pixel (or sub-pixel) size depends on the required precision of the measurement.
When both feature detection and measurement are required, both spatial resolution values are calculated, and the smaller (finer) of the two is used.
The image resolution is the number of columns and rows necessary to achieve the required spatial resolution. It is calculated by dividing the width and height of the field of view by the spatial resolution, giving the number of pixels needed in each direction. The camera should have row and column counts greater than these values.
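This calculation can be sketched directly. The field-of-view dimensions and spatial resolution below are illustrative values, not recommendations.

```python
import math

def required_image_resolution(fov_width_mm, fov_height_mm, spatial_res_mm):
    """Columns and rows needed so each pixel covers no more than the
    required spatial resolution. All input values are illustrative."""
    cols = math.ceil(fov_width_mm / spatial_res_mm)
    rows = math.ceil(fov_height_mm / spatial_res_mm)
    return cols, rows

# Example: a 100 mm x 80 mm field of view at 0.25 mm/pixel spatial resolution
# requires at least 400 x 320 pixels, so a 640 x 480 camera would suffice.
cols, rows = required_image_resolution(100.0, 80.0, 0.25)
```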
The final step in the camera selection process is to determine how many frames per second are necessary for the application. The majority of machine vision systems operate at 10 to 15 parts per second or slower. When the resolution has to be higher, the image rate will be much slower.
The lens selection process is based on its format, field of view, distance from the components, and optical resolution. The necessary calculations include optical resolution, magnification, and the focal length of the lens.
Lenses are designed to work with a certain sensor size, characterized by the size of the image circle the lens projects. It is essential that the lens format be matched to the sensor format. The lens mount is determined by the camera and sensor sizes, with C-mounts typically used for low- to medium-resolution sensors.
Optical resolution refers to the ability of the lens to distinguish component features of different sizes, from small to large.
The magnification factor is determined by dividing the smallest dimension of the sensor by the corresponding dimension of the field of view. The magnification in turn depends on the focal length of the lens and the working distance.
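Under the common convention that primary magnification equals sensor size divided by field of view, and using the thin-lens approximation, the lens numbers can be sketched as follows. All values are illustrative, not a definitive lens-selection method.

```python
def lens_parameters(sensor_size_mm, fov_mm, working_distance_mm):
    """Estimate primary magnification and an approximate focal length from
    sensor size, field of view, and working distance, using the thin-lens
    relation f = WD * m / (1 + m). Illustrative sketch only."""
    m = sensor_size_mm / fov_mm              # primary magnification
    f = working_distance_mm * m / (1 + m)    # approximate focal length
    return m, f

# Example: 8.8 mm sensor width, 88 mm field of view, 220 mm working distance.
mag, focal = lens_parameters(8.8, 88.0, 220.0)
```

The computed focal length is then rounded to the nearest standard lens (e.g., 16 mm, 25 mm), and the working distance is adjusted to compensate.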
The light source for a machine vision system creates a contrast between the component and its background. This aspect of the process requires precision calculations since there are so many different lighting techniques.
Once the camera, lens, and light source are selected, it is important to test them to ensure they match the desired performance parameters. It is essential that the actual tools for the application are used during the test phase.
The references for the evaluation should include:
The machine vision system testing process may indicate the need to change or adjust components to better meet the goals of the application.