Introduction to Machine Vision
A typical machine vision system consists of a digital or analog camera, an embedded system of processors and software, a frame grabber, and lighting. Unlike manual inspection, which is expensive, error-prone on repetitive tasks, and creates bottlenecks in workflows, machine vision systems perform the same job with formidable accuracy and speed, and they do so cost-effectively. MarketWatch reported that the machine vision market, valued at USD 6.41 billion in 2017, is expected to reach USD 13.53 billion by 2023, at a CAGR of 13.27% over the 2018-2023 forecast period. These figures underscore the rising prominence of machine vision.
Types of Sensors
Image sensors are solid-state devices and one of the most crucial elements of a machine vision system. Their fundamental purpose is to convert incoming light into electrical signals that can be viewed, analyzed, and stored. Based on their structure, image sensors fall into two categories: CCD and CMOS.
- Charge-Coupled Device (CCD)
The CCD sensor is a silicon chip that consists of numerous photosensitive sites. The term charge-coupled device refers to how charge packets are transferred from the photosensitive locations to the readout, analogous to a bucket brigade. Potential wells created by clock pulses facilitate the movement of charge packets around the sensor. While the CCD sensor is itself an analog device, the output is immediately converted into a digital signal using an analog-to-digital converter. Some reasons to adopt CCDs are:
- Higher fill factor or effective collection efficiency.
- Higher dynamic range and uniformity.
- High image quality with lesser complexity.
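The dynamic-range advantage noted above is usually quoted in decibels as 20·log10(full-well capacity / read noise). A minimal sketch of that calculation; the electron counts below are illustrative figures, not taken from any particular datasheet:

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range in decibels, from full-well capacity and read noise
    (both expressed in electrons)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# Hypothetical sensor: 40,000 e- full well, 10 e- read noise.
print(round(dynamic_range_db(40_000, 10), 1))  # -> 72.0
```

A larger full well or lower read noise widens the ratio between the brightest and dimmest usable signals.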
- Complementary Metal Oxide Semiconductor (CMOS)
In a CMOS sensor, the charge at each photosensitive pixel is converted into a voltage at the pixel itself. The signals are then multiplexed by row and column to multiple on-chip analog-to-digital converters (ADCs). Each photosensitive site typically comprises a photodiode and three transistors, which activate the pixel, perform amplification and charge-to-voltage conversion, and handle multiplexing. An electronic rolling shutter often accompanies this multiplexed readout. Some reasons to adopt CMOS are:
- Lower power consumption or dissipation due to lower charge flow.
- Can handle high light levels and can be used to image welding seams or light filaments.
- More compact than CCD counterparts.
- Not subject to smearing, unlike CCD counterparts.
Alternative Sensor Materials
Short-wave infrared (SWIR) is a recent advancement in imaging. SWIR wavelengths facilitate the imaging of density variations, even through obstructions such as fog. Standard silicon CCD and CMOS sensors are not sensitive enough at these infrared wavelengths to be useful there. This limitation is addressed with special Indium Gallium Arsenide (InGaAs) sensors: the InGaAs material has a narrower bandgap, which allows infrared photons to generate a photocurrent.
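The bandgap argument can be made concrete with the common approximation λ_cutoff ≈ 1.24 / E_g (wavelength in µm, bandgap in eV). A minimal sketch using commonly quoted bandgap figures; the exact values vary with material composition and temperature:

```python
def cutoff_wavelength_um(bandgap_ev: float) -> float:
    """Longest detectable wavelength for a photodetector:
    lambda = h*c / E_g, approximately 1.24 / E_g (eV) in micrometres."""
    return 1.24 / bandgap_ev

# Silicon (~1.12 eV) cuts off near 1.1 um, which is why silicon CCD/CMOS
# sensors miss most of the SWIR band; standard InGaAs (~0.75 eV, an
# often-quoted figure) extends the response to roughly 1.65 um.
print(round(cutoff_wavelength_um(1.12), 2))  # -> 1.11
print(round(cutoff_wavelength_um(0.75), 2))  # -> 1.65
```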
Before choosing apt sensors for your vision systems, it would be beneficial to understand the commonly used terminology and the sensor features which are mentioned below:
1. Pixels
When light from an object falls on a camera sensor, the optical data is collected by minute potential wells called pixels. These pixels can be photodiodes or photocapacitors, which generate a charge proportional to the intensity of the incident light. The data from these pixels is collected, organized, and transferred to a screen to be displayed.
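The light-to-signal chain at a single pixel can be sketched as a linear conversion followed by quantization. The quantum efficiency, gain, and bit depth below are hypothetical illustration values, not properties of any real sensor:

```python
def pixel_adc_counts(photons: float,
                     quantum_efficiency: float = 0.5,
                     gain_e_per_dn: float = 4.0,
                     bit_depth: int = 12) -> int:
    """Convert incident photons at one pixel to digital numbers (DN):
    photons -> electrons (quantum efficiency) -> DN (gain), clipped
    to the ADC's full-scale range."""
    electrons = photons * quantum_efficiency   # charge proportional to light
    dn = int(electrons / gain_e_per_dn)        # analog-to-digital conversion
    return min(dn, 2**bit_depth - 1)           # saturation at full scale

print(pixel_adc_counts(8_000))    # -> 1000
print(pixel_adc_counts(200_000))  # -> 4095 (pixel saturates)
```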
2. Sensor size
The size of the camera’s sensor plays a significant role in determining the vision system’s field of view (FOV). If a sensor too large for the lens’s image circle is chosen, the image corners may darken or fade away due to vignetting. A smaller sensor can instead be used to avoid this tunnel effect.
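Under a thin-lens, distant-object approximation, the FOV along one axis scales with sensor size as FOV ≈ sensor dimension × working distance / focal length. A rough sketch; the 6.4 mm sensor width, 25 mm lens, and 500 mm working distance are hypothetical example values:

```python
def field_of_view_mm(sensor_mm: float,
                     focal_length_mm: float,
                     working_distance_mm: float) -> float:
    """Approximate field of view along one sensor axis (thin-lens,
    distant-object approximation)."""
    return sensor_mm * working_distance_mm / focal_length_mm

# Hypothetical setup: 6.4 mm wide sensor, 25 mm lens, 500 mm working distance.
print(round(field_of_view_mm(6.4, 25.0, 500.0), 1))  # -> 128.0
```

Doubling the sensor width doubles the FOV for the same lens, which is why sensor and lens must be matched when sizing a system.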
3. Frame rate and shutter speed
The frame rate refers to the number of full frames composed in a second. For high-speed applications, a higher frame rate is recommended so that more images are acquired as the object passes through the field of view. The shutter speed corresponds to the exposure time of the sensor, which in turn determines the amount of incident light received.
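The interaction between shutter speed and object motion can be quantified as blur in pixels: the distance travelled during the exposure divided by the object-side pixel size. A sketch with hypothetical line-speed numbers:

```python
def motion_blur_px(object_speed_mm_s: float,
                   exposure_s: float,
                   fov_mm: float,
                   pixels_across: int) -> float:
    """Motion blur in pixels: distance the object travels during the
    exposure, divided by the object-side size of one pixel."""
    pixel_size_mm = fov_mm / pixels_across
    return object_speed_mm_s * exposure_s / pixel_size_mm

# Hypothetical line: part moving at 500 mm/s, 1 ms exposure,
# 100 mm FOV imaged across 2048 pixels.
print(round(motion_blur_px(500, 0.001, 100, 2048), 2))  # -> 10.24
```

Ten pixels of blur would smear fine features; halving the exposure (and compensating with more light) halves the blur.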
4. Electronic shutter
In the past, CCD cameras used electronic or global shutters, while the CMOS ones were restricted to rolling shutters. A global shutter is like a mechanical shutter, in which all pixels are exposed and sampled simultaneously, and the readout occurs sequentially. A rolling shutter, on the other hand, exposes, samples, and reads out sequentially.
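One consequence of the rolling shutter's sequential exposure is geometric skew on moving objects: the object shifts between the moment the first row is read and the moment the last row is read. A rough estimate, with hypothetical readout timings:

```python
def rolling_shutter_skew_px(rows: int,
                            row_readout_s: float,
                            object_speed_px_s: float) -> float:
    """Skew (in pixels) of a rolling-shutter image of a moving object:
    how far the object moves between the first and last row readout."""
    frame_readout_s = rows * row_readout_s
    return object_speed_px_s * frame_readout_s

# Hypothetical sensor: 1080 rows read at 10 us per row (10.8 ms total),
# object crossing the frame at 2000 px/s.
print(round(rolling_shutter_skew_px(1080, 10e-6, 2000), 1))  # -> 21.6
```

A global shutter eliminates this skew entirely, which is why it is preferred for imaging fast-moving parts.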
5. Monochrome cameras
CCD and CMOS sensors respond to wavelengths from approximately 400 nm to 1000 nm. The sensitivity at each wavelength can be determined from the sensor’s spectral response curve. Owing to their increased area depth, CMOS sensors are generally more sensitive to IR wavelengths than their CCD counterparts.
6. Color cameras
Since the solid-state sensor is based on the photoelectric effect, it cannot differentiate between colors.
There are two types of CCD color cameras: Single-chip and three-chip.
Single-chip cameras use a mosaic optical filter to segregate the incoming light into different colors, each directed to a dedicated set of pixels. The three-chip alternative, on the other hand, uses a prism to direct each color to a separate chip.
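The single-chip mosaic is commonly an RGGB (Bayer) tile, in which each pixel samples only one colour and the missing channels are later interpolated (demosaiced). A minimal sketch of which colour such a sensor samples at each pixel; the RGGB layout shown is one common convention, not a universal one:

```python
def bayer_channel(row: int, col: int) -> str:
    """Colour sampled by an RGGB single-chip sensor at (row, col):
    even rows alternate R, G; odd rows alternate G, B."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Top-left 2x2 tile of the mosaic:
print([[bayer_channel(r, c) for c in range(2)] for r in range(2)])
# -> [['R', 'G'], ['G', 'B']]
```

Note that green is sampled twice per tile, loosely matching the human eye's greater sensitivity to green; a three-chip camera avoids interpolation entirely at the cost of the prism assembly.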
Undoubtedly, sensors are the heart of all machine vision cameras. Modern sensors are solid-state silicon chips that convert the incoming photons into digital signals. These signals can further be analyzed, viewed, and transferred. However, understanding the key concepts, characteristics and basic terminology is a prerequisite for selecting the apt camera sensor for your vision system.