Every vision system relies on image sensors, processing hardware, and software algorithms to automate repetitive and mundane visual inspection tasks.
Image sensors are solid-state devices and among the most significant elements of an MV system. These small digital sensors sit inside industrial cameras and work with specialized optics to acquire images for further processing and analysis. Their fundamental job is to convert incoming light energy into electrical signals.
But how do these sensors work? When light from an object falls on the sensor, it is collected by an array of tiny potential wells called pixels. The photodiode within each pixel generates a charge proportional to the intensity of the incoming light. This charge is then digitized, and the resulting data is collected, organized, and transferred for display on a screen.
In this post, we will be discussing two parameters related to these image sensors: Quantum efficiency and sensor size.
What is Quantum Efficiency?
The first step in converting light to a digital signal is to convert photons to electrons. The term "quantum efficiency" is a measure of the efficiency of this conversion. More precisely, quantum efficiency (QE) is the ratio of the number of electrons generated to the number of photons that fall on the sensor. For example, if a sensor has a QE of 50%, one electron will be generated for every two incident photons. The higher the QE, the more efficiently the sensor converts light into signal.
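The relationship above can be sketched in a few lines. The photon count and QE value here are illustrative numbers, not properties of any particular sensor:

```python
# Hypothetical illustration of quantum efficiency:
# QE = electrons generated / incident photons
incident_photons = 10_000
quantum_efficiency = 0.50  # the 50% example from the text

electrons_generated = incident_photons * quantum_efficiency
print(electrons_generated)  # 5000.0 -> one electron per two photons
```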
Quantum efficiency varies with the wavelength of the incident light. Whether you use a monochrome camera or a color camera, the sensor's sensitivity peaks over a certain band of wavelengths; among color cameras, some sensors peak in the red and others in the blue. It is therefore recommended to make sure the QE is high in the wavelength band used by your lighting.
What is Sensor Size?
Sensor size is defined as the diagonal length between two opposite corners of the sensor. It fundamentally depends on the resolution and the pixel size. Say you have two 5 MP (resolution) cameras: the pixel size is 1 micron in one and 5 microns in the other. The sensor size of the second camera will then be five times that of the first.
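The two-camera comparison can be worked through numerically. The 2448 x 2048 resolution below is an assumed pixel layout for a 5 MP sensor, chosen only for illustration:

```python
import math

def sensor_size_mm(resolution_px, pixel_size_um):
    """Return (width, height, diagonal) in mm from resolution and pixel size."""
    w_px, h_px = resolution_px
    width_mm = w_px * pixel_size_um / 1000.0
    height_mm = h_px * pixel_size_um / 1000.0
    return width_mm, height_mm, math.hypot(width_mm, height_mm)

# Two hypothetical 5 MP cameras (2448 x 2048 px), differing only in pixel size.
small = sensor_size_mm((2448, 2048), pixel_size_um=1.0)
large = sensor_size_mm((2448, 2048), pixel_size_um=5.0)
print(large[2] / small[2])  # 5.0 -> the second diagonal is five times the first
```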
The sensor size also plays a significant role in determining the MV system’s field of view (FOV), magnification and inverse magnification, and choosing the lens with the right focal length.
What is Focal Length?
In optical terms, the focal length is the distance between the lens and its principal focus, the point where incoming parallel light rays converge. Since this point usually coincides with the sensor in machine vision cameras, it can also be understood as the distance between the lens and the sensor. Machine vision lenses typically have a fixed focal length.
How To Calculate Focal Length of Machine Vision Cameras?
Before selecting the right lens, we first have to choose the right camera, which means working out the resolution your vision application needs. Say your intended application has the following specifications:
- Field of View: 20 mm X 45 mm
- Minimum feature size (MFS): 0.2 mm
- Required number of pixels in MFS: 5
The required pixel density is 5/0.2 = 25 px/mm. The required resolution along each dimension is the FOV multiplied by the pixel density; in this case, that works out to 500 x 1125.
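The resolution calculation above can be sketched directly from the listed specifications:

```python
# Worked example from the text: FOV 20 mm x 45 mm,
# minimum feature size 0.2 mm, 5 pixels across each minimum feature.
fov_mm = (20, 45)
min_feature_mm = 0.2
px_per_feature = 5

px_per_mm = px_per_feature / min_feature_mm          # 25.0 px/mm
required_res = (fov_mm[0] * px_per_mm, fov_mm[1] * px_per_mm)
print(required_res)  # (500.0, 1125.0)
```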
However, the calculated resolution might not be available as a standard camera option, so we look for the option just above what we need. In our case, a 1.3 MP camera with a resolution of 1024 x 1280 might be a suitable choice.
Now that we have the resolution and know the pixel size of the chosen camera, we can calculate the sensor size: multiply the pixel size by the resolution along each of the two dimensions. The focal length then follows from the formula: Focal Length x FOV = Sensor Size x Working Distance.
Equivalently, if you start from a standard lens focal length, the same formula gives you the appropriate working distance for your application.
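Putting the steps together, a minimal sketch of the lens calculation might look like the following. The pixel size (4.8 um) and working distance (300 mm) are illustrative assumptions, not values given in the text:

```python
# Focal Length x FOV = Sensor Size x Working Distance (formula from the text),
# applied along the 45 mm FOV dimension of the 1024 x 1280 camera.
pixel_size_um = 4.8           # assumed pixel size
resolution_px = 1280          # pixels along the 45 mm FOV dimension
fov_mm = 45.0
working_distance_mm = 300.0   # assumed working distance

sensor_size_mm = resolution_px * pixel_size_um / 1000.0       # 6.144 mm
focal_length_mm = sensor_size_mm * working_distance_mm / fov_mm
print(round(focal_length_mm, 1))  # 41.0 -> pick the nearest standard lens
```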
However, merely calculating the focal length will not suffice. The image sensor is rectangular, while a lens projects a circular image. This is where the concept of the image circle comes in.
In the best-case scenario, the lens's image circle just encloses the sensor, i.e., the diagonal of the sensor matches the diameter of the image circle. The image circle can also be larger than the sensor. In any other case, you will see vignetting (dark corners where the image circle doesn't cover the sensor) and will not capture the entire image.
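The no-vignetting condition reduces to a simple comparison: the image circle diameter must be at least the sensor diagonal. The sensor dimensions below (7.2 x 5.4 mm, a 9 mm diagonal) are hypothetical:

```python
import math

def covers_sensor(image_circle_diameter_mm, sensor_w_mm, sensor_h_mm):
    """A lens avoids vignetting when its image circle diameter
    is at least the sensor's diagonal."""
    return image_circle_diameter_mm >= math.hypot(sensor_w_mm, sensor_h_mm)

# Hypothetical sensor: 7.2 x 5.4 mm (diagonal ~9 mm)
print(covers_sensor(11.0, 7.2, 5.4))  # True  - full coverage
print(covers_sensor(8.0, 7.2, 5.4))   # False - darkened corners
```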
Image sensors are a critical constituent of machine vision systems. In this post, we discussed two parameters affecting sensor performance: quantum efficiency and sensor size. Based on the required resolution and sensor size, we then computed the right focal length and lens coverage for our vision applications.