For a while now, industrial machine vision software and AI have been rapidly accelerating digital transformation across verticals, creating innovative new business models, and generating real business value by increasing automation, minimizing downtime, improving worker safety, and identifying defects. We are at a point where we can train a smart camera to see faster than the human eye and process data within milliseconds. Naturally, the potential is phenomenal. The challenge is to find a cost-effective solution that can manage such high-volume image data streams, overcome bandwidth and latency issues, and harness insight from machine learning to enable real-time decisions and action.
At Qualitas Technology, we took up this challenge and developed our EagleEye® Platform – a comprehensive solution for the entire machine vision software lifecycle. Want to know how it works? Let’s take a deep dive.
WHAT IS THE QUALITAS EAGLEEYE® PLATFORM?
The Qualitas EagleEye® Platform (QEP) is an end-to-end solution development engine backed by the power of the cloud and Deep Learning, letting you develop powerful visual inspection solutions up to 10x faster than with existing solutions and tools. The platform also provides powerful accuracy verification to cut down your false positive rate and increase your error detection rate.
Here is what the EagleEye® Platform can offer to manufacturers across various industries:
- Enhanced, Accurate and Consistent Quality Control Automation
- Modular Image Acquisition with multiple choices of resolutions, lighting and optics
- Complete traceability of quality inspection steps with images
- Business analytics and statistics of quality parameters
- A web dashboard to monitor system performance and accuracy – providing 100% visibility into system performance
THE CORE COMPONENTS OF THE QUALITAS EAGLEEYE® PLATFORM
So what does the system consist of, and how is it integrated into a solution that is efficient and accurate? Well, let us start with the hardware:
In the EagleEye® Platform, the cameras are responsible for capturing the light from a scene and converting it into digital information, i.e. pixels, using CMOS or CCD sensors. The light from the scene is focused by a lens onto the sensor so that it captures the image with maximum clarity. The lens of the camera provides the appropriate working distance, image resolution, and magnification for the vision system.
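To make the working distance and magnification relationship concrete, here is a back-of-envelope sketch using the thin-lens approximation. The sensor width, field of view, and working distance below are illustrative numbers, not parameters of any specific EagleEye® configuration.

```python
# Thin-lens back-of-envelope sketch: given a sensor and the desired field
# of view, estimate the magnification and the focal length needed at a
# chosen working distance. All numbers here are illustrative assumptions.

def magnification(sensor_width_mm, fov_width_mm):
    """Optical magnification = sensor width / field-of-view width."""
    return sensor_width_mm / fov_width_mm

def focal_length(working_distance_mm, m):
    """Thin-lens approximation: f ~= WD * m / (1 + m),
    where WD is the lens-to-object distance."""
    return working_distance_mm * m / (1 + m)

m = magnification(sensor_width_mm=8.8, fov_width_mm=100.0)  # a 1/1.1" sensor viewing 100 mm
f = focal_length(working_distance_mm=300.0, m=m)
print(round(m, 3), round(f, 1))  # 0.088 24.3
```

In practice a lens is chosen from available stock focal lengths and the working distance adjusted around it, but the same relationship drives the choice.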
Illumination is arguably the most important factor in the EagleEye® Platform. The lighting provides uniform illumination across all visible object surfaces, and the illumination system is set up to avoid glare and shadows, with spectral uniformity and stability at the forefront. The primary light sources used are spotlights, dome lights, line lights, and ring lights. Additionally, the colour of the light is calibrated and adjusted according to the surface of the material under inspection.
We now have our images captured, thanks to the cameras and illumination system. So what now? Well, we cannot start anything without first preparing the data for training and processing. The first step of the process is using the EagleEye® Cloud – machine vision software that prepares the data for subsequent processing.
- Data Labelling
Data labelling is the process of manually annotating content with tags or labels. In the EagleEye® Platform, each label identifies an element within the image. The annotated data is then used in supervised learning: the labelled dataset teaches the model by example. Data labelling is critical to the success of the EagleEye® Platform.
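As a rough illustration, a labelled image record can be thought of as a tag plus a region for each annotated element. The field names (`image`, `labels`, `tag`, `bbox`) and class names below are illustrative assumptions, not the EagleEye® Platform's actual annotation schema.

```python
# Hedged sketch of one labelled record. The schema shown here is an
# illustrative assumption, not the platform's real annotation format.

labelled_record = {
    "image": "part_0042.png",  # the captured frame being annotated
    "labels": [
        # each annotation: a class tag plus a bounding box [x1, y1, x2, y2]
        {"tag": "scratch", "bbox": [120, 45, 180, 90]},
        {"tag": "ok_surface", "bbox": [0, 0, 640, 480]},
    ],
}

def tags(record):
    """Collect the class tags attached to one annotated image."""
    return [ann["tag"] for ann in record["labels"]]

print(tags(labelled_record))  # ['scratch', 'ok_surface']
```

Records like this are what supervised training consumes: the image supplies the input, the tags supply the expected answers.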
- Label Verification
The EagleEye® Platform validates the presence, accuracy, and readability of codes applied to labels or marked directly onto products, flagging defective codes and codes printed outside the field of view. In the case of no-read events, the tool analyses archived images to reveal the underlying cause, whether low-contrast printing, faulty placement, or damage.
- Training and evaluation
How does the training process work exactly? We first supply image data that has already been labelled. Each label corresponds to a tag that indicates the identity of a particular object. The system analyses this data and, on this basis, creates or “trains” corresponding models of the objects to be identified.
Thanks to these self-learned object models, the deep learning network is now able to assign newly added image data to the appropriate classes, so that their data content or objects are also classified. With this allocation to classes in place, items can then continue to be identified automatically.
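The train-by-example loop described above can be sketched with a tiny classifier on made-up data. The real platform trains deep networks on images; this perceptron on two hand-picked features only illustrates the cycle of feeding labelled data, correcting weights on errors, and stopping once accuracy is acceptable. All data and parameters here are invented for illustration.

```python
# Toy sketch of supervised training: a perceptron on made-up 2-feature
# data. Illustrative only -- the actual platform trains deep networks.

data = [  # (features, label): 1 = "defect", 0 = "good" -- invented examples
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.95), 1),
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.15, 0.05), 0),
]

w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate

def predict(x):
    """Classify one sample with the current weights."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(100):          # repeated passes over the labelled data
    errors = 0
    for x, y in data:
        err = y - predict(x)      # zero when the prediction is correct
        if err:
            errors += 1
            w[0] += lr * err * x[0]   # nudge weights toward the right answer
            w[1] += lr * err * x[1]
            b += lr * err
    if errors == 0:               # desired accuracy reached: stop training
        break

print([predict(x) for x, _ in data])  # [1, 1, 1, 0, 0, 0]
```

A deep network replaces the single weighted sum with many layers and gradient-based updates, but the outer loop is the same: predict, measure the error, correct, repeat.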
The next step is to use the Qualitas EagleEye® Edge – high-performance Deep Learning-based inferencing software. Captured images are inferenced on an AI-accelerated vision controller running our innovative Deep Learning inferencing algorithm.
Deep learning inference is the process of using a trained Deep Neural Network (DNN) model to make predictions against previously unseen data. The Deep Learning training process actually involves inference, because each time an image is fed into the DNN during training, the DNN attempts to classify it.
This training process continues — with the images being fed to the DNN and the weights being updated to correct for errors, over and over again — until the DNN is making predictions with the desired accuracy. At this point, the DNN is considered “trained” and the resulting model is ready to be used to make predictions against never-before-seen images.
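Schematically, inference means applying the now-frozen weights to a never-before-seen sample, with no further updates. The weights and feature values below are illustrative assumptions continuing the toy-classifier idea, not values from a real trained model.

```python
# Sketch of the inference step: weights fixed during training are applied
# to an unseen sample's features. Weights and inputs are illustrative.

TRAINED_W = [0.14, 0.11]  # frozen after training -- never updated here
TRAINED_B = -0.1

def infer(features):
    """Classify one unseen sample; weights are read-only at inference time."""
    score = sum(w * x for w, x in zip(TRAINED_W, features)) + TRAINED_B
    return "defect" if score > 0 else "good"

unseen = (0.85, 0.9)      # features from a newly captured image
print(infer(unseen))      # defect
```

Because no weight updates happen, inference is cheap enough to run per image at production line speed, which is what the AI-accelerated vision controller is built for.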
If you’re planning to optimize your quality control with the latest state-of-the-art technology, there has been no better time than now. And we can help you achieve this with our EagleEye® Platform! Contact us to know more.