Image sensors have always been crucial for allowing machines and robots to recognize objects in factories and warehouses. Being able to use what they ‘see’ to learn about the environments in which they operate is raising the bar for image sensor performance. It’s essential to know that no single sensor solution is suitable for every application; therefore, a combination of different image sensors may be required. In this blog, you’ll learn about critical performance metrics for various image sensors that are required to design versatile autonomous mobile robots (AMR) or fixed robots.
Global Shutter Sensor for Navigation and Collision Avoidance
Sensors in mobile phones use a ‘rolling shutter’ for image detection, sequentially exposing rows of pixels to light from the image source. The slight delay between the exposure of each row introduces ‘artefacts’ (visual anomalies) into the detected image, and this effect is amplified by the angular motion and vibration experienced by an AMR travelling across the factory floor (a simple numerical illustration of this skew appears after the metrics list below). onsemi has developed class-leading image sensors with a ‘global shutter’, which exposes all pixels simultaneously to reduce the probability of artefacts appearing in the resulting image. This makes navigation easier for AMRs and makes them less likely to collide with other objects. Critical metrics for global shutter image sensors include:
- Global Shutter Efficiency (GSE) quantifies a pixel’s sensitivity to parasitic light contamination. A sensor with a high GSE is less likely to produce images with smear, leakage, or shading artefacts.
- Modulation Transfer Function (MTF) is essential for image sharpness, which matters for robots and machines that must read barcodes. The ability to detect edges is also an essential requirement for machines using AI to learn about their environment, and image sensors with a high MTF enable this. Both metrics are illustrated numerically below.
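The two metrics above can be estimated with a few lines of code. The sketch below is a simplified illustration only: the GSE expression shown is one common formulation (comparing parasitic signal collected during readout with signal collected during exposure), the MTF is measured as the contrast of a captured sinusoidal test pattern relative to the ideal pattern, and all numeric values are assumptions rather than onsemi sensor specifications.

```python
import numpy as np

# --- Global Shutter Efficiency (GSE), one common formulation (illustrative) ---
# Compares unwanted signal collected during readout (parasitic light
# sensitivity) with the signal collected during the intended exposure.
signal_exposure = 10000.0   # assumed electrons collected during exposure
signal_parasitic = 5.0      # assumed electrons leaked during readout
gse_percent = (1.0 - signal_parasitic / signal_exposure) * 100.0
print(f"GSE: {gse_percent:.2f} %")

# --- Modulation Transfer Function (MTF) at one spatial frequency ---
# Contrast of the captured image of a sinusoidal target relative to the
# contrast of the target itself. Values near 1.0 mean sharp transitions
# (e.g. clean barcode edges); values near 0 mean blur.
x = np.linspace(0.0, 1.0, 1000)
target = 0.5 + 0.5 * np.sin(2 * np.pi * 50 * x)      # ideal pattern, contrast ~1
captured = 0.5 + 0.35 * np.sin(2 * np.pi * 50 * x)   # assumed attenuated response

def michelson_contrast(img):
    return (img.max() - img.min()) / (img.max() + img.min())

mtf = michelson_contrast(captured) / michelson_contrast(target)
print(f"MTF at 50 cycles/unit: {mtf:.2f}")
```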
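To make the rolling-shutter skew described at the start of this section concrete, the short sketch below estimates how far a laterally moving object shifts between the exposure of the first and last rows of a frame during sequential readout. The row count, line time and object speed are illustrative assumptions, not figures for any particular sensor.

```python
# Illustrative sketch: horizontal skew introduced by a rolling shutter when an
# object moves laterally during row-by-row readout. All numbers are assumed
# for illustration, not sensor specifications.

rows = 1080                   # assumed sensor height in pixels
line_time_us = 15.0           # assumed readout time per row (microseconds)
object_speed_px_per_ms = 2.0  # assumed lateral speed of the object on the sensor

frame_readout_ms = rows * line_time_us / 1000.0
skew_px = object_speed_px_per_ms * frame_readout_ms  # shift of last row vs. first row

print(f"Rolling shutter readout time: {frame_readout_ms:.1f} ms")
print(f"Horizontal skew across the frame: {skew_px:.1f} px")
# A global shutter exposes every row at the same instant, so this skew term
# disappears and only motion blur within the exposure itself remains.
```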
High Dynamic Range (HDR) for Mixed Lighting Environments
A sensor with a high dynamic range (from 100 dB to greater than 120 dB) produces more detailed images in high-contrast lighting conditions such as those found in factories or outdoor environments. HDR can be achieved by capturing multiple exposures and recombining them: long exposures for low light, medium exposures for moderate lighting, and short exposures for bright light. However, with higher-resolution sensors this approach significantly increases the required capture frame rate, producing large volumes of data to be transferred to the AMR system-on-chip (SoC), increasing power consumption, and potentially allowing the image data to be contaminated by noise. onsemi takes a different approach: the AR0822 8 MP sensor performs embedded image recombination (eHDR) on the sensor itself, reducing the data bandwidth to the SoC. This also removes the burden of image processing from the SoC, allowing it to focus on AI or other tasks. In addition, the sensor features motion compensation to capture fast-moving objects accurately.
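To illustrate the multi-exposure recombination idea, here is a minimal sketch that merges long, medium and short exposures into a single linear estimate of scene radiance. The exposure times and hat-shaped weighting are assumptions for illustration; this is not a description of the AR0822's on-sensor eHDR pipeline.

```python
import numpy as np

# Minimal sketch of multi-exposure HDR recombination: three captures of the
# same scene at long, medium and short exposure times are merged into one
# linear radiance estimate. Exposure times and weighting are assumed.

exposure_times = np.array([16.0, 4.0, 1.0])   # ms: long, medium, short (assumed)

def merge_hdr(frames, times, saturation=0.95):
    """Weighted average of exposures normalized to a common radiance scale.

    frames: list of float arrays in [0, 1]; times: exposure times (ms).
    Pixels near saturation or near the noise floor receive low weight.
    """
    radiance = np.zeros_like(frames[0])
    weight_sum = np.zeros_like(frames[0])
    for frame, t in zip(frames, times):
        # Hat-shaped weight: trust mid-range pixels, distrust clipped/noisy ones
        w = 1.0 - np.abs(2.0 * frame - 1.0)
        w[frame > saturation] = 0.0
        radiance += w * (frame / t)     # normalize by exposure time
        weight_sum += w
    return radiance / np.maximum(weight_sum, 1e-6)

# Synthetic example: dark, mid and bright regions seen at three exposures
scene = np.array([0.02, 0.5, 5.0])            # relative scene radiance
frames = [np.clip(scene * t / 16.0, 0.0, 1.0) for t in exposure_times]
print(merge_hdr(frames, exposure_times))      # recovers relative radiances
```

Performing this recombination on the sensor, as the blog describes, means only the single merged frame crosses the interface to the SoC instead of three full-resolution exposures.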
Large Format Image Sensor with High Resolution for Distance Applications and Accurate Object Placement
Fixed robots are sometimes required to read labels containing information about where an object should be placed. Because the distance between the sensor and the label varies by application, a high-resolution sensor that can produce large-format images is required. onsemi’s higher-resolution sensors, ranging from 8 MP to 45 MP, are ideally suited to this purpose, allowing a robot to read a label and place an object precisely at its intended destination.
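A quick way to sanity-check whether a given resolution is sufficient at a given working distance is to estimate how many pixels land on the narrowest label feature. The sketch below does this with purely illustrative assumptions for distance, lens field of view, sensor width and feature size; the rule of thumb of 2 to 3 pixels per narrow bar is a general guideline, not an onsemi specification.

```python
import math

# Back-of-the-envelope check: does a given sensor resolution put enough
# pixels on the narrowest barcode element at the working distance?
# All values below are illustrative assumptions, not application specs.

distance_m = 3.0            # assumed working distance from sensor to label
hfov_deg = 60.0             # assumed horizontal field of view of the lens
h_pixels = 8192             # assumed horizontal resolution (~45 MP class sensor)
min_feature_mm = 0.5        # assumed narrowest bar width on the label

fov_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg / 2.0))
mm_per_pixel = fov_width_m * 1000.0 / h_pixels
pixels_per_feature = min_feature_mm / mm_per_pixel

print(f"Scene width at {distance_m} m: {fov_width_m:.2f} m")
print(f"Ground sample: {mm_per_pixel:.3f} mm/pixel")
print(f"Pixels on the narrowest bar: {pixels_per_feature:.1f}")
# A common rule of thumb is at least 2-3 pixels per narrow bar for a reliable
# decode; fewer pixels means moving closer, narrowing the field of view, or
# stepping up to a higher-resolution sensor.
```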
Sensor Know-how Combined with Development Tools
AMRs and fixed robots perform a variety of tasks in smart factories, so choosing image sensors optimized for specific automation tasks is essential. Whether it is superior global shutter sensors for navigation and barcode reading, high-resolution sensors for long distances and precise object placement, or HDR sensors that image well in unpredictable mixed-lighting environments, onsemi supplies image sensors that provide AMRs and fixed robots with the level of vision most appropriate to their application.
Watch Steve Harris' session on “AI/ML on the Factory Floor Image Sensor Importance in Smart Warehouses” here.