COM-HPC Modules Accelerate Machine Vision

Three-dimensional machine vision is admittedly not the simplest technology for recognizing objects. However, because it comes closest to human vision, 3D vision has a wide range of applications and is increasingly being used alongside machine learning. Large application fields in industrial manufacturing are Vision Guided Robotics (VGR) and Automated Guided Vehicles (AGVs), where 3D vision is currently enabling completely new solutions for Industry 4.0 applications. New COM-HPC modules, such as those from congatec, can deliver a major performance boost in both areas while strengthening the trend towards hardware consolidation.

congatec will soon be launching Server-on-Modules based on the new PICMG COM-HPC specification. Thanks to their standardized footprint, they guarantee long-term availability and are ideal for scaling the performance of rugged modular edge servers in mobile logistics vehicles and vision guided robotics systems.

The 3D machine vision market is developing very dynamically, with an annual growth rate of almost 15%. One of the main drivers of this growth is considered to be the aging world population. This aspect has two dimensions: On the one hand, it is assumed that the working-age population is decreasing. On the other hand, the number of people needing care is increasing. This creates a shortage of workers in both areas and calls for more robots. In the industrial manufacturing sector, robots are designed to produce all kinds of products more efficiently. In the healthcare sector, robots are used to facilitate care or to maintain autonomy and mobility. All in all, they are meant to make many things easier.

3D vision has a bright future

But before we are surrounded by armies of two-legged humanoid robots, there is still a lot of development work to be done. Most inspection systems, for example, are still static and anchored in one place. While there is a lot of movement in the VGR sector, mobility is still comparatively limited, although this market, too, is developing highly dynamically. Robots that are firmly fixed in place have one main task: to look very closely, and increasingly in 3D. This camera technology helps them identify objects along three axes (X, Y and Z), measure distances, and understand their task. Incidentally, the main driver of the industry's growth is the need for greater flexibility in discrete manufacturing, which is playing an increasingly important role due to the Industry 4.0 trend towards batch size 1.

To consolidate multiple edge applications in one system, congatec’s Server-on-Modules support real-time hypervisor technology from Real-Time Systems.

VGR systems often work together with AGVs, which are mainly used as feeding and discharging systems in discrete manufacturing. Demand for AGVs is also expected to grow dynamically, at an annual rate of 14.1% until 2027. Such AGVs move and transport products in manufacturing plants, warehouses, and distribution centers, eliminating or minimizing the need for permanent conveyor systems such as belt or roller conveyors. They follow configurable paths to optimize storage, picking, and transportation processes, and are used primarily across factories' central supply arteries, which must not be blocked by conveyors. They ultimately converge with VGR systems when developed into mobile pick & place robots.

Modularity is key

Very advanced mobile robotic systems often comprise several subsystems. For example, there are four-legged mobile robots that use three Computer-on-Modules: one to find their way around, a second to move, and a third to perform tasks. This is an ideal approach because it allows the manufacturer to scale each Computer-on-Module to the specific requirements of its task. It is also common practice in manufacturing cells to equip each robot with its own controls. However, it is also conceivable to consolidate all robot controls of a manufacturing cell in one system and let them communicate directly and in real time, for instance via two-wire Ethernet, with the actuators/frequency inverters of the drives.

Based on congatec Computer-on-Modules, the education kit developed by the Autonomous System Lab at Intel Labs China offers three levels of expansion.

However, such consolidation requires a mature platform strategy based on significantly more powerful modules, which have so far not been available in an industrial design. Incidentally, the first design of the four-legged robot mentioned above used 10 processor cores to ensure the required computing power and real-time capability. However, such processors are not yet available for ultra low-power mobile embedded systems.
The deep-learning-based vision system consists of a Basler blaze time-of-flight (ToF) camera that can easily be combined with embedded systems from congatec.

Higher core count provides multi-purpose boost

However, with the ratification of the COM-HPC Computer-on-Module specification by the PICMG standardization committee, immensely increased performance far exceeding that of COM Express modules is now becoming available. This applies in particular to the server category, as COM-HPC Server modules will in future make server processors available as rugged, scalable and solderable entry-class modules. This will enable multi-purpose embedded edge computing solutions on which even the most power-hungry, decentralized real-time control systems can be consolidated. This, of course, requires hypervisors for real-time capable virtual machines from vendors such as Real-Time Systems. Such real-time capable hypervisor solutions are necessary to ensure uninterrupted deterministic real-time control even if the HMI of the production cell is rebooting on the same processor, or the integrated IoT gateway is busy converting and evaluating large volumes of machine data and processing requests in parallel.

An accelerating need for more performance

But even without the integration of diverse subsystems on one module, COM-HPC is basically a must, because 3D image processing is a complex task that, for example, involves creating point clouds captured with time-of-flight (ToF) technology. This produces immense amounts of data, with 32 bits of spatial coordinates generated per pixel. A resolution of 640 x 480 pixels at 30 frames per second (fps) therefore produces around 35 MB of 3D data per second. Added to this is the color information of a classic 2D camera, which generally has around 4 times the resolution. At 1.3 megapixels (1280 x 1024 pixels) and 8-bit color depth per channel, we are talking about an additional 112.5 MB per second. So, raw data totaling around 150 MB per second must be processed.
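The data rates quoted above can be reproduced with a few lines of back-of-the-envelope arithmetic. The resolutions, bit depths, and frame rate are the ones from the text; everything else is a simple helper for illustration:

```python
# Back-of-the-envelope data rates for the 3D + 2D camera setup described
# in the text (1 MB = 2**20 bytes).

def stream_mb_per_s(width, height, bytes_per_pixel, fps):
    """Raw data rate of an uncompressed video stream in MB/s."""
    return width * height * bytes_per_pixel * fps / 2**20

# ToF point cloud: 32 bits (4 bytes) of spatial coordinates per pixel
tof = stream_mb_per_s(640, 480, 4, 30)       # ~35 MB/s

# 2D color camera: 3 channels x 8 bits = 3 bytes per pixel
color = stream_mb_per_s(1280, 1024, 3, 30)   # ~112.5 MB/s

print(f"ToF:   {tof:6.1f} MB/s")
print(f"Color: {color:6.1f} MB/s")
print(f"Total: {tof + color:6.1f} MB/s")     # ~150 MB/s
```

Note that these are raw, uncompressed rates; any preprocessing on the camera side would reduce the load on the module accordingly.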

Regardless of the ultimate structure of the COM and carrier based embedded system design, Basler’s dart series models offer the perfect starting point for adding image processing to common processors.

Stereo vision with two cameras and, optionally, structured light creates similarly high workloads. This translates into exceptionally high demands on data throughput and heterogeneous CPU and GPGPU computing power. For these use cases, the first generation of COM-HPC modules based on 11th generation Intel Core processor technology (codenamed Tiger Lake) is currently recommended. Although these are actually COM-HPC Client modules, they offer attractive features that other module standards do not provide.

For one thing, they support the full PCIe Gen4 interface and therefore provide twice the bandwidth of PCIe Gen3 between cameras and processors, as well as between discrete GPUs, which are used to process massively parallel image data and AI algorithms. This is paired with native MIPI-CSI camera support, which lowers the cost of camera technology and boosts performance. In addition, the modules support highly configurable Ethernet options, ranging from 8x 1GbE and 2x 2.5GbE including TSN support, to dual 10GbE connectivity based on the congatec COM-HPC starter set. This support can be extended further to include two-wire Ethernet, which then makes it possible to connect even the smallest peripheral devices such as individual sensors and actuators efficiently.
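The "twice the bandwidth" figure follows directly from the link arithmetic: PCIe Gen4 doubles the per-lane transfer rate from 8 GT/s to 16 GT/s while keeping the 128b/130b line encoding introduced with Gen3. A quick sketch (approximate one-direction throughput, ignoring protocol overhead):

```python
# Approximate usable PCIe bandwidth per direction, after 128b/130b
# line encoding (used by Gen3 and Gen4); protocol overhead is ignored.

def pcie_gb_per_s(gt_per_s, lanes):
    """One-direction throughput in GB/s for a given transfer rate and lane count."""
    return gt_per_s * lanes * (128 / 130) / 8

gen3_x16 = pcie_gb_per_s(8, 16)    # ~15.75 GB/s
gen4_x16 = pcie_gb_per_s(16, 16)   # ~31.51 GB/s

print(f"Gen3 x16: {gen3_x16:.2f} GB/s")
print(f"Gen4 x16: {gen4_x16:.2f} GB/s")
print(f"Ratio:    {gen4_x16 / gen3_x16:.1f}x")
```

Since both generations use the same encoding, the ratio is exactly 2x regardless of lane count.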

AI ecosystem is crucial

congatec’s comprehensive AI support for MIPI-CSI connected cameras enhances the application-readiness of IIoT and Industry 4.0 networked embedded systems even further. AI and inference acceleration can be implemented on the CPU using Intel DL Boost based vector neural network instructions (VNNI), and on the GPU using 8-bit integer instructions (Int8). Another attractive feature in this context is support for the Intel OpenVINO ecosystem for AI. This includes a function library and optimized calls for OpenCV and OpenCL kernels to accelerate Deep Neural Network workloads across platforms to obtain faster and more accurate AI inference results. The Autonomous System Lab of Intel Labs China has already introduced a COM Express based platform for educational purposes. In addition, an Intel certified “Ready for Production” kit for workload consolidation is also already available. With the new COM-HPC modules, it is now possible to evaluate this OpenVINO ecosystem from the software libraries to Adaptive Human-Robot Interaction (AHRI) or Simultaneous Localization and Mapping (SLAM) on the COM-HPC form factor.
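To illustrate the numeric idea behind Int8/VNNI inference acceleration, here is a minimal NumPy sketch (illustrative only, not congatec or OpenVINO code) of symmetric 8-bit quantization: weights and activations are mapped to int8, multiply-accumulated in integer arithmetic, and rescaled back to float, trading a small accuracy loss for much cheaper integer math:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization: returns int8 values and the scale factor."""
    scale = np.abs(x).max() / (2 ** (num_bits - 1) - 1)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # toy weight matrix
a = rng.standard_normal(64).astype(np.float32)        # toy activation vector

qw, sw = quantize(w)
qa, sa = quantize(a)

# Integer matmul, accumulated in int32 to avoid overflow, then rescaled
y_int8 = (qw.astype(np.int32) @ qa.astype(np.int32)) * (sw * sa)
y_fp32 = w @ a

rel_err = np.abs(y_int8 - y_fp32).max() / np.abs(y_fp32).max()
print(f"max relative error: {rel_err:.3%}")  # small, typically a few percent
```

In production, toolchains such as OpenVINO perform this quantization and calibration automatically; the sketch only shows why 8-bit integer hardware paths can stand in for 32-bit floating point with little accuracy loss.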

ATX carrier board conga-HPC/EVAL-Client

The ATX carrier board conga-HPC/EVAL-Client offers everything needed for the evaluation of smart vision robotics and autonomous logistics vehicles. It features two high-performance PCIe Gen4 x16 interfaces as well as a wide range of LAN options for data bandwidth, transmission, and connectors, including 2x 10GbE as well as 2.5GbE and 1GbE support. Using mezzanine cards, the carrier can host even faster interfaces of up to 2x 25GbE, making this evaluation platform a perfect solution for massively networked edge devices. The centerpiece of the presented starter kit for COM-HPC Client designs is the conga-HPC/cTLU Computer-on-Module, which is available in different processor configurations. For each of these configurations, there are three different cooling solutions to match the entire configurable 12 to 28 W TDP range of the 11th generation Intel Core processors.

About The Author

Zeljko Loncaric is marketing engineer at congatec.
