Catalog

Overview

The Advantech Container Catalog provides pre-integrated, hardware-accelerated containers that simplify edge AI development and deployment.

This plug-and-play YOLOv8 object detection container is optimized for the Qualcomm® QCS6490, abstracting SDKs, runtimes, and toolchains so that developers can focus on building real-world AI applications.

Built with full DSP/GPU acceleration, the container integrates the QNN SDK, SNPE, and LiteRT in a fully preconfigured environment, delivering real-time inference out of the box on the Advantech AOM-2721.


Key Capabilities & Benefits

  • Hardware-Accelerated Edge AI
    INT8 inference on Hexagon™ DSP 770 with optional Adreno™ 643 GPU acceleration via QNN, SNPE, and LiteRT.

  • YOLOv8 Ready Out-of-the-Box
    Supports Ultralytics and Qualcomm® AI Hub workflows for fast testing and optimized deployment.

  • Dual Workflow Flexibility
    Easily switch between rapid prototyping and production-grade optimization using script-based pipelines.

  • Multi-Model Format Support
    Compatible with TFLite, SNPE DLC, and QNN .so model formats.

  • End-to-End Tooling Included
    Preloaded export, quantization, and benchmarking scripts for streamlined development.

  • Real-Time Vision Pipeline
    GStreamer + OpenCV preconfigured for responsive video inference.

  • ROS-Ready Robotics Integration
    Compatible with the Qualcomm® Robotics Reference Distro and ROS 2 (ROS 1.3-ver.1.1).
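The INT8 path above rests on affine quantization: each float tensor is mapped to 8-bit integers through a scale and zero point. The sketch below illustrates only the arithmetic involved; the QNN/SNPE toolchains derive these parameters per tensor (or per channel) during model conversion.

```python
# Illustrative affine (asymmetric) INT8 quantization arithmetic -- not
# Qualcomm's implementation, just the mapping INT8 inference relies on.

def quant_params(rmin, rmax, qmin=-128, qmax=127):
    """Derive a scale and zero point covering the observed float range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

def quantize(x, scale, zp, qmin=-128, qmax=127):
    # Round to the nearest integer step, then clamp to the int8 range.
    return max(qmin, min(qmax, round(x / scale) + zp))

def dequantize(q, scale, zp):
    return (q - zp) * scale

scale, zp = quant_params(-1.0, 3.0)   # example activation range
q = quantize(1.5, scale, zp)
x = dequantize(q, scale, zp)          # recovers 1.5 to within one step
```

The round trip loses at most one quantization step of precision, which is why calibration of the float range matters when converting models for the DSP.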


What’s Included

YOLOv8 Export & Optimization

  • Ultralytics Export – Rapid testing with TFLite
  • Qualcomm® AI Hub Conversion – INT8-optimized deployments

Integrated Runtime Stack

  • QNN, SNPE, LiteRT for DSP/GPU acceleration
  • GStreamer + OpenCV for vision pipeline development
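A common pattern for the GStreamer + OpenCV pairing is to hand OpenCV a GStreamer pipeline description string. A minimal sketch, assuming a standard V4L2 camera at /dev/video0 (the device path and caps here are assumptions, not container defaults):

```python
# Sketch: compose a GStreamer pipeline string for OpenCV's VideoCapture.
# v4l2src, videoconvert, and appsink are standard GStreamer elements;
# the camera device path and resolution below are example values.

def build_pipeline(device="/dev/video0", width=1280, height=720, fps=30):
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw,width={width},height={height},framerate={fps}/1 ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )

pipeline = build_pipeline()
# With an OpenCV build that includes GStreamer support:
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```

Dropping stale buffers at the appsink (drop=true, max-buffers=1) keeps the inference loop working on the most recent frame rather than falling behind the camera.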

Preloaded Scripts & Tools

  • advantech-coe-model-export.sh – Model export & conversion
  • advantech-aihub-model-export.sh – AI Hub optimization
  • wise-bench.sh – Runtime verification & benchmarking

Container Demo


Edge-Ready Use Cases

  • Industrial Automation – Defect detection, safety zone monitoring, predictive maintenance
  • Smart Retail – Customer analytics, shelf monitoring, automated checkout
  • Intelligent Transportation – Vehicle/pedestrian detection, traffic and in-cabin monitoring
  • Robotics & Drones – Autonomous navigation, obstacle detection, infrastructure inspection
  • Smart City & Surveillance – Crowd analysis, parking management, perimeter security
  • Healthcare & Assistive Systems – PPE compliance, patient activity monitoring
  • Agriculture – Crop/livestock monitoring, pest detection, yield estimation
  • Edge AI R&D – Model benchmarking, INT8 vs FP32 evaluation, custom YOLOv8 training

Host Device Prerequisites

Component         Specification
Target Hardware   Advantech AOM-2721
SoC               Qualcomm® QCS6490
GPU               Adreno™ 643
DSP               Hexagon™ 770
Memory            8 GB LPDDR5
Host OS           Yocto 4.0 (LE1.3)

Container Environment Overview

Software Components on Container Image

Component   Version   Description
LiteRT      1.3.0     Provides the QNN TFLite delegate for GPU and DSP acceleration
SNPE        2.29.0    Qualcomm® Snapdragon Neural Processing Engine; optimized runtime for the Snapdragon DSP/HTP
QNN         2.29.0    Qualcomm® Neural Network (QNN) runtime for executing quantized neural networks
GStreamer   1.20.7    Multimedia framework for building flexible audio/video pipelines
Python      3.10.12   Python runtime for building applications
OpenCV      4.11.0    Computer vision library for image and video processing

Container Quick Start Guide

For a container quick start, including the docker-compose file and more, please refer to the README.
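The README is the authoritative source for the compose file; the fragment below only sketches the typical shape of such a service. The image name, device nodes, and volume paths are placeholders, not the container's actual values.

```yaml
# Illustrative compose-service shape only -- every value below is a
# placeholder; use the official README for the real file.
services:
  yolov8-demo:
    image: <advantech-container-image>   # placeholder -- see README
    network_mode: host
    devices:
      - /dev/video0:/dev/video0          # example camera node
    volumes:
      - ./models:/opt/models             # example model directory
    environment:
      - GST_DEBUG=2                      # quieter GStreamer logging
```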


Supported AI Capabilities

Vision Models

Model                          Format             Note
YOLOv8 Detection               TFLite INT8        Downloaded from Ultralytics' official source and exported to TFLite using the Ultralytics Python package
YOLOv8 Segmentation            TFLite INT8        Downloaded from Ultralytics' official source and exported to TFLite using the Ultralytics Python package
YOLOv8 Pose Estimation         TFLite INT8        Downloaded from Ultralytics' official source and exported to TFLite using the Ultralytics Python package
Lightweight Face Detector      TFLite INT8        Converted using Qualcomm® AI Hub
FaceMap 3D Morphable Model     TFLite INT8        Converted using Qualcomm® AI Hub
DeepLabV3+ (MobileNet)         TFLite INT8        Converted using Qualcomm® AI Hub
DeepLabV3 (ResNet50)           SNPE DLC, TFLite   Converted using Qualcomm® AI Hub
HRNet Pose Estimation (INT8)   TFLite INT8        Converted using Qualcomm® AI Hub
PoseNet (MobileNet V1)         TFLite             Converted using Qualcomm® AI Hub
MiDaS Depth Estimation         TFLite INT8        Converted using Qualcomm® AI Hub
MobileNet V2 (Quantized)       TFLite INT8        Converted using Qualcomm® AI Hub
Inception V3 (SNPE DLC)        SNPE DLC, TFLite   Converted using Qualcomm® AI Hub
YAMNet (Audio Classification)  TFLite             Converted using Qualcomm® AI Hub
YOLO (Quantized)               TFLite INT8        Converted using Qualcomm® AI Hub

Supported AI Model Formats

Runtime   Format    Compatible Versions
QNN       .so       2.29.0
SNPE      .dlc      2.29.0
LiteRT    .tflite   1.3.0

Hardware Acceleration Support

Accelerator   Support Level   Compatible Libraries
GPU           FP32            QNN, SNPE, LiteRT
DSP           INT8            QNN, SNPE, LiteRT

Best Practices

  • Prefer INT8 quantized models for DSP acceleration
  • Ensure fixed batch sizes when converting models
  • Use lower GST_DEBUG levels for stable multimedia handling
  • Always validate exported models on-device after deployment

Copyright © Advantech Corporation. All rights reserved.