Displaying 43 containers
- Mobilint NPU – LLM Inference Demo Container
Ready-to-run environment for executing various large language models (LLMs) locally on Advantech’s edge AI devices embedded with Mobilint’s ARIES-powered MLA100 MXM AI accelerator module.
- Intel® OpenVINO™ Powered: Retail AI Self-Checkout
Experience the future of retail with Intel® OpenVINO™. Deploy this containerized AI self-checkout demo in minutes across CPU, iGPU, or NPU. Fast, repeatable, and optimized for edge performance!
- ONNX Runtime on Qualcomm Hexagon – QCS6490
Supercharge your Qualcomm QCS6490! 🚀 Run ONNX Runtime LLMs on Hexagon NPUs with one script. Our ready-to-use Python container delivers 25x faster inference. Just clone, run, and innovate!
- Image Segmentation on Qualcomm® Hexagon™
Accelerate edge AI with Qualcomm® Hexagon™-optimized image segmentation: a hardware-accelerated container with dual workflows for real-time semantic vision on QCS6490 platforms.
- Object Detection on Qualcomm® Hexagon™
Accelerate edge AI vision with this Qualcomm Hexagon-optimized YOLOv8 container: prebuilt, hardware-accelerated object detection for robotics, surveillance, and industrial vision.
- Literal Labs LBN: Wind Turbine Power Prediction
Experience the future of energy forecasting with Literal Labs’ Logic-based Network (LBN). Deploy ultra-efficient, deterministic wind power prediction on the Advantech ROM-2620 with 50x greater efficiency.
- LLM Ollama + OpenClaw on NVIDIA Jetson™
Stop just chatting; start doing. This Jetson-optimized stack uses OpenClaw to transform Ollama into a proactive AI Agent that executes tasks and manages files across 20+ messaging channels. 🚀🤖
- Pose Estimation on Qualcomm® Hexagon™
Unlock real-time human pose intelligence on the Qualcomm® QCS6490. This Hexagon™ DSP-accelerated container runs YOLOv8-Pose and HRNet for ultra-low-latency, edge-ready AI perception.
- LLM MLC LLM on Qualcomm® Adreno™
Leverage MLC LLM for efficient, GPU-optimized LLM inference on Qualcomm® edge devices: lower latency, reduced memory usage, offline deployment, and OpenAI-compatible APIs.
- eIQ GenAI Flow 2.0 on NXP i.MX95
Empower your edge with Advantech WEDA! Deploy NXP’s eIQ® GenAI Flow 2.0 on i.MX95 for high-performance LLM, ASR, and RAG workloads. Seamless, scalable, and NPU-accelerated. 🚀
- Neutron NPU Passthrough on NXP i.MX95
Direct NPU passthrough for the i.MX95: accelerate edge AI with containerized access to the Neutron NPU for real-time, high-efficiency inference.
- NPU Passthrough on NXP i.MX8M Plus
Enable efficient edge AI on the NXP i.MX8M Plus with NPU passthrough. This ready-to-use container delivers INT8 acceleration, prebuilt runtimes, and rapid deployment for smart, low-power AI applications, with no setup required.