Displaying 34 containers
- Smart Surgery MISARS
Artificial intelligence to identify surgical anatomy for intraoperative guidance during laparoscopic donor nephrectomy, running on Advantech edge AI devices with hardware acceleration.
- 5Voxel VoxelCare
5Voxel's VoxelCare combines 3D ToF sensing and AI to monitor human behavior in real time, enabling smart healthcare alerts and instant accident detection via mobile.
- Mobilint NPU – LLM Inference Demo Container
Ready-to-run environment for executing various large language models (LLMs) locally on Advantech’s edge AI devices, embedded with Mobilint’s ARIES-powered MLA100 MXM AI accelerator module.
- Overview.ai Industrial Manufacturing Inspection
Edge-native AI vision inspection for industrial manufacturing: smart cameras powered by NVIDIA Jetson deliver real-time defect detection, classification, and measurement at the production line.
- MemryX NPU PPE Detection
Smart safety at the edge with the MemryX NPU and Advantech IPCs: GPU-free, high-performance AI for PPE detection, inspection, and edge monitoring. Plug in, accelerate, and deploy instantly.
- DEEPX NPU CLIP VLM Solution
Deploy CLIP/VLM on the DEEPX DX-M1 for real-time multimodal AI: NPU-accelerated, low-power, multi-channel inference, integrated into Advantech IPC hardware for scalable AIoT deployment.
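At its core, CLIP-style zero-shot matching compares an image embedding against candidate text embeddings by cosine similarity and picks the closest caption. A framework-free toy sketch of that scoring step (the 4-dimensional vectors and captions here are made up for illustration; real CLIP encoders emit 512-plus-dimensional embeddings, and in this solution the encoders themselves would run on the DX-M1 NPU):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_label(image_emb, label_embs):
    """Pick the caption whose embedding is closest to the image embedding,
    mirroring CLIP's zero-shot classification step."""
    return max(label_embs, key=lambda lbl: cosine_similarity(image_emb, label_embs[lbl]))

# Toy embeddings standing in for real encoder outputs.
image_emb = [0.9, 0.1, 0.0, 0.2]
label_embs = {
    "a photo of a cat": [0.8, 0.2, 0.1, 0.1],
    "a photo of a dog": [0.1, 0.9, 0.3, 0.0],
}
print(best_label(image_emb, label_embs))  # → a photo of a cat
```

Multi-channel deployment then amounts to running this comparison per camera stream against a shared set of precomputed text embeddings.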
- Deepseek-R1 1.5B Langchain RAG on NVIDIA Jetson™
AI-powered RAG solution for NVIDIA Jetson™: extract insights from PDFs with DeepSeek-R1 1.5B + LangChain. Features conversational memory, tool integration, and optimized performance for edge AI.
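The RAG pattern behind this container is: chunk the documents, retrieve the chunks most relevant to a question, and prepend them to the LLM prompt. A dependency-free toy sketch of that flow, using word overlap as a stand-in for LangChain's embedding-based retrievers (the sample text and function names are illustrative, not the container's actual API):

```python
def tokens(text):
    """Lowercase alphanumeric word set; crude stand-in for real tokenization."""
    return set("".join(ch if ch.isalnum() else " " for ch in text.lower()).split())

def chunk(text, size=6):
    """Split text into fixed-size word chunks (real pipelines use smarter splitters)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks, k=2):
    """Rank chunks by word overlap with the question; a stand-in for vector search."""
    q = tokens(question)
    return sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)[:k]

def build_prompt(question, context_chunks):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

doc = ("Jetson modules accelerate AI at the edge. "
       "Ollama serves local models. RAG grounds answers in documents.")
chunks = chunk(doc)
top = retrieve("How does RAG ground answers in documents?", chunks, k=1)
print(build_prompt("How does RAG ground answers in documents?", top))
```

In the real container, the retriever searches embeddings of the PDF chunks and the assembled prompt goes to DeepSeek-R1 1.5B running on the Jetson GPU.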
- Qwen2.5 3B AI Agent on NVIDIA Jetson™
Harness a Qwen + LangChain AI agent with the EdgeSync Device Library on NVIDIA Jetson™, enabling natural-language control of peripherals and edge hardware via FastAPI, Ollama, and Qwen 2.5 3B.
- Deepseek-R1 1.5B Ollama on NVIDIA Jetson™
Unlock AI innovation on Advantech Edge with an Ollama-powered DeepSeek-R1 1.5B container, optimized for NVIDIA Jetson™ with GPU passthrough; it bundles dependencies, runtime, UI, and a REST API with zero setup.
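Assuming the bundled REST API is Ollama's standard HTTP interface (which listens on port 11434 by default), it can be queried from Python with only the standard library. A minimal sketch; the model tag and prompt are examples, so adjust them to whatever the container exposes:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False returns one complete JSON object instead of a stream."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt):
    """POST the prompt to the local Ollama server and return the generated text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the container, or any Ollama instance, running locally):
# print(ask("deepseek-r1:1.5b", "Summarize edge AI in one sentence."))
```

The same payload shape works for any model the container serves; only the `model` tag changes.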
- Deepseek-R1 1.5B Llama.cpp on NVIDIA Jetson™
Enable real-time, offline AI on NVIDIA Jetson™ with DeepSeek-R1 1.5B and llama.cpp. This container delivers GPU-accelerated local inference, GGUF quantization, and modular AI workflows, with no cloud needed.
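The quantized model files llama.cpp loads use the GGUF format, which starts with a fixed 4-byte magic (`GGUF`) followed by a little-endian format version. A small sketch that sanity-checks a model file before handing it to the runtime (the file written here is a fake header for illustration, not a usable model):

```python
import struct

def check_gguf(path):
    """Return the GGUF format version if the file has the GGUF magic, else None."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            return None
        # Next 4 bytes: little-endian uint32 format version (3 for current files).
        (version,) = struct.unpack("<I", f.read(4))
        return version

# Write a tiny fake header to demonstrate (a real model file would continue with
# tensor metadata and the quantized weights).
with open("fake-model.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(check_gguf("fake-model.gguf"))  # → 3
```

A check like this catches truncated downloads or non-GGUF files before the container tries to load them on the Jetson.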
- LLM Langchain on NVIDIA Jetson™
The GPU-accelerated LLM LangChain container offers a modular, accelerated AI chat stack for Advantech GPU-accelerated devices, using Ollama and compatible with multiple LLM models such as Llama 3.2 1B, plus FastAPI integration.
- LLM Langchain AI Agent on NVIDIA Jetson™
GPU-accelerated LLM AI agent powered by LangChain, OpenWebUI, and DeepSeek LLaMA 3.2 1B via Ollama. Full GPU acceleration for smart, on-device automation.