AI Foundation Addon

The AI Foundation Addon is the easiest way to run AI models on your device. Start running inference in under 15 minutes. No coding, no cloud APIs, no data leaving your device.

What is the AI Foundation Addon?

The AI Foundation Addon is an addon package for mimOE that includes everything you need to run AI models locally:

  • Model Registry: Service for managing your AI models with two-step provisioning
  • Generative Inference Service: OpenAI-compatible API for running models

Pre-configured and ready to run.

Why Use AI Foundation?

Zero Code Required

Provision a model and call the API. No programming, no configuration, no complexity. Perfect for testing AI models or building simple integrations.

Complete Privacy

Models run entirely on your device. No cloud API calls, no data transmission, no privacy concerns. Your data never leaves your computer.

Four Model Kinds

Support for different AI model types:

Kind   Description             Use Case
llm    Large Language Model    Text generation, chat, reasoning
vlm    Vision Language Model   Multimodal (text + images)
embed  Embedding Model         Semantic search, similarity
onnx   ONNX Model              Image classification, predictive AI
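As a sketch, a client could validate the kind field of model metadata against these four values before registering a model. The kind values come from the table above; the other metadata fields shown (id) are illustrative assumptions, not the actual registry schema:

```python
# Hypothetical check of the "kind" field in model metadata.
# The four values come from the table above; field names are assumptions.

VALID_KINDS = {"llm", "vlm", "embed", "onnx"}

def validate_kind(metadata: dict) -> bool:
    """Return True if the metadata declares one of the four supported kinds."""
    return metadata.get("kind") in VALID_KINDS

print(validate_kind({"id": "my-model", "kind": "llm"}))    # True
print(validate_kind({"id": "my-model", "kind": "audio"}))  # False
```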

Device-First Performance

Take advantage of local hardware acceleration (Metal on macOS, CUDA on NVIDIA GPUs) for fast inference without cloud latency.

OpenAI-Compatible API

The Inference API follows the OpenAI chat completions format, making it drop-in compatible with existing tools and libraries.
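Because the endpoint follows the OpenAI chat completions format, a request body looks the same as one sent to OpenAI itself. A minimal sketch in Python, assuming a local mimOE instance (the host, port, and model id below are placeholders, not values from this page):

```python
import json

# Assumed local endpoint; /mimik-ai/openai/v1 is the base path from this page,
# but the host and port depend on your mimOE configuration.
BASE_URL = "http://localhost:8083/mimik-ai/openai/v1"

# Standard OpenAI chat-completions request body.
payload = {
    "model": "my-local-model",  # placeholder id of a provisioned model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    "stream": False,
}

# A client would POST this JSON to f"{BASE_URL}/chat/completions".
print(json.dumps(payload, indent=2))
```

Because the format is standard, existing OpenAI client libraries can typically be pointed at this base URL instead of api.openai.com.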

How It Works

AI Foundation workflow: Start mimOE → Create Model Metadata → Download/Upload Model → Call Inference API → Get Results

  1. Start mimOE: Run the startup script to launch the runtime
  2. Create Model Metadata: Register the model with its configuration
  3. Provision File: Download from URL or upload locally
  4. Run Inference: Call the OpenAI-compatible API
  5. All On-Device: Everything happens locally on your machine
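The numbered steps above can be sketched as the sequence of HTTP calls a client would make. Only the two base paths (/mimik-ai/store/v1 and /mimik-ai/openai/v1) are documented on this page; the host, port, sub-paths, and body fields below are illustrative assumptions:

```python
# Sketch of the provision-then-infer workflow as (method, url, body) calls.
# Sub-paths and body fields are assumptions, not the documented API surface.

BASE = "http://localhost:8083"  # assumed mimOE address

def workflow(model_id: str, model_url: str) -> list:
    """Return the HTTP calls for registering, provisioning, and querying a model."""
    return [
        # Step 2: create model metadata in the Model Registry
        ("POST", f"{BASE}/mimik-ai/store/v1/models",
         {"id": model_id, "kind": "llm", "url": model_url}),
        # Step 3: provision the model file (download from URL)
        ("POST", f"{BASE}/mimik-ai/store/v1/models/{model_id}/file", None),
        # Step 4: run inference via the OpenAI-compatible API
        ("POST", f"{BASE}/mimik-ai/openai/v1/chat/completions",
         {"model": model_id,
          "messages": [{"role": "user", "content": "Hello"}]}),
    ]

for method, url, body in workflow("tiny-llm", "https://example.com/model.gguf"):
    print(method, url)
```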

Quick Start Path

The fastest way to get started:

  1. Quick Start: Install and run your first inference (recommended starting point)
  2. Upload a Model: Learn model provisioning options
  3. Inference API: Complete API guide

Examples & Tutorials

Explore practical examples that demonstrate different capabilities.

API Endpoints

Service         Base URL             Description
Model Registry  /mimik-ai/store/v1   Model management (CRUD, upload, download)
Inference       /mimik-ai/openai/v1  OpenAI-compatible inference
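The two base paths can be combined with the address of a local mimOE instance to form full service URLs. The host and port below are assumptions; only the base paths come from the table:

```python
# Composing full endpoint URLs from the documented base paths.

HOST = "http://localhost:8083"  # assumed address, adjust for your setup

SERVICES = {
    "model_registry": "/mimik-ai/store/v1",
    "inference": "/mimik-ai/openai/v1",
}

def service_url(name: str) -> str:
    """Return the full base URL for a named service."""
    return HOST + SERVICES[name]

print(service_url("inference"))
```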

Use Cases

Development & Testing

Test AI models locally before deploying to production. Experiment with different models and prompts without cloud costs.

Privacy-Sensitive Applications

Medical records analysis, legal document processing, and personal data analysis. Ideal for anything where privacy is paramount.

Offline AI

Build applications that work without internet connectivity. Field service apps, remote locations, air-gapped environments.

Cost Optimization

Eliminate cloud API costs for high-volume inference workloads. Run thousands of inferences for free.

Edge Deployment

Develop applications that run AI directly on end-user devices such as smartphones, tablets, and IoT devices.

System Requirements

Supported Platforms

  • macOS 10.15+ (Apple Silicon)
  • Ubuntu 22.04+ (x86_64)
  • Windows 10+ (x86_64)

Resource Requirements

mimOE itself is lightweight. RAM and disk requirements depend on the AI models you choose to run. See Finding Models for model size guidelines.

What's Next?

After mastering the AI Foundation Addon, explore:

AI Development

Learn to build custom AI applications with pre/post processing, custom mims, and AI agents.

Platform Guide

Dive deeper into the mimOE platform for advanced capabilities.

Mesh Foundation

Connect devices and build distributed AI applications.

Getting Help

Start Now

Ready to run your first AI model on-device? Choose your path.