AI Mesh
This example combines AI Foundation and Mesh Foundation. A lightweight node discovers a more powerful node on the network and sends an inference request to it, using the remote node's model instead of running one locally.
Prerequisites
- Two machines on the same local network, both running mimOE
- One node with a model loaded via AI Foundation Quick Start
| Role | Description |
|---|---|
| Node A (your machine) | The machine you're running commands from |
| Node B (remote) | Another machine with a model loaded (e.g., smollm2-360m) |
Both the ai-foundation and mesh-foundation addons are pre-installed with mimOE. No additional setup is needed beyond loading a model on Node B.
Step 1: Discover the Remote Node
From Node A, find other mimOE nodes on your network:
- cURL
- JavaScript
- Python
```bash
curl -X GET "http://localhost:8083/mimik-mesh/insight/v1/nodes?type=linkLocal" \
  -H "Authorization: Bearer $API_KEY"
```
```javascript
// apiKey is your mimOE API key (the same value as $API_KEY in the cURL tab)
const response = await fetch(
  'http://localhost:8083/mimik-mesh/insight/v1/nodes?type=linkLocal',
  {
    headers: { 'Authorization': `Bearer ${apiKey}` }
  }
);
const { data: nodes } = await response.json();
const remoteNode = nodes[0];
console.log(`Found: ${remoteNode.name} at ${remoteNode.addresses[0].url.href}`);
```
```python
import requests

# api_key is your mimOE API key (the same value as $API_KEY in the cURL tab)
response = requests.get(
    "http://localhost:8083/mimik-mesh/insight/v1/nodes",
    headers={"Authorization": f"Bearer {api_key}"},
    params={"type": "linkLocal"},
)
nodes = response.json()["data"]
remote_node = nodes[0]
remote_url = remote_node["addresses"][0]["url"]["href"]
print(f"Found: {remote_node['name']} at {remote_url}")
```
Response
```json
{
  "data": [
    {
      "id": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
      "name": "Workstation",
      "os": "linux",
      "addresses": [
        {
          "type": "local",
          "url": { "href": "http://192.168.1.101:8083" }
        }
      ],
      "services": []
    }
  ]
}
```
Note the remote node's address: http://192.168.1.101:8083.
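If discovery returns several nodes, you can pick an address programmatically instead of copying it by hand. Here is a minimal sketch; the helper name `pick_base_url` and the preference for `"local"`-type addresses are illustrative choices, not part of the Insight API:

```python
def pick_base_url(insight_response, preferred_type="local"):
    """Return the base URL of the first discovered node, preferring
    addresses of the given type (e.g. "local" for LAN addresses)."""
    nodes = insight_response.get("data", [])
    if not nodes:
        raise RuntimeError("no mesh nodes discovered")
    addresses = nodes[0]["addresses"]
    # Prefer an address of the requested type; otherwise take the first one.
    match = next(
        (a for a in addresses if a.get("type") == preferred_type),
        addresses[0],
    )
    return match["url"]["href"]

# Using the sample response from Step 1:
sample = {
    "data": [{
        "id": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
        "name": "Workstation",
        "os": "linux",
        "addresses": [{"type": "local", "url": {"href": "http://192.168.1.101:8083"}}],
        "services": [],
    }]
}
print(pick_base_url(sample))  # http://192.168.1.101:8083
```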
Step 2: Call Inference on the Remote Node
Use the discovered address to call Node B's AI Foundation inference API directly:
- cURL
- JavaScript
- Python
```bash
curl -X POST "http://192.168.1.101:8083/mimik-ai/openai/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer 1234" \
  -d '{
    "model": "smollm2-360m",
    "messages": [{"role": "user", "content": "What are the benefits of on-device AI?"}]
  }'
```
```javascript
import OpenAI from 'openai';

const remoteUrl = 'http://192.168.1.101:8083';
const client = new OpenAI({
  baseURL: `${remoteUrl}/mimik-ai/openai/v1`,
  apiKey: '1234'
});

const response = await client.chat.completions.create({
  model: 'smollm2-360m',
  messages: [{ role: 'user', content: 'What are the benefits of on-device AI?' }]
});

console.log(response.choices[0].message.content);
```
```python
from openai import OpenAI

remote_url = "http://192.168.1.101:8083"
client = OpenAI(
    base_url=f"{remote_url}/mimik-ai/openai/v1",
    api_key="1234",
)

response = client.chat.completions.create(
    model="smollm2-360m",
    messages=[{"role": "user", "content": "What are the benefits of on-device AI?"}],
)

print(response.choices[0].message.content)
```
You just ran inference on a remote node, discovered through the local mesh, without any cloud services.
What Just Happened?
- Node A used the Mesh Foundation Insight API to discover Node B on the local network
- Node A called Node B's AI Foundation inference API directly using the discovered address
- Node B ran the model and returned the response
- All communication happened over the local network with no cloud involved
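The full round trip can be sketched as one script. This is a sketch under the assumptions above, not verbatim product code: it uses only the standard library (a plain HTTP POST works because the inference API is OpenAI-compatible), the helper names are illustrative, and the `mesh_key` placeholder stands in for your API key:

```python
import json
import urllib.request


def inference_base(node_url: str) -> str:
    """Build the OpenAI-compatible base URL for a discovered node."""
    return node_url.rstrip("/") + "/mimik-ai/openai/v1"


def _get_json(url, token):
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def _post_json(url, payload, token):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def mesh_chat(prompt, model="smollm2-360m", mesh_key="...", ai_key="1234"):
    # Step 1: discover link-local nodes via the Insight API on the local node.
    found = _get_json(
        "http://localhost:8083/mimik-mesh/insight/v1/nodes?type=linkLocal",
        mesh_key,
    )
    nodes = found["data"]
    if not nodes:
        raise RuntimeError("no remote mimOE nodes found on this network")
    node_url = nodes[0]["addresses"][0]["url"]["href"]
    # Step 2: POST a chat completion to the remote node's inference API.
    result = _post_json(
        inference_base(node_url) + "/chat/completions",
        {"model": model, "messages": [{"role": "user", "content": prompt}]},
        ai_key,
    )
    return result["choices"][0]["message"]["content"]
```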
Why This Matters
- No model needed locally: Node A doesn't need to download or load any model
- Use the best hardware: Route inference to the node with a GPU or more RAM
- Zero configuration: Both nodes found each other automatically via link-local discovery
- Direct communication: The request went straight to Node B, no relay or cloud involved
- Standard APIs: Uses the same OpenAI-compatible API, just at a different address
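Because the remote API is identical to the local one, routing becomes a one-line decision: send the request to a discovered node when one exists, otherwise fall back to localhost. A minimal sketch of that policy (the function name and the fallback behavior are illustrative, not built into mimOE):

```python
def choose_inference_url(discovered_nodes, local_url="http://localhost:8083"):
    """Return the first usable discovered-node address, or fall back
    to the local node when discovery finds nothing."""
    for node in discovered_nodes:
        for addr in node.get("addresses", []):
            href = addr.get("url", {}).get("href")
            if href:
                return href
    return local_url

# No nodes discovered: route to the local node.
print(choose_inference_url([]))  # http://localhost:8083

# A node was discovered: route to it.
nodes = [{"addresses": [{"type": "local", "url": {"href": "http://192.168.1.101:8083"}}]}]
print(choose_inference_url(nodes))  # http://192.168.1.101:8083
```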
Going Further
This example uses link-local discovery (same network). To reach nodes on different networks:
- Account-Based Discovery: Discover your nodes anywhere
- Tunneling: Route requests across network boundaries via SEP
For building custom AI agents with pre/post processing logic:
- AI Development: Build agents with tool use and multi-agent collaboration