Run bigger AI models locally, over WiFi, on the computers you already own.

RiftIcon links your PC, laptop, and phone into one private neural mesh. Pool your VRAM to run Llama 3 70B and DeepSeek R1 — with built-in AI agents, a Canvas code editor, and one-click model downloads. No cloud APIs, no subscription fees, no data leaving your network.

SEE WHAT YOU CAN RUN → JOIN WAITLIST
// who is this for
THE AI TINKER

Break the VRAM Limit

Stop fighting llama.cpp out-of-memory errors. Pool your gaming PC, your MacBook, and your phone into a single logical GPU. Run the full Llama 3 70B offline using hardware you already paid for.

THE FOUNDER

$0 API Costs

Prototyping AI features shouldn't drain your runway. Turn 5 old office laptops into a local compute cluster. Test multi-agent workflows continuously for exactly $0 per token.

THE PROFESSIONAL

100% Data Privacy

Working under NDA or HIPAA? Don't send client data to OpenAI. RiftIcon runs an air-gapped, fully encrypted local mesh that guarantees your intellectual property never leaves the network.

// how it works
01

Node Setup

Download and run the RiftIcon binary on your main PC. Then load the lightweight RiftNode companion app on any additional devices (MacBook, old gaming rigs, or Android phones).

02

Device Pooling

The nodes automatically discover each other over your local WiFi network. RiftIcon aggregates all available VRAM and System RAM across devices into a single, logical compute pool.
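Conceptually, the pooling step is a simple aggregation: the mesh's logical memory is the sum of what every node contributes. A minimal sketch, with illustrative node names and memory figures (not RiftIcon internals):

```python
# Hypothetical node inventory: device name -> contributed memory in GB.
# (Illustrative numbers; RiftIcon discovers nodes automatically over WiFi.)
nodes = {
    "gaming-pc-rtx4060": 8.0,
    "macbook-m3": 18.0,    # Apple unified memory counted toward the pool
    "android-phone": 6.0,
}

def pooled_memory_gb(nodes: dict[str, float]) -> float:
    """Total logical memory of the mesh: the sum across all nodes."""
    return sum(nodes.values())

print(pooled_memory_gb(nodes))  # 32.0
```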

03

Model Execution

Select a GGUF model via the local dashboard. The system intelligently slices the tensor layers and distributes the compute payload across the mesh for high-speed, 100% offline inference.
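A common way to distribute a model across uneven devices (the approach llama.cpp exposes as a tensor split) is to assign transformer layers in proportion to each node's memory. A hedged sketch of that idea, with illustrative numbers, not RiftIcon's actual scheduler:

```python
def split_layers(total_layers: int, node_mem_gb: dict[str, float]) -> dict[str, int]:
    """Assign whole layers to nodes in proportion to their memory share."""
    total_mem = sum(node_mem_gb.values())
    # Floor each node's proportional share of the layers...
    alloc = {n: int(total_layers * m / total_mem) for n, m in node_mem_gb.items()}
    # ...then hand leftover layers to the largest nodes first.
    leftover = total_layers - sum(alloc.values())
    for n in sorted(node_mem_gb, key=node_mem_gb.get, reverse=True)[:leftover]:
        alloc[n] += 1
    return alloc

# 80 layers of a 70B-class model over three uneven nodes (illustrative):
print(split_layers(80, {"pc": 8.0, "mac": 32.0, "phone": 6.0}))
# {'pc': 14, 'mac': 56, 'phone': 10}
```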

// real benchmarks — verified methodology

All tests run on local RiftMesh V1.4.4 over standard 5GHz WiFi. Zero cloud APIs. Zero CPU fallback.

Hardware Mesh               Model (GGUF)      Quant     Speed
1x RTX 4060 (8GB)           Qwen 2.5 3B       Q4_K_M    81.5 tok/s
2x RTX 4060 (WiFi)          DeepSeek R1 8B    Q4_K_M    43.0 tok/s
2x RTX 4060 + 1x Mac M3     Llama 3 14B       Q5_K_M    28.2 tok/s
4-Node Mixed Office Mesh    Llama 3 70B       Q4_K_M    12.4 tok/s
// 100% openai api compatible

RiftIcon spins up a local server on port 7117 that mirrors the OpenAI API. Connect Cursor, OpenWebUI, AnythingLLM, or LangChain directly to your local hardware mesh with zero code changes.

UI Clients (OpenWebUI, etc.)

Override the Base URL in settings.

OpenAI Base URL: http://127.0.0.1:7117/v1
API Key (Required by UI, Ignored by Rift): sk-rifticon

Python / LangChain

Use the standard `openai` pip package.

from openai import OpenAI

# Point the client at the local mesh instead of api.openai.com
client = OpenAI(
    base_url="http://127.0.0.1:7117/v1",
    api_key="sk-rifticon",  # required by the SDK, ignored by Rift
)

resp = client.chat.completions.create(
    model="14B",
    messages=[{"role": "user", "content": "Hello mesh!"}],
)
print(resp.choices[0].message.content)
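Because the endpoint follows the standard schema, any HTTP client works too. A sketch of the raw request the SDK sends, assuming the usual /v1/chat/completions route and no third-party packages:

```python
import json
import urllib.request

# Standard OpenAI-style chat completion payload, aimed at the local mesh.
payload = {
    "model": "14B",
    "messages": [{"role": "user", "content": "Hello mesh!"}],
}

req = urllib.request.Request(
    "http://127.0.0.1:7117/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-rifticon",  # required by clients, ignored by Rift
    },
)
# resp = urllib.request.urlopen(req)  # uncomment with a running mesh
```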
// the ultimate neural node

Apple Silicon is the Muscle. Windows is the Brain.

Because Apple's M-Series chips use Unified Memory, RiftIcon treats system RAM as dedicated GPU VRAM. A MacBook Pro with 32GB of RAM effectively becomes a massive 32GB VRAM inference card when connected to your mesh.

Our core inference engine—the complex "Brain" that manages the mesh—runs exclusively on your main Windows PC. To tap into your MacBook's massive memory pool, simply drop the lightweight RiftNode companion app onto your Mac. It instantly beams its compute power and Unified Memory back to your Windows host over WiFi.

The Brain Windows PC
(Runs RiftIcon Core)
The Muscle MacBook (Apple Silicon)
(Runs RiftNode App)
// built-in ai agents

🤖 Your AI, Your Rules — Multiple Agents, One Dashboard

RiftIcon isn't just a chatbot — it's an agent platform. Create specialized AI agents with custom system prompts, sandboxed tools, multi-channel routing, and cron triggers. Switch between them instantly from the sidebar.

Code Agent chatting in RiftIcon dashboard — 66 tok/s, 2 nodes, mesh telemetry
Code Agent running at 66 tok/s across 2 mesh nodes
Agent Center — configure system prompts, descriptions, model, and temperature
✏️ Custom system prompts, model picker, temperature control
Agent Center — 6 built-in tools: Write File, Read File, Edit File, Run Command, Search, List Dir
🔧 6 sandboxed tools — file ops, shell, search, and more
Agent Center — channels: Dashboard, Discord, Telegram, WhatsApp, Webhook
📡 Channels — Dashboard, Discord, Telegram, WhatsApp, Webhook
Multi-Agent · Custom Prompts · Tool Use · File Ops · Channels · Cron Triggers · Webhooks
// canvas — code with your ai

🎨 Tell Your AI to Build It — Watch It Appear in Real-Time

Canvas gives you a split code editor with live preview, powered entirely by your local mesh. Ask your AI to build a website, script, or app component — and see the result rendered instantly. No copy-pasting, no tab switching.

Canvas split view — chat with AI agent on left, code editor and live preview on right
Split mode — chat + editor + live preview
Canvas with Code Agent building a website — red button and warning screen rendered live
🚀 Agent-generated website rendered in real-time
Live Preview · HTML / CSS / JS · Code Editor · AI-Generated · Side-by-Side
// one-click model downloads

📦 Browse & Download Models — No Terminal Required

The built-in Model Hub lets you browse, filter, and download GGUF models directly from the dashboard. See recommended models for your VRAM, pick a quantization, and click download — you're running in seconds.

Model Hub — browse and download GGUF models with one-click install, filter by size and type
📥 Browse, filter, and download — zero config
11+ Models · Size Filters · Recommendations · One-Click Download
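The VRAM recommendations come down to simple arithmetic: a GGUF file's footprint is roughly parameter count times bits-per-weight. A back-of-the-envelope sketch (the bits-per-weight figures are approximations and vary by model):

```python
# Rough bits-per-weight for common GGUF quants (approximate, model-dependent).
BPW = {"Q4_K_M": 4.85, "Q5_K_M": 5.69, "Q8_0": 8.5, "F16": 16.0}

def est_model_gb(params_billion: float, quant: str) -> float:
    """Approximate model footprint: parameters x bits-per-weight / 8."""
    return round(params_billion * BPW[quant] / 8, 1)

print(est_model_gb(70, "Q4_K_M"))  # ~42 GB: needs a multi-node mesh
print(est_model_gb(8, "Q4_K_M"))   # ~5 GB: fits a single 8GB card
```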
// what you're actually getting
RiftIcon v2.4 dashboard — threads, Code Agent chat, mesh telemetry with 2 nodes, 37.6 GB pooled VRAM
RiftIcon Dashboard — chat, threads, mesh telemetry, and Canvas in one window
Mesh Telemetry — 4-node topology with tensor pool, device discovery, and model selection
Mesh Telemetry — see every node, VRAM allocation, and tensor flow
RiftNode Android app — Samsung Galaxy contributing 10 GB RAM to the mesh over WiFi
RiftNode Android — turn your phone into a neural compute node
What you get at launch ($99 One-Time):
RiftIcon V1 (Windows): The core inference engine and dashboard.
RiftNode Apps: Unlimited distribution across Windows, Mac, and Android.
Lifetime Phase Upgrades: Get Phase 2 (Routing) and Phase 3 (Memory) free.
Priority Support: Direct 1-on-1 Discord access to the developer.
🛡️ The "It Actually Works" Guarantee: If RiftIcon does not successfully detect and mesh at least two of your supported devices over your local WiFi, we will refund you in full. No API keys. No bullshit.
🚀 Join the waitlist — be first when we launch
... founders waiting

🔒 Limited Founder's Edition. Waitlist members get first access + the lowest price when we launch.
V1 Windows Founder's Edition — Coming Soon.
Only 500 Founder's licenses will be available at launch. Waitlist members get priority access.
Your support directly funds the next phase of the ultimate plug-and-play agent stack.
🧪 Apply to Be a Beta Tester
Randy W. Stover
Solo Developer · Inventor · RiftIcon Creator

I built RiftIcon because I was tired of paying for cloud AI that censors my outputs and harvests my data. I had three machines collecting dust and realized their combined VRAM could run the models I actually wanted. So I wrote the orchestrator myself — in Rust, from scratch. Now I'm sharing it with you.

Every license directly funds the next phase. You're not buying from a corporation — you're backing an engineer who uses this tool every day.

🚀 Join the waitlist for Founder's Edition