```bash
# === Docker (any OS — recommended) ===
docker run -d --name orin-node -p 8090:8090 \
  -v orin-data:/app/orin/data ocorpt/orin-node:v1.5.7

# === macOS (direct download) ===
curl -LO https://macllm.ai/releases/ORIN-Node-v1.5.7-macos-arm64.tar.gz
tar -xzf ORIN-Node-v1.5.7-macos-arm64.tar.gz && ./ORIN-Node/ORIN-Node
```

```powershell
# === Windows (PowerShell) ===
irm https://macllm.ai/install.ps1 | iex
```

```bash
# === Linux (bash) ===
curl -fsSL https://macllm.ai/install.sh | bash

# === Python (any OS, from source) ===
pip install fastapi uvicorn pydantic
python orin_node_app.py
```
After starting, open http://localhost:8090 to see the dashboard. The node auto-registers with the bootstrap network.
```bash
# ORIN Mainnet
curl https://orin.macllm.ai/health

# Local node
curl http://localhost:8090/health

# Submit a query
curl -X POST https://orin.macllm.ai/query \
  -H "Content-Type: application/json" \
  -d '{"query": "What is MACLLM?", "user": "test_user"}'

# Check an account
curl https://orin.macllm.ai/account/test_user
```
The SDK has no external dependencies. It is available at github.com/ocorpt/onion-data/orin/sdk.py or from the Download page. Import it directly:
```python
from orin.sdk import ORINClient

# Connect (auto-detects local or public)
client = ORINClient()  # localhost:8090
# or
client = ORINClient("https://orin.macllm.ai")
# or with an API key
client = ORINClient("https://orin.macllm.ai", api_key="orin_abc123...")

# Submit a query
result = client.query("What is MACLLM?", user="test_user")
print(result.answer)           # Answer text
print(result.confidence)       # 0.0 - 1.0
print(result.provenance_hash)  # On-chain hash
print(result.block_index)      # Block number
print(result.mcoin_cost)       # Cost in mCoin

# Account operations
balance = client.balance("test_user")
client.transfer("test_user", "treasury", 100.0)
accounts = client.accounts()

# Chain and network inspection
health = client.health()
stats = client.chain_stats()
block = client.block(0)        # Genesis block
latest = client.latest_block()
nodes = client.nodes()
consensus = client.consensus()

# Provenance lookup
prov = client.provenance(result.query_id)
print(prov["provenance_hash"])
print(prov["query_hash"])
print(prov["answer_hash"])
```
Base URL: `https://orin.macllm.ai` (public) or `http://localhost:8090` (local).

Query request body:

```json
{
  "query": "What is MACLLM?",
  "user": "test_user",
  "uses_llm": false,
  "brain_answer": null  // Optional pre-computed answer
}
```

Transfer request body:

```json
{"sender": "test_user", "receiver": "treasury", "amount": 100.0}
```

API key creation body:

```json
{"name": "my-app"}
```
Authentication uses the `X-Api-Key` header. Read endpoints (GET) are public. Write endpoints (POST) are rate-limited (30/min per IP).
```bash
# Generate a key (localhost only)
curl -X POST http://localhost:8090/admin/api-keys \
  -H "Content-Type: application/json" \
  -d '{"name": "my-app"}'

# Use it in requests
curl -X POST https://orin.macllm.ai/query \
  -H "X-Api-Key: orin_abc123..." \
  -H "Content-Type: application/json" \
  -d '{"query": "Hello"}'

# Verify a key
curl https://orin.macllm.ai/auth/verify \
  -H "X-Api-Key: orin_abc123..."
```
Total mCoin supply is exactly 1,000,000,000 (1 billion). This is verified after every block. The invariant can never be violated — no new mCoin can be minted after genesis.
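The per-block check can be sketched as below. This is an illustration only: `balances` is a hypothetical account-to-mCoin mapping, not the node's actual storage format.

```python
# Sketch of the fixed-supply invariant check run after every block.
TOTAL_SUPPLY = 1_000_000_000  # mCoin, fixed at genesis

def check_supply_invariant(balances: dict) -> bool:
    """Raise if account balances no longer sum to the genesis supply."""
    total = sum(balances.values())
    if total != TOTAL_SUPPLY:
        raise RuntimeError(f"supply invariant violated: total={total}")
    return True

check_supply_invariant({"treasury": 999_999_900, "test_user": 100})
```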
Byzantine Fault Tolerant consensus with 2/3+1 validator threshold. Leader is selected via VRF-weighted random selection based on stake. Immediate finality — no forks.
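The selection logic can be sketched roughly as follows. This is not the node's implementation: a sha256-seeded weighted draw stands in for the actual VRF, and the function names are illustrative.

```python
import hashlib
import random

def quorum(n_validators: int) -> int:
    """BFT vote threshold: more than two thirds of validators (2/3 + 1)."""
    return (2 * n_validators) // 3 + 1

def select_leader(stakes: dict, round_seed: str) -> str:
    """Stake-weighted leader selection. A deterministic seeded draw
    stands in for the real VRF, which this sketch does not implement."""
    seed = int.from_bytes(hashlib.sha256(round_seed.encode()).digest(), "big")
    rng = random.Random(seed)
    names = sorted(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]
```

Because the draw is seeded from the round, every validator computes the same leader without extra communication.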
Novel consensus mechanism with 3 validation methods:
| Method | How It Works |
|---|---|
| Hidden Test Query | Network sends queries with known answers to test nodes |
| Cross-Validation | Multiple nodes process same query, compare results |
| Gradient Check | Verify node's matrix shard produces expected partial results |
The probability matrix is split into 3 shards with 10% overlap. No single node sees the full matrix — blind computation ensures privacy. Results are reassembled by averaging overlapping regions.
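A toy version of the split and the overlap-averaging reassembly, using the 3-shard/10% figures from above (the real shard geometry and matrix layout are internal to the Brain and not shown here):

```python
def split_with_overlap(rows, n_shards=3, overlap=0.10):
    """Split a list of matrix rows into shards whose boundaries
    overlap by ~10% of the shard size."""
    size = len(rows) // n_shards
    pad = max(1, int(size * overlap))
    shards = []
    for i in range(n_shards):
        start = max(0, i * size - pad)
        end = len(rows) if i == n_shards - 1 else (i + 1) * size + pad
        shards.append((start, rows[start:end]))  # (offset, chunk)
    return shards

def reassemble(shards, length):
    """Average values wherever shard regions overlap."""
    sums = [0.0] * length
    counts = [0] * length
    for start, chunk in shards:
        for j, value in enumerate(chunk):
            sums[start + j] += value
            counts[start + j] += 1
    return [s / c for s, c in zip(sums, counts)]
```

No shard's index range covers the full input, and overlapping regions are averaged on reassembly, mirroring the blind-computation scheme described above.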
Step 1: ORIN locks mCoin (1 for Brain, 5 for LLM)
Step 2: Brain routes to cognitive nodes by shard
Step 3: Nodes compute blindly (matrix × vector)
Step 4: Brain assembles partial results → answer
Step 5: ORIN records provenance + distributes mCoin
Step 6: Return answer + provenance hash to user
Every query processed by ORIN is recorded on-chain with a provenance hash. This hash links the original query, the computed answer, and the nodes that participated — creating an immutable audit trail.
```json
{
  "query_id": "q_abc123...",
  "query_hash": "sha256 of original question",
  "answer_hash": "sha256 of computed answer",
  "provenance_hash": "sha256(query_hash + answer_hash + block)",
  "block_index": 42,
  "nodes_involved": ["node_A", "node_B", "node_C"],
  "timestamp": "2026-03-19T12:00:00Z"
}
```
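Assuming the hashes are hex digests and the three fields are concatenated as strings (an assumption; the exact byte encoding is not specified here), the provenance hash can be reproduced like this:

```python
import hashlib

def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def provenance_hash(query: str, answer: str, block_index: int) -> str:
    """sha256(query_hash + answer_hash + block), per the record schema.
    String concatenation of hex digests is assumed, not confirmed."""
    query_hash = sha256_hex(query)
    answer_hash = sha256_hex(answer)
    return sha256_hex(query_hash + answer_hash + str(block_index))
```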
Use GET /provenance/{query_id} to retrieve the full provenance record for any query. See the REST API Reference for details.
| Recipient | Share | Purpose |
|---|---|---|
| Cognitive Nodes | 30% | Compute rewards |
| Knowledge Nodes | 20% | Data provision |
| Treasury | 20% | Protocol development |
| Architect | 20% | System design |
| Holders Pool | 10% | Token holder rewards |
Query costs: 1 mCoin (Brain-only) or 5 mCoin (uses external LLM)
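The table's split can be applied to a query fee as below (a sketch; the recipient keys are illustrative, and the shares sum to 100%):

```python
# Reward shares from the distribution table above.
REWARD_SPLIT = {
    "cognitive_nodes": 0.30,  # compute rewards
    "knowledge_nodes": 0.20,  # data provision
    "treasury": 0.20,         # protocol development
    "architect": 0.20,        # system design
    "holders_pool": 0.10,     # token holder rewards
}

def distribute_fee(cost_mcoin: float) -> dict:
    """Split a query fee (1 mCoin Brain-only, 5 mCoin with external LLM)
    according to the reward table."""
    return {k: cost_mcoin * share for k, share in REWARD_SPLIT.items()}
```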
```bash
# Health check
orin health

# Submit queries
orin query "What is MACLLM?"
orin query "How does BFT work?" --user alice --llm

# Account operations
orin balance test_user
orin accounts
orin transfer treasury test_user 100

# Chain inspection
orin blocks --limit 20
orin block 0
orin provenance <query_id>

# Node & consensus
orin nodes
orin consensus
orin cognitive-test

# Use a custom endpoint
orin --api https://orin.macllm.ai health
```