FOR GPU OWNERS

Power the Decentralized AI Network

Join the NeuraNET compute network. Contribute GPU power to run AI inference jobs.

<5min
Setup time from download to running
24/7
Automated job processing
SOL
Payments via Solana blockchain

Quick Start Guide

Get your node up and running in 4 simple steps

1

Download Node Client

Download the NeuraNET node client for your operating system. Available for Windows, Linux, and macOS.

2

Install AI Engines

Install Ollama for text generation and/or ComfyUI for image generation. You can run one or both.

Ollama (Text Generation)

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a model
ollama pull llama2
# Verify it's running (default: port 11434)
curl http://localhost:11434/api/tags
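Once a model is pulled, you can send it a test prompt through Ollama's HTTP API. A minimal sketch: the request body below targets the `/api/generate` endpoint, and `"stream": false` asks for a single JSON response instead of a token stream (the model name assumes the `llama2` pull above).

```shell
# Build a one-off request body for Ollama's /api/generate endpoint.
cat > /tmp/ollama-request.json <<'EOF'
{
  "model": "llama2",
  "prompt": "Reply with the word: ready",
  "stream": false
}
EOF

# Sanity-check the JSON before sending it (exits non-zero on a syntax error).
python3 -m json.tool /tmp/ollama-request.json

# Send it once the Ollama server is running (default port 11434):
# curl http://localhost:11434/api/generate -d @/tmp/ollama-request.json
```
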

ComfyUI (Image Generation)

# Clone ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
# Install dependencies
pip install -r requirements.txt
# Download a checkpoint (e.g., SDXL)
# Place .safetensors files in models/checkpoints/
# Start ComfyUI (default: port 8188)
python main.py --listen
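Before starting ComfyUI, it's worth confirming a checkpoint is actually in place; a missing model file is the most common first-run failure. A quick check, assuming you're one directory above the ComfyUI checkout:

```shell
# Look for at least one checkpoint file in ComfyUI's models folder.
if ls ComfyUI/models/checkpoints/*.safetensors >/dev/null 2>&1; then
  echo "checkpoint found"
else
  echo "no checkpoint yet - download one into models/checkpoints/"
fi
```
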
3

Configure Your Node

The node client creates a config file automatically. You can customize it to enable/disable engines and set ports.

# node-config.json (auto-created in app data folder)
{
  "walletAddress": "your-solana-wallet",
  "apiUrl": "http://localhost:3001",
  "ollama": {
    "enabled": true,
    "port": 11434
  },
  "comfyui": {
    "enabled": true,
    "port": 8188
  }
}
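A stray comma or quote in node-config.json will stop the client from loading it, so a quick JSON lint after editing saves a confusing restart. A minimal sketch that writes a sample config (placeholder values, matching the structure above) and validates it with Python's stdlib:

```shell
# Example config matching the structure above (placeholder wallet address).
cat > node-config.json <<'EOF'
{
  "walletAddress": "your-solana-wallet",
  "apiUrl": "http://localhost:3001",
  "ollama":  { "enabled": true, "port": 11434 },
  "comfyui": { "enabled": true, "port": 8188 }
}
EOF

# json.tool exits non-zero on malformed JSON, so this doubles as a lint step.
python3 -m json.tool node-config.json
```
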
4

Start Processing Jobs

Your node will automatically detect its capabilities and accept compatible jobs: text jobs are routed to Ollama, and image jobs to ComfyUI.

# Start your node
npm run dev
# Node will:
✓ Detect Ollama and ComfyUI
✓ Register capabilities with network
✓ Accept matching jobs
✓ Process text via Ollama
✓ Generate images via ComfyUI
✓ Receive SOL payment
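The capability detection in the startup sequence above comes down to probing the two engines on their local ports. A rough sketch of the same check, useful for debugging when the node reports a missing engine (default ports assumed; `/api/tags` and `/system_stats` are Ollama's and ComfyUI's usual health endpoints):

```shell
# Probe each engine's default port; a 2-second timeout keeps this fast.
probe() {
  if curl -s -o /dev/null -m 2 "$1"; then
    echo "$2: detected"
  else
    echo "$2: not found"
  fi
}
probe http://localhost:11434/api/tags    "Ollama"
probe http://localhost:8188/system_stats "ComfyUI"
```
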

Supported Job Types

The types of AI jobs your node can process

Text Generation

Powered by Ollama

LLM inference jobs including chat, summarization, code generation, and more.

Image Generation

Powered by ComfyUI

Stable Diffusion image generation, including txt2img and img2img workflows.

GPU Requirements

Recommended hardware for running different models

VRAM
8GB

Entry Level

RTX 3060/3070 or similar. Run Llama 2 7B, Mistral 7B, and SD 1.5 image generation.

VRAM
16GB

Mid Range

RTX 4070/4080 or similar. Run Llama 2 13B, CodeLlama, and SDXL image generation.

VRAM
24GB+

High End

RTX 4090 or A100. Run 70B models, SDXL Turbo, and high-resolution image generation.
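To see which tier your card falls into, query its total VRAM. On NVIDIA hardware `nvidia-smi` reports this directly; the fallback message covers machines without the NVIDIA driver installed:

```shell
# Report GPU name and total memory, or a hint if no NVIDIA driver is present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "nvidia-smi not found - install the NVIDIA driver to check VRAM"
fi
```
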

Note: Nodes with higher uptime and faster response times receive priority job assignments.