# OpenGPU Mesh

A decentralized, open-source platform that lets students share idle GPU power securely and request distributed compute time through a token-based scheduling system.
## The Problem

High-performance GPUs are expensive and out of reach for many students and open-source contributors. Meanwhile, thousands of GPUs sit idle on personal systems during off-hours.

This leads to:

- Limited access to AI/ML research compute
- Slower innovation in open source
- Inefficient hardware utilization
## The Solution

OpenGPU Mesh is a decentralized compute-sharing network that enables:

- Contributors to share idle GPU resources
- Students to request temporary compute access
- Token-based fair-usage tracking
- Secure, containerized execution of jobs

It turns unused GPUs into a distributed, open compute grid.
## How It Works

### 1. GPU Agent

Contributors install a lightweight GPU agent that:

- Detects available GPUs (via `nvidia-smi`)
- Registers the node with the central scheduler
- Periodically reports idle capacity
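The agent's core loop can be sketched as below. The scheduler URL, endpoint path, and payload schema are hypothetical assumptions for illustration; the `nvidia-smi` query flags, however, are real options of that tool.

```python
import json
import subprocess
import urllib.request

# Hypothetical scheduler endpoint; the real registration URL and payload
# schema are not specified here.
SCHEDULER_URL = "http://scheduler.example.org/api/nodes"

def parse_gpu_csv(csv_text):
    """Parse nvidia-smi CSV output produced with
    --query-gpu=name,memory.total,memory.free,utilization.gpu
    --format=csv,noheader,nounits."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, total, free, util = [field.strip() for field in line.split(",")]
        gpus.append({
            "name": name,
            "memory_total_mib": int(total),
            "memory_free_mib": int(free),
            "utilization_pct": int(util),
        })
    return gpus

def detect_gpus():
    """Query local GPUs; requires an NVIDIA driver on the host."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,memory.free,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)

def register_node(node_id, gpus):
    """POST this node's idle capacity to the central scheduler."""
    payload = json.dumps({"node_id": node_id, "gpus": gpus}).encode()
    req = urllib.request.Request(
        SCHEDULER_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```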
### 2. Job Submission

Students:

- Submit an ML training job as a Docker container
- Specify the required GPU memory and time
- Receive a token cost estimate
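A minimal sketch of how a cost estimate could be derived from the requested memory and time. The linear formula and the rate of 2 tokens per GB-hour are illustrative assumptions, not the project's actual pricing.

```python
def estimate_token_cost(gpu_memory_gb, hours, rate_per_gb_hour=2):
    """Charge tokens in proportion to GPU memory reserved and wall-clock
    time. Rate and formula are assumed for illustration."""
    return gpu_memory_gb * hours * rate_per_gb_hour

# A job requesting 8 GB of GPU memory for 3 hours:
cost = estimate_token_cost(8, 3)  # 48 tokens at the assumed rate
```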
### 3. Scheduling & Execution

The backend:

- Matches the job to an available GPU node
- Deploys the container securely on that node
- Monitors execution in real time via WebSockets
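The matching step can be sketched as a first-fit search over registered nodes. This is a simplified illustration; a production scheduler might also weigh current utilization, queue length, or contributor reliability.

```python
def match_job(job, nodes):
    """First-fit matching: return the node_id of the first registered node
    whose free GPU memory covers the job's requirement, or None if no
    node qualifies."""
    for node in nodes:
        if node["memory_free_mib"] >= job["memory_required_mib"]:
            return node["node_id"]
    return None
```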
## Token Economy

- Contributors earn tokens by sharing GPU time
- Users spend tokens when running jobs
- This keeps the ecosystem fair and sustainable
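The earn/spend mechanics above amount to a ledger in which balances can never go negative. A minimal in-memory sketch (the real system would persist balances and tie entries to verified GPU-sharing sessions):

```python
class TokenLedger:
    """Minimal token ledger: contributors earn, users spend, and
    overdrawing is rejected."""

    def __init__(self):
        self.balances = {}

    def earn(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def spend(self, user, amount):
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[user] -= amount
```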
## Tech Stack

- FastAPI (API server)
- Python scheduler engine
- WebSockets (real-time monitoring)
- Docker (secure job isolation)
- Kubernetes (planned, for future scalability)
## Key Features

- Distributed node registry
- GPU monitoring via `nvidia-smi`
- CUDA compatibility checks
- Container sandboxing
- Per-job resource limits
- Time-based job kill switch
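The sandboxing, resource limits, and kill switch can be sketched as a `docker run` invocation plus a timeout. `--gpus`, `--memory`, `--network`, and `--pids-limit` are real `docker run` flags; the chosen values and the helper names are illustrative assumptions.

```python
import subprocess

def build_job_command(image, host_mem="8g", pids_limit=256):
    """Assemble a sandboxed docker run invocation with resource caps."""
    return [
        "docker", "run", "--rm",
        "--gpus", "all",                  # expose GPUs via the NVIDIA runtime
        "--memory", host_mem,             # cap host RAM the job may use
        "--network", "none",              # sandbox: no network access
        "--pids-limit", str(pids_limit),  # bound the process count
        image,
    ]

def run_with_kill_switch(image, time_limit_s):
    """Time-based kill switch: if the job exceeds its allotted time,
    subprocess.run kills the client process on timeout. (A complete
    agent would also `docker stop` the container by name.)"""
    try:
        subprocess.run(build_job_command(image),
                       timeout=time_limit_s, check=True)
        return "completed"
    except subprocess.TimeoutExpired:
        return "killed"
```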