Cocoon Blockchain: Confidential AI Compute on TON

Cocoon is a decentralized AI compute network built on The Open Network (TON), designed to power private, encrypted AI inference at scale. Unveiled by Telegram founder Pavel Durov at Blockchain Life 2025, Cocoon rewards GPU contributors with Toncoin (TON) for processing AI tasks—creating a global marketplace for confidential compute.

Confidential AI for Telegram and Beyond
Telegram will be Cocoon’s first major client, integrating the network to deliver secure AI features like message summarization and smart replies. All data processed through Cocoon remains encrypted—even the nodes performing the computation cannot access the content. This ensures privacy by design, aligning with TON’s high-throughput architecture and Telegram’s billion-user reach.
Cocoon Blockchain | How It Works, Mining Pools, and Participation
- ➖ GPU-Powered Compute: Cocoon distributes AI inference tasks to GPU node providers instead of traditional mining
- ➖ Earn TON: Contributors are rewarded in Toncoin based on compute power and task volume
- ➖ Encrypted by Default: All inference data is processed privately—no node sees the raw input
- ➖ Open Registration: GPU owners and developers can apply now; public beta launches November 2025
- 🧱 Mining Pools, Reimagined: Cocoon enables decentralized “compute collectives” where GPU contributors share TON rewards
- ➖ Specialized pools and aggregators are expected to emerge as the network matures
- 💼 For Developers and GPU Providers:
- ➖ Developers: Submit model architecture (e.g., DeepSeek, Qwen), query volume, and token size to join the beta
- ➖ GPU Providers: Share hardware specs (e.g., H200, VRAM, uptime) to start earning TON
- ➖ Apply directly through Cocoon’s official Telegram channel—applications are open now
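The “compute collective” idea above can be sketched as a simple proportional split of a pool’s TON reward among its GPU contributors. The function name, the contribution units, and the proportional rule are all illustrative assumptions — Cocoon has not published its actual pool mechanics:

```python
# Hypothetical sketch: splitting a pool's TON reward among GPU
# contributors in proportion to compute delivered. The names and the
# proportional rule are assumptions, not Cocoon's actual protocol.

def split_pool_reward(pool_reward_ton, contributions):
    """contributions: dict mapping provider name -> compute units delivered."""
    total = sum(contributions.values())
    if total == 0:
        # No work delivered this epoch: nobody earns anything.
        return {provider: 0.0 for provider in contributions}
    return {
        provider: pool_reward_ton * units / total
        for provider, units in contributions.items()
    }

# A provider delivering 3x the compute earns 3x the share.
shares = split_pool_reward(100.0, {"alice": 300, "bob": 100})
print(shares)  # {'alice': 75.0, 'bob': 25.0}
```

Specialized pools would presumably layer fees, uptime bonuses, or slashing on top of a base rule like this, but the proportional split is the standard starting point for mining-pool economics.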
GPU Requirements and Participation in Cocoon Network
- ➖ Cocoon leverages high-performance GPUs for decentralized AI inference, not traditional crypto mining
- ➖ Focus is on private, confidential compute for resource-intensive AI models like DeepSeek and Qwen
- ➖ Recommended GPU Types:
- ➖ Next-Gen Enterprise GPUs: NVIDIA A100, H100, AMD MI-series—ideal for infrastructure providers
- ➖ AI-Optimized GPUs: NVIDIA RTX 3090, 4090, and similar high-memory consumer/workstation cards
- ➖ Large VRAM GPUs: 16GB+ memory preferred for efficient model inference and high utilization
- ➖ Participation for Home Users:
- ➖ High-end consumer GPUs (RTX 3080, 3090, 4080, 4090) are viable for earning TON
- ➖ Institutional contributors are preparing large-scale deployments for maximum uptime and throughput
- ➖ No official minimum GPU list yet, but recent, AI-optimized, high-memory cards are strongly recommended
TON Rewards — Earning Through Confidential AI Compute
- ➖ Cocoon rewards GPU contributors with Toncoin (TON) for processing encrypted AI inference tasks
- ➖ Earnings are based on compute throughput, model complexity, and real-time demand from AI applications
- ➖ TON is paid out directly to node providers, creating a decentralized income stream for global GPU owners
- ➖ Developers pay TON to access compute, forming a circular economy between AI builders and infrastructure providers
- ➖ Future use cases include staking, lending, and derivatives tied to compute performance
- ➖ All rewards are secured on-chain via TON, with transparent distribution logic and encrypted task routing
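The earnings factors listed above (throughput, model complexity, real-time demand) can be combined in a back-of-the-envelope estimate. The formula, the base rate, and the multipliers below are purely illustrative assumptions; Cocoon has not published a pricing model:

```python
# Hypothetical payout estimate combining the three factors the text
# names: compute throughput, model complexity, and demand. The base
# rate and multiplicative form are illustrative assumptions only.

def estimate_payout(tokens_processed, complexity_multiplier,
                    demand_multiplier, base_rate_ton_per_mtoken=0.01):
    """Estimate TON earned for one inference workload (illustrative)."""
    return ((tokens_processed / 1_000_000) * base_rate_ton_per_mtoken
            * complexity_multiplier * demand_multiplier)

# e.g. 2M tokens through a large model (3x complexity) at peak demand (1.5x)
print(estimate_payout(2_000_000, complexity_multiplier=3.0,
                      demand_multiplier=1.5))  # 0.09 TON
```

A multiplicative model like this captures the circular economy described above: developers bid TON for scarce compute, and higher demand or heavier models raise the effective rate paid to node providers.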
Summary
Cocoon transforms idle GPU power into a decentralized backbone for private AI. With TON’s scalability and Telegram’s global footprint, it’s positioned to become the default infrastructure for confidential AI compute—open, encrypted, and borderless.
