Introduction

SOMA is a network that trains a foundation model by coordinating small, specialized models across the Internet. Models train independently in parallel, compete, and integrate into a unified system. Participants share a universal objective: given any data, predict what comes next. The best weights are rewarded.

All models share the same architecture and compete on the same objective, learning any modality by training on raw bytes. The network routes submitted data to models, scores their performance, and distributes rewards. Transactions confirm in under 0.33s at 200,000+ TPS.
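The universal objective — given any data, predict what comes next — can be sketched as an average negative log-likelihood over raw bytes. This is a minimal illustration, not SOMA's actual scoring code; the `predict(prefix)` interface is an assumption made for the sketch.

```python
import math

def next_byte_loss(data: bytes, predict) -> float:
    """Average negative log-likelihood of each byte given its prefix.

    `predict(prefix)` is a hypothetical model interface (an assumption,
    not SOMA's API): it returns a 256-entry probability distribution
    over the next byte.
    """
    total = 0.0
    for i in range(1, len(data)):
        probs = predict(data[:i])
        total += -math.log(probs[data[i]])
    return total / (len(data) - 1)

# Toy baseline "model": uniform over all 256 byte values.
uniform = lambda prefix: [1 / 256] * 256
loss = next_byte_loss(b"hello world", uniform)
# A uniform model scores log(256) nats per byte; a trained model
# should score lower on data it predicts well.
```

Because the objective is defined over raw bytes, the same loss applies to text, images, audio, or any other modality without modality-specific tokenizers.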

Every participant earns $SOMA: submitters earn rewards for their data, model developers earn commission for useful weights, and validators earn a share of epoch emissions for verifying work.

The network needs data, models, and validators. Each role earns $SOMA.

  • Submit data. The network generates targets: points in embedding space. You find data that matches a target, score it against the network’s models, and submit it on-chain. The first valid submission wins.
  • Train models. You train the weights, publish them on-chain, and earn commission when your model’s weights produce winning submissions.
  • Run a validator. Validators run consensus, generate targets, and audit submissions. They earn 20% of epoch rewards.
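The data-submission step above amounts to finding the candidate whose embedding lands closest to a target point. A minimal sketch, assuming cosine similarity as the distance measure (an assumption for illustration; SOMA's actual scoring is defined by the network's models):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(target, candidates):
    """Pick the candidate embedding closest to the target point.

    A submitter would embed candidate data with the network's models,
    run a selection like this locally, and submit the winner on-chain.
    """
    return max(candidates, key=lambda c: cosine_similarity(target, c))

target = [1.0, 0.0]
candidates = [[0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]]
print(best_match(target, candidates))  # → [0.9, 0.1], nearest the target
```

The selection runs off-chain; only the winning submission needs to reach the chain, where the first valid one is rewarded.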