How SOMA Works

SOMA trains a foundation model by coordinating many independent participants over the Internet. Rather than synchronizing high-bandwidth gradients across machines, SOMA synchronizes on a competitive objective.

Participants independently train small, specialized models and compete. To integrate them, the network only needs to verify who won.

Every model on SOMA uses the same architecture. Given a sequence of bytes, each model predicts the next one. Lower loss means better understanding. What differs between models is the trained weights.
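
The shared objective can be sketched as byte-level cross-entropy: average negative log-likelihood of each next byte. The `probs` interface and the uniform toy model below are illustrative assumptions, not SOMA's actual architecture.

```python
import math

def next_byte_loss(probs, data: bytes) -> float:
    """Average negative log-likelihood of each next byte.

    probs(context) -> list of 256 probabilities (hypothetical model interface).
    Lower is better: the model assigns higher probability to what comes next.
    """
    total = 0.0
    for i in range(1, len(data)):
        p = probs(data[:i])[data[i]]
        total += -math.log(p)
    return total / (len(data) - 1)

# Toy "model" that knows nothing: uniform over all 256 byte values.
uniform = lambda ctx: [1 / 256] * 256
loss = next_byte_loss(uniform, b"soma")  # == ln(256), the no-knowledge baseline
```

Any model that has actually learned structure in the data scores below the `ln(256)` baseline.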

Each model:

  • Registers an embedding representing its specialization
  • Stakes $SOMA to compete; more stake means more assigned targets and more chances to earn
  • Publishes weights on-chain via a commit-reveal protocol
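
The commit-reveal step can be sketched with a plain hash commitment (the exact serialization and hash SOMA uses are assumptions here):

```python
import hashlib
import secrets

def commit(weights_bytes: bytes, salt: bytes) -> str:
    """Commit phase: publish only a hash of the weights plus a secret salt."""
    return hashlib.sha256(weights_bytes + salt).hexdigest()

def verify_reveal(commitment: str, weights_bytes: bytes, salt: bytes) -> bool:
    """Reveal phase: anyone can check the revealed weights match the commitment."""
    return hashlib.sha256(weights_bytes + salt).hexdigest() == commitment

salt = secrets.token_bytes(32)
weights = b"...serialized model weights..."
c = commit(weights, salt)
assert verify_reveal(c, weights, salt)          # honest reveal passes
assert not verify_reveal(c, b"tampered", salt)  # altered weights are caught
```

Committing first and revealing later means no one can copy a competitor's weights before the epoch boundary, yet everyone can verify the reveal afterward.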

A model becomes eligible for competition in the epoch after revealing its weights openly. After that, it’s on the model to update its weights to stay competitive.

Models earn when their weights achieve the lowest loss.

Targets are how the network benchmarks itself. Each epoch (1 day), the network generates targets as random points in embedding space. Each target represents a domain: text, images, audio, code, or any other data expressible as bytes.

The network assigns 3 models to each target via stake-weighted KNN. Models that are closer to a target in embedding space and have more stake get priority. How well they perform reveals where the network is strong and where it needs to improve.
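
Stake-weighted KNN can be sketched as follows. The weighting rule (dividing distance by stake) is one plausible choice, not SOMA's published formula:

```python
import math

def assign_models(target, models, k=3):
    """Stake-weighted KNN (illustrative): rank by embedding distance scaled
    down by stake, so closer and better-staked models win assignment.

    models: list of (name, embedding, stake) with stake > 0.
    """
    def score(m):
        name, emb, stake = m
        return math.dist(target, emb) / stake  # more stake -> lower effective distance
    return [name for name, _, _ in sorted(models, key=score)[:k]]

models = [
    ("a", (0.0, 0.0), 10.0),
    ("b", (1.0, 0.0), 1.0),
    ("c", (0.0, 1.0), 5.0),
    ("d", (2.0, 2.0), 1.0),
]
assign_models((0.5, 0.5), models)  # "a" and "c" outrank "b" despite equal distance
```

Here `b` and `c` sit at the same distance from the target, but `c`'s larger stake gives it priority.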

When a target is hit, a new one spawns — the network continuously tests itself on domains it hasn’t mastered.

Data submitters compete to fill targets:

  1. Download the assigned models’ weights
  2. Run them locally against their data to find the lowest-loss model and compute the data embedding
  3. Submit the result on-chain with a bond proportional to data size (bond = submission_bond_per_byte * data_size)

The first submission whose data embedding falls within a threshold distance of the target wins. Among the assigned models, the one with the lowest loss on that data is the winning model. Rewards split 50/50 between the submitter and the winning model.
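
The bond, win condition, and split above reduce to a few lines. The bond formula is from the text; the Euclidean distance check is an assumption about how "within a threshold" is measured:

```python
import math

def submission_bond(data_size: int, bond_per_byte: float) -> float:
    """Bond formula from the text: bond = submission_bond_per_byte * data_size."""
    return bond_per_byte * data_size

def hits_target(data_embedding, target, threshold: float) -> bool:
    """A submission wins if its data embedding lands within `threshold` of the target."""
    return math.dist(data_embedding, target) <= threshold

def settle(reward: float) -> tuple[float, float]:
    """50/50 split between the submitter and the winning model."""
    return reward / 2, reward / 2

bond = submission_bond(1_000_000, 0.0001)   # 1 MB at a hypothetical per-byte rate
won = hits_target((0.1, 0.2), (0.0, 0.0), threshold=0.5)
submitter_share, model_share = settle(10.0)
```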

Before rewards can be claimed, a 1-epoch audit window follows each submission. A random validator audits the result. If validators holding 2/3 of stake vote against a submission, the bond is slashed. If the submission is honest, the bond is returned, and rewards are secured.
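
The audit outcome is a stake-weighted vote. A minimal sketch, assuming the 2/3 threshold is measured against total validator stake:

```python
def audit_outcome(against_stakes: dict, total_stake: float) -> str:
    """Slash the bond if validators holding at least 2/3 of total stake
    vote against the submission; otherwise return the bond.

    against_stakes: validator name -> stake of validators voting to reject.
    """
    against = sum(against_stakes.values())
    # Integer-free comparison: against / total >= 2/3  <=>  3 * against >= 2 * total
    return "slash" if 3 * against >= 2 * total_stake else "return_bond"

audit_outcome({"v1": 40.0, "v2": 30.0}, total_stake=100.0)  # 70% against -> slash
audit_outcome({"v1": 40.0}, total_stake=100.0)              # 40% against -> bond returned
```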

SOMA is built on Mysticeti, adapted from Sui’s consensus layer. Transactions confirm in under 0.33s at 200,000+ TPS. Everything else from Sui (the VM, execution environment, smart contracts) is removed. What remains is a consensus layer optimized for the lightweight but frequent operations SOMA requires: submissions, verifications, and weight updates, thousands of them per epoch.

$SOMA is the network’s unit of account. Models stake to compete. Submitters post bonds. Validators stake to run consensus.

New tokens are released each epoch following a linear curve toward a maximum supply of 10 million $SOMA. Each epoch, emissions and fees combine into a rewards pool:

  • 20% to validators, proportional to stake
  • 80% to target winners, split evenly between submitter and model
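
The per-epoch distribution above is straightforward arithmetic:

```python
def split_pool(emissions: float, fees: float) -> tuple[float, float, float]:
    """Each epoch: emissions + fees form the rewards pool.
    20% goes to validators (pro-rata by stake); 80% goes to target winners,
    and each winning target's share splits evenly between submitter and model.
    Returns (validator_pool, submitter_pool, model_pool)."""
    pool = emissions + fees
    validator_pool = 0.20 * pool
    winner_pool = 0.80 * pool
    return validator_pool, winner_pool / 2, winner_pool / 2

split_pool(800.0, 200.0)  # pool of 1000: 200 to validators, 400 + 400 to winners
```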

Transaction fees auto-adjust each epoch to target an annual burn of 5% of circulating supply.
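
One way such an auto-adjustment could work is a simple proportional controller; this is a sketch under assumptions (365 epochs per year, fee scaled by the ratio of target burn to actual burn), not SOMA's published rule:

```python
def adjust_fee(current_fee: float, burned_last_epoch: float,
               circulating: float, epochs_per_year: int = 365) -> float:
    """Rescale the fee so next epoch's burn tracks the per-epoch share
    of a 5% annual burn of circulating supply."""
    target_burn = 0.05 * circulating / epochs_per_year
    if burned_last_epoch <= 0:
        return current_fee  # no burn signal this epoch; leave the fee unchanged
    return current_fee * (target_burn / burned_last_epoch)

# Target per-epoch burn is 50; last epoch burned only 25, so the fee doubles.
adjust_fee(current_fee=1.0, burned_last_epoch=25.0, circulating=365_000.0)
```

Burning under target raises the fee, burning over target lowers it, so the realized burn oscillates toward the 5% annual goal.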