The Submission Flow

This page walks through how a data submission moves from preparation to reward.

A submitter finds an open target and downloads model weights for its assigned models using get_model_manifests(target). Each target has a set of models assigned via stake-weighted KNN — the submitter must use one of them.
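The preparation step can be sketched as follows. This is a minimal illustration, not the actual SDK: `get_model_manifests(target)` is the call named above, but the manifest field names (`model_id`, `weights_url`) are assumptions.

```python
# Illustrative sketch: turn a target's model manifests into a download plan.
# The manifest fields shown here are assumptions, not a documented schema.

def plan_downloads(manifests):
    """List (model_id, weights_url) pairs for the target's assigned models."""
    return [(m["model_id"], m["weights_url"]) for m in manifests]

# Stubbed data standing in for a real get_model_manifests(target) call.
manifests = [
    {"model_id": "model-a", "weights_url": "https://example.com/a.bin"},
    {"model_id": "model-b", "weights_url": "https://example.com/b.bin"},
]
plan = plan_downloads(manifests)
```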

The submitter runs the scoring service locally on their data against all assigned models. The model with the lowest loss (cross-entropy + SIGReg) wins. The submitter gets back: the winning model’s ID, the embedding it produced, and the distance score to the target.

All computation happens locally. The data never touches the network.
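The local scoring loop above can be sketched like this. The per-model `loss` (cross-entropy + SIGReg) and `embed` callables are stand-ins, and Euclidean distance to the target embedding is an assumed metric.

```python
import math

# Sketch of local model selection, assuming each assigned model exposes a
# loss(data) and embed(data) callable. The distance metric is assumed Euclidean.

def score_locally(data, models, target_embedding):
    """Pick the assigned model with the lowest loss; return (id, embedding, distance)."""
    best = min(models, key=lambda m: m["loss"](data))
    embedding = best["embed"](data)
    distance = math.sqrt(
        sum((a - b) ** 2 for a, b in zip(embedding, target_embedding))
    )
    return best["model_id"], embedding, distance
```

Only the winning model's ID, its embedding, and the distance leave the submitter's machine; the data itself stays local.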

The submitter posts a single on-chain transaction containing:

  1. Target ID — which target they’re competing for

  2. Data commitment — a hash of the data

  3. Data URL, checksum, and size — so the data can be retrieved and verified

  4. Model ID — the winning model (must be from the target’s assigned set)

  5. Embedding and distance score — the model’s representation of the data and its distance to the target

  6. Bond — a deposit proportional to data size, returned if honest, forfeited if fraudulent
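The six fields above can be sketched as a single transaction payload. The field names and the bond-per-byte rate are illustrative assumptions; here the commitment and the checksum both use SHA-256 for simplicity.

```python
import hashlib
from dataclasses import dataclass

BOND_PER_BYTE = 10  # assumed rate: the bond scales linearly with data size

@dataclass
class Submission:
    target_id: str
    data_commitment: str   # hash of the data
    data_url: str
    checksum: str          # lets anyone verify the retrieved data
    size: int
    model_id: str          # must be from the target's assigned set
    embedding: list
    distance: float
    bond: int              # returned if honest, forfeited if fraudulent

def build_submission(target_id, data: bytes, data_url, model_id, embedding, distance):
    digest = hashlib.sha256(data).hexdigest()
    return Submission(
        target_id=target_id,
        data_commitment=digest,
        data_url=data_url,
        checksum=digest,
        size=len(data),
        model_id=model_id,
        embedding=embedding,
        distance=distance,
        bond=len(data) * BOND_PER_BYTE,
    )
```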

At the end of each epoch, the submission closest to each target wins.
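Epoch settlement reduces to a minimum over distances per target. A sketch, with submissions modeled as `(target_id, submitter, distance)` tuples:

```python
# Illustrative epoch settlement: the submission closest to each target wins.

def epoch_winners(submissions):
    """Map each target_id to its (submitter, distance) winner."""
    winners = {}
    for target_id, submitter, distance in submissions:
        current = winners.get(target_id)
        if current is None or distance < current[1]:
            winners[target_id] = (submitter, distance)
    return winners
```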

Rewards come from the target’s reward pool and network emissions:

  • 50% to the submitter
  • 50% to the winning model

Separately, 20% of total network emissions goes to validators.
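The split above is simple arithmetic. In this sketch, how the target's pool is funded from emissions is left as a parameter; the 50/50 pool split and the 20% validator share come from the text.

```python
# Sketch of the reward split: the winning submission's pool is divided
# evenly, and validators take 20% of total emissions separately.

def split_rewards(pool: float, emissions: float):
    submitter = pool * 0.5        # 50% to the submitter
    winning_model = pool * 0.5    # 50% to the winning model
    validators = emissions * 0.2  # 20% of total emissions, paid separately
    return submitter, winning_model, validators
```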

During the following epoch, anyone can challenge the winning submission by posting their own bond. The challenger downloads the data and re-runs the computation.

  • Submission was honest: the challenger loses their bond.
  • Fraud detected: the submitter’s bond is forfeited, and the next-closest submission is audited.
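The two challenge outcomes can be sketched as a small resolution function. The recomputation itself is stubbed as a boolean, and the bond bookkeeping is illustrative.

```python
# Sketch of challenge resolution. recomputation_matches stands in for the
# challenger re-running the scoring and comparing against the posted result.

def resolve_challenge(submitter_bond, challenger_bond, recomputation_matches):
    """Return (submitter_delta, challenger_delta, audit_next_closest)."""
    if recomputation_matches:
        # Honest submission: the challenger forfeits their bond.
        return 0, -challenger_bond, False
    # Fraud detected: the submitter forfeits their bond, and the
    # next-closest submission is audited.
    return -submitter_bond, 0, True
```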

After the challenge window closes, the winner claims their rewards.