SCP Economics and Governance

Version: v1.0 (Draft)
Status: Draft
Authoritative: Yes

Purpose

This document defines the token economics, task pricing, reward distribution, node architecture, staking, penalty, and governance rules of SCP.

Canonical protocol terms used here follow SCP Core Spec.

It answers:

  1. what the SYM token is and how it circulates
  2. how task execution is priced and paid for
  3. how data contributors and infrastructure operators are rewarded
  4. how nodes are structured, admitted, and governed
  5. how penalties and slashing protect protocol integrity
  6. how the economic model achieves long-term balance

SYM Token Model

SYM is the native token of the Symphony protocol. It serves as the unit of task pricing, reward distribution, staking collateral, and governance participation.

Token Supply

The protocol uses a hybrid supply model:

  1. Initial supply: a fixed genesis supply is minted at protocol launch. This constitutes the maximum non-inflationary supply.
  2. Early inflation: during the bootstrap phase (defined by governance, expected to last 3-5 years), the protocol mints additional SYM at a declining annual rate to fund data quality rewards and node operation rewards. The inflation schedule is:
    • Year 1: up to 8% of initial supply
    • Year 2: up to 5% of initial supply
    • Year 3: up to 3% of initial supply
    • Year 4: up to 1.5% of initial supply
    • Year 5+: 0% (inflation ends)
  3. Long-term equilibrium: after the bootstrap phase, all rewards are funded exclusively by task fee redistribution. No new tokens are minted.
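The bootstrap schedule above implies a hard cap on total supply. A minimal sketch, assuming an illustrative genesis supply (`GENESIS_SUPPLY` is a placeholder; the actual figure is published at launch, and the rates come from the schedule):

```python
# Illustrative check of the hybrid supply model. The annual rates come from
# the bootstrap schedule above; GENESIS_SUPPLY is a placeholder value.
GENESIS_SUPPLY = 1_000_000_000

# Maximum annual inflation as a fraction of initial supply; year 5+ mints 0.
INFLATION_SCHEDULE = {1: 0.08, 2: 0.05, 3: 0.03, 4: 0.015}

def max_total_supply_after(year: int) -> float:
    """Upper bound on circulating supply after `year` full bootstrap years."""
    minted_fraction = sum(r for y, r in INFLATION_SCHEDULE.items() if y <= year)
    return GENESIS_SUPPLY * (1 + minted_fraction)

# The cap is permanent once inflation ends:
# max_total_supply_after(10) == max_total_supply_after(4)  (at most +17.5%)
```

Note that because each year's mint is a fraction of the *initial* supply rather than of the running total, the schedule is additive, not compounding.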

Token Allocation

The genesis supply is allocated to:

  1. Protocol Treasury: funds protocol development, operations, and early-stage incentives
  2. Foundation Reserve: funds the operation of the 3 master nodes and ecosystem development
  3. Ecosystem Incentives: data quality rewards pool, node bootstrap rewards
  4. Community and Partners: enterprise onboarding incentives, developer grants

Exact allocation percentages are governance-configurable and published at genesis.

Token Utility

SYM is consumed for:

  1. task fees (query, compute, train task execution)
  2. staking collateral (node participation)
  3. governance participation (proposals, voting weight)
  4. challenge bonds (dispute resolution deposits)

SYM is earned through:

  1. data quality rewards (users who contribute high-quality parsed data)
  2. task fee distribution (Vault operators, executors, verifiers)
  3. node operation rewards (during bootstrap phase)
  4. data usage dividends (data contributors whose records are used by tasks)

Smart Contract Custody

All protocol-managed SYM is held by smart contracts on the settlement chain (Aptos):

  1. staking contracts: hold node stakes, enforce slashing
  2. escrow contracts: hold task fee locks, release upon settlement
  3. reward contracts: accumulate and distribute per-epoch rewards
  4. treasury contracts: hold protocol treasury with governance-controlled release

No protocol actor holds SYM on behalf of another actor outside of smart contract custody.

Node Architecture and Staking

Node Types

The protocol recognizes two node types with distinct roles:

Master Nodes (3 fixed)

  1. Operator: Symphony Foundation
  2. Responsibilities: operate the Admission Plane, Settlement Plane, and Governance Service. Master nodes coordinate all protocol traffic and maintain the canonical state machine.
  3. Trust model: the 3 master nodes form a BFT consensus group. Protocol finality requires agreement from at least 2 of 3 master nodes.
  4. Staking requirement: S_master SYM (highest tier, set by governance). The high stake reflects the master nodes' elevated protocol authority and the severity of potential misbehavior.
  5. Reward: master nodes receive the Protocol Treasury share of task fees as operational compensation.

Enterprise Nodes (approved membership)

  1. Operator: approved enterprises that provide internal data to the platform
  2. Responsibilities: operate Vault storage (Data Sovereignty Service) and optionally contribute compute capacity (Execution Plane subsystems). Enterprise nodes do not participate in verification or settlement.
  3. Admission: enterprises apply to join the network. Admission requires governance approval from the master node consensus group. Admission criteria include: legal entity verification, data contribution commitment, infrastructure compliance, and staking capacity.
  4. Staking requirement: S_enterprise SYM (moderate tier, set by governance). Stake scales with the volume of data stored and the compute capacity offered.
  5. Reward: enterprise nodes earn from two sources:
    • Vault Operator reward share from task fees (when their hosted data is used)
    • Executor reward share (if they contribute compute capacity)
  6. Data contribution credits: enterprise nodes accumulate credits proportional to their data contribution. These credits can offset the enterprise's own task fees when using the platform, creating a barter-like mechanism where data provision is exchanged for platform access.

Data Producer Staking (Optional)

  1. Staking is optional: Data Producers may participate without staking, but receive reduced rewards.
  2. Staking requirement: S_producer SYM (lowest tier, set by governance). This is significantly lower than node staking, intended to be accessible to individual users.
  3. Staked Data Producer: receives 100% of calculated data quality rewards and data usage dividends.
  4. Unstaked Data Producer: receives 50% of calculated data quality rewards and data usage dividends. The remaining 50% is returned to the reward pool.
  5. Staking benefit: beyond higher rewards, staked Data Producers gain priority in the reward queue when the per-epoch reward pool is constrained.
  6. Slashing: staked Data Producers are subject to stake slashing for fraudulent or malicious data uploads. Unstaked Data Producers face only blocking and suspension (no financial penalty beyond lost rewards).
  7. Cooldown: staked Data Producers who unstake enter a cooldown period (shorter than node cooldown, e.g., 7 days) during which pending rewards continue to be calculated at the staked rate.
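The staked/unstaked split above can be sketched as a small helper. The function name is illustrative; the 100%/50% multipliers and the return-to-pool rule come from the rules above.

```python
def producer_payout(calculated_reward: float, staked: bool) -> tuple[float, float]:
    """Split a calculated Data Producer reward per the staking rule above.

    Returns (paid_to_producer, returned_to_pool): staked producers receive
    100% of the calculated amount; unstaked producers receive 50%, with the
    remaining 50% returned to the epoch reward pool.
    """
    paid = calculated_reward * (1.0 if staked else 0.5)
    return paid, calculated_reward - paid

# producer_payout(10.0, staked=False) -> (5.0, 5.0)
```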

Staking Rules

All staking is managed by smart contracts:

  1. staking deposit must be completed before node activation
  2. staking amount must meet the minimum for the node type at all times
  3. partial unstaking is allowed only if the remaining stake meets the minimum
  4. full unstaking triggers a cooldown period (set by governance, e.g., 30 days) during which the node cannot participate and the stake cannot be withdrawn
  5. slashing is applied directly to staked SYM. If remaining stake falls below minimum after slashing, the node is suspended until restaked.
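Rules 3 and 5 above reduce to two checks. A minimal sketch; function names are illustrative, and `minimum` stands for the governance-set tier minimum (S_master, S_enterprise, or S_producer):

```python
def can_partial_unstake(current_stake: float, amount: float, minimum: float) -> bool:
    # Rule 3: partial unstaking only if the remaining stake meets the minimum.
    return 0 < amount < current_stake and current_stake - amount >= minimum

def apply_slash(current_stake: float, slash_fraction: float, minimum: float):
    # Rule 5: slashing hits staked SYM directly; if the remainder falls below
    # the minimum, the node is suspended until restaked.
    remaining = current_stake * (1 - slash_fraction)
    suspended = remaining < minimum
    return remaining, suspended
```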

Slashing Conditions

| Severity | Condition | Penalty |
| --- | --- | --- |
| Critical | Private data leakage, fabricated execution results | Up to 100% stake slash + permanent suspension |
| Severe | Unauthorized cross-Vault access, false verification, TEE attestation fraud | 20-50% stake slash + temporary suspension |
| Moderate | Repeated timeout, data unavailability, protocol duty refusal | 5-15% stake slash + reward blocking |
| Minor | Configuration errors, transient failures with self-recovery | Warning + temporary reward reduction |

Slashing is always evidence-based and subject to the challenge lifecycle.

Protocol Actors and Their Economic Roles

The protocol recognizes seven economic roles. These are protocol-level roles and do not map one-to-one onto runtime services; the same node may hold multiple roles.

Actor-to-Node Mapping

| Protocol Actor | Typical Node | Economic Role |
| --- | --- | --- |
| Data Producer | End user (not a node, optional staking) | Earns data quality rewards (100% if staked, 50% if unstaked) and data usage dividends |
| Task Submitter | End user or Enterprise Node | Pays task fees; Enterprise Nodes may offset with data contribution credits |
| Vault Operator | Enterprise Node | Earns Vault Operator share of task fees |
| Executor | Enterprise Node (compute) or Master Node | Earns Executor share of task fees |
| Verifier | Master Node | Earns Verifier share of task fees |
| Governance Actor | Master Node | May earn governance incentives |
| Treasury / Payout Authority | Master Node (smart contract) | Operational compensation only |

Actor Duties

Data Producer:

  1. upload source data into an authorized Vault boundary
  2. comply with quality standards defined by the Data Quality Reward Model
  3. respect quota and ingestion controls
  4. optionally stake S_producer SYM to receive full (100%) data quality rewards; unstaked Data Producers receive 50% rewards

Task Submitter:

  1. submit valid tasks with correct authorization context
  2. lock sufficient SYM budget before task admission
  3. avoid abusive, fraudulent, or replay-conflicting task submission

Vault Operator:

  1. maintain private data boundaries and expose only authorized record material
  2. preserve consistency between canonical records and Commitment references
  3. maintain uptime and data availability commitments

Executor:

  1. perform authorized computation inside Vault or TEE boundary
  2. produce deterministic and auditable execution output
  3. avoid unauthorized access, fabricated output, or duplicate claims

Verifier:

  1. validate execution correctness and integrity
  2. issue replayable VerificationDecision artifacts
  3. accept, reject, or challenge output according to evidence

Governance Actor:

  1. open, review, or adjudicate challenges within governance authority
  2. confirm or reject penalty outcomes based on replayable evidence

Treasury / Payout Authority:

  1. transform finalized reward accounting into payout intent on the settlement chain
  2. preserve payout integrity, reconciliation, and signer isolation

Task Pricing Framework

Every task class except parse requires the Task Submitter to lock a SYM budget before admission. The locked amount is calculated from a deterministic pricing formula.

Parse Pricing

Parse is free for the Data Producer:

  1. no SYM is required from the user to upload and parse data
  2. the computational cost of parsing is absorbed by the protocol (funded from the treasury during bootstrap, and from a small protocol fee on other tasks long-term)
  3. this design removes friction from data onboarding, which is the primary supply-side driver of the ecosystem

Query Pricing

The query task fee is calculated as:

query_fee = base_fee_query
          + (n_vaults × per_vault_fee)
          + (n_records_returned × per_record_fee)
          + (epsilon_consumed × per_epsilon_fee)

Where:

  1. base_fee_query: minimum fee for any query task, covering coordination and verification cost
  2. per_vault_fee: cost per participating Vault, covering Vault operator compensation
  3. per_record_fee: cost per record in the result set, covering data access cost
  4. per_epsilon_fee: cost per unit of differential privacy budget consumed, pricing the finite privacy resource

All fee parameters are set by policy_version and may vary by usage_scope.
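The query formula can be expressed directly. The parameter defaults below are illustrative placeholders; real values are set by policy_version.

```python
def query_fee(n_vaults: int, n_records_returned: int, epsilon_consumed: float,
              base_fee_query: float = 1.0, per_vault_fee: float = 0.5,
              per_record_fee: float = 0.01, per_epsilon_fee: float = 2.0) -> float:
    # Mirrors the pricing formula above term by term.
    return (base_fee_query
            + n_vaults * per_vault_fee
            + n_records_returned * per_record_fee
            + epsilon_consumed * per_epsilon_fee)

# A 2-Vault query returning 100 records with epsilon 0.1 under these
# placeholder parameters: 1.0 + 1.0 + 1.0 + 0.2 ≈ 3.2 SYM
```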

Compute Pricing

The compute task fee is calculated as:

compute_fee = base_fee_compute
            + (compute_units × per_unit_fee)
            + (data_volume_mb × per_mb_fee)
            + (epsilon_consumed × per_epsilon_fee)

Where:

  1. base_fee_compute: minimum fee for any compute task
  2. per_unit_fee: cost per compute unit (abstract CPU-time or gas equivalent)
  3. per_mb_fee: cost per MB of input data accessed
  4. per_epsilon_fee: privacy budget cost, same as query

Train Pricing

The train task fee is the sum of per-round costs:

train_fee = base_fee_train
          + Σ_round (round_compute_fee + round_aggregation_fee)
          + (total_epsilon_consumed × per_epsilon_fee)

Where:

  1. base_fee_train: minimum fee for composite task setup
  2. round_compute_fee: sum of per-Vault compute costs for that round
  3. round_aggregation_fee: cost for the secure aggregation step per round
  4. total_epsilon_consumed: cumulative privacy budget across all rounds
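The per-round summation can be sketched the same way. The round-fee pairs and parameter defaults are illustrative.

```python
def train_fee(rounds: list[tuple[float, float]], total_epsilon_consumed: float,
              base_fee_train: float = 5.0, per_epsilon_fee: float = 2.0) -> float:
    """`rounds` is a list of (round_compute_fee, round_aggregation_fee) pairs."""
    per_round_total = sum(compute + agg for compute, agg in rounds)
    return base_fee_train + per_round_total + total_epsilon_consumed * per_epsilon_fee

# Two rounds at (10 compute + 2 aggregation) each, cumulative epsilon 0.5:
# 5.0 + 24 + 1.0 = 30.0 SYM
```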

Fee Distribution

When a task completes and settles, the locked fee is distributed as:

| Recipient | Share | Basis |
| --- | --- | --- |
| Data Contributors | 30-50% | proportional to records used and privacy budget consumed against their data |
| Vault Operators | 15-25% | proportional to records served and storage commitment |
| Executors | 15-25% | proportional to verified compute work |
| Verifiers | 5-10% | per verification decision |
| Protocol Treasury | 5-10% | fixed protocol fee for sustainability |

Exact percentages are set by policy_version. The sum must equal 100% of the locked fee.

For multi-Vault tasks:

  1. each participating Vault operator receives a share proportional to the records served and the privacy budget consumed for their data subjects
  2. each executor that performed a per-Vault execution slice receives reward proportional to the computational work verified for that slice
  3. the aggregation executor receives a separate reward for the aggregation step, distinct from per-Vault execution rewards
  4. verifiers receive reward for the overall verification, not per-Vault
  5. per-Vault reward shares are deterministic given the same settlement context and policy version
  6. a Vault that was authorized but did not respond (timed out) receives no reward for that task
  7. the sum of all per-Vault shares plus aggregator and verifier shares must equal the total task reward budget, with no unaccounted remainder
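The no-unaccounted-remainder invariant is easiest to satisfy in integer base units. A minimal sketch, assuming basis-point shares and (as an illustrative choice) assigning the integer-division remainder to the Protocol Treasury:

```python
def distribute_fee(locked_fee_units: int, shares_bps: dict[str, int]) -> dict[str, int]:
    """Split a locked fee (smallest token units) by basis-point shares.

    Shares must sum to 10_000 bps. Any integer-division remainder goes to
    the treasury so payouts sum exactly to the locked fee, satisfying the
    no-unaccounted-remainder invariant above.
    """
    assert sum(shares_bps.values()) == 10_000
    payouts = {k: locked_fee_units * bps // 10_000 for k, bps in shares_bps.items()}
    payouts["treasury"] += locked_fee_units - sum(payouts.values())
    return payouts

# Illustrative shares within the ranges in the distribution table:
shares = {"data_contributors": 4_000, "vault_operators": 2_000,
          "executors": 2_000, "verifiers": 1_000, "treasury": 1_000}
```

Working in integer base units (rather than floats) keeps the split deterministic given the same settlement context, as the per-Vault determinism rule requires.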

Fee Lifecycle

The protocol requires:

  1. fee to be locked in escrow smart contract at task admission
  2. locked fee to be released to recipients only after finalized settlement
  3. if task is rejected before execution, the full locked fee to be returned to Task Submitter
  4. if task fails during execution, the locked fee to be partially distributed (Vault operators and executors who performed work receive their share; remainder returned to Task Submitter)
  5. challenged tasks to have their fee distribution suspended until challenge resolution

Task Admission Controls

The protocol requires:

  1. insufficiently funded or unauthorized tasks not to enter the accepted state
  2. tasks whose estimated cost, complexity, or policy class exceeds limits to be rejected, delayed, or split under policy
  3. rate limits and concurrency caps per actor or policy scope
  4. repeated abusive or non-computable task submission to trigger budget forfeiture, blocking, or suspension

Data Quality Reward Model

Users who contribute data through parse tasks may receive SYM rewards if their data meets quality criteria. This is the primary mechanism for bootstrapping the data supply side.

Quality Scoring Criteria

Data quality is evaluated deterministically by the protocol along five dimensions:

  1. Parsability (binary): the uploaded data was successfully parsed into a valid canonical record with no structural errors. This is the minimum threshold — unparseable data receives zero reward.
  2. Completeness (0.0-1.0): ratio of populated optional fields to total optional fields in the resolved semantic schema. Higher completeness means more useful data.
  3. Freshness (0.0-1.0): score based on the age of the data event relative to parse time. Data from within 24 hours scores 1.0, decaying to 0.0 over a configurable window (e.g., 90 days).
  4. Non-duplication (binary): the record is not a duplicate of any existing canonical record in the same Vault, determined by commitment comparison. Duplicates receive zero reward.
  5. Domain Demand (multiplier, 0.5-3.0): a governance-set multiplier per domain reflecting current ecosystem demand. Domains with high query/compute activity have higher multipliers, incentivizing data supply where it is most needed.

Reward Calculation

For each successfully parsed, non-duplicate record:

data_reward = base_reward
            × (1 + completeness_score × completeness_weight + freshness_score × freshness_weight)
            × domain_demand_multiplier
            × staking_multiplier

Where:

  1. base_reward: minimum reward per valid record (set by policy_version)
  2. completeness_weight: weight given to completeness bonus (e.g., 0.3)
  3. freshness_weight: weight given to freshness bonus (e.g., 0.2)
  4. domain_demand_multiplier: governance-set per-domain multiplier
  5. staking_multiplier: 1.0 if the Data Producer has active S_producer stake; 0.5 if unstaked. The unrewarded 50% for unstaked producers is returned to the epoch reward pool.
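The reward formula and the freshness decay can be sketched together. The linear decay is an illustrative reading of "decaying to 0.0 over a configurable window", and the weights are the example policy values from above.

```python
def freshness_score(age_hours: float, window_hours: float = 90 * 24) -> float:
    # 1.0 within 24 hours, then (as an illustrative choice) linear decay
    # to 0.0 over the configured window.
    if age_hours <= 24:
        return 1.0
    if age_hours >= window_hours:
        return 0.0
    return 1.0 - (age_hours - 24) / (window_hours - 24)

def data_reward(completeness: float, freshness: float, domain_multiplier: float,
                staked: bool, base_reward: float = 1.0,
                completeness_weight: float = 0.3, freshness_weight: float = 0.2) -> float:
    # Assumes the record already passed the parsability and non-duplication
    # gates; failing either yields zero reward before this formula applies.
    staking_multiplier = 1.0 if staked else 0.5
    bonus = 1 + completeness * completeness_weight + freshness * freshness_weight
    return base_reward * bonus * domain_multiplier * staking_multiplier
```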

Reward Funding Source

  1. during the bootstrap phase: data quality rewards are funded from the Ecosystem Incentives allocation and protocol inflation
  2. after bootstrap: data quality rewards are funded from the Protocol Treasury share of task fees
  3. if the reward pool for an epoch is exhausted, remaining eligible records queue for the next epoch's reward pool
  4. reward pool size per epoch is set by governance and publicly visible

Anti-Gaming Controls

The protocol requires:

  1. Sybil resistance: data quality rewards are rate-limited per identity (subject to consent-verified Data Producer identity)
  2. diminishing returns: rewards per Data Producer per epoch decrease after a configurable threshold, preventing single-actor dominance
  3. retroactive adjustment: if data is later found to be fraudulent, fabricated, or policy-violating through the challenge process, earned rewards are clawed back through penalty
  4. no reward for bulk automated uploads that lack genuine user-generated content, enforceable through behavioral analysis and governance review
  5. obviously invalid, duplicate, or abusive uploads do not enter protocol processing
  6. low-quality or malicious uploads are rate-limited, deprioritized, or pruned under policy
  7. repeated ingress abuse triggers blocking or suspension from further upload pathways

Data Usage Dividend Model

Beyond the one-time parse reward, Data Contributors receive ongoing dividends when their data is used by tasks.

Dividend Mechanism

  1. when a query, compute, or train task uses records from a Vault, the protocol tracks which Data Contributors' records were accessed
  2. the Data Contributor share of the task fee (30-50%) is distributed proportionally among all contributing Data Producers whose records were included in the task
  3. distribution is proportional to the number of records used per Data Producer, weighted by privacy budget consumed against their records
  4. dividends are accumulated per epoch and distributed at epoch settlement
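One reasonable reading of the proportional rule (records used, weighted by privacy budget consumed) is a records × epsilon weight per producer. The sketch below is illustrative; the exact weighting function is an assumption, not protocol-normative.

```python
def split_dividends(contributor_share: float,
                    usage: dict[str, tuple[int, float]]) -> dict[str, float]:
    """Split the Data Contributor share of a task fee among producers.

    `usage` maps producer id -> (records_used, epsilon_consumed). The
    records * epsilon weighting is one illustrative reading of the
    "proportional, weighted by privacy budget" rule above.
    """
    weights = {p: records * eps for p, (records, eps) in usage.items()}
    total = sum(weights.values())
    if total == 0:
        return {p: 0.0 for p in usage}
    return {p: contributor_share * w / total for p, w in weights.items()}
```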

Privacy-Preserving Dividend Accounting

The protocol requires:

  1. dividend attribution to use Vault-internal accounting — the coordination layer never sees which specific Data Producer's records were used
  2. per-Vault dividend allocation to be calculated inside the Vault boundary and reported as aggregate amounts per Data Producer
  3. the dividend mechanism not to leak information about whose data was accessed

Reward Lifecycle

Reward States

The canonical reward lifecycle is:

  1. pending_input: task is in progress, reward not yet calculable
  2. eligible: task has reached accepted verification and finalized settlement, reward inputs are complete
  3. calculated: reward amount has been determined under the fee distribution formula and policy version
  4. finalized: reward is confirmed and ready for payout
  5. blocked: reward is suspended due to pending challenge, insufficient stake, or active penalty
  6. adjusted: reward has been modified by governance action (e.g., clawback due to retroactive fraud finding)
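The states above imply a transition discipline. The edge set below is an illustrative reading, since the spec does not enumerate transitions: `blocked` resumes to the state it interrupted, and `adjusted` only appends forward.

```python
# Illustrative reward-state machine. The states come from the lifecycle list
# above; the exact edges are an assumption, not spec-enumerated.
ALLOWED_TRANSITIONS = {
    "pending_input": {"eligible"},
    "eligible": {"calculated", "blocked"},
    "calculated": {"finalized", "blocked"},
    "finalized": {"adjusted", "blocked"},
    "blocked": {"eligible", "calculated", "finalized"},  # resume after resolution
    "adjusted": set(),  # adjustments append new records; no in-place moves
}

def transition(state: str, new_state: str) -> str:
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"illegal reward transition: {state} -> {new_state}")
    return new_state
```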

Reward Invariants

The protocol requires:

  1. reward only for finalized work — no reward without accepted verification and finalized settlement
  2. append-only reward accounting — reward records are never deleted, only adjusted forward
  3. reward eligibility depends on: epoch_id, accepted verification, finalized settlement, contributor role, policy_version, penalty state, and stake compliance
  4. no actor receives reward if: the work is not finalized, the actor is blocked or suspended, the actor is a staking actor and not stake-compliant, or a challenge blocks finalization
  5. payout systems remain downstream from protocol accounting — payout failure must not mutate finalized reward records

Penalty Model

Penalty Forms

The protocol may apply three forms of consequence:

  1. block: temporarily prevent reward finalization or payout progression
  2. slash: append negative economic records or forfeit staked SYM
  3. suspend: temporarily or permanently remove actor eligibility or authority

These consequences must be evidence-based, append-only in accounting effect, and governable through the challenge process.

For staking actors (nodes): slash is applied against active stake.

For staked Data Producers: slash is applied against their S_producer stake, in addition to reward clawback.

For unstaked Data Producers and Task Submitters: forfeiture is applied against budget lock, challenge bond, or ingestion bond.

Penalty by Actor

Data Producer:

  1. repeated garbage, duplicate, or misleading uploads may trigger quota restriction, bond forfeiture, or blocking
  2. malicious upload behavior that pollutes protocol-relevant intake may trigger suspension
  3. fraudulent data discovered retroactively triggers clawback of data quality rewards
  4. for staked Data Producers: severe or repeated violations may trigger stake slashing in addition to reward clawback. Slashing severity follows the same graduated scale as node penalties.
  5. for unstaked Data Producers: penalties are limited to reward clawback, blocking, and suspension. No financial stake is at risk, but the producer loses access to future earning opportunities.

Task Submitter:

  1. abusive or fraudulent submission may trigger rejection, blocking, or budget forfeiture
  2. forged authorization or replay abuse may trigger suspension
  3. malicious challenge abuse may trigger challenge bond forfeiture

Vault Operator:

  1. unauthorized disclosure or misuse of private data may trigger severe slashing or suspension
  2. supplying false, inconsistent, or non-committed record material may trigger penalty
  3. refusing required protocol duties after accepted commitment may block reward eligibility
  4. accepted penalties directly slash active stake

Executor:

  1. fabricated execution results may trigger slashing
  2. unauthorized access or policy violation may trigger suspension or expulsion
  3. duplicate claiming or replay abuse may trigger blocking and negative economic adjustment
  4. accepted penalties directly slash active stake

Verifier:

  1. knowingly incorrect approval or rejection may trigger slashing
  2. fabricated evidence or collusion may trigger suspension or expulsion
  3. persistent low-quality or bad-faith verification may block future reward eligibility
  4. accepted penalties directly slash active stake

Governance Actor:

  1. frivolous or malicious challenge activity may trigger penalty
  2. collusive or bad-faith adjudication may trigger slashing and governance disqualification
  3. accepted penalties directly slash active stake

Treasury / Payout Authority:

  1. duplicate payout, withheld authorized payout, or payout tampering may trigger severe penalty
  2. signer misuse or treasury abuse may trigger suspension of payout authority
  3. accepted penalties directly slash active stake

Governance Rules

Challenge and Slashing

The protocol requires:

  1. challenge evidence to be replayable
  2. open challenge to block affected reward finalization
  3. approved penalties to append new economic records rather than rewrite history
  4. already confirmed chain effects to be corrected explicitly, not silently rolled back
  5. staking, reward, and penalty state to remain economically consistent with each actor's current eligibility

Governance Authority

The Governance Service (operated by Master Nodes) has authority over:

  1. policy version transitions at epoch boundaries
  2. enterprise node admission and removal
  3. fee parameter adjustments within the Task Pricing Framework
  4. data quality reward parameter adjustments (base_reward, domain_demand_multiplier)
  5. privacy budget replenishment rates
  6. emergency actions: temporary budget freezes, node suspension, protocol parameter changes

All governance decisions require the quorum and process defined in the active policy version.

Privacy Budget Economics

Privacy budget is both a privacy-protection mechanism and an economic resource. Its consumption must be accounted for within the protocol economics framework.

Budget as Economic Resource

The protocol requires:

  1. privacy budget consumption to be a cost factor in task admission alongside monetary budget lock
  2. tasks that consume privacy budget to carry an estimated epsilon_cost at admission, verified against actual consumption at settlement
  3. the per_epsilon_fee in the Task Pricing Framework to price this finite resource, creating economic incentive for budget-efficient task design
  4. excessive or wasteful privacy budget consumption to be penalizable through the standard challenge and penalty process
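Settlement-time verification of declared versus actual consumption (requirement 2 above) reduces to a bound check. The tolerance parameter is an illustrative assumption, not a protocol constant.

```python
def epsilon_within_declared(declared_epsilon: float, consumed_epsilon: float,
                            tolerance: float = 0.0) -> bool:
    # Actual consumption must not exceed the admission-time declaration
    # (optionally within an assumed policy-set tolerance).
    return consumed_epsilon <= declared_epsilon * (1 + tolerance)
```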

Budget Replenishment and Governance

The protocol requires:

  1. privacy budget replenishment to be a governed epoch-boundary event, not an automatic reset
  2. replenishment rates to be set by governance policy per usage_scope and privacy_class
  3. governance actors to have the authority to adjust replenishment rates or impose temporary budget freezes in response to privacy incidents

Budget Violation Penalties

For executors:

  1. execution that consumes more privacy budget than declared at admission may trigger penalty
  2. execution that circumvents budget enforcement through data manipulation or query restructuring may trigger slashing

For task submitters:

  1. repeated submission of tasks designed to exhaust privacy budgets without legitimate business purpose may trigger budget lock forfeiture and submission suspension

Blockchain Obligation Boundary

SCP defines the obligation to prepare payout instructions from finalized reward records.

At the protocol level:

  1. Aptos is the current primary settlement chain
  2. SYM is the canonical token on this chain
  3. all staking, escrow, reward, and treasury contracts are deployed on this chain
  4. PayoutInstructionSet must preserve deterministic payout identity
  5. payout failure must not mutate finalized reward records

Implementation details for submission, retry, signer management, and reconciliation belong to SCS.

Economic Balance Analysis

The protocol must maintain sustainable coin flow: total SYM distributed as rewards must not exceed total SYM collected from task fees plus authorized inflation.

Coin Flow Model

Sources of SYM entering circulation:

  1. Task fee redistribution: the primary long-term source. When Task Submitters pay for query/compute/train tasks, the fee is redistributed to Data Contributors, Vault Operators, Executors, and Verifiers.
  2. Inflation (bootstrap only): during years 1-4, newly minted SYM funds data quality rewards and node operation rewards. This source decreases annually under the inflation schedule and reaches zero from year 5.
  3. Unstaking: when nodes exit, their previously locked stake returns to circulation. This is not new supply, but it increases liquid supply.

Sinks of SYM leaving circulation:

  1. Task fee payment: Task Submitters spend SYM to execute tasks. This SYM is locked in escrow until settlement.
  2. Staking locks: nodes must lock SYM to participate. This removes SYM from liquid circulation.
  3. Slashing and burns: penalized stake is either burned (permanently reducing supply) or sent to treasury.
  4. Protocol fee accumulation: the Protocol Treasury share of task fees accumulates in the treasury contract. This SYM is semi-liquid (governance-controlled release).

Equilibrium Condition

For the protocol to be economically sustainable after the bootstrap phase:

Per-epoch task fee revenue ≥ Per-epoch reward obligations

Specifically:

Σ(task_fees) ≥ Σ(data_quality_rewards)
             + Σ(data_usage_dividends)
             + Σ(vault_operator_rewards)
             + Σ(executor_rewards)
             + Σ(verifier_rewards)
             + Σ(governance_incentives)
             + protocol_operating_cost

Since task fees fund all post-bootstrap rewards, the fee distribution model satisfies this condition by construction (fees in = rewards out), provided the Protocol Treasury share is sufficient to cover data quality rewards and operating costs. The key risk is insufficient task demand:

  1. if task volume is too low, per-record data quality rewards become small, reducing incentive to contribute data
  2. if data supply is too low, task quality suffers, reducing incentive for Task Submitters
  3. this creates a potential cold-start death spiral
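The equilibrium condition itself is a straightforward per-epoch balance check that governance can monitor. All names below are illustrative epoch aggregates in SYM.

```python
def epoch_balanced(task_fees: float, reward_obligations: dict[str, float],
                   operating_cost: float) -> bool:
    # True when fee revenue covers every reward obligation plus operating
    # cost, i.e. the post-bootstrap equilibrium condition holds for the epoch.
    return task_fees >= sum(reward_obligations.values()) + operating_cost
```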

Cold Start Strategy

The bootstrap inflation and treasury allocation address the cold-start problem:

  1. Phase 1 (Year 1-2): protocol subsidizes both sides. Data quality rewards come from inflation. Task fees are kept low or subsidized from treasury to attract Task Submitters.
  2. Phase 2 (Year 2-4): as data supply and task demand grow, task fee revenue increasingly covers rewards. Inflation decreases.
  3. Phase 3 (Year 5+): task fee revenue fully covers all rewards. Protocol is self-sustaining.

Governance Monitoring

The protocol requires governance to monitor:

  1. per-epoch coin flow balance (task fees collected vs rewards distributed)
  2. reward pool utilization rate (percentage of available rewards actually distributed)
  3. data quality reward pool depletion rate (how quickly the per-epoch pool is consumed)
  4. staking ratio (percentage of total SYM supply locked in staking)
  5. task demand growth rate vs data supply growth rate

If imbalances are detected, governance may adjust:

  1. fee parameters in the Task Pricing Framework
  2. data quality reward parameters (base_reward, domain_demand_multiplier)
  3. fee distribution percentages
  4. inflation rate (during bootstrap phase only)

Trust and Security Assumptions

The protocol assumes:

  1. private plaintext remains inside Vault or authorized TEE boundaries
  2. append-only evidence and accounting are mandatory
  3. signer compromise and payout tampering are treated as governable security incidents
  4. reconciliation drift must be detectable
  5. TEE attestation integrity depends on hardware vendor trust, which is a protocol-external assumption monitored by governance
  6. privacy budget accounting is trustworthy only if Vault operators and executors faithfully report consumption, enforceable through verification and challenge
  7. smart contract correctness on Aptos is a protocol-external assumption, mitigated by formal verification and audit
  8. the 3 master node BFT model tolerates at most 1 Byzantine node; compromise of 2 or more master nodes would require emergency governance intervention