SCP Economics and Governance
Version: v1.0 (Draft)
Status: Draft
Authoritative: Yes
Purpose
This document defines the token economics, task pricing, reward distribution, node architecture, staking, penalty, and governance rules of SCP.
Canonical protocol terms used here follow the SCP Core Spec.
It answers:
- what the SYM token is and how it circulates
- how task execution is priced and paid for
- how data contributors and infrastructure operators are rewarded
- how nodes are structured, admitted, and governed
- how penalties and slashing protect protocol integrity
- how the economic model achieves long-term balance
SYM Token Model
SYM is the native token of the Symphony protocol. It serves as the unit of task pricing, reward distribution, staking collateral, and governance participation.
Token Supply
The protocol uses a hybrid supply model:
- Initial supply: a fixed genesis supply is minted at protocol launch. This constitutes the maximum non-inflationary supply.
- Early inflation: during the bootstrap phase (defined by governance, expected to last 3-5 years), the protocol mints additional SYM at a declining annual rate to fund data quality rewards and node operation rewards. The inflation schedule is:
- Year 1: up to 8% of initial supply
- Year 2: up to 5% of initial supply
- Year 3: up to 3% of initial supply
- Year 4: up to 1.5% of initial supply
- Year 5+: 0% (inflation ends)
- Long-term equilibrium: after the bootstrap phase, all rewards are funded exclusively by task fee redistribution. No new tokens are minted.
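The declining schedule above can be sketched as follows. This is an illustrative sketch only: the function name is hypothetical, the genesis supply is a placeholder, and the actual caps are governance-defined; basis points are used so the arithmetic is integer-exact.

```python
# Illustrative sketch of the bootstrap inflation caps described above,
# expressed in basis points of genesis supply to avoid float rounding.
INFLATION_CAPS_BP = {1: 800, 2: 500, 3: 300, 4: 150}  # year -> bp (8%, 5%, 3%, 1.5%)

def max_minted(year: int, genesis_supply: int) -> int:
    """Maximum SYM mintable in a given protocol year (0 from year 5 onward)."""
    return genesis_supply * INFLATION_CAPS_BP.get(year, 0) // 10_000

# Example: total mintable over the whole bootstrap phase for a
# hypothetical 1B genesis supply is 17.5% of genesis.
genesis = 1_000_000_000
total_cap = sum(max_minted(y, genesis) for y in range(1, 6))
```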
Token Allocation
The genesis supply is allocated to:
- Protocol Treasury: funds protocol development, operations, and early-stage incentives
- Foundation Reserve: funds the operation of the 3 master nodes and ecosystem development
- Ecosystem Incentives: data quality rewards pool, node bootstrap rewards
- Community and Partners: enterprise onboarding incentives, developer grants
Exact allocation percentages are governance-configurable and published at genesis.
Token Utility
SYM is consumed for:
- task fees (query, compute, train task execution)
- staking collateral (node participation)
- governance participation (proposals, voting weight)
- challenge bonds (dispute resolution deposits)
SYM is earned through:
- data quality rewards (users who contribute high-quality parsed data)
- task fee distribution (Vault operators, executors, verifiers)
- node operation rewards (during bootstrap phase)
- data usage dividends (data contributors whose records are used by tasks)
Smart Contract Custody
All protocol-managed SYM is held by smart contracts on the settlement chain (Aptos):
- staking contracts: hold node stakes, enforce slashing
- escrow contracts: hold task fee locks, release upon settlement
- reward contracts: accumulate and distribute per-epoch rewards
- treasury contracts: hold protocol treasury with governance-controlled release
No protocol actor holds SYM on behalf of another actor outside of smart contract custody.
Node Architecture and Staking
Node Types
The protocol recognizes two node types with distinct roles:
Master Nodes (3 fixed)
- Operator: Symphony Foundation
- Responsibilities: operate the Admission Plane, Settlement Plane, and Governance Service. Master nodes coordinate all protocol traffic and maintain the canonical state machine.
- Trust model: the 3 master nodes form a BFT consensus group. Protocol finality requires agreement from at least 2 of 3 master nodes.
- Staking requirement: S_master SYM (highest tier, set by governance). The high stake reflects the master nodes' elevated protocol authority and the severity of potential misbehavior.
- Reward: master nodes receive the Protocol Treasury share of task fees as operational compensation.
Enterprise Nodes (approved membership)
- Operator: approved enterprises that provide internal data to the platform
- Responsibilities: operate Vault storage (Data Sovereignty Service) and optionally contribute compute capacity (Execution Plane subsystems). Enterprise nodes do not participate in verification or settlement.
- Admission: enterprises apply to join the network. Admission requires governance approval from the master node consensus group. Admission criteria include: legal entity verification, data contribution commitment, infrastructure compliance, and staking capacity.
- Staking requirement: S_enterprise SYM (moderate tier, set by governance). Stake scales with the volume of data stored and the compute capacity offered.
- Reward: enterprise nodes earn from two sources:
- Vault Operator reward share from task fees (when their hosted data is used)
- Executor reward share (if they contribute compute capacity)
- Data contribution credits: enterprise nodes accumulate credits proportional to their data contribution. These credits can offset the enterprise's own task fees when using the platform, creating a barter-like mechanism where data provision is exchanged for platform access.
Data Producer Staking (Optional)
- Staking is optional: Data Producers may participate without staking, but receive reduced rewards.
- Staking requirement: S_producer SYM (lowest tier, set by governance). This is significantly lower than node staking, intended to be accessible to individual users.
- Staked Data Producer: receives 100% of calculated data quality rewards and data usage dividends.
- Unstaked Data Producer: receives 50% of calculated data quality rewards and data usage dividends. The remaining 50% is returned to the reward pool.
- Staking benefit: beyond higher rewards, staked Data Producers gain priority in reward queue when the per-epoch reward pool is constrained.
- Slashing: staked Data Producers are subject to stake slashing for fraudulent or malicious data uploads. Unstaked Data Producers face only blocking and suspension (no financial penalty beyond lost rewards).
- Cooldown: staked Data Producers who unstake enter a cooldown period (shorter than node cooldown, e.g., 7 days) during which pending rewards continue to be calculated at the staked rate.
Staking Rules
All staking is managed by smart contracts:
- staking deposit must be completed before node activation
- staking amount must meet the minimum for the node type at all times
- partial unstaking is allowed only if the remaining stake meets the minimum
- full unstaking triggers a cooldown period (set by governance, e.g., 30 days) during which the node cannot participate and the stake cannot be withdrawn
- slashing is applied directly to staked SYM. If remaining stake falls below minimum after slashing, the node is suspended until restaked.
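The staking rules above can be sketched as a small state object. This is a minimal sketch under stated assumptions: the `Stake` class and method names are hypothetical; on-chain, these rules are enforced by the staking smart contracts.

```python
# Hypothetical sketch of the staking rules above: partial unstaking must
# leave the minimum intact, and slashing below the minimum suspends the node.
class Stake:
    def __init__(self, amount: int, minimum: int):
        self.amount = amount
        self.minimum = minimum
        self.suspended = False

    def unstake(self, amount: int) -> None:
        # Partial unstaking is allowed only if the remainder meets the minimum.
        if self.amount - amount < self.minimum:
            raise ValueError("remaining stake would fall below minimum")
        self.amount -= amount

    def slash(self, amount: int) -> None:
        # Slashing applies directly to staked SYM; falling below the minimum
        # suspends the node until it is restaked.
        self.amount = max(0, self.amount - amount)
        if self.amount < self.minimum:
            self.suspended = True
```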
Slashing Conditions
| Severity | Condition | Penalty |
|---|---|---|
| Critical | Private data leakage, fabricated execution results | Up to 100% stake slash + permanent suspension |
| Severe | Unauthorized cross-Vault access, false verification, TEE attestation fraud | 20-50% stake slash + temporary suspension |
| Moderate | Repeated timeout, data unavailability, protocol duty refusal | 5-15% stake slash + reward blocking |
| Minor | Configuration errors, transient failures with self-recovery | Warning + temporary reward reduction |
Slashing is always evidence-based and subject to the challenge lifecycle.
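The graduated scale in the table above can be expressed as a clamped lookup. The ranges are taken from the table; the function name and clamping behavior are hypothetical illustration, not a normative algorithm.

```python
# Slash fraction ranges per severity tier, from the table above.
SLASH_RANGE = {
    "critical": (0.00, 1.00),  # up to 100% slash + permanent suspension
    "severe":   (0.20, 0.50),
    "moderate": (0.05, 0.15),
    "minor":    (0.00, 0.00),  # warning only, no stake slash
}

def slash_amount(severity: str, stake: int, fraction: float) -> int:
    """Clamp a proposed slash fraction to the governed range for its tier."""
    lo, hi = SLASH_RANGE[severity]
    return int(stake * min(max(fraction, lo), hi))
```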
Protocol Actors and Their Economic Roles
The protocol recognizes seven economic roles. These are protocol-level roles, not one-to-one runtime services; multiple roles may be held by the same node.
Actor-to-Node Mapping
| Protocol Actor | Typical Node | Economic Role |
|---|---|---|
| Data Producer | End user (not a node, optional staking) | Earns data quality rewards (100% if staked, 50% if unstaked) and data usage dividends |
| Task Submitter | End user or Enterprise Node | Pays task fees; Enterprise Nodes may offset with data contribution credits |
| Vault Operator | Enterprise Node | Earns Vault Operator share of task fees |
| Executor | Enterprise Node (compute) or Master Node | Earns Executor share of task fees |
| Verifier | Master Node | Earns Verifier share of task fees |
| Governance Actor | Master Node | May earn governance incentives |
| Treasury / Payout Authority | Master Node (smart contract) | Operational compensation only |
Actor Duties
Data Producer:
- upload source data into an authorized Vault boundary
- comply with quality standards defined by the Data Quality Reward Model
- respect quota and ingestion controls
- optionally stake S_producer SYM to receive full (100%) data quality rewards; unstaked Data Producers receive 50% rewards
Task Submitter:
- submit valid tasks with correct authorization context
- lock sufficient SYM budget before task admission
- avoid abusive, fraudulent, or replay-conflicting task submission
Vault Operator:
- maintain private data boundaries and expose only authorized record material
- preserve consistency between canonical records and Commitment references
- maintain uptime and data availability commitments
Executor:
- perform authorized computation inside Vault or TEE boundary
- produce deterministic and auditable execution output
- avoid unauthorized access, fabricated output, or duplicate claims
Verifier:
- validate execution correctness and integrity
- issue replayable VerificationDecision artifacts
- accept, reject, or challenge output according to evidence
Governance Actor:
- open, review, or adjudicate challenges within governance authority
- confirm or reject penalty outcomes based on replayable evidence
Treasury / Payout Authority:
- transform finalized reward accounting into payout intent on the settlement chain
- preserve payout integrity, reconciliation, and signer isolation
Task Pricing Framework
Every task class except parse requires the Task Submitter to lock a SYM budget before admission. The locked amount is calculated from a deterministic pricing formula.
Parse Pricing
Parse is free for the Data Producer:
- no SYM is required from the user to upload and parse data
- the computational cost of parsing is absorbed by the protocol (funded from treasury during bootstrap, from a small protocol fee on other tasks long-term)
- this design removes friction from data onboarding, which is the primary supply-side driver of the ecosystem
Query Pricing
The query task fee is calculated as:
query_fee = base_fee_query
          + (n_vaults × per_vault_fee)
          + (n_records_returned × per_record_fee)
          + (epsilon_consumed × per_epsilon_fee)

Where:
- base_fee_query: minimum fee for any query task, covering coordination and verification cost
- per_vault_fee: cost per participating Vault, covering Vault operator compensation
- per_record_fee: cost per record in the result set, covering data access cost
- per_epsilon_fee: cost per unit of differential privacy budget consumed, pricing the finite privacy resource
All fee parameters are set by policy_version and may vary by usage_scope.
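The query fee formula above can be sketched directly. The parameter values below are placeholders for illustration; real values come from the active policy_version and may vary by usage_scope.

```python
# Sketch of the query fee formula above; policy values are placeholders.
def query_fee(n_vaults: int, n_records: int, epsilon: float, params: dict) -> float:
    return (params["base_fee_query"]
            + n_vaults * params["per_vault_fee"]
            + n_records * params["per_record_fee"]
            + epsilon * params["per_epsilon_fee"])

policy = {"base_fee_query": 1.0, "per_vault_fee": 0.5,
          "per_record_fee": 0.01, "per_epsilon_fee": 10.0}
# 3 Vaults, 200 records, 0.1 epsilon: 1.0 + 1.5 + 2.0 + 1.0 = 5.5 SYM
fee = query_fee(n_vaults=3, n_records=200, epsilon=0.1, params=policy)
```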
Compute Pricing
The compute task fee is calculated as:
compute_fee = base_fee_compute
            + (compute_units × per_unit_fee)
            + (data_volume_mb × per_mb_fee)
            + (epsilon_consumed × per_epsilon_fee)

Where:
- base_fee_compute: minimum fee for any compute task
- per_unit_fee: cost per compute unit (abstract CPU-time or gas equivalent)
- per_mb_fee: cost per MB of input data accessed
- per_epsilon_fee: privacy budget cost, same as query
Train Pricing
The train task fee is the sum of per-round costs:
train_fee = base_fee_train
          + Σ_round (round_compute_fee + round_aggregation_fee)
          + (total_epsilon_consumed × per_epsilon_fee)

Where:
- base_fee_train: minimum fee for composite task setup
- round_compute_fee: sum of per-Vault compute costs for that round
- round_aggregation_fee: cost for the secure aggregation step per round
- total_epsilon_consumed: cumulative privacy budget across all rounds
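The per-round structure of the train fee can be sketched as a sum over rounds. Round costs and parameter values below are illustrative placeholders; actual values are policy-defined.

```python
# Sketch of the train fee above: base fee, plus per-round compute and
# aggregation costs, plus cumulative privacy budget cost.
def train_fee(rounds: list, total_epsilon: float, params: dict) -> float:
    """rounds: list of (round_compute_fee, round_aggregation_fee) pairs."""
    per_round = sum(compute + agg for compute, agg in rounds)
    return params["base_fee_train"] + per_round + total_epsilon * params["per_epsilon_fee"]

policy = {"base_fee_train": 5.0, "per_epsilon_fee": 10.0}
# 3 rounds at (2.0, 0.5), (2.0, 0.5), (3.0, 0.5), total epsilon 0.3:
# 5.0 + 8.5 + 3.0 = 16.5 SYM
fee = train_fee(rounds=[(2.0, 0.5), (2.0, 0.5), (3.0, 0.5)],
                total_epsilon=0.3, params=policy)
```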
Fee Distribution
When a task completes and settles, the locked fee is distributed as:
| Recipient | Share | Basis |
|---|---|---|
| Data Contributors | 30-50% | proportional to records used and privacy budget consumed against their data |
| Vault Operators | 15-25% | proportional to records served and storage commitment |
| Executors | 15-25% | proportional to verified compute work |
| Verifiers | 5-10% | per verification decision |
| Protocol Treasury | 5-10% | fixed protocol fee for sustainability |
Exact percentages are set by policy_version. The sum must equal 100% of the locked fee.
For multi-Vault tasks:
- each participating Vault operator receives a share proportional to the records served and the privacy budget consumed for their data subjects
- each executor that performed a per-Vault execution slice receives reward proportional to the computational work verified for that slice
- the aggregation executor receives a separate reward for the aggregation step, distinct from per-Vault execution rewards
- verifiers receive reward for the overall verification, not per-Vault
- per-Vault reward shares are deterministic given the same settlement context and policy version
- a Vault that was authorized but did not respond (timed out) receives no reward for that task
- the sum of all per-Vault shares plus aggregator and verifier shares must equal the total task reward budget, with no unaccounted remainder
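The "no unaccounted remainder" invariant above can be sketched as follows. The percentages chosen are one admissible point within the table's ranges (the policy_version fixes the real values), and routing the rounding remainder to the treasury is a hypothetical choice for illustration.

```python
# Sketch of fee distribution at settlement. Shares are illustrative values
# within the table's ranges and must sum to exactly 100% of the locked fee.
SHARES = {
    "data_contributors": 0.45,
    "vault_operators":   0.20,
    "executors":         0.20,
    "verifiers":         0.08,
    "treasury":          0.07,
}

def distribute(locked_fee: int) -> dict:
    """Split a locked fee; any integer rounding remainder is assigned to the
    treasury so that no amount is left unaccounted."""
    out = {k: int(locked_fee * v) for k, v in SHARES.items()}
    out["treasury"] += locked_fee - sum(out.values())
    return out
```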
Fee Lifecycle
The protocol requires:
- fee to be locked in escrow smart contract at task admission
- locked fee to be released to recipients only after finalized settlement
- if task is rejected before execution, the full locked fee to be returned to Task Submitter
- if task fails during execution, the locked fee to be partially distributed (Vault operators and executors who performed work receive their share; remainder returned to Task Submitter)
- challenged tasks to have their fee distribution suspended until challenge resolution
Task Admission Controls
The protocol requires:
- insufficiently funded or unauthorized tasks not to enter the accepted state
- tasks whose estimated cost, complexity, or policy class exceeds limits to be rejected, delayed, or split under policy
- rate limits and concurrency caps per actor or policy scope
- repeated abusive or non-computable task submission to trigger budget forfeiture, blocking, or suspension
Data Quality Reward Model
Users who contribute data through parse tasks may receive SYM rewards if their data meets quality criteria. This is the primary mechanism for bootstrapping the data supply side.
Quality Scoring Criteria
Data quality is evaluated deterministically by the protocol along five dimensions:
- Parsability (binary): the uploaded data was successfully parsed into a valid canonical record with no structural errors. This is the minimum threshold — unparseable data receives zero reward.
- Completeness (0.0-1.0): ratio of populated optional fields to total optional fields in the resolved semantic schema. Higher completeness means more useful data.
- Freshness (0.0-1.0): score based on the age of the data event relative to parse time. Data from within 24 hours scores 1.0, decaying to 0.0 over a configurable window (e.g., 90 days).
- Non-duplication (binary): the record is not a duplicate of any existing canonical record in the same Vault, determined by commitment comparison. Duplicates receive zero reward.
- Domain Demand (multiplier, 0.5-3.0): a governance-set multiplier per domain reflecting current ecosystem demand. Domains with high query/compute activity have higher multipliers, incentivizing data supply where it is most needed.
Reward Calculation
For each successfully parsed, non-duplicate record:
data_reward = base_reward
            × (1 + completeness_score × completeness_weight + freshness_score × freshness_weight)
            × domain_demand_multiplier
            × staking_multiplier

Where:
- base_reward: minimum reward per valid record (set by policy_version)
- completeness_weight: weight given to the completeness bonus (e.g., 0.3)
- freshness_weight: weight given to the freshness bonus (e.g., 0.2)
- domain_demand_multiplier: governance-set per-domain multiplier
- staking_multiplier: 1.0 if the Data Producer has an active S_producer stake; 0.5 if unstaked. The unrewarded 50% for unstaked producers is returned to the epoch reward pool.
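The reward formula above translates directly into code. The weights use the example values from this document; the base_reward of 1.0 SYM is an illustrative placeholder, not a policy value.

```python
# Sketch of the data quality reward formula above, with the document's
# example weights (0.3 completeness, 0.2 freshness) as defaults.
def data_reward(completeness: float, freshness: float,
                domain_multiplier: float, staked: bool,
                base_reward: float = 1.0,
                completeness_weight: float = 0.3,
                freshness_weight: float = 0.2) -> float:
    staking_multiplier = 1.0 if staked else 0.5
    bonus = 1 + completeness * completeness_weight + freshness * freshness_weight
    return base_reward * bonus * domain_multiplier * staking_multiplier

# A fully complete, fresh record in a demand-1.5 domain from a staked
# producer: (1 + 0.3 + 0.2) * 1.5 = 2.25 * base_reward
r = data_reward(completeness=1.0, freshness=1.0, domain_multiplier=1.5, staked=True)
```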
Reward Funding Source
- during the bootstrap phase: data quality rewards are funded from the Ecosystem Incentives allocation and protocol inflation
- after bootstrap: data quality rewards are funded from the Protocol Treasury share of task fees
- if the reward pool for an epoch is exhausted, remaining eligible records queue for the next epoch's reward pool
- reward pool size per epoch is set by governance and publicly visible
Anti-Gaming Controls
The protocol requires:
- Sybil resistance: data quality rewards are rate-limited per identity (subject to consent-verified Data Producer identity)
- diminishing returns: rewards per Data Producer per epoch decrease after a configurable threshold, preventing single-actor dominance
- retroactive adjustment: if data is later found to be fraudulent, fabricated, or policy-violating through the challenge process, earned rewards are clawed back through penalty
- no reward for bulk automated uploads that lack genuine user-generated content, enforceable through behavioral analysis and governance review
- obviously invalid, duplicate, or abusive uploads do not enter protocol processing
- low-quality or malicious uploads are rate-limited, deprioritized, or pruned under policy
- repeated ingress abuse triggers blocking or suspension from further upload pathways
Data Usage Dividend Model
Beyond the one-time parse reward, Data Contributors receive ongoing dividends when their data is used by tasks.
Dividend Mechanism
- when a query, compute, or train task uses records from a Vault, the protocol tracks which Data Contributors' records were accessed
- the Data Contributor share of the task fee (30-50%) is distributed proportionally among all contributing Data Producers whose records were included in the task
- distribution is proportional to the number of records used per Data Producer, weighted by privacy budget consumed against their records
- dividends are accumulated per epoch and distributed at epoch settlement
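The proportional, privacy-budget-weighted split described above can be sketched as follows. The weighting rule (records used multiplied by epsilon consumed) is one plausible reading of "proportional to records used, weighted by privacy budget"; the exact weighting is policy-defined, and all names here are hypothetical.

```python
# Hypothetical sketch of splitting the Data Contributor share of one task's
# fee among producers, weighting records used by epsilon consumed.
def split_dividends(contributor_pool: float, usage: dict) -> dict:
    """usage: producer -> (records_used, epsilon_consumed)."""
    weights = {p: records * epsilon for p, (records, epsilon) in usage.items()}
    total = sum(weights.values())
    if total == 0:
        return {p: 0.0 for p in usage}
    return {p: contributor_pool * w / total for p, w in weights.items()}

pool = 400.0  # e.g., 40% of a 1000-SYM task fee
dividends = split_dividends(pool, {"alice": (10, 0.2), "bob": (30, 0.2)})
```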
Privacy-Preserving Dividend Accounting
The protocol requires:
- dividend attribution to use Vault-internal accounting — the coordination layer never sees which specific Data Producer's records were used
- per-Vault dividend allocation is calculated inside the Vault boundary and reported as aggregate amounts per Data Producer
- this ensures that the dividend mechanism does not leak information about whose data was accessed
Reward Lifecycle
Reward States
The canonical reward lifecycle is:
- pending_input: task is in progress, reward not yet calculable
- eligible: task has reached accepted verification and finalized settlement, reward inputs are complete
- calculated: reward amount has been determined under the fee distribution formula and policy version
- finalized: reward is confirmed and ready for payout
- blocked: reward is suspended due to pending challenge, insufficient stake, or active penalty
- adjusted: reward has been modified by governance action (e.g., clawback due to retroactive fraud finding)
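One way to make the lifecycle checkable is a transition table. The document only names the states, not the allowed transitions, so the edge set below is a hypothetical reading consistent with the invariants (blocked is entered from in-flight states; finalized rewards are only adjusted forward, never deleted).

```python
# Hypothetical transition table for the reward lifecycle above; the state
# names are from the document, the edges are an assumed illustration.
REWARD_TRANSITIONS = {
    "pending_input": {"eligible", "blocked"},
    "eligible":      {"calculated", "blocked"},
    "calculated":    {"finalized", "blocked"},
    "finalized":     {"adjusted"},  # append-only: adjusted forward, never deleted
    "blocked":       {"eligible", "calculated", "adjusted"},
    "adjusted":      set(),
}

def can_transition(src: str, dst: str) -> bool:
    return dst in REWARD_TRANSITIONS.get(src, set())
```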
Reward Invariants
The protocol requires:
- reward only for finalized work — no reward without accepted verification and finalized settlement
- append-only reward accounting — reward records are never deleted, only adjusted forward
- reward eligibility depends on: epoch_id, accepted verification, finalized settlement, contributor role, policy_version, penalty state, and stake compliance
- no actor receives reward if: the work is not finalized, the actor is blocked or suspended, the actor is a staking actor and not stake-compliant, or a challenge blocks finalization
- payout systems remain downstream from protocol accounting — payout failure must not mutate finalized reward records
Penalty Model
Penalty Forms
The protocol may apply three forms of consequence:
- block: temporarily prevent reward finalization or payout progression
- slash: append negative economic records or forfeit staked SYM
- suspend: temporarily or permanently remove actor eligibility or authority
These consequences must be evidence-based, append-only in accounting effect, and governable through the challenge process.
For staking actors (nodes): slash is applied against active stake.
For staked Data Producers: slash is applied against their S_producer stake, in addition to reward clawback.
For unstaked Data Producers and Task Submitters: forfeiture is applied against budget lock, challenge bond, or ingestion bond.
Penalty by Actor
Data Producer:
- repeated garbage, duplicate, or misleading uploads may trigger quota restriction, bond forfeiture, or blocking
- malicious upload behavior that pollutes protocol-relevant intake may trigger suspension
- fraudulent data discovered retroactively triggers clawback of data quality rewards
- for staked Data Producers: severe or repeated violations may trigger stake slashing in addition to reward clawback. Slashing severity follows the same graduated scale as node penalties.
- for unstaked Data Producers: penalties are limited to reward clawback, blocking, and suspension. No financial stake is at risk, but the producer loses access to future earning opportunities.
Task Submitter:
- abusive or fraudulent submission may trigger rejection, blocking, or budget forfeiture
- forged authorization or replay abuse may trigger suspension
- malicious challenge abuse may trigger challenge bond forfeiture
Vault Operator:
- unauthorized disclosure or misuse of private data may trigger severe slashing or suspension
- supplying false, inconsistent, or non-committed record material may trigger penalty
- refusing required protocol duties after accepted commitment may block reward eligibility
- accepted penalties directly slash active stake
Executor:
- fabricated execution results may trigger slashing
- unauthorized access or policy violation may trigger suspension or expulsion
- duplicate claiming or replay abuse may trigger blocking and negative economic adjustment
- accepted penalties directly slash active stake
Verifier:
- knowingly incorrect approval or rejection may trigger slashing
- fabricated evidence or collusion may trigger suspension or expulsion
- persistent low-quality or bad-faith verification may block future reward eligibility
- accepted penalties directly slash active stake
Governance Actor:
- frivolous or malicious challenge activity may trigger penalty
- collusive or bad-faith adjudication may trigger slashing and governance disqualification
- accepted penalties directly slash active stake
Treasury / Payout Authority:
- duplicate payout, withheld authorized payout, or payout tampering may trigger severe penalty
- signer misuse or treasury abuse may trigger suspension of payout authority
- accepted penalties directly slash active stake
Governance Rules
Challenge and Slashing
The protocol requires:
- challenge evidence to be replayable
- open challenge to block affected reward finalization
- approved penalties to append new economic records rather than rewrite history
- already confirmed chain effects to be corrected explicitly, not silently rolled back
- staking, reward, and penalty state to remain economically consistent with each actor's current eligibility
Governance Authority
The Governance Service (operated by Master Nodes) has authority over:
- policy version transitions at epoch boundaries
- enterprise node admission and removal
- fee parameter adjustments within the Task Pricing Framework
- data quality reward parameter adjustments (base_reward, domain_demand_multiplier)
- privacy budget replenishment rates
- emergency actions: temporary budget freezes, node suspension, protocol parameter changes
All governance decisions require the quorum and process defined in the active policy version.
Privacy Budget Economics
Privacy budget is both a privacy-protection mechanism and an economic resource. Its consumption must be accounted for within the protocol economics framework.
Budget as Economic Resource
The protocol requires:
- privacy budget consumption to be a cost factor in task admission alongside monetary budget lock
- tasks that consume privacy budget to carry an estimated epsilon_cost at admission, verified against actual consumption at settlement
- the per_epsilon_fee in the Task Pricing Framework to price this finite resource, creating an economic incentive for budget-efficient task design
- excessive or wasteful privacy budget consumption to be penalizable through the standard challenge and penalty process
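The settlement-time check of declared versus actual budget consumption can be sketched minimally. The function name and the tolerance parameter are hypothetical; whether any overconsumption tolerance exists is policy-defined.

```python
# Hypothetical sketch of verifying actual epsilon consumption against the
# epsilon_cost declared at admission, at settlement time.
def check_epsilon(declared: float, consumed: float, tolerance: float = 0.0) -> str:
    """Flag executions that consumed more privacy budget than declared."""
    if consumed > declared * (1 + tolerance):
        return "overconsumption"  # grounds for the challenge / penalty process
    return "ok"
```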
Budget Replenishment and Governance
The protocol requires:
- privacy budget replenishment to be a governed epoch-boundary event, not an automatic reset
- replenishment rates to be set by governance policy per usage_scope and privacy_class
- governance actors to have the authority to adjust replenishment rates or impose temporary budget freezes in response to privacy incidents
Budget Violation Penalties
For executors:
- execution that consumes more privacy budget than declared at admission may trigger penalty
- execution that circumvents budget enforcement through data manipulation or query restructuring may trigger slashing
For task submitters:
- repeated submission of tasks designed to exhaust privacy budgets without legitimate business purpose may trigger budget lock forfeiture and submission suspension
Blockchain Obligation Boundary
SCP defines the obligation to prepare payout instructions from finalized reward records.
At the protocol level:
- Aptos is the current primary settlement chain
- SYM is the canonical token on this chain
- all staking, escrow, reward, and treasury contracts are deployed on this chain
- PayoutInstructionSet must preserve deterministic payout identity
- payout failure must not mutate finalized reward records
Implementation details for submission, retry, signer management, and reconciliation belong to SCS.
Economic Balance Analysis
The protocol must maintain sustainable coin flow: total SYM distributed as rewards must not exceed total SYM collected from task fees plus authorized inflation.
Coin Flow Model
Sources of SYM entering circulation:
- Task fee redistribution: the primary long-term source. When Task Submitters pay for query/compute/train tasks, the fee is redistributed to Data Contributors, Vault Operators, Executors, and Verifiers.
- Inflation (bootstrap only): during years 1-5, newly minted SYM funds data quality rewards and node operation rewards. This source decreases annually to zero.
- Unstaking: when nodes exit, their previously locked stake returns to circulation. This is not new supply, but it increases liquid supply.
Sinks of SYM leaving circulation:
- Task fee payment: Task Submitters spend SYM to execute tasks. This SYM is locked in escrow until settlement.
- Staking locks: nodes must lock SYM to participate. This removes SYM from liquid circulation.
- Slashing and burns: penalized stake is either burned (permanently reducing supply) or sent to treasury.
- Protocol fee accumulation: the Protocol Treasury share of task fees accumulates in the treasury contract. This SYM is semi-liquid (governance-controlled release).
Equilibrium Condition
For the protocol to be economically sustainable after the bootstrap phase:
Per-epoch task fee revenue ≥ Per-epoch reward obligations

Specifically:

Σ(task_fees) ≥ Σ(data_quality_rewards) + Σ(data_usage_dividends) + Σ(vault_operator_rewards) + Σ(executor_rewards) + Σ(verifier_rewards) + Σ(governance_incentives) + protocol_operating_cost

Since task fees are the source of all post-bootstrap rewards, this is automatically satisfied by the fee distribution model (fees in = rewards out). The key risk is insufficient task demand:
- if task volume is too low, per-record data quality rewards become small, reducing incentive to contribute data
- if data supply is too low, task quality suffers, reducing incentive for Task Submitters
- this creates a potential cold-start death spiral
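The equilibrium inequality above reduces to a simple per-epoch check. The figures below are illustrative only; real monitoring would read these values from finalized settlement records.

```python
# Sketch of the per-epoch equilibrium check: fee revenue must cover all
# reward obligations plus protocol operating cost.
def epoch_balanced(task_fees: float, rewards: dict, operating_cost: float) -> bool:
    return task_fees >= sum(rewards.values()) + operating_cost

# Illustrative epoch: 9,500 SYM of obligations + 400 operating cost
# against 10,000 SYM of fee revenue -> balanced.
ok = epoch_balanced(
    task_fees=10_000.0,
    rewards={"data_quality": 3_000.0, "dividends": 2_500.0,
             "vault_operators": 1_800.0, "executors": 1_500.0,
             "verifiers": 600.0, "governance": 100.0},
    operating_cost=400.0,
)
```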
Cold Start Strategy
The bootstrap inflation and treasury allocation address the cold-start problem:
- Phase 1 (Year 1-2): protocol subsidizes both sides. Data quality rewards come from inflation. Task fees are kept low or subsidized from treasury to attract Task Submitters.
- Phase 2 (Year 2-4): as data supply and task demand grow, task fee revenue increasingly covers rewards. Inflation decreases.
- Phase 3 (Year 5+): task fee revenue fully covers all rewards. Protocol is self-sustaining.
Governance Monitoring
The protocol requires governance to monitor:
- per-epoch coin flow balance (task fees collected vs rewards distributed)
- reward pool utilization rate (percentage of available rewards actually distributed)
- data quality reward pool depletion rate (how quickly the per-epoch pool is consumed)
- staking ratio (percentage of total SYM supply locked in staking)
- task demand growth rate vs data supply growth rate
If imbalances are detected, governance may adjust:
- fee parameters in the Task Pricing Framework
- data quality reward parameters (base_reward, domain_demand_multiplier)
- fee distribution percentages
- inflation rate (during bootstrap phase only)
Trust and Security Assumptions
The protocol assumes:
- private plaintext remains inside Vault or authorized TEE boundaries
- append-only evidence and accounting are mandatory
- signer compromise and payout tampering are treated as governable security incidents
- reconciliation drift must be detectable
- TEE attestation integrity depends on hardware vendor trust, which is a protocol-external assumption monitored by governance
- privacy budget accounting is trustworthy only if Vault operators and executors faithfully report consumption, enforceable through verification and challenge
- smart contract correctness on Aptos is a protocol-external assumption, mitigated by formal verification and audit
- the 3 master node BFT model tolerates at most 1 Byzantine node; compromise of 2 or more master nodes would require emergency governance intervention