SCS System Architecture

Version: v1.0 (Draft) · Status: Draft · Authoritative: Yes

Purpose

This document defines how SCS (Symphony Core System) realizes the SCP protocol as a concrete, deployable system.

It answers:

  1. how the physical node network is structured and what runs where
  2. how services map to the SCP 3 Planes + 2 Services architecture
  3. how nodes communicate with each other and with external users
  4. what data each node type stores and how persistence is organized
  5. how smart contracts on Aptos realize staking, escrow, and settlement
  6. how TEE environments are integrated for privacy-preserving computation
  7. how the system handles failure, reconciliation, and operational concerns

For the normative protocol semantics that this system must preserve, see:

  1. SCP Protocol Overview
  2. SCP Core Spec
  3. SCP Economics and Governance

Design Principles

SCS realizes the protocol truth defined by SCP; it does not redefine it. This means:

  1. SCP defines lifecycle meaning, actor duties, semantic rules, and economic invariants
  2. SCS defines the runtime services, storage, APIs, and operational mechanisms that preserve those meanings in production
  3. a deployment may combine or separate services differently, as long as the same protocol semantics are preserved
  4. SCS adds no protocol concepts — every SCS behavior must trace back to an SCP requirement

Part 1: Deployment Topology

Node Network

The SCS network consists of two node types operating in a permissioned topology:

            ┌───────────────────────────────────────────────┐
            │              Master Node Cluster              │
            │      (3 nodes, BFT consensus, Symphony)       │
            │                                               │
            │  ┌──────────┐   ┌──────────┐   ┌──────────┐   │
            │  │ Master-1 │   │ Master-2 │   │ Master-3 │   │
            │  └────┬─────┘   └────┬─────┘   └────┬─────┘   │
            │       └──────────────┼──────────────┘         │
            │                      │ BFT Consensus          │
            └──────────────────────┼────────────────────────┘
                                   │
            ┌──────────────────────┼────────────────────────┐
            │        Inter-Node Communication Layer         │
            │       (mTLS, authenticated, encrypted)        │
            └───────┬──────────────┼──────────────┬─────────┘
                    │              │              │
             ┌──────┴───────┐ ┌────┴─────────┐ ┌──┴───────────┐
             │ Enterprise-A │ │ Enterprise-B │ │ Enterprise-C │
             │   (Vault-A)  │ │   (Vault-B)  │ │   (Vault-C)  │
             └──────────────┘ └──────────────┘ └──────────────┘

Master Node Cluster

The 3 master nodes are operated by the Symphony Foundation. They collectively run:

  1. Admission Plane services — API gateway, identity, consent verification, semantic resolution, privacy budget reservation, TaskEnvelope assembly
  2. Settlement Plane services — result verification, challenge lifecycle, reward accounting, penalty enforcement, payout execution
  3. Governance Service — policy version management, epoch management, actor registry, governance proposals
  4. BFT consensus engine — protocol finality requires agreement from at least 2 of 3 master nodes

Master nodes do NOT store private user data. They see only commitments, metadata, and protocol-visible artifacts.

Each master node is a full replica:

  1. every master node runs the complete set of master services
  2. any single master node failure does not interrupt protocol operation
  3. state is synchronized through the BFT consensus protocol
  4. each master node independently validates all protocol transitions

Enterprise Nodes

Enterprise nodes are operated by approved enterprises. Each enterprise node runs:

  1. Vault services (Data Sovereignty Service) — encrypted record storage, consent management, privacy budget ledger, authorization enforcement
  2. Execution services (Execution Plane subset) — Vault-internal computation, optional TEE execution, slice result production
  3. Node agent — handles inter-node communication, heartbeat, and coordination protocol with the master cluster

Enterprise nodes do NOT run admission, verification, settlement, or governance services. Those remain on the master cluster.

Each enterprise node is independently operated:

  1. the enterprise controls its own infrastructure, storage, and operational practices
  2. the enterprise's data never leaves the Vault boundary except as commitment references or TEE-processed results
  3. the enterprise manages its own consent records for data subjects within its Vault
  4. enterprise nodes may optionally contribute compute capacity beyond their own Vault data

End Users

End users (Data Producers and Task Submitters) interact with the system through the master cluster's API gateway. They do not run nodes. Their data flows as follows:

  1. Data upload: user → API gateway → master node routes to assigned enterprise Vault → parse execution within Vault
  2. Task submission: user → API gateway → Admission Plane on master → TaskEnvelope → dispatched to relevant nodes

Part 2: Service Architecture

Service-to-Node Mapping

| Service | Runs On | SCP Component |
| --- | --- | --- |
| API Gateway | Master nodes | Admission Plane entry |
| Identity & Auth Service | Master nodes | Admission Plane |
| Consent Verification Service | Master nodes (invokes Consent Manager on enterprise nodes via RPC) | Admission Plane |
| Semantic Registry Service | Master nodes | Admission Plane |
| Privacy Budget Reservation Service | Master nodes (invokes Privacy Budget Ledger on enterprise nodes via RPC) | Admission Plane |
| TaskEnvelope Assembly Service | Master nodes | Admission Plane |
| Task Orchestrator | Master nodes | Execution Plane (coordination) |
| Multi-Vault Coordinator | Master nodes | Execution Plane (coordination) |
| Vault Storage Engine | Enterprise nodes | Data Sovereignty Service |
| Consent Manager | Enterprise nodes | Data Sovereignty Service |
| Privacy Budget Ledger | Enterprise nodes | Data Sovereignty Service |
| Computation Runtime | Enterprise nodes (+ optional TEE) | Execution Plane (computation) |
| Aggregation Runtime | Master nodes (in TEE) | Execution Plane (aggregation) |
| Result Verification Service | Master nodes | Settlement Plane |
| Challenge Manager | Master nodes | Settlement Plane |
| Reward Accounting Service | Master nodes | Settlement Plane |
| Payout Service | Master nodes | Settlement Plane |
| Policy Version Manager | Master nodes | Governance Service |
| Epoch Manager | Master nodes | Governance Service |
| Actor Registry | Master nodes | Governance Service |
| BFT Consensus Engine | Master nodes | Cross-cutting |

Master Node Services Detail

API Gateway

  1. single entry point for all external requests (user uploads, task submissions, result queries)
  2. TLS termination, rate limiting, DDoS protection
  3. routes requests to the appropriate internal service based on request type
  4. does NOT perform protocol logic — it is a routing and protection layer only

Task Orchestrator

The central state machine driver:

  1. receives validated TaskEnvelope from the Admission Plane
  2. drives the task through canonical state transitions (accepted → resolving → dispatched → verifying → awaiting_settlement → completed)
  3. for multi-Vault tasks: delegates to Multi-Vault Coordinator
  4. for composite tasks: manages sub-task sequencing and iteration control
  5. emits protocol events for every state transition (auditable, replayable)
  6. enforces timeouts at every stage
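
A minimal Go sketch of this transition discipline, assuming a linear happy path; retries, composite sub-task states, and failure edges are omitted, and the `Advance` helper and event sink are illustrative rather than part of the SCP contract:

```go
package orchestrator

import "fmt"

// State names follow the canonical lifecycle listed above.
type State string

const (
	Accepted           State = "accepted"
	Resolving          State = "resolving"
	Dispatched         State = "dispatched"
	Verifying          State = "verifying"
	AwaitingSettlement State = "awaiting_settlement"
	Completed          State = "completed"
)

// validNext encodes the only transitions the orchestrator will drive.
// Any other transition request is rejected, never silently coerced.
var validNext = map[State]State{
	Accepted:           Resolving,
	Resolving:          Dispatched,
	Dispatched:         Verifying,
	Verifying:          AwaitingSettlement,
	AwaitingSettlement: Completed,
}

// Advance moves a task one step and emits a protocol event for audit.
// The emit callback is a stand-in for the real append-only event store.
func Advance(taskID string, current State, emit func(event string)) (State, error) {
	next, ok := validNext[current]
	if !ok {
		return current, fmt.Errorf("task %s: no transition out of %q", taskID, current)
	}
	emit(fmt.Sprintf("task=%s %s->%s", taskID, current, next))
	return next, nil
}
```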

Multi-Vault Coordinator

  1. expands the coordination envelope into per-Vault execution assignments
  2. sends execution assignments to target enterprise nodes
  3. tracks per-Vault slice progress and enforces quorum
  4. triggers aggregation when sufficient slices complete
  5. handles partial completion and timeout logic
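
A sketch of the quorum decision under the simplifying assumption of a count-based quorum; the `Decide` helper and status model are illustrative, and policy may define weighted or per-domain quorums instead:

```go
package coordinator

// SliceStatus tracks one Vault's progress for a task. Timed-out slices
// count toward neither the completed nor the pending total, so they can
// never help reach quorum.
type SliceStatus int

const (
	Pending SliceStatus = iota
	Complete
	TimedOut
)

// Decision mirrors the partial-completion and timeout logic above:
// aggregate when quorum is met, reject (or retry under policy) when
// quorum can no longer be reached, otherwise keep waiting.
type Decision string

const (
	Wait      Decision = "wait"
	Aggregate Decision = "aggregate"
	Reject    Decision = "reject"
)

func Decide(slices map[string]SliceStatus, quorum int) Decision {
	complete, pending := 0, 0
	for _, s := range slices {
		switch s {
		case Complete:
			complete++
		case Pending:
			pending++
		}
	}
	switch {
	case complete >= quorum:
		return Aggregate
	case complete+pending < quorum: // quorum unreachable even if all pending finish
		return Reject
	default:
		return Wait
	}
}
```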

Aggregation Runtime

  1. executes inside the aggregation environment declared in the coordination envelope. SCP permits three such environments: (a) an attested TEE, (b) a secure multi-party computation protocol, or (c) a homomorphic encryption scheme. SCS v1 ships (a) as the default implementation on master nodes; (b) and (c) are supported as pluggable runtimes that may be enabled per policy without changing the outer service contract
  2. when the runtime is an attested TEE, it is provisioned on a master node but operates under a dedicated aggregator actor identity, distinct from the master node's Settlement/Governance identities, so that the aggregator is bound by Executor-class staking, penalty, and governance rules as required by SCP. The aggregator actor must never coincide with the task submitter
  3. receives per-Vault slice results (commitment references and privacy-safe outputs)
  4. executes the declared aggregation method (union, intersection, secure_sum, federated_average)
  5. enforces cardinality thresholds at the aggregate level
  6. produces aggregate result with cryptographic proof and (for TEE runtimes) an AttestationReport
  7. does NOT retain per-Vault inputs after producing the aggregate

Because the Aggregation Runtime and the Result Verification Service may physically co-locate on the master cluster, SCS enforces logical separation: the aggregator signs with an Executor-class key, verification runs in a separate service process with its own key material, and challenge evidence always distinguishes the aggregator actor from the verifier actor. A deployment that exposes Aggregation Runtime to an independent third-party Executor is permitted and encouraged once the actor registry supports external aggregators.
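
As a concrete illustration of the declared aggregation methods and the aggregate-level cardinality threshold above, a sketch of `federated_average`; the `Slice` shape is an assumption of this sketch, since real SliceResultBundles also carry commitments and proof references:

```go
package aggregator

import "errors"

// Slice is a per-Vault result: a partial mean over n authorized records.
type Slice struct {
	Mean  float64
	Count int
}

// FederatedAverage combines per-Vault means weighted by record count,
// enforcing the cardinality threshold before any value is produced, so
// nothing leaks when the aggregate population is too small.
func FederatedAverage(slices []Slice, minCardinality int) (float64, error) {
	total := 0
	weighted := 0.0
	for _, s := range slices {
		total += s.Count
		weighted += s.Mean * float64(s.Count)
	}
	if total < minCardinality {
		return 0, errors.New("aggregate cardinality below declared threshold")
	}
	return weighted / float64(total), nil
}
```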

Semantic Registry Service

  1. stores domain hierarchy, canonical attributes, query attributes, derivation rules
  2. handles semantic resolution requests during task admission
  3. manages attribute lifecycle transitions (proposed → registered → active → deprecated → retired)
  4. supports local attribute candidate aggregation and promotion workflows

Reward Accounting Service

  1. receives finalized settlement contexts
  2. calculates per-actor reward shares using the fee distribution formula and policy version. The full set of recipient classes is: Data Contributors (Data Producers), Vault Operators, Executors, Aggregators, and Verifiers. The sum of all per-Vault shares plus aggregator and verifier shares must equal the total task reward budget, with no unaccounted remainder (SCP economics invariant)
  3. applies staking multiplier (1.0 for staked, 0.5 for unstaked Data Producers) and routes the unrewarded 50% back to the epoch reward pool via an explicit pool-return record, so that the invariant in (2) holds after the multiplier is applied
  4. calculates data quality rewards for parsed records
  5. calculates data usage dividends for record contributions
  6. produces reward records and submits payout instructions to the Payout Service
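
A sketch of the multiplier and invariant logic above, assuming integer accounting in the smallest SYM unit; the class names, the pool-return record shape, and the helper signatures are illustrative, while the 0.5 factor is the policy value cited in item 3:

```go
package rewards

import (
	"errors"
	"math/big"
)

// Share is one recipient's cut of a task's reward budget, in the
// smallest SYM unit. big.Int avoids float drift in accounting.
type Share struct {
	Actor  string
	Class  string // data_contributor | vault_operator | executor | aggregator | verifier
	Amount *big.Int
}

// ApplyProducerMultiplier halves an unstaked Data Producer's share and
// returns the withheld half as an explicit pool-return record, so the
// budget invariant still holds after the multiplier is applied.
func ApplyProducerMultiplier(s Share, staked bool) (Share, *Share) {
	if staked || s.Class != "data_contributor" {
		return s, nil
	}
	half := new(big.Int).Div(s.Amount, big.NewInt(2))
	kept := Share{Actor: s.Actor, Class: s.Class, Amount: half}
	returned := Share{Actor: "epoch_reward_pool", Class: "pool_return",
		Amount: new(big.Int).Sub(s.Amount, half)} // remainder, never dropped
	return kept, &returned
}

// CheckBudgetInvariant enforces the SCP economics invariant: all shares,
// including pool returns, must sum exactly to the task reward budget.
func CheckBudgetInvariant(shares []Share, budget *big.Int) error {
	sum := new(big.Int)
	for _, s := range shares {
		sum.Add(sum, s.Amount)
	}
	if sum.Cmp(budget) != 0 {
		return errors.New("reward shares do not sum to task budget")
	}
	return nil
}
```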

Payout Service

  1. transforms finalized reward records into PayoutInstructionSet for the Aptos chain
  2. manages signer isolation: payout and reward distribution use keys distinct from the Treasury multi-sig and from the BFT consensus keys of the master nodes, so that a compromise of one key class does not automatically grant authority over another
  3. batches payout instructions by epoch
  4. after Settlement Plane finalization, anchors the corresponding SettlementRoot hash on Aptos as part of the same epoch batch, tracks confirmation, and retries on reorg without mutating the finalized settlement context
  5. handles submission, retry, and confirmation tracking
  6. records payout success or failure without mutating finalized reward records

Enterprise Node Services Detail

Vault Storage Engine

  1. stores canonical private records in encrypted form
  2. maintains Commitment = SHA256(Serialize(CR)) for every record
  3. enforces encryption at rest and in transit
  4. supports record retrieval only through authenticated, authorized channels
  5. no plaintext leaves the Vault except into an attested TEE
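
The commitment formula in item 2 is directly mechanical; a minimal sketch, assuming a deterministic canonical encoder for records exists elsewhere:

```go
package vault

import (
	"crypto/sha256"
	"encoding/hex"
)

// Commitment implements Commitment = SHA256(Serialize(CR)) as stated
// above. The input must come from a deterministic, canonical encoding
// of the record; that encoder is assumed to be provided elsewhere.
func Commitment(serializedRecord []byte) string {
	sum := sha256.Sum256(serializedRecord)
	return hex.EncodeToString(sum[:])
}
```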

Consent Manager

  1. stores per-data-subject consent records within the Vault
  2. enforces consent checks before any data access
  3. handles consent grant, withdrawal, and expiry
  4. consent state is never exported as plaintext — only consent verification results (yes/no) are shared

Privacy Budget Ledger

  1. maintains per-data-subject, per-usage-scope budget tracking
  2. processes reserve operations from the master cluster during task admission
  3. processes commit operations during execution
  4. processes release operations when tasks fail before execution
  5. enforces that committed consumption is append-only
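
A sketch of the two-phase ledger, with an in-memory map standing in for the durable per-subject store; the keying scheme is an assumption, and the real ledger additionally links every reserve/commit/release to task_id and request_id for the evidence chain (see Part 7):

```go
package budget

import "fmt"

// entry is one (data subject, usage scope) budget line. Committed
// consumption only grows; reservations are the revocable part.
type entry struct {
	limit     float64
	committed float64 // append-only: never decreases
	reserved  float64 // released if the task fails before execution
}

type Ledger struct{ entries map[string]*entry }

func key(subject, scope string) string { return subject + "/" + scope }

// NewLedger seeds per-(subject, scope) limits, keyed "subject/scope".
func NewLedger(limits map[string]float64) *Ledger {
	l := &Ledger{entries: map[string]*entry{}}
	for k, lim := range limits {
		l.entries[k] = &entry{limit: lim}
	}
	return l
}

// Reserve is phase one, invoked by the master cluster at admission.
func (l *Ledger) Reserve(subject, scope string, cost float64) error {
	e := l.entries[key(subject, scope)]
	if e == nil {
		return fmt.Errorf("no budget for %s/%s", subject, scope)
	}
	if e.committed+e.reserved+cost > e.limit {
		return fmt.Errorf("budget exhausted for %s/%s", subject, scope)
	}
	e.reserved += cost
	return nil
}

// Commit converts a reservation into committed, append-only consumption.
func (l *Ledger) Commit(subject, scope string, cost float64) error {
	e := l.entries[key(subject, scope)]
	if e == nil || e.reserved < cost {
		return fmt.Errorf("commit without matching reservation")
	}
	e.reserved -= cost
	e.committed += cost
	return nil
}

// Release undoes a reservation for a task that failed before execution.
func (l *Ledger) Release(subject, scope string, cost float64) error {
	e := l.entries[key(subject, scope)]
	if e == nil || e.reserved < cost {
		return fmt.Errorf("release without matching reservation")
	}
	e.reserved -= cost
	return nil
}
```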

Computation Runtime

  1. executes authorized computation on Vault-internal records
  2. for parse tasks: runs OCR, extraction, and structured output generation
  3. for query/compute/train slices: evaluates against authorized records and produces SliceResultBundle
  4. may delegate to a local TEE for cross-Vault tasks where policy requires
  5. produces commitment-linked, deterministic outputs

Part 3: Inter-Node Communication

Communication Model

All inter-node communication uses mutual TLS (mTLS) with certificate-based authentication.

Master-to-Master

  1. BFT consensus messages: state synchronization, block proposals, vote messages
  2. Service replication: configuration, registry, and accounting state
  3. protocol: internal consensus protocol (gRPC-based)

Master-to-Enterprise

  1. Execution assignments: TaskEnvelope + per-Vault execution parameters
  2. Consent verification requests: "does data subject X have consent for usage scope Y?"
  3. Privacy budget operations: reserve, commit, release
  4. Slice result collection: enterprise returns SliceResultBundle after execution
  5. Heartbeat and health: enterprise node liveness monitoring
  6. protocol: gRPC over mTLS, with message signing

Enterprise-to-Master

  1. Slice results: completed execution output with proof references
  2. Privacy budget responses: reservation confirmations, commit acknowledgments
  3. Consent verification responses: yes/no results
  4. Health reports: storage capacity, compute availability, uptime metrics

User-to-Master

  1. Data upload: user uploads raw data (receipt images, documents) via REST/gRPC API
  2. Task submission: user submits task requests via REST/gRPC API
  3. Result retrieval: user queries task status and retrieves settled results
  4. Account management: staking, unstaking, reward queries
  5. protocol: HTTPS REST or gRPC, with JWT/OAuth2 authentication

Part 4: Data Architecture

Persistence by Node Type

Master Node Persistence

Master nodes store protocol state, not private data:

| Store | Contents | Properties |
| --- | --- | --- |
| Task State Store | Task lifecycle events, TaskEnvelopes, state transitions | Append-only, replicated across 3 masters |
| Semantic Registry | Domains, canonical attributes, query attributes, derivation rules | Version-aware, replicated |
| Settlement Store | Settlement contexts (candidate and finalized), SettlementRoot records | Append-only, replicated |
| Reward Ledger | Per-actor reward records, adjustments, payout intents | Append-only, replicated |
| Challenge Store | Challenge lifecycle records, evidence references | Append-only, replicated |
| Epoch Store | Epoch definitions, window metadata, aggregation scopes | Replicated |
| Policy Store | Policy version definitions, governance proposals | Replicated |
| Actor Registry | Actor identities, role bindings, staking state | Replicated |
| Audit Log | All protocol events with timestamps | Append-only, immutable |

Enterprise Node Persistence

Enterprise nodes store private data and local protocol state:

| Store | Contents | Properties |
| --- | --- | --- |
| Vault Record Store | Encrypted canonical private records | Encrypted at rest, access-controlled |
| Commitment Store | Commitment references for all records | Integrity-linked to records |
| Consent Store | Per-data-subject consent records | Never exported as plaintext |
| Privacy Budget Ledger | Per-subject, per-scope budget entries | Append-only for committed entries |
| Local Attribute Pool | Local semantic candidates and stability metrics | Vault-scoped |
| Execution Transcript Store | Per-task execution evidence and proof references | Append-only |

Replay and Idempotency

The system must preserve:

  1. replay-critical version and epoch context on all persisted records
  2. idempotent create behavior for externally retried requests (deduplication by request ID)
  3. deterministic linkage between tasks, semantic resolution, execution artifacts, and verification decisions
  4. append-only finalized accounting — corrections through explicit adjustment records, never silent mutation
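
A sketch of the idempotent create behavior in item 2, with an in-memory map standing in for the replicated store used in production:

```go
package store

import "sync"

// IdempotentCreator deduplicates externally retried create requests by
// request ID, returning the original result instead of creating twice.
type IdempotentCreator struct {
	mu   sync.Mutex
	seen map[string]string // request ID -> created object ID
}

func NewIdempotentCreator() *IdempotentCreator {
	return &IdempotentCreator{seen: make(map[string]string)}
}

// Create runs the supplied constructor at most once per request ID.
func (c *IdempotentCreator) Create(requestID string, construct func() string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if id, ok := c.seen[requestID]; ok {
		return id // retry: return the original creation, no duplicate
	}
	id := construct()
	c.seen[requestID] = id
	return id
}
```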

Part 5: Smart Contract Architecture

Contracts on Aptos

All protocol-managed SYM is held by smart contracts on the Aptos blockchain:

Staking Contract

  1. accepts SYM deposits for S_master, S_enterprise, and S_producer tiers
  2. enforces minimum stake requirements per tier, read from the Policy Store at the active policy_version
  3. processes slashing instructions from the Settlement Plane
  4. manages cooldown periods for unstaking. Cooldown durations are governance parameters, not hard-coded constants; the current policy values (e.g., 30 days for nodes, 7 days for Data Producers) are loaded from the Policy Store and may be revised through a governance proposal
  5. emits staking state changes as on-chain events

Escrow Contract

  1. locks SYM from Task Submitters at task admission
  2. holds locked funds until settlement finalization
  3. releases funds to recipients upon finalized reward distribution
  4. returns funds to submitter for rejected-before-execution tasks
  5. supports partial release for tasks that fail during execution

Reward Contract

  1. receives epoch-aggregated reward distributions from the Payout Service
  2. distributes SYM to recipients: Data Producers (Data Contributors), Vault Operators, Executors, Aggregators, and Verifiers. Every payout batch carries per-class amounts so that the on-chain record preserves the SCP invariant that per-Vault shares plus aggregator and verifier shares equal the total task reward budget, with no unaccounted remainder
  3. applies staking multiplier: full distribution to staked producers, 50% to unstaked; the unrewarded 50% is explicitly routed back to the epoch reward pool through a pool-return instruction rather than left as a silent remainder
  4. maintains per-epoch reward records on-chain

Treasury Contract

  1. holds the Protocol Treasury and Ecosystem Incentives allocations
  2. releases funds only through governance-approved proposals
  3. receives the Protocol Treasury share of task fees
  4. funds data quality rewards and operational costs during bootstrap
  5. multi-sig controlled by a dedicated Treasury signer set that is disjoint from the keys used by the master nodes for BFT consensus, Settlement Plane operations, and Payout submission. This signer isolation ensures that a compromise of the operational master cluster does not automatically confer authority to move treasury funds. Treasury signers are bound by governance policy, and any release additionally requires an on-chain governance proposal reference

Inflation Controller

  1. manages the declining bootstrap inflation schedule. The current schedule values (e.g., 8% → 5% → 3% → 1.5% → 0%) and the bootstrap duration (expected 3-5 years) are governance parameters loaded from the Policy Store, not hard-coded constants. Governance may revise the schedule or shorten/extend the bootstrap phase within the bounds set by SCP economics
  2. mints new SYM per epoch according to the active schedule
  3. distributes minted SYM to the reward pool and treasury
  4. automatically disables when the governance-defined terminal epoch is reached (by default at the end of the bootstrap phase)
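
A sketch of the per-epoch mint arithmetic, assuming a simple annual-rate schedule and a fixed epochs-per-year conversion; the `Step` shape and any concrete values are placeholders for the governance parameters described above:

```go
package inflation

// Step is one leg of the declining bootstrap schedule, loaded from the
// Policy Store in the real service.
type Step struct {
	FromEpoch  uint64
	AnnualRate float64 // e.g. 0.08 for 8%
}

// MintForEpoch returns the SYM to mint in one epoch given the circulating
// supply and the active schedule. Past the terminal step (rate 0) it
// returns 0, which is how the controller self-disables.
func MintForEpoch(epoch uint64, supply float64, schedule []Step, epochsPerYear float64) float64 {
	rate := 0.0
	for _, s := range schedule { // steps are assumed ordered by FromEpoch
		if epoch >= s.FromEpoch {
			rate = s.AnnualRate
		}
	}
	return supply * rate / epochsPerYear
}
```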

On-Chain vs Off-Chain Boundary

| Concern | On-Chain (Aptos) | Off-Chain (SCS services) |
| --- | --- | --- |
| Staking state | Current stake, slashing records | Staking UI, deposit workflow |
| Task fees | Locked in escrow, distributed on settlement | Fee calculation, admission validation |
| Rewards | Final distribution amounts | Reward calculation, multi-Vault distribution |
| Settlement | SettlementRoot hash anchored on-chain | Full settlement context stored off-chain |
| Governance | Proposal voting results | Proposal creation, discussion, execution |

The on-chain layer provides finality and transparency. The off-chain layer provides computation, privacy, and throughput.


Part 6: TEE Integration

TEE Provisioning

  1. master nodes maintain a TEE environment for secure aggregation
  2. enterprise nodes may maintain local TEE environments for cross-Vault execution
  3. TEE enclaves are provisioned with pre-approved computation code (measurement hash registered in the governance registry)
  4. enclave identity and measurement are verified at provisioning time

TEE Execution Flow

For a cross-Vault computation:

  1. master node's Multi-Vault Coordinator assigns execution to TEE
  2. enterprise Vaults encrypt record material with the TEE enclave's public key
  3. encrypted data is transferred to the TEE environment
  4. TEE decrypts data inside the enclave, performs computation
  5. TEE produces result with AttestationReport (enclave identity, measurement hash, platform identity, freshness nonce)
  6. result and attestation are returned to the master node
  7. TEE purges all record material after execution

TEE Attestation Verification

  1. the Result Verification Service on master nodes validates every AttestationReport
  2. verification checks: enclave measurement matches registered code, platform is not revoked, freshness nonce is valid
  3. failed attestation results in task rejection
  4. TEE platform revocations are tracked in the governance registry
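
A sketch of the three checks in item 2, assuming a simplified report shape; real attestation reports also carry a signature chain to the platform vendor, whose verification is omitted here:

```go
package attest

import "errors"

// Report carries the fields named above in simplified form.
type Report struct {
	Measurement string
	PlatformID  string
	Nonce       string
}

// Verify applies the checks listed above: registered measurement,
// non-revoked platform, and a fresh nonce issued for this task.
func Verify(r Report, registered, revoked map[string]bool, expectedNonce string) error {
	if !registered[r.Measurement] {
		return errors.New("enclave measurement not registered in governance registry")
	}
	if revoked[r.PlatformID] {
		return errors.New("TEE platform has been revoked")
	}
	if r.Nonce != expectedNonce {
		return errors.New("stale or mismatched freshness nonce")
	}
	return nil
}
```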

Part 7: Operational Concerns

Monitoring and Alerting

The system should monitor:

  1. Node health: master and enterprise node liveness, resource utilization
  2. Task throughput: admission rate, execution latency, settlement time
  3. Privacy budget: budget utilization rates per Vault, approaching-exhaustion warnings
  4. Staking: stake levels, approaching-minimum warnings, slashing events
  5. Consensus: BFT round times, missed votes, view changes
  6. On-chain: transaction confirmation times, gas costs, contract state

Failure Handling

Master Node Failure

  1. single master failure: the remaining 2 masters maintain consensus and protocol operation
  2. double master failure: protocol halts until at least 2 masters are available (safety over liveness)
  3. failed master rejoins through state catch-up from the surviving nodes

Enterprise Node Failure

  1. enterprise node failure during task execution: affected slices time out
  2. if quorum is still met with remaining Vaults: task proceeds with partial coverage
  3. if quorum is broken: task is rejected or retried under policy
  4. persistent enterprise node failure: governance may suspend the node and freeze task dispatch to its Vault. Consent records are never exported from the suspended Vault in plaintext (this would violate the Consent Manager invariant that consent state is Vault-local). Instead, affected data subjects are notified and may re-authorize their data under a different approved enterprise node through a fresh consent grant, or the records remain quarantined until the original node is restored. Any migration of private records between Vaults proceeds through a TEE-mediated, re-encrypted transfer authorized by a governance proposal, not through direct plaintext copy

On-Chain Failure

  1. Aptos chain congestion: payout batches queue until confirmation
  2. contract failure: payout retried without mutating off-chain reward records
  3. chain reorganization: reconciliation service detects and re-submits affected transactions

Reconciliation

The system must detect and handle:

  1. duplicate submissions (idempotent deduplication)
  2. partial downstream effects (incomplete payout batches)
  3. missing confirmations (payout submitted but not confirmed)
  4. reconciliation drift (off-chain reward records vs on-chain payout state)
  5. epoch-window accounting inconsistencies
  6. privacy budget two-phase drift: because budget reservation is initiated by master nodes while the authoritative ledger lives on enterprise nodes, the reserve → commit → release sequence is a distributed transaction. SCS runs a periodic reconciliation between master-side task state and enterprise-side Privacy Budget Ledger entries and detects: (a) reservations on master with no matching enterprise reserve, (b) enterprise reserves with no task to commit or release them, (c) committed enterprise entries whose task was rejected or timed out on master, (d) released master reservations that were never released on the enterprise ledger. Every reserve/commit/release event is linked into the task evidence chain by task_id and request_id so the reconciliation is deterministic
  7. SettlementRoot anchoring drift: the on-chain anchored SettlementRoot must match the off-chain finalized settlement context; any mismatch triggers a re-anchor through Payout Service without mutating the settlement context

Corrections are always explicit through adjustment records, never through silent mutation.
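
A sketch of the privacy-budget drift detection in item 6, reducing each side to the last recorded phase per task; the phase model is a simplification, and the real reconciliation walks the full per-request event histories linked by task_id and request_id:

```go
package reconcile

// Phase is the last budget event recorded for a task on one side.
type Phase string

const (
	Reserved  Phase = "reserved"
	Committed Phase = "committed"
	Released  Phase = "released"
)

// Drift is one of the four mismatch classes (a)-(d) listed above.
type Drift struct {
	TaskID string
	Kind   string
}

// Diff compares master-side task state with the enterprise-side ledger,
// both keyed by task_id. The maps stand in for the real stores.
func Diff(master, enterprise map[string]Phase) []Drift {
	var out []Drift
	for task, m := range master {
		e, ok := enterprise[task]
		switch {
		case !ok:
			out = append(out, Drift{task, "(a) master reservation with no enterprise entry"})
		case e == Committed && m != Committed:
			out = append(out, Drift{task, "(c) enterprise committed for a task not committed on master"})
		case m == Released && e != Released:
			out = append(out, Drift{task, "(d) released on master but not on enterprise ledger"})
		}
	}
	for task := range enterprise {
		if _, ok := master[task]; !ok {
			out = append(out, Drift{task, "(b) enterprise reserve with no master task"})
		}
	}
	return out
}
```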

Upgrade and Migration

  1. service upgrades on master nodes: rolling upgrade with one node at a time, maintaining 2-of-3 consensus throughout
  2. smart contract upgrades: through governance proposal, with upgrade proxy pattern on Aptos
  3. protocol version transitions: at epoch boundaries, with one-epoch advance notice
  4. enterprise node software upgrades: coordinated with master cluster, with grace period for version compatibility
  5. TEE enclave measurement upgrades: when an aggregation or execution enclave image changes, the new measurement hash must be registered in the governance registry through a governance proposal before any task may be dispatched to it. During the transition, old and new measurements may be temporarily allowed in parallel so that in-flight tasks complete under the measurement they were admitted with; tasks admitted after the cutover use only the new measurement. Revocation of a measurement (for example, after a side-channel disclosure) blocks new dispatch immediately while still allowing challenge evaluation of already-completed tasks, per SCP's TEE failure handling rules

Part 8: Implementation Assurance

Required System Capabilities

SCS must support at least:

  1. Replay: execution and verification artifacts can be replayed to reproduce the same result under the same inputs
  2. Settlement-root reproducibility: SettlementRoot can be independently computed from the same settlement context
  3. Semantic-resolution auditability: the semantic resolution path for any task can be reconstructed
  4. Epoch-window reconstruction: the exact membership of any epoch can be reproduced
  5. Reward-accounting reconciliation: reward records can be reconciled against on-chain payout records
  6. Privacy budget auditability: per-subject budget consumption can be audited through the task evidence chain
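
A sketch of capability 2, assuming the root is a SHA-256 digest over the canonical bytes of the finalized settlement context; the normative derivation is defined in the SCP Core Spec:

```go
package assurance

import (
	"crypto/sha256"
	"encoding/hex"
)

// RecomputeSettlementRoot independently derives the root from the
// canonical bytes of a finalized settlement context.
func RecomputeSettlementRoot(canonicalContext []byte) string {
	sum := sha256.Sum256(canonicalContext)
	return hex.EncodeToString(sum[:])
}

// Matches checks an anchored root against the recomputed one, which is
// the anchoring-drift check described in Part 7's reconciliation list.
func Matches(anchored string, canonicalContext []byte) bool {
	return anchored == RecomputeSettlementRoot(canonicalContext)
}
```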

Contract Packaging

The Markdown specification set remains the canonical semantic source.

SCS must additionally produce machine-readable implementation artifacts:

  1. API schemas (OpenAPI/gRPC protobuf) for all external and internal interfaces
  2. event schemas for all protocol events (task transitions, settlement, rewards, penalties)
  3. version annotations for schema_version, semantic_version, and policy_version
  4. error catalogs mapping protocol error families to HTTP/gRPC status codes
  5. smart contract ABIs for all Aptos contracts
  6. TEE enclave measurement hashes for all approved computation code

Relationship to SCP

This document describes implementation realization only.

For the normative meaning of protocol architecture, task lifecycle, semantic model, data protection, settlement, and economics, see:

  1. SCP Protocol Overview
  2. SCP Core Spec
  3. SCP Economics and Governance