
Consensus-Execution Pipelining

Source: docs/design/consensus-execution-pipelining.md

Status: Draft (Deferred)
Issue: chain-6cepd
Date: 2026-02-03

Consensus-execution pipelining overlaps consensus for block N+1 with execution of block N. This can reduce the critical path per block when execution time is non-trivial. However, it introduces material complexity for light clients, state-root semantics, and rollback behavior. We are deferring implementation until the chain is closer to launch and execution is a confirmed bottleneck.

Today, the block path is serial:

Propose -> Vote/Finalize -> Execute -> Done

If execution takes a significant fraction of block time, overall throughput is limited by (consensus + execution). Pipelining overlaps these stages:

Block N:    [Propose/Vote/Finalize][Execute]
Block N+1:                         [Propose/Vote/Finalize][Execute]

In steady state this cuts the per-block critical path from (consensus + execution) to roughly max(consensus, execution), so it increases throughput only when execution is slow relative to consensus. With a fast RISC-V VM, the benefit may be small until workloads grow.
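The throughput argument above can be sketched numerically. This is an illustrative model with made-up timings, not measurements from the chain:

```rust
// Serial: each block pays consensus + execution on the critical path.
// Pipelined: in steady state, each block pays max(consensus, execution),
// because block N's execution overlaps block N+1's consensus.
fn serial_block_time(consensus_ms: u64, execution_ms: u64) -> u64 {
    consensus_ms + execution_ms
}

fn pipelined_block_time(consensus_ms: u64, execution_ms: u64) -> u64 {
    consensus_ms.max(execution_ms)
}

fn main() {
    // Fast VM: execution is cheap, pipelining buys little (~10%).
    assert_eq!(serial_block_time(500, 50), 550);
    assert_eq!(pipelined_block_time(500, 50), 500);

    // Execution-bound workload: pipelining nearly halves block time.
    assert_eq!(serial_block_time(500, 450), 950);
    assert_eq!(pipelined_block_time(500, 450), 500);
}
```

This is why the benefit depends entirely on the consensus-to-execution time ratio.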

The main costs:

  • Light client complexity: state proofs must target a different header (or an execution receipt) once execution lags consensus.
  • State root semantics: headers can no longer trivially commit to the post-execution state of the same block.
  • Rollback complexity: if execution lags and a reorg happens, execution must roll back and re-run safely.

Given these costs, and since the chain is not yet live, we will defer implementation and revisit once execution throughput is a proven bottleneck.

Pipelining requires decoupling the consensus header from the execution root. Two viable options:

Option A: Delayed Root

  • Block N header commits to the state root after executing block N-1.
  • Pros: simple header structure.
  • Cons: proofs for transactions in block N must reference the header of block N+1 (or later).

Option B: Execution Receipt (preferred for clarity)

  • Block N header commits to consensus data only.
  • Block N+1 carries an ExecutionReceipt for block N:
    • block hash
    • execution status
    • post-state root
  • Light clients verify receipts to tie execution to consensus.
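A minimal sketch of Option B's receipt and the light-client check that ties execution to consensus. The type and field names are illustrative, not a spec:

```rust
// Hypothetical shape of the receipt carried in block N+1 for block N.
#[derive(Debug, Clone, PartialEq)]
enum ExecutionStatus {
    Success,
    // A fatal status should be unreachable for consensus-accepted blocks
    // (see the failure-handling discussion below L46 of this doc).
    Fatal,
}

#[derive(Debug, Clone)]
struct ExecutionReceipt {
    block_hash: [u8; 32],      // hash of the executed block N
    status: ExecutionStatus,   // outcome of executing block N
    post_state_root: [u8; 32], // state root after executing block N
}

// A light client accepts execution results for block N only if the
// receipt in block N+1 names N's finalized hash and reports success.
fn receipt_matches(receipt: &ExecutionReceipt, finalized_hash: &[u8; 32]) -> bool {
    receipt.block_hash == *finalized_hash && receipt.status == ExecutionStatus::Success
}

fn main() {
    let hash = [7u8; 32];
    let receipt = ExecutionReceipt {
        block_hash: hash,
        status: ExecutionStatus::Success,
        post_state_root: [1u8; 32],
    };
    assert!(receipt_matches(&receipt, &hash));
    assert!(!receipt_matches(&receipt, &[0u8; 32]));
}
```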

Execution must not be allowed to “fail” the block after finality. Options:

  • Treat execution failures as per-transaction reverts only; the block is still valid.
  • Enforce pre-execution validation so that all consensus-accepted blocks are guaranteed to execute deterministically (no fatal VM errors).

If execution can fail fatally, pipelining is unsafe. We should treat fatal execution errors as consensus-invalid and prevent pipelining until we have strong pre-execution validation.
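The first option above, treating failures as per-transaction reverts, can be sketched as follows. The types are illustrative stand-ins for the real transaction and VM types:

```rust
// Sketch: execution records an outcome per transaction. A revert leaves
// state untouched for that tx, but the block as a whole remains valid,
// so finality is never contradicted by execution.
#[derive(Debug, PartialEq)]
enum TxOutcome {
    Applied,
    Reverted, // tx-level failure; no state change for this tx
}

// `true` models a tx that executes cleanly, `false` one that reverts.
fn execute_block(txs: &[bool]) -> Vec<TxOutcome> {
    txs.iter()
        .map(|ok| if *ok { TxOutcome::Applied } else { TxOutcome::Reverted })
        .collect()
}

fn main() {
    let outcomes = execute_block(&[true, false, true]);
    // Every tx gets an outcome; the block is never rejected wholesale.
    assert_eq!(outcomes.len(), 3);
    assert_eq!(outcomes[1], TxOutcome::Reverted);
}
```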

Light clients need a clear rule for which header roots to use:

  • Proofs for transaction effects must target the execution receipt root (Option B) or a delayed root (Option A).
  • Clients track two heights:
    • finalized_height
    • executed_height
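Under Option B, the proof-targeting rule reduces to simple height arithmetic. A sketch, with all names hypothetical:

```rust
// Under Option B, the receipt for block N travels in block N+1, so the
// proof for block N's effects verifies against block N+1's receipt root.
fn receipt_height_for(block_height: u64) -> u64 {
    block_height + 1
}

// A proof is servable once the receipt-carrying block is finalized and
// the target block has actually executed.
fn provable(block_height: u64, finalized_height: u64, executed_height: u64) -> bool {
    receipt_height_for(block_height) <= finalized_height
        && block_height <= executed_height
}

fn main() {
    // Finalized through 101, executed through 100: block 100's effects
    // are provable, block 101's are not yet.
    assert!(provable(100, 101, 100));
    assert!(!provable(101, 101, 100));
}
```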

Introduce explicit tags:

  • finalized: highest finalized block (consensus)
  • executed: highest block with execution receipt
  • safe: min(finalized, executed)
  • pending: mempool / in-flight
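The `safe` tag is the only derived one; a minimal sketch of its resolution from the two tracked heights:

```rust
// `safe` = min(finalized, executed): the highest block that is both
// finalized by consensus and covered by an execution receipt.
fn safe_height(finalized_height: u64, executed_height: u64) -> u64 {
    finalized_height.min(executed_height)
}

fn main() {
    // Execution lags finality by two blocks: `safe` follows execution.
    assert_eq!(safe_height(100, 98), 98);
    // Execution caught up: `safe` equals `finalized`.
    assert_eq!(safe_height(100, 100), 100);
}
```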

If consensus reorgs block N after block N has executed:

  • Roll back execution state to the last executed canonical ancestor.
  • Re-run execution for the new canonical chain.
  • This requires snapshotting / journaling of execution state (already present in state backends, but it needs validation under pipelining).

To keep execution from falling arbitrarily far behind:

  • Use a bounded execution queue.
  • If execution lags, block builders should throttle block size or delay proposals to avoid unbounded backlog.

Key risks:

  • Light client proof complexity and client upgrade burden.
  • Increased state management complexity (snapshots, receipts, rollbacks).
  • Consensus/execution mismatch if validation is insufficient.
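The reorg steps above can be sketched as finding the deepest executed block shared with the new canonical chain, then re-running everything above it. Blocks are modeled as stand-in `u64` "hashes" and all names are illustrative:

```rust
// Height of the deepest block that appears, at the same height, in both
// the executed chain and the new canonical chain (the rollback target).
fn last_common_height(executed_chain: &[u64], canonical_chain: &[u64]) -> Option<usize> {
    executed_chain
        .iter()
        .zip(canonical_chain.iter())
        .take_while(|(a, b)| a == b)
        .count()
        .checked_sub(1)
}

// Returns the rollback target and the blocks to re-execute on the new chain.
fn reexecution_plan(
    executed_chain: &[u64],
    canonical_chain: &[u64],
) -> (Option<usize>, Vec<u64>) {
    let target = last_common_height(executed_chain, canonical_chain);
    let resume = target.map_or(0, |h| h + 1);
    (target, canonical_chain[resume..].to_vec())
}

fn main() {
    // Executed through height 3, but heights 2..3 were reorged out.
    let executed = [10, 11, 12, 13];
    let canonical = [10, 11, 22, 23, 24];
    let (target, rerun) = reexecution_plan(&executed, &canonical);
    assert_eq!(target, Some(1));         // roll back to height 1
    assert_eq!(rerun, vec![22, 23, 24]); // re-execute the new branch
}
```

The real implementation would restore a state snapshot at the target height before re-executing; this sketch only computes the plan.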

Defer implementation until:

  1. Execution is a dominant bottleneck in production-like benchmarks.
  2. Light client protocol changes are acceptable for downstream consumers.
  3. Execution receipts / rollback strategy are fully specified.

If revisited, expected artifacts:

  1. docs/design/consensus-execution-pipelining.md (this document)
  2. State diagram showing block lifecycle
  3. Failure mode analysis
  4. Light client protocol delta
  5. RPC compatibility matrix