
JIT Compilation

Source: docs/design/jit-compilation.md

Status: Draft
Issue: chain-6jitd
Date: 2026-02-03

Add a tiered JIT to speed up hot contract code while preserving strict interpreter-equivalent semantics. The interpreter remains the reference implementation; JIT tiers must be deterministic, gas-accurate, and safe.

Goals:

  1. 5x-20x speedup for hot contracts compared to the interpreter.
  2. Deterministic behavior identical to the interpreter (results + gas).
  3. Safe execution with strong sandboxing guarantees.
  4. Bounded memory and predictable compile overhead.
  5. Clear rollback path: any JIT anomaly falls back to the interpreter.

Non-goals:

  • Replacing the interpreter as the semantic authority.
  • Aggressive speculative optimizations that risk nondeterminism.
  • Unbounded per-contract compilation or caching.

We use three tiers:

  1. Tier 0: Interpreter (reference)
  2. Tier 1: Baseline JIT (fast compile, modest speedup)
  3. Tier 2: Optimizing JIT (slower compile, higher speedup)
Promotion and demotion rules:

  • Promote to Tier 1 when a contract's cumulative executed gas exceeds 10_000_000 or after 20 calls, whichever comes first.
  • Promote to Tier 2 when cumulative executed gas exceeds 200_000_000 or after 200 calls, whichever comes first.
  • Demote to the interpreter if a JIT tier triggers any validation failure or its code cache entry is evicted.

These thresholds are starting points and should be tuned with benchmarks.
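As a concrete illustration, the promotion rules above reduce to a small decision function. This is a minimal sketch: the function name and counter parameters are hypothetical, and the threshold constants mirror the draft values, which are expected to change with benchmarking.

```python
# Draft promotion thresholds (to be tuned with benchmarks).
TIER1_GAS, TIER1_CALLS = 10_000_000, 20
TIER2_GAS, TIER2_CALLS = 200_000_000, 200

def select_tier(total_gas: int, call_count: int) -> int:
    """Return the highest tier a contract qualifies for (0 = interpreter)."""
    if total_gas > TIER2_GAS or call_count >= TIER2_CALLS:
        return 2
    if total_gas > TIER1_GAS or call_count >= TIER1_CALLS:
        return 1
    return 0
```

Note that qualifying for a tier is necessary but not sufficient: demotion on validation failure or cache eviction (above) overrides this decision.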

+-------------+
| Interpreter |<------------------+
+-------------+                   |
      | hot                       |
      v                           |
+-------------+                   |
| Baseline JIT|                   | failure / eviction
+-------------+                   |
      | hotter                    |
      v                           |
+-------------+                   |
|   Opt JIT   |-------------------+
+-------------+
Determinism requirements:

  • Interpreter output is canonical. JIT results must match exactly:
    • return values
    • state writes
    • logs
    • gas usage
  • The JIT must not depend on runtime nondeterminism (CPU features, timing, randomness, or OS-dependent codegen).
  • On any detected mismatch, automatically disable JIT for the process and fall back to the interpreter.
Validation strategy:

  1. Differential fuzzing (existing vm-conformance infrastructure): run random programs across the interpreter, baseline JIT, and opt JIT, and compare outputs and gas.
  2. Replay tests: execute recorded devnet traces in both tiers.
  3. Per-build conformance: require 0 mismatches across a seed corpus.
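The mismatch check underlying all three strategies amounts to comparing complete execution records across tiers. A minimal sketch, where the `ExecResult` fields are hypothetical stand-ins for the real vm-conformance trace format:

```python
from dataclasses import dataclass

# Hypothetical execution record; the real format comes from the
# vm-conformance infrastructure.
@dataclass(frozen=True)
class ExecResult:
    return_value: bytes
    state_writes: tuple   # canonically ordered (key, value) pairs
    logs: tuple
    gas_used: int

def conforms(interp: ExecResult, jit: ExecResult) -> bool:
    """JIT output must equal the canonical interpreter result in
    every component: return value, state writes, logs, and gas."""
    return interp == jit
```

State writes must be in a canonical order before comparison; otherwise two semantically identical executions could spuriously mismatch.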
Code cache entries are keyed by:

  (code_hash, gas_schedule_id, vm_config_hash, tier)

Eviction:

  • LRU by total code size.
  • Hard cap: 512 MiB (configurable).
  • Optional per-contract cap: 64 MiB.

Rollout:

  • Phase 1: in-memory only.
  • Phase 2: optional disk cache for the baseline JIT.
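The in-memory cache described above can be sketched with an ordered map evicting least-recently-used entries once total compiled-code size exceeds the cap. The class name is hypothetical; the default cap mirrors the draft value.

```python
from collections import OrderedDict

class CodeCache:
    """LRU code cache keyed by (code_hash, gas_schedule_id,
    vm_config_hash, tier), bounded by total code size."""

    def __init__(self, max_bytes: int = 512 * 2**20):  # draft cap: 512 MiB
        self.max_bytes = max_bytes
        self.entries: OrderedDict[tuple, bytes] = OrderedDict()
        self.total = 0

    def get(self, key: tuple):
        code = self.entries.get(key)
        if code is not None:
            self.entries.move_to_end(key)  # mark as most recently used
        return code

    def put(self, key: tuple, code: bytes) -> None:
        if key in self.entries:
            self.total -= len(self.entries.pop(key))
        self.entries[key] = code
        self.total += len(code)
        while self.total > self.max_bytes:      # evict LRU entries
            _, old = self.entries.popitem(last=False)
            self.total -= len(old)
```

Because code_hash is part of the key, contract upgrades invalidate entries implicitly: the new code simply misses the cache.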

We need strong isolation because native JIT code executes inside the validator process. We will stage sandboxing:

Phase 1 (in-process):

  • In-process JIT with software bounds checks on all memory accesses.
  • W^X (write xor execute) for JIT pages.
  • No syscalls from JIT code; all external interactions go through hostcalls.

Phase 2 (out-of-process):

  • Move JIT execution into a separate process with:
    • seccomp-bpf profile
    • shared memory for linear memory pages
    • strict IPC for hostcalls

This phase can be delayed if in-process checks prove safe and fast.
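The Phase 1 software bounds checks amount to validating every linear-memory access against the sandbox's memory length before performing it. A hypothetical sketch (the exception and function names are illustrative):

```python
class MemoryTrap(Exception):
    """Raised when JIT code touches memory outside its linear region."""

def checked_load(memory: bytearray, offset: int, size: int) -> bytes:
    # Software bounds check: validate the access against the sandboxed
    # linear-memory length before touching it.
    if offset < 0 or size < 0 or offset + size > len(memory):
        raise MemoryTrap(f"out-of-bounds access at {offset}+{size}")
    return bytes(memory[offset:offset + size])
```

In generated native code the same check becomes a compare-and-branch emitted before each load/store; the Phase 2 process boundary then serves as defense in depth if a check is ever miscompiled.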

We must preserve gas accounting identical to the interpreter.

  • Precompute gas cost per basic block during decode.
  • Insert a gas check at basic block entry:
    • if gas_left < block_cost -> trap
    • otherwise subtract block_cost and continue.
  • For indirect branches and loop back-edges, ensure a gas check is executed each time control enters the block.

This mirrors the interpreter’s per-instruction accounting while keeping JIT overhead low.
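The per-block scheme above can be sketched as follows. The `block_cost` values would be precomputed at decode time; `OutOfGas` and the function names are stand-ins for the VM's actual trap mechanism.

```python
class OutOfGas(Exception):
    """Stand-in for the VM's out-of-gas trap."""

def enter_block(gas_left: int, block_cost: int) -> int:
    """Gas check at basic-block entry: trap if the block cannot be
    paid for, otherwise charge the whole block up front."""
    if gas_left < block_cost:
        raise OutOfGas
    return gas_left - block_cost

# A loop back-edge re-enters the loop header block, so the check runs
# on every iteration -- matching the interpreter's cumulative charge.
def run_loop(gas_left: int, block_cost: int, iterations: int) -> int:
    for _ in range(iterations):
        gas_left = enter_block(gas_left, block_cost)
    return gas_left
```

Charging the whole block at entry is what keeps JIT overhead low: one check per block instead of one per instruction, while the total charged still equals the interpreter's per-instruction sum.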

Code invalidation:

  • JIT cache entries are keyed by code_hash. Any code change yields a new hash and naturally invalidates old entries.
  • If a contract address points to new code, its tiering state resets to the interpreter.

Call dispatch:

  • Calls always route through the runtime dispatcher, which selects the highest available tier for the callee.
  • JIT-to-JIT calls use the same ABI as interpreter calls.
  • If a callee is missing JIT code, fall back to the interpreter for that call.
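The dispatch rule reduces to: look up the callee's available compiled tiers and fall back to the interpreter on a miss. A sketch, where `compiled_tiers` is a hypothetical map from code hash to the set of tiers present in the code cache:

```python
def dispatch(code_hash: str, compiled_tiers: dict) -> int:
    """Select the highest available tier for a callee. Tier 0 (the
    interpreter) is always available, so dispatch never fails."""
    available = compiled_tiers.get(code_hash, set())
    for tier in (2, 1):          # prefer the optimizing JIT
        if tier in available:
            return tier
    return 0                     # interpreter fallback
```

Because all tiers share one call ABI, the dispatcher's choice is invisible to callers; a callee can be promoted or demoted between two calls without any caller-side changes.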
Performance targets:

  • Baseline JIT compile time: <= 5 ms for 10k instructions.
  • Opt JIT compile time: <= 50 ms for 10k instructions.
  • Speedup targets (steady-state):
    • Baseline JIT: 5x
    • Opt JIT: 10x-20x
  • Memory overhead: <= 2x code size per compiled tier.

Open questions:

  1. Tune promotion thresholds and cache caps once benchmark data is available.
  2. Decide whether the opt JIT should ever be disk-cached.
Deliverables:

  1. docs/design/jit-compilation.md (this document)
  2. Tiering state machine diagram (included above)
  3. Sandboxing threat model (outlined above)
  4. Gas metering accuracy analysis (included above)
  5. Benchmark targets (included above)