VM Tiering & Code Cache

Source: crates/vm-codecache, crates/vm-runtime/src/execution.rs

Ashen uses a tiered execution model for RISC-V contracts. New code starts in the interpreter and can be promoted to higher-performance tiers as it becomes “hot” (frequently executed). Gas accounting is tier-independent --- the same gas is charged regardless of which tier executes.

Tiering is gated by both compile-time and runtime switches:

```sh
# Build with tiering support
cargo build --features std,tui,vm-tiering

# Enable at runtime
ASHEN_VM_TIERING=1 ./target/debug/node
```

If ASHEN_VM_TIERING is unset, the node runs interpreter-only even when compiled with vm-tiering.
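How the two switches combine can be pictured as a single boolean gate. The helper below is a minimal sketch, assuming the feature is named vm-tiering and that "1" is the accepted value; the actual check in the node may differ.

```rust
/// Sketch only: how the compile-time feature and the ASHEN_VM_TIERING
/// environment variable might combine. The accepted value ("1") and this
/// helper are assumptions, not the node's real implementation.
fn tiering_enabled() -> bool {
    // Built without the feature: tiering can never be switched on.
    if !cfg!(feature = "vm-tiering") {
        return false;
    }
    // Built with the feature: still off unless the env var opts in.
    matches!(std::env::var("ASHEN_VM_TIERING").as_deref(), Ok("1"))
}
```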

| Tier | Description | When Used |
| --- | --- | --- |
| Interpreter | Decode-per-instruction reference tier; semantic authority | Cold code, short executions |
| JIT | Lazy predecode; caches basic blocks on first execution | Moderately-hot code |
| AOT | Eager predecode; entire program pre-decoded at load time | Hot code, production steady-state |
| Native | Cranelift JIT to native machine code (cranelift-native feature) | Compute-heavy contracts |

The interpreter supports two sub-modes:

| Mode | Description | Best For |
| --- | --- | --- |
| Step | Execute one instruction at a time | Short executions, constrained memory |
| BlockCache | Cache decoded basic blocks for reuse | Tight loops, repeated code paths |
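Conceptually, the tiers and interpreter sub-modes map onto two small enums. The sketch below follows the tables above and the 0/1/2 tier encoding used by the disk cache later on this page; the actual definitions in the crates may differ (the Cranelift-native path is feature-gated and omitted here).

```rust
/// Cached execution tiers, ordered Interpreter < Jit < Aot to mirror the
/// 0/1/2 encoding used by the disk-cache header described below.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Tier {
    Interpreter = 0,
    Jit = 1,
    Aot = 2,
}

/// Interpreter sub-modes from the table above.
enum InterpreterMode {
    /// Decode and execute one instruction at a time.
    Step,
    /// Cache decoded basic blocks for reuse across tight loops.
    BlockCache,
}
```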

The cranelift-native feature enables compilation of hot basic blocks to native machine code via Cranelift. If native compilation fails for a block (e.g., unsupported instruction), execution falls back to the interpreter for that block.

When using the code cache, the runtime queries best_available_tier():

Aot (if cached) > Jit (if cached) > Interpreter (fallback)

The fallback table for explicit tier requests:

| Requested | Cache Has | Effective |
| --- | --- | --- |
| Aot | Aot | Aot |
| Aot | Jit | Jit |
| Aot | nothing | Interpreter |
| Jit | Jit+ | Jit |
| Jit | nothing | Interpreter |
| Interpreter | any | Interpreter |
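The table boils down to two rules: never run a tier that is not cached, and never run above what was requested. A sketch of that resolution, reusing the hypothetical Tier enum above (function names are illustrative, not the crate's API):

```rust
/// `cached` is the best tier for which the cache holds an artifact
/// (None = nothing cached for this key).
fn effective_tier(requested: Tier, cached: Option<Tier>) -> Tier {
    match (requested, cached) {
        // Interpreter requests never consult the cache.
        (Tier::Interpreter, _) => Tier::Interpreter,
        // Nothing cached: fall back to the interpreter.
        (_, None) => Tier::Interpreter,
        // Otherwise run the lower of "what was asked for" and "what exists".
        (req, Some(have)) => req.min(have),
    }
}

/// best_available_tier() is then just the fallback with an Aot request:
/// Aot (if cached) > Jit (if cached) > Interpreter.
fn best_available_tier(cached: Option<Tier>) -> Tier {
    effective_tier(Tier::Aot, cached)
}
```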

The HotCodeTracker monitors per-contract execution and promotes hot code automatically. Each contract is tracked by its code_hash.

tracker.record(code_hash, gas_used) -> Option<Tier>

On each call, the tracker increments call_count and accumulates total_gas. Promotion triggers when call count exceeds a threshold:

| Threshold | Promotes To |
| --- | --- |
| jit_threshold_calls | Tier::Jit |
| aot_threshold_calls | Tier::Aot |

When promotion fires:

  • JIT: Stores the program image in the code cache for lazy basic-block predecoding on next execution.
  • AOT: Pre-decodes the entire program into a PredecodedProgram and stores it in the cache. Future executions skip all decode work.

Promotion happens after execution completes and does not consume additional gas. Promotion failures (e.g., decode errors) are silently ignored --- the contract continues at the lower tier.
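Putting the thresholds and the promotion report together, a minimal sketch of the tracker (field names follow the tables above; the real HotCodeTracker is more involved and hands the cache-population work to the runtime):

```rust
use std::collections::HashMap;

/// Sketch of per-contract hot-code tracking, keyed by code_hash and
/// reusing the Tier enum sketched earlier.
struct HotCodeTracker {
    jit_threshold_calls: u64,
    aot_threshold_calls: u64,
    per_contract: HashMap<[u8; 32], CallStats>,
}

#[derive(Default)]
struct CallStats {
    call_count: u64,
    total_gas: u64,
    promoted_to: Option<Tier>, // highest tier already reported for this contract
}

impl HotCodeTracker {
    /// Record one execution; returns Some(tier) when a promotion should fire.
    fn record(&mut self, code_hash: [u8; 32], gas_used: u64) -> Option<Tier> {
        let stats = self.per_contract.entry(code_hash).or_default();
        stats.call_count += 1;
        stats.total_gas += gas_used;

        let target = if stats.call_count > self.aot_threshold_calls {
            Tier::Aot // caller pre-decodes the whole program and caches it
        } else if stats.call_count > self.jit_threshold_calls {
            Tier::Jit // caller stores the program image for lazy predecoding
        } else {
            return None;
        };

        // Report each promotion at most once per contract.
        if stats.promoted_to.map_or(true, |t| target > t) {
            stats.promoted_to = Some(target);
            Some(target)
        } else {
            None
        }
    }
}
```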

By default, the cache contains no JIT or AOT entries until contracts exceed the promotion thresholds.

All cache entries are keyed by CacheKey:

| Field | Type | Description |
| --- | --- | --- |
| code_hash | [u8; 32] | Blake3 hash of deployed contract ELF bytes |
| abi_version | u16 | Contract ABI version (currently 1) |
| gas_schedule_id | &'static str | Gas schedule identifier (e.g., "gas-v1") |
| translator_version | u16 | Predecoder version (currently 1) |
| toolchain_hash | [u8; 32] | Hash of the contract toolchain manifest |

Any change to these fields invalidates all entries for that contract.
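A sketch of the key, with field names and types taken from the table above (the derives and exact definition are assumptions):

```rust
/// Every field participates in the key, so bumping any of them naturally
/// invalidates previously cached artifacts for a contract.
#[derive(Clone, PartialEq, Eq, Hash)]
struct CacheKey {
    /// Blake3 hash of the deployed contract ELF bytes.
    code_hash: [u8; 32],
    /// Contract ABI version (currently 1).
    abi_version: u16,
    /// Gas schedule identifier, e.g. "gas-v1".
    gas_schedule_id: &'static str,
    /// Predecoder / translator version (currently 1).
    translator_version: u16,
    /// Hash of the contract toolchain manifest.
    toolchain_hash: [u8; 32],
}
```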

| Artifact | Description | Tier |
| --- | --- | --- |
| ProgramImage | Resolved ELF with segments and entry point | All |
| PredecodedProgram | Predecoded basic blocks for AOT execution | Aot |
| Opaque | Arbitrary bytes (future extension) | Any |

The CodeCache is an LRU cache with configurable limits:

| Setting | Default | Description |
| --- | --- | --- |
| max_entries | 1024 | Maximum number of cached artifacts |
| max_bytes | 256 MiB | Maximum total size of cached artifacts |

When limits are exceeded, the least-recently-used entry is evicted. The cache tracks per-entry hit_count, last_used_tick, and total_gas_consumed.
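As a sketch, the limits and per-entry bookkeeping described above look roughly like this (names other than those quoted in the prose are assumptions):

```rust
/// In-memory cache limits; defaults mirror the table above.
struct CodeCacheConfig {
    max_entries: usize,
    max_bytes: usize,
}

impl Default for CodeCacheConfig {
    fn default() -> Self {
        Self {
            max_entries: 1024,
            max_bytes: 256 * 1024 * 1024, // 256 MiB
        }
    }
}

/// Per-entry bookkeeping used for LRU eviction and statistics.
#[derive(Default)]
struct EntryStats {
    hit_count: u64,
    last_used_tick: u64,
    total_gas_consumed: u64,
}
```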

| Event | Action |
| --- | --- |
| Gas schedule version bump | All entries invalidated (embedded in cache key) |
| ABI version bump | Affected entries invalidated (embedded in cache key) |
| Translator version bump | All entries invalidated (embedded in cache key) |
| Contract code change | That contract’s entries invalidated (code hash changes) |
| Cache format version bump | All disk entries invalidated (header mismatch) |
| Node restart (unchanged versions) | Reuses persisted entries |

The DiskCodeCache persists compiled artifacts across node restarts.

| Setting | Default | Description |
| --- | --- | --- |
| cache_dir | ~/.ashen/code-cache/ | Storage directory |
| max_entries | 4096 | Max entries on disk |
| max_bytes | 512 MiB | Max total size on disk |
| enabled | true | Can be disabled for pure in-memory mode |

Each entry is stored as {hex(key_hash)}-{tier}.bin:

```text
[ 4 bytes] Magic: "ASHC"
[ 1 byte ] Format version (currently 1)
[ 1 byte ] Tier (0=Interpreter, 1=Jit, 2=Aot)
[32 bytes] Blake3 checksum of (cache_key || artifact_bytes)
[ 8 bytes] Artifact size (little-endian u64)
[ N bytes] Artifact data (borsh-serialized)
```

Corruption handling: Blake3 checksum verification on load. Corrupt, truncated, or oversize entries are automatically deleted.

Atomic writes: Entries are written to a .tmp file, then atomically renamed.

Portability: Cache entries are not portable across architectures or translator versions.
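Reading an entry back therefore means parsing the fixed header, recomputing the Blake3 checksum over (cache_key || artifact_bytes), and deleting the file on any mismatch. A simplified sketch (the function, its signature, and the cache-key serialization are assumptions; the real reader also enforces size limits and uses the tier byte):

```rust
fn load_entry(path: &std::path::Path, cache_key_bytes: &[u8]) -> std::io::Result<Option<Vec<u8>>> {
    let data = std::fs::read(path)?;

    // Header: magic (4) + format version (1) + tier (1) + checksum (32) + size (8) = 46 bytes.
    if data.len() < 46 || data[0..4] != *b"ASHC" || data[4] != 1 {
        std::fs::remove_file(path)?; // corrupt or unknown format: drop it
        return Ok(None);
    }
    let expected: [u8; 32] = data[6..38].try_into().unwrap();
    let size = u64::from_le_bytes(data[38..46].try_into().unwrap()) as usize;
    let artifact = &data[46..];

    // Truncated entries and checksum mismatches are deleted, not returned.
    let mut hasher = blake3::Hasher::new();
    hasher.update(cache_key_bytes);
    hasher.update(artifact);
    if artifact.len() != size || *hasher.finalize().as_bytes() != expected {
        std::fs::remove_file(path)?;
        return Ok(None);
    }
    Ok(Some(artifact.to_vec()))
}
```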

Gas charging is tier-independent. All tiers pay the same predecode_per_byte cost upfront, charged against the gas meter before execution begins:

predecode_gas = predecode_per_byte * program_size_bytes

The predecode_per_byte rate is defined in the gas schedule (gas-v1.json). AOT-eager mode deducts this cost before execution; cached AOT was charged at compilation time.
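Concretely, the charge is a single multiplication applied to the meter before any instruction runs. A sketch, assuming a plain u64 gas counter (the meter and error types in the runtime differ):

```rust
/// Charge the up-front predecode cost against the remaining gas.
/// `predecode_per_byte` comes from the gas schedule (gas-v1).
fn charge_predecode(
    gas_remaining: &mut u64,
    predecode_per_byte: u64,
    program_size_bytes: u64,
) -> Result<(), &'static str> {
    let predecode_gas = predecode_per_byte
        .checked_mul(program_size_bytes)
        .ok_or("predecode gas overflow")?;
    if *gas_remaining < predecode_gas {
        return Err("out of gas before execution started");
    }
    *gas_remaining -= predecode_gas;
    Ok(())
}
```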

When std + vm-tiering are enabled, the node exports Prometheus metrics under /metrics:

| Metric | Description |
| --- | --- |
| code_cache_entries | Total cached entries |
| code_cache_entries_by_tier{tier="..."} | Entries by tier (interpreter, jit, aot) |
| code_cache_bytes | Total bytes used by cache |
| code_cache_hits | Total cache hits since startup |
| code_cache_misses | Total cache misses since startup |

CodeCache::stats() returns a CodeCacheStats snapshot with hit_rate() (computed as hits / total lookups, 0.0 to 1.0).
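A sketch of that snapshot and its helper (the zero-lookup case returning 0.0 and the field names are assumptions; the real CodeCacheStats carries more fields):

```rust
struct CodeCacheStats {
    hits: u64,
    misses: u64,
}

impl CodeCacheStats {
    /// hits / total lookups, reported as 0.0 when nothing has been looked up.
    fn hit_rate(&self) -> f64 {
        let total = self.hits + self.misses;
        if total == 0 {
            0.0
        } else {
            self.hits as f64 / total as f64
        }
    }
}
```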

To verify tiering end to end:

  1. Build with --features vm-tiering.
  2. Set ASHEN_VM_TIERING=1 and run the node.
  3. Execute contract workloads.
  4. Check code_cache_entries_by_tier{tier="jit"} and tier="aot".

If JIT/AOT remain zero, the node is running tiered selection but no contracts have exceeded the promotion thresholds yet.

The runtime provides execution entry points with increasing configurability:

| Function | Use Case |
| --- | --- |
| execute_entrypoint | Basic interpreter, no cache |
| execute_entrypoint_tier | Explicit mode selection, no cache |
| execute_entrypoint_tiered | Auto-select best tier from cache |
| execute_entrypoint_cached | Explicit tier request + cache |
| execute_entrypoint_with_tracking | Cache + hot-code promotion (production) |
| execute_entrypoint_with_config | Unified entry point via EntrypointConfig |

For production block execution, use execute_entrypoint_with_tracking or execute_entrypoint_with_config with both cache and tracker configured.