The Anoma Virtual Machine
The AVM is an object-centric, message-passing, transactional virtual machine designed for distributed computation. Objects communicate exclusively through messages, maintain internal state, and execute within serializable transactions.
This Rust implementation faithfully mirrors the formal Agda specification and provides a production-quality interpreter with full observability.
Key properties

- Object-centric: computation is organized around objects with behaviors, not global instruction streams.
- Message-passing: objects communicate by sending and receiving values. There is no shared mutable state.
- Transactional: state changes can be grouped into atomic transactions with snapshot isolation. Conflicts are detected at commit time.
- Interaction trees: programs are represented as trees of observable effects, enabling formal reasoning about execution.
- Layered ISA: the instruction set is composed from independent layers (objects, transactions, pure functions, distribution, constraints), each adding capabilities on top of the previous one.
- Distributed: objects can run on different machines connected via TCP. The `Transport` trait makes remote calls transparent: a `call` to an object on another node serializes the message, sends it over the network, and returns the response.
Crate structure
| Crate | Purpose |
|---|---|
| `avm-core` | Types, instructions, interpreter, errors, tracing, Tape IR |
| `avm-node` | Distributed runtime: TCP transport, location directory, CLI |
| `avm-examples` | PingPong and Battleship demonstrations |
| `avm-book` | This documentation |
Architecture
The AVM follows a layered design. A `Program` (an `ITree`) is handed to the
`Interpreter`, which dispatches each `Vis` node to the appropriate handler.
Handlers read and write through a central `Store`; object behaviors are
resolved via a `BehaviorRegistry` that can manufacture new `ITree` programs on
demand.
```mermaid
graph TD
    Program[ITree Program] -->|interpret| Interpreter
    Interpreter -->|dispatch| ObjHandler[Object Handler]
    Interpreter -->|dispatch| TxHandler[Transaction Handler]
    Interpreter -->|dispatch| IntrospectHandler[Introspect Handler]
    Interpreter -->|dispatch| OtherHandlers[Other Handlers...]
    ObjHandler -->|read/write| Store[(Store)]
    TxHandler -->|snapshot/restore| Store
    ObjHandler -->|resolve| Registry[BehaviorRegistry]
    Registry -->|create program| Program
```
Distributed architecture
Multiple `avm-node` processes form a cluster. Each node owns local objects
and routes remote calls over TCP via the `Transport` trait.
```mermaid
graph LR
    subgraph alpha [Node alpha:9001]
        PingObj[ping object]
        StoreA[(Store)]
    end
    subgraph beta [Node beta:9002]
        PongObj[pong object]
        StoreB[(Store)]
    end
    PingObj -->|call via TCP| PongObj
    PongObj -->|response via TCP| PingObj
```
Quick start
```sh
# Run all tests
cd avm && cargo test --all

# Run just the PingPong example tests
cargo test -p avm-examples ping_pong

# Distributed demo (two terminals):
# Terminal 1:
just avm demo-beta
# Terminal 2:
just avm demo-alpha

# Open the API documentation
cargo doc --open -p avm-core
```
Related
- Formal Agda Specification — the type-checked specification that this Rust implementation mirrors
Values
Every piece of data in the AVM flows through the `Val` type: messages between
objects, internal state, function arguments, and results.
Variants
| Variant | Rust type | Example |
|---|---|---|
| `Nat(n)` | `u64` | `Val::Nat(42)` |
| `Bool(b)` | `bool` | `Val::Bool(true)` |
| `Str(s)` | `String` | `Val::str("hello")` |
| `List(v)` | `Vec<Val>` | `Val::list(vec![Val::Nat(1)])` |
| `Pair(a, b)` | `Box<Val>, Box<Val>` | `Val::pair(Val::Nat(1), Val::Nat(2))` |
| `Nothing` | (none) | `Val::Nothing` |
| `Just(v)` | `Box<Val>` | `Val::just(Val::Nat(7))` |
`Nothing` and `Just` model the `Maybe` type from the specification. A call
that fails returns `Nothing`; a successful call wraps its result in `Just`.
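To make the `Maybe` convention concrete, here is a minimal standalone sketch of a `Val`-like enum and a caller handling a `Nothing` / `Just` result. The enum and the `fake_call` / `unwrap_or_zero` helpers are illustrative; the real `avm_core::Val` has more variants and constructors.

```rust
// Simplified stand-in for avm_core::Val (illustrative only).
#[derive(Debug, PartialEq)]
enum Val {
    Nat(u64),
    Nothing,
    Just(Box<Val>),
}

impl Val {
    fn just(v: Val) -> Val {
        Val::Just(Box::new(v))
    }
}

// A call that "fails" returns Nothing; success wraps the result in Just.
fn fake_call(input: &Val) -> Val {
    match input {
        Val::Nat(n) => Val::just(Val::Nat(n + 1)),
        _ => Val::Nothing,
    }
}

fn unwrap_or_zero(v: Val) -> u64 {
    match v {
        Val::Just(inner) => match *inner {
            Val::Nat(n) => n,
            _ => 0,
        },
        _ => 0, // Nothing (or anything else) maps to 0
    }
}

fn main() {
    assert_eq!(unwrap_or_zero(fake_call(&Val::Nat(41))), 42);
    assert_eq!(unwrap_or_zero(fake_call(&Val::Nothing)), 0);
    println!("ok");
}
```

Behavior code typically matches on `Just` / `Nothing` at the boundary like this rather than panicking on unexpected shapes.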
Type aliases
- `Input = Val`: messages sent to an object
- `Output = Val`: responses from an object
Identifiers
The AVM uses newtype wrappers to prevent mixing different kinds of IDs:
- `ObjectId(String)`: unique object identifier
- `MachineId(String)`: physical machine
- `ControllerId(String)`: logical transaction controller
- `TxId(u64)`: transaction identifier
All are generated by `FreshIdGen`, a monotonic counter.
Interaction Trees
AVM programs are interaction trees — data structures that describe a
computation interleaved with observable effects. They are the Rust encoding
of the coinductive ITree from the Agda specification.
Structure
```rust
enum ITree<E, A> {
    Ret(A),                                       // computation finished with result A
    Tau(Box<ITree<E, A>>),                        // silent step (no effect)
    Vis(E, Box<dyn FnOnce(Val) -> ITree<E, A>>),  // emit event E, continue with response
}
```
- `Ret(a)`: terminal node. The program has produced a value.
- `Tau(next)`: an internal step with no observable effect. The interpreter eliminates these iteratively.
- `Vis(event, continuation)`: the program emits an `event` (an instruction) and waits for a response from the environment. The `continuation` is a function that takes the response and produces the next tree.
Building programs
Use `trigger` to lift an instruction into a single-step tree:

```rust
let tree = trigger(instruction::get_self());
// tree = Vis(GetSelf, |response| Ret(response))
```
Use `bind` (or the `avm_do!` macro) to sequence steps:

```rust
let program = avm_do! {
    let self_id <- trigger(instruction::get_self());
    let input <- trigger(instruction::get_input());
    ret(Val::pair(self_id, input))
};
```
The `avm_do!` macro supports:

- `let x <- expr;`: monadic bind (`expr` must produce an `ITree`)
- `let x = expr;`: pure variable binding
- `expr;`: sequence (discard result)
- `expr`: final expression (must be an `ITree`)
Example tree
The following diagram shows a small program that opens a transaction, creates
two objects, commits, and then calls one of them. Each `Vis` node emits an
instruction and receives a typed response before continuing.
```mermaid
graph TD
    A[Vis: begin_tx] -->|TxRef| B[Vis: create_obj]
    B -->|ObjectRef| C[Vis: create_obj]
    C -->|ObjectRef| D[Vis: commit_tx]
    D -->|Bool| E[Vis: call]
    E -->|Result| F[Ret: value]
```
Execution
The interpreter runs the tree iteratively:
- `Ret(a)`: return the result
- `Tau(next)`: step to `next` (loop, no recursion)
- `Vis(instr, cont)`: execute the instruction, feed the result to `cont`
This design avoids stack overflow for arbitrarily long programs.
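This driving loop can be sketched in isolation. The snippet below is a self-contained toy, not the `avm_core` interpreter: the event type is a plain string and the handler always answers `42`, but the key move, replacing the current node in a loop instead of recursing, is the same.

```rust
// Illustrative ITree with a trivial event type; the real avm_core
// version is generic over instruction and result types.
enum ITree {
    Ret(u64),
    Tau(Box<ITree>),
    Vis(&'static str, Box<dyn FnOnce(u64) -> ITree>),
}

// Iterative driver: the current node is replaced in a loop, so even a
// chain of many thousands of Tau steps uses constant stack space.
fn run(mut t: ITree, handle: impl Fn(&str) -> u64) -> u64 {
    loop {
        t = match t {
            ITree::Ret(v) => return v,
            ITree::Tau(next) => *next,
            ITree::Vis(instr, cont) => cont(handle(instr)),
        };
    }
}

fn main() {
    // Wrap a Vis node in 100_000 Tau layers; run() still terminates
    // without growing the stack.
    let mut t = ITree::Vis("get_answer", Box::new(ITree::Ret));
    for _ in 0..100_000 {
        t = ITree::Tau(Box::new(t));
    }
    let result = run(t, |_| 42);
    assert_eq!(result, 42);
    println!("{result}");
}
```

A recursive `run` would overflow the stack on the same input; the loop form is what makes million-step programs safe.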
Instruction Set
The AVM instruction set is organized into composable layers. Each layer adds capabilities on top of the previous one.
Layer hierarchy
Each layer is additive — a program that only uses Layer 0 instructions is still a valid Layer 4 program.
```mermaid
graph BT
    L0[Layer 0: Objects + Introspect] --> L1[Layer 1: + Transactions]
    L1 --> L2[Layer 2: + Pure Functions]
    L2 --> L3[Layer 3: + Distribution]
    L3 --> L4[Layer 4: + Constraints]
```
Layer 0 — Objects and Introspection
| Instruction | Returns | Description |
|---|---|---|
| `create_obj(name, ctrl?)` | `ObjectId` | Create object from behavior name |
| `destroy_obj(id)` | `Bool` | Mark object for destruction |
| `call(id, input)` | `Maybe Output` | Synchronous message call |
| `receive()` | `Maybe Input` | Receive next message |
| `get_self()` | `ObjectId` | Current object's ID |
| `get_input()` | `Input` | Current input message |
| `get_state()` | `List Val` | Object's internal state |
| `set_state(vals)` | `()` | Replace internal state |
| `get_sender()` | `Maybe ObjectId` | Caller's ID |
| `get_current_machine()` | `MachineId` | Physical machine |
Layer 1 — Transactions
| Instruction | Returns | Description |
|---|---|---|
| `begin_tx(ctrl?)` | `TxId` | Start a transaction |
| `commit_tx(id)` | `Bool` | Validate and apply atomically |
| `abort_tx(id)` | `()` | Discard all pending changes |
Layer 2 — Pure Functions
| Instruction | Returns | Description |
|---|---|---|
| `call_pure(name, args)` | `Maybe Val` | Call a named pure function |
Layer 3 — Distribution
| Instruction | Returns | Description |
|---|---|---|
| `get_machine(id)` | `Maybe MachineId` | Object's physical location |
| `teleport(machine)` | `Bool` | Move execution |
| `move_object(id, machine)` | `Bool` | Move object data |
| `fetch(id)` | `Bool` | Bring a replica locally |
| `get_controller(id)` | `Maybe ControllerId` | Object's controller |
| `transfer_object(id, ctrl)` | `Bool` | Transfer ownership |
| `freeze(id)` | `Maybe Bool` | Synchronize replicas |
Layer 4 — Constraints (stub)
Finite-domain constraint variables and nondeterministic choice. Not yet implemented in the interpreter.
Safety
In the Agda specification, instructions are classified as Safe or Unsafe.
Unsafe instructions (`reflection`, `reifyTxState`, `registerPure`) bypass
encapsulation. The Rust implementation tracks this at the type level through
the instruction enum structure.
Transactions
Transactions provide serializable snapshot isolation for groups of operations. All state changes within a transaction are buffered and applied atomically on commit.
Lifecycle
```text
begin_tx(controller?) → TxId
        ↓
create / destroy / call / setState / transfer   (buffered)
        ↓
commit_tx(id) → Bool    OR    abort_tx(id)
```
Commit sequence
```mermaid
sequenceDiagram
    participant P as Program
    participant I as Interpreter
    participant S as Store
    P->>I: begin_tx
    I->>S: snapshot store
    I-->>P: TxRef
    P->>I: create_obj / call / setState
    I->>S: buffer in overlay
    P->>I: commit_tx
    I->>S: validate read set
    I->>S: apply creates → transfers → writes → states → destroys
    I-->>P: true
```
Commit protocol
When `commit_tx` is called, the interpreter:

- Validates the read set: checks that no observed object was concurrently destroyed outside the transaction.
- Applies changes in dependency order:
  - Creates (new objects enter the store)
  - Transfers (ownership changes)
  - Writes (call effects, already executed)
  - States (`setState` updates)
  - Destroys (objects removed from the store)
- Clears the overlay: all pending buffers are emptied.

If validation fails, the transaction is rolled back and `commit_tx` returns
`false`.
Abort
`abort_tx` discards all pending changes immediately. The store is unchanged.
Overlay buffers
During an active transaction, the `State` struct maintains:

| Buffer | Contents |
|---|---|
| `tx_log` | `(ObjectId, Input)` pairs for each call |
| `creates` | Pending object creations |
| `destroys` | Objects marked for destruction |
| `observed` | Read set (for conflict detection) |
| `pending_transfers` | Ownership transfers |
| `pending_states` | `setState` updates |
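The overlay idea can be sketched with a toy commit over plain maps. The field names (`creates`, `destroys`) echo the table above, but this is a deliberately reduced model: the real `State` also buffers transfers, `setState` updates, and a read set.

```rust
use std::collections::{HashMap, HashSet};

// Toy store and overlay; object ids are u64, objects are strings.
#[derive(Default)]
struct Overlay {
    creates: HashMap<u64, String>, // pending object creations
    destroys: HashSet<u64>,        // objects marked for destruction
}

// Flush the overlay into the live store in dependency order:
// creates first, destroys last.
fn commit(store: &mut HashMap<u64, String>, overlay: Overlay) {
    for (id, obj) in overlay.creates {
        store.insert(id, obj);
    }
    for id in overlay.destroys {
        store.remove(&id);
    }
}

// Buffer a create and a destroy, then commit; returns whether
// objects 1 and 2 are present afterwards.
fn demo() -> (bool, bool) {
    let mut store = HashMap::new();
    store.insert(1, "old".to_string());

    let mut tx = Overlay::default();
    tx.creates.insert(2, "new".to_string());
    tx.destroys.insert(1);

    // Until commit, the live store is untouched.
    assert_eq!(store.len(), 1);
    commit(&mut store, tx);
    (store.contains_key(&1), store.contains_key(&2))
}

fn main() {
    assert_eq!(demo(), (false, true));
    println!("ok");
}
```

The point of the buffering is visible in `demo`: the store only changes inside `commit`, which is what makes abort trivially cheap.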
Restrictions
- Only one transaction can be active at a time.
- `teleport` is forbidden during a transaction (`ErrTeleportDuringTx`).
- Nested transactions are not supported.
PingPong Tutorial
This tutorial walks through the PingPong example — two objects exchanging messages until a counter reaches a maximum value.
Overview
The PingPong program:
- Creates two objects (“ping” and “pong”) inside a transaction
- Sends an initial message to ping
- Ping and pong call each other, incrementing a counter
- When the counter reaches max, the exchange stops
The orchestrator
```rust
pub fn ping_pong_program(max_count: u64) -> ITree<Instruction, Val> {
    avm_do! {
        let tx_id <- trigger(instruction::begin_tx(None));
        let ping_id <- trigger(instruction::create_obj("ping", None));
        let pong_id <- trigger(instruction::create_obj("pong", None));
        let _committed <- trigger(instruction::commit_tx(tx_id));
        let initial_msg = Val::list(vec![
            Val::str("Ping"), Val::Nat(0), Val::Nat(max_count), pong_id,
        ]);
        let result <- trigger(instruction::call(ping_id, initial_msg));
        ret(result)
    }
}
```
Key points:

- `begin_tx` / `commit_tx` ensures both objects are created atomically.
- The initial message carries: tag, counter, max, and the pong object's ID.
Behavior functions
Each behavior is a function `Val -> ITree<Instruction, Val>`:

- Ping: if counter < max, call pong with counter + 1. Otherwise return "done".
- Pong: queries its own ID with `get_self()` and the caller with `get_sender()`, then calls ping back with counter + 1.
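The heart of the ping behavior is a pure decision: keep rallying or stop. Sketched as a standalone function (the actual behavior emits a `call` instruction rather than returning an enum; `PingStep` and `ping_step` are illustrative names):

```rust
// Illustrative outcome of one ping step.
#[derive(Debug, PartialEq)]
enum PingStep {
    CallPong { next_counter: u64 }, // counter < max: keep the rally going
    Done,                           // counter reached max: stop
}

fn ping_step(counter: u64, max: u64) -> PingStep {
    if counter < max {
        PingStep::CallPong { next_counter: counter + 1 }
    } else {
        PingStep::Done
    }
}

fn main() {
    assert_eq!(ping_step(0, 3), PingStep::CallPong { next_counter: 1 });
    assert_eq!(ping_step(3, 3), PingStep::Done);
    println!("ok");
}
```

Separating the decision from the effect like this also makes the behavior easy to unit-test without an interpreter.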
Running it
```rust
let result = run_ping_pong(3).expect("should succeed");
// result.value contains the final response
// result.trace contains all events (creates, calls, etc.)
```
What to observe in the trace
- 2 `ObjectCreated` events (ping and pong)
- 1 `TransactionStarted` + 1 `TransactionCommitted`
- Multiple `ObjectCalled` events (the back-and-forth)
Battleship Tutorial
This tutorial demonstrates transactions and state management through a simplified Battleship game.
Overview
The game creates two board objects, places ships via `set_state`, and resolves
attacks by checking coordinates against the stored ship list.
Board behavior
The board behavior handles two message types:
- `["ship", x, y, length]`: place a ship by reading the current state, appending the ship, and writing back.
- `["coord", x, y]`: check if the coordinate hits any ship.
```rust
"ship" => {
    avm_do! {
        let current_state <- trigger(get_state());
        let new_state = { /* append ship to list */ };
        trigger(set_state(new_state));
        ret(Val::Bool(true))
    }
}
"coord" => {
    avm_do! {
        let current_state <- trigger(get_state());
        let hit = /* check if (x, y) intersects any ship */;
        ret(Val::Bool(hit))
    }
}
```
Game setup
```rust
avm_do! {
    // Atomic creation
    let tx <- trigger(begin_tx(None));
    let board1 <- trigger(create_obj("board", None));
    let board2 <- trigger(create_obj("board", None));
    trigger(commit_tx(tx));
    // Place ships
    trigger(call(board1, ship(0, 0, 3)));
    trigger(call(board2, ship(1, 1, 4)));
    // Attack
    let hit <- trigger(call(board2, coord(1, 1)));  // hit!
    let miss <- trigger(call(board2, coord(9, 9))); // miss
    ret(Val::list(vec![hit, miss]))
}
```
Key concepts demonstrated
- Transactional creation: both boards are created atomically.
- State management: `get_state` / `set_state` maintain ship lists across calls.
- Message-based dispatch: the board behavior pattern-matches on the input message tag to decide what to do.
Distributed PingPong
This tutorial runs the PingPong example across two terminals connected
via TCP. The `ping` object lives on node `alpha` and the `pong` object
lives on node `beta`. Calls cross the network transparently.
Prerequisites
Build the `avm-node` binary:

```sh
cd avm && cargo build -p avm-node
```
Running the demo
Open two terminals:
Terminal 1 — start the beta node (hosts pong):
```sh
just avm demo-beta
# or: RUST_LOG=info cargo run -p avm-node -- \
#     --name beta --port 9002 --peer alpha:9001 --demo
```
Terminal 2 — start the alpha node (hosts ping, runs orchestrator):
```sh
just avm demo-alpha
# or: RUST_LOG=info cargo run -p avm-node -- \
#     --name alpha --port 9001 --peer beta:9002 --demo --rounds 3
```
What happens
```mermaid
sequenceDiagram
    participant A as alpha (ping)
    participant N as Network (TCP)
    participant B as beta (pong)
    A->>A: create ping object
    B->>B: create pong object
    B->>N: CreateNotify(pong)
    N->>A: CreateNotify(pong)
    A->>A: orchestrator calls ping
    A->>N: Call(pong, Ping msg)
    N->>B: Call(pong, Ping msg)
    B->>B: run pong behavior
    B->>N: CallResponse(Pong msg)
    N->>A: CallResponse(Pong msg)
    A->>A: ping behavior resumes
    A->>N: Call(pong, Ping msg)
    Note over A,B: ...repeats until max rounds...
    A->>A: ping returns "done"
```
- Beta creates the `pong` object locally and broadcasts a `CreateNotify` message to all peers.
- Alpha receives the notification and updates its location directory: `pong → beta`.
- Alpha creates `ping` locally and runs the orchestrator program.
- When `ping` calls `pong`, `execute_call` sees that `pong` lives on `beta` (a different machine) and routes the call through the `TcpTransport`.
- The call is serialized as JSON, sent over TCP, executed on beta, and the response returns the same way.
- When `pong` calls `ping` back, the same process happens in reverse.
Wire protocol
Messages are length-prefixed JSON over TCP:

```text
[4-byte big-endian length][UTF-8 JSON payload]
```
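Framing and deframing for this wire format can be sketched with the standard library alone. This is not the `avm-node` implementation (which works over a live TCP stream); here a byte buffer stands in for the socket:

```rust
use std::convert::TryInto;

// Prefix a payload with its length as a 4-byte big-endian integer.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = (payload.len() as u32).to_be_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

// Split one frame off the front of a buffer, if a full frame is present.
// Returns (payload, remaining bytes).
fn deframe(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_be_bytes(buf[..4].try_into().unwrap()) as usize;
    if buf.len() < 4 + len {
        return None; // incomplete frame, wait for more bytes
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}

fn main() {
    let msg = br#"{"type":"Ping"}"#;
    let wire = frame(msg);
    assert_eq!(&wire[..4], &[0, 0, 0, 15]); // 15-byte JSON payload
    let (payload, rest) = deframe(&wire).unwrap();
    assert_eq!(payload, msg);
    assert!(rest.is_empty());
    println!("ok");
}
```

The length prefix is what lets a reader recover message boundaries from a TCP byte stream, which has none of its own.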
Message types:
| Message | Direction | Purpose |
|---|---|---|
| `Call` | requester → owner | Synchronous object invocation |
| `CallResponse` | owner → requester | Return value or error |
| `CreateNotify` | creator → all | Update location directories |
| `DestroyNotify` | destroyer → all | Remove from directories |
| `Ping` / `Pong` | either | Keepalive |
Key concepts
- Location directory: each node maintains a map from `ObjectId` to `MachineId`, updated by `CreateNotify` broadcasts.
- Transport trait: the `TcpTransport` implements `avm_core::Transport`, bridging the sync interpreter to the async TCP layer via oneshot channels.
- Node-prefixed IDs: `FreshIdGen::with_prefix(u16)` ensures globally unique `ObjectId`s across nodes (top 16 bits = node identity).
- Behaviors are local: each node registers its own behaviors. Only messages (values) cross the network, not code.
Architecture
This page provides a deeper look at the AVM's internal design: the crate
layout, the interpreter execution model, the `BehaviorRegistry`, and how
transactions achieve snapshot isolation.
Crate structure
| Crate | Purpose |
|---|---|
| `avm-core` | Types, instructions, interpreter, errors, tracing, Tape IR, `Transport` trait |
| `avm-node` | Distributed runtime: TCP transport, location directory, CLI binary |
| `avm-examples` | PingPong and Battleship demonstrations (single-process) |
| `avm-book` | This documentation (built with mdBook) |
`avm-node` depends on `avm-core` (with the `serde` feature enabled).
`avm-examples` depends on `avm-core`. `avm-book` has no code dependency.
Interpreter execution model
The interpreter is a simple loop that drives an `ITree` to completion:

- If the current node is `Ret(v)`, return `v`.
- If it is `Tau(next)`, set the current node to `next` and repeat (no recursion, no stack growth).
- If it is `Vis(instr, cont)`, dispatch `instr` to the matching handler, obtain the response `r`, then set the current node to `cont(r)` and repeat.
Dispatch is a `match` on the top-level instruction enum. Each arm calls a
dedicated handler function that may mutate the `State` (store, overlay buffers,
trace) and returns a `Val` to be fed back into the continuation.
This design means that even a program with millions of steps consumes only constant stack space.
BehaviorRegistry
The `BehaviorRegistry` maps behavior names (strings) to factory functions of
type `Fn(ObjectId) -> ITree`. When `create_obj("Foo", ...)` is executed:

- The registry looks up the factory for `"Foo"`.
- The factory is called with the fresh `ObjectId` to produce the initial `ITree` for that object.
- The tree is stored in the object's metadata inside the `Store`.
Subsequent `call(id, input)` instructions retrieve the stored tree, advance it
by one step with `input` as the response to the pending `receive()`, and write
the new tree back.
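The registry pattern itself is a map of boxed factory closures. A reduced sketch (here a `String` stands in for the `ITree` a real factory would build; the struct and method names are illustrative, not the `avm_core` API):

```rust
use std::collections::HashMap;

// Simplified: a factory takes the fresh object id and yields the
// object's initial "program" (a String stands in for an ITree).
type Factory = Box<dyn Fn(u64) -> String>;

struct BehaviorRegistry {
    factories: HashMap<String, Factory>,
}

impl BehaviorRegistry {
    fn new() -> Self {
        BehaviorRegistry { factories: HashMap::new() }
    }

    fn register(&mut self, name: &str, f: Factory) {
        self.factories.insert(name.to_string(), f);
    }

    // create_obj-style lookup: find the factory, instantiate the program.
    fn instantiate(&self, name: &str, fresh_id: u64) -> Option<String> {
        self.factories.get(name).map(|f| f(fresh_id))
    }
}

fn demo() -> (Option<String>, Option<String>) {
    let mut reg = BehaviorRegistry::new();
    reg.register("ping", Box::new(|id| format!("ping program for object {id}")));
    (reg.instantiate("ping", 7), reg.instantiate("unknown", 8))
}

fn main() {
    let (hit, miss) = demo();
    assert_eq!(hit, Some("ping program for object 7".to_string()));
    assert_eq!(miss, None);
    println!("ok");
}
```

Registering closures rather than trees means each `create_obj` gets a fresh program specialized to its new `ObjectId`.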
Transaction semantics and snapshot-restore
Transactions provide serializable snapshot isolation using an overlay approach:
- On `begin_tx`, the interpreter records the current store version (a logical snapshot) and initializes empty overlay buffers.
- All mutating operations during the transaction (`create_obj`, `set_state`, `call`, `transfer_object`, `destroy_obj`) are written to the overlay rather than the live store.
- On `commit_tx`, the interpreter validates the read set (checking that nothing observed during the transaction was concurrently destroyed), then flushes the overlay to the store in dependency order: creates → transfers → writes → states → destroys.
- On `abort_tx` or a validation failure, the overlay is discarded and the store is unchanged.
This means `commit_tx` is the sole point where the store can change, making it
straightforward to reason about isolation.
Distributed runtime (avm-node)
The `avm-node` crate provides a networked runtime where objects on different
machines communicate transparently via TCP.
```mermaid
graph TD
    subgraph Node
        CLI[CLI: clap] --> NodeStruct[Node]
        NodeStruct --> Worker[Worker Thread]
        NodeStruct --> TCP[TCP Listener]
        Worker -->|interpret| Interpreter
        Interpreter -->|local call| Store[(Store)]
        Interpreter -->|remote call| Transport[TcpTransport]
        Transport -->|send Call| TCP
        TCP -->|receive CallResponse| Transport
    end
```
Key components:

- Transport trait (`avm-core`): pluggable interface for remote calls. `LocalOnlyTransport` (the default) returns `Unreachable`; `TcpTransport` serializes calls over TCP.
- LocationDirectory: an `ObjectId → MachineId` map updated by `CreateNotify` broadcasts. Each node knows where every object lives.
- PendingMap: bridges the sync `Transport::remote_call()` (running on a worker thread) to the async TCP response via oneshot channels.
- Node-prefixed IDs: `FreshIdGen::with_prefix(u16)` puts a 16-bit node identity in bits [63:48] of each `ObjectId`, preventing collisions.
- Wire protocol: length-prefixed JSON over TCP. Message types: `Call`, `CallResponse`, `CreateNotify`, `DestroyNotify`, `Ping`, `Pong`.
Formal specification references
The Rust implementation mirrors the coinductive Agda model available at https://anoma.github.io/avm-lab/. Key correspondences:
| Rust | Agda |
|---|---|
| `ITree<E, A>` | `ITree E A` (coinductive) |
| `Vis(instr, cont)` | `vis e k` |
| `Ret(a)` | `ret a` |
| `Tau(next)` | `tau t` |
| `interpret` | `interp` |
| `BehaviorRegistry` | behavior map in the formal store |
| `AVMError` | the error monad layer in the Agda interpreter |
Divergences are intentional and are noted in the module-level documentation of
`avm-core`.
DSL Helpers
`avm-core` ships a small set of convenience combinators that sit on top of the
raw `ITree` API. They eliminate repetitive boilerplate and encode common
protocols as typed values.
with_transaction
```rust
pub fn with_transaction<F>(body: F) -> ITree
where
    F: FnOnce(TxId) -> ITree,
```

Wraps `body` in a `begin_tx` / `commit_tx` pair. If `commit_tx` returns
`false`, the outer tree still returns normally; callers that need retry logic
should inspect the returned `Bool`.
Typical usage:
```rust
let program = with_transaction(|tx| {
    avm_do! {
        let a <- trigger(instruction::create_obj("Counter", None));
        let b <- trigger(instruction::create_obj("Counter", None));
        ret(Val::pair(a, b))
    }
});
```
create_and_call
```rust
pub fn create_and_call(name: &str, input: Val) -> ITree
```

Creates a fresh object from behavior `name`, immediately calls it with
`input`, and returns the call result. Equivalent to:
```rust
avm_do! {
    let id <- trigger(instruction::create_obj(name, None));
    let out <- trigger(instruction::call(id, input));
    ret(out)
}
```
Useful for fire-and-forget object creation where you only care about the first response.
send
```rust
pub fn send(target: ObjectId, msg: Val) -> ITree
```

A thin alias over `trigger(instruction::call(target, msg))` that reads more
naturally in message-passing style:
```rust
let reply = avm_do! {
    let _ <- send(peer_id, Val::symbol("ping"));
    trigger(instruction::receive())
};
```
Structured protocol types
Rather than passing raw `Val` everywhere, the examples define newtype wrappers
that convert to and from `Val` at the boundary. The convention is:
```rust
struct MyMsg { /* fields */ }

impl From<MyMsg> for Val { /* ... */ }
impl TryFrom<Val> for MyMsg { /* ... */ }
```
This keeps behavior bodies readable while preserving the uniform `Val`
interface required by the interpreter.
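Here is the convention worked out over a pared-down `Val` (the message type `ShipMsg` and this three-variant `Val` are illustrative; the real `Val` and message types differ):

```rust
use std::convert::TryFrom;

// Pared-down Val for illustration.
#[derive(Debug, PartialEq)]
enum Val {
    Nat(u64),
    Str(String),
    Pair(Box<Val>, Box<Val>),
}

// A structured message used inside a behavior body.
#[derive(Debug, PartialEq)]
struct ShipMsg {
    x: u64,
    y: u64,
}

// Encode: structured message -> wire value.
impl From<ShipMsg> for Val {
    fn from(m: ShipMsg) -> Val {
        Val::Pair(Box::new(Val::Nat(m.x)), Box::new(Val::Nat(m.y)))
    }
}

// Decode: wire value -> structured message, rejecting wrong shapes.
impl TryFrom<Val> for ShipMsg {
    type Error = String;
    fn try_from(v: Val) -> Result<ShipMsg, String> {
        match v {
            Val::Pair(a, b) => match (*a, *b) {
                (Val::Nat(x), Val::Nat(y)) => Ok(ShipMsg { x, y }),
                _ => Err("pair components must be Nat".into()),
            },
            _ => Err("expected a Pair".into()),
        }
    }
}

fn main() {
    let v: Val = ShipMsg { x: 1, y: 2 }.into();
    let back = ShipMsg::try_from(v).unwrap();
    assert_eq!(back, ShipMsg { x: 1, y: 2 });
    assert!(ShipMsg::try_from(Val::Str("nope".into())).is_err());
    println!("ok");
}
```

`From` is infallible (encoding always works) while `TryFrom` is fallible (a peer may send anything), which is exactly the asymmetry at a message boundary.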
Protocol helper: expect_symbol
```rust
pub fn expect_symbol(v: Val, name: &str) -> Result<(), AVMError>
```

Asserts that a `Val` is the symbol `name`. Returns `Err(RejectedCall)` if it
is not. Use this at the top of a behavior's receive loop to guard against
unexpected messages.
Protocol helper: unpack_pair
```rust
pub fn unpack_pair(v: Val) -> Result<(Val, Val), AVMError>
```

Deconstructs a `Val::Pair`. Returns `Err(RejectedCall)` if the value is not a
pair.
Error Hierarchy
The AVM uses a compositional error type that mirrors the instruction set layers.
Each instruction family has its own error enum, composed into the top-level
`AVMError`.
Composition chain
```text
AVMError
├── PureLayerError
│   ├── TxLayerError
│   │   ├── BaseError
│   │   │   ├── ObjError
│   │   │   ├── IntrospectError
│   │   │   ├── ReflectError
│   │   │   └── ReifyError
│   │   └── TxError
│   └── PureError
├── MachineError
├── ControllerError
├── FdError
└── NondetError
```
Any leaf error can be converted to `AVMError` with `?` thanks to `From`
implementations.
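The `?`-friendly composition can be sketched with one leaf family and the top-level enum (variant names follow the diagram; the real chain has several intermediate layers and richer variants):

```rust
// Leaf error family for object instructions.
#[derive(Debug, PartialEq)]
enum ObjError {
    NotFound(u64),
}

// Top-level error wrapping the leaf family (the real chain goes
// through BaseError, TxLayerError, PureLayerError, ...).
#[derive(Debug, PartialEq)]
enum AVMError {
    Obj(ObjError),
}

impl From<ObjError> for AVMError {
    fn from(e: ObjError) -> AVMError {
        AVMError::Obj(e)
    }
}

// A low-level operation that fails with the leaf error type.
fn lookup(id: u64) -> Result<&'static str, ObjError> {
    if id == 1 { Ok("ping") } else { Err(ObjError::NotFound(id)) }
}

// `?` converts ObjError into AVMError via the From impl above.
fn handler(id: u64) -> Result<&'static str, AVMError> {
    let name = lookup(id)?;
    Ok(name)
}

fn main() {
    assert_eq!(handler(1), Ok("ping"));
    assert_eq!(handler(9), Err(AVMError::Obj(ObjError::NotFound(9))));
    println!("ok");
}
```

Each layer only needs a `From` impl for the layer directly below it; `?` chains the conversions automatically.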
Object errors (ObjError)
| Variant | When |
|---|---|
| `NotFound(id)` | Object doesn't exist in store or pending creates |
| `AlreadyDestroyed(id)` | Object was already marked for destruction |
| `AlreadyExists(id)` | Duplicate creation |
| `RejectedCall(id)` | Object's behavior rejected the input |
| `MetadataCorruption(id)` | Metadata store is inconsistent |
Transaction errors (TxError)
| Variant | When |
|---|---|
| `Conflict(id)` | Read set validation failed at commit |
| `NotFound(id)` | Transaction ID doesn't match the active tx |
| `NoActiveTx` | Commit/abort without begin |
| `InvalidDuringTx` | Nested `begin_tx` or `teleport` during tx |
Machine errors (MachineError)
| Variant | When |
|---|---|
| `Unreachable(id)` | Target machine is unreachable |
| `TeleportDuringTx` | Teleport attempted inside a transaction |
Controller errors (ControllerError)
| Variant | When |
|---|---|
| `NotAvailable(id)` | Object doesn't exist for transfer |
| `NoController(id)` | Object has no controller for freeze |
| `UnauthorizedTransfer(id)` | Transfer not permitted |
Tracing
Every state-modifying instruction emits a `LogEntry` into the execution trace.
The trace provides a complete audit trail for debugging, testing, and formal
verification.
Log entry structure
```rust
struct LogEntry {
    timestamp: u64,          // monotonic counter
    event_type: EventType,   // what happened
    executing_controller: Option<ControllerId>,
}
```
Event types
| Event | Emitted by |
|---|---|
| `ObjectCreated { id, behavior_name }` | `create_obj` |
| `ObjectDestroyed(id)` | `destroy_obj` |
| `ObjectCalled { id, input, output }` | `call` |
| `MessageReceived { id, input }` | `receive` |
| `StateUpdated(id)` | `set_state` |
| `ObjectMoved { id, from, to }` | `move_object` |
| `ExecutionMoved { from, to }` | `teleport` |
| `ObjectFetched { id, machine }` | `fetch` |
| `ObjectTransferred { id, from, to }` | `transfer_object` |
| `ObjectFrozen { id, controller }` | `freeze` |
| `FunctionUpdated(name)` | `update_pure` |
| `TransactionStarted(id)` | `begin_tx` |
| `TransactionCommitted(id)` | `commit_tx` |
| `TransactionAborted(id)` | `abort_tx` |
| `ErrorOccurred(msg)` | error paths |
Using traces in tests
```rust
use avm_core::trace::{count_events, EventType};

let result = run_ping_pong(3).unwrap();
let creates = count_events(&result.trace, |e| {
    matches!(e, EventType::ObjectCreated { .. })
});
assert_eq!(creates, 2);
```
Timestamps
Timestamps are monotonically increasing within a single `interpret()` call.
They do not represent wall-clock time; they are logical event counters.