DarlaStack Kernel

Govern what AI agents do, not just what they say.

DarlaStack Kernel sits at the tool-dispatch boundary. Every proposed action is checked against a versioned contract before it runs — allowed, blocked, escalated for approval, or degraded — and written to a tamper-evident audit trail that can be replayed against any rule version. Deterministic execution governance for agentic systems.

Status: Prototype · Pilot-Ready · Domain: AI Agent DevOps · Verified: COLETEK PTY LTD
§01 · The Problem

Single-call checks miss the sequence.

Modern AI agents call tools across multiple steps. State persists between steps. A read in step one looks innocent. An egress in step two looks routine. The combination is exfiltration. Most policy checks evaluate calls in isolation — the cross-step pattern slips through. The risk lives in the sequence, not the individual call.

§02 · The Exploit Scenario

Two calls. Each one defensible. Together, plausible exfiltration.

A two-step read-then-egress where each call is acceptable in isolation. The agent reads a config file containing an API key. Then it sends "telemetry" to an external webhook. Reviewed independently, both pass. Reviewed as a sequence, the second action is an exfiltration path if the runtime carries the read result into the egress payload.

Step 01 · read_config_file · Isolated check: pass
operation: read_config_file
path: /etc/darla/config.yaml
authority: A (read-only)
isolated_check: allow — path within sandbox, size under cap
Step 02 · post_telemetry · Isolated check: pass
operation: post_telemetry
endpoint: https://metrics.example.io/ingest
payload_size_kb: 3.2
isolated_check: allow — endpoint allowed, size under cap
Sequence reality · Runtime-dependent: plausible exfiltration
// If the runtime permits read-then-egress and carries
// the file contents into the outbound payload, the API
// key from step 01 leaves the boundary in step 02.
// Each call's local check cannot see this.
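
To make the failure mode concrete, here is a minimal stateless checker in Python. It is an illustration, not DarlaStack's contract format; the constants and field names are assumptions. Each call passes on its own terms.

# Illustrative stateless policy: every call is evaluated with no memory
# of earlier calls, so the read-then-egress pattern is invisible to it.
SANDBOX_PREFIX = "/etc/darla/"               # assumed sandbox root
ALLOWED_ENDPOINTS = {"metrics.example.io"}   # assumed egress allowlist
MAX_PAYLOAD_KB = 64                          # assumed size cap

def isolated_check(call: dict) -> bool:
    if call["operation"] == "read_config_file":
        return call["path"].startswith(SANDBOX_PREFIX)
    if call["operation"] == "post_telemetry":
        host = call["endpoint"].split("/")[2]
        return host in ALLOWED_ENDPOINTS and call["payload_size_kb"] <= MAX_PAYLOAD_KB
    return False

step_1 = {"operation": "read_config_file", "path": "/etc/darla/config.yaml"}
step_2 = {"operation": "post_telemetry",
          "endpoint": "https://metrics.example.io/ingest",
          "payload_size_kb": 3.2}

assert isolated_check(step_1)   # pass: path within sandbox
assert isolated_check(step_2)   # pass: endpoint allowed, size under cap
# Both pass; the exfiltration lives in the sequence, which this check never sees.
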
§03 · The DarlaStack Kernel Path

Same sequence. Cross-step invariant blocks step two.

DarlaStack Kernel preserves session context. When step two arrives, the kernel sees that step one read a configuration file in this session and that the proposed egress payload is large enough to plausibly carry that content. A declared cross-step invariant fires and the action is blocked. The invariant is marked critical, so under the current contract neither authority tier nor human approval can override it, however authoritative the caller claims to be.

Step 01 · read_config_file · DarlaStack Kernel: Allow
decision: ALLOW
reason_code: PASSES_BOUNDS_AND_INVARIANTS
session_state: recorded — read of /etc/darla/config.yaml
audit_input_digest: sha256:9f2c…
Step 02 · post_telemetry · DarlaStack Kernel: Block
decision: BLOCK
reason_code: CROSS_STEP_INVARIANT_VIOLATION
invariant: no_egress_after_secret_read
severity: critical
human_approval_can_override: false
session_evidence: step_01_digest=9f2c… → step_02_intent=egress
audit_input_digest: sha256:b14a…
Decision time: sub-millisecond in the prototype path. Production overhead must be measured in the target integration.
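
The cross-step mechanic can be sketched in a few lines of Python. This is an illustrative model of session-scoped invariants, not the kernel's implementation or API; the class, field names, and sensitive-path heuristic are assumptions.

# Illustrative session-aware check: recorded state from step one makes
# step two's context visible. Names and internals are assumptions.
SENSITIVE_PATHS = ("/etc/darla/",)   # assumed: config paths that may hold secrets

class SessionKernel:
    def __init__(self):
        self.session = []   # prior allowed actions in this session

    def evaluate(self, operation, **inputs):
        if operation == "post_telemetry":
            secret_read = any(
                entry["operation"] == "read_config_file"
                and entry["path"].startswith(SENSITIVE_PATHS)
                for entry in self.session
            )
            if secret_read:
                return {"decision": "BLOCK",
                        "reason_code": "CROSS_STEP_INVARIANT_VIOLATION",
                        "invariant": "no_egress_after_secret_read",
                        "human_approval_can_override": False}
        self.session.append({"operation": operation, **inputs})
        return {"decision": "ALLOW",
                "reason_code": "PASSES_BOUNDS_AND_INVARIANTS"}

kernel = SessionKernel()
kernel.evaluate("read_config_file", path="/etc/darla/config.yaml")               # ALLOW, recorded
kernel.evaluate("post_telemetry", endpoint="https://metrics.example.io/ingest")  # BLOCK
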
§04 · The Audit Trail

Every decision hash-chained. Every input replayable.

Each decision is bound to canonical input values, not just operation names. Different inputs produce different audit digests. The chain is tamper-evident — any retroactive edit breaks the linkage and is visibly detectable. Inputs are preserved for replay, so a decision made today can be reconstructed exactly tomorrow.

Seq   Operation                                    Hash                 Decision
001   read_config_file · /etc/darla/config.yaml   sha256:9f2c8a14b3…   Allow
002   post_telemetry · metrics.example.io         sha256:b14ad7c903…   Block
003   archive_production_logs · /var/log/prod/    sha256:c8f1e22a45…   Approve
004   read_config_file · /etc/darla/agents.yaml   sha256:11d4e8f7c0…   Allow
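
The tamper-evidence property comes from chaining each digest over the record's canonical inputs plus the previous digest. A minimal sketch, assuming JSON canonicalization; the kernel's actual record format is not published here.

import hashlib, json

def digest(record: dict, prev: str) -> str:
    # Canonical serialization binds the digest to input values, not just names.
    canon = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev + canon).encode()).hexdigest()

def verify_chain(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        if entry["hash"] != digest(entry["record"], prev):
            return False   # a retroactive edit breaks the linkage here
        prev = entry["hash"]
    return True

log, prev = [], "genesis"
for record in [
    {"seq": 1, "operation": "read_config_file", "path": "/etc/darla/config.yaml", "decision": "ALLOW"},
    {"seq": 2, "operation": "post_telemetry", "endpoint": "metrics.example.io", "decision": "BLOCK"},
]:
    prev = digest(record, prev)
    log.append({"record": record, "hash": prev})

assert verify_chain(log)
log[0]["record"]["decision"] = "BLOCK"   # tamper with history
assert not verify_chain(log)             # the chain visibly breaks
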
§05 · The Compliance Time Machine

Replay yesterday's decisions under today's contract.

Rules drift. Policies tighten. The Time Machine takes a historical audit log and re-evaluates every decision against a newer contract. Which past actions would be decided differently under the policy we have now? It surfaces drift, supports retrospective review, and gives compliance functions a structured way to ask the question that auditors actually ask.

Original · contract v1.2.1 · Allowed at the time
operation: post_telemetry
endpoint: metrics.example.io
decision_then: ALLOW
contract_hash: sha256:a01b…
Replayed · contract v1.2.2 · Would block today
operation: post_telemetry
endpoint: metrics.example.io
decision_now: BLOCK
reason: no_exfil_after_sensitive_read added in v1.2.2
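
Once inputs are preserved, the replay loop itself is simple. This sketch stubs the contract as a plain Python function and assumes each audit entry carries its canonical inputs and the decision made at the time; both are stand-ins for real versioned contracts and records.

def contract_v1_2_2(record):
    # Assumed shape of the rule added in v1.2.2 (no_exfil_after_sensitive_read):
    # block egress when preserved session context shows a prior sensitive read.
    ctx = record.get("session_context", {})
    if record["operation"] == "post_telemetry" and ctx.get("sensitive_read"):
        return "BLOCK"
    return "ALLOW"

def replay(audit_log, contract):
    # Yield every decision that changes under the newer contract.
    for entry in audit_log:
        now = contract(entry["record"])
        if now != entry["decision_then"]:
            yield entry["record"]["operation"], entry["decision_then"], now

audit_log = [{"decision_then": "ALLOW",
              "record": {"operation": "post_telemetry",
                         "endpoint": "metrics.example.io",
                         "session_context": {"sensitive_read": True}}}]

for op, then, now in replay(audit_log, contract_v1_2_2):
    print(f"{op}: {then} under v1.2.1, {now} under v1.2.2")   # drift surfaced
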
§06 · Integration

Three deployment modes. One mandatory boundary.

DarlaStack Kernel is designed to sit at the mandatory tool-dispatch boundary. A production integration must route every tool call through the kernel, because it protects only what it sees. Three deployment modes are supported, each with different security properties; choose based on your existing agent architecture.

# Embedded library — minimal latency, requires trusted in-process call site
from darlastack import Kernel
kernel = Kernel.load_contract("contracts/devops_v1.2.2.json")

decision = kernel.evaluate(operation="read_config_file", inputs={...})
if decision.allowed:
    run_tool(...)

Embedded

Library import inside the agent process. Lowest latency. Requires the integrator to ensure all tool calls actually go through the kernel — bypass risk lives at the call site.

In-process · sub-ms · trusted code path

Sidecar

DarlaStack Kernel runs as a co-located process. Agent calls a local socket before tool execution. Stronger boundary than embedded; assumes the agent cannot launch tools out-of-band.

Co-located · IPC · process boundary

Gateway

DarlaStack Kernel fronts the tool API surface. Agent has no direct path to tools — every call routes through the gateway. Strongest mandatory boundary, highest latency, requires network plumbing.

Network · enforced routing · audit upstream
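
For the sidecar mode, the agent-side call might look like the sketch below. The socket path and JSON wire protocol are hypothetical, shown only to place the check relative to tool execution; an actual integration would follow whatever interface the kernel process exposes.

import json, socket

SOCKET_PATH = "/run/darlastack/kernel.sock"   # hypothetical socket path

def evaluate_via_sidecar(operation: str, inputs: dict) -> dict:
    # Ask the co-located kernel process before running any tool.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        s.sendall(json.dumps({"operation": operation, "inputs": inputs}).encode())
        s.shutdown(socket.SHUT_WR)   # signal end of request
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return json.loads(b"".join(chunks))

decision = evaluate_via_sidecar("read_config_file", {"path": "/etc/darla/config.yaml"})
if decision["decision"] == "ALLOW":
    ...   # run the tool; any out-of-band tool launch bypasses the kernel
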
§07 · Limitations

What DarlaStack Kernel is, and what DarlaStack Kernel isn't.

DarlaStack Kernel enforces declared rules over declared actions. It does not infer hidden intent, prove universal AI safety, or provide automatic legal compliance. It is not a malware sandbox, an IAM replacement, a SIEM, a DLP product, or a complete compliance platform.

A production deployment must ensure that all tool calls pass through DarlaStack Kernel, that audit evidence lands in durable, encrypted, access-controlled storage, and that surrounding systems still handle identity, runtime security, and human process. DarlaStack Kernel is the deterministic reference monitor, not the entire security architecture.

§08 · Pilot Offer · DarlaStack Kernel Agent Governance Pilot v1

Bring one AI agent workflow. We'll govern it.

A bounded paid engagement to prove DarlaStack Kernel against your real agent-tool boundary. We map the workflow, write the first contract, run baseline attack chains, and produce a replayable audit log plus a written go/no-go report on production integration.

  • Map one agent-tool workflow against your existing infrastructure
  • Define the first DarlaStack Kernel contract — operations, bounds, invariants, authority classes
  • Run baseline attack chains including read-then-egress and cross-step misuse
  • Demonstrate allow / block / approval / degrade decisions on your scenarios
  • Export tamper-evident audit log with contract hash and replay support
  • Written go/no-go report on production integration path
AU$5k–$15k · First pilot · scope-bounded
2–4 weeks · From contract sign to report
One workflow · One contract · one report