
The Case for Deterministic Security in AI

In the world of LLMs, "99% safe" means "unsafe." If your banking agent hallucinates a transfer authorization 1% of the time, you don't have a product—you have a liability. At a rate of 1,000 transactions per hour, that 1% failure rate produces 10 unauthorized actions per hour, 240 per day, 87,600 per year. At AKIOS, we believe security must be deterministic, not probabilistic.

The Probabilistic Security Fallacy

Most AI security approaches rely on statistical methods: confidence scores, anomaly detection, and behavioral analysis. These methods work well for human users because humans are slow, inconsistent, and easy to challenge with CAPTCHAs. AI agents are none of these things. They operate at machine speed with perfect consistency, which means a 1% failure rate that might be acceptable for human users becomes catastrophic when an agent processes thousands of actions per minute.

The fundamental problem is that you cannot secure a probabilistic system with another probabilistic system. Using an LLM to detect whether another LLM is misbehaving is circular reasoning implemented as infrastructure. The guardrail model has its own failure rate; stacking it on the agent lowers the expected miss rate, but the product of two nonzero probabilities is still nonzero, so some failures always get through.
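
To put numbers on that residual risk, here is a back-of-the-envelope sketch. Both failure rates are hypothetical inputs for illustration, not measurements of any particular guardrail:

```rust
/// Residual unauthorized actions per year when a probabilistic
/// guardrail screens a probabilistic agent. Both rates are
/// hypothetical inputs, not measured values.
fn residual_actions_per_year(
    actions_per_hour: f64,
    agent_failure_rate: f64,
    guardrail_false_negative_rate: f64,
) -> f64 {
    let actions_per_year = actions_per_hour * 24.0 * 365.0;
    actions_per_year * agent_failure_rate * guardrail_false_negative_rate
}
```

At the 1,000-actions-per-hour rate above, a 1% agent failure rate filtered through a 95%-effective guardrail still yields 4,380 unauthorized actions per year.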

Consider the attack surface of a typical AI agent deployed in production:

1. Prompt injection (direct and indirect): malicious input, poisoned data, multi-turn erosion
2. Tool abuse (escalation and chaining): privilege escalation, parameter injection, tool chaining
3. Data exfiltration (context and channel): cross-session leaks, side channels, memorization
4. Resource exhaustion (loops and costs): token bombing, infinite loops, cost amplification
5. Behavioral drift (goals and policy): goal hijacking, policy erosion, reward hacking

Across these five categories, probabilistic defenses catch roughly 90-95% of attempts; deterministic enforcement catches 100%.

Deterministic Security with AKIOS

AKIOS takes a fundamentally different approach. Instead of trying to detect bad behavior after the fact, we prevent it before execution. Every agent action is intercepted by a deterministic policy engine written in Rust. The engine evaluates the action against a set of rigid, testable rules. No probabilities, no machine learning, no confidence scores—just clear, enforceable constraints.

The key insight is that while the agent's reasoning is probabilistic, its actions are discrete and enumerable. An agent either makes a POST request or it does not. It either accesses customer data or it does not. It either exceeds its budget or it does not. These are binary properties that can be verified deterministically, regardless of the reasoning path that led to them.
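
That discreteness can be made concrete. In the sketch below, the action type and the rules are invented for this example (they are not the AKIOS API); the point is that every property is a plain boolean over an enumerable action, with no score or threshold in sight:

```rust
/// Illustrative action type: variants invented for this example.
enum DiscreteAction {
    HttpRequest { method: String, host: String },
    DataRead { table: String },
    Spend { cents: u64 },
}

/// Every check is a plain boolean over a discrete action:
/// no confidence scores, no thresholds.
fn is_permitted(action: &DiscreteAction) -> bool {
    match action {
        DiscreteAction::HttpRequest { method, host } => {
            // allowlist: only GET to a single approved host
            method == "GET" && host == "api.example.com"
        }
        // hypothetical rule: the customers table is off-limits
        DiscreteAction::DataRead { table } => table != "customers",
        // hypothetical hard budget: at most 10,000 cents per action
        DiscreteAction::Spend { cents } => *cents <= 10_000,
    }
}
```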

Policy-as-Code

Security policies are defined as code, not configuration. This is a critical distinction. Configuration can be misconfigured. Code can be tested. Our policies are:

  • Version-controlled with full Git history and audit trails
  • Unit-testable with the same frameworks you use for application code
  • Peer-reviewable through standard code review processes
  • Composable for complex multi-tenant, multi-jurisdiction scenarios
  • Immutable at runtime—policies cannot be modified by the agent or by runtime configuration changes
/// Deterministic policy enforcement — the core of AKIOS security.
/// This function is called for EVERY agent action, without exception.
/// It runs in < 2ms and has zero false negatives by construction.
fn enforce_policy(
    action: &AgentAction,
    context: &SecurityContext,
    policy: &PolicyManifest,
) -> PolicyDecision {
    // 1. Network access control — allowlist-based, default-deny
    if let Some(network_call) = &action.network_request {
        if !policy.network_allowlist.permits(
            &network_call.host,
            &network_call.method,
            &network_call.path,
        ) {
            return PolicyDecision::Deny {
                rule: "network_access",
                reason: format!(
                    "{} {} to {} blocked by allowlist rule",
                    network_call.method, network_call.path, network_call.host
                ),
                action: DenyAction::LogAndBlock,
            };
        }
    }

    // 2. PII detection — deterministic pattern matching, not ML
    if let Some(pii_findings) = policy.pii_scanner.scan(&action.payload) {
        if !context.has_consent_for(&pii_findings) {
            return PolicyDecision::Deny {
                rule: "pii_protection",
                reason: format!(
                    "PII detected: {:?}. No consent on record.",
                    pii_findings.categories
                ),
                action: DenyAction::RedactAndRetry,
            };
        }
    }

    // 3. Budget enforcement — hard limits, no soft warnings
    let session_cost = context.cumulative_cost();
    if session_cost + action.estimated_cost() > policy.budget.max_per_session {
        return PolicyDecision::Deny {
            rule: "budget_exceeded",
            reason: format!(
                "Session cost {:.2} + action {:.2} exceeds limit {:.2}",
                session_cost, action.estimated_cost(), policy.budget.max_per_session
            ),
            action: DenyAction::TerminateSession,
        };
    }

    // 4. Tool permission check — explicit allowlist
    if let Some(tool_call) = &action.tool_invocation {
        if !policy.tool_permissions.allows(&tool_call.name, &tool_call.args) {
            return PolicyDecision::Deny {
                rule: "tool_access",
                reason: format!("Tool '{}' not in permitted set", tool_call.name),
                action: DenyAction::LogAndBlock,
            };
        }
    }

    // 5. Human-in-the-loop gates — configurable escalation points
    if policy.hitl_gates.requires_approval(&action) {
        return PolicyDecision::Escalate {
            rule: "human_approval_required",
            approver: policy.hitl_gates.approver_for(&action),
            timeout: policy.hitl_gates.timeout,
        };
    }

    PolicyDecision::Allow {
        audit_record: AuditRecord::new(action, context, policy),
    }
}
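
Because these rules are ordinary code, they are ordinary to test. A deliberately simplified stand-in for the budget rule (real policies use the `PolicyManifest` types above) shows the shape of such a test:

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    Deny(&'static str),
}

/// Simplified stand-in for rule 3 in `enforce_policy`: a hard
/// budget limit with no soft warnings. Costs are plain numbers
/// here for brevity.
fn check_budget(session_cost: f64, action_cost: f64, max_per_session: f64) -> Decision {
    if session_cost + action_cost > max_per_session {
        Decision::Deny("budget_exceeded")
    } else {
        Decision::Allow
    }
}
```

A unit test then pins the boundary exactly: a session at 9.00 plus a 1.00 action is allowed at a 10.00 limit, while 9.90 plus 0.20 is denied.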

Real-World Policy Examples

Here are example production policies that address sector-specific threat models. Each policy demonstrates how AKIOS enforces deterministic controls for a particular regulatory context:

Financial Services: Transaction Controls

apiVersion: akios/v1
kind: AgentPolicy
metadata:
  name: banking-transaction-agent
spec:
  governance:
    transaction_controls:
      - name: amount-limits
        condition: "transaction.amount > 10000"
        action: require_dual_approval
        audit: "SEC-17a-4 transaction record"
      - name: international-transfers
        condition: "transaction.destination.country != origin.country"
        action: require_compliance_review
        timeout_minutes: 30
      - name: velocity-check
        condition: "transaction.count_last_hour > 3"
        action: block_and_alert
        alert_channel: "fraud-ops"
      - name: material-nonpublic
        condition: "context.contains_mnpi == true"
        action: deny_all_output
        reason: "Regulation FD — cannot act on MNPI"
    network_access:
      mode: allowlist
      default: deny
      rules:
        - host: "api.bloomberg.com"
          methods: ["GET"]
        - host: "internal-risk-engine.corp"
          methods: ["GET", "POST"]
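
Conditions like `transaction.amount > 10000` compile down to hard checks. A minimal sketch of how the manifest above could be evaluated; the types, field names, and rule ordering here are assumptions for illustration, not the AKIOS evaluator:

```rust
struct Transaction {
    amount: f64,
    destination_country: String,
    origin_country: String,
    count_last_hour: u32,
}

enum Outcome {
    BlockAndAlert,
    RequireDualApproval,
    RequireComplianceReview,
    Allow,
}

/// Rules evaluated in a fixed, explicit order; putting the velocity
/// check first is an illustrative choice, not mandated by the manifest.
fn evaluate(tx: &Transaction) -> Outcome {
    if tx.count_last_hour > 3 {
        return Outcome::BlockAndAlert; // velocity-check
    }
    if tx.amount > 10_000.0 {
        return Outcome::RequireDualApproval; // amount-limits
    }
    if tx.destination_country != tx.origin_country {
        return Outcome::RequireComplianceReview; // international-transfers
    }
    Outcome::Allow
}
```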

Healthcare: PHI Protection

apiVersion: akios/v1
kind: AgentPolicy
metadata:
  name: clinical-documentation-agent
spec:
  governance:
    phi_controls:
      - name: hipaa-access-control
        condition: "data.contains_phi AND NOT user.has_hipaa_training"
        action: deny_access
        log: "HIPAA access attempt — untrained user"
      - name: emergency-access
        condition: "context.emergency_mode AND user.role == 'physician'"
        action: grant_temporary_access
        duration_minutes: 60
        audit: "Emergency access invoked — requires post-hoc review"
      - name: cross-patient-barrier
        condition: "session.patient_id != data.patient_id"
        action: deny_access
        reason: "Cross-patient data access violation"
      - name: clinical-recommendations
        condition: "output.type in ['diagnosis', 'therapeutic_recommendation']"
        action: require_physician_approval
    pii_redaction:
      engine: deterministic
      fields: ["patient_name", "mrn", "dob", "ssn", "address", "phone"]
      mode: redact_before_inference
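
The `redact_before_inference` mode is deterministic by construction: every listed field is always replaced, with no classifier in the loop. A minimal field-name-based sketch (the engine also pattern-matches values, as noted in the code above; this shows only the field-level path, and the function name simply mirrors the manifest):

```rust
use std::collections::HashMap;

/// Replace every listed field with a redaction marker before the
/// record reaches the model. Field-name matching only; a sketch,
/// not the AKIOS redaction engine.
fn redact_before_inference(
    record: &HashMap<String, String>,
    redacted_fields: &[&str],
) -> HashMap<String, String> {
    record
        .iter()
        .map(|(k, v)| {
            if redacted_fields.contains(&k.as_str()) {
                (k.clone(), "[REDACTED]".to_string())
            } else {
                (k.clone(), v.clone())
            }
        })
        .collect()
}
```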

Why Rule Engines Are Making a Comeback

In the 2000s, rule engines were everywhere: Drools (rebranded for a time as JBoss Rules), CLIPS, Jess. Then machine learning arrived and everyone forgot about them. But rules are making a comeback in AI security because they have six properties that ML-based approaches lack:

┌───────────────────────┬─────────────────────┬──────────────────────┐
│ Property              │ Rule Engine         │ ML-Based Security    │
├───────────────────────┼─────────────────────┼──────────────────────┤
│ Explainability        │ Always — rules are  │ Often opaque —       │
│                       │ human-readable      │ "model says no"      │
├───────────────────────┼─────────────────────┼──────────────────────┤
│ Testability           │ Unit tests, formal  │ Statistical tests    │
│                       │ verification        │ (never 100%)         │
├───────────────────────┼─────────────────────┼──────────────────────┤
│ Latency               │ < 2ms (AKIOS)       │ 50-500ms (inference) │
├───────────────────────┼─────────────────────┼──────────────────────┤
│ Drift resistance      │ Rules are immutable │ Models drift as the  │
│                       │ and do not drift    │ data distribution    │
│                       │                     │ changes              │
├───────────────────────┼─────────────────────┼──────────────────────┤
│ False negative rate   │ 0% by construction  │ Nonzero — guaranteed │
│                       │ (all actions gated) │ to miss some attacks │
├───────────────────────┼─────────────────────┼──────────────────────┤
│ Audit trail           │ Deterministic —     │ Probabilistic —      │
│                       │ "rule #7 blocked it"│ "confidence was 0.3" │
└───────────────────────┴─────────────────────┴──────────────────────┘

The key insight is that rule engines and ML-based security are not competitors—they operate at different layers. ML-based approaches are excellent for detecting novel threats (anomaly detection, behavioral analysis). Rule engines are essential for enforcing known constraints (access control, budget limits, data classification). AKIOS uses deterministic rules as the enforcement layer and ML-based analysis as an observability layer. The rules cannot be bypassed. The ML adds intelligence on top.
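
That layering can be sketched in a few lines (the names and the flagging heuristic are invented for illustration): the deterministic gate decides, and the observer only annotates what the gate already allowed or denied; it has no authority to flip the decision.

```rust
/// Enforcement layer: hard, binary, cannot be bypassed.
fn gate(amount_cents: u64, limit_cents: u64) -> bool {
    amount_cents <= limit_cents
}

/// Observability layer: flags unusual-but-allowed activity for
/// human review. The 900,000-cent threshold stands in for an
/// ML anomaly signal; either way, it cannot approve anything.
fn observe(amount_cents: u64, allowed: bool) -> Option<String> {
    if allowed && amount_cents > 900_000 {
        Some(format!("review: large allowed transfer of {} cents", amount_cents))
    } else {
        None
    }
}

/// The gate always runs first; the observer sees only the outcome.
fn process(amount_cents: u64, limit_cents: u64) -> (bool, Option<String>) {
    let allowed = gate(amount_cents, limit_cents);
    let flag = observe(amount_cents, allowed);
    (allowed, flag)
}
```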

Integration with Existing Security Infrastructure

AKIOS does not replace your existing security infrastructure—it extends it into the AI domain. We integrate with the tools your security team already uses:

  • SIEM systems (Splunk, Elastic, Sentinel) — AKIOS policy decisions are emitted as structured security events, indexed and searchable alongside your existing security telemetry
  • Identity providers (Okta, Azure AD, Auth0) — Agent identity and operator identity are both governed by your existing IAM infrastructure
  • DLP systems (Symantec, Microsoft Purview) — The AKIOS PII redaction layer works in concert with your DLP classification policies
  • Compliance frameworks (SOC 2, ISO 27001, NIST CSF) — Policy manifests map directly to compliance control objectives, producing audit evidence automatically
  • Vulnerability scanners (Snyk, Trivy) — The AKIOS control plane is scanned as part of your standard vulnerability management process

The Compliance Matrix

Every AKIOS policy rule maps to one or more compliance control objectives. This mapping is maintained in the policy manifest and produces audit evidence automatically during normal operation:

┌────────────────────────┬─────────────────────────────────────────┐
│ AKIOS Policy Rule      │ Compliance Controls Satisfied           │
├────────────────────────┼─────────────────────────────────────────┤
│ network_allowlist      │ SOC2 CC6.1, NIST AC-4, ISO 27001 A.13   │
│ pii_redaction          │ GDPR Art.25, HIPAA §164.514, CCPA 1798  │
│ budget_enforcement     │ SOX §302, NIST PM-3, ISO 27001 A.12     │
│ tool_permissions       │ SOC2 CC6.3, NIST AC-6, ISO 27001 A.9    │
│ hitl_gates             │ EU AI Act Art.14, NIST AC-3             │
│ audit_logging          │ SOC2 CC7.2, SEC 17a-4, NIST AU-2        │
│ session_isolation      │ HIPAA §164.312, NIST SC-4               │
│ immutable_policy       │ SOC2 CC8.1, NIST CM-3, ISO 27001 A.12   │
└────────────────────────┴─────────────────────────────────────────┘

Each rule produces audit evidence on every evaluation.
Compliance is not a quarterly exercise — it is a continuous output.

The Future of AI Security

As AI agents become more autonomous, the security conversation will shift from "How do we monitor AI?" to "How do we control AI?" Monitoring tells you what happened. Control determines what can happen. Deterministic security, implemented through systems like AKIOS, provides the foundation for this control.

The trajectory is clear: every organization deploying autonomous agents will need a deterministic enforcement layer. The question is whether you build it yourself (a multi-year, multi-team investment) or deploy a system purpose-built for the problem. In a world of probabilistic AI, deterministic security is not just better—it is essential.