The Security Layer Your AI Agents Need.

Refractal is the runtime trust layer for enterprise agents. Hardening every tool call, every model, every action.

Designed for CISOs · Security · Product
Integrates with Claude · OpenAI · Open Source
The thesis

Agents now act on your behalf. Browsing, clicking, querying, paying. Every tool call is a new attack surface. LLMs remain structurally vulnerable to prompt injection, jailbreaks, and data exfiltration. Each model ships with different security policies; enterprises need an overarching security layer.

How the plugin works

Four defences between your agent
and the world.

Agent security

Stop the agent before it leaks.

Drop-in guardrails for Claude, OpenAI, and other computer-use agents. We monitor every tool call and screen action in flight, catching prompt injection, blocking credential exfiltration, and quarantining untrusted content before it reaches your system.

  • Secret containment: tokens and keys never cross the model boundary.
  • Injection defence: untrusted content is sandboxed and parsed as data, not instruction.
  • Action gating: destructive tool calls require verified intent or human approval.
  • Plugs into existing telemetry: streams from Langfuse, LangSmith, Splunk, and any OTEL-based stack.
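The secret-containment and action-gating checks above can be sketched in a few lines. This is an illustrative toy, not Refractal's implementation: the function name, the secret patterns, and the destructive-tool list are all assumptions made for the example.

```python
import re

# Illustrative patterns for secret-shaped strings; a real deployment
# would use a far richer detector.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # API-key-shaped tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private keys
]

# Hypothetical set of tools considered destructive enough to gate.
DESTRUCTIVE_TOOLS = {"delete_file", "send_payment", "drop_table"}

def screen_tool_call(tool: str, payload: str, approved: bool = False) -> str:
    """Return 'allow', 'block', or 'escalate' for a single tool call."""
    # Secret containment: key material never crosses the model boundary.
    if any(p.search(payload) for p in SECRET_PATTERNS):
        return "block"
    # Action gating: destructive calls need verified intent or human approval.
    if tool in DESTRUCTIVE_TOOLS and not approved:
        return "escalate"
    return "allow"
```

In this sketch an ordinary read is allowed, a payload carrying a key-shaped token is blocked outright, and a destructive call without approval is escalated rather than silently executed.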
Integrations

Runs on top of your stack.

Refractal sits between your models and your audit trail. We ingest from whatever LLM and agent-observability tooling you already run, and instrument every frontier model and copilot.

Model & application layer: frontier models & copilots
  • Claude: Claude Cowork, Claude Code
  • OpenAI: GPT models, ChatGPT
  • Open-source: bring-your-own-model
  • Microsoft Copilot: M365 Copilot, Copilot Studio
Observability layer: traces, evals & telemetry
  • Langfuse: open-source LLM tracing
  • LangSmith: LangChain & LangGraph traces
  • OpenTelemetry: any OTEL-compliant stack
How it works

A control plane for non-deterministic systems.

Refractal sits between your agents and everything they touch: tools, data, users. It enforces policy at the point of action, not after the fact.

01 · Hook
Hooks trigger before key actions
Tool calls, data reads, model swaps. Captures identity, intent, context.
02 · Evaluate
Rules evaluate in under 50 ms
Compiled from your regulatory posture. Deterministic, testable, versioned.
03 · Decide
Block, flag, or escalate
Severity drives routing. Critical events wake the on-call compliance rota.
04 · Record
Write the audit ledger
Append-only, cryptographically signed, mapped to named controls.
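The four stages above can be condensed into a minimal control loop. Everything here is a sketch under stated assumptions: the rule format, severity levels, and hash-chained ledger are illustrative stand-ins, not Refractal's wire format, and hash chaining stands in for cryptographic signing.

```python
import hashlib
import json

# 02 Evaluate: compiled policy as (predicate, severity) pairs.
# Deterministic and testable; a real system would version these.
RULES = [
    (lambda e: e["action"] == "data_read" and e.get("pii"), "critical"),
    (lambda e: e["action"] == "model_swap", "warn"),
]

LEDGER = []  # 04 Record: append-only audit ledger

def record(event, decision):
    """Append a tamper-evident entry; each entry hashes its predecessor."""
    prev = LEDGER[-1]["hash"] if LEDGER else "genesis"
    body = json.dumps({"event": event, "decision": decision, "prev": prev},
                      sort_keys=True)
    LEDGER.append({"body": body,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def handle(event):
    # 01 Hook: the event already carries identity, intent, context.
    # 02 Evaluate: first matching rule wins.
    severity = next((s for pred, s in RULES if pred(event)), None)
    # 03 Decide: severity drives routing.
    decision = {"critical": "block", "warn": "flag", None: "allow"}[severity]
    # 04 Record: write the ledger before returning.
    record(event, decision)
    return decision
```

An ordinary tool call passes through, a PII read is blocked, a model swap is flagged, and every decision lands in the ledger, where each entry's `prev` field commits to the hash of the one before it.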
Customer context

"Our ML team spends up to 20% of their time responding to ad-hoc risk and compliance questions from the business. Refractal turns that week of work into a query."

Head of AI Governance, European retail bank
Standards

Aligned with emerging AI standards.

AARM · Autonomous Action Runtime Management

An open, vendor-neutral specification for intercepting AI-driven actions, evaluating them against policy and intent, and recording tamper-evident receipts. We are building our security plugin to conform to this standard.

We also map continuously to the EU AI Act, NIST AI RMF, ISO/IEC 42001, and SOC 2 Type II.

We are forward-looking: as new standards emerge, we instrument for them ahead of broad adoption, so you can scale AI without waiting for compliance to catch up.

Securing the agentic frontier.

If you run risk, compliance, or security, we'd love to talk.