Studio is an AI-assisted operations IDE that connects directly to production network and server infrastructure. That combination — autonomous agents, persistent credentials, real device access — sets a high bar for safety. This section is the engineering account of how we meet that bar, and where we’re still working. We wrote it for the person who has to sign off on Studio for their environment. It’s not a marketing page. It cites the actual controls we’ve built, names the cryptographic primitives, and is honest about the things that aren’t done yet. If you find a gap, tell us — that’s how this gets stronger.

The principles

Six commitments shape every safety decision Studio makes:

Plaintext minimization

Secrets exist as plaintext in the smallest possible blast radius for the shortest possible time. Nothing sensitive is stored on disk in the clear; nothing sensitive enters the AI provider’s context unless you explicitly substituted it.
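One concrete expression of this principle is wiping key material the moment an operation finishes. A minimal sketch (the `zeroize` helper is illustrative, not Studio’s actual API; the real sidecar additionally relies on OS-protected memory, which a plain Go slice does not get):

```go
package main

import "fmt"

// zeroize overwrites a buffer in place once it is no longer needed,
// shrinking the window in which plaintext key material sits in memory.
// Caveat: the Go runtime may have copied the slice's backing array
// earlier; this shows the wipe-after-use discipline, not a guarantee.
func zeroize(b []byte) {
	for i := range b {
		b[i] = 0
	}
}

func main() {
	key := []byte("plaintext-session-key")
	// ... use key for the duration of exactly one operation ...
	zeroize(key)
	fmt.Println(key[0] == 0) // prints: true
}
```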

Approval before consequence

Read-only operations move freely. Anything that changes external state — a device command, a connector write, a credential decrypt for execution — passes through a classifier and an approval gate the user can see.
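In pseudocode terms, the gate is a classifier in front of every tool call, failing closed on anything it does not recognize. A sketch under assumed names (`classify`, `gate`, and the tool names are hypothetical; the real taxonomy is on the Human in the loop page):

```go
package main

import "fmt"

// RiskClass is an illustrative two-way split; Studio's real
// classification is finer-grained.
type RiskClass int

const (
	ReadOnly RiskClass = iota // moves freely
	Mutating                  // changes external state
)

// classify is a stand-in for the tool classifier. Unknown tools are
// treated as mutating: the gate fails closed.
func classify(tool string) RiskClass {
	switch tool {
	case "show_interfaces", "get_config":
		return ReadOnly
	default:
		return Mutating
	}
}

// gate executes read-only tools immediately and stages everything
// else until the user has explicitly approved it.
func gate(tool string, approved bool) (string, error) {
	if classify(tool) == ReadOnly || approved {
		return "executed " + tool, nil
	}
	return "", fmt.Errorf("%s staged: awaiting approval", tool)
}

func main() {
	out, _ := gate("show_interfaces", false)
	fmt.Println(out) // read-only: runs without approval

	_, err := gate("write_config", false)
	fmt.Println(err) // mutating: blocked until approved
}
```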

Tenant isolation by cryptography

Organization isolation is enforced not just by API guards but by per-org encryption keys in an HSM. Two orgs sharing infrastructure cannot read each other’s data even if the access control logic fails.

One AI provider, controlled region

All model inference goes through AWS Bedrock in us-east-1. There is no path that ships your data to a third-party AI vendor or to Anthropic’s hosted API directly.

Auditability over autonomy

Every meaningful action the agent takes — tool call, command staged, credential decrypted, share granted — is observable in real time and recoverable from history. Autopilot exists; surveillance of it is the price of using it.

Honest limits

Some things are not built yet. Some things never will be (because they conflict with operational reality). We document both. Buying decisions made on incomplete information cost more than the truth.

What’s on this tab

Threat model

What we’re defending against, what we’re not, and the assumptions our controls rely on.

Identity and access

Clerk for the user, Cognito for the AWS calls, organization isolation enforced top to bottom.

Vault and keys

Per-org KMS keys in a FIPS-validated HSM, AES-256-GCM envelope encryption, automatic 30-day DEK rotation, cryptographic shredding on org deletion.

Human in the loop

The trust-level model, tool classification, the approval gate, and how the steering controls let you stop or redirect a running agent in real time.
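The interaction between trust level and tool risk reduces to a small decision matrix. A sketch under assumed semantics (the three level names come from the summary below; how each level actually treats each risk class is documented on the Human in the loop page, and this table is my simplification):

```go
package main

import "fmt"

// TrustLevel models the three levels named in the one-paragraph
// summary: manual, supervised, autonomous.
type TrustLevel int

const (
	Manual     TrustLevel = iota // every tool call needs approval
	Supervised                   // reads run freely; writes need approval
	Autonomous                   // everything runs; everything is audited
)

// needsApproval sketches how trust level and tool risk might combine;
// mutating=true means the tool changes external state.
func needsApproval(level TrustLevel, mutating bool) bool {
	switch level {
	case Manual:
		return true
	case Supervised:
		return mutating
	default: // Autonomous: gated by audit, not by approval
		return false
	}
}

func main() {
	fmt.Println(needsApproval(Supervised, true))  // prints: true
	fmt.Println(needsApproval(Supervised, false)) // prints: false
	fmt.Println(needsApproval(Manual, false))     // prints: true
}
```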

AI provider and data flow

Bedrock-only, model and region pinning, three-tier prompt cache, secret redaction before model context, and the boundary between local and cloud.

Agent and local runtime

Electron + Go sidecar architecture, what stays on your device, the local embeddings model, and how the desktop process talks to the backend.

Audit and telemetry

What’s logged, what’s redacted, what third parties get (Sentry, Amplitude), and how to disable optional telemetry.

Connectors and MCP safety

How third-party API credentials are stored, how MCP tool catalogs are gated, and what happens if a remote MCP server tries to misbehave.

Supply chain and updates

Code signing, notarization, update signature verification, the build pipeline, and the path from source to your machine.

Known limits and roadmap

The honest list of what isn’t done yet and what we’re working on. Read this before committing.

A one-paragraph version

If you read nothing else: Studio’s user identity is Clerk; AWS access goes through Cognito Identity Pool short-term credentials; sensitive fields are encrypted with AES-256-GCM under per-organization data keys wrapped by AWS KMS FIPS 140-2 customer master keys, with automatic 30-day data-key rotation and cryptographic shredding on organization deletion; the agent’s tool calls are classified by risk and pass through approval gates under three trust levels (manual, supervised, autonomous); all model inference uses Anthropic models via AWS Bedrock in us-east-1 only; the desktop app is a signed Electron binary with a Go sidecar that holds plaintext keys only in OS-protected memory; and some things (most notably AI context scrubbing of substituted secrets, and audit logging of every decrypt call) are not yet GA and are documented honestly under known limits.

Why the safety architecture is also the moat

The same controls that keep your data isolated also make Studio’s accumulated value structurally hard to extract. Per-organization keys mean your procedures, memories, conversation transcripts, and host inventory are bound to keys only your organization can use. That connective tissue — the runbooks, the topology corrections, the vendor behavior memories — gets denser the longer you use Studio, and specifically against your network. For the full argument, including the compounding-workspace framing, see Why Studio.

How to read this section

If you’re a CISO doing initial diligence: read threat model, vault and keys, known limits, in that order. If you’re a security engineer doing the deep review: read every page. The vault and HITL pages have the most engineering detail; the supply chain page has the most surprises. If you’re an MSP or ISP team lead deciding whether your operators can use Studio: read overview, human in the loop, and audit and telemetry.