Consider a Monday morning in your production office. An artist is iterating on a sequence for an unannounced title using an AI agent inside your generation platform. The screenplay, the character reference frames, and the early renders of the third-act reveal are all in the system. An hour later, an attacker exploits a misconfigured permission in the platform’s MCP integration. By the time anyone notices, the renders are exfiltrated. The attacker does not need to sell them. They only need to post the reveal shot on an anonymous forum.
This is not only a privacy incident; it is a market event. Ticket sales projections collapse, insurance underwriters get involved, partner obligations unravel, and the financial damage runs into tens of millions of dollars.
The Landscape Just Shifted Under Us
Three data points from the past six weeks tell the same story. Mandiant’s M-Trends 2026 report measured the window between initial access and attacker handoff collapsing from eight hours in 2022 to 22 seconds in 2025. Anthropic released Claude Mythos Preview in April and restricted distribution after their own engineers, with no security training, asked the model for remote code execution exploits one evening and had working code by morning. Within days, OpenAI followed with GPT-5.4-Cyber under a similarly restricted access program. The frontier model providers themselves now believe their most capable models can be weaponized faster than defenders can react. The tools that exploit AI platforms are now AI-capable themselves.
Same Discipline, No Margin
The security conversation around agentic AI has a packaging problem. RSA Conference 2026 was dominated by vendors declaring agent security a fundamentally new discipline requiring fundamentally new products. Cisco launched DefenseClaw. Microsoft announced Agent 365 as a control plane. Check Point unveiled an “AI Defense Plane.” Gartner projects 40 percent of enterprise applications will embed task-specific agents by year-end.
The underlying discipline has not changed. Agent security is identity, least privilege, audit, and shared responsibility. What has changed is that the margin for error in executing that playbook has collapsed. At human timescales, a misconfigured permission could sit dormant for days before anyone found it. Detection and response was a viable primary strategy. At machine speed, detection and response is the safety net. The primary posture has to be secure-by-design: controls embedded into the architecture before the attack surface opens.
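Secure-by-design least privilege can be made concrete in a few lines. The sketch below is illustrative only: the agent names, tool names, and scope pairs are hypothetical, not drawn from any real MCP implementation. The point is the shape of the control, deny by default, with every grant explicit and enumerable before the attack surface opens.

```python
# Hypothetical least-privilege gate for agent tool calls.
# An agent may perform a (tool, action) pair only if that exact
# pair was explicitly granted at design time. Anything else is denied.
ALLOWED_SCOPES = {
    # Agent name and scopes are illustrative assumptions.
    "render-agent": {("assets", "read"), ("renders", "write")},
}

def authorize(agent: str, tool: str, action: str) -> bool:
    """Deny by default: a grant must exist; absence of a rule is refusal."""
    return (tool, action) in ALLOWED_SCOPES.get(agent, set())
```

Because the grant table is static and enumerable, it can be audited before deployment, which is the difference between secure-by-design and hoping detection catches the misconfiguration later.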
The Question Your Next AI Platform Vendor Cannot Duck
When your team evaluates an agentic AI platform (any platform that orchestrates models, takes actions on your behalf, or exposes an MCP endpoint to your users), the test that separates serious platforms from vaporware is this:
Does the platform have a defined shared responsibility model, and was it a design principle or an afterthought?
You don’t want a marketing page about enterprise security features. You want an architectural answer for where the platform’s security boundary ends and your studio’s begins; an answer that was part of the original design, not retrofitted after the first enterprise deal required it.
If the vendor cannot articulate this boundary clearly, one of two things is true. Either they have not thought about it, which means security was not a design principle. Or they have thought about it but cannot explain it, which means you will inherit the ambiguity. Both outcomes are bad for you.
The Three Layers You Need to Understand
Cloud service providers settled the shared responsibility question two decades ago. AWS does not own what runs inside an EC2 instance. They own the isolation boundary, the encryption infrastructure, and the authentication enforcement. You own the workload. That contract is explicit, well-understood, and auditable.
Agentic AI needs the same clarity, but the layers are more complex. There are three distinct parties with distinct obligations, not two.
The frontier model provider (Anthropic, OpenAI, Google) owns the model, its training data provenance, and its safety guardrails. The SaaS production platform (any company orchestrating agents on top of those models) owns orchestration, boundary enforcement, and the immutable audit trail. The enterprise customer (your studio) owns the choice of model provider, the API keys, the data going in, and the outputs coming out.
The design principle at every boundary is the same: validate what crosses it, do not inspect the interior.
When a customer’s model generates output that re-enters the platform, the platform validates, classifies, and records that output.
What happened inside the model’s inference is the model provider’s concern. What happens to that output once it enters the platform’s data model is the platform’s concern.
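The validate-classify-record pattern at that boundary can be sketched as follows. Everything here is an assumption for illustration: the function name, the size-based classification rule, and the record fields are invented, not any vendor's actual API. What the sketch shows is that the platform touches only the artifact crossing the boundary, never the model's interior.

```python
import hashlib
from datetime import datetime, timezone

def ingest_model_output(output_bytes: bytes, model_id: str, user_id: str) -> dict:
    """Hypothetical boundary handler: validate, classify, and record
    a model's output as it enters the platform's data model.
    The model's inference internals are never inspected."""
    # Validate: reject what the platform's schema does not allow.
    if not output_bytes:
        raise ValueError("empty output rejected at boundary")

    # Classify: assign a handling label (the rule here is illustrative).
    label = "restricted" if len(output_bytes) > 10_000_000 else "standard"

    # Record: an append-only entry keyed by content hash.
    return {
        "sha256": hashlib.sha256(output_bytes).hexdigest(),
        "model": model_id,
        "user": user_id,
        "classification": label,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

The record, not the model, is what the platform is accountable for, which is exactly the division of labor the three-layer contract describes.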
When a vendor says “we handle everything,” treat that as a red flag. No single layer can handle everything, and any vendor claiming otherwise is either confused about the architecture or obscuring it.
Why This Matters More in Media Than Anywhere Else
The agentic security conversation at RSA was horizontal: the same controls, pitched the same way to every industry. Media and entertainment face three compounding factors that no other industry confronts simultaneously:
- Training data is contested IP (Disney and Universal v. Midjourney, the Minimax case, 70+ active suits, fair use unresolved);
- The protected asset is time-sensitive in ways that turn a leak into a market event, not a privacy incident;
- The AI output is the product itself, not a processing artifact, which makes provenance the evidentiary chain for distribution, insurance, and legal defense.
Every studio executive reading this already feels these factors. The question is whether their AI platform vendors understand the stakes at the same level.
Provenance Is the Enforcement Mechanism
This is where the shared responsibility model and the M&E-specific risk converge. Provenance is how the SaaS platform proves it held up its end of the contract. The platform does not need to inspect what happens inside a customer’s model to maintain accountability. It needs to record, immutably, what crossed the boundary. Every generated asset gets metadata about its origin: the model, the provider, the inputs, the user who initiated the generation, the timestamp, the parameters. That metadata travels with the asset through every stage of the production pipeline. It cannot be altered after the fact, even by administrators. The platform owns the chain. The customer owns the generation. Neither can substitute for the other.
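One standard way to make a provenance log tamper-evident is hash chaining: each entry embeds a hash of the previous entry, so any after-the-fact edit breaks every subsequent link. The sketch below is a minimal illustration of that general technique, not any platform's actual implementation; the class name and fields are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

class ProvenanceChain:
    """Hypothetical append-only provenance log. Each entry carries the
    hash of the previous entry, so altering history is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, asset_id: str, model: str, provider: str,
               user: str, params: dict) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
        body = {
            "asset_id": asset_id, "model": model, "provider": provider,
            "user": user, "params": params,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev,
        }
        # The entry's own hash covers every field, including the link back.
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any edit to any field breaks the chain."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev or entry["entry_hash"] != expected:
                return False
            prev = entry["entry_hash"]
        return True
```

Note what this buys: an administrator can still delete the log wholesale, but cannot quietly rewrite a single generation record, which is why production deployments typically anchor the chain's head hash somewhere outside administrator reach.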
The Test
Ask your next AI platform vendor to describe their security boundary in architectural terms. Ask who owns what, at which layer, at which moment. Ask what happens at the boundaries where responsibilities transfer. Ask how they prove, forensically and immutably, that they met their obligations. If the answer is coherent, you have a platform partner. If the answer is vague, you have a liability.
The margin for error closed while nobody was looking.