Using Zero Trust Identity principles to ensure security for AI-based services

Published: 7/16/2025

by Zero Trust

Learn how best practices of the ZT Identity pillar hold the key to working securely with AI agents, and what steps your team can take to be prepared.

AI agents and the identity challenge

The digital landscape is in constant flux, but few shifts are as profound as the rise of AI agents. These sophisticated entities are designed to perceive their environment, reason, and act autonomously to achieve specific goals. Unlike traditional software, AI agents can make independent decisions, learn, adapt, use various tools and APIs, and even orchestrate complex workflows.

AI agents’ capacity for self-directed action fundamentally redefines who, or what, is requesting access within our digital systems, pushing existing security paradigms to their limits. In this post, we’ll consider the age-old question of “Who did what, and when?”

How is AI impacting CMS?

Within CMS, AI has already made an impact through systems such as CMS Chat, enabling faster response and creation times, more robust human decision-making, and a more holistic understanding of the work being created and delivered. As AI systems become more prevalent, this post seeks to help system owners build AI-based services and systems that are secure by default.

As we enable a new wave of decentralized decision-making by building and implementing AI systems, paying attention to “who (or what) did what action, and when?” will be crucial. While AI is a new technology at CMS, implementing it safely requires that we increase our awareness of fundamental identity practices and develop mature processes around them.

Zero Trust provides a secure foundation

At the heart of securing this evolving landscape is one of the Zero Trust Pillars: Identity. Its core tenet, "never trust, always verify", demands that every access request, regardless of origin, be explicitly validated. This mandates granular permissions, constantly checked against up-to-date policies and based strictly on necessity.

While organizations have diligently applied Zero Trust principles to human users and then to Non-Person Entities (NPEs) like applications and services, the autonomous nature of AI agents introduces a critical new dimension. A mature identity process and robust automation are not silver bullets, but they are indispensable foundations for addressing the complex identity challenges AI presents.

Example of an AI identity challenge

The true identity dilemma for AI agents becomes apparent when we consider their access context. Imagine a chatbot embedded in a company's web portal. If a user asks about publicly available service information, the interaction is benign.

But what happens when the user queries sensitive personal account details, such as a balance or pending charge? Does the chatbot possess elevated "super-user" privileges, able to see all customer accounts, only filtering its output based on the requesting user's permissions? Or does it dynamically assume the precise role and permissions of the specific user, gaining access only to what that user is authorized to see? This distinction is paramount, illustrating the complex challenge of permissioning for autonomous AI actions.
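
To make this distinction concrete, here is a minimal Python sketch contrasting the two models. The account store, tokens, and function names are hypothetical placeholders rather than any real CMS service; the point is why forwarding the user's own scoped credential is preferable to filtering the output of a broad one.

# Minimal, self-contained sketch of the two chatbot access models described
# above. All names (accounts, tokens) are hypothetical placeholders.

# Fake data store standing in for an account service.
ACCOUNTS = {"alice": 120.50, "bob": 87.25}

# Token -> user it was issued to (in reality: a validated JWT or similar).
USER_TOKENS = {"token-alice": "alice", "token-bob": "bob"}


def balance_via_superuser(requesting_user: str) -> float:
    """Anti-pattern: the agent holds a broad credential and only filters
    its output by the requesting user. A prompt-injection or logic flaw
    can expose any account the credential can reach."""
    all_accounts = dict(ACCOUNTS)  # broad read using a super-user credential
    return all_accounts[requesting_user]


def balance_via_delegation(user_token: str) -> float:
    """Preferred pattern: the agent forwards the requesting user's own
    scoped token, so authorization is enforced before any data is read
    and the agent never sees more than that user could."""
    user = USER_TOKENS.get(user_token)
    if user is None:
        raise PermissionError("invalid or expired token")
    return ACCOUNTS[user]


if __name__ == "__main__":
    print(balance_via_delegation("token-alice"))  # 120.5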

While building your systems, define the actions you expect from your users and the actions you explicitly deny. Use those definitions to set guardrails for each user, and implement alerting that activates when a user hits or exceeds those guardrails. In both cases, the key is being able to establish, verify, and track the identity of whoever (or whatever) initiated the request.
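
As a rough illustration of this guardrail-and-alerting pattern, here is a small Python sketch. The role name and action lists are made up; the idea is that expected actions are allowlisted, explicitly denied actions raise an alert, and anything unlisted is denied by default and flagged for review.

# Hypothetical sketch of per-user guardrails: expected actions are
# allowlisted, explicitly denied actions raise an alert, and anything
# outside both sets is treated as a policy gap worth reviewing.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")

EXPECTED = {"beneficiary_rep": {"view_own_claim", "update_contact_info"}}
DENIED = {"beneficiary_rep": {"export_all_claims", "modify_other_claim"}}


def check_action(role: str, action: str, user_id: str) -> bool:
    if action in EXPECTED.get(role, set()):
        log.info("allow role=%s user=%s action=%s", role, user_id, action)
        return True
    if action in DENIED.get(role, set()):
        log.warning("ALERT role=%s user=%s attempted denied action=%s", role, user_id, action)
        return False
    # Neither expected nor explicitly denied: deny by default and flag it
    # so the guardrail definitions can be refined.
    log.warning("ALERT role=%s user=%s attempted unlisted action=%s", role, user_id, action)
    return False


if __name__ == "__main__":
    check_action("beneficiary_rep", "view_own_claim", "user-123")     # allowed
    check_action("beneficiary_rep", "export_all_claims", "user-123")  # alert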

Likewise, when building systems that interact with many different services or APIs, consider how you define your non-person entities and what their expected and explicitly denied actions are. As with human users, implement alerting mechanisms and highly available logging processes that surface when systems attempt to exceed the limits of their abilities.
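
A similar sketch for a non-person entity, again with purely illustrative names: the service identity declares the downstream APIs it is expected to call, every attempt is logged, and calls outside its declared scope are blocked and surfaced for review.

# Hypothetical sketch of a scoped non-person entity: the service identity
# declares the APIs it is expected to call, every attempt is audit-logged,
# and anything outside the declared scope is blocked and surfaced.

import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("npe-audit")


class ScopedServiceIdentity:
    def __init__(self, name: str, allowed_apis: set):
        self.name = name
        self.allowed_apis = allowed_apis

    def call(self, api: str, payload: dict):
        # Record every attempt, allowed or not, in a highly available log.
        if api not in self.allowed_apis:
            audit.warning("ALERT npe=%s blocked call to %s", self.name, api)
            raise PermissionError(f"{self.name} is not scoped to call {api}")
        audit.info("npe=%s called %s payload_keys=%s", self.name, api, sorted(payload))
        # ... perform the real downstream request here ...


if __name__ == "__main__":
    summarizer = ScopedServiceIdentity("claims-summarizer", {"claims.read"})
    summarizer.call("claims.read", {"claim_id": "C-42"})        # expected
    try:
        summarizer.call("claims.update", {"claim_id": "C-42"})  # exceeds scope
    except PermissionError as exc:
        print(exc)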

Overlapping security concerns

Securing these evolving interactions also highlights a critical overlap in vulnerabilities. The OWASP API Security Top 10 has long warned against flaws like Broken Object Level Authorization (BOLA), Broken User Authentication, and Broken Function Level Authorization (BFLA), where inadequate controls allow unauthorized access to data or functions.

Similarly, the OWASP Top 10 for LLM Applications points to Broken Access Control as a paramount risk. While the mechanics might differ — perhaps a sophisticated prompt injection exploits an LLM's trust, rather than a direct API call — the fundamental problem remains. An AI agent, through its capacity for autonomous action, could inadvertently or maliciously access sensitive data, bypass intended security features, or perform actions beyond its authorized scope, making the "never trust, always verify" tenet more critical than ever.
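
As a minimal sketch of what the mitigation looks like at the object level, the example below (with placeholder claims data and a hypothetical ownership model) verifies that the authenticated user owns the requested record before anything is returned, regardless of whether the identifier came from a direct API call or from the model's output.

# Sketch of an object-level authorization check (the BOLA class of flaws)
# applied to an agent tool call. The claim_id may originate from model
# output -- and therefore from a prompt-injection attempt -- so ownership
# is verified against the session's authenticated user on every call.

CLAIMS = {
    "C-1": {"owner": "alice", "status": "pending"},
    "C-2": {"owner": "bob", "status": "paid"},
}


def get_claim(authenticated_user: str, claim_id: str) -> dict:
    claim = CLAIMS.get(claim_id)
    if claim is None or claim["owner"] != authenticated_user:
        raise PermissionError("claim not found or not authorized")
    return claim


if __name__ == "__main__":
    print(get_claim("alice", "C-1"))      # allowed: alice owns C-1
    try:
        get_claim("alice", "C-2")         # blocked: C-2 belongs to bob
    except PermissionError as exc:
        print(exc)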

A practical path forward

The path forward for ISSOs, security professionals, and business leaders is clear, albeit challenging. Before diving deep into AI-specific identity solutions, organizations must first refine their foundational identity practices. This means rigorously understanding existing human and NPE roles and permissions, and actively working to scope them down to the principle of least privilege. When roles enforce least privilege, they shrink the blast radius of any compromise, making the fallout smaller and less impactful. Consistently auditing these assignments and permissions is crucial.

What you can do now

Here are some identity hygiene steps your team can begin right away to ensure you are prepared as the AI revolution surges forward:

  • Reassess your current access policies for conventional users and systems. Are there opportunities to pare them down to least privilege?
  • If you need to get more granular, target your secrets access first.
  • As a team, define and refine role-based access. Aim for policies so precise that users have no valid reason to request changes to those resources.
  • Audit these roles periodically by comparing them to actual calls against your infrastructure (see the sketch after this list).
  • Rigorously move toward modifying production only via code changes that are thoroughly reviewed and tested by clearly defined users.
  • Hold autonomous actions to the same standard: guardrails with explicitly defined expected and denied actions.
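
As one way to approach the periodic audit mentioned above, the sketch below compares each role's granted permissions with the permissions actually observed in infrastructure call logs, then flags the difference as candidates for removal. Role names and permissions are illustrative only.

# Hypothetical role-audit sketch: compare what each role is granted with
# what it has actually used over the audit window and flag unused grants.

GRANTED = {
    "chatbot_agent": {"claims.read", "claims.update", "secrets.read"},
    "report_builder": {"claims.read"},
}

# Permissions actually exercised, derived from infrastructure call logs.
OBSERVED = {
    "chatbot_agent": {"claims.read"},
    "report_builder": {"claims.read"},
}


def unused_permissions(granted: dict, observed: dict) -> dict:
    """Return, per role, permissions that were granted but never exercised."""
    return {
        role: perms - observed.get(role, set())
        for role, perms in granted.items()
        if perms - observed.get(role, set())
    }


if __name__ == "__main__":
    for role, extras in unused_permissions(GRANTED, OBSERVED).items():
        print(f"{role}: consider removing {sorted(extras)}")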

At the end of the day, proactive and fundamental identity hygiene is not just good practice — it's the prerequisite for securely harnessing the power of AI.

About the author

Jonathan (Jono) Sosulska is a Principal Compliance Analyst with the Zero Trust team within CMS. Jono also serves as Vice President of Product at Aquia, and has contributed to the CMS mission through roles as a CMS Application Security Program (CASP) Threat Modeler, Principal Security Architect, and Pipeline Engineer.

About the publisher

The Zero Trust Team works to help CMS implement the Executive Order that requires continuous verification of system users to promote stronger security. We introduce new tools and streamline processes to support the transition to Zero Trust throughout the enterprise.