ARTICLE
18 March 2026

AI Security And Access Controls: Best Practices For 2026 And Beyond

Brooks Kushman

Contributor

Since the firm's founding in 1983, Brooks Kushman has built a national reputation as a premier intellectual property law firm. We have accomplished this by attracting the best talent and by working closely with clients to understand how their businesses really operate and what really drives their companies and brands.

Artificial intelligence has rapidly shifted from experimental tooling to embedded enterprise infrastructure. In 2025, organizations across industries moved AI systems into production environments that influence regulated decisions, process sensitive data, and interact directly with customers and employees. This transition has elevated AI security from a technical consideration to a core governance and compliance obligation.

Among the many challenges created by enterprise AI adoption, two risks now stand out as particularly urgent: indefinite retention of AI training and interaction data, and weak or outdated access controls around AI systems. Together, these issues create significant exposure across privacy, cybersecurity, litigation, and regulatory enforcement. Organizations that fail to address them risk turning AI innovation into a long-term liability rather than a competitive advantage.

The New Risk Landscape Created by AI Systems

Modern AI tools quietly collect and retain vast amounts of information. Prompts, uploads, model outputs, metadata, and interaction logs are often stored far longer than users realize and, in some cases, reused for model training by default unless customers opt out. These practices introduce several compounding risks.

Key enterprise risks created by AI data retention include:

  • Expanded breach exposure as retained prompts and outputs increase the size and value of attack surfaces
  • Heightened regulatory scrutiny under laws requiring data minimization, retention limits, and deletion rights
  • Shadow storage created by AI notebooks, agent memory, and multi-agent workflows outside traditional data inventories
  • Difficulty responding to litigation, investigations, and regulatory audits without clear retention and deletion controls

Leading organizations are responding by implementing short retention windows, automated purging, enterprise-grade AI platforms, opt-out guarantees for training use, and detailed logs that track what data was uploaded, where it resides, and who accessed it.
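The retention practices described above can be sketched in code. The example below is a minimal, hypothetical illustration of a short retention window with automated purging; the record fields, the 30-day window, and the in-memory store are all assumptions, since real deployments would use a database or the vendor's retention API.

```python
from datetime import datetime, timedelta, timezone

# Assumed policy: purge AI interaction records older than 30 days.
RETENTION_WINDOW = timedelta(days=30)

# Hypothetical records tracking what was uploaded, where it resides,
# and who uploaded it (mirroring the logging practices described above).
records = [
    {"id": 1, "uploaded_at": datetime.now(timezone.utc) - timedelta(days=45),
     "user": "analyst-1", "location": "workspace/prompts"},
    {"id": 2, "uploaded_at": datetime.now(timezone.utc) - timedelta(days=5),
     "user": "analyst-2", "location": "workspace/outputs"},
]

def purge_expired(records, now=None):
    """Split records into kept and purged sets based on the retention window."""
    now = now or datetime.now(timezone.utc)
    kept, purged = [], []
    for rec in records:
        if now - rec["uploaded_at"] > RETENTION_WINDOW:
            purged.append(rec)  # in production: delete and write an audit entry
        else:
            kept.append(rec)
    return kept, purged

kept, purged = purge_expired(records)
```

In practice the purge step would run on a schedule, and each deletion would itself be logged so the organization can demonstrate compliance during an audit.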

Why Traditional Access Controls Fail in AI Environments

AI systems break many assumptions embedded in legacy access control models. In traditional enterprise software, access is often limited to discrete applications and datasets. AI systems, by contrast, aggregate information across sources, generate new content, and increasingly act autonomously.

A single over-permissioned user or AI agent can access sensitive training data, retrieve historical prompts, expose outputs to unauthorized audiences, or initiate actions beyond their intended scope. As AI agents become more capable, the risk multiplies. Agents may operate continuously, interact with other systems, and make decisions at machine speed, all while relying on the permissions granted to them by humans.

This reality makes strong access control frameworks essential. Organizations can no longer rely on informal approvals, shared credentials, or static permission templates. Instead, AI systems require structured, auditable, and enforceable access models designed specifically for their unique risk profile.

Role-Based Access Control as the Foundation

Role-Based Access Control, or RBAC, has emerged as the control layer that AI systems were previously missing. RBAC governs access to models, data, notebooks, outputs, and system capabilities based on defined roles rather than individual users.

Effective RBAC programs for AI systems typically include:

  • Granular permissions for prompts, outputs, notebooks, and model execution
  • Defined roles for developers, users, reviewers, administrators, and AI agents
  • Governance matrices that replace ad hoc or template-based permissions
  • Regular reviews to prevent over-permissioning and privilege creep
  • Clear separation between human access and autonomous agent permissions

In an AI context, RBAC prevents unauthorized access to training data, reduces insider risk, and supports audits by clearly defining who can do what within AI environments. Importantly, RBAC applies not only to people, but also to AI agents acting on their behalf.
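A minimal RBAC sketch makes the pattern concrete. The role names, permission strings, and assignments below are illustrative assumptions, not a specific product's API; the point is that access checks resolve through a defined role rather than through the individual user, and that AI agents receive their own, narrower role.

```python
# Permissions are keyed by role, not by individual user or agent.
ROLE_PERMISSIONS = {
    "developer": {"run_model", "read_prompts", "read_outputs"},
    "reviewer": {"read_outputs"},
    "administrator": {"run_model", "read_prompts", "read_outputs", "manage_roles"},
    "agent": {"run_model"},  # autonomous agents get a separate, narrower role
}

# A governance matrix maps each principal (human or agent) to one role.
ASSIGNMENTS = {"alice": "developer", "bob": "reviewer", "billing-agent": "agent"}

def is_allowed(principal: str, action: str) -> bool:
    """Resolve the principal's role, then check the role's permission set."""
    role = ASSIGNMENTS.get(principal)
    return role is not None and action in ROLE_PERMISSIONS.get(role, set())
```

Because every check flows through the role tables, periodic reviews for privilege creep reduce to auditing two small data structures instead of scattered per-user grants.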

AI Agents and the Need for Permission Discipline

AI agents represent one of the fastest-growing sources of security risk. Unlike traditional tools, agents can operate across sessions, retain context, and interact with multiple systems simultaneously. Without clearly defined permissions, an agent may access confidential data it was never intended to handle or take actions with serious downstream consequences.

Best practices require treating AI agents as privileged users. Each agent should have a defined role, explicit permissions, and technical constraints that limit its scope of action. Organizations should avoid granting agents broad system access simply for convenience. Instead, permissions should align with the narrowest set of tasks the agent needs to perform.

Equally important is ongoing review. As agents evolve, permissions that were once appropriate may become excessive. Regular audits help ensure that agents continue to operate within authorized boundaries.
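The least-privilege and review practices above can be sketched as follows. The agent name, task names, and usage tracking are hypothetical; the sketch shows an agent constrained to an explicit task list, with a simple audit that flags granted-but-unused permissions as candidates for removal.

```python
class AgentProfile:
    """An AI agent treated as a privileged user with an explicit task scope."""

    def __init__(self, name, allowed_tasks):
        self.name = name
        self.allowed_tasks = set(allowed_tasks)
        self.used_tasks = set()  # tracked to support periodic reviews

    def perform(self, task):
        # Deny anything outside the narrowest set of tasks the agent needs.
        if task not in self.allowed_tasks:
            raise PermissionError(f"{self.name} is not authorized for {task!r}")
        self.used_tasks.add(task)
        return f"{task} executed"

def audit_unused(agent):
    """Flag permissions that were granted but never exercised."""
    return agent.allowed_tasks - agent.used_tasks

# Hypothetical usage: an agent scoped to two tasks uses only one of them.
agent = AgentProfile("triage-agent", {"summarize_ticket", "send_email"})
agent.perform("summarize_ticket")
```

Here `audit_unused(agent)` would surface `send_email` as a permission to reconsider at the next review, illustrating how unused grants become visible rather than accumulating silently.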

Audit Logging, Legal Holds, and Evidence Preservation

AI systems now generate electronically stored information that courts increasingly expect organizations to preserve. Prompts, outputs, and metadata may be discoverable in litigation, even when users believed interactions were temporary or informal.

In a first-of-its-kind ruling, the Southern District of New York held in United States v. Heppner that a defendant's conversations with a publicly available generative AI tool are not protected by the attorney-client privilege or the work-product doctrine when they are not made at the direction of counsel. Judge Rakoff reasoned that open AI platforms are neither attorneys nor confidential intermediaries, and that their terms of service permit data review and third-party disclosure, making AI chats equivalent to communications with any other third party. As a result, sharing legal analysis or privileged information with a public AI system can waive privilege and confidentiality protections, underscoring that longstanding privilege principles apply with full force even as generative AI becomes a new technological frontier.

Additionally, missing audit logs may create serious risks. Without reliable records, organizations may be unable to respond to regulatory inquiries, defend enforcement actions, or comply with legal holds. Courts have already ordered preservation of AI interaction logs, overriding default deletion settings in some cases.

Best practices include implementing logging frameworks that track AI activity across systems, capturing who accessed what, when, and through which model. Legal hold processes should explicitly cover AI tools, including personal accounts used for work. Retention and deletion policies must align with both regulatory obligations and litigation risk management.
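A logging framework of the kind described above might capture entries like the following. The field names are an illustrative schema, not a standard, and a production system would ship entries to append-only or immutable storage rather than a Python list.

```python
import json
from datetime import datetime, timezone

# Append-only audit log capturing who accessed what, when, and via which model.
audit_log = []

def log_ai_access(user, action, resource, model):
    """Record one AI interaction with the who/what/when/which-model fields."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,      # e.g. "prompt", "download_output"
        "resource": resource,  # e.g. a document or notebook identifier
        "model": model,
    }
    audit_log.append(entry)
    return json.dumps(entry)  # in production: forward to tamper-evident storage

# Hypothetical usage.
log_ai_access("alice", "prompt", "notebook-7", "enterprise-model-x")
```

Because each entry is timestamped and serializable, the same records can support regulatory inquiries, legal hold preservation, and internal access reviews without additional reconstruction.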

Data Minimization and Enterprise AI Controls

Strong access controls must be paired with disciplined data practices. Organizations should limit what data enters AI systems in the first place. Sensitive personal information, protected health data, trade secrets, and client confidential material should be anonymized, redacted, or excluded whenever possible.
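A redaction pass of the kind described above can be sketched with simple pattern matching. The two patterns below (email addresses and US SSN-style numbers) are illustrative only; production redaction needs far broader coverage, context-aware detection, and human review before material reaches an AI system.

```python
import re

# Illustrative patterns for sensitive data; real deployments need many more.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Running such a filter at the boundary, before a prompt or upload leaves the organization, enforces the principle that sensitive material should be excluded or masked rather than trusted to the AI vendor's retention practices.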

Enterprise-grade AI tools offer meaningful advantages, including contractual guarantees that user data will not be used for model training, documented security controls, and compliance certifications. For highly sensitive use cases, local or on-premises deployments may be appropriate.

Training employees is equally important. Clear guidance on what can and cannot be entered into AI systems reduces accidental exposure and reinforces accountability across the organization.

Regulatory Pressure Is Accelerating

AI security and access controls are no longer optional safeguards. Regulators and courts increasingly expect organizations to demonstrate layered security, documented governance, and reasonable measures tailored to AI risks. State privacy laws, AI governance statutes, and global frameworks such as the EU AI Act all reinforce the need for structured controls, transparency, and accountability.

Conclusion

AI security is no longer just about protecting models. It is about controlling data, defining access, preserving evidence, and ensuring accountability across complex, evolving systems. Indefinite data retention and weak access controls are among the fastest-growing sources of AI risk, but they are also among the most addressable.

By adopting disciplined retention practices, implementing robust RBAC, treating AI agents as privileged actors, and aligning logging and governance with legal expectations, organizations can enable AI innovation without sacrificing security or compliance.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
