ARTICLE
26 March 2026

Creating Harmony: AI Governance Playbook

Ward and Smith, P.A.

Contributor

Ward and Smith, P.A. is the successor to a practice founded in 1895. Our core values of client satisfaction, reliability, responsiveness, and teamwork are the standards that define who we are as a law firm. We are an established legal network with offices located in Asheville, Greenville, New Bern, Raleigh, and Wilmington.

During Ward and Smith’s annual In-House Counsel seminar, Mayukh Sircar, a cybersecurity, data privacy, and technology attorney, shared comprehensive guidance on the strategic role of Artificial Intelligence (AI) in the modern business landscape, the key risks associated with implementation, the evolution of AI regulations, and the playbook for AI governance. See Part 1 of this report here.

Developing the AI Governance Process

The first step in the AI governance process is to conduct a comprehensive inventory of existing AI systems and use cases. This involves:

  • Determining the purpose of the AI tool, understanding whether it serves internal operations or customers
  • Conducting an AI audit by partnering with IT and procurement to catalog all AI systems, including shadow IT
  • Facilitating workshops to identify opportunities for AI integration, bringing in leaders from multiple organizational layers to brainstorm, provide feedback, and problem-solve
  • Deploying departmental questionnaires to uncover repetitive, data-heavy, or decision-intensive tasks that could benefit from AI, such as resume screening or fraud detection
  • Mapping use cases to legitimate business purposes to satisfy data privacy requirements

Proportionate Governance for Risk

Finding the right tool to align with the needs of the business is essential. “The key here is applying proportionate governance,” mentioned Sircar. “You should consider how much data a particular tool might need. If there’s a different tool that accomplishes the same goal with less data, that is likely a better way to go.”

Balancing risk with business objectives is an ongoing challenge for legal departments, and the use of AI is no different. AI tools used for resume screening or loan applications should be classified as high risk, as they impact legal and/or material rights.

“High-risk AI tools require strict oversight and formal impact assessments,” Sircar explained. “They may even be prohibited in certain cases.”

Tools that create external marketing copy or internal analytics reports pose a moderate level of risk to organizations. Their output should always be reviewed by a human.

Use cases for low-risk AI tools could include brainstorming and/or summarizing public articles. “These may be governed by the general use policies you already have,” Sircar said. “The key here is to move beyond the features and assess the tools against a legal and operational checklist to see whether they align with the use case and the risks. Basically, determine if the juice is worth the squeeze.”

Questions for legal teams to consider in this context include the level of security held by a vendor and whether the data is encrypted. “If you’re hiring a vendor, you ideally want zero data retention, such that it is explicitly stated the data will be removed after it’s used. If the data is not in their system, of course, it can’t be stolen in the event of a breach,” added Sircar.

Limiting the use of the data is advisable. “You probably want the private instance option so third parties and the public don’t have access to your data,” noted Sircar.

Accuracy and reliability are key metrics. “This refers to known rates of errors, hallucinations and biases,” Sircar explained. “You want to ask the vendor what they’re doing to remedy these issues. If you don’t, you could open up the organization to a negligence claim.”

Vendors should be able to demonstrate compliance with key regulations. Some AI tools, for example, may only be compliant with regulatory requirements in the US and not approved for international use.

Ask the Right Questions for Compliance and Privacy

With the idea that setting a framework for due diligence is paramount, organizations should create a standardized questionnaire for AI vendors. “This is a very in-depth process, combining technical security and ethical review. It should be a mandatory aspect of procurement,” advised Sircar.

The issue of data provenance should be addressed. “Where did the training data come from? Was it lawfully licensed? Is proprietary data involved? These are important questions to consider,” Sircar said.

The vendor should be able to generally explain how the AI reaches its conclusions. Similarly, the vendor should offer transparency and be able to describe how the tool functions.

“Ask the vendor what steps were taken to test for and mitigate biases,” added Sircar. “Request copies of fairness audits, so you’ll know if the vendor tries to address any issues or just turns a blind eye to them.”

Organizations should consider vendor security protocols, such as incident response plans, access controls, and safeguards. Having audit rights is an important means of confirming whether the vendor complies with contractual security and privacy requirements.

“Check if the vendor has the right to audit its service providers and sub-processors,” noted Sircar. “They should be able to provide a summary of the audit or at least certify that the audit met their compliance requirements.”

Data lineage and retention practices should be considered. “Work with your IT team to understand how things like prompts, uploads, outputs, metadata and telemetry are being handled,” advised Sircar.

Transparency and safety controls should factor into the equation. Organizations should maintain the option to turn off high-risk features to safeguard proprietary and/or sensitive information.

Mandating pilot testing is imperative so the tool can be evaluated against real use cases and data. Running a small-scale pilot using actual data can help identify potential issues or vulnerabilities.

“If the vendor isn’t willing to provide a pilot, that’s a big red flag…our general recommendation is to never procure an enterprise AI tool without running a pilot,” Sircar explained.

Sircar then showed a screenshot of the firm’s AI tool comparison spreadsheet to the audience. The evaluation tool should be customized to the realities of the organization, he said, and anyone should feel free to reach out if they would like to discuss what their organization’s spreadsheet should include.

AI Agreement Negotiations

“This is where we can add immense value as attorneys. Do not accept vendor papers at face value, and make sure that AI risks are explicitly and contractually allocated,” added Sircar.

Creating a standard, non-negotiable AI contract addendum is an effective means of addressing concerns and mitigating risk. The addendum should include a data use restriction stating that the vendor and its sub-processors are prohibited from using any customer data to train, develop, or improve an AI model without express written consent.

The document should clearly define IP ownership. The contract must state the organization owns the prompts as well as any outputs that were generated, to the fullest extent permitted by law.

Broad indemnifications should be included. “The vendor should be willing to stand behind their product by covering IP indemnities, data breaches, biases, and negligence in correcting hallucinations. We want the provider to indemnify us in case something goes wrong on their end,” noted Sircar.

With a compliance warranty, the vendor warrants that the system complies with applicable laws, including privacy laws. “You might get pushback here,” Sircar said. “The vendor might say we cannot delete data that goes into the AI.”

Security and audit rights should be negotiated, to encompass topics such as vulnerability management, breach notification timelines, and independent attestation reports. “You’re going to want meaningful audit rights that you can employ if needed,” advised Sircar.

In regard to transparency and change management, the addendum should include language pertaining to model limitations, notice, and review for material updates and changes. Additionally, the organization should maintain the right to disable high-risk features, object to changes, and terminate the agreement.

Practical Policies for Imperfect People

Developing internal AI governance policies is the first line of defense. “There’s no need to reinvent the wheel,” explained Sircar, “so just consider integrating AI rules into policies that are already in place, such as your acceptable use policy, data classification policies, or information security policy.”

Policy updates should include clear prohibitions on inputting personally identifiable information, trade secrets, or privileged data into public AI tools. Since AI glasses integrate with social media, for example, it may be worthwhile to add a prohibition on their use to the existing social media policy.

Employees should be trained on AI risks and compliance. “They need to understand there is a crucial distinction between free public tools and enterprise-grade solutions,” noted Sircar.

For substantive work generated by AI, human verification should be mandatory. Governance policies should include disclosure rules, providing clear guidance on when employees must disclose that content was created by AI.

A cross-functional team should be established for AI governance. This committee should be tasked with policy oversight and be able to adapt to regulatory or operational changes.

“The legal department can transform its role from that of a gatekeeper into a strategic enabler. A robust governance framework can allow you to adopt AI sustainably, defensively, and strategically. This is how we can transform a potential liability into a competitive advantage,” Sircar concluded.

Q&A

In response to a question from the audience, Sircar mentioned that legal teams should push for as much transparency as possible and exercise audit rights when needed. Audit rights should include the right to evaluate incidents and security issues.

Another audience member asked if the law is keeping up with the rapid pace of Agentic AI innovation. “The law moves at a glacial pace and technology is always going to outpace the law,” Sircar said. “Data privacy rules are coming on board fairly quickly, however, and things are changing on a weekly basis.”

Data processing agreements are essential and can serve as a baseline, but generally these agreements are not sufficient to protect sensitive information or copyrights.

AI-generated outputs cannot be copyrighted; however, it may be possible to copyright a custom-trained AI model.

If a service provider is unwilling to provide indemnification for biases and hallucinations, the business should consider seeking another provider.

The Digital Omnibus may force US companies to comply with EU AI regulations. Some of the recent changes may amount to smaller regulatory hurdles for small- and mid-sized companies, however.

When a vendor refuses to negotiate over the use of data for training, the organization should analyze the risk and develop a governance plan that limits the inclusion of confidential information.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

