ARTICLE
7 August 2025

Mitigating Risk For AI Integration

Barnard Inc.

Contributor

Barnard Inc is a full-service commercial law firm, with services covering corporate and compliance, intellectual property, construction, mining and engineering, property, fiduciary services, commercial litigation, M&A, restructuring, insurance, and family law. Our attorneys advise listed and private companies, individuals, and local and foreign organisations across South Africa, Africa and internationally.

Your new AI tool, trained on months of client insights, LinkedIn posts, and your best product copy, starts generating pitch decks, summaries, even code. The team is thrilled. Clients are impressed. You sleep a little easier knowing automation is finally pulling its weight after a significant capital investment.

Until a month later, when a letter arrives. A competitor accuses you of lifting their content. A freelance copywriter says the AI output infringes her copyright. Your cloud provider hints that your prompts might violate its terms and conditions, and you realise too late that while the AI works beautifully, the legal groundwork never kept pace.

Small businesses don't need an in-house legal team to use AI responsibly. But they do need a basic playbook: one that blends copyright, data ethics, contract discipline and risk awareness.

AI doesn't own anything

South African copyright law does not explicitly exclude artificial intelligence as an author, but the statutory framework and judicial interpretation suggest a position that only natural or juristic persons may hold copyright. The Copyright Act 98 of 1978 consistently refers to the "author" as a "person" and, in the case of computer programs, defines authorship as the person who exercised control over the making of the work. This terminology presupposes legal personality, which an AI system does not possess. South African courts have not recognised non-human authorship, and our law, aligned with the Berne Convention and comparable jurisdictions such as the United Kingdom and United States, views originality as requiring human intellectual effort.

Where AI is used as a tool under human direction, copyright may vest in the person who made the necessary arrangements for the creation of the work, but never in the AI system itself. That means outputs generated entirely by AI, whether text, image, video or code, may fall into a legal grey zone. If the model "writes" the content, there may be no copyright in it at all. If it simply assembles or modifies existing materials, someone else's copyright might still subsist.

What's more, popular AI tools like ChatGPT or Midjourney operate under broad licence terms. OpenAI, for instance, lets users own outputs provided they comply with its terms and avoid misuse. If your prompts contain third-party data, or if your team feeds in client-confidential content, those assurances may not hold.

Keep it clean and traceable:

  1. Log prompts and outputs. If challenged, you'll need to show how the AI arrived at the work.
  2. Avoid uploading third-party or confidential data. A breach may occur even if the result seems anonymised.
  3. Never use AI to replicate someone else's branding, slogan, or style, especially in competitive industries.
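The first step above, logging prompts and outputs, can be as simple as an append-only audit file. The sketch below is one illustrative way to do it in Python; the file name, field names and `log_interaction` helper are all assumptions, not part of any particular AI platform's tooling.

```python
import datetime
import hashlib
import json
from pathlib import Path

# Illustrative location for the audit log; one JSON record per line (JSONL).
LOG_FILE = Path("ai_audit_log.jsonl")

def log_interaction(tool: str, prompt: str, output: str, user: str) -> None:
    """Append one prompt/output pair to an append-only JSONL audit log."""
    record = {
        # UTC timestamp so entries are comparable across time zones
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "prompt": prompt,
        # Hash of the output lets large deliverables be stored elsewhere
        # and still be matched to this log entry later.
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical usage: record a drafting request and the draft it produced.
log_interaction("example-model", "Summarise Q3 client insights", "Draft summary...", "jane")
```

A log like this is what lets you show, if challenged, which prompt produced which output, by whom, and when.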

Training data: the hidden copyright minefield

Many generative AI models are trained on enormous datasets scraped from the public web. While this makes them powerful, it also makes them vulnerable to copyright claims, especially in light of lawsuits like Getty Images v. Stability AI and The New York Times v. OpenAI.

As a business user, you may not control the training process. But you do control how you use the tools. If your AI platform creates content that is "substantially similar" to a protected work, or if it reproduces stylistic elements traceable to another source, liability can land squarely on you.

Mitigate the risk:

  1. Prefer enterprise AI tools with curated, licensed datasets.
  2. Use AI outputs as drafts, not final deliverables, and apply human review before any output is used.
  3. When publishing, always credit human contributors and verify originality, especially for marketing or client-facing work.

Contracts still matter

If your freelance designer uses AI to generate logo options, who owns the final version? If your developer integrates a GPT-based module into your product, is the code clean? If your marketing agency builds you an AI chatbot, who's liable for its responses? These questions don't need complex answers. They just need clear contracts.

Focus on three fronts:

  1. Employment/freelance agreements: State who owns AI-generated outputs and who assumes risk for infringement.
  2. Vendor agreements: Insist on warranties that outputs are original, lawful, and free of third-party claims.
  3. AI platform terms: Read the fine print. Your licence to use, publish or monetise outputs depends on it.

Before 'Launch Day' – Checklist

Business automation is exciting, but legal clarity is essential. Work through the following checklist before any product or service ships:

  1. Inventory audit – What AI tools are in use? Who uses them and for what purposes?
  2. Prompt review – Are any inputs confidential, personal, or third-party protected?
  3. Output clearance – Are key deliverables reviewed for similarity, bias, or attribution gaps?
  4. Policy gap check – Is your internal AI use policy up to date and understood?

If an AI output sparks trouble, you need to act fast. Take down the content, investigate the prompt, and contact the platform provider. Early responsiveness often prevents full-blown disputes.

Clients trust your outputs because they believe you own and control them. Investors back AI-enabled businesses because they assume that IP ownership is clear-cut. Regulators assess compliance based not on novelty but on risk mitigation.

So, the next time your AI tool feels like a secret weapon, ask whether your agreements, prompts, and review systems are strong enough to survive scrutiny. Ensuring that every legal box is ticked is not just good practice; it's what protects your business when someone decides to look under the hood.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
