ARTICLE
3 October 2025

AI-washing and cyber-washing: key legal and regulatory enforcement risks for Australian organisations

Corrs Chambers Westgarth

As artificial intelligence (AI) and cyber security technologies become increasingly central to both business operations and consumer engagement, organisations are moving to highlight their digital technology capabilities.

In the absence of specific legislation in Australia, however, organisations that exaggerate or misrepresent such capabilities (a practice known as 'AI-washing' or 'cyber-washing') risk breaching obligations under the Australian Consumer Law (ACL) and other regulatory regimes, and could face significant financial penalties.

What is AI-washing?

AI-washing involves making misleading claims about a business' use of AI, including exaggerating the extent to which a business' products, services or operations use AI. Examples of AI-washing include:

  • claiming services are powered by AI when they rely on manual processes;
  • claiming a chatbot is powered by AI when it relies on keyword matching (i.e., feeding users preset responses triggered by particular words or phrases); and
  • exaggerating how complete or operational an AI solution is.

Conversely, 'AI-hushing' involves omitting or downplaying information about a business' use of AI.

What is cyber-washing?

Cyber-washing involves making misleading claims about a business' cybersecurity credentials, including exaggerating the effectiveness of its cybersecurity measures or data privacy practices. Examples of cyber-washing include:

  • making vague, unsubstantiated claims about the expertise of cybersecurity staff, sophistication of cybersecurity technology, or investment in cybersecurity;
  • failing to implement data protection measures set out in a privacy protection policy; and
  • misrepresenting the cause of a data breach, including by downplaying vulnerabilities and exaggerating the complexity of a threat actor's methods.

The regulation of AI in Australia

There is currently no legislation in Australia that specifically regulates AI, meaning that businesses will continue to be subject to a combination of existing legislative regimes in respect of their use, and marketing, of AI. While the Australian Government has taken steps toward a new regulatory environment for AI, Australia currently operates on a voluntary framework. In effect, this creates statutory ambiguity as to the requirements that apply to the use of AI and sets Australia apart from jurisdictions with enforceable regimes, such as Canada's Artificial Intelligence and Data Act and the European Union's Artificial Intelligence Act.

To bridge the gap between the proliferation of AI and a lack of legislation, in November 2019 the Australian Government released Australia's AI Ethics Principles, a voluntary framework intended to prompt organisations to use AI-enabled systems in a safe, secure and reliable way by incorporating values such as 'fairness', 'privacy protection and security', and 'transparency and explainability'. In September 2024, the Government published both the Voluntary AI Safety Standard (Voluntary Standard) and the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper).

The Voluntary Standard contains ten voluntary guardrails, covering matters such as testing, transparency and accountability, for organisations throughout the AI supply chain. The Proposals Paper, which noted that voluntary guardrails are insufficient to prevent harms arising from the use of AI, proposed ten mandatory guardrails applicable to 'high-risk AI'. These largely replicate the Voluntary Standard but would require organisations to certify their compliance. In an interim report released in August 2025, however, the Productivity Commission called for a pause on work on the mandatory guardrails, warning that new regulation could stifle the development of AI and hinder its benefits.

AI-washing and other laws and regulations

Despite Australia's lack of AI-specific laws, regulators have cautioned that in addition to raising risks of misinformation, unintended discrimination, and data security and privacy breaches, the use of AI is already regulated by technology-neutral consumer protection laws, directors' duties laws, and financial services and credit licensee obligations, such as the obligation to provide licensed services 'efficiently, honestly and fairly'.

Businesses should be mindful that these laws and obligations will typically apply to AI deployers (i.e. individuals or organisations that use AI systems within their products or services) rather than AI developers, who are often able to contractually exclude liability arising from the use of their AI products and services.

Consumer protection laws

The ACL contains technology-neutral prohibitions on engaging in misleading or deceptive conduct (or conduct that is likely to mislead or deceive) in trade or commerce, and on making false or misleading representations about goods or services.

Under the ACL, making false or misleading representations about a business' technological capabilities carries significant penalties: for a body corporate, the greater of a fine of up to A$50 million, three times the value of the benefit obtained from the breach or, where that value cannot be determined, 30% of the company's adjusted turnover during the breach period.

Take AI-powered chatbots as an example: a business could breach the ACL if a chatbot produced 'hallucinations' (i.e. outputs containing fabricated information presented as fact) that conveyed false or misleading information about consumers' rights or about the business' products or services. This could occur if, for example, a business uses a chatbot to handle warranty claims, and the chatbot mistakenly claims that a customer's warranty has expired or fails to acknowledge the existence of statutory guarantees. Further, an AI-powered chatbot could collect client or employee data by saving users' inputs or 'scraping' information from the systems in which it is deployed. Without proper restrictions, that information could be reproduced in outputs to other users.

Financial services and credit licensee obligations

In October 2024, the Australian Securities and Investments Commission (ASIC) published a report entitled Beware the gap: Governance arrangements in the face of AI innovation, which reviewed how 23 Australian financial services licensees and credit licensees use and plan to use AI. In general, the Beware the gap report found gaps in licensees' governance arrangements for managing some AI risks.

As a result, the report encourages financial services and credit licensees to take measures including:

  • developing specific policies to address risks to privacy, security and data quality when using AI, particularly given the rapid adoption of AI and the overall shift towards more complex and opaque AI techniques, which can pose new challenges for risk management; and
  • proactively assessing novel AI-specific risks such as algorithmic bias, an issue which has previously resulted in an insurance company receiving significant penalties after its pricing algorithm failed to apply promised discounts.

Trade marks

Businesses that promote AI-related goods or services should also be mindful not to engage in AI-washing in their branding. If a business attempts to register a trade mark which involves AI-related words or graphics that are exaggerated beyond the business' AI capabilities, that trade mark could be rejected on the basis that it is:

  • contrary to law under section 42 of the Trade Marks Act 1995 (Cth) (Trade Marks Act), because it breaches the prohibitions on misleading or deceptive conduct under the ACL; or
  • likely to deceive or cause confusion under section 43 of the Trade Marks Act, because it overstates the involvement of AI in the applicant's goods or services offering.

Advertising and marketing

While Australian regulators have signalled their concerns about AI-washing, they have not yet released guidance on advertising or marketing claims about AI. However, in December 2023, the Australian Competition and Consumer Commission (ACCC) published guidance on making environmental claims intended to mitigate 'greenwashing' (i.e. misleading claims about a business' environmental sustainability), another practice regulated through the lens of misleading or deceptive conduct laws.

Based on this guidance, when making claims about AI, businesses should, among other things:

  • make accurate and truthful claims, which genuinely reflect the level of AI use in a business' products, services or operations;
  • not hide or omit important information, for example that manual or pre-programmed processes run in the background of 'AI' services; and
  • use clear and easy-to-understand language in advertising and marketing materials, including by avoiding overly technical language which could result in consumers being confused about the role of AI in a business' products, services or operations.

The use of AI in legal practice in Australia

To date, the focus on AI in Australian courts has predominantly been on the use of generative AI in legal practice and as a litigation tool. While practice notes and guidelines concerning AI are now in place in several jurisdictions, there is no uniform approach to its use (beyond the fact that all jurisdictions are taking a cautious approach). In the Supreme Court of NSW, for example, a practice note prescribes when generative AI may be used.

The Federal Court, however, is considering the practices of other courts with a view to balancing the interests of the administration of justice against 'the responsible use of emergent technologies in a way that fairly and efficiently contributes to the work of the Court'. In respect of solicitor practice, the Law Society of NSW, the Legal Practice Board of Western Australia and the Victorian Legal Services Board and Commissioner recently released a joint statement setting out common principles to assist lawyers, who are subject to the Legal Profession Uniform Law, in their use of AI. The guidance is based on the Australian Solicitors' Conduct Rules and encourages lawyers to, among other things, learn the capabilities and limitations of the AI tools they use and to properly disclose their use of AI to clients, including how that use is reflected in costs.

Consistent with the experience overseas, there is a growing number of decisions in Australia in which solicitors have been reprimanded for the inappropriate use of AI, usually involving the citation of fabricated or false cases or authorities. A further trend noticed by the courts is the use of AI by self-represented litigants. In Queensland, guidelines on the responsible use of generative AI have been published specifically for non-lawyers.

In relation to case law, while there have been some Australian cases involving the operation of pricing algorithms, we are yet to see a substantive Australian proceeding with AI at the heart of the litigation. That said, the overseas experience is likely to be indicative of the types of claims we can expect to see in Australia in due course. Recent examples of AI-washing-related litigation in the United States include:

  • the US Securities and Exchange Commission (SEC) charging certain investment advisers for allegedly exaggerating in their marketing the extent to which they use AI;
  • the SEC charging a restaurant technology company which failed to disclose that its AI product (that purportedly removed the need for humans to take orders) in fact required human intervention; and
  • an ongoing securities class action against an engineering company, catalysed by the publication of a short-seller report alleging misleading statements in relation to its AI capabilities.

Regulators and courts in Australia and overseas are increasingly concerned with the risks to consumers presented by the rapid development of generative AI. Businesses should take particular care to clearly understand how any AI programs integrated into their services operate, so that any statements made to consumers are accurate and truthful.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
