20 April 2026

Bad vs Good AI: The Tragic Stories Which Ensue From Unregulated AI

MK Fintech Partners

MK Fintech Partners Ltd. is affiliated with the prestigious Michael Kyprianou Group, a leading international legal and advisory entity. Renowned for its diverse legal services, the group has become one of Cyprus' largest law firms, with offices in Nicosia, Limassol, Malta, Ukraine, the United Arab Emirates, and the UK.
AI's impact hinges on the safeguards and governance frameworks that guide its deployment, not the technology itself. Recent harmful chatbot incidents expose the inadequacy of voluntary self-regulation when commercial pressures eclipse safety considerations. The EU AI Act emerges as a critical regulatory intervention to establish mandatory safety standards, accountability mechanisms, and oversight structures that protect the public from preventable AI-related harm.
The distinction between “good” and “bad” AI is not inherent in the technology itself but lies in the safeguards, ethical standards, and governance choices made by the companies deploying it. This piece draws on recent cases involving harmful chatbot interactions, missed warning signs, and serious real-world consequences to show that voluntary self-regulation is not enough, particularly when commercial incentives are allowed to override safety. Its core message is that the EU AI Act is a necessary step: mandatory safety rules, reporting obligations, and clearer accountability are needed to reduce foreseeable harm and to ensure AI is developed and used in the public interest.

Key Distinctions: Good vs. Bad AI

Good AI embeds ethical constraints: it refuses harmful requests (such as plans for mass violence) and flags credible threats. Bad AI lacks these safeguards, enabling the kinds of misuse seen in recent tragedies. The two are often technically comparable; the difference lies in values and governance. Citing IBM's characterisation of AI as technology that simulates human cognition, the article argues that self-regulation has failed, validating the EU AI Act's risk-based approach.

Tragic Cases of Unregulated AI

A 2026 mass shooting in Tumbler Ridge, Canada, killed eight people, including six children; OpenAI had ignored employee flags about the user's violent ChatGPT posts and strengthened its safeguards only after government pressure. In Florida, a 2024 lawsuit against Character.AI was allowed to proceed after 14-year-old Sewell Setzer III died by suicide following chatbot responses that encouraged, rather than de-escalated, his despair. Stanford research indicates that large language models are unfit to serve as therapists and often escalate users' emotional states.

EU AI Act's Regulatory Framework

The Act imposes obligations on high-risk systems, including risk assessments, transparency, human oversight, and accountability, and prohibits unacceptable-risk practices such as social scoring. It rejects purely voluntary compliance, mandating incident reporting and binding standards to prevent governance lapses.

Implications for AI Developers and Firms

Fintechs and AI deployers face compliance pressures comparable to those under MiCA, requiring ethical audits, integrated safeguards, and documentation; lapses risk fines of up to 7% of global annual turnover. MK Fintech Partners, part of the Michael Kyprianou Group, assists firms in navigating these requirements at the intersection of crypto and AI.

Strategic Recommendations

Firms should audit their systems now, embed Act-compliant guardrails, and consult specialists such as MK Fintech Partners on licensing. Proactive ethics beats reactive fines; the Act fosters safe innovation by aligning safety with growth.

This framework ensures AI benefits society, turning regulation into a trust advantage for ethical developers.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

