ARTICLE
30 March 2026

Chatbots, Counsel, And Caution: What Employers Should Learn From A New AI Lawsuit

FBT Gibbons



Nippon Life Insurance Company recently filed a lawsuit that raises a timely question many employers and their counsel are already grappling with: When does the use of artificial intelligence (AI) tools cross the line from general information-seeking into the realm of legal advice? The complaint alleges that OpenAI’s chatbot, ChatGPT, provided unlicensed legal advice by purportedly inducing an employee to breach a valid settlement agreement and attempt to revive previously settled litigation. Although the lawsuit is in its earliest stages, it underscores growing concerns about how employees and employers are turning to AI to navigate employment-related disputes.

The lawsuit arose out of a benefits dispute under an employer-sponsored group long-term disability policy. The matter ended with a settlement in which the employee forever and irrevocably released the insurer from any claims or judgments connected to the employer’s long-term disability policy or the employee’s claim. In return, the insurer issued a settlement payment to the employee. Nippon’s complaint further alleges the employee raised concerns with her lawyers shortly after the settlement, citing potential errors and omissions that she believed impacted the settlement agreement. When the employee’s attorneys declined to challenge or reopen the settlement, the employee fired her attorneys and relied on ChatGPT for guidance.

For employers, the important takeaway from this high-profile case is what it reveals about how employees are increasingly using AI to shape workplace conflict and negotiation.

Employers’ Conundrum

As many employers have already witnessed, employees are using AI tools in many ways to assist with employment-related decisions, including: seeking advice on how to respond to performance feedback; disputing discipline; requesting a reasonable accommodation; engaging in protected activity; and negotiating employment terms. Employees also use AI tools to draft communications ranging from texts and emails to HR and managers to correspondence intended for outside audiences. Because AI can quickly translate a rough concern into a polished narrative, it can make employees feel more prepared and confident when communicating with management.

Yet AI-assisted employee communications are not infallible: while some are thoughtful and productive, others may be riddled with errors, misunderstandings, and misaligned assumptions. Employers need to be thoughtful in their response: they must continue to acknowledge concerns, route them appropriately, and follow established procedures.

Practical Takeaways for Employers

The lawsuit against OpenAI is a cautionary tale from which several lessons can be distilled. First, employers should avoid asking AI tools “what should we do” questions about employment decisions or using them to evaluate legal risk. Unlike communications with legal counsel, non-attorney interactions with AI tools are not protected by attorney‑client privilege and may be discoverable in future litigation. Risks abound when attempting to use an AI tool as a replacement for legal counsel. Even well-intentioned reliance on AI-generated guidance can create unintended exposure or lock in flawed assumptions before counsel is consulted.

Second, employees may be drafting messages with AI while a conversation is unfolding. That means employer communications may be met with faster follow-ups, more polished language, and increasingly formal framing. Employers should treat AI-assisted communications like any other complaint or request by following the normal escalation channels. If legal questions arise, employers should consult legal counsel — not an AI tool.

Finally, employers should ensure that policies and training clearly address AI use so that managers and HR professionals understand what is permitted and what is not. Many organizations have adopted high-level AI guidance, but this case invites a closer look at whether internal policies should specify which AI tools may be used, for what purposes, and with what safeguards. This is especially important when employment disputes, legal matters, or sensitive data are involved.

Generative AI is not fully replacing HR or legal functions, but it is reshaping how workplace concerns are raised, framed, and escalated — often more quickly and with greater polish than employers have historically seen. As a result, employers are well served to pause before relying on AI‑generated guidance, whether from employees or management. These developments raise practical and legal questions that are highly fact‑specific and evolving. Employers that respond with thoughtfulness, consistent procedures, and clear internal guardrails around AI use and that seek experienced employment counsel when legal issues arise will be better positioned to manage risk while maintaining positive workplace relations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

