ARTICLE
11 June 2025

Artificial Intelligence And Workplace Harassment: Insights From Carranza v. City Of Los Angeles

Kronick Moskovitz Tiedemann & Girard

Contributor

Kronick is a full-service law firm serving clients throughout California. The firm plays an integral role in many significant matters shaping and defining the state’s legal landscape, and its attorneys take pride in providing exceptional legal representation to every client.

Serving both public and private clients, our attorneys offer strategic advice and guidance on a variety of matters, helping clients to minimize risks while navigating complex regulatory issues. We establish collaborative working relationships with clients and create effective educational tools.


Deepfake photographs are a product of AI technology, specifically utilizing deep learning techniques to create hyper-realistic images that can convincingly mimic real people. AI algorithms analyze vast amounts of data, including images and videos, to generate new content that appears authentic. This technology can be used to manipulate images and videos, creating content that is indistinguishable from genuine media. A recent California Court of Appeal decision regarding the widespread circulation of a sexually explicit image resembling a police captain's likeness highlights the potential dangers posed by deepfakes and artificial intelligence in the workplace. In Lillian Carranza v. City of Los Angeles, the Appellate Court affirmed the trial court's judgment that Carranza, a captain in the Los Angeles Police Department (LAPD), was subjected to a hostile work environment. Although the case does not explicitly discuss deepfakes, the circulation of an image that was not of, yet was intended to depict, an employee demonstrates one risk related to the increased use of AI in the workplace.

Background on Carranza v. City of Los Angeles

In 2018, Lillian Carranza was informed that a sexually explicit photo resembling her—and falsely said to be her—was circulating among LAPD personnel. A subordinate later informed Carranza that everywhere he went, including several LAPD stations throughout Los Angeles, officers—including supervisors—were viewing, distributing, and discussing the photo as well as making derogatory comments and specifically identifying Carranza.

Despite Carranza's requests for the LAPD to issue a statement clarifying that the photo was not of her and warning that distributing it constituted misconduct, the department failed to take corrective action. The department never communicated that decision, or any reason for declining to issue the message, to Carranza. The jury found in Carranza's favor, awarding $4 million in noneconomic damages and concluding that the harassment she experienced was severe or pervasive enough to alter her work environment.

The case underscores the importance of updating existing policies regarding workplace harassment to address the evolving risks associated with the use of artificial intelligence in the workplace. Although the Appellate Court did not specifically address whether the photo was generated using AI technology, the photo had the characteristics of a deepfake photograph intended to resemble Carranza. Moreover, the widespread sharing throughout the organization of an explicit image resembling an employee's likeness, coupled with the department's inadequate response, highlights the potential for such technology to create hostile work environments and the necessity for employers to proactively address such issues.

Takeaways

To address the potential for increased circulation of deepfake images in the workplace, California employers should review their existing policies and enforcement procedures to ensure the policies adequately address the evolving use of AI in the workplace.

For example, guidelines might prohibit the creation, distribution, or use of deepfake images within the workplace. Employers might also invest in, or modify, existing training sessions to educate employees about the risks, ethical considerations, and potential harms associated with artificial intelligence and deepfake images. Lastly, employers might enhance cybersecurity measures to prevent access to sensitive data that could be used to create deepfake images. By establishing clear policies and procedures, employers can better protect their employees from similar incidents involving the use of AI or the circulation of inappropriate content in the workplace.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
