ARTICLE
25 August 2025

ICYMI: The Inaugural Issue

Roth Jackson

Contributor

Roth Jackson and Marashlian & Donahue’s strategic alliance delivers premier regulatory, litigation, and transactional counsel in telecommunications, privacy, and AI—guiding global technology innovators with forward-thinking strategies that anticipate risk, support growth, and navigate complex government investigations and litigation challenges.


Letter from the Editor

Dear Readers,

Welcome to the inaugural issue of ICYMI!

This newsletter is more than just another publication. It's my passion project and is intended to help professionals like you navigate the increasingly complex world of information overload. In a time when we're bombarded with endless streams of content, my mission is simple: to save you time and keep you current.

Every day, I witness brilliant individuals struggling to stay informed without becoming overwhelmed. That's where this newsletter comes in. Each edition is carefully curated to deliver the most essential insights, cutting through the noise and delivering what truly matters. My goal is to be your trusted filter, transforming information chaos into actionable knowledge.

I am committed to supporting professionals who are constantly seeking to grow, learn, and stay ahead of the curve. I've poured my professional experience into creating a resource that I hope will become an indispensable part of your routine.

As we embark on this journey together, I invite you to be more than just a reader. Your feedback, thoughts, and suggestions will be the lifeblood of this newsletter. This is our community, and I'm excited to learn and grow alongside you. You can e-mail me your thoughts or, better yet, set up a virtual coffee with me!

Thank you for joining me on this adventure. Let's make every piece of information count.

Warmly,

Susan Duarte

Artificial Intelligence (AI)

Key Takeaways

Here is what you need to know to advise your organization effectively.

  • Compliance Dates. Maine's chatbot transparency law, with penalties of $1,000 per violation, takes effect on September 25, 2025, and Colorado's comprehensive AI auditing requirements begin in February 2026. The EU AI Act is in effect, and fines of up to €35 million are now enforceable.
  • "Wait and see" strategies are no longer viable. With state enforcement actively beginning, businesses now need AI governance and compliance strategies rather than hoping for future federal preemption or regulatory clarity.

Overview

The AI regulatory landscape has reached a critical inflection point, and businesses face an unprecedented compliance challenge: while the federal government pursues aggressive deregulation, states are racing to fill the void with comprehensive frameworks of their own. This regulatory split creates a complex landscape in which your organization might operate under minimal federal oversight while navigating a patchwork of state laws, each with different requirements, timelines, and penalties. Add in the EU's newly operational enforcement regime, and 2025 has become the year when "wait and see" is no longer a viable AI strategy.

Federal Developments

State AI Moratorium. The Senate overwhelmingly rejected (99-1) a proposed 10-year AI regulation moratorium on July 1, 2025, preserving the states' authority to regulate AI within their borders.

White House AI Action Plan. The White House unveiled a comprehensive AI policy framework on July 23, 2025, marking a significant shift toward deregulation. The plan, implemented through three accompanying executive orders, focuses on removing regulatory barriers the administration views as hindering innovation, streamlining data center infrastructure development, establishing requirements for "unbiased" AI in government contracts, and aggressively promoting American AI technology globally through new export programs backed by federal financing.

Federal Agencies Change Course. A key element of this deregulatory strategy involves directly confronting federal agencies that previously pursued enforcement actions against AI companies. The plan tasks the Federal Communications Commission (FCC) with reviewing state-level AI regulations to identify conflicts with federal telecommunications oversight. Meanwhile, the Federal Trade Commission (FTC) must conduct a comprehensive audit of all AI-related investigations and enforcement actions from the prior administration, with instructions to modify or dismiss those deemed to "unduly burden AI innovation." This systematic reversal of the Biden administration's stricter regulatory approach exemplifies the broader "Remove Red Tape and Onerous Regulation" philosophy that guides the entire policy framework.

AI Innovation in Financial Services. Members of Congress are also taking action consistent with the AI Action Plan. For example, Senator Mike Rounds (R-S.D.) reintroduced the Unleashing AI Innovation in Financial Services Act, bicameral legislation that promotes AI innovation in the financial services industry. If enacted, the Act would establish AI Innovation Labs at the Federal Reserve, the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), the Securities and Exchange Commission (SEC), the Consumer Financial Protection Bureau (CFPB), the National Credit Union Administration (NCUA), and the Federal Housing Finance Agency (FHFA).

State Action Accelerates

California. In the wake of the Senate's rejection of the AI moratorium, California Attorney General (AG) Bonta doubled down on his enforcement stance, reiterating to consumers, businesses, and healthcare providers that they must continue to comply with all existing California laws, including both general statutes and AI-specific legislation when developing, selling, or using AI technologies.

Other States. AI legislation is also advancing nationwide, with more than 1,000 AI bills introduced across all 50 states in 2025 alone. Several landmark measures take effect soon:

  • Maine's Chatbot Transparency Law takes effect on September 25, 2025. It requires businesses to disclose when customers are interacting with AI chatbots, with $1,000 penalties per violation and a private right of action for consumers (see the disclosure sketch after this list).
  • The Colorado AI Act represents the first comprehensive U.S. law regulating high-risk AI systems for algorithmic discrimination. It is scheduled to take effect on February 1, 2026. It will require businesses to audit AI systems, conduct bias risk assessments, maintain detailed compliance documentation, and provide customer notice and appeal processes. Governor Polis has called a special legislative session for August 21 to address implementation challenges.
  • The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) takes effect on January 1, 2026, and prohibits AI systems designed to manipulate behavior, produce discriminatory outcomes, or generate harmful deepfakes, particularly those involving minors. The law also establishes a regulatory sandbox allowing companies to test AI systems in controlled environments.
  • California's ADMT Rules were finalized by the California Privacy Protection Agency (CPPA) on July 24, 2025. They cover automated decision-making technologies (ADMTs) that make "significant decisions" about financial services, housing, employment, education, or healthcare. Businesses must provide prior notice, allow opt-outs, conduct risk assessments, and submit compliance attestations by April 2028. The California Civil Rights Council also adopted final rules clarifying that it is unlawful for an employer to use ADMT or selection criteria that discriminate against applicants or employees on a basis protected by the California Fair Employment and Housing Act (FEHA) and other California antidiscrimination laws. California joins Colorado, Illinois, and New York City in enacting laws concerning the use of AI technologies to make employment decisions. The effective dates for these rules are:
    • California: October 1, 2025
    • Colorado: February 1, 2026
    • Illinois: January 1, 2026
    • New York City: In effect
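
Returning to Maine's chatbot law: compliance is ultimately a product change, since the disclosure has to surface before the conversation starts rather than sit in a terms-of-service link. Here is a minimal sketch in Python; the function names and message format are hypothetical illustrations, not drawn from the statute:

    import datetime

    AI_DISCLOSURE = (
        "You are chatting with an automated AI assistant, not a human. "
        "Ask at any time to be connected with a person."
    )

    def open_chat_session(user_id: str) -> list[dict]:
        # Surface the disclosure as the first message of every session, and
        # log when it was shown so the business can later prove compliance.
        shown_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
        return [{
            "sender": "system",
            "user": user_id,
            "text": AI_DISCLOSURE,
            "shown_at": shown_at,
        }]

    print(open_chat_session("demo-user")[0]["text"])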

European Enforcement Begins

The EU AI Act reached a critical milestone on August 2, 2025, when the European AI Office became fully operational and administrative fines took effect. Companies operating general-purpose AI models now face immediate compliance obligations, including technical documentation requirements, public copyright summaries, detailed model cards, and proof of EU copyright law compliance.

The penalty structure carries significant weight: fines up to €35 million or 7% of global turnover for prohibited practices, with proportional penalties for other violations. Companies with AI models posing systemic risks must immediately implement comprehensive risk assessment and mitigation measures.

To ease compliance burdens, the EU published a voluntary General-Purpose AI Code of Practice on July 10, 2025. Developed by 13 independent experts with input from over 1,000 stakeholders, the Code allows companies that voluntarily adopt it to demonstrate AI Act compliance while reducing administrative overhead and gaining greater legal certainty. Many US tech companies have agreed to sign the Code.

AI Litigation

Employment. The first major lawsuit challenging AI hiring tools, Mobley v. Workday, was preliminarily certified as a collective action on May 16, 2025. The case involves hundreds of millions of potential class members claiming age discrimination through AI-powered recruitment systems, establishing precedent that both AI vendors like Workday and the employers who use their tools face potential liability for discriminatory outcomes. This dual-liability framework signals that AI discrimination cases will likely target entire technology supply chains rather than individual companies, dramatically expanding the scope of potential legal exposure in the rapidly growing AI hiring market.

AI Chatbots. Otter.ai, the AI transcription service with 25 million users, is facing a federal class-action lawsuit in California alleging that its "Otter Notetaker" bot secretly joins virtual meetings without proper consent from all participants, potentially violating state wiretap and privacy laws that require all-party consent for recordings. The case was sparked when the bot appeared unannounced during a sensitive medical appointment; plaintiffs claim the company records and processes conversations for transcription and AI training without adequate notice to meeting attendees. While Otter.ai maintains that it obtains "explicit permission" through user agreements with those who integrate the bot, critics argue these consents are buried in fine print and do not extend to all meeting participants, creating a legal loophole. If certified as a class action, the lawsuit could represent millions of affected users and set important precedents for AI governance in collaborative digital spaces, potentially forcing industry-wide changes to consent protocols as automated tools become increasingly integrated into everyday workflows.

California Deepfake Law Struck Down. Meanwhile, California's efforts to regulate AI-generated content suffered a significant setback when a federal judge completely struck down the state's anti-deepfake law, ruling that Section 230 preempts state attempts to regulate platform content. This decision highlights the ongoing tension between federal platform protections established in the early days of the internet and newer state efforts to govern AI-generated content. The ruling suggests that meaningful AI content regulation may require federal action rather than the patchwork of state laws currently emerging, as courts appear willing to broadly interpret Section 230's protections to cover AI-generated content on digital platforms.

Privacy

Key Takeaways

  • State Privacy Laws. Two more state privacy laws took effect, adding to an already varied and sometimes conflicting set of requirements. Biometric laws are also broadening to cover employee data.
  • Federal Government. The Department of Justice's Bulk Data Transfer Rule, effective April 2025, creates new rules for transferring sensitive personal data and bans data transfers to certain countries (China, Russia, Iran, North Korea, Venezuela, Cuba).
  • Enforcement Actions Focus on Technical Compliance and Real Harm. Privacy enforcers are targeting the mechanics of compliance rather than just policies, with California's CPPA focusing on dark patterns in privacy interfaces and checking to make sure consumers can exercise privacy rights. Now is a good time to audit your privacy practices and make sure things work as they should. At a recent privacy meeting, one regulator shared that they are even testing to see if companies respond to the email addresses included in privacy policies.

State Activity

With 20 states now having comprehensive data privacy laws, organizations face growing variation in sensitive data definitions, including neural data and biometric information. Tennessee and Minnesota's data privacy laws took effect in July 2025, along with Colorado's biometric privacy laws. Maryland's data privacy law will be effective October 1, 2025.

Minnesota Consumer Data Privacy Act (MCDPA): The MCDPA took effect on July 31, 2025, and establishes some of the most stringent privacy requirements among state laws. Its unique provisions include (1) a consumer right to question profiling decisions that produce significant effects, (2) mandatory data inventories, and (3) specific privacy policy requirements, including prominent posting and electronic notification of material changes. The law also requires parental consent for targeted advertising to teens aged 13-16, applies to nonprofit organizations (unlike most state laws), and mandates that controllers document compliance procedures and maintain records for 24 months. Small businesses face restrictions on selling sensitive data without consent, and the attorney general can impose penalties of up to $7,500 per violation, with a cure period ending January 31, 2026.

Tennessee Information Protection Act (TIPA): TIPA, effective July 1, 2025, follows a more business-friendly approach similar to Virginia's privacy law, but with notably high damages potential. It applies only to companies that meet both a revenue threshold ($25 million) and a processing-volume threshold (175,000+ consumers), offers broader exemptions for pseudonymous data when proper controls prevent re-identification, and provides a safe harbor defense for companies following NIST privacy frameworks. While the AG can seek penalties of $7,500 per violation, courts also may award treble damages for knowing or willful violations, creating particularly severe financial exposure. Unlike Minnesota's law, TIPA provides a permanent 60-day cure period without sunset provisions.

Biometric Laws

Colorado's biometric amendments took effect on July 1, 2025, and take the broadest approach to biometric regulation of any U.S. state privacy law. Unlike other states that regulate biometric data only when companies intend to use it for identification, Colorado's amendment governs any "biometric identifiers" that "can" be used to identify someone, regardless of the collecting entity's actual intent or use. The rules apply to any entity processing biometric data from Colorado residents, without regard to the CPA's usual data volume thresholds. They require consent for the sale, lease, or disclosure of biometric identifiers and introduce a novel prohibition on selling such data unless the controller pays the subject an unspecified amount and obtains consent. Companies must conduct annual reviews to assess whether continued retention of biometric identifiers remains necessary, and they must publish public deletion guidelines along with their data retention schedules and incident response plans specifically addressing biometric data breaches. Perhaps most significantly, Colorado becomes the first state outside California to extend biometric privacy protections to employees and job applicants. Employers may collect biometric identifiers without consent for limited workplace purposes such as access control, attendance monitoring, and safety; other uses require consent, and employment may not be conditioned on consent for non-essential purposes.

In addition, we are seeing a wave of genetic privacy legislation in the states, likely driven by growing concern over foreign adversaries seeking American genetic data. Texas prevents genetic data from being sold to foreign adversaries during company bankruptcies. Florida went further, banning laboratories from using genetic sequencing software from China, Russia, Iran, North Korea, Cuba, Venezuela, or Syria. Indiana focused on consumer protection with HB 1521, which took effect in May; the law makes genetic discrimination illegal, requires testing companies to obtain specific consent before sharing data, and allows the state to fine companies up to $7,500 per violation. Montana's law is the most restrictive and extends to neurotechnology data, such as brain activity information. It requires multiple layers of consent depending on how genetic data will be used, creating a detailed framework that gives consumers granular control over their most sensitive information.

Federal: DOJ Bulk Transfer Rule

Privacy enforcement has reached historic intensity with the Department of Justice's Bulk Data Transfer Rule.

On April 8, 2025, a new reality took hold for companies handling Americans' most sensitive data. The Bulk Data Transfer Rule went into effect, fundamentally changing how businesses can share personal information across borders—and for the first time, the U.S. government drew hard lines around what data can never leave the country.

The rule creates a stark two-tier system that reflects growing national security concerns about foreign access to American data. For certain countries like China, Russia, Iran, North Korea, Venezuela, and Cuba, all data brokerage activities are now completely prohibited, with no exceptions. Any human genomic data heading to these "countries of concern" is banned entirely, recognizing that DNA information represents a uniquely sensitive national asset.

The government has also declared that any data related to U.S. government operations, regardless of how small the amount, cannot be transferred to these countries under any circumstances. It is a recognition that even seemingly minor government-related information could pose security risks in the wrong hands.

For other international data transfers, the rule allows businesses to continue, but only with strict safeguards. Companies can still enter vendor agreements and employment arrangements with foreign entities, but they must implement CISA-approved cybersecurity controls and comprehensive Data Compliance Programs. Investment deals can proceed, but only if they meet specific security requirements designed to protect American data.

The thresholds that trigger these rules reveal just how seriously the government takes different types of information. Genetic data from just 100 Americans is enough to invoke the restrictions, while it takes 100,000 people's covered personal identifiers to cross the line. Biometric and location data sit in the middle at 1,000 people, with health and financial data requiring 10,000 individuals before the rules kick in.
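
For teams that want to operationalize these tiers, the logic reduces to a small lookup. The sketch below is illustrative only: the names are hypothetical, and the actual rule (28 C.F.R. Part 202) layers on nuances, such as covered-person tests and combined-dataset rules, that it omits.

    # Illustrative first-pass triage of the Bulk Data Transfer Rule's volume
    # tiers. Hypothetical helper for demonstration, not legal advice.

    COUNTRIES_OF_CONCERN = {
        "China", "Russia", "Iran", "North Korea", "Venezuela", "Cuba",
    }

    # Minimum number of U.S. persons (devices, for geolocation), aggregated
    # over the preceding 12 months, that makes a dataset "bulk."
    BULK_THRESHOLDS = {
        "human_genomic": 100,
        "biometric": 1_000,
        "precise_geolocation": 1_000,
        "personal_health": 10_000,
        "personal_financial": 10_000,
        "covered_identifiers": 100_000,
    }

    def triage_transfer(data_type: str, volume_12mo: int, destination: str) -> str:
        """Return a rough label for a proposed cross-border data flow."""
        if volume_12mo < BULK_THRESHOLDS[data_type]:
            return "below bulk threshold (other privacy regimes may still apply)"
        if destination in COUNTRIES_OF_CONCERN:
            if data_type == "human_genomic":
                return "prohibited: no bulk genomic data to countries of concern"
            return "prohibited if data brokerage; otherwise restricted, escalate"
        return "restricted: apply security controls and a data compliance program"

    print(triage_transfer("human_genomic", 150, "China"))
    print(triage_transfer("personal_health", 5_000, "Germany"))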

Companies now have until October 6, 2025, to build the comprehensive compliance infrastructure the rule demands—complete with due diligence procedures, security requirements, audit capabilities, and detailed documentation. For businesses that have operated in a relatively unrestricted global data environment, it represents a fundamental shift toward a world where data sovereignty and national security concerns increasingly shape how information flows across borders.

Privacy Enforcement

California. The California Privacy Protection Agency (CPPA) is focusing its enforcement efforts on the "mechanics" of privacy compliance. This includes examining privacy platforms to detect "dark patterns" in privacy interfaces and checking compliance with the rules for processing consumer requests to exercise privacy rights. Recent actions include:

  • Todd Snyder Inc. The CPPA issued a decision requiring national clothing retailer Todd Snyder to change its business practices and pay a $345,178 fine to resolve allegations that it violated the CCPA by failing to properly configure its privacy portal and process consumer requests to opt out of the sale or sharing of personal information. Todd Snyder also requested more information than was necessary to process requests and required consumers to verify their identity before they could opt out of the sale or sharing of personal data (a sketch of a compliant opt-out flow follows this list).
  • Healthline Media (Healthline). California Attorney General Rob Bonta announced a $1.55 million settlement with Healthline for CCPA violations found on Healthline's health information website, marking his fourth CCPA enforcement action. The AG alleged that Healthline failed to honor opt-out requests for targeted advertising and shared sensitive data with third parties, including article titles that could reveal medical diagnoses, such as "You've Been Newly Diagnosed with MS. What's Next?" The company used dozens of invisible trackers that transmitted consumer data to advertisers without the privacy protections required under California law. Under the settlement, Healthline must fix its opt-out mechanisms, stop sharing diagnostic information that can be linked to specific consumers, and maintain a CCPA compliance program with proper contract auditing.
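
The Todd Snyder decision reads almost like a functional specification: the opt-out must actually work, must not demand identity verification, and must not collect more information than necessary. A minimal sketch under those constraints follows; the class and method names are hypothetical stand-ins for a real persistence layer:

    # Hypothetical CCPA opt-out handler shaped by the Todd Snyder decision.
    # Opt-outs of sale/sharing may not be conditioned on identity verification,
    # and the business should collect only what it needs to honor the request.

    class OptOutStore:
        """Stand-in persistence layer for demonstration purposes."""
        def __init__(self):
            self.opted_out = set()

        def record(self, email: str):
            self.opted_out.add(email.lower())
            # In production, also propagate the signal to ad-tech partners here.

    def handle_opt_out(store: OptOutStore, email: str) -> str:
        # Ask only for what is needed to locate the consumer's records;
        # do NOT require a login, photo ID, or other verification first.
        store.record(email)
        return "Your opt-out of sale/sharing has been recorded."

    store = OptOutStore()
    print(handle_opt_out(store, "reader@example.com"))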

Privacy Litigation

Courts delivered major privacy victories against tech giants with significant precedential value:

Meta-Flo Health Data Victory: A California federal jury found that Meta violated the California Invasion of Privacy Act by collecting, without consent, sensitive menstrual and reproductive health data from 38 million women who used the Flo period-tracking app. The verdict is significant for several reasons. Healthcare providers are on notice that while parts of their business are covered by HIPAA, other aspects are not and may be subject to other laws. The case also offers lessons on how to secure consent when handling sensitive data and highlights the risks for companies using SDKs and tracking technologies in mobile applications.

Tesla Tracking Pixel Class Actions: Tesla faces federal litigation for allegedly sharing website visitor data with Google through tracking pixels without consent, highlighting the vulnerability of companies using standard web analytics tools.
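
For non-engineers, a "tracking pixel" is simply a tiny image request whose side effects leak context to a third party. The deliberately simplified sketch below shows the server side of such a pixel; the endpoint is hypothetical, and real analytics tags carry far more parameters:

    # Simplified server side of a tracking pixel, for illustration only.
    # The embedding page adds: <img src="https://analytics.example/pixel.gif">
    import base64
    from flask import Flask, Response, request

    app = Flask(__name__)

    # Smallest valid transparent 1x1 GIF.
    PIXEL = base64.b64decode(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
    )

    @app.route("/pixel.gif")
    def pixel() -> Response:
        # The visited page's URL and any analytics cookie arrive automatically
        # with the image request; that is the data flow at issue in these suits.
        app.logger.info("referrer=%s cookie=%s",
                        request.referrer, request.cookies.get("uid"))
        return Response(PIXEL, mimetype="image/gif")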

Marketing and Consumer Protection

Key Takeaways

  • Record Financial Penalties for Aggressive Advertising: The FTC's $145 million recovery and the $329 million Tesla verdict demonstrate that marketing violations and product liability claims can result in company-threatening financial exposure, requiring businesses to prioritize compliance over aggressive marketing tactics.
  • Made in America. The FTC is holding major platforms like Amazon and Walmart accountable for third-party seller "Made in USA" claims, meaning marketplace operators must monitor and control seller representations on their platforms.
  • Children's Marketing Gets Stricter. Enhanced COPPA rules require separate parental consent for advertising data sharing and impose hefty fines that could create substantial exposure for any business collecting data from children under 13.
  • Subscription Service Uncertainty. The Eighth Circuit's decision vacating the FTC's "Click to Cancel" rule creates regulatory uncertainty. However, state laws like Massachusetts' "Junk Fee" regulations (effective September 2025) still require upfront total price disclosure and fee transparency.

Federal Trade Commission (FTC)

The FTC's "Made in USA" enforcement efforts expanded beyond individual companies to hold major platforms accountable for third-party seller claims. The agency sent warning letters to Amazon and Walmart regarding false origin claims made by sellers on their platforms and separately warned four other companies about false origin claims. The move signals that the "all or virtually all" standard for Made in USA claims is being strictly enforced across all distribution channels.

National Advertising Division (NAD)

The NAD required Olly to discontinue all calm and relaxation claims for its Kids Chillax children's products due to flawed clinical study methodology, signaling heightened scrutiny for health claims targeting minors. The self-regulatory body also clarified that authentic consumer reviews need not be removed even when associated with unsupported product claims, creating an essential distinction between company-controlled marketing messages and genuine customer feedback. In addition, the NAD ruled that structure/function claims must be "clearly tailored" to avoid requiring more rigorous interventional clinical data, which provides companies with more precise guidance for making legitimate wellness claims without crossing into drug claim territory. This decision demonstrates a more nuanced regulatory approach that maintains strict standards for children's products and company claims while protecting authentic consumer experiences and providing clearer pathways for compliant health-related advertising.

Subscription Service Enforcement Uncertainty

The regulatory landscape became particularly complex after the Eighth Circuit vacated the FTC's "Click to Cancel" rule just days before its July 14, 2025, effective date. The decision created immediate uncertainty for subscription-based businesses that had prepared for the new requirements. State laws, however, continue to apply.

Massachusetts' comprehensive "Junk Fee" regulations, which require upfront disclosure of the total price, take effect on September 2, 2025. The rules apply to unfair or deceptive fees in the purchase, lease, or rental of products by Massachusetts consumers. They exempt specific industries, including airlines, securities, and insurance, as well as restaurants charging a mandatory service fee (provided the fee is disclosed and paid to wait staff and other employees). Moving forward, companies must include processing and convenience fees in the total price, though they may exclude government charges and shipping charges. The Massachusetts AG has prepared a guide to help industry comply with the new rules.

Childhood Privacy Gets a Major Upgrade

The landscape of children's online privacy underwent a seismic shift in January 2025, when the Federal Trade Commission rolled out sweeping amendments to the Children's Online Privacy Protection Act (COPPA) Rule, updating protections to catch up with modern digital realities. The most significant change requires parents to provide separate, explicit opt-in consent before their child's information can be shared with third parties for advertising purposes. The rule now mandates that businesses keep children's data only for as long as is "reasonably necessary" for the purpose for which it was collected. Businesses can no longer hold children's information indefinitely; they must justify data retention practices and regularly purge unnecessary information. The enforcement mechanism has substantial teeth, with civil penalties jumping to $53,088 per violation.
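
In practice, the retention mandate becomes a purpose-bound deletion job. A minimal sketch follows; the purposes, periods, and record shape are hypothetical and would come from an operator's own documented retention policy:

    # Illustrative purge job for the amended COPPA Rule's retention principle:
    # keep children's data only as long as reasonably necessary for the
    # documented purpose, then delete it.
    import datetime

    RETENTION_POLICY = {  # purpose -> maximum retention set by your policy
        "account_service": datetime.timedelta(days=365),
        "contest_entry": datetime.timedelta(days=30),
    }

    def purge_expired(records: list[dict]) -> list[dict]:
        """Drop records with no documented purpose or past their limit."""
        now = datetime.datetime.now(datetime.timezone.utc)
        kept = []
        for rec in records:  # rec: {"purpose": str, "collected_at": datetime}
            limit = RETENTION_POLICY.get(rec["purpose"])
            if limit is None or now - rec["collected_at"] > limit:
                continue  # indefinite or unjustified retention is not allowed
            kept.append(rec)
        return kept

    demo = [{"purpose": "contest_entry",
             "collected_at": datetime.datetime.now(datetime.timezone.utc)
                             - datetime.timedelta(days=90)}]
    print(purge_expired(demo))  # -> []: 90 days exceeds the 30-day limit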

Educational technology platforms and services targeting young children face particular scrutiny as regulators recognize the unique trust relationship between schools, ed-tech companies, and families. For example, the FTC broadened the definition of "personal information" to include biometric identifiers such as fingerprints, facial recognition data, and voice prints, ensuring these sensitive identifiers receive the same protections as a child's name or address, as school apps increasingly use biometric technologies for everything from lunch payments to device unlocking.

Consumer Protection Enforcement

The Federal Trade Commission, state attorneys general, and private litigants are taking aggressive action against companies for consumer protection violations.

FTC. In August 2025, the FTC secured one of its most significant settlements in years: a combined $145 million penalty against two companies that deceived consumers seeking health insurance coverage. Assurance IQ paid $100 million for misleading consumers about health coverage options and engaging in illegal telemarketing practices. MediaAlpha faced a $45 million penalty for deceptive lead generation that produced a staggering 119 million consumer leads and fueled robocall harassment.

The FTC and the Nevada AG worked together to secure a $2.5 million fine against IYOVIA/IM Mastery Academy, alleging $1.2 billion in consumer harm from multi-level marketing schemes that targeted young adults through social media with false promises about forex and cryptocurrency trading success. The FTC order permanently bans Global Dynasty Network from:

  • making any representations about potential earnings without having written evidence that those claims are typical for consumers;
  • misrepresenting or assisting in the misrepresentation of any good or service they market or sell;
  • violating the Telemarketing Sales Rule, including by making any misrepresentations about earnings potential or profitability; and
  • offering any good or service on a negative option basis without clearly and conspicuously disclosing and obtaining consumers' express consent before charging their credit card, debit card, bank account, or other financial account.

Product Liability

A Florida jury ordered Tesla to pay $329 million in compensatory and punitive damages after a fatal Autopilot crash, the first significant case to find a manufacturer liable for an autonomous vehicle crash. Throughout the three-week trial, plaintiffs argued that Tesla's Autopilot is defective because Tesla allows drivers to engage it on roads for which it was not designed. The plaintiffs also argued that Tesla does not sufficiently monitor whether drivers are paying attention to the road, and that Tesla "set the stage" for the crash by overhyping its Autopilot software's capabilities despite knowing about vulnerabilities in the program. Businesses should review advertising claims to ensure they are not overstating product capabilities, while also detecting and correcting product vulnerabilities to mitigate the risk of product liability claims.

Of Note

As the final quarter of 2025 approaches, businesses face a wave of overlapping compliance deadlines in AI governance, data privacy, and cross-border operations. Regulatory obligations now arrive in a continuous stream, making it critical for legal and compliance teams to prioritize immediate, actionable steps rather than sift through abstract frameworks.

This roadmap delivers just that: a pragmatic, checkbox-driven guide that highlights the intersections between regulatory regimes where one action can satisfy multiple requirements and flags areas that demand distinct approaches. With deadlines paired to specific implementation steps, it functions as both an internal workflow tool and a client-ready resource to demonstrate proactive risk management. In today's enforcement environment, the question is not whether scrutiny will come, but whether you'll be ready when it does.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
