ARTICLE
13 October 2025

AI Adoption Surges Among S&P 500 Companies—But So Do The Risks

Jackson Lewis P.C.
By Joseph Lazzarotti

According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, per a recent report from The Conference Board. In 2023, that figure was just 12%.

The article reports that major companies are no longer just testing AI in isolated pilots; they are embedding it across core business systems, including product design, logistics, credit modeling, and customer-facing interfaces. At the same time, these companies acknowledge in their public disclosures that they are confronting significant security and privacy challenges, among others.

  • Reputational Risk: Leading the way is reputational risk, with more than a third of companies worried about potential brand damage. This concern centers on scenarios like service breakdowns, mishandling of consumer privacy, or customer-facing AI tools that fail to meet expectations.
  • Cybersecurity Risk: One in five S&P 500 companies explicitly cite cybersecurity concerns related to AI deployment. According to Cybersecurity Dive, AI technology expands the attack surface, creating new vulnerabilities that malicious actors can exploit. Compounding these risks, companies face dual exposure—both from their own AI implementations and from third-party AI applications.
  • Regulatory Risk: Companies are also navigating a rapidly shifting legal landscape as state and federal governments scramble to establish guardrails while supporting continued innovation.

One of the biggest drivers of these risks, perhaps, is a lack of governance. PwC's 2025 Annual Corporate Directors Survey reveals that only 35% of corporate boards have formally integrated AI into their oversight responsibilities, a clear indication that governance structures are struggling to keep pace with technological deployment.

Not surprisingly, innovation is moving considerably faster than governance, and that gap is contributing to the risks identified by much of the S&P 500. If that is true of the largest public companies, there is a good chance that small and mid-sized companies are in a similar position. Enhancing governance, for example through sensible risk assessments, robust security frameworks, and workforce training, may help narrow that gap.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
