ARTICLE
28 October 2024

NYDFS Speaks Out On AI And Its Cybersecurity Risks

Sheppard, Mullin, Richter & Hampton LLP


The New York Department of Financial Services ("NYDFS") recently published guidance on managing cyber risks related to AI for the financial services and insurance industry. Though the circular letter does not introduce any per se "new" obligations, it explains the agency's expectations for addressing AI within its existing cybersecurity regulations.

The letter identifies specific AI-related cybersecurity threats, such as AI-enabled social engineering. AI may also enhance typical cybersecurity attacks by amplifying their potency, scale, and speed. The letter further notes that AI models may leverage large volumes of non-public information and thereby become targets of attack. Additionally, reliance on third-party providers and vendors for AI tools introduces supply chain vulnerabilities.

To mitigate these risks, NYDFS advises regulated companies to consider AI-specific risks when conducting comprehensive risk assessments. These assessments should address not only the organization's own use of AI, but also any AI technologies used by third-party service providers. Based on the findings of these risk assessments, companies may need to update policies, procedures, and incident response plans to sufficiently address AI-related risks. NYDFS also highlights the need for cybersecurity training for all personnel (including senior executives) that covers awareness of AI-related threats and response strategies.

Putting it into practice: This latest thinking from NYDFS adds to the growing patchwork of regulatory guidance on specific considerations related to AI (here, cybersecurity risks). Other guidance has largely focused on different types of AI-related harm, such as bias and discrimination. The letter also serves as a reminder for companies that may not use AI themselves to be aware of the potential risks of engaging third parties who do, and to implement appropriate mitigating measures.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

