Noting the unique risks posed by certain artificial intelligence (AI) solutions, the New York State Department of Financial Services (NYDFS) issued guidance urging financial services organizations to evaluate and address these risks when developing cybersecurity strategies.
The guidance outlines some of the most pressing AI-related threats. Two primary risks are tied to how malicious actors use AI to deceive unsuspecting people, while two others relate to how organizations deploy AI solutions.
With the financial industry’s growing reliance on AI to increase productivity, cyber criminals have discovered new opportunities to strike, underscoring the need to integrate AI considerations into existing cybersecurity controls, the agency asserted. Companies often rely on AI for its ability to analyze vast amounts of data quickly and accurately, automate routine tasks, detect anomalies, and predict potential threats.
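To make the anomaly detection mentioned above concrete, the sketch below trains scikit-learn's IsolationForest on a handful of hypothetical transaction features and flags outliers in new activity. The feature set, sample values, and contamination rate are illustrative assumptions, not anything specified in the NYDFS guidance.

```python
# Minimal sketch of AI-based anomaly detection on transaction data.
# Feature names and the contamination rate are illustrative assumptions,
# not prescriptions from the NYDFS guidance.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per transaction:
# [amount_usd, hour_of_day, days_since_last_login]
history = np.array([
    [120.00, 14, 1],
    [89.50, 10, 2],
    [45.25, 16, 1],
    [210.75, 11, 3],
    [73.10, 15, 2],
])

# Fit on known-good history; 'contamination' is an assumed outlier rate.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(history)

# Score new activity: predict() returns -1 for suspected anomalies, 1 for normal.
new_activity = np.array([
    [95.00, 13, 1],      # resembles past behavior
    [9800.00, 3, 45],    # large amount, odd hour, long-dormant account
])
for row, label in zip(new_activity, model.predict(new_activity)):
    status = "anomalous" if label == -1 else "normal"
    print(f"{row} -> {status}")
```

In practice such a model would be trained on far richer telemetry and combined with rule-based controls; the point here is only the shape of the workflow, which is to fit on known-good history and then score new activity.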
Responding to the threats posed by AI as it continues to evolve will require advanced countermeasures, making it crucial for organizations to regularly review and update their cybersecurity programs and controls, according to the guidance.
AI-enabled social engineering is a significant threat, particularly because AI can craft highly convincing personalized attacks through deepfakes — realistic audio, video, and text used to deceive individuals. These AI-driven scams often lead to the exposure of sensitive information, such as nonpublic information (NPI), or unauthorized actions like wire transfers to fraudulent accounts. The ability to impersonate individuals with deepfakes also undermines biometric verification systems.
Another major risk addressed in the guidance is AI-enhanced cyberattacks, in which bad actors use AI to rapidly scan for vulnerabilities, deploy malware, and exfiltrate data, bypassing traditional security measures. AI can generate new variants of malware and accelerate the development of ransomware, heightening the scale and speed of attacks. Experts believe the rise of AI-powered tools will lower the barrier to launching sophisticated cyberattacks, increasing their frequency and severity, particularly in data-sensitive industries, according to the guidance.
The fact that AI often requires vast amounts of data, including NPI and biometric data, presents inherent risks. Threat actors can use stolen biometric data to bypass authentication systems, and AI’s reliance on third-party vendors and supply chains creates further vulnerabilities. If a third-party vendor is compromised, the risk extends to all entities within that supply chain.
To mitigate these AI-related risks, organizations are advised to adopt robust cybersecurity programs, conduct regular risk assessments, and update controls as needed. Third-party vendors should be rigorously vetted, and access controls, such as multi-factor authentication (MFA), should be strengthened to prevent unauthorized access. Additionally, the agency recommended that organizations provide specialized training so staff can recognize and respond to AI-driven threats, while maintaining strong data management practices to limit the exposure of NPI.
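As one concrete illustration of the MFA recommendation, here is a minimal sketch of time-based one-time password (TOTP) verification using the pyotp library. The library choice, account names, and enrollment flow are assumptions made for illustration; the guidance does not prescribe a specific MFA implementation.

```python
# Minimal sketch of TOTP-based multi-factor authentication using pyotp.
# The library and the enrollment flow are illustrative assumptions;
# the NYDFS guidance does not prescribe a specific MFA implementation.
import pyotp

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app (e.g., via a QR code built from the provisioning URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank"))

# Login: after the primary credential check, require the current code.
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code, valid_window=1):  # tolerate one 30s step of clock skew
    print("Second factor accepted.")
else:
    print("Invalid code; deny access.")
```

A production deployment would store the per-user secret in encrypted form and layer TOTP on top of, not in place of, the primary credential check.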