New York Department of Financial Services Issues Guidance on Cybersecurity Risks Arising from Artificial Intelligence
Last week, the New York Department of Financial Services (“DFS”) issued guidance addressed to executives and information security personnel of entities regulated by DFS to assist them in understanding and assessing cybersecurity risks associated with the use of artificial intelligence (“AI”), and in implementing appropriate controls to mitigate those risks (the “Guidance”).[1] In particular, and to address inquiries DFS has received regarding AI’s impact on cyber risk, the Guidance is intended to explain how the framework set forth in DFS’ Cybersecurity Regulation (23 NYCRR Part 500) should be used to assess and address such risks.
Nexus of AI, AI Regulation and Dispute Resolution
The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already affecting the document review and production process, legal research, and the drafting of court submissions, and its use is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant had sought to rely on precedent authorities that were, in fact, fabricated by AI (a known risk with AI based on large language models, referred to as hallucination).[1] While no further consequences appeared to follow in this particular case (because the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal highlighted that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of litigants relying on AI to provide legal advice and/or to produce evidence.[4]
SEC Proposes Rules Limiting the Use of Artificial Intelligence by Registered Investment Advisers and Broker-Dealers
On July 26, 2023, the Securities and Exchange Commission (“SEC”) proposed new rules targeting the use of predictive data analytics and artificial intelligence (“AI”) by registered investment advisers (“RIAs”) and broker-dealers.[1] The proposed rules focus on the potential for conflicts of interest and the possibility that newer, more complex analytics models (including those using AI) might optimize decision making for RIAs and broker-dealers by placing those firms’ interests above the interests of their clients.[2] The proposed rules would require RIAs and broker-dealers to: (i) evaluate whether their use of technologies “that optimize for, predict, forecast or direct investment-related behaviors or outcomes” creates such a conflict of interest, and (ii) either stop using or address the effects of tools that place a firm’s interests before the interests of clients. RIAs and broker-dealers would also be required to adopt policies to ensure compliance with the proposed rules.[3]