Why SEC Is Worried AI Could Lead to Recession, Racial Bias
Editor's note: Authored by Nancy Wojtas, this article was originally published in Law360.
There is little doubt that artificial intelligence represents one of the most disruptive technologies of the modern era.
AI has been described as the use of mathematics and software code to teach computers how to understand, synthesize and generate knowledge in ways similar to how people do it.
The rapid progression of AI over the last decade has generally occurred outside the bounds of significant regulatory oversight, but the 2022 and 2023 launches of numerous AI-powered chatbots sparked wonder across the globe.
That wonder, in turn, prompted significant regulatory questioning of this emergent technology worldwide, seemingly not because robots will spring to life and attack the human race, but because of concerns over the potential misuse of such technology.
On the U.S. regulatory side, the Federal Trade Commission, the Consumer Financial Protection Bureau and the Federal Communications Commission are all proposing to play a role in regulating artificial intelligence, but these product, consumer and technology-focused agencies are not the only ones monitoring the risks they believe AI may pose.
Once again — much like he did for cryptocurrency — U.S. Securities and Exchange Commission Chair Gary Gensler has indicated, most recently at an event in September marking the 15th anniversary of the collapse of Lehman Brothers Inc., that he believes the SEC should have a seat at the AI regulatory table.
Similarly, in his speech at the National Press Club in July, Gensler waxed poetic about the fundamental progressions in technology over the last five centuries — from Newton to the mass production of the modern automobile — and then turned his sights to AI, launching into a discussion of the various risks that this new technology may pose to the U.S. financial markets.
The role of the SEC is to protect investors in securities and to maintain fair, orderly and efficient securities markets, but only time will tell what Gensler's regulatory and enforcement approach to effectuating these aims in the AI context will look like.
Gensler explained in July, flexing his technical terminology, that AI models are nonlinear and hyperdimensional, which makes them notoriously difficult to interpret and reverse engineer. Models are trained on trillions of data points, but the models themselves do not actually retain copies of the data that are used to train them.
In fact, relative to the data underlying most AI large language models — the most common and, to date, most effective category of AI model — the models themselves are incredibly lightweight and unburdened by the terabytes of underlying data. This is, of course, the case because it would be highly inefficient for a model to review trillions of data points whenever a new question is asked of it.
Rather, it is much more efficient for the data to train the model first (over days, weeks or even years) and for the model to then answer a question simply as an algorithm derived from such data.
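To make the distinction concrete, consider the minimal sketch below. It uses a deliberately simple linear model rather than a large language model, and all of the data and figures are invented for illustration, but it captures the point Gensler was making: training distills an enormous dataset into a comparatively tiny set of parameters, and answering a new question afterward touches only those parameters, never the underlying data.

```python
# Toy illustration (not an LLM): training distills a large dataset into a
# small set of parameters, and inference afterward needs only those parameters.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": one million examples of a simple linear relationship.
X = rng.normal(size=(1_000_000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1_000_000)

# Training compresses those million rows into just four learned numbers.
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

del X, y  # The raw data can be discarded; the model retains no copies of it.

# Inference: answering a new "question" uses only the learned parameters.
new_input = np.array([1.0, 2.0, 3.0])
prediction = new_input @ coef[:3] + coef[3]
print(f"learned parameters: {coef.round(2)}, prediction: {prediction:.2f}")
```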
Why might the SEC care that it is difficult to discern what data trained a model to respond to an input in a certain way? Gensler noted that initially, the debate around AI was over who owns an individual's data.
Gensler posited that the issues surrounding today's AI center on privacy and intellectual property rights: not just those of any one individual, but rather how the data collected on each of us results in all of us helping to train the parameters of AI models. Such data collection may yield significant value to AI developers.
Further, Gensler noted that at the present time, the debate on ownership is playing out in Hollywood, with software developers and with social media companies.
For the SEC, Gensler noted that the commission's challenge is trying to ensure competitive, efficient markets in the face of what could be dominant base layers at the center of the capital markets, and that the SEC must closely assess this trend so that it can continue to promote competition, transparency and fair access to the securities markets.
Gensler touched on several potential fundamental legal areas that may fall under the jurisdiction of the SEC:
First, Gensler discussed that AI models may be trained on data "reflecting historical biases as well as latent features that may inadvertently be proxies for protected characteristics," and thus Gensler postulated "may mask underlying systemic racism and bias."
For example, imagine that a potential investor is not permitted to make a leveraged trade through a broker-dealer because the broker-dealer deems the investor to pose a high risk, but it is an AI model providing this broker-dealer with the risk assessment.
How does anyone know whether the model is making that risk assessment by discerning that the investor's last name is more common among a historically marginalized ethnic group, even though the use of such ethnic information may not be permissible under the law?
Each AI model contains billions of parameters, the weightings that bias a model toward one response over another. Not only is that far too much detail for any single person, even an expert AI programmer, to wade through, but the AI models themselves rarely reveal the underlying data that create these weightings.
If a parameter exists within a model that biases the investor against receiving a positive risk assessment based on her last name, no one would ever know. Whether, and how, the SEC may opt to regulate against such bias in the future remains to be seen.
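A stylized sketch of how such a proxy can operate appears below. The feature names, data and model are invented for illustration only, but they show how a model trained on facially neutral inputs can still reconstruct a protected characteristic through a correlated proxy, while nothing in the fitted parameters discloses which training data produced that weighting.

```python
# Hypothetical illustration: a proxy feature (here, an encoded surname) can
# reproduce a protected characteristic's influence even when that
# characteristic is never given to the model directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Protected characteristic (never provided to the model as a feature).
group = rng.integers(0, 2, size=n)

# Proxy: a surname-derived feature that correlates strongly with group membership.
surname_feature = group + rng.normal(scale=0.3, size=n)

# A legitimate risk signal, e.g., account volatility.
volatility = rng.normal(size=n)

# Historical labels encode past bias: group 1 was disproportionately flagged.
flagged = (volatility + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

# The model sees only facially neutral features, yet learns the bias via the proxy.
X = np.column_stack([volatility, surname_feature])
model = LogisticRegression().fit(X, flagged)

# A large weight lands on the surname proxy; the training data behind it is gone.
print(dict(zip(["volatility", "surname_proxy"], model.coef_[0].round(2))))
```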
Second, Gensler discussed a separate legal issue that arises from not knowing the underlying data used to train AI models. Are the AI models utilizing IP not owned or licensed by the AI developer?
If such data allows an AI model to customize a response, or ultimately a product or service, then any entity using that product or service may be in constant breach of a third party's IP rights. Gensler did not specifically discuss how the government — or, in particular, the SEC — may intervene in scenarios in which such IP is being used to create unfair trading advantages.
Again, we will have to wait and see whether Gensler — or any subsequently appointed chair — will have the SEC propose and adopt rules under which traders or investment advisers and their vendors would be responsible for disgorging profits from any trade subsequently determined to have been made using illegally obtained IP.
Third, Gensler asserted that there are risks that may arise in the financial markets resulting from just one or a few companies ultimately winning the AI model race, and virtually all financial institutions growing dependent on that small number of models.
Gensler noted that such consolidation may lead to AI causing a herd effect in trading, whereby the most powerful models drive all major investors in a single direction at once, causing a flash recession or other dangerous ramifications and thereby threatening the financial stability of the securities markets.
The SEC may play a role in limiting the use of such super-models in trading or implementing risk mitigation emergency frameworks in the case of a crash caused by the convergence of AI model behavior. Financial firms also may wish to implement internal risk procedures against a harmful financial event caused by models moving in the same direction at once.
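The herding mechanism can be illustrated with a toy simulation, sketched below under purely invented assumptions (the firm count, signal structure and price-impact figure are hypothetical). When every firm trades on the same model's signal, daily order flow moves in lockstep and return volatility spikes; spreading firms across many independent models dampens it.

```python
# Toy simulation (illustrative assumptions only): when every firm trades on
# the same model's signal, order flow converges and price swings amplify.
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_days = 100, 250
price_impact = 0.01  # assumed impact of aggregate order flow on daily returns

def simulate(n_models: int) -> float:
    """Return the volatility of daily returns when firms are spread
    across `n_models` independent trading models."""
    returns = []
    for _ in range(n_days):
        signals = rng.choice([-1, 1], size=n_models)   # each model's daily call
        firm_models = rng.integers(0, n_models, size=n_firms)
        net_flow = signals[firm_models].sum()          # followers trade alike
        returns.append(price_impact * net_flow + rng.normal(scale=0.01))
    return float(np.std(returns))

print("volatility, one shared model:", round(simulate(1), 3))
print("volatility, twenty diverse models:", round(simulate(20), 3))
```

In this toy setup, concentrating all firms on a single shared model produces markedly higher return volatility than a diverse ecosystem of models, which is the convergence risk Gensler described.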
Gensler discussed a wide array of ideas, issues and risks relating to AI. While thoughtful regulation may be needed for AI, only time will tell how and when the SEC as well as the other regulatory agencies will take any significant actions.
The SEC has already started attempting to police this area by exercising its jurisdiction over broker-dealers and investment advisers.
In July, by a 3-2 vote of the commissioners, it proposed new conflicts of interest rules[1] designed to prevent a broker-dealer or an investment adviser from using predictive data analytics or similar technologies in a manner that results in the firm placing its own interests above those of its clients.
In addition to requiring written policies and procedures designed to prevent violations, along with additional recordkeeping, the proposed rules appear designed to prevent broker-dealers and investment advisers from using covered technologies — including AI — in client communications in ways that optimize proprietary revenue or change clients' behavior to the benefit of the broker-dealer or investment adviser and to the detriment of those clients.
Rather than requiring broker-dealers or investment advisers to disclose such conflicts of interest and obtain a client's consent — the current framework for dealing with conflicts of interest — the proposed rules would instead ask those entities to prove a negative: that the technology does not in any way put the interests of the broker-dealer or investment adviser before those of the client.
This new approach by the SEC may have a chilling effect on the types of technology, including AI, that these firms are willing to use, even though such technology could otherwise make them more effective advisers to their clients.
Those developing AI models and those utilizing them should exercise care, since at least one regulatory agency clearly is prepared to be a watchdog. These proposed rules may be only the start of such AI rulemaking.
[1] See https://www.sec.gov/rules/2023/07/s7-12-23#34-97990.