Williams worked at the SEC in enforcement for several years before joining Ropes & Gray and now advises clients on SEC regulatory issues. "One of the biggest issues right now in enterprise AI use is the ability to trust what AI is and how you're using it internally," Eliuk said. "The big goals over the next couple of years are to make it more trusted and make enterprise-wide adoption more seamless."
Elliott asked for recommendations on how companies can establish a framework to begin using AI.

"There is more than one way and it's going to differ by organization," Williams said. "For context, the SEC itself is trying to figure out its own AI usage and just recently set up a working group to determine how it's going to use AI.

"What you should not do is begin using AI tools without having conversations with in-house compliance people and your in-house legal staff. The SEC looks at whether you have proper policies and controls in place.

"For smaller organizations that may be just getting started, this may be understanding how they're using an external public AI tool that's not customized to them. They need to think about what information they are allowed to present to that tool."

Eliuk added, "You want more people to join these discussions because we're not experts across every domain and others may highlight important issues. Then you can begin to create an AI policy."

He cautioned, "There is often an assumption that data belongs to the user, when that's not the case. Data often belongs to whoever provided the data, which, for example, can be a sovereign state. All these things are important and many of us don't have the expertise on how to deal with them. Bringing in partners you trust is really important, because when you understand that supplier, the license constraints, and what you can share, that policy goes a lot further. If you ever do get into trouble, you need to have a reference trail to justify the company's actions in that space. If you don't have that, you're deficient because you didn't do your due diligence."

When asked by Elliott to analyze different AI models, Eliuk answered, "There are so many models out there, including open, semi-open, and closed models.
"Some are highly protected, where you subscribe to a service and are able to use that service, but you do not have open access to the model. When you're using those semi-closed models, you're actually pushing your data out to an endpoint, whether that be on a public cloud or a company's proprietary cloud. But what happens along the way with that data? It is probably encrypted, but how did it get used? You might have used it on the surface just to get an answer, but did the company that you're subscribing to use it in a different way, shape, or form? You have to be careful with that.

"With open-source models it is very different. Many people compare them with open-source software, where cohorts of individuals contribute to improve it. That's not the case here. Open-source models are incredibly enabling, but at the same time incredibly problematic, because if an exploit is found in an open model that you're using, you don't get new versions of the model. You can't take that previous version out of the market.

"Also, the safeguards and guardrails built into open-source models can be broken if people start to train certain layers of the models. Then what information might leak out of it? Was it potentially unlicensed data? Maybe it was licensed data at one point and a new regulation came out, and that data is no longer allowed to be public.

"There are other models where you can subscribe and use them and you have indemnification. Find a trusted company. There are many different models, so research the model that you want to use and then understand how it's going to be effective for your use case."
Elliott asked the panelists to comment on ways AI affects how companies train employees about protecting MNPI. “Many organizations believe that everyone needs
to be trained on MNPI because so many people have access to it," Williams said. "That training should be universal, and anyone who could potentially distribute information also needs training on Regulation Fair Disclosure. Erring on the side of having training for a wider range of people is a good thing."

Eliuk added, "You don't want people wandering around AI models, so develop policies so they understand the accepted areas. Then provide alerts