Responsible AI Development: Working Hand-in-Hand with Legal

Discover the impact of future AI regulation with Grace Chu from our panel session at ELC Annual 2023, the largest conference for engineering leaders in San Francisco!

This is the first article in our 3-part series where we uncover the insights from our panel discussion at ELC Annual 2023. Let’s dive in!

In this blog, we will uncover the following:

  • Transparent and early engagement with compliance teams is key.
  • View legal guidance as an enabler: it helps you use and provide AI products and services in compliance with regulation, without unnecessary risk or delay.
  • Open communication and timely updates when plans change are crucial to avoiding regulatory pitfalls as AI legislation rapidly expands and evolves.

About Grace Chu

Grace, former Senior Product Counsel at Adobe, most recently served as the inaugural AI/ML and lead data-use attorney for Adobe’s Digital Experience business, which generated $4.42 billion in revenue in fiscal 2022. She has advised clients ranging from Fortune 100 companies to local startups on open source, copyright, licensing, and M&A.

How serious is this wave of regulation, and what is coming?

A seismic wave of AI regulation is underway across jurisdictions worldwide. According to legal expert Grace, extensive rules with serious implications are already here and expanding quickly. Fines, bans, and business limitations await companies that don't adequately prepare.

“As one key example, the EU will forbid certain types of AI that they consider unacceptable risks. While the EU AI Act is not yet final, the current amendment proposes fines of up to €40 million or 7% of your gross annual turnover for the preceding financial year.”

“A fine is very serious, but even more serious is not to be able to do business.”

Will new regulations apply only to advanced models like generative AI, or to basic models as well?

It's important to understand that these rules apply far beyond advanced AI systems. Regulated industries such as banking and healthcare will receive heavy scrutiny. Independent of industry, employment stands out as an area where many jurisdictions will regulate AI. Grace emphasizes that it's often more about the application than the sophistication of the model. Even basic AI won't get a pass in regulated domains.

"Don't think necessarily about the complexity of the model or that any one jurisdiction is going to be indicative of regulation across the globe."

What is your main advice for leaders of modeling teams?

Grace emphasized the importance of early and ongoing communication with legal and compliance teams when working on AI models. She stated that model developers should "make it legal's and compliance's problem by talking to us, and letting us help you lay out a plan in which it's feasible for you to comply. Let's be happy with each other instead of fake-polite."

"Raise what you need to accomplish early on and keep communicating. If there are changes, such as changes in the function of your model, vendors that are starting to use AI or new vendors, the datasets you want to use, the countries you're releasing in, or the release timeline, especially if you want to release early, these are all important things you want to let legal know. If you're not sure whether something matters for compliance, it's far better to raise it early and get confirmation."

Given the landscape of developing legislation and regulation worldwide, there is a need for early and continuous collaboration between model developers and legal/compliance to create mutually agreeable and feasible plans rather than leaving compliance to the end. Though this may seem counterintuitive, Grace emphasizes that early and open communication with legal and compliance will save significant time and churn in the long run.

What daily advice do you have for model developers who need to discuss their work with legal teams?

As legislation increases, compliance teams must scale their services accordingly. Organizations need more AI expertise in legal, risk management, and compliance. Grace stresses that modeling teams should loop these groups in early and often when developing or changing AI systems. Last-minute regulatory reviews cause pain on both sides, often resulting in avoidable delays and elevated risk to the organization.

"Because there is so much legislation that is going to be passed and revised, guidance is going to keep evolving. Developers and compliance are going to have to work hand-in-hand, or their organizations will face the risk of penalties for non-compliance."

Ongoing communication about model changes, data sources, release countries, and more is essential as rules come into effect and evolve. Requirements and, therefore, guidance will keep sharpening over time, and best practices may look very different in six months.

Grace's Best Practices when Working with Legal on AI Models:

  • Share plans early when working on AI models and keep legal teams continuously updated on changes, including to model function, vendors, datasets, countries, and timelines. Don't let these be surprises or grow into roadblocks!
  • Collaborate early to create feasible plans that work for both model developers and legal/compliance, with an acceptable compliance burden and risk profile.
  • Don't leave compliance as an afterthought, and don't assume legal can achieve compliance without collaboration from developers, including in product design and documentation.

Navigating the New Landscape

In a nutshell, Grace's advice for navigating this new era is early and continuous engagement with compliance teams. Rather than seeing legal compliance as a roadblock, view collaboration with legal as a path to providing AI products and services without taking on unnecessary risks or delays. With open communication and a plan agreed upon between developers, the business, and legal, companies can provide and use AI broadly while avoiding pitfalls in this rapidly changing regulatory landscape.
