Welcome to the Trustible Newsletter!
Our bi-weekly newsletter covers top news & analysis in AI policy, AI governance best practices, and product updates.
Responsible AI is moving incredibly fast. On one side, AI capabilities are accelerating faster than Moore’s Law. On the other, efforts to regulate and govern AI are intensifying globally to address security, privacy, and societal concerns. This dynamic creates a VUCA (volatility, uncertainty, complexity, ambiguity) environment in AI, highlighting the critical role of AI governance professionals in managing these complexities and safeguarding their organizations, customers, and society as a whole, all while continuing to foster innovation.
The Trustible Newsletter will be a bi-weekly publication to help you – the AI and AI governance professional – cut through the AI noise and stay on top of the policy landscape, best practices for AI governance, and product updates from us.
Our hope is to deliver content that is concise, insightful, and actionable. In every edition, we’ll cover topics ranging from techniques to mitigate AI risks to the latest developments in AI policy, as well as highlight AI benefits and incidents making the news – and how they may impact your organization.
Let’s get started.
1. SEC allows Disney and Apple shareholders to vote on AI use
The U.S. Securities and Exchange Commission (SEC) has required Apple and Disney to allow shareholder votes on their use of AI. This decision could mark the beginning of increased shareholder activism around artificial intelligence. The shareholder proposals demand detailed AI usage reports and ethical guidelines, highlighting concerns over AI's impact on jobs, as seen in recent labor disputes and legal actions such as the New York Times' lawsuit against OpenAI.
This trend is also mirrored in international regulations, like the EU’s AI Act, which requires worker consultation before deploying high-risk AI systems. The SEC rejected attempts by Apple and Disney to categorize these proposals as "ordinary business operations," which would have placed them beyond the scope of shareholder voting.
Our take: This ruling shows that AI governance is a board-level concern – and the SEC has now effectively endorsed that view. Corporate boards must be informed about AI's capabilities and risks, integrating AI ethics and governance into their strategic discussions and oversight.
2. New NIST taxonomy for adversarial AI/ML attacks
Last week, the National Institute of Standards and Technology (NIST) published a report containing a robust taxonomy of AI/ML attacks (full report here). It describes various types of AI attacks and exploits, and explores the current state of attack mitigations. The paper covers attacks on ‘conventional’ predictive AI, as well as attacks specific to generative models. This is the latest NIST report seeking to equip organizations with standard terminology for research and documentation purposes, and it complements a more general AI terminology document NIST published last year.
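To make one of the taxonomy's categories concrete, here's a minimal sketch of an evasion attack in the style of the fast gradient sign method (FGSM). Everything in it is hypothetical: a made-up logistic-regression "model" stands in for a real classifier so the attack mechanics are visible.

```python
# Toy evasion attack (FGSM-style). All weights, data, and the perturbation
# budget below are fabricated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained parameters of a binary classifier.
w = rng.normal(size=20)
b = 0.1

def predict_proba(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=20)  # a benign input
y = 1.0                  # its true label

# Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w.
grad = (predict_proba(x) - y) * w

# FGSM step: nudge every feature by eps in the direction that raises the loss.
eps = 0.3  # attacker's perturbation budget (hypothetical)
x_adv = x + eps * np.sign(grad)

print(f"clean prediction:       {predict_proba(x):.3f}")
print(f"adversarial prediction: {predict_proba(x_adv):.3f}")
```

Even this toy example hints at why NIST's caution matters: a defense has to anticipate every such perturbation, while an attacker only needs one that works.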
Why it matters for legal & risk professionals: NIST confirms that there are no current ‘foolproof’ methods for protecting AI from misdirection, and even best-in-class mitigation strategies cannot promise full prevention. AI security is not a ‘solved’ problem, and the many unknowns create additional risks and challenges for organizations.
3. The questions you should be asking your AI vendors
Customers and regulators are increasingly probing how and where AI is used throughout organizations’ supply chains, given that AI risks can surface anywhere. That’s why we see many companies building out robust procurement guidelines for AI vendors. But oftentimes, they don’t even know which questions to ask.
We put together this blog post as a practical guide with key questions you should be asking your AI vendors… and why (one way to track them is sketched after the list). They include:
What type of AI model does your system use and is it explainable?
What is the source of your training data for the AI model?
Will inputs to the system be used as training data?
What are known limitations of the system?
What are your organization’s AI policies?
How do you document your AI systems?
What bias and fairness considerations went into your AI model?
And more!
Read the full post here.
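For teams that want to operationalize a list like this, one lightweight option is to encode the questions as structured data so vendor answers can be collected and reviewed per vendor. The sketch below is purely illustrative: the schema and field names are our own assumptions, not a standard or a Trustible product API.

```python
# A minimal sketch: vendor due-diligence questions as structured records.
# The dataclass design is a hypothetical illustration, not a prescribed format.
from dataclasses import dataclass

@dataclass
class VendorQuestion:
    topic: str
    question: str
    answer: str = ""  # filled in from the vendor's response

AI_VENDOR_QUESTIONS = [
    VendorQuestion("model", "What type of AI model does your system use, and is it explainable?"),
    VendorQuestion("training data", "What is the source of your training data for the AI model?"),
    VendorQuestion("training data", "Will inputs to the system be used as training data?"),
    VendorQuestion("limitations", "What are known limitations of the system?"),
    VendorQuestion("policy", "What are your organization's AI policies?"),
    VendorQuestion("documentation", "How do you document your AI systems?"),
    VendorQuestion("fairness", "What bias and fairness considerations went into your AI model?"),
]

# Example use: flag unanswered questions before procurement sign-off.
outstanding = [q.question for q in AI_VENDOR_QUESTIONS if not q.answer]
print(f"{len(outstanding)} questions still need vendor responses")
```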
4. CHEAT SHEET: Comparing EU AI Act, NIST AI RMF, and ISO 42001
Why this matters: Navigating the evolving and complex landscape of AI governance requirements is a real challenge for organizations. We've created this cheat sheet comparing three important compliance frameworks: the EU AI Act, the NIST AI Risk Management Framework, and ISO 42001.
5. Life After the Rite Aid Order: A Discussion on How the FTC is Shaking Up AI Oversight
AI incidents can have major implications for companies looking to develop and deploy AI. Oftentimes, organizations suffer financial or reputational harm when an AI system takes harmful or unintended actions that impact people – further underscoring the need for AI governance.
The American drugstore chain Rite Aid was recently banned by the Federal Trade Commission (FTC) from using AI facial recognition after the retailer deployed the system without reasonable safeguards. The order will have far-reaching implications for facial recognition technology, and it signals the FTC’s willingness to go big on regulating AI in the US.
Join us on Wednesday, January 31 at 12:00 P.M. EST / 9:00 A.M. PST for a LinkedIn Live.
Speakers:
John Heflin – Director of Policy, Trustible
Jon Leibowitz – former Chairman of the FTC and former Partner, Davis Polk & Wardwell
Maneesha Mithal – Partner, Privacy & Security, Wilson Sonsini
Agenda:
The FTC’s role in regulating AI
The impact of the Rite Aid decision on American businesses
AI and consumer rights, notices, and complaints
Effects on AI vendors and suppliers
****
That’s it for today! We welcome your feedback on this edition and what you’d like to see going forward.
- The Trustible Team