A Little Less Conversation, A Little More Action 🎵
Plus, US federal agencies opine on AI, and the question of whether machines can 'unlearn'
Ok, we admit today’s edition feels serious. So here are 5 funny cartoons about AI.
In today’s edition of The Trustible Newsletter (6 minute read):
Tennessee enacts the ELVIS Act
US federal agencies opine on AI – all at once
The 3 lines of defense for AI governance
The SEC cracks down on AI washing & fraud
Can machines unlearn?
Going to the IAPP Global Privacy Summit in DC this week? Come to our booth and say hello 👋
–
1. A Little Less Conversation, A Little More Action: Tennessee's Protectionist AI Law
Last week, Tennessee formally enacted the Ensuring Likeness Voice and Image Security Act (ELVIS Act), which grants individuals property rights over their physical likeness and voice. The legislation passed both chambers of the legislature unanimously, as it was widely seen as an effort to protect artists and the recording industry against audio deepfakes and other kinds of AI-enabled exploitation.
Artists like Billie Eilish, Katy Perry, Nicki Minaj, Jon Batiste, and others have submitted letters to AI developers urging them to stop using AI to devalue human artists. According to Axios, the letter addresses concerns such as the replication of artists’ voices, the use of their work to train AI models without compensation, and the dilution of the royalty pools paid out to them. It’s no surprise that this bill moved so quickly through the Tennessee legislature given Nashville’s prominence in the music industry, which contributes nearly $10bn a year to the local economy.
This bill is an early example of a ‘protectionist AI’ law that seeks to shield specific industries or interest groups from AI-driven disruption. Last year's SAG-AFTRA strike in Hollywood prominently featured the use of AI, and the final agreement defined ground rules for AI ‘replicas’ of union actors. We expect to see additional lobbying efforts by interest groups favoring similar protectionist industry laws, both across US states and internationally (looking at you, Michigan).
Our Thought Bubble: This scenario could result in a highly fragmented regulatory landscape, with some jurisdictions aiming to promote AI development and others prioritizing the protection of their established industries.
2. Federal Agencies Opine on AI
The past two weeks have seen a flurry of activity from the U.S. Federal Government on AI, largely as follow-ups to President Biden’s Executive Order on AI.
On March 27, NTIA published its report on AI accountability in response to an earlier Request for Information aimed at shaping AI policy. The findings align with President Biden's Executive Order and suggest non-binding measures for AI auditing, disclosure levels, and governance through procurement. On the same day, the US Treasury Department outlined AI's risks to the financial sector, emphasizing challenges like scarce AI talent, advantages favoring large firms, legal complexities, supply chain vulnerabilities, and significant cybersecurity issues, which are also top concerns in other regulated sectors.
The next day, on March 28, OMB released its final memo on AI governance and risk management for federal agencies. The memo requires, among other things, that agencies name a Chief AI Officer by the end of May. It also mandates that agencies implement safeguards for the AI technologies they are using by December 1 or stop using them, unless the agency can demonstrate that the technology is necessary to its functions.
Our Take: The surge in initiatives may lead to mixed results. Although the Biden Administration has advanced a “whole-of-government” strategy for AI, the practical effects of the NTIA report and OMB's memorandum raise several questions. Because NTIA serves as an advisory body to the President, it remains an open question whether its AI accountability recommendations will actually be put into practice. Moreover, OMB's memorandum lacks specifics on the safeguards required for continued AI usage by agencies, and it remains unclear how the Treasury Department will translate its research findings into action.
3. The 3 Lines of Defense for AI Governance
AI Governance involves a multidisciplinary set of stakeholders implementing policies and processes across the organization for every AI use case and model it deploys. How AI governance is structured will differ depending on an organization's risk tolerance, resources, and sector, but good guidelines and best practices can be borrowed from other industries. Specifically, the ‘3 lines of defense’ model from the financial sector’s model risk management guidelines can be adapted for AI.
In our latest blog post, we outline what the 3 lines of defense are for AI governance. We highlight who the primary stakeholders are, what they are responsible for, and how they fit into the overall AI governance strategy of any organization.
4. SEC Cracks Down on AI Washing & Fraud
The AI hype is real. So real, in fact, that companies that don’t actually implement anything resembling ‘AI’ claim they do, whether to improve their brand reputation, attract customers, or appeal to potential investors. However, claiming to use artificial intelligence when you don’t is starting to land companies in hot water with regulators.
Last week, the Securities and Exchange Commission (SEC) announced penalties against two investment advisory firms that fraudulently claimed to use AI. These companies touted their use of AI when talking to potential clients, despite not actually using the technology for the purposes they described. This practice of soliciting business, claiming differentiation, or otherwise growing a company by falsely claiming to use AI is known as ‘AI washing’.
The SEC isn’t the only federal regulator looking at fraudulent uses of AI. The FTC has published several guidelines and warnings over the past few months related to fraudulent AI claims in marketing. It also recently stated that silently changing terms of service to allow collected data to be used for AI can be considered a fraudulent practice.
Key Takeaway: Top regulators are monitoring firms’ statements around AI use. Overstating the outcomes, performance, or capabilities of an AI system can land companies in regulatory trouble.
5. Can Machines Unlearn?
For humans, old habits die hard. The same might be true for AI.
While recent advances in machine learning have allowed AI models to recognize images, contexts, and grammatical rules, teaching them to unlearn this information presents a larger challenge. Specifically, since large language models (LLMs) represent specific terms, facts, or patterns as vector embeddings distributed across billions of parameters, removing one of these concepts cannot easily be done without affecting other parts of the model and degrading its performance. Similarly, it is difficult to identify how much influence a single training data point had on the final model’s weights.
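To make the difficulty concrete, below is a minimal, hypothetical sketch of one family of approaches discussed in the unlearning literature: nudging a model away from a ‘forget set’ with gradient ascent while ordinary training on a ‘retain set’ tries to preserve everything else. The tiny model, random data, and settings here are placeholders for illustration, not any lab’s actual method, and real techniques involve far more careful procedures and evaluation.

```python
# Illustrative sketch only: gradient-ascent unlearning with a retention penalty.
# The small linear model and random tensors are stand-ins, not a real LLM.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                      # placeholder for a pre-trained model
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

forget_x, forget_y = torch.randn(8, 16), torch.randint(0, 4, (8,))    # data to unlearn
retain_x, retain_y = torch.randn(32, 16), torch.randint(0, 4, (32,))  # data to keep

for step in range(100):
    opt.zero_grad()
    # Ascend the loss on the forget set (push the model away from it)...
    forget_loss = -loss_fn(model(forget_x), forget_y)
    # ...while descending the loss on the retain set to limit collateral damage.
    retain_loss = loss_fn(model(retain_x), retain_y)
    (forget_loss + retain_loss).backward()
    opt.step()
```

Even in this toy setting, both objectives pull on the same shared weights, which is exactly why unlearning one concept tends to degrade performance on others.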
This matters because various legal and regulatory standards apply to AI, and there are dozens of legal situations where the ability to remove precise information from a pre-trained model could prove immensely useful: removing illegally obtained data from a model, fully implementing the GDPR’s ‘right to be forgotten’, or removing dangerous, toxic, or deliberately poisoned information from a model.
In recognition of this, many leading AI research teams have started to tackle the problem of machine unlearning. Researchers at Microsoft last year unveiled a technique and demonstrated erasing the term ‘Harry Potter’ from an LLM, albeit with mixed results. Meanwhile, Google has announced a funded ‘Machine Unlearning Challenge’ to promote research in this space. Finally, a group of researchers from leading AI safety labs has created the first benchmark covering biological, chemical, and cybersecurity threats for measuring how effective a machine unlearning technique is on risky subjects. However, unlearning techniques that can be relied upon for legal processes are likely several years away.
Key Takeaways: Getting AI to forget specific things is very difficult, despite being potentially very useful as a safety and regulatory tool. Research in this area is just beginning, and it will be a while before these techniques can be used at scale.
*********
As always, we welcome your feedback on content and how to improve this newsletter!
AI Responsibly,
- Trustible team