Trustible and Databricks team up to operationalize the DAGF
Plus why the AI moratorium (RIP) would have backfired, why AI slop is making human-generated content a premium, and the inflection point in the AI copyright debate
Hi there, Happy Wednesday, and welcome to the latest edition of the Trustible Newsletter! The AI news cycle never slows down, even over a holiday weekend for those of us here in the U.S. Between last week’s AI regulatory moratorium failing to pass, separate lawsuits involving Anthropic, Meta, and Midjourney being filed and resolved on the topic of AI copyright infringement, and the European Commission signaling full steam ahead on implementation of the EU AI Act, we’re watching history being made in real time.
In today’s edition (5-6 minute read):
Trustible operationalizing Databricks AI Governance Framework
Why the AI moratorium would have backfired
AI policy & regulatory roundup
Why AI slop matters
The great AI copyright conundrum
1. Trustible Announces Databricks AI Governance Framework Implementation
Last week, our partner Databricks introduced their AI Governance Framework (DAGF v1.0), a structured and practical approach to governing AI adoption across the enterprise. The DAGF acknowledges what many organizations are already discovering: AI governance is not simply a technical exercise. It’s about aligning people, processes, policies, and platforms to ensure that AI systems are trustworthy, compliant, and scalable.
The Databricks AI Governance Framework marks a pivotal step in helping organizations balance innovation with responsible deployment. But success depends on operationalizing it effectively across people, processes, and technology.
Yesterday, we announced that Trustible is proud to serve as the official Technology Implementation Partner of the Databricks AI Governance Framework and a key contributor alongside leading organizations such as Capital One, Meta, Netflix, Grammarly, and others. DAGF offers a practical, flexible framework designed to help enterprises embed AI governance into day-to-day operations, regardless of where they are in their AI maturity journey. You can read more about how we’ve interpreted the DAGF in the Trustible platform in our whitepaper here.
Key Takeaway: Starting this week, Trustible customers will be able to align their AI governance efforts directly to the framework through a dedicated DAGF module within the Trustible platform, helping embed AI governance into the fabric of their AI strategy so they can build, deploy, procure, and scale with confidence. This is the first of many partnerships to come with AI deployers, infrastructure providers, and ecosystem partners to ensure enterprises of all sizes and shapes have access to ready-to-deploy governance solutions that adapt as quickly as the market does.
2. Why the AI Moratorium Would Have Backfired
While we now know the ultimate fate of the proposed State AI Legislative Moratorium that was included in the ‘One Big Beautiful Bill’ budget (it was removed by the Senate in a 99-1 vote), the idea is likely to stick around, and similar proposals may appear in the future. We supported its removal for a variety of reasons, but our biggest argument against it was that it would have backfired. Specifically, we think banning AI regulations in the absence of any federal clarity would have hurt the very startups and innovative environment that its proponents were trying to protect. Here’s a brief outline of why:
Trust - The majority of Americans don’t trust AI, and even the perception that AI will become less regulated would erode that trust further. Trust in AI systems translates directly into revenue for AI companies, which in turn fuels further innovation.
Level Playing Field - Big companies and Big Tech have dozens of lawyers, machine learning experts, and marketing teams able to deal with the uncertainty of the current environment. Smaller companies and startups don’t. A clear set of AI standards can put a startup on even footing with Big Tech from a compliance perspective.
Legal Uncertainty - Let’s be honest: with so much opposition from State Governors and Attorneys General, this Moratorium would have been challenged almost instantly and would likely have taken years to work through the federal circuits. During that time, State laws in Colorado and Texas would have been left in a state of uncertainty, which businesses and investors hate.
A Solution Before A Problem - The overwhelming majority of the ‘1000+’ AI bills at the State level don’t actually regulate AI directly (most simply mention the term ‘Artificial Intelligence’), and many have common-sense, overlapping requirements around issues like AI disclosure and protecting against deepfakes. Perhaps ironically, Big Tech’s State lobbying efforts have already created a fairly lightweight and consistent State regulatory environment.
Reduced Information Sharing - At the moment, there is virtually no information sharing within the AI ecosystem on AI vulnerabilities and incidents, primarily because of legal liability concerns: companies do not want to admit to any incident in writing. Sharing through a regulator or standards body could get around both the liability and competition issues. This lack of information sharing hurts everyone, because it means we’re not learning from the mistakes of others or innovating as fast as we could.
For a bigger deep-dive analysis, we have a more in-depth post here.
Our Take: The idea that all regulation is bad for business is a little simplistic, and can often just be a form of regulatory capture. A balanced federal framework would be the best approach for everyone involved, but pre-empting State action before that framework has even been discussed would hurt all but the biggest AI companies.
3. Policy Round-Up
Here is our quick synopsis of the major AI policy developments:
U.S. Federal Government. The failed federal AI moratorium (see our write-up for more details) has some speculating that it may renew a push for federal legislation that explicitly preempts state laws, though the contours of potential legislation remain murky.
U.S. States. AI-related policy developments at the state level include:
California. The California Civil Rights Council approved a final regulation that clarifies how existing discrimination laws apply to AI tools for employment decisions. The new rules will take effect on October 1, 2025.
New York. Governor Kathy Hochul announced the construction of a new nuclear power plant in upstate New York to meet new energy demands, driven in part by growing AI usage. The announcement comes amidst a push from the Trump Administration and big tech companies to power AI computing infrastructure with nuclear energy.
Canada. It appears Canada’s new government will not revive the Artificial Intelligence and Data Act, which Parliament terminated ahead of the federal elections in April 2025. Canadian lawmakers are considering which aspects of the former legislation they may want to pursue, such as addressing issues with copyright and AI. The move aligns with a broader global movement away from AI regulation and towards AI innovation.
South America. AI-related policy developments in South America include:
Chile. Chilean lawmakers are facing opposition to their proposed comprehensive AI law. The proposed bill is an EU-inspired framework, which critics claim will harm technological investments and development if enacted.
Brazil. Leaders of the BRICS countries (Brazil, Russia, India, China, and South Africa) are expected to release a statement that calls for data protections against unauthorized AI use. The push will come as part of a two-day summit among BRICS leaders in Rio de Janeiro. BRICS serves as a diplomatic forum for developing countries and has recently been accused by President Trump of promoting "anti-American policies."
EU. The next set of EU AI Act implementation deadlines arrives on August 2, and the European Commission quashed rumors that it may pause enforcement of certain EU AI Act provisions. Tech companies have been working behind the scenes to delay the Act’s enforcement timelines, with the obligations for general purpose AI models (which kick in on August 2) top of mind. They argued that delays with the voluntary Codes of Practice, which may not be released until the end of 2025, warranted the postponement.
Industry. Microsoft announced that it would lay off approximately 4% of its workforce as it seeks to make heavier investments in developing its own AI. The move comes as big tech remains locked in an AI arms race, which has seen companies like Meta offer $100 million signing bonuses for top AI talent.
4. Why AI Slop Matters
With recent improvements in the quality and cost-effectiveness of AI-generated content, it seems impossible to escape ‘AI Slop’ - low-quality generated content used mainly to drive online engagement. We see it on our social media feeds, in the content we read, and increasingly in our professional work. The public is becoming more aware of it as well, with recent news stories covering its impact on events like the most recent season of Squid Game and even the Sean Combs trial. The topic was also covered in depth by John Oliver in a recent ‘Last Week Tonight’ episode. But how big of an issue is ‘AI Slop’? Is it simply the new spam, something to be ignored that will eventually fade into the background, or are there major governance implications? Here’s a brief overview of why ‘AI Slop’ may be relevant to organizations using AI.
Wasteful Spend - Even before the current ‘Age of AI’, a conspiracy theory called the ‘Dead Internet Theory’ postulated that the majority of online content and interactions were driven by automated bots. The challenge: many organizations spend massive amounts of money on ads chasing ‘engagement’, or derive market insights from it, and if much of that engagement is synthetic, the spend is wasted. Big Tech platforms unfortunately have an incentive to artificially ‘boost’ engagement, even though it yields poor ROI for advertisers.
Degraded Reputation - Many organizations differentiate themselves on the quality of the services they provide: think of every top-tier law firm, editorial publication, or consulting business that charges high rates for access to top thinkers. The problem: what if those ‘top thinkers’ are using the same AI as everyone else? The temptation to use AI systems may win out, even as studies show that the diversity of AI-generated content is actually quite low and that overuse can degrade our own cognitive abilities over time. Organizations that differentiate on quality will face a constant fight to keep too much slop from degrading their reputations.
Key Takeaway: While AI-generated content is quick and easy to create, there may be a persistent bias against it, and the internet will likely become overwhelmed by it. This could create a premium market for authentic human-generated content, though maintaining that quality and output at scale could prove difficult. Enterprises that rely on AI to generate content, especially as part of their marketing strategy, should expect the lack of authenticity to ultimately reduce the effectiveness of those strategies. It’s an important reminder that while AI is a transformative technology, it is a tool, not a replacement for human content creation.
5. The Great AI Copyright Conundrum
Over the span of three days, two major rulings on AI and copyright law were handed down. On June 23, 2025, a judge ruled in favor of Anthropic after a group of authors sued the company for copyright infringement, alleging that it trained Claude on their protected works without permission. The judge found that, while Anthropic may have broken the law when it trained Claude on millions of pirated books, training on books that were legally purchased did not violate copyright law. The judge reasoned that training the model on the books was fair use because Claude’s outputs generated new text that was “quintessentially transformative” of the original material.
Two days later, a judge found in favor of Meta after a separate group of authors sued the company for using their copyrighted books to train Llama. The judge found that Llama’s outputs did not cause sufficient market harm to the authors because Llama was not able to generate “any meaningful portion” of the authors’ books that would threaten the books’ market value. Moreover, the authors did not present meaningful evidence that Meta’s use of their books diluted their value. The judge seemingly left open the door to further litigation on this issue by noting that the ruling only impacted “the rights of [the] 13 authors—not the countless others whose works Meta used to train its models … this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”
The big tech companies are heralding these rulings as a win; however, they do not resolve the broader issues around how AI model providers use protected works to train their models. Companies should also take note that the rulings do not shield them from liability for infringing someone’s intellectual property (IP) rights. Not all models are created equal, and it is important to understand whether the underlying model(s) behind a company’s AI products or services have guardrails in place to avoid violating IP laws, and how each model handles IP in its training data. For instance, Trustible Model Ratings identify when a model has policies around IP in its training data. Moreover, groups like Creative Commons are working on frameworks that help balance IP rights with the need for high-quality datasets to train AI systems.
Our Take: Big tech won the battle but is far from winning the war when it comes to how AI uses IP. Policymakers are thinking about how best to balance IP law with AI innovation, but do not expect an answer in the foreseeable future. In the meantime, companies should be taking steps to ensure that training data is properly licensed when appropriate, selecting models that have IP safeguards in place, and reviewing outputs to avoid using potentially infringing materials.
—
As always, we welcome your feedback on content! Have suggestions? Drop us a line at newsletter@trustible.ai.
AI Responsibly,
- Trustible Team