Moratorium on state AI laws would have a negative impact on AI adoption
Plus: choosing the right model, system prompts, and the NYT vs. OpenAI lawsuit
Good morning. Trustible is hiring! We have several roles available across engineering and business development.
In today’s edition (5-6 minute read):
Moratorium on state AI laws would have a negative impact on AI adoption
How to choose the right AI model for your task
Understanding System Prompts
OpenAI must preserve vast amounts of data in latest lawsuit development
1. Moratorium on state AI laws would have a negative impact on AI adoption
Congressional Republicans are one step closer to preventing states and localities from regulating AI. On May 22, 2025, the U.S. House of Representatives narrowly passed its reconciliation budget bill, which included a 10-year moratorium on state and local AI regulations. Should the language be signed into law as is, it would effectively halt all ongoing state-led efforts to regulate AI. It would also raise questions as to whether existing state laws fall within the moratorium’s scope. For instance, Colorado’s comprehensive AI law and Tennessee’s ELVIS Act would very likely be pre-empted by the moratorium. It is possible that the language will be removed from the bill due to the Senate’s procedural rules, which limit reconciliation bills to provisions that directly impact federal spending or revenue.
Article VI of the U.S. Constitution gives Congress the authority to pre-empt state and local laws with a federal law. However, states could still argue that Congress does not have the authority to regulate AI in this manner. There are also concerns over the federal government’s ability to restrict how states enforce their own laws. Supporters of the proposed moratorium assert that it will help AI innovation by avoiding overly prescriptive rules from state or local governments. However, the moratorium would not excuse organizations from obligations under international regulations (e.g., the EU AI Act or Korea’s AI law). Critics counter that overriding existing state and local laws would inject further regulatory uncertainty into a fracturing legal landscape, as state and local governments could turn to other, non-AI-specific laws for workaround enforcement.
But here’s the thing. Preventing states and localities from implementing AI legal frameworks or protections may also hinder further AI adoption. The vast majority of Americans want companies to slow down and “get [AI] right the first time,” which underscores the prevailing trust gap with AI technologies. Since it’s unlikely that Congress will regulate AI in the coming years, enacting the moratorium may chill AI adoption if the public sees that the technology does not have adequate and reasonable safeguards. Moreover, most enterprises prefer AI to be regulated as it clarifies their liability.
"More than three-quarters of Americans (77%) want companies to create AI slowly and get it right the first time, even if that delays breakthroughs, the 2025 Axios Harris 100 poll found. Only 23% of Americans want companies to develop AI quickly to speed breakthroughs, even at the price of mistakes along the way." The notion that regulation would hinder innovation is a “false dichotomy”, as IAPP’s Ashley Casovan shares.
Our Take: Passing the AI moratorium could have the opposite effect on AI innovation. The public is already skeptical of the technology, and a 10-year freeze on any safeguards may amplify the narrative that AI is unsafe, which will ultimately stifle the AI ecosystem.
2. How to choose the right AI model for your task
With all of the recent developments in model types, we ourselves have been confused as to which models to use for which purpose. Selecting the “right” large language model in 2025 feels like standing in front of a shelf of identical remotes, each hiding a different power button. Brand names don’t help much: OpenAI, Google, Anthropic and dozens of start-ups ship whole families of LLMs tuned for separate jobs. The good news is that nearly every model fits into one of six capability buckets. Once you know the buckets, choosing becomes a matching exercise instead of a guessing game. Here’s some guidance:
General-purpose conversational models – these instruction-tuned “chat” brains aim for everyday requests. They’re what you open when you just need smart text: OpenAI’s GPT-4o, Anthropic’s Claude 3.7 Sonnet and Google’s Gemini 2.5 Pro all live here. Each holds broad world knowledge, follows prompts politely and is the default choice when you don’t have special constraints.
Deep reasoning & code specialists – when logical correctness matters more than quick responses, pick a model optimized for multi-step thinking. OpenAI’s o3 and Anthropic’s new Claude 4 both advertise frontier-level planning, math and debugging skills, while Mistral’s Mixtral-8×22B is a popular open-source option. Use them for legal analysis, algorithm design, or any task where a wrong answer is costly.
Lightweight & on-device models – sometimes latency, battery life or privacy matter more than raw IQ. Google’s Gemini Nano runs entirely on Android phones, Apple’s Ferret-v2 powers vision tasks on Macs and iPhones, and Anthropic’s Claude Haiku streams answers at bargain token prices. They summarize emails, draft replies or power edge IoT devices without shipping data to the cloud.
Knowledge-grounded & agentic models – these systems pair an LLM with live search or tool calls, so answers stay fresh and can trigger actions. Perplexity AI’s RAG stack, OpenAI’s ChatGPT “deep research” mode and the Assistants/Responses API let models fetch websites, crunch files or operate virtual computers before speaking. Reach for them when the source needs to be cited—or when you want the bot to do something, not just describe it.
Multimodal & creative generators – need vision, audio or imagery? Multimodal LLMs such as GPT-4o convert speech, pictures, and text interchangeably, while image-first diffusion models like Midjourney V7 and OpenAI’s DALL·E 3 turn prompts into artwork. They’re the engines behind product mock-ups, accessibility tools, and voice assistants that can see what you’re talking about.
Domain-specialized vertical models – when industry expertise and risk profiles are unique, grab a model steeped in that field. Google’s Med-PaLM family for healthcare, BloombergGPT for finance, and open-source FinGPT all inherit a general brain but are retrained on expert corpora and guardrails. They excel at jargon, regulatory nuance and narrow benchmarks where generalists stumble.
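As a rough illustration of that matching exercise, here is a minimal Python sketch that routes a task description to one of the six buckets using simple keyword heuristics. The keyword rules and the example model names (drawn from the buckets above) are simplifications for illustration, not a production-grade router or an endorsement of any specific model.

```python
# Map each capability bucket to example models mentioned above (illustrative only).
BUCKETS = {
    "general": ["GPT-4o", "Claude 3.7 Sonnet", "Gemini 2.5 Pro"],
    "reasoning": ["o3", "Claude 4", "Mixtral-8x22B"],
    "lightweight": ["Gemini Nano", "Claude Haiku"],
    "agentic": ["ChatGPT deep research", "Assistants/Responses API"],
    "multimodal": ["GPT-4o", "DALL·E 3", "Midjourney V7"],
    "vertical": ["Med-PaLM", "BloombergGPT", "FinGPT"],
}

# Very rough keyword heuristics; a real selection process should also weigh
# cost, latency, privacy, and risk tolerance, not just the task description.
RULES = [
    ("reasoning", ["debug", "legal analysis", "algorithm", "prove"]),
    ("lightweight", ["on-device", "offline", "low latency", "private"]),
    ("agentic", ["cite sources", "browse", "use tools", "research"]),
    ("multimodal", ["image", "audio", "speech", "vision"]),
    ("vertical", ["medical", "clinical", "finance", "regulatory filing"]),
]

def pick_bucket(task: str) -> str:
    """Return the first bucket whose keywords appear in the task, else 'general'."""
    task_lower = task.lower()
    for bucket, keywords in RULES:
        if any(keyword in task_lower for keyword in keywords):
            return bucket
    return "general"

if __name__ == "__main__":
    task = "Summarize this email offline on my phone"
    bucket = pick_bucket(task)
    print(bucket, "->", BUCKETS[bucket])  # lightweight -> ['Gemini Nano', 'Claude Haiku']
```

In practice the decision also weighs cost, latency, data sensitivity, and risk tolerance, which is why governance matters as much as the bucket itself.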
Bottom Line: Ultimately, model selection is just the first checkpoint on the road to value. The real differentiator is governance: clear policies, controls, and cross-functional accountability that ensure every model stays aligned with your organization’s values and objectives. Even the perfect model for your task can drift, hallucinate, or leak data if guardrails are weak. Treat governance as the operating system beneath your AI stack: define roles, document risks and decisions, validate outputs, and keep humans in the loop. Governance is the biggest enabler of AI adoption at scale.
3. Understanding System Prompts
System prompts are one of the final levers in an AI system that dictate the behavior of a large language model. They are instructions passed to the model before the user prompt that can specify tone, boundaries, output format and more. For example, a customer service chatbot may have a system prompt like: “You are a customer support assistant. You use a professional and friendly tone. You only answer questions about product XYZ. If a user asks an unrelated question, you will decline to answer”. This message would be input to the model before the first user prompt and customize a generic model (e.g. GPT-X, Llama, etc.) to your application. System prompts are also used to specify the tools available to an agentic system, since information about those tools is not available during training. However, because system prompts are just initial prompts set by the developer rather than the system user, they are limited in how much they can alter the system; to achieve a larger shift in behavior or give the model extensive new knowledge, larger interventions, like fine-tuning the model, may be necessary. In addition, a malicious actor may be able to override system prompt instructions by telling the model to ‘ignore previous instructions and do X instead’. Not all applications need a “system prompt”: when a developer interacts with a model directly, they may pass in a single prompt that includes information on style, tools and the actual task; the distinction between “system” and “user” prompts is primarily necessary for customer-facing applications.
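To make the mechanics concrete, here is a minimal sketch of how a developer might pass the customer-support system prompt above ahead of the user’s message through a chat-style API. The example uses the OpenAI Python SDK purely for illustration; the model name and product details are placeholders, and the same system/user message pattern applies to other providers’ chat APIs.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Set by the developer; sent to the model before any user input.
system_prompt = (
    "You are a customer support assistant. You use a professional and "
    "friendly tone. You only answer questions about product XYZ. If a user "
    "asks an unrelated question, you will decline to answer."
)

# Arrives at runtime from the end user of the application.
user_prompt = "How do I reset my XYZ device to factory settings?"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-tuned model works here
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

Because the system message is ultimately just another prompt, a determined user can sometimes override it, which is why prompt-injection defenses and guardrails outside the prompt remain necessary.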
Major consumer AI systems – like ChatGPT, Grok and Claude – use system prompts to dictate your interaction with them. These are hidden from the user, with some exceptions: after Grok output unsolicited statements about white genocide in South Africa, xAI explained the behavior as an unauthorized change to the system prompt by a rogue employee and publicized the full fixed prompt (the offending prompt was not shared). Meanwhile, Anthropic has been publishing prompts for several past model iterations, but these releases exclude some of the sections that guide agentic tool use. Independent parties have leaked prompts for many other major systems to promote transparency (see this GitHub repository). These prompts range widely in length and content: Grok-3’s is under 500 words, with a large focus on details about xAI’s products that a user may ask about. By contrast, Claude-4’s is nearly 10,000 words and covers everything from banned topics to minimizing sycophancy to the results of the 2020 US election (see detailed breakdown). Overall, these prompts both reinforce behaviors instilled during training and contain information that would not be available beforehand, such as today’s date, available tools and information about the product itself (which may change after training). Transparency around these prompts may help a user understand how the system will handle challenging situations (e.g. Claude’s prompt includes a section on therapeutic support). On the other hand, knowledge of the system prompt may make it easier for malicious actors to jailbreak the system, and developers may believe that a well-crafted prompt makes their system unique.
Our take: Developers of major AI systems have crafted in-depth system prompts that govern many aspects of model behavior, but many have chosen not to publicize them, citing privacy and security concerns. Transparency may give users a better understanding of the risks and opportunities of these systems, though the actual effects may be hard to decipher for non-expert readers.
4. OpenAI must preserve vast amounts of data in latest lawsuit development
The New York Times’ ongoing copyright lawsuit against OpenAI has taken an interesting turn in recent weeks. On May 13, 2025, Magistrate Judge Ona Wang issued a sweeping evidence preservation order that required OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court.” The order came after the New York Times (and other news organizations party to the case) voiced concerns over output log data that had been destroyed by OpenAI since the lawsuit was originally filed in December 2023. OpenAI filed a request to reconsider the May 13 order, but Magistrate Judge Wang denied the request and scheduled a hearing to address potential spoliation issues (i.e., whether OpenAI ignored its obligations to preserve evidence).
Generally, discovery is limited to information that is relevant and proportional to the case. The news organizations assert that OpenAI has an obligation to preserve all output log data because it could be relevant to their copyright infringement claims, and they argue that OpenAI’s data deletion practices violated its obligation to preserve evidence for the lawsuit. However, these preservation demands come into direct conflict with commonly understood data privacy principles (e.g., data minimization and storage limitations). OpenAI maintains that it has a routine data deletion process in place, which would make sense given the vast amounts of data ChatGPT processes. More troubling is that the May 13 order (and the subsequent denial of reconsideration) also requires OpenAI to retain data from users who asked to have their data deleted. While there is an understandable interest in preserving evidence, the order seemingly undermines the data privacy rights of those who asked OpenAI to delete their data.
Our Take: The preservation order has a disproportionately broad scope that may shape how judges view evidence preservation in similar or future cases against frontier model companies. Moreover, while discovery has so far been limited to OpenAI’s ChatGPT products, Microsoft is also a party in this case and could be implicated in further evidence disputes if the news organizations believe additional, relevant evidence is being held on Microsoft’s servers.
—
As always, we welcome your feedback on content and how to improve this newsletter!
AI Responsibly,
- Trustible team