A unified standard for AI Agent interactions
Plus AI in education, Colorado and Texas updates, and overheard at IAPP
In today’s edition (5-6 minute read):
A unified standard for AI Agent interactions
AI Literacy is needed to fill the AI trust gap
Colorado and Texas make changes to their AI rules
Overheard at IAPP GPS 2025
1. A unified standard for AI Agent interactions
One of the key powers of AI Agents is their ability to interact with other systems, such as searching the web, running code, or calling an API. A major challenge in building multi-purpose agents is implementing the code that powers these interactions, because the interfaces (e.g. APIs) can come in a wide variety of formats.
To help alleviate this challenge, Anthropic developed the Model Context Protocol (MCP), which creates a unified standard for these interactions. The protocol involves two components: a Client and a Server. Servers expose a standardized set of functions (in MCP format) for interacting with an external service, while the Client, with the help of an LLM, converts a user request into that format. Overall, this shifts the burden of figuring out how to talk to other systems from end users to the Client and Server developers. One of the primary benefits of MCP is in a “low-code” setting (e.g. a desktop assistant), because integrating MCP into a complex internal system would still require coding custom servers and clients.
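To make the Server side more concrete, here is a minimal sketch of an MCP server exposing a single tool, written against the official MCP Python SDK; the server name, tool name, and stubbed logic are illustrative assumptions, not part of the protocol itself.

```python
# Minimal sketch of an MCP server (assumes the official Python SDK, package `mcp`).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")  # server name shown to connecting clients

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short weather summary for a city (stubbed for illustration)."""
    # A real server would call an external weather API here.
    return f"Forecast for {city}: sunny and 22°C"

if __name__ == "__main__":
    # Serve over stdio so an MCP client (e.g. a desktop assistant) can
    # discover the tool and call it in the standard MCP format.
    mcp.run()
```

Any MCP-compatible client can connect to this process, discover `get_forecast`, and invoke it in the standard format, with no bespoke integration code on the user's side.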
While MCP is meant to simplify interactions between agents and external services, Google's Agent2Agent Protocol (A2A) addresses inter-agent communication by defining a standardized method for AI agents from different platforms or vendors to discover each other's capabilities, negotiate interactions, and coordinate complex tasks. When multiple agents implement A2A, the initiating agent can send requests to a second agent in a standardized format and expect replies in the same format. This can simplify interactions, because the initiating system does not need to write custom integrations for every other system. For example, a travel-planning agent could seamlessly prompt separate flight, hotel, and restaurant booking agents implemented by other vendors, as sketched below. The protocol also includes a convention for querying the user during a task. A2A is a voluntary standard, meaning its success relies heavily on industry-wide adoption.
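Discovery in A2A centers on an "Agent Card", a JSON document each agent publishes describing what it can do. The sketch below approximates that structure as a Python dict; the field names follow the public A2A draft loosely, and the specific agent, URL, and skill are hypothetical.

```python
# Rough sketch of an A2A Agent Card: the discovery document an agent publishes
# so other agents can find its endpoint and skills. Field names are illustrative.
agent_card = {
    "name": "hotel-booking-agent",
    "description": "Searches and books hotel rooms on behalf of other agents",
    "url": "https://hotels.example.com/a2a",  # endpoint that accepts A2A task requests
    "capabilities": {"streaming": True},       # optional features the agent supports
    "skills": [
        {
            "id": "book_room",
            "name": "Book a hotel room",
            "description": "Finds availability and completes a reservation",
        }
    ],
}
# A travel-planning agent would fetch this card (conventionally served at
# /.well-known/agent.json), pick a matching skill, and send the task as a
# standard A2A request instead of writing a vendor-specific integration.
```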
MCP can reduce complex integration code to a configuration, but it does not inherently guarantee the quality or consistency of the external data or tools it connects to; developers must still vet and manage these integrations carefully. While a list of official MCP servers exists, it is possible to develop a malicious server that steals user data or injects malicious content into the response (which is processed by the LLM before being returned to the user). A2A does not pose risks directly, but simplified interactions between AI systems can accelerate several potential threats associated with rogue or compromised external systems (see our previous discussion of agentic AI risks: https://trustible.substack.com/p/ai-safety-institutes-play-a-critical).
Our Take: Many organizations have struggled to move complex AI systems from POCs to production. Recent protocols from Anthropic and Google attempt to alleviate some of these challenges, but focus primarily on straightforward interactions with external systems. If these standards and protocols are widely adopted, we may one day talk about ‘MCP’ and ‘A2A’ for AI the same way we talk about REST and SQL for the cloud.
2. AI Literacy is needed to fill the AI trust gap
Americans continue to express skepticism over AI’s benefits. A recent CBS poll found that about 44 percent of Americans thought AI would create more problems than it would solve. The same survey found that about one-third of respondents did not know whether AI would create or solve more problems. Moreover, most Americans remain doubtful about information generated by AI and have concerns about misleading AI-created content. While these numbers paint a less-than-optimistic picture of how the American public feels about AI’s growing presence in daily life, the increasing push to improve AI literacy can help change that sentiment.
The EU AI Act codifies requirements for organizations to implement AI literacy programs for all employees who deal with or use AI systems. Other non-binding frameworks, like the NIST AI Risk Management Framework (AI RMF) and ISO 42001, address AI competencies within an organization's workforce. In the U.S., the only comprehensive state-level AI law (Colorado SB-205) is silent on explicit AI literacy requirements, though it implicitly encourages them because adopting the NIST AI RMF or ISO 42001 creates a presumption of compliance. While these frameworks provide a strong foundation for improving AI skills among the workforce, there remains an AI skills gap among students. The Trump Administration appears to be paying attention to that gap, as its recent AI policy activity focused specifically on AI literacy for K-12 students.
On April 23, 2025, the President signed an executive order (EO) aimed specifically at improving AI literacy skills for K-12 students and educators. The EO directs federal agency heads to form public-private partnerships with AI-focused entities (i.e., industry, non-profits, and academia) to develop resources that help K-12 students build “foundational AI literacy and critical thinking skills.” It also encourages educators to understand AI and how it can be integrated into their day-to-day jobs, both to reduce administrative tasks and to incorporate the fundamentals of AI into all subject areas. The EO also directs the Secretary of Education to develop guidance for grants that would fund improving educational outcomes using AI.
Our Take: Instilling AI skills across generations will help build trust in AI and change the broader lackluster sentiment. The latest Trump EO moves the needle in the right direction, but predictably excludes references to AI risks, safety, accountability, or trustworthiness.
3. Colorado and Texas make changes to their AI rules
As the U.S. federal government continues to stall on AI regulation, states are becoming the main drivers of AI laws. Last year, Colorado passed the first comprehensive AI law, and this year several other states have attempted to follow suit, including Texas. However, 2025 has been a unique year for AI policy in the U.S. (as well as globally), and we are starting to see that trickle down into state-level AI policies. Within the past week, both Colorado and Texas unveiled changes to their respective approaches to AI regulation.
Since SB-205 became law roughly a year ago, discussion has swirled around potential amendments before the law takes effect in January 2026. On April 28, 2025, SB25-318 was introduced in the Colorado state legislature, which would make some significant changes to SB-205. The proposed changes include:
New Effective Date. Enforcement would be delayed until January 1, 2027.
Narrowing Algorithmic Discrimination. The new definition is limited to violations of existing local, state, or federal anti-discrimination law.
Removing “Reasonable Care” Standard. The proposed amendments remove developers’ and deployers’ “reasonable care” standard for protecting consumers from any known or reasonably foreseeable risks of algorithmic discrimination, and for alerting the state Attorney General about such algorithmic discrimination.
Small Business Exemptions. Deployers that fall under certain tiered employee thresholds over specified time periods are exempt from certain risk management requirements. This replaces the flat exemption for deployers with fewer than 50 full-time employees.
New Affirmative Defense Requirements. The amendments narrow when a developer or deployer can invoke the safe harbor for non-compliance, with requirements such as fixing curable violations within seven days of discovery and the violation affecting fewer than 1,000 consumers.
Meanwhile in Texas, the Republican-controlled state house passed the Texas Responsible AI Governance Act (TRAIGA). As we previously discussed, the original version of TRAIGA (HB 1709) was significantly revised and re-introduced as HB 149. The revised TRAIGA was further amended prior to passage, with changes addressing issues such as:
Political Viewpoint Discrimination. The revised bill removes the original prohibition on developing or deploying AI systems that discriminate based on political viewpoint.
Unlawful Discrimination. The prohibition on developing or deploying AI systems that discriminate based on protected classes exempts insurance entities, so long as those entities are subject to applicable anti-discrimination laws.
Sexually Explicit Content. A new section was added that prohibits intentionally developing or deploying AI systems that can engage in sexual, text-based conversations that imitate a minor under the age of 18.
Enforcement. New language was added to clarify that the bill would not create a private right of action.
Consumer Rights. The revised bill removes a previous section that allowed consumers to appeal adverse AI decisions.
Our Take: While neither state made wholesale changes to their AI regimes, the changes they did make reflect a major shift in the broader AI policy ecosystem – the growing desire to narrow the impact of AI rules on AI innovation.
4. Overheard at IAPP GPS 2025
Trustible was a proud sponsor and exhibitor at IAPP’s Global Privacy Summit 2025 last week in Washington, DC. While IAPP’s origins are in training privacy professionals, it has since expanded into educating and certifying cybersecurity and AI governance professionals, most recently with its ‘AI Governance Professional’ (AIGP) certification. Here are some highlights and insights from this year’s summit, based on the sessions Trustible attended and conversations at our exhibitor booth:
Inventorying is still hard
A common pain point we heard is that many organizations are still struggling to build and maintain a high-quality AI inventory. Many automated tools can’t capture everything, or are too noisy, and there’s a fear that if processes aren’t put in place soon, the problem will become too big to catch up with.
Privacy is only one part of the puzzle
While privacy risks and harms are a major consideration for AI, there are also entire use cases and systems that don’t leverage or collect personal information, but still pose massive risks for organizations and people.
AI governance isn’t just about the regulations
While regulations and standards are relevant, they aren’t the biggest driver of AI governance activities for most organizations. In fact, it’s the absence of strong regulations, or of clarity about how AI regulations will be enforced, that is leading many organizations to create more governance policies and processes, as they fear procuring less mature or untrustworthy AI products that may flourish in a regulatory vacuum.
Sam Altman didn’t come to talk about AI
Sam Altman participated virtually in a keynote panel, not to discuss OpenAI directly, but rather another organization he co-founded: Tools for Humanity. The organization’s first products are meant to provide open internet tools for proving someone is a human, leveraging a variety of privacy-by-design biometric tools and blockchain technology. The organization’s cryptocurrency and devices have faced numerous challenges from data protection regulators, and the pitch was met with mixed reactions from the privacy crowd.
Key Takeaways: AI Governance is still in its early days, and the pace of development and innovation in the AI space is creating a lot of uncertainty and challenges for professionals trying to take responsibility for oversight.
—
As always, we welcome your feedback on content and how to improve this newsletter!
AI Responsibly,
- Trustible team