Whether or not AI succeeds will depend on trust
Plus, the Pope weighs in on AI, and our summaries of two important AI trust conferences
Hi there! In today’s newsletter:
Microsoft vs Apple - A game of trust
Recapping IAPP’s AI Global Governance Conference 2024
Second Thoughts on Colorado’s AI Law as California Moves Forward
Pope Francis weighs in on AI Ethics at the G7
FAccT finding in Rio de Janeiro
1. Microsoft vs Apple - A game of trust
Apple has invested significant time, research, and marketing resources to convince users that it respects their privacy. Apple hasn't just talked a big game about privacy; it has gone toe-to-toe with law enforcement over it and won. Those efforts may now pay off in the AI race and give the company a definitive edge: user trust.
Apple announced its on-device 'Apple Intelligence' last week. While some have expressed serious concerns about it, the reception has generally been more positive than for similar features announced by Microsoft. Microsoft's Recall feature was found to have several notable security flaws and used an 'opt-out' model, which caused significant backlash. The ultimate irony is that at least some of the services Apple plans to roll out are powered by OpenAI, which has received significant investment from Microsoft.
The reception of Apple Intelligence has been notably more favorable than that of Microsoft's recent announcements. One likely reason is the higher level of consumer trust in Apple. When both companies assert that they won't use customer data for training or deploy AI for surveillance, people tend to believe Apple more, given its long record of emphasizing privacy and security in its products. Additionally, Apple's "slower" pace in developing its AI products, despite drawing criticism, seems to work in its favor. By not rushing to release features that users haven't asked for, Apple has cultivated an image of delivering more polished and reliable products. Its "slow and steady" strategy may once again prove successful.
Our Take: Actions speak louder than words. Apple's reputation for privacy, user-centric design, and quality may help it leapfrog into a leading position on AI.
2. Recapping IAPP’s AI Global Governance Conference 2024
IAPP has once again put the spotlight on AI with its second AI Global Governance (AIGG) conference, which took place in Brussels on June 4 and 5. Over those two days, professionals from the private sector, government, and academia convened to discuss the rapidly evolving AI governance landscape. IAPP hosted its first AIGG conference last November in Boston; since then, policymakers in Europe have finalized the EU AI Act and state lawmakers in Colorado have enacted the first state-level AI law. The conference's location in Belgium placed an emphasis on what organizations should expect as the EU AI Act comes into effect.
However, the impending legal implications of emerging AI regulatory frameworks were not the only topics of conversation. Attendees also expressed varying levels of preparedness when it came to AI governance structures within their own organizations. Many conversations among individual attendees centered on starting points for their organization's AI governance structure, or on how to evolve their AI governance committees into formalized AI accountability structures. Conference sessions also focused attention on key governance questions that organizations are grappling with, such as safety standards and addressing AI risks, and some highlighted organizations' real-world successes and frustrations in implementing and adapting their AI governance programs.
Our Takeaway: The emerging theme from the conference is that, while a majority of organizations are still in the early stages of developing and implementing AI governance structures, leaders recognize that the clock is ticking before these laws take effect.
3. Second Thoughts on Colorado’s AI Law as California Moves Forward
Less than a month after enacting the first state AI law in the U.S., lawmakers in Colorado are rethinking how to implement it. On June 13, 2024, Governor Jared Polis, along with Colorado Attorney General Phil Weiser and Senate Majority Leader Robert Rodriguez, co-signed a letter to "innovators, consumers, and all those interested in the AI space" outlining potential revisions to the state's AI law. The changes would include aligning the definition of high-risk AI systems with existing definitions, moving away from proactive disclosures by AI developers and deployers, and narrowing the scope to focus on AI developers rather than smaller companies that may use AI through third-party software. The announcement was not entirely surprising, as Governor Polis expressed reservations when he signed the bill into law last month. Any changes to the law will likely not occur until the 2025 legislative session unless the Governor calls a special session.
Meanwhile, lawmakers in California are working to advance a number of AI-related bills before the legislature adjourns on August 31, 2024. Among the bills being considered is SB-1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. SB-1047 targets the largest AI model developers (e.g., OpenAI and Meta) and would require them to offer reasonable assurances that their models could not cause “critical harms,” such as creating weapons of mass destruction or participating in cyberattacks against critical infrastructure. The bill has caused some controversy by also requiring that models include a “kill switch” to prevent critical harms. The bill passed the state senate in May 2024 and is expected to be voted on by the state assembly in the next two months.
Our Takeaway: While state lawmakers are pressing forward with bills to regulate AI, not every bill will translate into the perfect law. As we have seen in the past (e.g., with privacy laws), as new risks emerge or the technology changes, so will the regulatory frameworks.
4. Pope Francis weighs in on AI Ethics at the G7
We'll never know whether Pope Francis had strong thoughts about AI before a deepfake of him wearing a puffy Balenciaga coat went viral in 2023, but we do know his current thoughts on AI. Pope Francis broke convention simply by addressing a meeting of the G7 last week in Italy, then defied expectations again by devoting his remarks to the power and perils of AI amid so many other global conflicts and trends.
Pope Francis called for a ban on autonomous weapons, for treating AI as a tool and not an 'oracle', and for ensuring that humans always stay in the loop when making decisions about other humans. He specifically called out the use of AI for judicial recommendations, just a week after a US federal judge admitted to using ChatGPT to help decide a court case. This was not the first time the Vatican has weighed in on AI ethics. Last year, the Vatican released a joint publication on AI ethics with Santa Clara University that examined key theological and technological questions and identified several ethical principles to follow. The Pope followed up with a call for an international treaty to regulate highly capable AI.
Our Take: Many cultures and communities have long based their ethical principles on religious teachings, so perhaps it shouldn't be surprising that religion is getting involved in the AI ethics debate. Core questions about human dignity, autonomy, and expression may be challenged, and we expect other religious leaders to join these discussions. However, this also brings the risk of further 'culture war' conflicts that could impact the safe development of highly capable AI systems.
5. FAccT finding in Rio de Janeiro
Two weeks ago, experts in computer science, law, and social science gathered at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) to discuss challenges in AI and other algorithmic systems. We highlight a few key insights for each theme:
For fairness, a large focus was on how relying on popular metrics may not be enough. Several studies highlighted that these metrics are not robust and can be manipulated, and that different definitions of race and ethnicity can significantly affect them. Other studies highlighted the trade-off between being able to measure and mitigate bias and preserving individual data privacy, with some proposing technical solutions.
For accountability, several studies scrutinized the failures of NYC Local Law 144, which required audits of automated hiring tools for bias. While the law's major limitation was its narrow scope, a survey of auditors found that their role was poorly defined and that they had trouble getting sufficient data from organizations. Potential challenges with the upcoming EU AI Act were also discussed, including insufficient definitions of "human oversight" and of open source.
For transparency, some studies proposed new frameworks for documenting dataset curation, changes to models-as-a-service, and the activity of AI agents. Others focused on a broader form of transparency through participatory approaches (i.e., involving the people subject to a system in the development process).
This is only a fraction of the interesting talks we saw at the conference. Others included case studies of deployed AI systems across the globe, proposals for new harm taxonomies, and new developments in explainability techniques.
Key Takeaway: FAccT 2024 highlighted how popular approaches to measuring the bias of AI systems are flawed and can make systems appear deceptively fair. While the conference presented many interesting analyses and frameworks, it is unclear how some of them will translate into industry practice.
*********
As always, we welcome your feedback on content and how to improve this newsletter!
AI Responsibly,
- Trustible team