Last week was the busiest week in AI policy in the history of AI policy. President Biden signed a one-hundred-plus page executive order setting standards for the government’s approach to AI. The G7 adopted “Guiding Principles for Organizations Developing Advanced AI Systems.” World leaders, tech companies, and representatives of civil society and academia met at Britain’s AI Safety Summit. Spurred on by a request from Vice President Harris, a group of ten philanthropies launched a $200 million fund to ensure AI is developed in the public interest.
Before anyone excoriates me for being a contrarian or a nihilist (two tendencies in the Washington policy blob that I hate): these are all good things. The fact that the global community is thinking proactively about AI governance is necessary. That the U.S. government is setting clear mandates for government agencies in the field and encouraging philanthropic investment and research in the public interest is encouraging. And we need executive action on this, given that Congress is accomplishing so little at the moment.
But...
I am also underwhelmed and frustrated. And I am not the only one. In conversations with other advocates for women and marginalized communities over the past several days, we’ve felt a bit left behind. Yes, the Executive Order and the multilateral conversations that happened this week are the beginning of the policymaking process, not the end, but we were all hoping to feel a bit more seen in these actions.
So, in hopes that these policies will be improved as we go forward, I’m sharing my biggest frustrations with the past week.
3. The Executive Order highlights the fierce tension between economic and technological advancement and human rights. As The Atlantic’s Karen Hao points out, the Biden Executive Order is a sprawling and at times contradictory document:
“One section of the order adopts wholesale the talking points of a handful of influential AI companies such as OpenAI and Google, while others center the concerns of workers, vulnerable and underserved communities, and civil-rights groups most critical of Big Tech.”
While the administration did a pretty decent job of gathering input from civil society in addition to Big Tech as it formulated this plan, the focus of this EO, and of most of the leadup to it, has been on industry and the economic potential the United States can derive from leading it. The EO wants to make sure America can continue to innovate and lead in AI, and reap the benefits that come with it. It's true that the Order emphasizes that "Artificial Intelligence policies must be consistent with [the Biden] Administration's dedication to advancing equity and civil rights" and that "Americans' privacy and civil liberties must be protected as AI continues advancing," but...
2. How can we protect Americans’ rights when we aren’t explicit about what or who needs protecting? As I wrote on LinkedIn this week, I was extremely frustrated to see AI’s inherent misogyny and racism all but whitewashed from the text of the Executive Order. We don’t see women, gender, or race cited in ways that acknowledge the significant and disproportionate harms that AI technologies pose to marginalized groups. Instead, the Order falls back on catchall terms like discrimination and civil rights abuses.
From a policymaking perspective, I understand it; adding gender and race to the mix would make the Order less palatable to some on the Right who might consider it "woke." Generic terms can also allow future policymakers a bit more latitude beyond the political moment of a specific policy's passage. But from the perspective of a victim of deepfake pornography, I'm upset. We know that the most tangible, quantifiable harms AI causes today are non-consensual deepfake intimate image creation and distribution and racial biases that lead to discrimination in sectors like policing, and yet we chose not to call these harms out explicitly. Not doing so means that the broad-based term "discrimination" peppered throughout the EO can be co-opted by any group in the future (see, for instance, how many conservatives are now weaponizing "censorship"), while the concerns of women and other marginalized folks are shunted aside again.
"Non-consensual intimate image abuse" is mentioned twice. One mention directs the Secretary of Commerce to submit a report "identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for...preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals." While this sounds like a worthwhile report, it won't do much to help current victims of deepfake porn. The President unfortunately can't criminalize the distribution of deepfake porn through an EO, but I wish we had seen some acknowledgement of that limitation and a promise to work with Congress to get this done for the many women who have been affected by this technology.
In her address in London this week, Vice President Harris tried to rectify that and some of the other omissions in the EO. She said:
“Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential. When a woman is threatened by an abusive partner with explicit deepfake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of bias? Is that not existential for his family? And when people around the world cannot discern fact from fiction because of a flood of AI enabled myths and disinformation. I ask, is that not existential for democracy?”
I’m glad these threats are being emphasized somewhere (even if I doubt the Vice President would have delivered the same speech Stateside), but I had higher hopes.
1. Rishi Sunak is an Elon Musk fanboy. UK Prime Minister Rishi Sunak decided it was a good look for his AI Safety Summit to interview Elon Musk on X, the platform formerly known as Twitter (may it rest in peace). Nothing says “safety” like platforming a misogynist, racist, conspiracy theorist! Ick.
To be more constructive: I do not believe that any democratic world leader should be engaging with Elon Musk at this juncture. There are plenty of other AI moguls whose views are less antithetical to democracy and who are actually concerned about the real safety risks of these technologies. Elon Musk ain’t it.
Maybe I sound like a broken record to those who have been reading and listening to me for a while. I promise I'll move on to something else when and if we finally start to take women's and marginalized communities' concerns about Big Tech seriously. But something tells me I'll be repeating myself for a while.
Other good links on AI developments:
Watch today at 2pm ET: the House Oversight Committee, Subcommittee on Cybersecurity, Information Technology, and Government Innovation will hold a hearing on “Advances in Deepfake Technology.”
“White House AI Executive Order Takes On Complexity of Content Integrity Issues”
“When Musk met Sunak: the prime minister was more starry-eyed than a SpaceX telescope”
“Will the White House AI Executive Order deliver on its promises?”