This post was originally published by Khari Johnson at VentureBeat
International digital and human rights organization Access Now has resigned in protest from the Partnership on AI (PAI), citing a lack of meaningful change on the part of businesses associated with the group and their failure to incorporate positions held by civil society organizations. PAI was formed in September 2016 by a consortium of Big Tech companies and corporate giants including Apple, Amazon, Facebook, Google, IBM, and Microsoft. PAI has grown to include more than 100 member organizations, over half of which are now nonprofit, civic, or human rights-focused groups like Data & Society and Human Rights Watch.
“We have learned from the conversations with our peers, and PAI has afforded us the chance to contribute to the larger discussion on artificial intelligence in a new forum,” Access Now states in a letter published Tuesday. “However, we have found that there is an increasingly smaller role for civil society to play within PAI. We joined PAI hoping it would be a helpful forum for civil society to make an impact on corporate behavior and to establish evidence-based policies and best practices that will ensure that the use of AI systems is for the benefit of people and society. While we support dialogue between stakeholders, we did not find that PAI influenced or changed the attitude of member companies or encouraged them to respond to or consult with civil society on a systematic basis.”
Access Now, which had joined PAI about a year ago, said it was also frustrated by the lack of support for its proposed ban on facial recognition and other biometric technology that can be used for mass surveillance. Earlier this year, the Partnership on AI produced an educational resource on facial recognition for policymakers and the public, but PAI has taken no position on whether the technology should be used. In the letter, Access Now leaders concluded that PAI is unlikely to change its stance and support a ban on facial recognition.
“The events of this year, from the public health crisis to the global reckoning on racial justice, have only underscored the urgency of addressing the risks of these technologies in a meaningful way,” the letter reads. “As more government authorities around the world are open to imposing outright bans on technologies like facial recognition, we want to continue to focus our efforts where they will be most impactful to achieve our priorities.”
Government use of surveillance technology has been on the rise in democratic and authoritarian nations alike in recent years. The 2020 Freedom of the Net report released today by Freedom House found a year-over-year decline in internet freedom in many parts of the world as governments enable increased surveillance under the cover of COVID-19.
The American Civil Liberties Union (ACLU), Amnesty International, and Electronic Frontier Foundation (EFF) — all members of PAI — have led or supported facial recognition bans in major cities, state legislatures, and the U.S. Congress. Conversely, PAI members like Amazon and Microsoft are some of the best-known facial recognition vendors in the world. During the largest protests in U.S. history in June, Amazon and Microsoft announced temporary moratoriums on facial recognition sales to police in the United States. Reform efforts addressing the privacy, racial bias, and free speech issues raised by facial recognition may be on the agenda for the next Congress.
More than two years after its founding, PAI began to engage with specific policy and AI ethics issues, such as advocating that governments create special visas for AI researchers. PAI also opposed the use of algorithms in pretrial risk assessments. This includes algorithms used by the Bureau of Prisons earlier this year to decide which prisoners should be released early to reduce overcrowding during the pandemic. PAI makes the names of its members public, but it rarely divulges which specific members contributed to policy position papers produced by PAI staff.
In response to the Access Now resignation letter, PAI executive director Terah Lyons told VentureBeat that PAI works closely with tech companies to adjust their behavior, work that will hopefully come to fruition over the course of the next year. She noted that engaging in a multi-stakeholder process and trying to reach consensus among diverse voices is a challenge that can’t be rushed.
“It’s definitely been a learning journey for us,” she said. “It’s also something that takes a lot of time to accomplish, to move industry practice in meaningful ways. And because we have just had program work for two years as a pretty young nonprofit organization, I anticipate it will still take us some time to really meaningfully move the needle in that respect. But I think the good news is that we’ve laid a lot of important groundwork and we’re already starting to see evidence of that paying dividends and some of the incremental choices that our corporate members have made as a result of their engagement.”
Examples of the incremental change Lyons referred to include the participation of companies like Facebook and Microsoft in the deepfake detection challenge PAI oversaw. She also pointed to specific examples from PAI’s work in fairness, accountability, and transparency, although she declined to share the names of specific companies or organizations that had taken part.
“A lot of the work we did with them on that issue set specifically I think really influenced how they thought about and internally addressed the challenges they face related to those questions, in addition to some of the other companies involved,” she said.
Lyons said PAI chose not to take a stand on facial recognition because the nonprofit assesses each topic on a case-by-case basis to determine where it can have maximum impact.
“It’s not necessarily the case that on every single question we are going to be in the best position to take a stance. But we do try to do our best to make sure that we’re providing some sort of service and value in support of making sure these debates as they unfold in public or private settings are as well informed and evidence-based as possible, and that we are equipping and empowering all of our organizations to really be in direct conversation with one another over these tough issues,” she said.
In other AI ethics and policy issues, Lyons said PAI has not produced any research or formed a steering committee to address the role AI plays in the concentration of power by tech companies. Last week, an antitrust subcommittee in the House of Representatives concluded a 16-month investigation with a lengthy report that concluded Amazon, Apple, Facebook, and Google are monopolies. The report states that power consolidated by Big Tech companies threatens competitive markets and democracy. It also concludes that Big Tech companies have relied on AI and the acquisition of startups in AI and emerging fields to grow their competitive advantage.
PAI has created a shared prosperity initiative to look at more equally distributing power and wealth so tech giants’ dominance does not continue to expand unchecked. The shared prosperity group includes a number of noted AI ethics researchers, as detailed in a blog post last month.