This post was originally published by Skyler Wharton at Hacker Noon
An introduction to the harm that ML systems cause and to the power imbalance that exists between ML system developers and ML system participants… and 10 concrete ways for machine learning practitioners to help build fairer ML systems.
Machine learning systems are increasingly used as tools of oppression. All too often, they’re used in high-stakes processes without participants’ consent and with no reasonable opportunity for participants to contest the system’s decisions — like when risk assessment systems are used by child welfare services to identify at-risk children; when a machine learning (or “ML”) model decides who sees which online ads for employment, housing, or credit opportunities; or when facial recognition systems are used to surveil neighborhoods where Black and Brown people live.
ML systems are deployed widely because they are viewed as “neutral” and “objective.”
In reality, though, machine learning systems reflect the beliefs and biases of those who design and develop them.
As a result, ML systems mirror and amplify those beliefs and biases, and they are at least as susceptible to making mistakes as human arbiters.
When ML systems are deployed at scale, they cause harm — especially when their decisions are wrong. This harm is disproportionately felt by members of marginalized communities. This is especially evident in this moment, when people protesting as part of the global movement for Black Lives are being tracked by police departments using facial recognition systems and when an ML system was recently used to determine students’ A-level grades in the U.K. after the tests were cancelled due to the pandemic, jeopardizing the futures of poorer students, many of whom are people of color and immigrants.
In this post, I’ll describe some examples of harm caused by machine learning systems. Then I’ll offer some concrete recommendations and resources that machine learning practitioners can use to develop fairer machine learning systems. I hope this post encourages other machine learning practitioners to start using and educating their peers about practices for developing fairer ML systems within their teams and companies.
How machine learning systems cause harm
In June 2020, The New York Times reported that Robert Williams, a Black man, had been arrested by the Detroit Police Department because a facial recognition system identified him as the person responsible for a recent shoplifting incident; however, a visual comparison of his face to the face in the photo clearly revealed that they weren’t the same person.
Nevertheless, Mr. Williams was arrested, interrogated, kept in custody for more than 24 hours, released on bail paid with his own money, and required to appear in court before his case was dismissed.
This “accident” significantly harmed Mr. Williams and his family:
- He felt humiliated and embarrassed. When interviewed by the New York Times about this incident, he said, “My mother doesn’t know about it. It’s not something I’m proud of … It’s humiliating.”
- It caused lasting trauma to him and his family. Had Mr. Williams resisted arrest — which would have been reasonable given that it was unjust — he could have been killed. As it was, the experience was harrowing. He and his wife now wonder whether they need to put their two young daughters into therapy.
- It put his job — and thus his ability to support himself and his family — at risk. He could have lost his job, even though his case was ultimately dismissed; companies have fired employees with impunity for far less. Fortunately, his boss was understanding of the situation, but his boss still advised him not to tell others at work.
- It nearly resulted in him having a permanent criminal record. When Mr. Williams went to court, his case was initially dismissed “without prejudice,” which meant that he could still be charged later. Only after the false positive received widespread media attention did the prosecutor apologize and offer to expunge his record and fingerprints.
The harms caused here by a facial recognition system used by a local police department are unacceptable.
Facebook’s ad delivery system is another example of a harmful machine learning system. In 2019, Dr. Piotr Sapieżyński, a research scientist at Northeastern University, and his collaborators conducted an experiment using Facebook’s own marketing tools to discover how employment ads are distributed on Facebook [5, 6]. Through this experiment, they discovered that Facebook’s ad delivery system, despite neutral targeting preferences, shows significantly different job ads to each user depending upon their gender and race. In other words, even if an advertiser specifies that they want their ad to be seen uniformly by all genders and all races, Facebook’s ad delivery system will, depending on the content of the ad, show the ad to a race- and/or gender-skewed audience.
Specifically, Dr. Sapieżyński and collaborators discovered that women are more likely to receive ads for supermarket, janitor, and preschool jobs, whereas men are more likely to receive ads for taxi, artificial intelligence, and lumber jobs. (The researchers acknowledge that the study was limited to binary genders due to restrictions in Facebook’s advertising tools.) They similarly discovered that Black people are more likely to receive ads for taxi, janitor, and restaurant jobs, whereas white people are more likely to receive ads for secretary, artificial intelligence, and lumber jobs.
Facebook’s ad delivery system is an example of a consumer-facing ML system that causes harm to those who participate in it:
- It perpetuates and amplifies gender- and race-based employment stereotypes for people who use Facebook. For example, women are shown ads for jobs that have historically been associated with “womanhood” (e.g., caregiving or cleaning jobs); seeing such ads reinforces their own — and also other genders’ — perceptions of jobs that women can or “should” do. This is also the case for the ads shown to Black people.
- It restricts Black users’ and woman users’ access to economic opportunity. The advertisements that Facebook shows to Black people and women are for noticeably lower-paying jobs. If Black people and women do not even know about available higher-paying jobs, then they are unable to apply for and be hired for them.
The harms caused by Facebook’s ad delivery system are also unacceptable.
In the case of both aforementioned algorithmic systems, the harm they cause goes deeper: they amplify existing systems of oppression, often in the name of “neutrality” and “objectivity.” In other words, the examples above are not isolated incidents; they contribute to long-standing patterns of harm.
For example, Black people, especially Black men and Black masculine people, have been systematically overpoliced, targeted, and murdered for the last four hundred years. This is undoubtedly still true, as evidenced by the recent police killings of George Floyd, Breonna Taylor, and Tony McDade, the killing of Ahmaud Arbery, and the recent police shooting of Jacob Blake.
Commercial facial recognition systems allow police departments to more easily and subtly target Black men and masculine people, including to target them at scale. A facial recognition system can identify more “criminals” in an hour than a hundred police officers could in a month, and it can do so less expensively. Thus, commercial facial recognition systems allow police departments to “mass produce” their practice of overpolicing, targeting, and murdering Black people.
Moreover, in 2018, computer science researchers Joy Buolamwini and Dr. Timnit Gebru showed that commercial facial recognition systems are significantly less accurate for darker-skinned people than they are for lighter-skinned people. Indeed, when used for surveillance, facial recognition systems identify the wrong person up to 98% of the time. As a result, when allowed to be used by police departments, commercial facial recognition systems cause harm not only by “scaling” police forces’ discriminatory practices but also by identifying the wrong person the majority of the time.
Facebook’s ad delivery system also amplifies a well-documented system of oppression: wealth inequality by race. In the United States, the median adjusted household income of white and Asian households is roughly 1.6 times that of Black and Hispanic households (~$71K vs. ~$43K), and the median net worth of white households is roughly 13 times that of Black households (~$144K vs. ~$11K). Thus, by consistently showing ads for only lower-paying jobs to the millions of Black people who use Facebook, Facebook is entrenching and widening the wealth gap between Black people and more affluent demographic groups (especially white people) in the United States. Facebook’s ad delivery system is likely similarly amplifying wealth inequities in other countries around the world.
How collecting labels for machine learning systems causes harm
Harm is not only caused by machine learning systems that have been deployed; harm is also caused while machine learning systems are being developed. That is, harm is often caused while labels are being collected for the purpose of training machine learning models.
For example, in February 2019, The Verge’s Casey Newton released a piece about the working conditions inside Cognizant, a vendor that Facebook hires to label and moderate Facebook content. His findings were shocking: Facebook was essentially running a digital sweatshop.
What he discovered:
- Employees were underpaid: In Phoenix, AZ, a moderator made $28,800/year (versus the $240,000/year total compensation of a full-time Facebook employee).
- Working conditions at Cognizant were abysmal: Employees were often fired after making just a few mistakes a week. Since a “mistake” occurred when two employees disagreed about how a piece of content should be moderated, resentment grew between employees. Fired employees often threatened to return to work and harm their old colleagues. Additionally, employees were micromanaged: they got two 15-minute breaks and one 30-minute lunch per day. Much of their break time was spent waiting in line for the bathroom, as often >500 people had to share six bathroom stalls.
- Employees’ mental health was damaged: Moderators spent most of their time reviewing graphically violent or hateful content, including animal abuse, child abuse, and murders. As a result of watching six hours per day of violent or hateful content, employees developed severe anxiety, often while still in training. After leaving the company, employees developed symptoms of PTSD. While employed, employees had access to only nine minutes of mental health support per day; after they left the company, they had no mental health support from Facebook or Cognizant.
Similar harms are caused by crowdsourcing platforms like Amazon Mechanical Turk, through which individuals, academic labs, or companies submit tasks for “crowdworkers” to complete:
- Employees are underpaid. Mechanical Turk and other similar platforms are premised on a large amount of unpaid labor: workers are not paid to find tasks, for tasks they start but can’t complete due to vague instructions, for tasks rejected by task authors for often arbitrary reasons, or for breaks. As a result, the median wage for a crowdworker on Mechanical Turk is approximately $2/hour. Workers who do not live in the United States, are women, and/or are disabled are likely to earn much less per hour.
- Working conditions are abysmal. Workers’ income fluctuates over time, so they can’t plan for themselves or their families for the long-term; workers don’t get healthcare or any other benefits; and workers have no legal protections.
- Employees’ mental health is damaged. Crowdworkers often struggle to find enough well-paying tasks, which causes stress and anxiety. For example, workers report waking up at 2 or 3am in order to get tasks that pay better.
Contrary to popular belief, many people who complete tasks on crowdsourcing platforms do so in order to earn the bulk of their income. Thus, people who work for private labeling companies like Cognizant and people who work for crowdsourcing platforms like Mechanical Turk have a similar goal: to complete labeling tasks in a safe and healthy work environment in exchange for fair wages.
Why these harms are happening
At this point, you might be asking yourself, “Why are these harms happening?” The answer is multifaceted: there are many reasons why deployed machine learning systems cause harm to their participants.
When ML systems are used
A big reason that machine learning systems cause harm is due to the contexts in which they’re used. That is, because machine learning systems are considered “neutral” and “objective,” they’re often used in high-stakes decision processes as a way to save money. High-stakes decision processes are inherently more likely to cause harm, since a mistake made during the decision process could have a significant negative impact on someone’s life.
At best, introducing a machine learning system into a high-stakes decision process does not affect the probability that the process causes harm; at worst, it increases the probability of harm, due to machine learning models’ tendency to amplify biases against marginalized groups, human complacency around auditing the model’s decisions (since they’re “neutral” and “objective”), and the fact that machine learning models’ decisions are often uninterpretable.
How ML systems are designed
Machine learning systems also cause harm because of how they’re designed. For example, when designing a system, engineers often do not account for the possibility that the system could make an incorrect decision; thus, machine learning systems often do not include a mechanism for participants to feasibly contest the decision or seek recourse.
Whose perspectives are centered when ML systems are designed
Another reason that ML systems cause harm is that the perspectives of people who are most likely to be harmed by them are not centered when the system is being designed.
Systems designed by people will reflect the beliefs and biases — both conscious and unconscious — of those people. Machine learning systems are overwhelmingly built by a very homogeneous group of people: white, Asian-American, or Asian heterosexual cisgender men who are between 20 and 50 years old, who are able-bodied and neurotypical, who are American and/or who live in the United States, and who have a traditional educational background, including a degree in computer science from one of ~50 elite universities. As a result, machine learning systems are biased towards the experiences of this narrow group of people.
Additionally, machine learning systems are often used in contexts that disproportionately involve historically marginalized groups (like predicting recidivism or surveilling “high crime” neighborhoods) or to determine access to resources that have long been unfairly denied to marginalized groups (like housing, employment opportunities, credit and loans, and healthcare). For example, since Black people have historically been denied fair access to healthcare, machine learning systems used in such contexts display similar patterns of discrimination, because they hinge on historical assumptions and data. As a result, unless deliberate action is taken to center the experiences of the groups whose outcomes ML systems arbitrate, machine learning systems lead to history repeating itself.
At the intersection of the aforementioned two points is a chilling realization: the people who design machine learning systems are rarely the people who are affected by machine learning systems. This is eerily similar to the fact that most police officers do not live in the cities where they work.
Lack of transparency around when ML systems are used
Harm is also caused by machine learning systems because it’s often unclear when an algorithm has been used to make a decision. This is because companies are not required to disclose when and how machine learning systems are used (much less get participants’ consent), even when the outcomes of those decisions affect human lives. If someone is unaware that they’ve been affected by an ML system, then they can’t attribute harm they may have experienced to it.
Additionally, even if a person knows or suspects that they’ve been harmed by a machine learning system, proving that they’ve been discriminated against is difficult or impossible, since the complete set of decisions made by the ML system is private and thus cannot be audited for discrimination. As a result, harm that machine learning systems cause often cannot be “proven.”
Lack of legal protection for ML system participants
Finally, machine learning systems cause harm because there is currently very little regulatory or legal oversight around when and how machine learning systems are used, so companies, governments, and other organizations can use them to discriminate against participants with impunity.
With respect to facial recognition, this is slowly changing: in 2019, San Francisco became the first major city to ban the use of facial recognition by local government agencies. Since then, several other cities have done the same, including Oakland, CA; Somerville, MA; and Boston, MA [16, 17].
Nevertheless, there are still hundreds of known instances of local government agencies using facial recognition, including at points of entry into the United States like borders and airports and by local police for unspecified purposes. Use of facial recognition systems in these contexts — especially given that the majority of their decisions are likely wrong — has real-world impacts, including harassment, unjustified imprisonment, and deportation.
With respect to other types of machine learning systems, there have been few legal advances.
Call to action
Given the contexts in which ML systems are used, the current lack of legal and regulatory oversight for such contexts, and the lack of societal power that people harmed by ML systems tend to have (due to their, e.g., race, gender, disability, citizenship, and/or wealth), ML system developers have massively more power than participants.
Image caption: There are huge power imbalances in machine learning system development: ML system developers have more power than ML system participants, and labeling task requesters have more power than labeling agents. Image description: an imbalanced scale, with the ML system developer and labeling task requester outweighing the ML system participant and labeling agent. [Image source: http://www.clker.com/clipart-scales-uneven.html]
There’s a similar power dynamic between people who design labeling tasks and people who complete labeling tasks: labeling task requesters have more power than labeling agents.
Here, ML system developer is defined as anyone who is involved in the design, development, and deployment of machine learning systems: machine learning engineers and data scientists, as well as software engineers in other technical disciplines, product managers, engineering managers, UX researchers, UX writers, lawyers, mid-level managers, and C-suite executives. All of these roles are included in order to emphasize that even if you don’t work directly on a machine learning system, if you work at a company or organization that uses machine learning systems, then you have the power to effect change on when and how machine learning is used at your company.
Let me be clear: individual action is not enough — we desperately need well-designed legislation to guide when and how ML systems can be used. Importantly, there should be some contexts in which ML systems cannot be used, no matter how “accurate” they are, because the probability of misuse and mistakes is too great — like police departments using facial recognition systems.
Unfortunately, we do not have necessary legislation and regulation in place yet. In the meantime, as ML system developers, we should intentionally consider the ML systems that we, our teams, or our companies own and utilize.
How to build fairer machine learning systems
If you are a machine learning system developer — especially if you are a machine learning practitioner, like an ML engineer or data scientist — here are 10 ways you can help build machine learning systems that are fairer:
When designing a new ML system or evaluating an existing ML system, ask yourself and your team the following questions about the context in which the system is or will be deployed:
- What could go wrong when this ML system is deployed?
- When something goes wrong, who is harmed?
- How likely is it that something will go wrong?
- Does the harm disproportionately fall on marginalized groups?
Use your answers to these questions to evaluate how to proceed. For example, if possible, proactively engineer solutions that prevent harms from occurring (e.g., add safeguards such as human intervention and mechanisms for participants to contest system decisions, and inform participants that a machine learning algorithm is being used). Alternatively, if the likelihood and scale of harm are too high, do not deploy the system. Instead, consider pursuing a solution that does not depend on machine learning or that uses machine learning in a less risky way. Deploying a biased machine learning system can cause real-world harm to system participants as well as reputational damage to your company [21, 22, 23].
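To make the safeguard idea concrete, here is a minimal, hypothetical sketch of one such pattern: act on the model’s decision only when its confidence is high, route everything else to a human reviewer, and tell the participant in either case that an algorithm was involved and how to contest the result. The threshold, field names, and routing logic below are illustrative assumptions, not a prescribed design.

```python
# Hypothetical sketch of a human-in-the-loop safeguard; the threshold and
# field names are illustrative only.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str       # e.g., "approve", "deny", or "needs_human_review"
    confidence: float  # model confidence in [0, 1]
    notice: str        # shown to the participant, who can contest the decision


def route_decision(model_outcome: str, confidence: float,
                   review_threshold: float = 0.9) -> Decision:
    """Act on the model's outcome only when confidence is high; otherwise
    defer to a human reviewer. Every decision discloses that an algorithm
    was involved and how to request human review."""
    if confidence < review_threshold:
        return Decision(
            outcome="needs_human_review",
            confidence=confidence,
            notice="A person will review this case before any action is taken.",
        )
    return Decision(
        outcome=model_outcome,
        confidence=confidence,
        notice="This decision used an automated system; you may request a human review.",
    )


# Example: a low-confidence prediction is deferred; a high-confidence one is not.
print(route_decision("deny", confidence=0.62))
print(route_decision("approve", confidence=0.97))
```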
Utilize best practices for developing fairer ML systems. Machine learning fairness researchers have been designing and testing best practices for several years now. For example, one best practice is, when releasing a dataset for public or internal use, to simultaneously release a datasheet: a short document that shares the information that consumers of the dataset need in order to make informed decisions about using it (e.g., the mechanisms or procedures used to collect the data, whether an ethical review process was conducted, and whether or not the dataset relates to people).
Similarly, when releasing a trained model for public or internal use, simultaneously release a model card: a short document that shares information about the model, such as evaluation results (ideally disaggregated across different demographic groups and communities), intended usages, usages to avoid, and insight into the model training process.
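As a rough illustration, here is what some of that model card information might look like if captured in code. The field names and values below are hypothetical and simplified; they are not the schema defined in the Model Cards paper.

```python
# Hypothetical, simplified model card contents; field names and values are
# illustrative only and do not follow any official schema.
model_card = {
    "model_name": "example-job-ad-ranker",  # hypothetical model
    "intended_uses": ["Ranking job ads for relevance, with human oversight"],
    "out_of_scope_uses": ["Any fully automated employment decision"],
    "training_data": "Internal ad-engagement logs, 2018-2019 (see accompanying datasheet)",
    "evaluation": {
        "overall_auc": 0.88,
        # Disaggregated results make performance and exposure gaps visible.
        "auc_by_gender": {"women": 0.84, "men": 0.90, "nonbinary": 0.81},
    },
    "ethical_considerations": "Engagement logs may encode historical occupational stereotypes.",
}
```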
Finally, consider implementing a company-wide process for internal algorithmic auditing, like that which Deb Raji, Andrew Smart, and their collaborators proposed in their 2020 paper Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing.
Work with your company or organization to develop partnerships with advocacy organizations that represent groups of people that machine learning systems tend to marginalize, in order to responsibly engage marginalized communities as stakeholders. Examples of such organizations include Color Of Change and the NAACP. Then, while developing new machine learning systems or evaluating existing machine learning systems, seek and incorporate their feedback.
Hire machine learning engineers and data scientists from underrepresented backgrounds, especially Black people, Indigenous people, Latinx people, disabled people, transgender and nonbinary people, formerly incarcerated people, and people from countries that are underrepresented in technology (e.g., countries in Africa, Southeast Asia, and South America). Note that this will require rethinking how talent is discovered and trained — consider recruiting from historically Black colleges and universities (HBCUs) in the U.S. and from coding and data science bootcamps, or starting an internal program like Slack’s Next Chapter.
On a related note, work with your company to support organizations that foster talent from underrepresented backgrounds, like AI4ALL, Black Girls Code, Code2040, NCWIT, TECHNOLOchicas, TransTech, and Out for Undergrad. Organizations like these are critical for increasing the number of people from underrepresented backgrounds in technology jobs, including in ML/AI jobs, and all of them have a proven track record of success. Additionally, consider supporting organizations like these with your own money and time.
Work with your company or organization to sign the Safe Face Pledge, an opportunity for organizations to make public commitments toward mitigating the abuse of facial analysis technology. The pledge was jointly drafted by the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown Law, and it has already been signed by many leading ethics and privacy experts.
Learn more about the ways in which machine learning systems cause harm. For example, here are seven recommended resources to continue learning:
- [Book] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil (2016)
- [Book] Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Noble (2018)
- [Book] Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard (2018)
- [Book] Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks (2019)
- [Book] Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin (2019)
- [Book] Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary L. Gray and Siddharth Suri (2019)
- [Film] Coded Bias (2020)
Additionally, you can learn more about harms caused by ML systems by reading the work of journalists and researchers who are uncovering biases in machine learning systems. In addition to the researchers and journalists I’ve already named in this essay (e.g., Dr. Piotr Sapieżyński, Casey Newton, Joy Buolamwini, Dr. Timnit Gebru, Deb Raji, Andrew Smart), some examples include Julia Angwin (and anything written by The Markup), Khari Johnson, Moira Weigel, Lauren Kirchner, and anything written by Upturn. Their work serves as a set of important case studies in how not to design machine learning systems, which is valuable for ML practitioners who are aiming to develop fair and equitable ML systems.
Learn about ways in which existing machine learning systems have been improved in order to cause less harm. For example, IBM has worked to improve the performance of its commercial facial recognition system with respect to racial and gender bias (direct link), Google has worked to reduce gender bias in Google Translate (direct link), and Jigsaw (within Google) has worked to change Perspective (its public API for hate speech detection) so that it less often classifies phrases containing frequently targeted groups (e.g., Muslims, women, queer people) as hate speech (direct link).
Conduct an audit of a machine learning system for disparate impact. Disparate impact occurs when, even though a policy or system is neutral, one group of people is adversely affected more than another. Facebook’s ad delivery system is an example of a system causing disparate impact.
For example, use Project Lighthouse, a methodology that Airbnb released earlier this year that uses anonymized demographic data to measure user experience discrepancies that may be due to discrimination or bias, or ArthurAI, an ML monitoring framework that allows you to also monitor model bias. (Full disclosure: I work at Airbnb.)
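For a sense of what such an audit measures, here is a minimal sketch of a disparate-impact check based on the common “four-fifths rule” heuristic. The column names, groups, and data are hypothetical, and a real audit would need far more care: appropriate reference groups, statistical uncertainty, intersectional breakdowns, and legal review.

```python
# Minimal, hypothetical disparate-impact check; group labels, columns, and
# data are illustrative only.
import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of favorable outcomes (outcome == 1) for each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()


def disparate_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str,
                            reference_group: str) -> pd.Series:
    """Each group's selection rate divided by the reference group's rate.
    Ratios below ~0.8 are commonly flagged for further review."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates / rates[reference_group]


# Hypothetical audit data: one row per decision made by the ML system.
decisions = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "favorable": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(disparate_impact_ratios(decisions, "group", "favorable", reference_group="A"))
# Group A's ratio is 1.0; group B's is ~0.33, well below the 0.8 rule of thumb.
```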
Alternatively, hire an algorithmic consulting firm to conduct an audit of a machine learning system that your team or company owns, like O’Neil Risk Consulting & Algorithmic Auditing or the Algorithmic Justice League.
When hiring third-party vendors or using crowdsourcing platforms for machine learning labeling tasks, be critical of who you choose to support. Inquire about the working conditions of the people who will be labeling for you. Additionally, if possible, make an onsite visit to the vendor to gauge working conditions for yourself. What is their hourly pay? Do they have healthcare and other benefits? Are they full-time employees or contractors? Do they expose their workforce to graphically violent or hateful content? Are there opportunities for career growth and advancement within the company?
Give a presentation to your team or company about the harms that machine learning systems cause and how to mitigate them. The more people who understand the harms that machine learning systems cause and the power imbalance that currently exists between ML system developers and ML system participants, the more likely it is that we can effect change on our teams and in our companies.
Finally, the bonus #11 in this list is, if you are eligible to do so in the United States, VOTE. There is so much at stake in this upcoming election, including the rights of BIPOC people, immigrants, women, LGBTQ people, and disabled people as well as — quite literally — the future of our democracy. If you are not registered to vote, please do so now: Register to vote. If you are registered to vote but have not requested your absentee or mail-in ballot, please do so now: Request your absentee ballot. Even though Joe Biden is far from the perfect candidate, we need to elect him and Kamala Harris; this country, the people in it, and so many people around the world cannot survive another four years of a Trump presidency.
Machine learning systems are incredibly powerful tools; unfortunately though, they can be either agents of empowerment or agents of harm. As machine learning practitioners, we have a responsibility to recognize the harm that systems we build cause and then act accordingly. Together, we can work toward a world in which machine learning systems are used responsibly, do not reinforce existing systemic biases, and uplift and empower people from marginalized communities.
This piece was inspired in part by Participatory Approaches to Machine Learning, a workshop at the 2020 International Conference on Machine Learning (ICML) that I had the opportunity to attend in July. I would like to deeply thank the organizers of this event for calling attention to the power imbalance between ML system developers and ML system participants and for creating a space to discuss it: Angela Zhou, David Madras, Inioluwa Deborah Raji, Bogdan Kulynych, Smitha Milli, and Richard Zemel.
[1] Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil. Published 2016.
[2] NYPD used facial recognition to track down Black Lives Matter activist. The Verge. August 18, 2020.
[3] An Algorithm Determined UK Students’ Grades. Chaos Ensued. Wired. August 15, 2020.
[4] Wrongfully Accused by an Algorithm. The New York Times. June 24, 2020.
[5] Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Biased Outcomes. Muhammad Ali, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. CSCW 2019.
[6] Turning the tables on Facebook: How we audit Facebook using their own marketing tools. Piotr Sapiezynski, Muhammad Ali, Aleksandra Korolova, Alan Mislove, Aaron Rieke, Miranda Bogen, and Avijit Ghosh. Talk given at the PAML Workshop at ICML 2020.
[7] Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Joy Buolamwini and Timnit Gebru. ACM FAT* 2018.
[8] Facial-recognition software inaccurate in 98% of cases, report finds. CNET. May 13, 2018.
[9] On Views of Race and Inequality, Blacks and Whites Are Worlds Apart: Demographic trends and economic well-being. Pew Research Center. June 27, 2016.
[10] The Trauma Floor: The secret lives of Facebook moderators in America. The Verge. February 25, 2019.
[11] The Internet Is Enabling a New Kind of Poorly Paid Hell. The Atlantic. January 23, 2018.
[12] Worker Demographics and Earnings on Amazon Mechanical Turk: An Exploratory Analysis. Kotaro Hara, Abigail Adams, Kristy Milland, Saiph Savage, Benjamin V. Hanrahan, Jeffrey P. Bigham, and Chris Callison-Burch. CHI Late Breaking Work 2019.
[13] Millions of black people affected by racial bias in health-care algorithms. Nature. October 24, 2019.
[14] Most Police Don’t Live In The Cities They Serve. FiveThirtyEight. August 20, 2014.
[15] San Francisco’s facial recognition technology ban, explained. Vox. May 14, 2019.
[16] Beyond San Francisco, more cities are saying no to facial recognition. CNN. July 17, 2019.
[17] Boston is second-largest US city to ban facial recognition. Smart Cities Dive. July 6, 2020.
[18] Ban Facial Recognition: Map. Accessed August 30, 2020.
[19] Defending Black Lives Means Banning Facial Recognition. Wired. July 10, 2020.
[20] Credit for the framing goes to Dr. Cathy O’Neil, of O’Neil Risk Consulting & Algorithmic Auditing.
[21] Amazon reportedly scraps internal AI recruiting tool that was biased against women. The Verge. October 10, 2018.
[22] Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech. The Verge. January 12, 2018.
[23] Facebook’s ad-serving algorithm discriminates by gender and race. MIT Technology Review. April 5, 2019.
[24] Datasheets for Datasets. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. ArXiv preprint 2018.
[25] Model Cards for Model Reporting. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. ACM FAT* 2019.