Shirley Chisholm, a trailblazer who ignited change in the face of adversity, embodies resilience and courage. Confronting a system built against her, she dared to be the flame that burned brightly, lighting the path for generations. As the first Black woman elected to Congress, she demanded equality and justice. Yet despite Chisholm’s groundbreaking achievements for African American women in politics, modern image-recognition software misgenders her. Google’s Vision AI identifies a portrait of Chisholm as male with 71% confidence, and Amazon’s Rekognition system reports that Chisholm “appears to be male” with 86.2% confidence. Perhaps more disturbing still, Shirley Chisholm is far from the only one to face such misclassification by AI systems. Michelle Obama, Oprah Winfrey, Serena Williams, Sojourner Truth, and Ida B. Wells – all incredibly influential women – have been subject to the same misidentification [1].
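As a concrete illustration of how such systems report their judgments, here is a minimal sketch assuming Amazon’s Rekognition service via the boto3 Python library. The file name is hypothetical, and this is illustrative code, not the code used in the audits cited above.

```python
# A minimal sketch of querying a commercial face-analysis API that
# returns a gender label with a confidence score. Assumes boto3 is
# installed and AWS credentials are configured; "portrait.jpg" is a
# hypothetical input file.
import boto3

rekognition = boto3.client("rekognition")

with open("portrait.jpg", "rb") as f:
    response = rekognition.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request demographic attributes, incl. gender
    )

for face in response["FaceDetails"]:
    gender = face["Gender"]
    # Prints something like "Male (86.2% confidence)" -- the kind of
    # output that misclassified Chisholm's portrait.
    print(f'{gender["Value"]} ({gender["Confidence"]:.1f}% confidence)')
```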
These facts invite us to look for patterns in these AI failures. Women – particularly African American women – are disproportionately likely to be misclassified by AI systems. This misclassification points to the biases embedded deep within the algorithms that power these systems. The same technologies that have the potential to advance society are inadvertently perpetuating historical stereotypes and erasing the identities of those who have fought against such categorizations. Misgendering by AI systems serves as a poignant reminder that even the most extraordinary achievements can be overshadowed by the limitations of technology. We must examine the reasons behind these errors and seek ways to address and correct them.
AI has become a transformative force across sectors, influencing how we live, work, and interact with technology. Adaptive education platforms such as Khan Academy, Edmentum, and Duolingo use AI to personalize learning paths. In healthcare, the technology has been introduced for disease diagnosis and drug discovery. Federal agencies such as the FBI have begun using artificial intelligence to identify suspects in investigations, and beyond these domains, AI may be used to determine who gets hired and who does not. As AI becomes an integral part of everyday life, its pervasive influence draws attention to the critical importance of fostering inclusivity in AI development and deployment. The ubiquity of AI technologies, from virtual assistants to diagnostic algorithms, means these systems have a profound impact on communities worldwide.
AI algorithms, if not carefully designed, may exhibit biases in decision-making processes. This could result in unfair discrimination against individuals from particular racial or ethnic groups, as demonstrated by the misgendering of African American women mentioned earlier. If left unregulated, AI may perpetuate racism, sexism, ableism, and other forms of discrimination.
II. Unveiling Bias in Data: How AI Systems Learn from Data
On the NPR program Fresh Air, Cade Metz, a technology reporter for The New York Times, explains how AI systems generate answers: “What this technology has done, the way it is built is that researchers, scientists at places like Google or the San Francisco AI lab OpenAI will take vast amounts of text from the internet, and they’ll feed it into these systems. And it analyzes all that text. And it looks for patterns in the text. And in identifying those patterns, it then learns to generate new language on its own” [10]. In essence, Metz’s explanation describes the basic principles of machine learning in the context of AI systems.
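To make Metz’s description concrete, here is a toy sketch of the pattern-finding idea: count which word follows which in a small corpus, then generate text from those counts. Real systems use neural networks trained on vastly more data, but the principle – learn patterns, then generate – is the same.

```python
# A toy illustration of learning patterns from text and generating
# new language from them. The "corpus" is fabricated for illustration.
import random
from collections import defaultdict

corpus = "the doctor said the nurse said the doctor is here".split()

# Learn: record, for each word, the words observed to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# Generate: repeatedly sample a plausible next word from the patterns.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(follows.get(word, corpus))
    output.append(word)
print(" ".join(output))
```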
Let’s investigate how biases appear in AI by breaking down the learning process of machine learning algorithms. Supervised, unsupervised, and reinforcement learning are different types of machine learning, yet they all follow the same basic steps to produce an output [8]. The first step in most machine learning algorithms is data collection. It is the programmer’s responsibility to feed the algorithm diverse, representative data sets so that patterns are not misconstrued. Using the data the programmer provides, the machine learning algorithm learns to identify commonalities.
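As a minimal sketch of those steps, the snippet below uses scikit-learn with fabricated data: the programmer collects labeled examples, the algorithm learns patterns from a training portion, and the learned patterns are applied to unseen inputs.

```python
# A minimal supervised-learning sketch; the data is fabricated.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: data collection -- the programmer supplies labeled examples.
X = [[0.2, 0.1], [0.9, 0.8], [0.3, 0.2], [0.8, 0.9]]  # feature vectors
y = [0, 1, 0, 1]                                       # labels

# Step 2: the algorithm learns patterns from the training portion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y
)
model = LogisticRegression().fit(X_train, y_train)

# Step 3: the learned patterns are applied to unseen data.
print(model.predict(X_test))
```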
For instance, if an image-recognition program is intended to identify melanoma, the programmer may provide a data set with images of both cancerous and benign moles. The algorithm will learn to distinguish the images based on patterns such as border irregularity, color, and asymmetry. This step is known as feature extraction: the algorithm reduces each example to the variables that best separate one class from another. However, when training data contains biases – particularly in the case of women and people of color – the algorithm can amplify stereotypes.
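A hedged sketch of what feature extraction might look like for the mole example: each image is reduced to a handful of numbers (asymmetry, border irregularity, average color) that a classifier can compare. The measures below are simplifications for illustration, not clinical algorithms.

```python
# Illustrative feature extraction: reduce an image to a short vector.
import numpy as np

def extract_features(image: np.ndarray) -> list[float]:
    """image: H x W x 3 RGB array of a lesion photo (hypothetical input)."""
    gray = image.mean(axis=2)
    # Asymmetry: how much the left half differs from the mirrored right half.
    left = gray[:, : gray.shape[1] // 2]
    right = gray[:, gray.shape[1] // 2 :]
    asymmetry = float(np.abs(left - right[:, ::-1][:, : left.shape[1]]).mean())
    # Border irregularity (simplified): variation in brightness gradients.
    border = float(np.abs(np.gradient(gray)[0]).std())
    # Color: mean of each RGB channel.
    color = [float(c) for c in image.reshape(-1, 3).mean(axis=0)]
    return [asymmetry, border, *color]

# Each image becomes a feature vector the algorithm can learn from.
example = np.random.randint(0, 255, size=(64, 64, 3)).astype(float)
print(extract_features(example))
```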
III. Biases in Training Data
Biases embedded in historical data can perpetuate harmful stereotypes when fed into AI systems. Historical data, especially from before abolition, describes African Americans in ways we would not use today; when such data is loaded into an algorithm, it can produce harmful conclusions. For instance, in a study involving researchers at the University of Washington, AI-powered robots reproduced rampant stereotypes about race and ethnicity. A Washington Post report on the study summarizes, “When researchers asked robots to identify blocks as ‘homemakers,’ Black and Latina women were more commonly selected than White men” [5]. Black and Latina women were more likely to be perceived as homemakers because that is how they have stereotypically been depicted in the media. The robots’ conclusions not only show the effects of gender roles, implicit bias, and stereotypes about women and girls; they should also push us to change the AI systems we develop by feeding them more diverse data sets.
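A toy demonstration makes the mechanism plain. The sentences below are fabricated to mimic a biased corpus in which “homemaker” co-occurs mostly with women; a simple classifier trained on them “learns” the stereotype and applies it to new inputs.

```python
# A toy demonstration of skewed training data producing skewed
# predictions; the corpus and labels are fabricated for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "she cooked and cleaned at home",
    "she stayed home with the children",
    "she kept the house",
    "he worked at the office downtown",
    "he led the engineering meeting",
    "he managed the company budget",
]
labels = ["homemaker", "homemaker", "homemaker",
          "professional", "professional", "professional"]

vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(texts)  # learn vocabulary, count words
model = MultinomialNB().fit(X_train, labels)

# The model predicts the role from the pronoun alone -- the stereotype
# in the data has become a "pattern."
print(model.predict(vectorizer.transform(["she went to work",
                                          "he went to work"])))
```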
IV. The Human Element in AI
The lack of diversity in AI development teams is a significant and persistent issue with far-reaching implications. The United Negro College Fund (UNCF) remarks, “Even though many Black women have made significant strides within technology, Black women are significantly underrepresented across the computer sciences spectrum—making up only 3% of the tech workforce. And even fewer Black women have leadership roles in Silicon Valley (less than .5%)” [9]. If African American women are not the ones developing these programs, then who is? Nearly two-thirds of computer scientists in the United States are white (64%), and an overwhelming majority are male (79%) [11]. It is little wonder, then, that AI systems show a lack of diversity in their data sets: the algorithms reflect the lack of diversity in the workplace.
When Joy Buolamwini, a computer scientist and self-proclaimed poet of code, was working at the MIT Media Lab, she found that facial recognition software was unable to detect her face – that is, until she put on a white mask [1]. Upon further investigation, she found that the software failed because the people who programmed the algorithm had not exposed it to a broad range of skin tones and facial structures.
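One practical response, and the core idea behind Buolamwini’s Gender Shades audit [13], is to measure a model’s accuracy separately for each demographic subgroup rather than reporting a single overall number. Here is a minimal sketch of that disaggregated evaluation, with fabricated records and a stand-in detect_face function:

```python
# A minimal sketch of a disaggregated audit: per-subgroup accuracy
# instead of one aggregate score. "records" and "detect_face" are
# hypothetical stand-ins for a labeled test set and a real detector.
from collections import defaultdict

def audit(records, detect_face):
    """records: list of (image, subgroup_label) pairs."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, subgroup in records:
        totals[subgroup] += 1
        if detect_face(image):  # did the detector find a face at all?
            hits[subgroup] += 1
    return {group: hits[group] / totals[group] for group in totals}

# A single aggregate score can hide exactly the failure Buolamwini hit:
# e.g. audit(test_set, my_detector) might return
# {"lighter-skinned": 0.99, "darker-skinned": 0.65}
```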
“We have a technology created and designed by one demographic, that is only mostly effective on that one demographic.”
– Alexandria Ocasio-Cortez, 2019 Congressional Hearing [7]
The lack of diversity in development teams has contributed to the creation of biased algorithms. Advocating for diverse, inclusive teams is more crucial now than ever if we are to ensure an equitable AI landscape.
V. Creating a Fair and Inclusive Future
Who codes matters, how we code matters, and why we code matters. Given the mass integration of AI into everyday life, we must assess the ethical implications of these systems before deploying them at scale – especially since racial biases have not yet been eliminated from them. In 2019, The Leadership Conference on Civil and Human Rights and numerous other public interest groups urged tech companies to enact changes to their AI policies, calling on major companies to “Reaffirm their commitment to increasing diversity in the tech industry so that marginalized communities are part of the creation and implementation of products” [6]. This was one of many steps toward a fairer, more inclusive future for AI. I invite you to recall the importance of advocacy in creating a truly equal environment. We can all take a step toward a fair and inclusive future. By contributing diverse data sets, assembling diverse workplaces, and establishing ethical guidelines for artificial intelligence, we can make real change. Together, let us make the future a more welcoming place for all.
Sources
1. Buolamwini, J. (Speaker). (2016, November). How I’m fighting bias in algorithms. TED Talks. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms
2. Buolamwini, J. (2018, November). Unmasking bias in facial recognition algorithms. MIT Sloan Ideas Made to Matter. https://mitsloan.mit.edu/ideas-made-to-matter/unmasking-bias-facial-recognition-algorithms
3. Center on Privacy & Technology at Georgetown Law. (2016). The Perpetual Line-Up. https://www.perpetuallineup.org/
4. Eubanks, V., & Noble, S. U. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. International Journal of Communication, 12, 3758-3776. https://doi.org/10.24926/ijoc.2018.6182
5. Harwell, D. (2019, December 19). Federal study confirms racial bias in many facial recognition systems, casts doubt on their expanding use. The Washington Post. https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/
6. The Leadership Conference on Civil and Human Rights. (2019, August 9). Civil Rights and Public Interest Groups Urge Tech Companies to Enact Meaningful Changes. https://civilrights.org/2019/08/09/civil-rights-and-public-interest-groups-urge-tech-companies-to-enact-meaningful-changes/
7. C-SPAN. (2019, May 22). House Hearing on Facial Recognition Technology. https://www.c-span.org/video/?460959-1/house-hearing-facial-recognition-technology
8. IBM. (n.d.). Machine Learning. https://www.ibm.com/topics/machine-learning
9. UNCF. (2020). Black Females Moving Forward in Computing. UNCF Annual Report 2020. https://uncf.org/annual-report-2020/black-females-moving-forward-in-computing
10. Metz, C. (Guest). (2023, June 15). Fresh Air for June 15, 2023 – Cade Metz on Artificial Intelligence [Radio broadcast]. NPR. https://www.npr.org/programs/fresh-air/2023/06/15/1182419594/fresh-air-for-june-15-2023-cade-metz-on-artificial-intelligence
11. Georgetown Center for Security and Emerging Technology. (n.d.). Levers for Improving Diversity in Computer Science. https://cset.georgetown.edu/article/levers-for-improving-diversity-in-computer-science/
12. Poet of Code. (n.d.). About. https://poetofcode.com/about/
13. Buolamwini, J. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Massachusetts Institute of Technology. https://www.media.mit.edu/publications/full-gender-shades-thesis-17/
14. UCLA Equity, Diversity, and Inclusion. (2019). The Science of Equality, Volume 2. https://equity.ucla.edu/wp-content/uploads/2019/12/Science-of-Equality-Volume-2.pdf