Is AI a solution or a barrier to gender equity?
Inspired by my time in New York for CSW67, I’d like to reflect on artificial intelligence, or AI as it is more commonly known. AI has the potential to transform our society in countless ways. When it comes to gender equity, however, its impact is a complex and nuanced issue that cannot be ignored.
I’d like to kick off this blog with a movie recommendation. Have you come across this film, CODED BIAS, on Netflix?
This documentary investigates the bias in algorithms after M.I.T. Media Lab researcher Joy Buolamwini uncovered flaws in facial recognition technology. This is a must-watch! Watching it also gave me loads of nostalgia about living in Boston and Cambridge.
Argument for
On one hand, AI can be a powerful tool for promoting gender equity. AI algorithms can help eliminate biases in decision-making processes and make visible patterns that might otherwise be missed. This is particularly relevant for identifying where gender disparities exist: AI can be used to analyse job postings, anonymise resumes, and identify pay disparities between men and women in the same roles. In addition, AI has the capacity to reduce gender-based violence. AI-powered chatbots can offer victims of domestic violence, sexual harassment, and other forms of gender-based violence a new level of help and care. AI can also be used to analyse social media posts for hate speech and toxic behaviour, helping promote a safer online environment for women.
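To make the pay-disparity point concrete, here is a minimal sketch of the kind of audit an organisation could run. The data and column names are hypothetical, and a real audit would control for experience, location and much more; this only shows the basic idea.

```python
# A minimal, hypothetical sketch of a pay-equity audit.
# The column names (role, gender, salary) and figures are illustrative
# assumptions, not a real organisation's data.
import pandas as pd

# Toy data standing in for an HR export
df = pd.DataFrame({
    "role":   ["Engineer", "Engineer", "Engineer", "Analyst", "Analyst", "Analyst"],
    "gender": ["F", "M", "M", "F", "F", "M"],
    "salary": [68000, 74000, 76000, 52000, 51000, 58000],
})

# Median salary per gender within each role
medians = df.groupby(["role", "gender"])["salary"].median().unstack()

# Pay gap: how much less the median woman earns than the median man, per role
medians["gap_%"] = 100 * (medians["M"] - medians["F"]) / medians["M"]
print(medians.round(1))
```

Nothing here is sophisticated, and that is rather the point: surfacing a within-role pay gap is a few lines of analysis once someone decides to look.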
Argument against
However, AI can also be a barrier to gender equity, especially when it perpetuates existing biases. AI algorithms are only as unbiased as the data they are trained on: if the training data is biased, the AI will be biased too. One example of this is facial recognition technology, which has been shown to be less accurate at identifying women and people of colour.
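This is the kind of disparity Buolamwini’s Gender Shades work surfaced by disaggregating accuracy across demographic subgroups rather than reporting a single headline number. Below is a minimal sketch of that auditing idea; the group labels and error counts are made up purely to show how an overall average can hide a subgroup failure.

```python
# A minimal sketch of a disaggregated accuracy audit, in the spirit of
# the Gender Shades methodology. The numbers are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":   ["lighter_male"] * 50 + ["darker_female"] * 50,
    "correct": [True] * 49 + [False] * 1 + [True] * 33 + [False] * 17,
})

# Overall accuracy looks respectable...
print("overall accuracy:", results["correct"].mean())

# ...while per-group accuracy exposes the disparity
print(results.groupby("group")["correct"].mean())
```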
A personal example of racially and gender-coded bias: when I applied for a US student visa, the online application system would not accept my original passport photo. My husband, who is a photographer, figured out the bias in the system and 'lightened' my complexion until the application accepted my photo. If I had been unable to proceed with the online application and instead been forced to mail my application via courier, I would have missed a very narrow window to apply successfully for an F1 visa and commence my Master's degree in person at Harvard.
This bias stems from the predominantly male and white data used to train the algorithms. In situations where accuracy is crucial, such as law enforcement, these biases can have severe consequences, leading to wrongful arrests and convictions. AI can also be used to perpetuate harmful gender stereotypes: the use of female voices for virtual assistants like Siri and Alexa, along with their flirtatious responses to sexist remarks, can reinforce gender-based discrimination and perpetuate harmful gender norms.
Another example of technological bias, one closer to home for me, is the pulse oximeter: a portable device used to measure the level of oxygen in the bloodstream. It is a probe placed on your finger (or toe or ear) that tells healthcare workers how sick you are. The medical field has known for decades that pulse oximeters are likely to be less reliable in people with darker skin.
Pulse oximeters were widely used during the peak of the COVID-19 pandemic, a time when racialised inequities in health outcomes for Black and Brown people were being exposed. Awareness of this technological bias matters because we may misdiagnose racially minoritised people by overestimating how much oxygen they are getting, and subsequently triage them to receive less supportive care.
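For readers who like to see the mechanics, here is a minimal sketch of the kind of check used in the pulse-oximetry literature: compare the device reading (SpO2) against the gold-standard arterial blood gas (SaO2) and count "occult hypoxaemia", reassuring oximeter readings that mask genuinely low saturation. The data here is invented and deliberately exaggerated to show the pattern.

```python
# A minimal sketch of an occult-hypoxaemia audit. Readings are hypothetical.
# Occult hypoxaemia: the oximeter looks reassuring (SpO2 >= 92%) while the
# arterial blood gas shows true saturation is dangerously low (SaO2 < 88%).
import pandas as pd

obs = pd.DataFrame({
    "skin_tone": ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
    "spo2":      [94, 93, 95, 94, 90, 96],
    "sao2":      [86, 87, 93, 93, 89, 95],
})

# Flag readings where the device is reassuring but the blood gas is not
obs["occult_hypoxaemia"] = (obs["spo2"] >= 92) & (obs["sao2"] < 88)

# Rate of missed hypoxaemia by skin tone
print(obs.groupby("skin_tone")["occult_hypoxaemia"].mean())
```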
This paper explores how to move from problems to solutions for bias related to pulse oximetry!
These concerns have also been highlighted in the NHS Race and Health Observatory guidelines. Yet many healthcare workers who use pulse oximeters daily in clinical decision-making remain unaware of this.
My personal experience and the research cited above are examples of how easy it is to create and embed systemic and institutional racism, a faceless perpetrator, into our everyday lives, guidelines and practice. So I believe that the things we have created as a society can be re-imagined and recreated better. We collectively know how. All we need is the will.
In conclusion, AI has the potential to be both a solution and a barrier to gender equity. While it is important to continue developing AI technology, we must also be aware of the risks it carries. We must strive to ensure that AI is used to promote gender equity rather than to create new barriers for women in their daily lives. By remaining cognisant of these issues, we can work together to create a future that is inclusive and equitable for all.
As we explored in my blog on positionality, power and privilege, many biases are inevitable. Our biases are the blind spots that exist at the edge of our lived experiences and privileges. The only way to overcome the biases that continue to systematically harm groups of people and diminish our collective potential as humanity is to work and problem-solve in diverse, inclusive teams with a strong sense of belonging.
Looking forward to sharing more on making a case for equality, diversity and inclusion (EDI) beyond the social justice argument.
Stay tuned!
P.S.
As always, if my thoughts this week struck a chord, piqued your interest, or you’d like to explore some of these ideas further or have questions, leave a comment or write to me HERE.