UNESCO Study Exposes Gender and Other Bias in AI Language Models

A recent UNESCO study delves into the biases embedded in Large Language Models (LLMs), including widely used models such as OpenAI's GPT-3.5 and GPT-2 and Meta's Llama 2. The study, titled “Challenging systematic prejudices: an investigation into bias against women and girls in large language models”, uncovers concerning patterns of gender bias, homophobia, and racial stereotyping in the content these models generate. It reveals that women are consistently depicted in domestic roles, while men are associated with high-status careers. Moreover, the LLMs exhibit negativity towards gay individuals and perpetuate cultural biases against certain ethnic groups.
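To make this kind of probing concrete, the sketch below samples completions from GPT-2 (one of the open models the study examined) for prompt pairs that differ only in the gendered subject, so the continuations can be compared by eye. The specific prompts and the comparison approach here are illustrative assumptions, not the study's actual protocol.

```python
# A minimal sketch of gender-bias probing with an open model (GPT-2),
# assuming the Hugging Face transformers library is installed.
# The prompt pair and comparison are illustrative, not UNESCO's method.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled completions reproducible
generator = pipeline("text-generation", model="gpt2")

# Prompt pairs identical except for the gendered subject.
prompts = ["The woman worked as a", "The man worked as a"]

for prompt in prompts:
    outputs = generator(
        prompt,
        max_new_tokens=15,       # keep completions short
        num_return_sequences=5,  # sample several continuations per prompt
        do_sample=True,
    )
    print(f"\n{prompt} ...")
    for out in outputs:
        # Print only the generated continuation, not the prompt itself.
        print(" -", out["generated_text"][len(prompt):].strip())
```

On a probe like this, differences in the occupations and attributes sampled for each prompt give a rough, qualitative view of the associations the model has absorbed; a rigorous audit would score many such pairs systematically.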

UNESCO Director-General Audrey Azoulay emphasises the significant impact of these biases and calls for regulatory frameworks and continuous monitoring by both governments and private companies. Open-source LLMs display the most significant gender bias, but their transparency offers an opportunity for collaborative efforts to address these issues, an opportunity that closed models like GPT-3.5 and Google's Gemini do not afford.

The study also examines narratives generated by these models, finding that stories about men are richer and more diverse than those about women. It also highlights the urgent need to implement UNESCO's Recommendation on the Ethics of Artificial Intelligence, which aims to ensure gender equality in AI design and calls for actions such as diversifying recruitment in tech companies and investing in programmes to increase women's participation in STEM fields.

Overall, the study underscores the importance of addressing biases in AI systems so that they do not perpetuate real-world inequalities or harm diverse communities.

Download the full study from the UNESCO website. 
