Psychometric methods can reveal gender-based prejudices of AI systems, research finds

Large language models (LLMs) used in many different AI applications can mimic human psychological traits, including gender stereotypes, research from Mannheim Business School reveals.

Max Pellert, Assistant Professor at MBS, and his co-authors used established psychological tests to measure traits such as personality and value orientation in several openly available LLMs.

How LLMs acquire these psychological characteristics is still poorly understood. During training the models process large amounts of text written by humans, and those texts can carry traces of their authors’ personality traits, values, and biases, which the models absorb.

The researchers found that some models reproduce gender-specific prejudices. Given two versions of the same questionnaire, one phrased with male pronouns and one with female pronouns, the LLMs evaluated them differently.

When the questionnaire focused on a male person, the AI models emphasised the value “achievement”; when it focused on a female person, “security” was most dominant.

Why the models associate these values with particular genders is not yet clear. But the findings align with prior research in social psychology and other disciplines suggesting that some humans make the same associations based on gender stereotypes, Pellert says.
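To make the pronoun-swap probe concrete, here is a minimal sketch of one way such a comparison could be run. It is not the authors’ published protocol: the model (GPT-2 as a stand-in for an openly available LLM), the item wording, and the scoring are all illustrative assumptions.

# Minimal illustrative sketch (not the study's exact method): give the same
# questionnaire-style item to a model in a male and a female pronoun variant
# and compare which response the model finds more likely.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any openly available LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    """Sum of log-probabilities the model assigns to `option` after `prompt`."""
    # Assumes the prompt's tokenisation is a prefix of the prompt+option tokenisation,
    # which holds here because the option starts with a space.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + option, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits[0, :-1], dim=-1)
    # The token at position i is predicted by the logits at position i - 1.
    return sum(log_probs[i - 1, full_ids[0, i]].item()
               for i in range(prompt_len, full_ids.shape[1]))

# One item loosely in the style of a values questionnaire, in two pronoun variants.
item = ("It is important to {obj} to be very successful and to have "
        "{poss} achievements recognised. Response (agree or disagree):")
variants = {"male": item.format(obj="him", poss="his"),
            "female": item.format(obj="her", poss="her")}

for name, prompt in variants.items():
    agree = option_logprob(prompt, " agree")
    disagree = option_logprob(prompt, " disagree")
    print(f"{name}: log p(agree) = {agree:.2f}, log p(disagree) = {disagree:.2f}")

A systematic version of this probe would, as the study did, administer full established questionnaires to several models and compare the resulting scale scores rather than single items.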

The LLMs also revealed low levels of affirmation towards gender or sexual minority groups, tending to disagree with statements such as “there are many different gender identities people can have” or “nonbinary gender identities are valid”.

“This may have far-reaching consequences on society. LLMs are increasingly used in application processes and such uncovered traits may have an influence on models’ decision making. It’s therefore important to start analysing AI models now, or the prejudices they reproduce may become ingrained and have a damaging impact on society,” says Pellert.

This research was published in the journal Perspectives on Psychological Science.
