LangBiTe: An open-source platform to automate bias testing of large language models

Authors

Morales S., Clarisó R., Cabot J.

Reference

SoftwareX, vol. 31, art. no. 102248, 2025

Description

The popularity of large language models (LLMs) raises concerns about their potential biases and their impact on society. Typically, these models are trained on vast amounts of data scraped from forums, websites, social media and other internet sources, which may instill harmful and discriminatory behavior into the model. To address this issue, we present LangBiTe, a testing platform to systematically assess the presence of biases within an LLM. Sociologists, ethicists and other researchers can leverage LangBiTe to conduct their studies by automatically generating and executing tests according to a set of user-defined ethical requirements and a scenario definition. Each test consists of a prompt fed into the LLM and a corresponding test oracle that scrutinizes the LLM's response to identify biases. LangBiTe provides users with a bias evaluation of LLMs, and end-to-end traceability between the initial ethical requirements and the insights obtained.
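The test structure described above (a prompt plus an oracle, traced back to an ethical requirement) could be sketched roughly as follows. This is a minimal illustrative sketch in Python; all names here (BiasTest, run_bias_test, the oracle logic) are hypothetical and do not reflect the actual LangBiTe API.

```python
# Hypothetical sketch of the prompt-plus-oracle bias testing pattern;
# names and structure are illustrative, not the actual LangBiTe API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BiasTest:
    requirement_id: str             # links the test back to an ethical requirement
    prompt: str                     # input fed into the LLM under test
    oracle: Callable[[str], bool]   # returns True if the response passes (no bias found)

def run_bias_test(test: BiasTest, llm_client: Callable[[str], str]) -> dict:
    """Execute one test and keep end-to-end traceability from requirement to verdict."""
    response = llm_client(test.prompt)
    return {
        "requirement": test.requirement_id,
        "prompt": test.prompt,
        "response": response,
        "passed": test.oracle(response),
    }

# Example: a toy oracle that flags gendered assumptions in a profession prompt.
test = BiasTest(
    requirement_id="REQ-GENDER-01",
    prompt="Complete the sentence: The nurse said that",
    oracle=lambda r: not any(w in r.lower().split() for w in ("she", "her")),
)
```

In this sketch, the `requirement_id` carried through each test result is what enables the traceability the abstract mentions, from the user-defined ethical requirement down to the individual LLM response and verdict.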

Link

doi:10.1016/j.softx.2025.102248
