University of Washington researchers developed a system for detecting subtle biases in AI models. In tests of conversations about race and caste, seven of the eight popular AI models they evaluated generated significant amounts of biased text, particularly when discussing caste. Open-source models fared far worse than the two proprietary ChatGPT models.