Generative AI models are encoding biases and negative stereotypes in their users
The likes of ChatGPT, Google’s Bard and Midjourney can also help spread incorrect, nonsensical information
Marginalised groups are disproportionately affected
Children are at particular risk
In the space of a few months, generative AI models such as ChatGPT, Google’s Bard and Midjourney have been adopted by more and more people in a variety of professional and personal ways. But a growing body of research is underlining that they encode biases and negative stereotypes in their users, and mass-generate and spread seemingly accurate but nonsensical information. Worryingly, marginalised groups are disproportionately affected by the fabrication of this nonsensical information.
In ...










