Generative Artificial Intelligence: A Double-Edged Sword

Despite the advances that have emerged from the artificial intelligence (AI) boom and the subsequent generative AI race, significant problems lurk beneath the surface. As AI generators spread through social media and pop culture and demand for AI tools skyrockets, these problems are amplified. Such tools regularly exhibit harmful biases and stereotypes deeply embedded in the data developers use to train AI models. Compounding the urgency of the issue, generative AI models can create highly realistic imagery, empowering users with malicious intent. This session will explore the pervasive issue of bias in generative AI and its implications, examining how these models, often trained on biased datasets, can unintentionally amplify, replicate, and reinforce harmful biases and stereotypes. It will also delve into the objectification of individuals, particularly women, and, drawing on a novel study, discuss how a popular TikTok filter built on generative AI exhibits gender and racial biases while excessively sexualizing its users.