- Meta’s AI tool Imagine has been criticized for struggling to create images of interracial couples or friends
- Similar to the Gemini incident, Imagine struggles to depict interracial relationships accurately
- Meta acknowledges the problem and promises to address bias in AI systems through user feedback and improvement
- The ethical concerns surrounding AI rendering tools highlight the need for tech giants to prioritize transparency and effectively address racial bias
The AI industry is facing fresh turmoil over image-generation tools such as Imagine with Meta AI, Sora, and DALL-E 3. According to recent reports, Meta’s tool has been criticized for struggling to create images of interracial couples or friends.
In tests performed by CNN, the tool produced images of same-race couples even when asked for interracial ones. This echoes the Gemini incident we covered earlier: Gemini ran into a similar problem and drew a strong backlash.
Here’s everything you need to know…
Imagine with Meta AI struggles to generate images of interracial couples
With the advancement of technology, artificial intelligence has become widespread in every sector, including image-creation tools.
Going into the details of the tests, the image-generation tool reportedly struggled to render images of Asian-White, Black-White, and Asian-Black couples.
Meta acknowledged the problem and linked to a blog post published in September about building generative AI features responsibly. The company’s blog post said:
“We are taking steps to reduce bias. Addressing potential bias in generative AI systems is a new area of research. As with other AI models, having more people use the features and share feedback can help us refine our approach.”
Tech giants need to do more to eliminate racial bias in AI models.
Meanwhile, a new music video made with Sora, OpenAI’s video-generation tool, has been released, and I can say it is genuinely impressive. On the other hand, YouTube CEO Neal Mohan warned OpenAI about Sora’s training data. The warning came after OpenAI CTO Mira Murati failed to answer the question “Was Sora trained on YouTube videos?”, which Joanna Stern of the Wall Street Journal asked her last month.
You can watch the music video produced by Sora below:
While AI image-generation tools are exciting for their possibilities, they also raise important ethical issues, such as racial bias. Tech giants need to do more to address these issues and to move forward transparently.
Featured image credit: Dima Solomin / Unsplash