Google’s Gemini AI Model Faces Backlash over Inaccurate and Racially-Skewed Images
Controversy has arisen as users on social media platforms shared examples of Google’s Gemini AI producing images with historically inaccurate and racially-skewed content, reigniting concerns about bias in AI systems.
Examples of Gemini’s Inaccuracy
- Racially-diverse Nazis
- Black medieval English kings
- Refusal to depict Caucasians
- Refusal to depict churches in San Francisco, citing respect for indigenous sensitivities
- Refusal to depict sensitive historical events such as Tiananmen Square in 1989
Jack Krawczyk, the product lead for Google’s Gemini Experiences, acknowledged the issue on social media and pledged to rectify it. For now, Google says it is pausing Gemini’s generation of images of people.
Critics Question Whether the Response is an Overcorrection
Marc Andreessen, co-founder of Netscape and a16z, created an AI model that refuses to answer problematic questions, warning of a broader trend towards censorship and bias in commercial AI systems.
Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations.
Calls for Open-Source AI Models
Yann LeCun, Meta’s chief AI scientist, stresses the importance of fostering a diverse ecosystem of AI models. Bindu Reddy, CEO of Abacus.ai, shares similar concerns about the concentration of power that would result without an open-source ecosystem.
As discussions around the ethical and practical implications of AI continue, the episode underscores the need for transparent and inclusive development frameworks.