Google has implemented a new policy for its Artificial Intelligence (AI) chatbot, Gemini, limiting the types of election-related questions it will answer. This decision, detailed in a recent blog post, reflects the tech giant’s cautious approach to handling sensitive election information in a year of major elections, including in India, the US, the UK, and South Africa.
A Google spokesperson elaborated on the strategy, noting that the move aligns with previously announced election readiness efforts. “In light of the upcoming elections worldwide in 2024 and to err on the side of caution, we’re modifying the types of election-related queries Gemini will respond to,” they stated. This approach aims to mitigate potential controversies involving AI technologies.
When tested, Gemini declined to answer questions about the upcoming US, UK, and South African elections, directing users to Google Search instead. When pressed further, however, it gave more detailed answers about Indian political parties.
This adjustment comes amid growing global scrutiny of generative AI’s role in spreading misinformation, which has prompted calls for stricter regulation. India, for instance, recently required tech firms to obtain approval before deploying “unreliable” or experimental AI tools.
Google’s caution also follows a misstep involving its AI image generator, which produced historically inaccurate depictions of people, leading to an apology from the company and a temporary pause of the feature. These incidents underscore the challenges and responsibilities facing tech companies as they navigate the evolving landscape of AI technology and public information.