Google apologizes for Gemini AI’s unreliable results on PM Modi queries.

Google has apologised after its AI tool Gemini returned inaccurate results for queries about Prime Minister Narendra Modi. Separately, India’s IT Ministry recently issued an advisory to generative AI companies stating that their services must not produce output that violates Indian law.

Google responded quickly after Gemini’s answers to questions about Prime Minister Narendra Modi drew the ire of the Centre, which viewed them as biased. MoS for IT and Electronics Rajeev Chandrasekhar told The Times of India that Google apologised after receiving a letter asking it to explain the “unsubstantiated results.”

Last week, India’s IT Ministry issued an advisory to companies operating generative artificial intelligence platforms, including Google and OpenAI, stating that their services must not provide replies that violate Indian laws.

Platforms offering “under-testing” or potentially unreliable AI systems or large language models must obtain explicit approval from the Centre before making them available to Indian users, and must clearly label any output that may be “fallible” or “unreliable.”

The government has also asked these platforms to implement a traceability requirement by embedding an identifier in each piece of content they produce, so that it can be traced back to its originator – for instance, a user who instructed the service to create misinformation or deepfakes.
