The Indian government has issued an advisory requiring tech companies to obtain government approval before releasing new artificial intelligence (AI) tools. Published by the Indian IT ministry on March 1, the advisory stipulates that AI tools deemed “unreliable” or still in a trial phase must be approved before they reach the public and must carry labels warning that they may provide inaccurate answers. Platforms must also ensure that their tools do not compromise the integrity of the electoral process, especially with general elections anticipated this summer.
This advisory follows recent criticism of Google and its AI tool Gemini for delivering inaccurate or biased responses. Rajeev Chandrasekhar, India’s deputy IT minister, emphasized that safety and trust are platforms’ legal obligations, noting that a “Sorry, unreliable” disclaimer does not exempt them from complying with the law.
In November, India announced plans for regulations to combat the spread of AI-generated deepfakes ahead of its upcoming elections, aligning with similar efforts in the United States. However, the latest advisory has faced pushback from the tech community, with concerns that excessive regulation could hinder India’s leadership in the tech space.
Chandrasekhar responded to these concerns, stating that the advisory is meant to guide those deploying lab-level or under-tested AI platforms on the public internet, ensuring compliance with Indian law and protecting users. He reaffirmed India’s commitment to AI and to a safe, trusted internet, highlighting recent partnerships such as Microsoft’s collaboration with Indian AI startup Sarvam, which will bring Indic-voice large language models to Azure AI infrastructure and expand access for users in the Indian subcontinent.