A government directive has been issued days after Google’s Gemini landed in hot water over controversial remarks about Modi
Big Tech companies will now have to obtain the Indian government’s permission to release artificial intelligence models in the country that have not been thoroughly tested or that remain unreliable.
The new rule, announced by the Ministry of Electronics and Information Technology on Friday, will apply only to “significant platforms” and not to startups in India developing their own AI models, IT Minister Rajeev Chandrasekhar said on Monday, after the move drew criticism from the industry.
Taking to X (formerly Twitter), the minister argued that the process of “seeking permission” could be an “insurance policy” for platforms that could otherwise be sued by consumers. He did not, however, clarify what kind of companies are considered ‘significant platforms’.
Chandrasekhar earlier told ANI news agency that if an AI model is still being tested but has been released by the developer to the public, it does not “absolve them from the consequences of the law, especially criminal law.” The new rule, he argued, will help platforms be “a lot more disciplined” about taking their AI models and platforms “from the lab directly to the market.”
The development comes on the heels of a controversy involving Google’s new Gemini AI chatbot and Indian Prime Minister Narendra Modi. The Indian government accused the US tech giant of violating the IT Act and several provisions of the criminal code after the chatbot appeared to link the Modi-led Bharatiya Janata Party to “fascism” in response to a user’s question.
Amid the backlash, Google claimed it was working “quickly” to address the issue and said the platform was “unreliable.” On Monday, several Indian news outlets reported that Google apologized to Modi for Gemini’s controversial response.
The Indian government’s latest directive also comes amid mounting scrutiny of AI platforms operating in the country, particularly in the run-up to the national election.
Chandrasekhar recently announced that the federal government is working on an AI regulation framework expected to be released in June or July 2024. Discussions over the need for such a framework intensified after several AI-generated deepfakes, including a morphed video of a popular Bollywood actress, went viral and triggered a nationwide outcry.
In December, India mandated that digital and social media platforms should communicate content prohibited under IT rules “clearly and precisely” to users. Platforms could lose ‘safe harbor immunity’ and be liable to criminal and judicial proceedings if they fail to implement the prescribed measures, New Delhi warned.
India has also been clear about its intent to harness the potential of AI, primarily to advance education and entrepreneurship in rural areas. At the Global Partnership on Artificial Intelligence (GPAI) Summit 2023 held in New Delhi last year, Modi called for the “responsible” and “ethical” use of AI in the country.