
OpenAI tells Delhi court removal of ChatGPT training data will breach US law: Report
OpenAI also challenged the Delhi high court's jurisdiction to hear the copyright breach case filed by local news agency ANI, since the US-based firm has no presence in the country
In a copyright breach case the Microsoft-backed OpenAI is facing in India, the firm told an Indian court that it cannot remove the training data used for ChatGPT, since doing so would go against US law.
It also challenged the jurisdiction of the Delhi high court to hear the case filed by local news agency ANI, as the US-based AI firm has no presence in the country, said a report by Reuters.
The case has not been reported earlier, the report said, but it is similar to the closely watched lawsuit brought against OpenAI in the US by The New York Times.
Also read: New York Times sues OpenAI, Microsoft for using reports to train chatbots
ANI sues OpenAI
The US-based OpenAI was sued by news agency ANI in Delhi in November.
ANI has accused OpenAI of using the news agency's published content without permission to train its generative AI tool, ChatGPT, and wants the ANI data already stored by ChatGPT to be deleted.
According to Reuters, OpenAI responded to the lawsuit by filing an 86-page rejoinder at the Delhi high court on January 10.
In its submission, OpenAI said that ANI's claims were not subject to the processes of Indian courts and were beyond their jurisdiction.
The company said it has "no office or permanent establishment in India ... the servers on which (ChatGPT) stores its training data are similarly situated outside of India".
OpenAI faces similar suits in the US
Notably, OpenAI is facing similar charges from The New York Times, as well as The New York Daily News and the Center for Investigative Reporting, which have filed their own lawsuits against OpenAI and Microsoft in the United States. The publishers claim that OpenAI and Microsoft used their content to train the large language models powering their generative AI chatbots.
OpenAI's "unlawful use of The Times's work to create artificial intelligence products that compete with it threatens The Times's ability to provide that service," argued the Times.
However, OpenAI has consistently denied these allegations, saying in its defence that its AI systems make fair use of publicly available data.
Also read: Musk, Altman spar over Trump-supported $500-bn Stargate AI project
Deleting data
Responding to ANI's plea that its data stored in ChatGPT systems be completely removed, OpenAI told the Indian court in its January 10 submission that it is currently defending litigation in the United States over the data on which its models have been trained.
US law requires it to retain that data while the proceedings are pending. OpenAI "is therefore under a legal obligation, under the laws of the United States to preserve, and not delete, the said training data", it said.
OpenAI had earlier said it would no longer use ANI's content, the Reuters report said, but the news agency countered that its published works were already stored in ChatGPT's memory and should be deleted.
ANI has also said it is concerned about unfair competition, since OpenAI has entered into commercial partnerships with other news organisations. According to ANI, ChatGPT has reproduced verbatim or substantially similar extracts of its works in response to user prompts.
However, in its rebuttal submission, OpenAI argued that ANI "has sought to use verbatim extracts of its own article as a prompt, in an attempt to manipulate ChatGPT".
According to the report, OpenAI did not respond to a request for comment.
ANI to file response
Meanwhile, ANI said in a statement that the Delhi court has jurisdiction over the issue and that it would file a detailed response.
The Delhi high court is due to hear the case on January 28.
Significantly, OpenAI has signed deals with Time magazine, the Financial Times, Business Insider-owner Axel Springer, France's Le Monde and Spain's Prisa Media to display content.