
High time India went for a regulatory sandbox for Artificial Intelligence


Artificial Intelligence (AI), often touted as the holy grail of technology, is a general-purpose technology with applications that transcend disciplines, much like electricity. With the advent of foundation models, i.e., models like GPT-3, which are trained on humongous volumes of data at massive scale and thereafter become adaptable to a wide range of tasks, the potential of AI has increased manifold. With it have grown the problems of bias, privacy and disinformation.

In India, instances of bias have not yet come to light, but that could be due to a lack of awareness or higher tolerance levels, especially when daily survival is the first priority. Fake news remains the larger problem, while privacy issues are relatively less understood.

So, to put things in perspective, we are talking about an issue that is yet to become a major problem, since the adoption of AI in India is at a relatively nascent stage. Why address it now? Because, given the scale at which adoption is likely and the pace at which the technology is developing, if measures to mitigate bias, privacy compromises and disinformation are not put in place at this stage, the impending AI tsunami could engulf us with manifestations of these problems and push the country into turmoil. Remember what Cambridge Analytica did?


Need for regulatory sandbox

So what needs to be done to address the issue? Whatever is done should not discourage innovation or penalise coders and AI companies; at the same time, it should incentivise them to build in measures that counter bias, secure privacy and tackle disinformation. Further, each of these three problems may have a different solution, so a one-size-fits-all approach may not serve the purpose. This is where a regulatory sandbox comes in.

A regulatory sandbox is a tool that enables companies to test innovative technologies, and to identify potentially costly mistakes, in a controlled environment where they can experiment with new products and services while having access to a live network and the real-time data of a large number of participating consumers. Think of it as a beta launch, but supervised by the government. The sandbox provides a common platform for all stakeholders to interact and exchange feedback to improve the overall quality of the product, including, hopefully, removing biases, curbing disinformation and securing privacy.
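
To make the idea concrete, here is a minimal, purely illustrative sketch of how such an intake-and-feedback loop might be modelled in code. Every name in it (SandboxEntry, CheckResult, bias_check) is a hypothetical assumption for this example, not drawn from any regulator's actual framework.

```python
# Hypothetical sketch of a sandbox intake-and-feedback loop.
# All names and structures here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    dimension: str   # "bias", "privacy" or "disinformation"
    passed: bool
    feedback: str    # stakeholder feedback returned to the company

@dataclass
class SandboxEntry:
    product: str
    results: list = field(default_factory=list)

    def run_checks(self, checks):
        """Run each stakeholder-supplied check against the product
        and collect feedback; it clears the sandbox only if all pass."""
        self.results = [check(self.product) for check in checks]
        return all(r.passed for r in self.results)

# One hypothetical check supplied by a stakeholder group.
def bias_check(product):
    return CheckResult("bias", passed=True,
                       feedback="No disparity found on the test cohort.")

entry = SandboxEntry("face-recognition-v1")
if entry.run_checks([bias_check]):
    print("Cleared the sandbox: permit to enter the market.")
```

In a real sandbox, each check would be run against live data by a different stakeholder group, and the feedback, not just the pass/fail verdict, is what the company takes back to improve the product.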

Regulatory sandboxes are not new to the world, or to India. The fintech sector has seen the widest adoption of sandboxes, with Australia, Bahrain, Singapore and the UK using them since 2015. Colombia, Saudi Arabia and Bahrain also use them for non-fintech sectors such as communications, emerging technologies and even Islamic finance. As far as AI is concerned, however, only Norway and the UK have sandboxes at present.


India has had three regulatory sandboxes, operated by the RBI (Reserve Bank of India), SEBI (Securities and Exchange Board of India) and IRDAI (Insurance Regulatory and Development Authority of India) for the fintech and insurance sectors, since 2019. The TRAI (Telecom Regulatory Authority of India), too, has now recognised the potential of a regulatory sandbox for the digital communications sector; to this effect, it issued a consultation paper seeking suggestions by August 1, 2023.

Regulatory sandboxes need to be broad-based, with participants from industry, legal experts, activists, consumer forums, NGOs and representatives from various ministries. These stakeholders would evaluate the AI product brought into the sandbox, as it operates, for biases, privacy compromises and disinformation; a minimal sketch of what one such check could look like follows.
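
As an illustration, the sketch below computes a demographic-parity gap, the difference in positive-outcome rates between two groups, which is one common way of quantifying bias in a model's decisions. The data, group labels and flagging threshold are invented for the example; a real panel would choose its own metrics and cohorts.

```python
# Hypothetical bias check a sandbox panel might run: the demographic-parity
# gap, i.e., the absolute difference in positive-outcome rates between groups.
def demographic_parity_gap(outcomes, groups, group_a, group_b):
    """outcomes: 0/1 model decisions; groups: group label per decision."""
    def rate(g):
        approved = sum(o for o, grp in zip(outcomes, groups) if grp == g)
        total = sum(1 for grp in groups if grp == g)
        return approved / max(1, total)
    return abs(rate(group_a) - rate(group_b))

# Invented loan-approval decisions for two hypothetical groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups, "A", "B")
print(f"Parity gap: {gap:.2f}")  # 0.50: a gap this large would flag the product
```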


Ideally, the sandbox would operate under the AI and Emerging Technologies Division of MeitY (Ministry of Electronics and Information Technology) in a quasi-governmental manner. Companies would benefit from joining the sandbox through access to market data and feedback on their AI product, thereby averting potential losses and earning the permit to enter the market.

Costs of operating sandboxes

There are costs to operating sandboxes. First, participating companies will have to bear the cost, and the time delays, of compliance. Second, since each AI application is rather unique, judgments will have to be made case by case, and will by their nature tend to be subjective, controversial and time-consuming.

Third, companies will suffer losses from potential rejections during the sandboxing process, which could lead to litigation. Unintended consequences could include domination of the sandbox by bigger organisations, potentially excluding startups and smaller players; this could even result in a concentration of power and limited representation of diverse perspectives within the sandbox. Bigger companies may also resort to data hoarding to the point where compliance is disincentivised.


Finally, sandboxes are best suited to testing high-impact AI software, for example in face recognition, military usage, banking and so on. Sandboxing such software will help reduce biases, lead to better integration within the industry and help develop a faster OODA (Observe, Orient, Decide, Act) loop for feedback.

Since not all AI is biased or creates misinformation, the sandbox would apply only to the appropriate AI technologies. A regulatory sandbox is only one method of addressing some of the ills of society that manifest in code. But it is a first step, whose time has come, towards institutionally addressing some of the pitfalls of unregulated AI.

(Jaideep Chanda is an alumnus of the Takshashila Institution, and a technophile.)

(The Federal seeks to present views and opinions from all sides of the spectrum. The information, ideas or opinions in the articles are of the author’s and do not necessarily reflect the views of The Federal.)
