By Sandeep Parekh

Close on the heels of the rapid development of artificial intelligence (AI), which has already transformed industries and professions, the European Union (EU) has become the first jurisdiction to introduce legislation regulating AI. Broadly, AI that poses an unacceptable risk would be prohibited, whereas minimal-risk AI systems would be left unregulated. The former category includes AI that threatens people’s rights, biometric categorisation systems, the untargeted scraping of facial images to create facial recognition databases, social scoring, predictive policing, and systems that manipulate human behaviour or exploit people’s vulnerabilities. Additionally, the legislation establishes an EU AI authority, which will be the nodal agency for implementation and enforcement of the AI Act; like the General Data Protection Regulation, the Act has an element of extraterritorial jurisdiction.

What is actually regulated is AI that falls into neither the unacceptable-risk nor the minimal-risk category, i.e. high-risk and limited-risk AI systems. High-risk AI systems pose a threat because their uses include deployment in critical infrastructure, education, essential services, law enforcement, dispensation of justice, governance, and the like. Such systems would, inter alia, be required to be registered in the relevant EU database, assess and mitigate risks, and ensure transparency, accuracy, and, more importantly, human oversight. Moreover, people would have the right to submit complaints about AI systems and would be entitled to explanations of decisions by high-risk AI systems that affect their rights.

The securities market will not be, or rather is not, an exception to this infiltration of AI. Although recent breakthroughs have been largely in the space of generative AI, giant strides are being made to equip AI with further abilities as well as to expand the data it has access to.

Until about three decades ago, activities related to trading in the securities market, such as research and the placement of orders, hardly involved technology. The focus on technology began in earnest with the introduction of dematerialised shares. Since then, India has gone on to take the lead in introducing a T+0 settlement cycle.

With the introduction of AI, the securities market is set to witness another transformation. However, one of the pertinent concerns in this regard is data privacy. As AI and AI-generated algorithms permeate various sectors, including the securities market, regulators face the challenge of crafting laws that govern these technologies effectively.

Algo trading and robo advisory

Currently, algo trading is defined as trading carried out through automated means. Recent suggestions by Sebi to regulate algo trading have received mixed reviews, with the regulator’s approach facing some criticism. It should be remembered that AI can effectively write code based on instructions fed to it. Creating an algo may therefore, in the near future, no longer be the exclusive domain of a trained IT professional. Where the deployment of an AI-created algo results in a violation of securities laws, the question arises as to the extent of culpability to be ascribed to the person who used AI to create the algo. The principle that the developer of an AI, or the human behind the ‘machine’, is responsible already exists; however, given how unpredictably AI is advancing, it may have to be revisited. As AI changes the landscape around us, our laws must keep pace to ensure that the rights and obligations of the parties concerned are laid down in advance.
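
To make the point concrete, below is a minimal, hypothetical sketch (in Python) of the kind of trading rule an AI assistant could generate from a plain-English instruction such as “buy when the short-term average price crosses above the long-term average”. The function names, parameters, and data are illustrative assumptions only, and no broker or exchange API is involved.

```python
# Illustrative sketch of an AI-generated trading rule (not a real system).
from statistics import mean

def moving_average(prices, window):
    """Simple moving average over the last `window` prices."""
    return mean(prices[-window:])

def crossover_signal(prices, short_window=5, long_window=20):
    """Return 'BUY', 'SELL', or 'HOLD' based on a moving-average crossover."""
    if len(prices) < long_window + 1:
        return "HOLD"  # not enough history to compute both averages
    short_now = moving_average(prices, short_window)
    long_now = moving_average(prices, long_window)
    short_prev = moving_average(prices[:-1], short_window)
    long_prev = moving_average(prices[:-1], long_window)
    if short_prev <= long_prev and short_now > long_now:
        return "BUY"
    if short_prev >= long_prev and short_now < long_now:
        return "SELL"
    return "HOLD"

if __name__ == "__main__":
    # Hypothetical price series; a real algo would consume live market data.
    prices = [100 + 0.5 * i for i in range(25)]
    print(crossover_signal(prices))
```

A layperson could plausibly obtain something like this from an AI assistant and deploy it without understanding its behaviour, which is precisely why the question of culpability matters.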

Additionally, robo advisory will presumably take centre stage in the future. With vast numbers of data points analysed in moments and investment strategies created in seconds rather than weeks, it is not far-fetched to expect a considerable shift in the manner in which investment advisory services are delivered today. Such a shift may warrant a revision of the extant regulatory framework, requiring both a strengthening and a rationalisation of regulations.
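
As a purely illustrative sketch, and not a description of any existing adviser, the rule-based core of a robo adviser could look something like the following, mapping an investor’s risk score and horizon to a model asset allocation. The thresholds and weights are assumptions, not investment advice.

```python
# Hypothetical rule-based core of a robo adviser (illustrative only).
def model_allocation(risk_score, horizon_years):
    """Return an equity/debt/cash split (in %) for a 1-10 risk score
    and an investment horizon in years."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk_score must be between 1 and 10")
    equity = min(90, 10 * risk_score)   # higher risk tolerance -> more equity
    if horizon_years < 3:
        equity = min(equity, 30)        # cap equity for short horizons
    cash = 10 if horizon_years < 3 else 5
    debt = 100 - equity - cash
    return {"equity": equity, "debt": debt, "cash": cash}

if __name__ == "__main__":
    print(model_allocation(risk_score=7, horizon_years=10))
    # e.g. {'equity': 70, 'debt': 25, 'cash': 5}
```

An AI-driven adviser would replace these fixed rules with models trained on market and client data, which is where questions of suitability, explainability, and accountability become acute.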

Grievance redress and enforcement

AI may eventually be used for the dispensation of justice by appropriately (and safely) integrating it into our judicial systems. In fact, it was recently suggested that, once such capability has been adequately built up, AI could be used to resolve minor traffic challans to begin with. Similarly, the securities regulator may consider initiating the process of developing AI that can effectively monitor, supervise, and assist in the enforcement of securities laws.

Additionally, with the recent focus on online alternative dispute resolution mechanisms for the securities market, AI could serve as an arbiter or mediator for minor issues, depending on the complexity, the quantum of money or assets involved, and the nature of the dispute.

Pattern recognition and predictive analysis

In a potential game changer for regulators, AI models under development are becoming increasingly adept at recognising patterns, and thus at predicting the ‘future’, depending on the data points the AI has access to and how it is ‘coded’ to ‘think’. While algos are already deployed by regulators around the world to identify and track suspicious activity, AI can be of immeasurable assistance in this regard. For instance, Sebi has, in the recent past, issued circulars introducing the use of blockchain to verify information and to ensure transparency among intermediaries and entities. Integrating AI into such systems could help predict defaults or prevent violations, thus safeguarding investor interest. However, any such technology should be used with caution, and strict safeguards should be built around such systems to prevent misuse.
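
As an illustration of the kind of pattern recognition involved, and not a description of any regulator’s actual surveillance system, the sketch below flags trades whose size deviates sharply from an account’s historical norm so that a human reviewer can take a closer look. The data and threshold are assumptions.

```python
# Illustrative surveillance sketch: flag statistically unusual trade sizes.
from statistics import mean, stdev

def flag_outlier_trades(trade_sizes, threshold=2.0):
    """Return indices of trades whose size is more than `threshold`
    standard deviations away from the account's average trade size."""
    if len(trade_sizes) < 3:
        return []  # too little history to judge what is "normal"
    avg = mean(trade_sizes)
    sd = stdev(trade_sizes)
    if sd == 0:
        return []
    return [i for i, size in enumerate(trade_sizes)
            if abs(size - avg) / sd > threshold]

if __name__ == "__main__":
    # Hypothetical trade sizes for one account; the last trade is unusually large.
    sizes = [100, 120, 90, 110, 105, 95, 5000]
    print(flag_outlier_trades(sizes))  # flags the last trade (index 6) for review
```

Real surveillance models would look at far richer features (timing, counterparties, correlated accounts), but the principle of flagging deviations from learned patterns for human review, rather than acting on them automatically, is the safeguard the column argues for.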

Thus, while the adoption of AI in the securities market would bring increased efficiency, reduced costs, and enhanced decision-making capabilities for market participants, it also raises significant concerns regarding market manipulation, algorithmic bias, data privacy, and systemic risk. These concerns warrant regulatory scrutiny and comprehensive legal frameworks to address the issues arising from the use of AI. As AI continues to evolve and reshape the landscape of securities trading, regulatory authorities must remain vigilant, adaptive, and forward-thinking, striking a balance between innovation and regulation as they navigate the complexities at the intersection of AI and the securities market. It would, however, be better to hold off on introducing too many regulations until the dust has settled.

The author is managing partner, Finsec Law Advisors. The article is co-authored with Parker Karia, senior associate, Finsec Law Advisors.

Disclaimer: Views expressed are personal and do not reflect the official position or policy of Financial Express Online. Reproducing this content without permission is prohibited.
