
AI WASHING: REGULATORY GAPS AND RISKS IN THIS BOOMING MARKET





[Kartik Mehta and Jatin Yadav are students at Hidayatullah National Law University, Raipur]


INTRODUCTION

In a recent case, the U.S. Securities and Exchange Commission (SEC) fined two investment advisers, Delphia Inc. and Global Predictions Inc., a total of $400,000 for misrepresenting their use of artificial intelligence. This is an example of "AI washing," a troubling trend in which companies exaggerate their use of AI to appeal to investors. Much like "greenwashing," where a business misleads consumers or the public into believing it is more environmentally friendly than it actually is, AI washing trades on the popularity of an idea, in this case the use of AI, to project an image of advanced technological expertise.

Investors generally lack AI literacy, and companies thrive on this knowledge gap. Terms such as "AI-powered" appear frequently without clear explanation, leaving consumers and investors to take the vague promise of a technological step-up on faith. This article discusses the consequences of AI washing, recent regulatory responses, the absence of specific safeguards in India, and possible regulatory approaches, inspired by international standards, to this emerging issue.


UNDERSTANDING THE GENESIS OF AI WASHING

AI washing spreads misinformation and dilutes the perceived value of AI technologies. By marketing products to customers under the pretence of something new and AI-based, it fosters a misleading culture in which marketing, rather than genuine innovation, is what differentiates a product.

Such misleading market practices are likely to heighten systemic risk, especially in a technology-intensive industry like finance. When many firms rely on the same underlying technology branded under different names, the entire sector becomes vulnerable to widespread financial harm if those models produce flawed predictions. For individual consumers, AI washing can create false impressions of a product, leading to disappointment or even financial loss when the technology does not live up to the hype. Beyond skewing consumer trust, inflated claims about AI's advantages erode market fairness. Companies that practise AI washing gain an advantage over competitors who portray their products accurately, disturbing the level playing field. Genuine innovators are then pushed to match the exaggerated promotion, lowering standards across the industry.


REGULATORY ATTENTION: SEC SANCTIONS AND THE HINDENBURG REPORT

Delphia overstated its use of machine learning and data integration, while Global Predictions falsely claimed to be the "first regulated AI financial advisor." These cases exemplify how the SEC steps in to safeguard investors from misleading narratives that exploit the hype surrounding AI. The SEC's proactive approach reflects its commitment to ensuring market integrity and protecting stakeholders from potential financial harm stemming from deceptive claims.

Adding to this, Hindenburg Research published a report on Equinix, the largest data centre operator, with a market capitalisation of $80 billion, highlighting the operational risks of AI washing. Hindenburg alleged that Equinix engaged in AI washing by overstating its AI capabilities while facing infrastructure challenges, particularly power constraints that could limit its ability to serve AI-driven businesses effectively. Former employees cited in the report expressed scepticism about these claims, saying the company had over-promised AI readiness without adequate infrastructure. Investigations of this kind show the scrutiny now facing companies rushing to capitalise on AI, with consequences for valuations and credibility. The SEC's actions and Hindenburg's investigation underscore the need for stronger regulatory measures to address AI washing.


INDIA CONTEXT: ABSENCE OF REGULATORY SAFETY NETS

Several incidents have already come to light in which companies were criticised for failing to keep their initial promises. For instance, Seven Dreamers Laboratories promised an AI-powered product that would adapt to its users' habits, but it ultimately shut down because it could not deliver on the vision it had presented.

India's current regulatory framework contains no specific provisions against AI washing. Although Section 28 of SEBI's Investment Advisers Regulations addresses false or misleading information from investment advisers, exaggerated AI claims are not directly covered. AI washing typically falls into a grey area in which companies may not be making outright false claims but are inflating the importance or technological uniqueness of their AI products. This ambiguity allows companies to use vague or hyperbolic language to create a false impression without actually violating existing law.

India has seen a handful of AI-related regulatory measures, though only some of them approach governance. SEBI circulars issued in 2019 require Market Infrastructure Institutions and Mutual Funds to report specific uses of AI; however, these circulars mainly track usage rather than verify the accuracy of AI-related claims, and non-compliance carries few consequences, with no liability mechanism to serve as enforcement.

Under the Consumer Protection Act, 2019 (CPA), AI washing could be treated as an unfair trade practice, since it misleads consumers through exaggerated AI capabilities. While the CPA penalises misleading advertisements, proving AI washing is challenging: companies may argue that disparities between advertised and actual AI capabilities arise from ongoing development, making it difficult to establish deceptive intent.


COMPETITION ISSUES AND IMPACTS ON THE CONSUMER

False claims of this kind distort market dynamics. The Competition Act, 2002 empowers the Competition Commission of India (CCI) to ensure fair competition, a mandate reflected in Section 18 of the Act. Though AI washing affects competition, it does not fall within traditional anti-competitive conduct such as price-fixing or monopolisation. The CCI may therefore be constrained in dealing with AI washing under its current mandate, since such practices primarily involve consumer deception rather than market manipulation per se. This may not only cause consumers to overpay for underperforming products but also breed a general mistrust of AI technologies.

Repeated exposure to low-quality AI products makes customers sceptical of genuinely innovative AI products that do work, slowing the adoption of transformative technologies in general. AI washing thus not only erodes consumer trust but can also stunt the industry's growth by alienating the very market these companies are trying to reach.


TACKLING AI WASHING: LESSONS FROM INTERNATIONAL GUIDANCE

India could draw lessons from the European Union's Artificial Intelligence Act, which requires AI claims to be transparent and verifiable. The Act establishes a system of penalties for non-compliant companies, discouraging false claims. It also incorporates data minimisation: regulators may demand only as much information as is strictly necessary for inspections, protecting business secrets and intellectual property.

A similar regulatory approach in India would define what counts as an "AI-powered" application and set minimum requirements for a product to qualify as an AI solution. Regulations could require companies to specify in detail how AI is applied within their offerings, so that consumers and investors understand a product's real capabilities. A system of penalties, modelled on SEBI's enforcement mechanisms for financial disclosures, could discourage firms from using ambiguous or superlative terms to describe their AI features.

India could also adopt data protection standards that allow partial sharing of data where complete disclosure would compromise trade secrets; where a product's technical details need to remain confidential, companies could submit essential information through protected procedures similar to those contemplated in the EU AI Act. Furthermore, a registration process could ensure that only products accredited by the regulatory authority carry an "AI-powered" label. Such a system would verify that AI plays a substantive role in a product's functioning and would curb the use of terms such as "AI-driven," "AI-enabled," or "AI-powered" without credible backing.


CONCLUSION

As the AI industry grows, so do the risks of AI washing and, consequently, the erosion of trust and the diversion of resources away from legitimate AI innovation. At present, India's regulatory approach offers no clear guidelines to address this issue, allowing companies to exploit the enthusiasm around AI without substantiating their claims. A regulatory framework for AI washing should draw upon international examples, emphasising clear definitions, transparent disclosures, and strict penalties for misrepresentation. Such a framework would bring responsible AI governance to the fore in India and allow the sector to grow with integrity while still facilitating innovation.

Regulations that ensure transparency, whether by mandating disclosures or by setting up a body to supervise companies' claims, would protect consumers and investors and foster a competitive, ethical, and sustainable AI ecosystem in which genuine advancement, rather than marketing talk, is rewarded. Industry-led initiatives, such as certification standards for AI capabilities, could complement regulatory efforts by promoting transparency and accountability. Ultimately, building a credible and ethical AI ecosystem will require a coordinated approach involving regulators, industry stakeholders, and independent oversight to curb deceptive practices and protect investor trust. The EU AI Act, which imposes obligations regarding transparency, documentation, information, and monitoring of AI systems, could serve as a useful model. This would give India an environment in which the true contribution of AI innovation, rather than overstated claims, is what is valued.


