Debasis Mohapatra
Bengaluru, 10 August 2024
The UK’s antitrust regulator last week opened an investigation into AI startup Anthropic’s ties with Amazon, which recently completed a $4 billion investment in the company.
Earlier, Anthropic’s deal with Google came under the scanner. Google has already invested around $2.3 billion in the AI startup.
San Francisco-based Anthropic, founded in 2021, is established as a public benefit corporation. Like its peer OpenAI, it develops large language models (LLMs) and has launched a chatbot named ‘Claude’. In its three-year history, the company has raised around $10 billion.
Why, then, have regulators in the US and Europe started probing Anthropic? The reasons are many:
- Quasi-merger: AI and GenAI have taken the technology world by storm. OpenAI’s ChatGPT, in particular, has made the world take the generative AI space more seriously. Against this backdrop, tech giants like Microsoft, Google, and Amazon have rushed to take stakes in promising AI startups. Microsoft’s investment in OpenAI can be seen in this context.
- However, regulators are concerned that these big companies are making creeping acquisitions to avoid the regulatory scrutiny that comes with a full-fledged M&A deal. They make strategic investments in young startups, or hire their founders and technical talent, to exert influence over those companies without taking formal board seats or similar positions.
- Such influence without a formal legal obligation prevents competition watchdogs from evaluating the impact of these investments on the competitive landscape. Therefore, US, UK, and European regulators are keen to curb such practices by initiating probes into these deals.
- Globally, regulators are also trying to gauge the impact of AI on society, as they want to mitigate the risks posed by AI-powered applications: job displacement, fake audio and video, and unpredictable outcomes beyond the control of the innovators themselves.