Antitrust Investigations in AI Foster Commoditization and Interoperability

Josh Brandau

Investigations into Meta, Google, Amazon and Apple set a modern precedent for what’s at stake in breaking up big tech. In a nutshell, those investigations hinge on monopolistic business practices and on how much influence, dominance and power these companies possess.

Now, antitrust investigations are under way for a new crop of key players advancing AI solutions – specifically Microsoft, OpenAI and Nvidia. All of these companies are at the forefront of a budding $638.23B industry, and they heavily influence the state of innovation, use cases and adoption. But what impact will these investigations actually have on the broader tech industry and on everyday users?

State of Play

Recent reporting from Axios states that the Justice Department is investigating Nvidia, a leading AI chipmaker, for potential antitrust violations. Additionally, the FTC will examine the AI partnership between OpenAI and Microsoft. Microsoft is also being reviewed by the FTC to determine whether it structured its investment in Inflection AI to avoid a government antitrust review of the transaction.

According to IoT Analytics, OpenAI and Microsoft alone account for 69 percent of the model and platform market, and Nvidia accounts for 92 percent of the data center GPU market. Brands and general users have seen this type of power dynamic play out in the tech ecosystem for years: users gravitate toward the largest players out of necessity, and that is precisely why regulators are looking into these AI leaders.

There have always been industries with a high cost of entry (e.g. space exploration, deep sea mining, cloud computing, utilities, commercial aviation). By their nature, and because of the costs of bringing complex concepts to reality, only a few people or entities could start businesses in those industries. Because of this, the best way to encourage competition and establish a fair state of play is for regulators to monitor M&A activity. With that oversight in place, smaller companies and startups have a chance at success via partnerships and access to capital – and that’s exactly what we’re seeing now.

As the market expands, the AI community is working together to solve shared problems, and in doing so it is laying a foundation for strong Open Source solutions. Statista research found that roughly 3.9 million open source projects were started in 2023. The Open Source AI community is growing more robust every day, and the models being developed are near parity with private models – and in many fine-tuned use cases they actually outperform them.

A few areas where the Open Source AI community excels are faster innovation (a wide range of developers working to solve the same issues), trust and transparency (users can inspect the code and confirm there is no bias or nefarious functionality), and customization (developers can modify AI tools to fit their specific needs).

The current wave of antitrust investigations is right on time and will actually benefit users. It will highlight the open source community and the strength of its LLMs, and it will showcase the new and innovative strategies developers use to manage complex tasks.

Interoperability and Accessibility Lead to New Strategies

With antitrust investigations under way, Open Source communities are pushing forward to bring heightened levels of accessibility and new workflows. Hugging Face stands as a strong example: it’s a platform focused on empowering the machine learning community to collaborate on AI models, datasets and applications. The platform spearheads interoperability between AI applications at both the individual level and the larger enterprise level.

When it comes to the evolving antitrust investigations into AI platforms, Hugging Face represents an interesting reality: it mitigates the compute-access risk posed by the leading AI players by offering interoperability between small and large, open and private, fine-tuned and variably weighted solutions – the list goes on. Users simply switch to another model and instance on another server, taking their datasets and prompt engineering with them. These backend changes can happen quickly, so the front-end user experience is unaffected.
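The swap described above works because the calling code depends only on a shared interface, not on any one model. Here is a toy sketch of that idea; the model classes and the `complete` method are hypothetical stand-ins, not a real Hugging Face API:

```python
# Toy sketch of model interoperability: prompts and calling code stay the
# same while the backend model is swapped. Both model classes are
# hypothetical placeholders for models served from different instances.

class EchoModel:
    """Stand-in for a small open-source model."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class UppercaseModel:
    """Stand-in for a larger fine-tuned model on another server."""
    def complete(self, prompt: str) -> str:
        return prompt.upper()

def run(model, prompts):
    """The front end only depends on this interface, so swapping the
    backend model requires no change to prompts or calling code."""
    return [model.complete(p) for p in prompts]

prompts = ["summarize the report", "draft a headline"]
out_a = run(EchoModel(), prompts)       # original backend
out_b = run(UppercaseModel(), prompts)  # swapped backend, same prompts
```

Because the prompts and the `run` function never change, switching backends is purely a deployment decision, which is what makes the lock-in risk so much smaller.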

As interoperability continues to evolve, a new middle ground is forming between ‘one model to rule them all’ and the complete commoditization of foundational models. It’s called Model Chaining, and it mitigates cost and latency by leveraging API endpoints from different models. In practice, developers link multiple machine learning models together to tackle a complex task. Imagine an assembly line in a factory, but instead of building a car, you’re building an intelligent outcome. A single basic model can handle language translation on its own, for example, while a complex task like multimedia content creation is an ideal fit for Model Chaining.
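The assembly-line idea above can be sketched in a few lines. Each function below is a hypothetical placeholder for a call to a different model’s API endpoint; the chain simply passes each model’s output to the next:

```python
# Minimal sketch of Model Chaining: each step stands in for a call to a
# different model's API endpoint. All three model functions are
# hypothetical placeholders, not real services.

def translate(text: str, target_lang: str) -> str:
    """Placeholder for a lightweight translation model endpoint."""
    return f"[{target_lang}] {text}"

def summarize(text: str) -> str:
    """Placeholder for a summarization model endpoint."""
    return text[:40] + "..." if len(text) > 40 else text

def generate_caption(summary: str) -> str:
    """Placeholder for a multimedia / caption model endpoint."""
    return f"Caption: {summary}"

def chain(text: str, target_lang: str) -> str:
    """Pass each model's output to the next, assembly-line style."""
    translated = translate(text, target_lang)
    summary = summarize(translated)
    return generate_caption(summary)
```

In a real deployment each step would hit a different provider, so developers can route the cheap steps to small, fast models and reserve the expensive endpoints for the steps that need them.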

As Model Chaining becomes commonplace, larger companies will move toward commoditization at the general-user level. As we move from novel use cases to real-world applications, AI will become a critical component of work. Much like data centers, cloud computing and telecommunications before it, the largest players will become committed to their respective solutions, and the application of the technology will be a matter of implementation. By creating a more competitive landscape, these antitrust investigations will lead to a stronger open-source community and more innovative implementation strategies. In turn, brands, publishers and users will be able to innovate faster, have more control, and experience more impactful AI applications.

About Nota

Founded in 2022, Nota is committed to empowering newsrooms and media companies with AI tools essential for maintaining a well-informed society.

