Debunking common market misconceptions about LLMs

Umesh Padval
Writings from Thomvest Ventures
4 min read · Apr 11, 2024


Over the past year, the media has been buzzing with reports on the meteoric rise of Large Language Models (LLMs). These reports highlight their impressive size, performance, and the substantial revenue they generate for companies like OpenAI and Anthropic. However, despite the eye-catching total Annual Recurring Revenue (ARR) figures, a significant portion of this revenue remains rooted in consumer markets rather than enterprise. While there is considerable excitement surrounding LLMs, we are still in the early stages of integrating these models into business operations. Just as with other technological advancements like cloud computing and smartphones, a few companies are emerging as leaders in the field of LLMs. To reach this level of success, it’s crucial to understand what drives a company’s success and dispel some common misconceptions about the market.

Market Dynamics and Growth Trajectory

Generative AI represents a burgeoning market poised for exponential growth. As history has shown with sectors like cloud computing, networking, mobile phones, and PCs, market consolidation will likely crystallize around 3–4 dominant LLM players over the next decade. Therefore, it’s imperative to identify the key success factors and debunk prevailing misconceptions in this space.

The Race for Scale: Bigger Isn’t Always Better

There’s an undeniable arms race among LLM providers to unveil ever-larger trillion-parameter models, reminiscent of the race towards GPT-4 or GPT-5. However, it’s essential to dispel the notion that bigger models equate to better performance across all applications. The reality is that only a niche segment of applications will necessitate trillion-parameter models, which come with exorbitant computational costs. For most practical applications, models with fewer than 100 billion parameters will suffice.

Open Source LLMs: The Hidden Costs and Legal Pitfalls

The allure of open-source LLMs like Llama and Mistral lies in their perceived cost-effectiveness. However, this perception is misleading. Hosting these models entails substantial compute and tooling costs, making them far from "free" solutions. On top of that, most enterprises unfortunately lack the in-house talent to implement these technologies; hiring new talent or spending hours fire-fighting further increases the total cost of ownership (TCO). Furthermore, the lack of transparency regarding training data and the absence of legal indemnification expose enterprises to significant legal risks, a concern that cannot be overlooked.

Enterprise vs. Consumer Markets: A Revenue Disparity

While OpenAI and Anthropic straddle both consumer and enterprise markets, the lion's share of their current ARR stems from consumer applications. The enterprise segment remains largely untapped, primarily due to the astronomical costs associated with training trillion-parameter models on large datasets spanning both markets. Today, Cohere, a leader in artificial intelligence, stands out by prioritizing enterprise applications and optimizing its solutions for cost-efficiency, a strategy that not only resonates with enterprise clients but also minimizes capital requirements compared to competitors serving dual markets.

The Imperative of Cloud Independence

For developers seeking optimal flexibility and choice, LLM providers must be cloud-agnostic, unencumbered by exclusive partnerships with specific cloud providers. However, investments from tech giants like Google, Amazon, and Microsoft have tethered Anthropic and OpenAI to specific clouds, limiting customer choice. You need only look at Microsoft's recent "non-acquisition" acquisition of Inflection AI to understand the long-term impact of such relationships. In contrast, Cohere emerges as a leading cloud-agnostic LLM provider, akin to Snowflake in the cloud computing realm, offering unparalleled flexibility to customers.

Channel Partnerships: The Key to Scaling Enterprise Revenues

Unlocking the vast potential of the Generative AI market necessitates robust channel partnerships to rapidly deploy solutions and scale revenues — a veritable “land grab” opportunity. Cohere’s strategic alliances with industry titans such as Oracle, Nvidia, Salesforce, SAP, Accenture, and McKinsey, coupled with collaborations with all three cloud hyperscalers, position them favorably to capitalize on this opportunity.

Cohere’s Competitive Edge

In summary, Cohere's LLM offerings are characterized by high performance, cost-efficiency, cloud agnosticism, and robust channel partnerships. The recent Command R+ model from Cohere beats GPT-4 Turbo on multiple benchmarks at a fraction of the cost.

These distinguishing features not only set Cohere apart but also augur well for their prospects in emerging as one of the dominant players in the rapidly evolving LLM landscape.

As venture capitalists and investors, it’s crucial to discern beyond the headlines and identify the nuanced factors that will shape the future winners in this dynamic market.


Umesh Padval is a Managing Director at Thomvest Ventures in San Francisco. He is focused on venture investments in cybersecurity, cloud, and AI infrastructure.