Source URL: https://lethain.com/wardley-llm-ecosystem/
Source: Irrational Exuberance
Title: Wardley mapping the LLM ecosystem.
Feedly Summary: In How should you adopt LLMs?, we explore how a theoretical ride sharing company,
Theoretical Ride Sharing, should adopt Large Language Models (LLMs).
Part of that strategy's diagnosis depends on understanding the expected evolution of
the LLM ecosystem, so we've built a Wardley map to explore it.
This map of the LLM space focuses on how product companies should address the
proliferation of model providers such as Anthropic, Google and OpenAI,
as well as the proliferation of LLM product patterns like agentic workflows, Retrieval Augmented Generation (RAG),
and running evals to maintain performance as models change.
This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book.
As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.
Reading this document
To quickly understand the analysis within this Wardley Map,
read from top to bottom.
If you want to understand how this map was written, then you should
read section by section from the bottom up, starting with Users, then Value Chains, and so on.
More detail on this structure in Refining strategy with Wardley Mapping.
How things work today
If Retrieval Augmented Generation (RAG) was the trending LLM pattern of 2023,
and you could reasonably argue that agents, or agentic workflows, are the pattern of 2024,
then it's hard to guess what the patterns of tomorrow will be. But it's a safe guess
that more new patterns are coming our way.
LLMs are a proven platform today, and are now being applied widely to discover new patterns.
It’s a safe bet that validating these patterns will continue to drive product companies to support additional
infrastructure components (e.g. search indexes to support RAG).
This proliferation of patterns has created a significant cost for these product companies,
a problem which market forces are likely to address as offerings evolve.
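To make that infrastructure dependency concrete, here is a minimal sketch of the RAG pattern. The `search_index` and `llm` objects are hypothetical stand-ins, not any specific vendor's API:

```python
# Minimal RAG sketch: retrieve supporting documents from a search
# index, then pass them to the model as context for generation.
# Both `search_index` and `llm` are hypothetical stand-ins for
# whatever search cluster and model provider you actually operate.

def answer_with_rag(question: str, search_index, llm, top_k: int = 3) -> str:
    # Retrieval step: this is the part that drags in real search
    # infrastructure (indexing pipelines, freshness, scaling).
    documents = search_index.query(question, limit=top_k)

    # Augmentation step: fold the retrieved documents into the prompt.
    context = "\n\n".join(doc.text for doc in documents)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # Generation step: the model call itself is the easy part.
    return llm.complete(prompt)
```

The retrieval step is where the operational cost lives, which is why supporting this pattern pulls product companies into running additional infrastructure.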
Transition to future state
Looking at the evolution of the LLM ecosystem, there are two questions
that I believe will define the evolution of the space:
Will LLM framework platforms for agents, RAG, and so on, remain bundled with
model providers such as OpenAI and Anthropic?
Or will they instead split, with models and platforms being offered separately?
Which elements of LLM frameworks will be productizable in the short term?
For example, running evals seems like a straightforward opportunity for bundling,
as does providing some degree of agent support.
Conversely, bundling RAG might seem straightforward, but most production use cases
require real-time updates, incurring the full complexity of operating scaled search clusters.
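As a rough illustration of why evals look productizable: the core harness is small and model-agnostic, so a platform can own it and rerun the same suite whenever a provider ships a new model. Everything named below is an illustrative placeholder, not a real tool's API:

```python
# Minimal eval harness sketch: run a fixed suite of prompts against a
# model and report how many outputs pass a scoring check. Rerunning
# the same suite when a provider ships a new model version is how you
# catch regressions. All names here are illustrative placeholders.

from typing import Callable

def run_evals(
    cases: list[tuple[str, str]],    # (prompt, expected substring)
    complete: Callable[[str], str],  # model call, provider-agnostic
) -> float:
    passed = 0
    for prompt, expected in cases:
        output = complete(prompt)
        # Naive scoring: real harnesses use graded rubrics or judges.
        if expected.lower() in output.lower():
            passed += 1
    return passed / len(cases)

# Example: score = run_evals(suite, complete=my_model_client.complete)
```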
Depending on the answers to those questions, you might draw a very different map.
This map answers the first question by imagining that LLM platforms will decouple from model providers, while
also allowing you to license model access through the platform rather than needing
to individually negotiate with each model provider.
It answers the second question by imagining that most non-RAG functionality will move into a bundled
platform provider. Given the richness of investment in the current space, it
seems safe to believe that every plausible combination will exist to some degree
until the ecosystem eventually stabilizes in one dominant configuration.
The key driver of this configuration is that the LLM ecosystem is inventing
new patterns every year, and companies are spinning up haphazard interim internal solutions
to validate those patterns, but ultimately few product companies are able to effectively fund these
sorts of internal solutions in the long run.
If this map is correct, then it means eventual headwinds for both model providers (who are inherently
limited to providing their own subset of models) and narrow LLM platform providers (who
can only service a subset of LLM patterns). The likely best bet for a product company in this future
is to adopt the broadest LLM pattern platforms today, and to explicitly decouple the pattern platform from the model provider, as sketched below.
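One way to hedge toward that future today is to put a thin internal interface between pattern code and model providers, so swapping providers is a configuration change rather than a rewrite. A minimal sketch, with hypothetical adapters standing in for real vendor SDKs:

```python
# Sketch of decoupling pattern platform from model provider: product
# code depends only on this Protocol, and each provider gets a small
# adapter. The adapter bodies below are placeholders, not real SDK calls.

from typing import Protocol

class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap the real OpenAI client here")

class AnthropicAdapter:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wrap the real Anthropic client here")

def summarize(text: str, provider: ModelProvider) -> str:
    # Pattern-level code never imports a vendor SDK directly, so
    # changing providers is a one-line change at the call site.
    return provider.complete(f"Summarize:\n\n{text}")
```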
Users & Value Chains
The LLM landscape is evolving rapidly, with some techniques getting introduced and reaching widespread adoption
within a single calendar year.
Sometimes those widely adopted techniques are actually being adopted, and other times it’s closer to “conference-talk driven development”
where folks with broad platforms inflate the maturity of industry adoption.
The three primary users attempting to navigate that dynamism are:
Product Engineers are looking for faster, easier solutions to deploying LLMs across
many evolving parameters: new models, support for agents, solutions to offload the search
dimensions of Retrieval Augmented Generation (RAG), and so on.
Machine Learning Infrastructure team is responsible for the effective usage of these mechanisms,
and for steering product developers towards effective adoption of these tools.
They are also, in tandem with other infrastructure engineering teams, responsible for supporting
common elements for LLM solutions, such as search indexes to power RAG implementations.
Security and Compliance: how do we ensure models are hosted safely and securely,
and that we're only sending approved information (see the sketch after this list)?
How do we stay in alignment with rapidly evolving AI risks and requirements?
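As a minimal illustration of the "only sending approved information" concern, here is a sketch of a pre-send gate. The patterns are illustrative placeholders; real deployments would lean on proper data-loss-prevention tooling:

```python
# Sketch of a pre-send gate: refuse to forward a prompt to an external
# model provider if it appears to contain unapproved data. The patterns
# here are illustrative; real deployments use dedicated DLP tooling.

import re

DENYLIST = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
    re.compile(r"\b\d{13,19}\b"),          # card-number-shaped strings
]

def approved_for_external_model(prompt: str) -> bool:
    return not any(pattern.search(prompt) for pattern in DENYLIST)

def guarded_complete(prompt: str, complete) -> str:
    # `complete` is any provider-agnostic model call, as above.
    if not approved_for_external_model(prompt):
        raise ValueError("prompt blocked: possible unapproved data")
    return complete(prompt)
```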
To keep the map focused on evolution rather than organizational dynamics,
I’ve consolidated a number of teams in slightly artificial ways,
and omitted a few teams that are certainly worth considering.
Finance needs to understand the cost and usage
of LLMs. Security and Compliance are really two different teams, with both overlapping and distinct requirements between them.
Machine Learning Infrastructure could be split into two distinct teams with somewhat conflicting perspectives
on who should own things like search infrastructure.
Depending on what you want to learn from the map, you might prefer to combine, split, or introduce
a different set of teams than I've selected here.