May 10, 2023

Why the future of AI-native infrastructure will be open

Wei Lien Dang
The pace of innovation throughout the AI-native stack, from foundation models to new developer tools, is astounding. Every week brings new announcements in AI infrastructure software, and one pattern has become abundantly clear: Open source is now a significant, if not the most significant, driver in advancing the AI-native stack forward.

We’re not the only ones who’ve noticed. Thought leaders such as Stanford’s Chris Ré have suggested that AI is having its Linux moment. We believe what’s happening in AI right now parallels what happened with Linux, but at warp speed. Consider the following:

  • The growth in the AI developer community and AI end users far outpaces the early days of Linux adoption. More than 10,000 organizations use Hugging Face today.
  • The rate of corporate investment and venture funding, along with the active participation of large technology companies (which builds credibility), arguably exceeds anything that came before. More than $10B of venture funding has been invested in AI in just a couple of years, and a substantial portion of that has gone toward open-source companies. By comparison, some forget that it was a big deal when IBM committed $1B to Linux back in 2001.
  • Open source is now a well-established foundation for building successful infrastructure companies, with several proven business models; Linux was created at a time when no one knew how to monetize it.
  • Today, AI innovation is happening up and down the entire infrastructure stack rather than just at one layer.

To illustrate that last point, we’ve put together this market map of proprietary and open-source AI infrastructure options across various categories.

Market map: The open vs. closed-source AI infrastructure landscape

What stands out about the open-source AI landscape?

Foundation models

GPT-3.5, DALL-E, and Midjourney showed millions what was possible with high-quality foundation models. To understand the impact of open source, it’s worth looking at foundation models in different areas, namely language and image generation.

Large language models (LLMs): GPT-3.5 and GPT-4 have been at the forefront so far, but we’ve seen a number of new LLMs released in the last couple of months, including LLaMA and StableLM. To date, the big knock on open-source LLMs is that they haven’t come close to rivaling the performance of GPT-3.5.

Even this latest batch of models does not offer truly competitive performance. The only model that comes close is LLaMA, which is not licensed for commercial use. But this is going to change. RedPajama recreates the LLaMA dataset, so it’s likely just a matter of time until an open-source (and commercially usable) replica of LLaMA is released, providing something that can more widely compete with GPT-3.5. The public preview of OpenLLaMA is one example of the work happening in this area. Even then, we’re still a long way from an open-source model that rivals GPT-4. But does that matter a whole lot? Maybe not. There are many applications for which GPT-3.5-equivalent quality could prove more than sufficient. The closed options may still be best-in-class, but most people might just be looking for “good enough” right now. We expect to see open-source LLMs make significant inroads in adoption relative to closed LLMs.
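
To make this concrete, here’s a minimal sketch of what adopting an open-source LLM looks like in practice, using the Hugging Face transformers library with an OpenLLaMA preview checkpoint (the checkpoint name and generation parameters are illustrative assumptions, not a recommendation of any particular release):

```python
# Minimal sketch: load an open-source LLM and generate text.
# Assumes `transformers` is installed; the checkpoint name is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openlm-research/open_llama_7b"  # OpenLLaMA preview on the HF Hub
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open-source LLMs are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```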

Image generation: The gaps between open-source and closed text-to-image diffusion models don’t appear as large as they do with LLMs. Midjourney and DALL-E have seen early success and will likely continue to for the foreseeable future, but Stable Diffusion is a credible competitor today, with a growing user base that will continue to rapidly advance its progress.
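
Part of what makes the open option credible is how accessible it is. A small sketch of text-to-image generation with Stable Diffusion via the open-source diffusers library (checkpoint name and prompt are illustrative, and a GPU is assumed):

```python
# Sketch: text-to-image with Stable Diffusion via diffusers.
# Assumes a CUDA GPU and the diffusers + torch packages.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative open checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse, watercolor").images[0]
image.save("astronaut.png")
```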

Training libraries, datasets, and fine-tuned models

We’ve seen impressive contributions across libraries and substantial datasets that have translated into more performant models. This matters because it demonstrates the extensibility of foundation models while also highlighting the possibilities and constraints of their training corpora.

Inference

Outside of training, as more organizations deploy models in production, we expect inference to come into greater focus, given its role in driving compute needs and costs. While the early players offered closed solutions, we’ve seen new open-source inference engines emerge in the last several months.
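
To sketch what self-hosted inference can look like, here is a minimal HTTP endpoint that wraps an open model behind an API, using FastAPI and a transformers pipeline (the model, route, and request schema are our own illustrative choices, not a reference to any particular inference engine):

```python
# Sketch: a self-hosted inference endpoint around an open model.
# Run with: uvicorn server:app  (assumes fastapi, uvicorn, transformers)
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # stand-in for any open LLM

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest):
    out = generator(req.prompt, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}
```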

Vector databases

Nearly every widely adopted modern database started as open source, and we expect that trend to continue with vector databases. Vector databases will serve a critical role in the AI-native stack: the pattern of quickly retrieving embeddings generated by LLMs to serve different application use cases will only continue to grow. The last year has seen a proliferation of open-source vector databases to address this need.
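
The underlying pattern is simple enough to sketch: embed documents, index the vectors, and retrieve nearest neighbors at query time. Here’s a minimal example using the open-source FAISS library with sentence-transformers embeddings (the embedding model and documents are illustrative):

```python
# Sketch: the embed -> index -> retrieve pattern behind vector databases.
# Assumes faiss-cpu, numpy, and sentence-transformers are installed.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
docs = [
    "Vector databases store embeddings for fast similarity search.",
    "LLMs generate text from prompts.",
    "Open-source AI infrastructure is advancing quickly.",
]

# Normalized vectors + inner product gives cosine similarity.
embeddings = encoder.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = encoder.encode(["How do I search embeddings quickly?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
print([docs[i] for i in ids[0]])
```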

Developer frameworks, agents, and chatbots

There has been an explosion in demand for developer tools that help users adopt new AI solutions. For example, tools for prompt engineering and chaining are on the rise. Open source has been at the core of this new movement: in a space evolving this quickly, it’s nearly impossible to compete with the speed of innovation and feedback that open-source solutions enable.
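
Chaining itself is conceptually simple, which is part of why open tooling has iterated on it so quickly. A bare-bones sketch (the `complete` function is a hypothetical stand-in for any LLM call, hosted or local):

```python
# Sketch: prompt chaining, feeding one model call's output into the next.
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (hosted API or local model)."""
    return f"<model output for: {prompt.splitlines()[0][:40]}>"

def summarize_then_translate(text: str) -> str:
    summary = complete(f"Summarize in two sentences:\n\n{text}")
    return complete(f"Translate the following into French:\n\n{summary}")

print(summarize_then_translate("Open source is reshaping AI infrastructure."))
```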

Early efforts to build agentic AI are mostly just tech demonstrations so far, but the most notable contributions have been in public repositories for projects such as AutoGPT and BabyAGI. We expect a coming wave of AI agents, and the most popular frameworks will likely continue to be open source. Similarly, efforts to democratize chatbots — the most familiar interface for working with LLMs right now — with new open, community-based efforts are worth paying attention to.

Why is open source impacting every layer of the AI stack?

Given the above, how do we explain why open source has become such an integral part of advanced AI-native infrastructure when proprietary solutions have the early lead in overall capabilities? There are several reasons, not dissimilar from why open source is valuable to end users generally.

  • Trust and accountability
    The power of AI-native technologies comes with significant risks, many not yet well understood. Building in the open and providing transparency into how these technologies work is an important part of fostering the trust and confidence users need to adopt them. Big ethical questions need to be answered. Accountability is necessary, and open source arguably provides a better means of enabling it for AI. We expect open standards to be established over time through the combined efforts of AI experts, companies, and governments.
  • Speed of innovation
    Open source can enable fast development by leveraging a broader community. This effect is even more prominent in rapidly evolving spaces, where the needs of developers can change every week. We’ve seen this happen with models such as Stable Diffusion, which has attracted many contributors.
  • Commoditization
    Open-source efforts have proven effective in previous markets where proprietary approaches were prohibitively expensive. Open-source alternatives can provide a more cost-effective approach, even if they don’t match closed solutions on all features.
  • On-premise use cases
    There are many related factors here, all of which bear on why organizations would want to run their own AI-native infrastructure rather than rely on hosted APIs. Enterprises often value extensibility and customizability to meet their specific requirements; developers can find that hosted APIs lack flexibility and standardization. Many companies also have data security and privacy concerns, despite assurances from OpenAI and other providers. Additionally, many organizations may not want to depend on external vendors for AI in case those vendors go out of business.

What it means

We’re in the early days of AI infrastructure, for sure, but here’s what we believe is very likely. 

1. Proprietary and open options across the AI-native stack will exist in healthy competition with each other.

Open source in particular will make meaningful gains in foundation model performance, especially for LLMs. This is good for users and for the industry as a whole. But that by itself won’t necessarily translate into significant adoption of open-source AI-native technologies. Why? Because performance and accuracy are just one part of the equation; proprietary solutions offer ease of use. Today, most people can’t deploy and run the AI-native stack outlined above; it’s too hard. There’s a need for more and better tooling that makes it easier to use and manage open-source AI components. (If you’re building something, we’d love to chat with you!) So we expect a new generation of tools to emerge in the next 12 months that enable developers to utilize AI-native infrastructure.

2. The world will be made up of a great number of smaller models rather than one model (or a few models) to rule them all.

Again, why? Because organizations will have all sorts of reasons to fine-tune their own models (a brief sketch of what that can look like follows the list below). These will be the most common reasons:

  • Utilizing proprietary data
  • Decreasing latency
  • Reducing inference costs 
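
As a rough sketch of what fine-tuning a smaller model can look like, here’s a parameter-efficient (LoRA) setup using the open-source peft library; the base model and hyperparameters are illustrative assumptions:

```python
# Sketch: parameter-efficient fine-tuning (LoRA) on an open base model.
# Assumes peft and transformers; model name and hyperparameters are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_7b")
config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically a small fraction of the base weights
# ...train `model` on proprietary data with a standard training loop or Trainer...
```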

Will proprietary solutions continue to thrive? Absolutely. But the advantage of open source has always been rooted in the collective potential of community, and that community grows every day. The speed of open-source innovation in AI-native infrastructure is quickening, and we’re excited to see its future unfold.

If you’re building in AI-native infrastructure, we’d love to talk with you! Email us at wei@unusual.vc

Check out our fireside chat on AI-native infrastructure and open source


More about AI and open source

DevTools for language models — predicting the future

How startup founders can shape the future of generative AI

Generative AI is blowing up. What does this mean for cybersecurity?

Open Source Field Guide: a 5-part series for founders building a new open-source software company
