Leveraging Large Language Models to build a defensible startup

Tanay Jaipuria
Author
Learn how your startup can leverage the power of large language models (LLMs) to build defensible moats, establish market dominance and create a sustainable path to success.

As large language models (LLMs) have exploded over the past year, numerous startups have begun to build AI-native applications to disrupt industries. During discussions on moats and defensibility, there's been a lot of talk about how many of these startups are simply “wrappers on OpenAI.”

If you’re a startup building AI-native applications with LLMs, there are a few considerations that broadly relate to product approach. In general, you’ll want to think about how to build defensibility in relation to the models and incumbent applications.

In this article, we’ll take a deep dive into these primary concerns, and discuss how you can strategically build your competitive advantage.

Consideration 1: Does your startup provide enough value (on top of the model layer) to not get commoditized by it?

How much "AI value" does your product or service provide, on top of the foundation models?

The vendors and models that most new AI-native startups (and incumbents) use for their application are the same — namely OpenAI, Cohere and Anthropic. However, your startup may end up using these models in slightly different ways.

At a high level, the range of options for startups is (from easiest to hardest):

  • Prompt engineering only: Focus on improving model output by engineering the prompts sent to the models, and potentially by selecting different vendors for different prompts.
  • Fine-tuning: Improve the model by fine-tuning it with feedback and input/output data from a dataset or real usage.
  • Train your own models: Train highly specialized models for specific use cases using all the data collected from the application in production.
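To make the first (and easiest) option concrete: "prompt engineering only" often amounts to little more than template construction around the model call. Here's a minimal sketch; the template, function and field names are hypothetical, not from any specific product:

```python
# Sketch of the "prompt engineering only" approach: the startup's value
# lives entirely in how the prompt is constructed, not in the model itself.

SALES_EMAIL_TEMPLATE = (
    "You are an expert sales copywriter.\n"
    "Write a short outreach email to {prospect} at {company}.\n"
    "Tone: {tone}. Mention this value proposition: {value_prop}."
)

def build_prompt(prospect: str, company: str, tone: str, value_prop: str) -> str:
    """Fill in the template; a real product would also choose a
    vendor/model per prompt at this point."""
    return SALES_EMAIL_TEMPLATE.format(
        prospect=prospect, company=company, tone=tone, value_prop=value_prop
    )

prompt = build_prompt("Ada", "Acme Corp", "friendly", "cut support costs by 30%")
# The resulting string would then be sent to the chosen LLM vendor's API.
```

Since all of this logic sits in plain templates, it's also the easiest layer for a competitor (or a newer model) to replicate, which is exactly the commoditization risk discussed above.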
Prompts are important but may diminish in importance over time.

It's very possible that for horizontal use cases (e.g., text generation for marketing, sales, general content, business writing, etc.), the value of prompt engineering and fine-tuning is minimal in the medium term, and that owned/trained models will also get beaten out by the LLMs, especially newer versions.

For example, GPT-4 might render the fine-tuned models companies have developed for sales email writing and blog posts irrelevant. This would make it easy for another startup that is just a wrapper on GPT-4 to offer the same quality your startup has built up — making your product harder to defend.

On vertical use cases (e.g., contract writing in legal, financial analysis, etc.), there may be more lasting value in either heavily fine-tuned models or training your own models, but that is also dependent on how good new tools like GPT-4 are out of the box.

How important is private data/customer-specific data to the use case?

If all the data needed for a use case is largely public, that limits some of the value your startup can provide.

For example, to generate relatively basic explainer essays, Instagram captions, marketing content, etc., you don’t need much proprietary data, and it's hard to go beyond what GPT-4 will produce.

However, if the data isn't all public and the use case requires your startup to connect to a customer's warehouse or other applications, you’re unlikely to face commoditization from startups building "model wrappers," since their models won't get access to specific customers’ data. An application is necessary to do that.

For example, in customer support, private data is very important. There's only so much the AI can do without access to the company's knowledge base, FAQs, past tickets, etc.

Zendesk customer tickets are an important data source for applying AI well in customer support
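The customer-support pattern above usually means retrieving the relevant private documents and injecting them into the model's context. A hedged sketch follows; real systems would use embeddings and a vector store, but naive keyword overlap stands in here to keep the example self-contained:

```python
# Sketch: grounding an LLM answer in a customer's private data
# (e.g., FAQs or past support tickets). All names are illustrative.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank private documents by naive word overlap with the question.
    A production system would use embedding similarity instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_support_prompt(question: str, documents: list[str]) -> str:
    """Inject the most relevant private documents into the prompt context."""
    context = "\n".join(retrieve(question, documents))
    return (
        f"Answer using ONLY this company context:\n{context}\n\n"
        f"Question: {question}"
    )

faqs = [
    "Refunds are processed within 5 business days of a return request.",
    "Our enterprise plan includes SSO and a dedicated account manager.",
    "Password resets can be triggered from the login page.",
]
prompt = build_support_prompt("How long do refunds take?", faqs)
```

The point of the sketch is that the hard part isn't the code: it's winning the customer and getting access to their tickets and knowledge base in the first place.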

In image and text generation for marketing, private data is only somewhat important. Your startup can feed the LLM data about the company's tone, formats, templates and other guidelines to improve the quality of the output. The greater the improvement from private data, the lower the risk of the value being captured by the model vendors, since the startup is the one with access to that data.

In categories where private data is needed, there is less risk of the models directly capturing all the value.

Even with AI applications that require private data, many vendors may pop up in the same space, but there will be workflow/switching costs for the customers they serve. This cost will (at the very least) prevent the applications from being fully commoditized by the models.

This is less about the difficulty of using LLMs with private data (companies like Pinecone, LangChain and others are making that easier over time) and more about access: by definition, you have to win the customer and integrate their data to actually provide the full value, which the model vendors are unlikely to do at scale by themselves. That means an application vendor will be needed to create value.

Consideration 2: How does the startup compete with other applications, including incumbents?

What happens if/when incumbents add generative AI to their product?

Depending on the category of your new startup, many of the incumbents in your space are likely to be at least somewhat adaptable. If they haven’t already, they will clearly see the impact AI can have in the market.

Because of these trends, coupled with the relative ease of using these models directly via an API — especially when it comes to standard tasks such as text/image generation — you’ll likely see your incumbent competition at least lightly integrating generative AI into their products.

We've already seen many examples of incumbents reacting across categories:

  • Design: Canva has added image generation functionality into their broader design product.
  • Customer support: Intercom has added LLM-powered AI customer support features like summarization, composing and rephrasing.
  • GTM: Outreach has added an AI-powered smart assist feature. Walnut has launched Walnut Ace, an OpenAI API-powered personalized demo product. Salesforce has previewed the launch of Einstein GPT, which helps generate leads and close deals.
  • Productivity: Microsoft has said they expect to add generative AI into the entire Office suite, Notion has launched Notion AI and Google is expected to do the same in products like Google Docs and Gmail.


Canva’s generative AI features

Given that incumbents are likely to integrate generative AI, at least at a basic level, you’ll need to consider how you can add value beyond what the big players can easily do. If you don’t, the incumbents might snag a lot of the AI value in your category.

If you’re building a product from the ground up in a category that can allow for much deeper and better use of AI than simply adding it on as a feature/new product line, you might be well positioned in the marketplace.

Should you bring AI into an existing workflow or redo the workflow with AI?

This question is tied closely with the ones above, and it’s more applicable to some categories than others.

In some cases, you may decide the best approach is to focus on doing the AI part better than your competitors, while fitting into the workflows/applications companies already use. 

For example:

  • Diagram is building an AI-powered Figma plugin that acts as a co-pilot for design.
  • Arcwise is building a plugin for Google Sheets to add AI features like formula suggestions and construction.

In these cases, the companies are bringing AI into the existing tool/workflow, rather than creating a new tool from the ground up. While sometimes these businesses seem a bit niche, this approach can serve as a wedge that companies can use to expand further.

In addition, there have been numerous examples of large companies whose products are add-on tools to other products. think-cell's product, for example, is a plugin for PowerPoint.

The risk you face if you take this approach is that the incumbents may integrate these tools indirectly — in which case, you will always need to be multiple times better to stay relevant or maintain your lead.

The other option is to use AI to reimagine a workflow from the ground up as part of a broader product offering. In these cases, you’ll want to think about what the workflow might look like if AI was deeply integrated.

For example:

  • AI-first storytelling tool Tome is rebuilding slide software from the ground up with AI at its core, rather than doing something like building a plugin for Google Slides.
  • Companies like Mem and Lex are reimagining knowledge base and text-editing apps, respectively, from the ground up, with AI at the core of their products, rather than serving as a way to export AI output into another primary tool.

In these cases, the issues are slightly different — such as requiring changes in workflows from customers, and a more ambitious and broader product build beyond the AI features.

Keep in mind that this decision is more of a spectrum than a hard choice. There isn’t one single right answer.

For example, in the UI design space, Diagram is building a plugin within Figma. Galileo is building a standalone application that uses AI to generate an interface design that can be edited in Figma. And Uizard is building a standalone AI-powered design product that can replace Figma for some designers.

Similarly, products like Jasper started off as standalone products in the existing workflow. Users could generate the text in Jasper, make edits, and copy and paste the text wherever they needed it — for example, into content management systems.

Now, Jasper is moving towards being more deeply integrated, but in a way that also changes the workflow. Jasper is where writing happens and then gets scheduled into the CMS. Over time, the platform can suggest the right cadence based on your goals.

The Art of Defensibility: Mastering Moats with LLMs

To capitalize on the strengths of LLMs, it is crucial for startups to identify specific opportunities within their domain where they can leverage the power of AI to create unique value propositions.

Here are four approaches your new AI-native startup might consider when looking to build a defensible product:

  1. Add AI value on top of the LLMs for your use case through prompt engineering and fine-tuning — but think about whether that value will still matter once the models improve.
  2. Incorporate private data/customer data in the model context to improve outputs.
  3. Assume that incumbents in your space will adopt surface-level generative AI features and think about how you can go beyond their ideas.
  4. Think about the right insertion point for your product and try to go deep into workflows while minimizing disruptions — while still focusing on bringing out the full value of AI.

To get more information on how you build and sustain defensibility while harnessing the potential of LLMs, subscribe to my free Substack newsletter.
