Being long on innovation – part 3: the (generative) AI and LLM race has just started


Should we be concerned about the dominance of Big Tech and its effect on the development of the AI industry?  This is quite a big question, but since the launch of ChatGPT in November 2022 it has acquired a new and timely relevance.  That launch was quickly followed by a flurry of alternatives, rivals, imitations, variants, refinements and extensions, including from the vast majority of big SaaS incumbents: Einstein GPT from Salesforce, Charlotte AI from CrowdStrike, ChatSpot from HubSpot, and Firefly from Adobe.  Then there are the versions from well-funded start-ups, with Anthropic’s Claude (here’s the lowdown) and a few others topping the list, as well as the just-released DALL·E 3, the text-to-image engine built on ChatGPT, along with the concomitant debates around generative AI and large language models (“LLMs”).  Our VC colleagues at a16z have highlighted that incumbents have adapted faster this time around, and that the shift to AI is playing out faster than the shift to cloud and SaaS – roughly 6 months vs ~5 years.

It is worth highlighting, for a start, that the tech giants have amassed the largest data sets, leveraged massive balance sheets to finance the capital-intensive work of training models – which can cost millions of dollars per training run – and built up years of experience working with large data sets.  (Here’s Google on JFT-300M, its internal data set of 300M images labelled with 18,291 categories, and its view on whether there are diminishing returns to model performance in image-based machine learning (“ML”).)

A second reason for concern is another edge these incumbents have: distribution.  Microsoft, whose Office customer base spans more than 1 million companies and 350 million paid users, can upsell new ML features across its product suite at a premium – just as it has done with Premium Microsoft Teams.

It seems likely that startups, and young high-growth companies more generally, will have to innovate both in the application of the technology and in its distribution to have a chance of success: finding new applications, new customer segments and novel sales and marketing strategies that keep them off the radar of the incumbent giants, at least for long enough.

They will get some tailwind, at some point: large language models will reach the limits of “large”, and the “infrastructure” nature of any “foundational” AI model is likely to mean that open source will always act as some counterbalance to any proprietary model (we might wish to be reminded that Google’s Chrome browser, launched in 2008, was based on the open-source Chromium; see here and here for a brief history).

This is especially relevant now that there is increasing recognition that giant models are slowing us down, and that large models may not be more capable in the long run if we can iterate faster on small models.  Data quality scales better than data volume, partly because there are diminishing returns of additional data to model performance: the amount of data needed depends on the variance in the data you are applying ML to and on the complexity of the model, e.g. the number of parameters the model can learn (here are some pointers on the size and quality of a data set).  The upshot is that the race has really only just started, which has a few implications:

  • Given that we are early in the race, the questions to ask are probably “what does the course look like?” and “what are the rules of the race going to be?” and “how do we enter the race?”.
  • The race has a few “courses” and at least a few categories: will there be a “winner” across all these or smaller, built-to-fit models for specific use cases?
    • There might be a smaller “winning” model tailored for helping developers to code, another smaller model for new compounds for drug discovery ….
    • Stability AI leads the image-based model race, even as Anthropic and OpenAI lead the language model race ….
  • The winner(s) will probably take time to emerge: who will be today’s or tomorrow’s Google – the one who comes up with the Chrome (which overtook Internet Explorer, which in turn had overtaken Netscape) or the Android of LLMs?
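The diminishing-returns point made above can be illustrated numerically.  Empirical scaling studies often find that model error falls roughly as a power law in training-set size; the toy sketch below (with made-up constants, purely for illustration) shows how each tenfold increase in data buys a smaller absolute improvement:

```python
# Illustrative only: a toy power-law learning curve, a common empirical
# shape for test error vs. training-set size (error ~ a * n^(-b)).
# The constants a and b are hypothetical, chosen just to show the shape.

def toy_error(n_examples: float, a: float = 1.0, b: float = 0.3) -> float:
    """Hypothetical test error after training on n_examples samples."""
    return a * n_examples ** (-b)

def marginal_gain(n: float, factor: float = 10.0) -> float:
    """Absolute error reduction from multiplying the data set size by `factor`."""
    return toy_error(n) - toy_error(n * factor)

# Each successive 10x increase in data yields a smaller error reduction:
gains = [marginal_gain(10 ** k) for k in range(1, 5)]
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))
```

Under a curve like this, a competitor with 10x less data is much closer in performance than the raw data gap suggests – one reason smaller, faster-iterating models can remain competitive.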

For the “clean and smart” investor, we are always interested in learning curves and welcome the next wave of enabling technologies.

(Part 2 “Thriving under the shadows of giants” is here.)

