As the generative AI boom continues, startups building business models around the tech are beginning to divide along two clear lines.
Some, convinced that a proprietary and closed source approach will give them an advantage over the swarms of competitors, are choosing to keep their AI models and infrastructure in-house, shielded from public view. Others are open sourcing their models, methods and datasets, embracing a more community-led path to growth.
Is there a right choice? Perhaps not. But every investor seems to have an opinion.
Dave Munichiello, a general partner at GV, an investment arm of Alphabet, makes the case that open source AI innovation can foster a sense of trust in customers through transparency. By contrast, closed source models — though potentially more performant, given the lightened documentation and publishing workload on teams — are inherently less explainable and thus a harder sell to “boards and executives,” he argues.
Ganesh Bell, managing director at Insight Partners, generally agrees with Munichiello’s point of view. But he asserts that open source projects are often less polished than their closed source counterparts, with front ends that are “less consistent” and “harder to maintain and integrate.”
Depending on who you ask, the choice of development direction — closed source vs. open source — matters less for startups than the overarching go-to-market strategy, at least in the earliest stages.
Christian Noske, a partner at NGP Capital, says that startups should focus more on applying the outputs of their models, open source or not, to “business logic” and ultimately proving a return on investment for their customers.
But many customers don’t care about the underlying model and whether it’s open source, Ian Lane, a partner at Cambridge Innovation Capital, points out. They’re looking for ways to solve a business problem, and startups that recognize this will have a leg up in the overcrowded AI field.
Now, what about regulation? Could it affect how startups grow and scale their businesses and even how they publish their models and supporting tooling? Possibly.
Noske sees regulation potentially adding cost to the product development cycle, strengthening the position of Big Tech companies and incumbents at the expense of small AI vendors. Even so, he says more regulation is needed — particularly policies that outline the “clear” and “responsible” use of data in AI and that address labor market impacts and the many ways in which AI can be weaponized.
Bell, on the other hand, sees regulation as a potentially lucrative market. Companies building tools and frameworks to help AI vendors comply with regulations could be in for a windfall — and in the process “contribute to building trust in AI technologies,” he says.
Open source versus closed source, business models and regulation are just a handful of the topics covered here. The respondents also spoke to the pros and cons of transitioning from an open source to a closed source company, the possible security benefits and dangers of open source development, and the risks associated with relying on API-based AI models.
Read on to hear from:
Dave Munichiello, general partner, GV
Christian Noske, partner, NGP Capital
Ganesh Bell, managing director, Insight Partners
Ian Lane, partner, Cambridge Innovation Capital
Ting-Ting Liu, investor, Prosus Ventures
The responses have been edited for length and clarity.
Dave Munichiello, general partner, GV
What are some key advantages for open source AI models over their closed source competitors? Do the same trade-offs apply to UI elements like AI front ends?
Innovation in public (via open source) creates a dynamic where developers have a sense that the models they’re deploying have been deeply assessed by others, probed by the community, and that the organizations behind them are willing to connect their reputations to the quality of the model.
Academia and enterprise R&D have been the sources of AI innovation for the past several decades. The OS community and the products associated with OS make an effort to engage that critical part of the ecosystem, whose incentives differ from those of profit-seeking businesses.
Closed source models may be more performant (perhaps holding a technical lead of 12 to 18 months?) but will be less explainable. Further, boards and executives will trust them less unless they are strongly endorsed by a brand-name tech company willing to put its brand on the line to certify quality.
Is open sourcing potentially dangerous depending on the type of AI in question? The ways in which Stable Diffusion has been abused come to mind.
Yes, anything could be dangerous if used and deployed in a dangerous way. Long-tail OS models may, in a rush to market, be less scrutinized than closed source competitors, whose bar for quality and safety must be higher. As such, I would differentiate OS models with high usage and popularity from long-tail OS models.