Exploring the potential of open(-source) LLMs

We explore why companies initially opt for closed-source LLMs like GPT-4 and discuss the growing shift towards open LLMs.

In a recent discussion, our AI Lead, Bernardo, and CEO, Aaro, delved into the potential of open-source large language models (LLMs) and compared them to their closed-source counterparts. This blog post summarizes their insights, providing a comprehensive look at the current landscape of LLMs, their benefits, challenges, and considerations for companies deciding between open and closed models.

The prevalence of closed-source LLMs

Why companies opt for closed-source models

The default choice for many companies has been closed-source models like GPT-4 or GPT-3.5. These models are powerful and easy to use, performing well across various tasks and domains. Their ease of deployment through APIs makes them particularly attractive during the prototyping phase. For example, GPT-4's performance has been extensively documented in OpenAI's GPT-4 technical report.
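For illustration, here is a minimal sketch of the kind of API-based prototyping this enables, using OpenAI's Python client (v1+). It assumes an OPENAI_API_KEY environment variable is set; the prompt is purely illustrative.

```python
# Minimal sketch of prototyping against a closed-source model via an API.
# Assumes the `openai` Python package (v1+) is installed and that
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this support ticket in one line."},
    ],
)
print(response.choices[0].message.content)
```

A few lines like these are often all it takes to get a working prototype, which goes a long way toward explaining why closed-source models are the default starting point.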

Initial adoption and tutorials

Most tutorials and educational materials focus on closed-source LLMs, which naturally leads companies to continue using them. This trend is compounded by the powerful performance and accessibility of these models during the early stages of development.

Transitioning to open-source LLMs

Moving beyond prototyping

As companies progress beyond prototyping, it's essential to evaluate whether they need to continue with closed-source models or transition to open LLMs. Closed-source models, while comprehensive, may not always be the best fit for specific use cases, particularly when considering factors like cost, performance, and intellectual property.

Defining open LLMs

The term "open LLM" is distinct from traditional open-source software. For an LLM to be fully open-source, the training data, training code, and model weights should be accessible. Many models, like Meta's Llama series, only partially meet these criteria. Open LLMs typically provide the model weights and architecture, but not always the full training data or recipes.

Advantages of open LLMs

Customization and flexibility

Open LLMs offer significant advantages in customization. Companies with specific use cases can fine-tune these models to better suit their needs, often resulting in smaller, more efficient models that are faster and cheaper to run than their larger closed-source counterparts. Our article on improving LLM systems with A/B testing highlights how tailored models can outperform more generalized ones.
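As a rough sketch of what running a smaller open model yourself looks like, the snippet below uses Hugging Face's transformers library. The model name is illustrative; any open causal language model could be substituted.

```python
# Minimal sketch of running a smaller open model locally with Hugging Face
# transformers. The model name is illustrative; substitute any open causal
# LM whose license fits your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Classify this support ticket: 'My printer has been offline since Monday.'"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are local, the same model can then be fine-tuned, quantized, or deployed wherever it is needed.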

Proprietary data utilization

Utilizing proprietary data to fine-tune open LLMs allows companies to develop unique models that give them a competitive edge. This customization ensures that competitors using closed-source models do not have access to the same tailored solutions.

Reduced latency

Deploying smaller, specialized models in-house can significantly reduce latency, providing faster response times compared to relying on closed-source models hosted externally.
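The simplest way to validate this claim for your own setup is to measure wall-clock latency on both paths. The helper below is a minimal sketch; local_generate and remote_api_generate are hypothetical placeholders for your own inference functions.

```python
# Minimal sketch for measuring end-to-end inference latency. The two
# commented calls are hypothetical placeholders for your own local and
# API-based inference code.
import time

def mean_latency(fn, *args, runs: int = 5) -> float:
    """Average wall-clock seconds per call over `runs` invocations."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - start) / runs

# Example (hypothetical functions):
# print("local:", mean_latency(local_generate, prompt))
# print("api:  ", mean_latency(remote_api_generate, prompt))
```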

Domain adaptation

Open LLMs can be adapted to specific domains, often outperforming general-purpose models on specialized tasks. This domain-specific tuning improves both performance and relevance.

Challenges and considerations

Resource requirements

Implementing open LLMs requires more engineering resources, particularly for data preparation and model fine-tuning. While companies like Hugging Face provide tooling and support, the upfront investment is higher than with closed-source models. For more details on dataset preparation, refer to our post on dataset engineering for LLM finetuning.
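As a minimal sketch of what data preparation can involve, the snippet below writes instruction/response pairs to a JSONL file. The field names are an assumption; the exact schema should match whatever your training framework expects.

```python
# Minimal sketch of preparing fine-tuning data as JSONL. The record schema
# (instruction/response fields) is an assumption; match the format your
# training framework expects.
import json

examples = [
    {"instruction": "Summarize this ticket: printer offline since Monday.",
     "response": "Customer's printer has been offline since Monday."},
    {"instruction": "Classify sentiment: 'Great support, thanks!'",
     "response": "positive"},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

In practice, most of the engineering effort goes into collecting, cleaning, and deduplicating such examples rather than into the file format itself.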

Guardrails and safety

Closed-source LLM vendors invest heavily in creating robust guardrails to prevent harmful content generation, biases, and data leaks. When using open LLMs, companies need to implement their own guardrails, adding to the complexity and resource requirements. This safety overhead is one reason closed-source LLMs remain the dominant choice for many teams.
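As a minimal sketch of a homegrown guardrail, the snippet below applies a blocklist check to model output before returning it. The patterns are illustrative; real deployments typically layer dedicated moderation models and PII detectors on top of simple checks like this.

```python
# Minimal sketch of a homegrown output guardrail: a blocklist check applied
# before returning a model response. The patterns are illustrative; real
# deployments add moderation models and PII detection on top.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. US-SSN-like strings
    re.compile(r"(?i)internal[- ]only"),   # illustrative leak marker
]

def passes_guardrails(text: str) -> bool:
    """Return False if the text matches any blocked pattern."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

def safe_respond(model_output: str) -> str:
    if passes_guardrails(model_output):
        return model_output
    return "Sorry, I can't share that."
```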

Cost implications

In the short term, using closed-source models may be cheaper due to the economies of scale enjoyed by vendors like OpenAI and Anthropic. However, in the long term, open LLMs can be more cost-effective, especially when running smaller, specialized models on in-house servers. Data Science Dojo elaborates on the cost dynamics between open and closed LLMs, emphasizing long-term scalability and control.
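A back-of-envelope calculation makes the trade-off concrete. All numbers below are illustrative assumptions, not actual vendor prices; plug in your own figures.

```python
# Back-of-envelope cost comparison. ALL numbers are illustrative
# assumptions, not actual vendor prices.
tokens_per_month = 500_000_000        # assumed monthly token volume
api_price_per_1k_tokens = 0.01        # assumed blended $/1K tokens
gpu_server_monthly_cost = 5_000.0     # assumed self-hosting cost, $/month

api_cost = tokens_per_month / 1_000 * api_price_per_1k_tokens
print(f"API cost:         ${api_cost:,.0f}/month")
print(f"Self-hosted cost: ${gpu_server_monthly_cost:,.0f}/month")
```

Under these assumed numbers the two options break even at 500M tokens per month; at higher volumes, the fixed cost of self-hosting increasingly favors open models.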

Intellectual property

Creating and owning a customized LLM can significantly enhance a company's intellectual property portfolio, providing a strategic advantage that outweighs the initial higher costs and resource investments.

Fine-tuning and model merging

Fine-tuning techniques

Fine-tuning open LLMs using parameter-efficient methods like LoRA (Low-Rank Adaptation) can be cost-effective and far less computationally intensive than full fine-tuning. Companies can start with a base model and gradually improve it as they collect more data.
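As a minimal sketch, here is how LoRA adapters can be attached to an open model with Hugging Face's peft library. The model name and hyperparameters are illustrative, not a recommendation.

```python
# Minimal sketch of attaching LoRA adapters to an open model with Hugging
# Face's peft library. Model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction is trainable
```

Because only the small adapter matrices are trained, the memory and compute requirements drop dramatically compared to updating all of the base model's weights.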

Model merging

Model merging is an emerging technique that allows for combining different models into a single, more capable model. This approach, which will be explored in detail in future discussions, offers promising results without requiring extensive computational resources.
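As a minimal sketch of the core idea, one of the simplest merging strategies is linear interpolation of the weights of two fine-tunes that share the same base architecture; more sophisticated methods (such as spherical interpolation or TIES-merging) build on this.

```python
# Minimal sketch of one simple merging strategy: linear interpolation of
# two PyTorch state dicts from fine-tunes that share a base architecture.
# Assumes all parameters are floating point; this only illustrates the idea.

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Return alpha * A + (1 - alpha) * B, key by key."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Example (hypothetical models):
# merged = merge_state_dicts(model_a.state_dict(), model_b.state_dict(), 0.5)
# model_a.load_state_dict(merged)
```

Notably, merging requires no gradient updates at all, which is why it is attractive for teams without large training budgets.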

Conclusion

The discussion highlights the evolving landscape of LLMs and the strategic considerations companies must evaluate when choosing between open and closed models. Open LLMs offer significant customization, flexibility, and long-term cost benefits, but require a higher upfront investment in resources and safety measures. As the field continues to evolve, techniques like model merging will further enhance the capabilities and accessibility of open LLMs, making them an increasingly viable option for innovative companies.

For more detailed insights on these topics, feel free to explore our related articles.

July 28, 2024

Bernardo García del Río

Aaro Isosaari