The AI industry is fundamentally split on a single question: should the most powerful models be open or closed?
**The closed camp (OpenAI, Anthropic, Google):**
Arguments for keeping models closed:
- Safety: powerful models shouldn't be in the hands of bad actors
- Competitive moat: the model is the product
- Quality control: prevent misuse and reputational damage
- Alignment risk: an uncensored 70B model is a weapon
**The open camp (Meta, Mistral, 01.AI, Alibaba, Microsoft with Phi):**
Arguments for releasing model weights:
- Innovation: the entire ecosystem builds on open models
- Trust: you can verify what an open model does
- Competition: prevents monopoly by a handful of companies
- Democratization: developing countries and researchers can participate
- The cat is out of the bag: comparable models will be open-sourced eventually anyway, so better that safety-focused labs release them first
**Current state of open source AI:**
Meta's Llama 3.1 405B is arguably close to GPT-4 level. Mistral, Qwen (Alibaba), DeepSeek, and Phi (Microsoft) are strong open models. The gap between open and closed has shrunk dramatically.
**The hybrid approach**: 'Open weights' isn't the same as 'open source.' Llama's license, for example, restricts commercial use for companies above a certain user scale. True open source (Apache 2.0, MIT) includes Mistral 7B, Phi-3, and others.
**What this means for you**: If you're building a product, open models let you run inference on your own hardware (no per-token API costs, full data privacy). Closed models typically offer stronger performance and simpler integration. Most companies use both.
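The open-vs-closed decision above often comes down to a cost crossover. A minimal back-of-envelope sketch, using entirely hypothetical prices (substitute your provider's real per-token rate and your actual GPU costs):

```python
# Back-of-envelope comparison: metered API vs. self-hosting an open model.
# All rates below are hypothetical placeholders, not real vendor pricing.

def api_monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """Cost of a metered API at a flat per-token rate (USD)."""
    return tokens_per_month / 1_000_000 * price_per_million

def self_hosted_monthly_cost(gpu_hourly_rate: float, hours: float = 730) -> float:
    """Cost of a GPU running around the clock (~730 hours per month)."""
    return gpu_hourly_rate * hours

# Example: 200M tokens/month at a hypothetical $10 per 1M tokens,
# vs. a hypothetical $2/hour GPU rented continuously.
api = api_monthly_cost(200_000_000, price_per_million=10.0)   # 2000.0
local = self_hosted_monthly_cost(2.0)                         # 1460.0
print(f"API: ${api:.0f}/mo, self-hosted: ${local:.0f}/mo")
```

The self-hosted line is roughly flat with volume while the API line scales linearly, which is why high-volume products tend to migrate to open models and low-volume ones stay on APIs.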
**Key takeaway:** Open source AI (Llama, Mistral) has nearly caught closed models (GPT-4, Claude) in capability — the gap that once justified the closed approach is closing fast.