History of Large Language Models
2 bite-size cards · 60 seconds each

What is the History of Large Language Models?
Large language models didn't appear overnight. They emerged from decades of NLP research, with breakthroughs in 2017 (the transformer architecture), 2018 (BERT and GPT-1), 2020 (GPT-3), and 2022 (ChatGPT) each pushing capabilities dramatically forward. Understanding this arc helps make sense of where LLMs are heading next.

The Scaling Laws That Shaped LLM Development
Between 2020 and 2024, LLM capabilities improved predictably as power-law functions of model size, training data, and compute — relationships formalized as scaling laws. These laws guided billions of dollars in AI investment, and their apparent limits in 2024–2026 triggered a shift toward reasoning models that scale inference-time compute instead.
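To make the power-law idea concrete, here is a minimal sketch of a Chinchilla-style scaling law, which models training loss as a function of parameter count N and training tokens D. The constants below are the fitted values published by Hoffmann et al. (2022); treat the function as an illustration of the functional form, not a predictor for any specific model.

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss L(N, D) = E + A/N^alpha + B/D^beta.

    Constants are the Chinchilla fit (Hoffmann et al., 2022):
    E is the irreducible loss; the other two terms shrink as
    model size (N) and data (D) grow.
    """
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling parameters or tokens yields a predictable loss drop:
small = chinchilla_loss(1e9, 1e11)    # ~1B params, ~100B tokens
large = chinchilla_loss(70e9, 1.4e12) # Chinchilla-scale run
```

Both terms decay smoothly, which is why labs could budget compute years ahead: the curve, not trial and error, told them what a bigger run would buy.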