- The Smol Training Playbook 📚 (3.01k likes): The secrets to building world-class LLMs
- FineWeb: decanting the web for the finest text data at scale 🍷 (1.3k likes): Generate a curated web-text dataset for LLM training
- The Ultra-Scale Playbook 🌌 (3.7k likes): The ultimate guide to training LLMs on large GPU clusters
- Article: DualPipe Explained: A Comprehensive Guide to DualPipe That Anyone Can Understand, Even Without a Distributed Training Background (Feb 28, 2025)
- Zephyr ORPO Collection: Models and datasets to align LLMs with Odds Ratio Preference Optimisation (ORPO). Recipes: https://github.com/huggingface/alignment-handbook (3 items, updated Apr 12, 2024)