AI & ML interests

None defined yet.

Recent Activity

danielhanchen posted an update 1 day ago
danielhanchen posted an update 4 days ago
danielhanchen posted an update 11 days ago
100,000+ models trained with Unsloth have now been open-sourced on 🤗 Hugging Face! 🦥

Here are the most popular ones you can run locally:
1. TeichAI - GLM-4.7-Flash distilled from Claude 4.5 Opus (high)
2. Zed - Qwen Coder 7B fine-tuned for stronger coding
3. DavidAU - Llama-3.3-8B distilled from Claude 4.5 Opus (high)
4. huihui - gpt-oss made "abliterated"

Links to models:
1. TeichAI: TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
2. Zed: zed-industries/zeta
3. DavidAU: DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
4. huihui: huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated

See all the 100K latest models fine-tuned with Unsloth here: https://huggingface.co/models?other=u
danielhanchen posted an update 15 days ago
danielhanchen posted an update 18 days ago
You can now run Qwen3.5 locally! 💜
Qwen3.5-397B-A17B is an open MoE vision reasoning LLM for agentic coding & chat. It performs on par with Gemini 3 Pro, Claude Opus 4.5 & GPT-5.2.

GGUF: unsloth/Qwen3.5-397B-A17B-GGUF
Run Dynamic 3-bit on a 192GB Mac for 20 tokens/s.

Guide: https://unsloth.ai/docs/models/qwen3.5
danielhanchen posted an update 19 days ago
danielhanchen posted an update 24 days ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
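The kernel details are in the blog above, but the MoE computation being accelerated boils down to routing each token to a small number of experts via a learned gate. A minimal pure-Python sketch of top-2 routing for one token (toy logits for illustration; this is not Unsloth's kernel code):

```python
import math

def top2_route(logits):
    """Pick the two highest-scoring experts and softmax their gate scores."""
    top2 = sorted(range(len(logits)), key=lambda e: logits[e], reverse=True)[:2]
    scores = [logits[e] for e in top2]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return top2, [e / z for e in exps]       # expert indices, mixing weights

# Gate logits for 4 experts; experts 1 and 3 score highest.
experts, weights = top2_route([0.1, 2.0, -0.5, 1.0])
print(experts, weights)  # experts -> [1, 3]; weights sum to 1
```

Each token's output is then a weighted sum of only those two experts' outputs, which is what makes MoE models cheap to run relative to their total parameter count.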
danielhanchen posted an update 29 days ago
We created a tool-calling guide for local LLMs!

Learn how to use any open model like Qwen3-Coder-Next and GLM-4.7-Flash for function calling.

Guide: https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms

We provide hands-on examples for: story writing, Python execution, terminal tool calls, maths and more.
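The core loop such a guide covers is: give the model a tool schema, let it emit a structured call, then parse and execute that call. A minimal hypothetical sketch (the `TOOLS` registry and JSON call format here are illustrative assumptions, not the guide's exact API):

```python
import json

# Hypothetical tool registry mapping tool names to Python callables.
TOOLS = {
    "add": lambda a, b: a + b,
    "get_weather": lambda city: f"Sunny in {city}",  # stub for illustration
}

def dispatch(model_output: str):
    """Parse a JSON tool call emitted by the model and run the matching tool."""
    call = json.loads(model_output)
    return TOOLS[call["name"]](**call["arguments"])

# A tool-calling model, prompted with the schema, might emit:
print(dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}'))  # 5
```

In practice the tool result is appended back into the conversation so the model can use it in its next turn; local runtimes differ in the exact call format they emit, which is what the guide walks through.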
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 1 month ago
danielhanchen posted an update about 1 month ago
You can now fine-tune embedding models in our free Unsloth notebook! 🤗

Fine-tuning embedding models improves retrieval & RAG by aligning vectors to your domain-specific notion of similarity, improving search, clustering, and recommendations on your data.

⭐ Blog + Notebooks: https://unsloth.ai/docs/new/embedding-finetuning

Unsloth trains embedding models 1.8–3.3× faster with 20% less VRAM, 2× longer context & no accuracy loss vs. FA2 setups.

We'd like to thank Hugging Face and Unsloth contributor electroglyph for making this possible!
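Embedding fine-tuning of this kind typically optimizes an in-batch contrastive objective: each anchor should score its own positive higher than every other positive in the batch. A pure-Python sketch of that loss (one common objective, stated here as an assumption rather than the notebook's exact implementation):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def in_batch_contrastive_loss(anchors, positives, scale=20.0):
    """Mean of -log softmax(sim(a_i, p_i)) over all p_j in the batch."""
    total = 0.0
    for i, a in enumerate(anchors):
        sims = [scale * cosine(a, p) for p in positives]
        m = max(sims)                                    # log-sum-exp trick
        log_z = m + math.log(sum(math.exp(s - m) for s in sims))
        total += log_z - sims[i]                         # -log p(correct pair)
    return total / len(anchors)

# Correctly matched pairs give near-zero loss; mismatched pairs are penalized.
aligned = in_batch_contrastive_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])
shuffled = in_batch_contrastive_loss([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(aligned < shuffled)  # True
```

Minimizing this pulls each anchor toward its matched text and away from the in-batch negatives, which is how the model learns a domain-specific notion of similarity.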
danielhanchen posted an update about 2 months ago
danielhanchen posted an update about 2 months ago
You can now do reinforcement learning training with 7× longer context and no accuracy loss, via our new batching algorithms.

Long reasoning chains in RL are costly, but now we enable you to train gpt-oss with GRPO & reach 380K context on a 192GB GPU.

Blog: https://unsloth.ai/docs/new/grpo-long-context
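GRPO's defining step is the group-relative advantage: sample several completions per prompt, score each with a reward, and normalize rewards within the group instead of training a value model. A minimal sketch of that computation (toy rewards; not Unsloth's batching code):

```python
import math

def grpo_advantages(rewards):
    """Group-relative advantages: normalize rewards by the group's mean and std."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0   # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# One prompt, a group of 4 sampled completions with scalar rewards:
adv = grpo_advantages([1.0, 0.0, 0.5, 0.5])
print(adv)  # above-average completions get positive advantage, below-average negative
```

These advantages then weight the policy-gradient update for each completion's tokens; the long reasoning chains mentioned above are costly precisely because every sampled completion in the group must be kept in context during this step.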