
Welcome to i3-lab

"Chase the SOTA pipeline, not the MMLU slop."

i3-lab is dedicated to extreme efficiency in LLM architecture. We develop the i3 model family: state-of-the-art architectures designed to reach, in hours on consumer-grade hardware (such as the NVIDIA Quadro P100), performance levels that typically require days on massive GPU clusters.


i3: High-Efficiency Training

We specialize in hybrid architectures, specifically RWKV-Attention, to bypass the quadratic scaling bottleneck of traditional Transformers.

  • Fast Iteration: Trainable in hours, not weeks.
  • Accessible SOTA: High performance on legacy/mid-range hardware.
  • Open Research: Pushing the boundaries of what is possible with limited compute.
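To make the "linear-time instead of quadratic" point concrete, here is a minimal NumPy sketch of an RWKV-style time-mixing recurrence, the kind of layer a hybrid RWKV-Attention stack interleaves with standard attention. The function name, shapes, and parameters (`w` decay, `u` current-token bonus) are illustrative assumptions; the actual i3 layer implementation lives in the open-i3 codebase.

```python
import numpy as np

def rwkv_time_mix(k, v, w, u):
    """Simplified per-channel WKV recurrence: O(T) in sequence length.

    k, v: (T, C) key/value projections.
    w:    (C,) per-channel decay rate (> 0).
    u:    (C,) extra weight ("bonus") for the current token.

    NOTE: illustrative sketch only -- not the exact i3-lab layer.
    """
    T, C = k.shape
    out = np.zeros((T, C))
    num = np.zeros(C)  # running exp-weighted sum of values
    den = np.zeros(C)  # running sum of weights (normalizer)
    for t in range(T):
        # Weighted average of the past state plus the current token,
        # where the current token receives the bonus weight exp(u + k_t).
        cur = np.exp(u + k[t])
        out[t] = (num + cur * v[t]) / (den + cur)
        # Fold the current token into the state with constant decay exp(-w):
        # each step is O(C), so the whole sequence is O(T * C), not O(T^2).
        decay = np.exp(-w)
        num = decay * num + np.exp(k[t]) * v[t]
        den = decay * den + np.exp(k[t])
    return out
```

Because the state `(num, den)` is fixed-size, memory and compute grow linearly with sequence length, which is what makes training feasible in hours on a single mid-range GPU.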

Roadmap / TODO

We are currently scaling our architecture through the following milestones:

  • i3-Ethan-it — Specialized instruction-tuned variant.
  • i3-1B — Our first major scale-up.
  • i3-7B-A1.6B — Mixture of Experts / Sparsity testing.

Usage & Attribution

The open-i3 codebase is licensed under Apache 2.0. We believe in open source, but we value attribution.

If you use our architecture (RWKV-Attention) or our weights, you are required, per Sections 4(b) and 4(d) of the license, to:

  1. Carry prominent notices of any modifications.
  2. Include a readable copy of the attribution notices from our NOTICE file.

You must include the attribution link found in the open-i3 GitHub in your documentation or model card.
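As an illustration only, an attribution section in a downstream model card might look like the sketch below. The headings and placeholders are assumptions; the actual link and notice text must be copied from the open-i3 NOTICE file on GitHub and are not reproduced here.

```markdown
## Attribution

Built on the RWKV-Attention architecture from i3-lab's open-i3 project
(Apache License 2.0).

Modifications from the original code/weights (per Section 4(b)):
- <describe your changes here>

Notices (per Section 4(d)):
<copy the attribution notice and link from the open-i3 NOTICE file>
```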


Made with ❤️ and DETERMINATION by Daniel.
