Gen-UI-Lang: Generate UIs using LLMs with ~65% fewer tokens

Community Article Published December 21, 2025

*(Figure: workflow)*


LLMs can generate UIs, but asking for raw HTML/JSX often produces outputs that are verbose, brittle, and easy to break. Gen-UI-Lang fixes that by having the model emit a compact, predictable expression instead of raw markup.

  • Write (or generate) a short genui(...) expression
  • Render it deterministically to HTML and other UI targets
  • Iterate quickly without rewriting walls of markup

If this sounds useful, start here:

The “UI intent” layer

Gen-UI-Lang represents a UI as a tree of node factories (genui, row, col, text, btn, chart, …). That structure is explicit, shallow, and consistent—exactly what LLMs are good at producing reliably.

Example

genui(
    row(
        text("Sales Overview"),
        btn("Load", on_load=lambda: get_graph(2001, 2002))
    ),
    chart(type="line", data="sales_q4"),
)

One small expression. Clean structure. Easy to render.
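To make the "render it deterministically" step concrete, here is a minimal sketch of how such node factories could map to HTML. The `Node` class, the factory helpers, and `render_html` are illustrative stand-ins, not the project's actual API:

```python
# Illustrative sketch only: hypothetical node factories and renderer,
# not Gen-UI-Lang's real implementation.
from dataclasses import dataclass, field

@dataclass
class Node:
    tag: str                        # HTML tag this node renders to
    children: tuple = ()            # nested Node or str children
    attrs: dict = field(default_factory=dict)

# Factory helpers mirroring the genui(...) vocabulary (assumed names).
def genui(*children): return Node("div", children, {"class": "genui"})
def row(*children):   return Node("div", children, {"class": "row"})
def text(s):          return Node("span", (s,))
def btn(label, **a):  return Node("button", (label,), a)

def render_html(node) -> str:
    """Deterministically render a node tree to an HTML string."""
    if isinstance(node, str):
        return node
    attrs = "".join(f' {k}="{v}"' for k, v in node.attrs.items())
    inner = "".join(render_html(c) for c in node.children)
    return f"<{node.tag}{attrs}>{inner}</{node.tag}>"

ui = genui(row(text("Sales Overview"), btn("Load")))
print(render_html(ui))
# → <div class="genui"><div class="row"><span>Sales Overview</span><button>Load</button></div></div>
```

Because the render step is plain, deterministic code, the LLM only ever has to produce the short expression at the top; the markup is reconstructed the same way every time.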

Result: big token savings

In the paper’s representative measurement:

  • HTML code: 897 tokens
  • Gen-UI-Lang code: 311 tokens

That’s 586 tokens saved (a 65.3% reduction). In the demo environment, output-side latency and cost fell roughly in proportion (exact numbers vary by model, provider, and runtime).
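The reduction figure follows directly from the two token counts quoted above; a quick check:

```python
html_tokens = 897    # HTML output, from the paper's measurement
genui_tokens = 311   # Gen-UI-Lang output for the same UI

saved = html_tokens - genui_tokens
reduction = saved / html_tokens
print(saved, f"{reduction:.1%}")
# → 586 65.3%
```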

  • Want the deeper dive (design + methods + discussion)?
  • Want to run the code and see the code→UI rendering end-to-end?
