A curated gallery of interactive demos from our work on diffusion language models, speculative decoding, in-place editing, and vision-language-action systems.
A chat interface backed by a diffusion language model — history, settings, and streaming-style generation in a clean conversational UI.
Block-diffusion speculative decoding — a visual walkthrough of how diffusion-style drafts accelerate autoregressive inference, with live accept/reject animation.
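The accept/reject step the animation visualizes follows the standard speculative-sampling rule. Below is a minimal NumPy sketch of that rule only; the array names (`p_draft`, `p_target`) and shapes are illustrative, and the block-diffusion draft model itself is abstracted away behind the draft distributions.

```python
import numpy as np

def accept_draft_tokens(draft_tokens, p_draft, p_target, rng=np.random.default_rng()):
    """Standard speculative-sampling accept/reject rule (sketch).

    draft_tokens : list[int]            tokens proposed by the draft model
    p_draft      : (len, vocab) array   draft-model distribution at each position
    p_target     : (len, vocab) array   target-model distribution at each position
    Returns the accepted prefix, plus one corrective token after a rejection.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        # Accept the drafted token with probability min(1, p_target / p_draft).
        if rng.random() < min(1.0, p_target[i, tok] / p_draft[i, tok]):
            accepted.append(tok)
        else:
            # On rejection, resample from the normalized residual
            # max(0, p_target - p_draft), which keeps the output distribution
            # identical to sampling from the target model alone.
            residual = np.clip(p_target[i] - p_draft[i], 0.0, None)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(residual), p=residual)))
            return accepted
    return accepted
```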
Diffusion LLM × parallel tool calls — shows how a non-autoregressive decoder emits multiple tool invocations in a single pass rather than one at a time.
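As an illustration of why single-pass emission matters, here is a hypothetical sketch of dispatching such a block: the tool names, JSON schema, and dispatcher are assumptions for the example, not the demo's actual interface. The point is that every call revealed in one decoded block can be executed concurrently instead of waiting for one call per decode step.

```python
import asyncio
import json

# Hypothetical tool registry; names and signatures are illustrative only.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda tz: f"12:00 in {tz}",
}

async def run_tool(call: dict) -> str:
    """Execute a single tool call in a worker thread."""
    fn = TOOLS[call["name"]]
    return await asyncio.to_thread(fn, **call["arguments"])

async def dispatch_parallel(decoded_block: str) -> list[str]:
    """Assume the decoder reveals all tool calls as one JSON array in a
    single denoising pass, then run them concurrently."""
    calls = json.loads(decoded_block)
    return await asyncio.gather(*(run_tool(c) for c in calls))

# Example: one decoded block containing two tool invocations.
block = ('[{"name": "get_weather", "arguments": {"city": "Paris"}},'
         ' {"name": "get_time", "arguments": {"tz": "UTC"}}]')
print(asyncio.run(dispatch_parallel(block)))
```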
Document-level diffusion edits with global consistency — local edits, conclusion→intro propagation, and multi-span infilling, animated step by step.
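For readers curious how multi-span infilling can be driven, a rough sketch follows, assuming a hypothetical masked-diffusion `model` that maps token ids to per-position logits and a confidence-based unmasking schedule; it is illustrative, not the demo's implementation. Because all spans are masked and denoised jointly, distant spans (e.g. intro and conclusion) can stay consistent with each other.

```python
import torch

MASK_ID = 103  # assumed mask-token id; illustrative

@torch.no_grad()
def infill_spans(model, token_ids: torch.Tensor, spans: list[tuple[int, int]],
                 steps: int = 8) -> torch.Tensor:
    """Confidence-based multi-span infilling with a masked-diffusion LM (sketch).

    `model` is assumed to map (1, seq_len) token ids to (1, seq_len, vocab) logits.
    """
    ids = token_ids.clone()
    masked = torch.zeros_like(ids, dtype=torch.bool)
    for start, end in spans:
        ids[start:end] = MASK_ID       # mask every span up front
        masked[start:end] = True

    while masked.any():
        logits = model(ids.unsqueeze(0))[0]        # (seq_len, vocab)
        probs = logits.softmax(-1)
        conf, pred = probs.max(-1)                 # per-position confidence
        conf = conf.masked_fill(~masked, -1.0)     # only consider masked slots
        k = max(1, int(masked.sum()) // steps)     # reveal a few tokens per step
        idx = conf.topk(k).indices
        ids[idx] = pred[idx]
        masked[idx] = False
    return ids
```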
Side-by-side VLA rollouts — autoregressive vs diffusion action policies on the same tasks, rendered from real robot trajectories.