flowdown

Most AI chat apps re-parse the entire conversation on every streamed token. That's O(n²). Flowdown processes each token exactly once. That's O(n).
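The gap is easy to see in a toy model. This is not flowdown's API, just a sketch of the two strategies: a naive streaming renderer re-tokenizes the whole buffer on every append, while an incremental one touches only the new token.

```javascript
// Toy sketch, not flowdown's implementation: count how much text each
// strategy processes while streaming n tokens into a markdown buffer.

function streamNaive(tokens) {
  let buffer = "";
  let charsProcessed = 0;
  for (const t of tokens) {
    buffer += t;
    charsProcessed += buffer.length; // re-parse everything so far: O(n²) total
  }
  return charsProcessed;
}

function streamIncremental(tokens) {
  let charsProcessed = 0;
  for (const t of tokens) {
    charsProcessed += t.length; // parse only the new token: O(n) total
  }
  return charsProcessed;
}

const tokens = Array.from({ length: 1000 }, () => "word ");
console.log(streamNaive(tokens));       // 2502500 characters touched
console.log(streamIncremental(tokens)); // 5000 characters touched
```

At 1,000 five-character tokens the naive approach already processes 500× more text; the ratio keeps growing with conversation length, which is why the streaming benchmarks below diverge so sharply.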

~2.5KB gzipped · zero dependencies · framework agnostic · viewport virtualization · 579 npm downloads/month before launch

Hosted plans

Flowdown stays MIT. Paid plans are for teams that want private model support, hosted previews, or server-side rendering for chat transcripts.

Open Source (MIT)
Free

The package for apps that render streamed markdown in the browser.

  • Core and React packages
  • Viewport virtualization
  • Static HTML rendering
Install package
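One of the features above, viewport virtualization, can be sketched generically. This is not flowdown's implementation, just the standard fixed-row-height math: only the rows intersecting the scroll viewport (plus a small overscan) are materialized in the DOM.

```javascript
// Generic viewport virtualization math (not flowdown's implementation):
// given a scroll position, compute which fixed-height rows to render.

function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last }; // render only rows first..last
}

// 10,000 rows of 24px in a 600px viewport scrolled to 4,800px:
console.log(visibleRange(4800, 600, 24, 10000));
// → { first: 198, last: 227 }
```

Roughly 30 DOM nodes stand in for 10,000 rows, which is what keeps long chat transcripts scrollable.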
Cloud Render (server render)
$199/mo

A render API for chat UIs with long transcripts or backend-first flows.

  • Cloud rendering endpoint
  • Cached transcript HTML
  • Priority integration support
Start Cloud Render
[Interactive benchmark demo: live token and DOM-node counters with elapsed time, comparing marked (re-parse every token) against Flowdown (incremental).]

Benchmark Results

Benchmark                  Flowdown   marked     markdown-it
String output (9KB)        0.77ms     1.00ms     0.87ms
DOM output (9KB)           7.3ms      13.4ms     12.9ms
Streaming (369 tokens)     1.65ms     521ms      n/a
Streaming (2,079 tokens)   7.8ms      16,765ms   n/a
Bundle size (gzipped)      2.5KB      12KB       51KB
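The streaming rows line up with the complexity claim up top: going from 369 to 2,079 tokens is about a 5.6× increase, marked's time grows roughly with the square of that (~32×), and Flowdown's grows roughly linearly (~4.7×). A quick check on the numbers:

```javascript
// Sanity-check the scaling implied by the streaming benchmark rows above.
const tokenRatio = 2079 / 369;     // ~5.63x more tokens
const markedRatio = 16765 / 521;   // ~32.2x slower, near tokenRatio^2 (~31.7)
const flowdownRatio = 7.8 / 1.65;  // ~4.73x slower, near tokenRatio itself

console.log(tokenRatio.toFixed(2), (tokenRatio ** 2).toFixed(1));
console.log(markedRatio.toFixed(1), flowdownRatio.toFixed(2));
```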