01 · Sorting Race
Six algorithms · same array · same starting state · who finishes first?
What you're watching
Each panel runs the same input array through a different sorting algorithm. The bar coloured red is the comparison or swap currently being executed. The number above each panel counts the comparison + write operations performed so far — a more honest metric than wall-clock time, since wall-clock conflates algorithm complexity with browser scheduling.
Under the hood, each algorithm is written as a generator (function*) that
yields a state snapshot at every comparison and
swap. The animation loop pulls the next state from each generator
(advancing all six in lockstep, modulated by the speed slider) and
re-renders the bars on a single canvas per panel.
See sorting.js for the
complete source — it's about 250 lines of plain JavaScript.
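The lockstep driver can be sketched roughly like this — a simplified stand-in, not the actual sorting.js code, with countdown substituting for a real sort generator and the names (tick, races, onState) invented for illustration:

```javascript
// Stand-in for a sort generator: yields one snapshot per "step".
function* countdown(n) {
  while (n > 0) yield { remaining: --n };
}

// One animation tick: advance every unfinished generator by one step
// and hand each new snapshot to the renderer. Returns false once every
// race has finished, so the caller knows when to stop the loop.
function tick(races, onState) {
  let anyAlive = false;
  for (const race of races) {
    if (race.done) continue;
    const { value, done } = race.gen.next();
    race.done = done;
    if (!done) {
      onState(race.id, value);
      anyAlive = true;
    }
  }
  return anyAlive;
}
```

A real loop would wrap tick in requestAnimationFrame and skip ticks according to the speed slider; the structure is the same.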
The six algorithms
Bubble sort O(n²)
Repeatedly walk through the array and swap any adjacent pair that's out of order. After each pass, the largest unsorted element is guaranteed to have "bubbled" to the end. Almost never used in practice — but it's the simplest correct sort, and a useful baseline.
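In the generator style described above, bubble sort looks roughly like this (the snapshot shape is illustrative, not taken from sorting.js):

```javascript
// Bubble sort as a generator: yields before every comparison and after
// every swap, so the UI can pause and highlight the active pair.
function* bubbleSort(a) {
  for (let end = a.length - 1; end > 0; end--) {
    for (let j = 0; j < end; j++) {
      yield { kind: "compare", i: j, j: j + 1 };
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
        yield { kind: "swap", i: j, j: j + 1 };
      }
    }
  }
}
```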
Insertion sort O(n²) · O(n) on nearly-sorted input
Build the sorted prefix one element at a time, sliding each new element backwards until it lands in the right slot. Surprisingly fast on small or nearly-sorted arrays — many "high-performance" implementations of merge / quick sort fall back to insertion sort for sub-arrays smaller than ~10 elements.
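A generator sketch of the same idea (snapshot fields are illustrative): note that on an already-sorted array the inner while loop breaks immediately, which is where the O(n) best case comes from.

```javascript
// Insertion sort as a generator: slide each new element backwards
// through the sorted prefix until it lands in its slot.
function* insertionSort(a) {
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= 0) {
      yield { kind: "compare", i: j };
      if (a[j] <= key) break;        // sorted input exits here right away
      a[j + 1] = a[j];               // slide the larger element right
      yield { kind: "write", i: j + 1 };
      j--;
    }
    a[j + 1] = key;
  }
}
```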
Selection sort O(n²)
Repeatedly find the minimum of the unsorted region and swap it to the boundary. Roughly n²/2 comparisons (exactly n(n−1)/2) on every input, with no best/worst-case asymmetry — but very few writes (at most n−1 swaps total), which makes it interesting on hardware where writes are expensive.
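A generator sketch (snapshot shape illustrative) — note the single swap per pass, which is the property the paragraph above highlights:

```javascript
// Selection sort as a generator: scan for the minimum, then do at most
// one swap per pass, so total writes stay O(n) even though comparisons
// are always quadratic.
function* selectionSort(a) {
  for (let i = 0; i < a.length - 1; i++) {
    let min = i;
    for (let j = i + 1; j < a.length; j++) {
      yield { kind: "compare", i: min, j };
      if (a[j] < a[min]) min = j;
    }
    if (min !== i) {
      [a[i], a[min]] = [a[min], a[i]];
      yield { kind: "swap", i, j: min };
    }
  }
}
```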
Merge sort O(n log n) · stable
Recursively split, sort each half, then merge. Predictable n log n in the worst case, and stable (preserves the order of equal elements). Uses extra memory for the merge buffer; that cost is why in-place quicksort is often preferred for primitive arrays.
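A sketch of the recursive version with an explicit merge buffer (names illustrative); the <= in the merge is what makes it stable, and buf is the extra memory the paragraph mentions:

```javascript
// Merge sort as a generator over a slice [lo, hi) of the array.
// yield* threads the sub-sorts' snapshots through the parent generator.
function* mergeSort(a, lo = 0, hi = a.length) {
  if (hi - lo < 2) return;
  const mid = (lo + hi) >> 1;
  yield* mergeSort(a, lo, mid);
  yield* mergeSort(a, mid, hi);
  const buf = [];                    // the extra-memory merge buffer
  let i = lo, j = mid;
  while (i < mid && j < hi) {
    yield { kind: "compare", i, j };
    buf.push(a[i] <= a[j] ? a[i++] : a[j++]);  // <= keeps equal elements in order
  }
  while (i < mid) buf.push(a[i++]);
  while (j < hi) buf.push(a[j++]);
  for (let k = 0; k < buf.length; k++) {
    a[lo + k] = buf[k];
    yield { kind: "write", i: lo + k };
  }
}
```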
Quick sort O(n log n) avg · O(n²) worst
Pick a pivot, partition smaller-vs-larger around it, recurse on each side. With a randomised or median-of-three pivot the worst case becomes vanishingly rare; in practice it's the fastest comparison sort for random data, which is why many language standard libraries use a quicksort variant such as introsort (others, like Python and Java for objects, use Timsort, which is a merge/insertion hybrid rather than a quicksort).
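A sketch using the Lomuto partition scheme with a deliberately naive last-element pivot (names illustrative) — this is exactly the pivot choice that degrades to O(n²) on already-sorted input, which a randomised or median-of-three pivot would avoid:

```javascript
// Quick sort as a generator over a[lo..hi] inclusive. Lomuto partition:
// p tracks the boundary of the "smaller than pivot" region.
function* quickSort(a, lo = 0, hi = a.length - 1) {
  if (lo >= hi) return;
  const pivot = a[hi];               // naive pivot choice, for brevity
  let p = lo;
  for (let j = lo; j < hi; j++) {
    yield { kind: "compare", j };
    if (a[j] < pivot) {
      [a[p], a[j]] = [a[j], a[p]];
      yield { kind: "swap", i: p, j };
      p++;
    }
  }
  [a[p], a[hi]] = [a[hi], a[p]];     // put the pivot in its final slot
  yield { kind: "swap", i: p, j: hi };
  yield* quickSort(a, lo, p - 1);
  yield* quickSort(a, p + 1, hi);
}
```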
Heap sort O(n log n) worst-case
Build a max-heap in O(n), then repeatedly swap the root with the last element and sift down. Same complexity class as merge / quick, but in-place and with a guaranteed worst case. Slightly slower than quick sort in practice due to poorer cache behaviour.
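A sketch of both phases (names illustrative): the bottom-up heapify loop is the O(n) build, and the second loop is the repeated root-swap-and-sift-down:

```javascript
// Heap sort as a generator: in-place, guaranteed O(n log n).
function* heapSort(a) {
  const n = a.length;
  // Restore the max-heap property below index i, within the first `size` slots.
  function* siftDown(i, size) {
    for (;;) {
      let largest = i;
      const l = 2 * i + 1, r = 2 * i + 2;
      if (l < size) {
        yield { kind: "compare", i: largest, j: l };
        if (a[l] > a[largest]) largest = l;
      }
      if (r < size) {
        yield { kind: "compare", i: largest, j: r };
        if (a[r] > a[largest]) largest = r;
      }
      if (largest === i) return;
      [a[i], a[largest]] = [a[largest], a[i]];
      yield { kind: "swap", i, j: largest };
      i = largest;
    }
  }
  // Phase 1: build the max-heap bottom-up in O(n).
  for (let i = (n >> 1) - 1; i >= 0; i--) yield* siftDown(i, n);
  // Phase 2: move the max to the end, shrink the heap, restore it.
  for (let end = n - 1; end > 0; end--) {
    [a[0], a[end]] = [a[end], a[0]];
    yield { kind: "swap", i: 0, j: end };
    yield* siftDown(0, end);
  }
}
```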
What I learned writing this
- Generators are the perfect abstraction for visualizing iteration. You write the algorithm in its natural form and just sprinkle yield at the points you want the UI to pause. No state machines, no manual step counters, no weird coroutines — just function*.
- Wall-clock comparisons mislead. On a 60-element array, all six finish in < 50 ms; the visual race only works because we artificially slow each step. The "winner" of the visual race is whichever algorithm has the fewest state-changing operations, which is a different question from "fastest in production."
- Canvas beats SVG for this. For 6 panels × 200 bars × 60 fps re-renders, SVG would be a DOM nightmare. A single <canvas> per panel and a fillRect loop is buttery smooth.
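That fillRect loop can be sketched like this — an illustrative renderer, not the actual sorting.js code, assuming a snapshot with an array of values and the active indices to highlight:

```javascript
// Redraw one panel from a snapshot: one fillRect per bar, with the
// currently-active indices drawn in red. Clearing and refilling the
// whole canvas each frame is cheap at this scale.
function renderPanel(ctx, state, width, height) {
  ctx.clearRect(0, 0, width, height);
  const n = state.array.length;
  const barW = width / n;
  const max = Math.max(...state.array);
  for (let i = 0; i < n; i++) {
    const h = (state.array[i] / max) * height;
    ctx.fillStyle = state.active.includes(i) ? "#d33" : "#999";
    ctx.fillRect(i * barW, height - h, Math.max(1, barW - 1), h);
  }
}
```

The same loop against SVG would mean creating, mutating, or styling 200 DOM nodes per panel per frame, which is the "DOM nightmare" above.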