07 · Heap & Priority Queue
Binary max-heap · same data, two views (tree above, array below) · plus heapsort vs quicksort.
Heapsort vs Quicksort
Same starting array (28 random values), two algorithms in their own panels. Both finish in O(n log n) on average — but heapsort gives a worst-case guarantee, while quicksort is faster on typical inputs at the cost of an O(n²) worst case. The "ops" counter counts comparison and swap events; it's a fair efficiency proxy that doesn't depend on wall-clock time or browser scheduling.
What you're watching
A binary heap is a complete binary tree in which every parent is
≥ both children (max-heap). The clever bit: because the tree is
complete, you can store it in an array — node i's
children sit at 2i + 1 and 2i + 2, its
parent sits at ⌊(i − 1) / 2⌋. The two views above are
the same data, just drawn differently.
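The index arithmetic above can be written out directly (illustrative helpers; the page's own code may name these differently):

```javascript
// A binary heap stored in a plain array: no tree nodes are ever allocated.
const leftChild = (i) => 2 * i + 1;
const rightChild = (i) => 2 * i + 2;
const parent = (i) => Math.floor((i - 1) / 2); // ⌊(i − 1) / 2⌋
```

These three formulas are the entire mapping between the tree view and the array view.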
Insert (sift-up) and extract-max (sift-down)
Insert O(log n)
Append the new value at the end of the array (= bottom-right of the tree). Then "sift up": compare it to its parent; if it's larger, swap; repeat until it's in place or it reaches the root. At most ⌈log₂ n⌉ swaps because each swap moves the value up one level.
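A minimal sketch of insert with sift-up, operating on a max-heap stored as an array (an assumed helper, not the page's exact code):

```javascript
// Insert `value` into a max-heap array, restoring the heap property by sifting up.
function heapInsert(heap, value) {
  heap.push(value);                          // append at the end (bottom-right leaf)
  let i = heap.length - 1;
  while (i > 0) {
    const p = Math.floor((i - 1) / 2);       // parent index
    if (heap[p] >= heap[i]) break;           // parent already ≥ child: done
    [heap[p], heap[i]] = [heap[i], heap[p]]; // swap up one level
    i = p;
  }
  return heap;
}
```

Each loop iteration moves the value up exactly one level, which is where the ⌈log₂ n⌉ swap bound comes from.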
Extract-max O(log n)
The root holds the max. Remove it; move the very last element into the root slot; "sift down": compare it to its children, and swap with the larger child if that child is bigger; repeat until the heap property holds again.
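The same idea as a sketch (assumed helpers, not the page's exact code):

```javascript
// Remove and return the max from a max-heap array.
function extractMax(heap) {
  const max = heap[0];
  const last = heap.pop();      // take the very last element
  if (heap.length > 0) {
    heap[0] = last;             // move it into the root slot
    siftDown(heap, 0, heap.length);
  }
  return max;
}

// Restore the max-heap property below index i, within the first `size` slots.
function siftDown(heap, i, size) {
  for (;;) {
    const l = 2 * i + 1, r = 2 * i + 2;
    let largest = i;
    if (l < size && heap[l] > heap[largest]) largest = l;
    if (r < size && heap[r] > heap[largest]) largest = r;
    if (largest === i) break;   // both children ≤ parent: done
    [heap[i], heap[largest]] = [heap[largest], heap[i]];
    i = largest;
  }
}
```

The `size` parameter matters later: heapsort shrinks the heap while leaving sorted elements in place at the end of the same array.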
Heapsort step-by-step
- Heapify the input array in O(n) (the build-heap step you don't see directly — happens before the first visual swap).
- Swap arr[0] (the max) with arr[n − 1]. Now the largest is in its final position; the heap excludes that index.
- Sift-down arr[0] within the now-smaller heap. This restores the max-heap property and the new root is the next-largest.
- Repeat steps 2-3 with a progressively smaller heap range. After n − 1 iterations, the array is sorted in place.
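The steps above can be sketched as one function (a minimal in-place heapsort, not the page's exact code):

```javascript
// In-place heapsort: Floyd's bottom-up build-heap, then repeated swap + sift-down.
function heapsort(arr) {
  const n = arr.length;
  // Step 1: heapify in O(n), starting from the last non-leaf node.
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) siftDown(arr, i, n);
  // Steps 2-4: move the max to its final slot, shrink the heap, restore it.
  for (let end = n - 1; end > 0; end--) {
    [arr[0], arr[end]] = [arr[end], arr[0]]; // max goes to its final position
    siftDown(arr, 0, end);                   // heap now excludes index `end`
  }
  return arr;
}

// Restore the max-heap property below index i, within the first `size` slots.
function siftDown(heap, i, size) {
  for (;;) {
    const l = 2 * i + 1, r = 2 * i + 2;
    let largest = i;
    if (l < size && heap[l] > heap[largest]) largest = l;
    if (r < size && heap[r] > heap[largest]) largest = r;
    if (largest === i) break;
    [heap[i], heap[largest]] = [heap[largest], heap[i]];
    i = largest;
  }
}
```

Note that the sorted region grows from the right end of the same array the heap lives in — no extra memory is allocated.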
What I learned writing this
- The dual array+tree visualisation pays for itself. The tree view is what makes "sift up swaps with parent" visceral; the array view is what makes "the heap is just an array, you haven't allocated any tree" obvious. Either alone is incomplete.
- Generators clean up the animation logic dramatically. `yield { type: "compare", a, b }` from inside the natural algorithm body, and a single tick loop drives the highlighting. No state machines, no manual pause-and-resume.
- Floyd's bottom-up heapify is one of those "trust the math" moments. You'd guess O(n log n) by inspection (n calls × log n each). The actual bound is O(n), obtained by carefully summing height × count-at-height across all levels. The visual version here uses the synchronous form during "+10 random" — too fast to watch but verifiably correct.
- "Worst-case O(n log n)" matters in surprising places. For a one-off sort on random data, the choice between heapsort and quicksort is invisible. But anywhere a malicious input could force pathological quicksort behaviour (e.g., handling untrusted input, real-time guarantees), heapsort's bound is the safety net.