
Big O notation

Big O describes how an algorithm's cost scales with input size, ignoring constants and lower-order terms. The useful thing isn't the math, it's learning to classify at a glance:

- O(1): "doesn't matter how big n gets" (hash lookup)
- O(log n): "barely grows" (binary search, B-tree index)
- O(n): "read the input once" (linear scan)
- O(n log n): "sort it" (quicksort, mergesort)
- O(n²): "nested loop over the input" (bubble sort, naive pair comparisons)

The pragmatic point: for n=10 everything is fast, but for n=1,000,000 the gap between O(n log n) and O(n²) is a factor of n/log n, roughly 50,000×, so work that takes 20 ms at O(n log n) takes around 17 minutes at O(n²) with the same per-operation cost. You don't really care about the exact complexity, you care about the class, because that's what decides whether your code survives production.
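A minimal sketch of the "classify at a glance" idea: instead of timing anything, just count the basic operations each class implies at a few input sizes. The function name `ops` and the op counts are illustrative assumptions, not measurements, but they show why the classes diverge so violently as n grows.

```python
import math

def ops(n: int) -> dict[str, int]:
    """Rough operation counts for each complexity class at input size n."""
    log_n = math.ceil(math.log2(n))  # binary-search-style halvings
    return {
        "O(1)":       1,             # hash lookup: constant work
        "O(log n)":   log_n,         # binary search, B-tree descent
        "O(n)":       n,             # linear scan: touch each element once
        "O(n log n)": n * log_n,     # mergesort / quicksort comparisons
        "O(n^2)":     n * n,         # nested loop over all pairs
    }

for n in (10, 1_000, 1_000_000):
    print(n, ops(n))
```

At n=10 every count is tiny, which is why small-n benchmarks tell you nothing. At n=1,000,000 the O(n log n) count is about 2×10⁷ while O(n²) is 10¹², the ~50,000× gap that separates milliseconds from minutes.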