Has anyone had a "real-world" need to evaluate, classify, and validate an algorithm's complexity using Big O notation?
After 22+ years of software development across various tech stacks, platforms, and application architectures, I've never once faced a situation where I needed to accurately evaluate the performance tradeoff between the explicit Big Os of two algorithms, say O(n^2) versus O(n log n). Nor have I had to explicitly classify an algorithm as O(n) versus O(n^2) to know which of the two was more efficient. (Thank goodness for that.)

That's not to say I haven't evaluated algorithms in order to optimize them; I've just never needed to go all Big O to make that determination.

So, outside of academia, I'm curious: has anyone had a real-world need to evaluate, classify, and validate an algorithm's complexity using Big O notation?
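To make concrete the kind of classification I'm talking about, here's a minimal, hypothetical sketch (the functions and names are mine, purely for illustration): the same duplicate check written two ways, one quadratic and one linear.

```python
def has_duplicates_quadratic(items):
    """Compare every pair of elements: O(n^2) time, O(1) extra space."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    """Track values already seen in a set: O(n) time, O(n) extra space."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

In practice, I'd reach for the set-based version out of instinct, without ever formally labeling the first one O(n^2), which is exactly my point.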