Sequential scans are terribly underrated given today's architectures. If the individual items are small and there are at most around 1000 of them, it's going to be hard to beat a simple linear scan of an array for speed and memory efficiency (rough sketch below).
But trees et al. also make a lot of sense as a default. If you need performance to scale roughly linearly with the size of the data, you have to be deliberate about it, and this is what sinks a lot of software, usability-wise.
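A minimal sketch of the trade-off (the element count of 1000 and the lookup count are just illustrative, not from the comment): membership checks over a flat std::vector via linear scan versus a std::set, timed with std::chrono. On many machines the flat array holds its own or wins at this size thanks to cache locality and predictable branches, but the only honest answer is to measure with your own data and element types.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <set>
#include <vector>

int main() {
    const int N = 1000;          // "small" collection, per the comment above
    const int lookups = 100000;  // arbitrary workload size for the timing loop

    std::vector<int> vec;
    std::set<int> tree;          // typically a red-black tree of heap-allocated nodes
    for (int i = 0; i < N; ++i) {
        vec.push_back(i * 2);    // even numbers 0..1998, so half of all probes miss
        tree.insert(i * 2);
    }

    long hits = 0;               // accumulated and printed so the loops aren't optimized away

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < lookups; ++i) {
        // Sequential scan: O(N) per lookup, but contiguous memory and trivial branching.
        if (std::find(vec.begin(), vec.end(), i % (2 * N)) != vec.end()) ++hits;
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < lookups; ++i) {
        // Balanced tree: O(log N) per lookup, but pointer-chasing through scattered nodes.
        if (tree.count(i % (2 * N))) ++hits;
    }
    auto t2 = std::chrono::steady_clock::now();

    auto us = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::microseconds>(b - a).count();
    };
    std::printf("hits: %ld\n", hits);
    std::printf("linear scan: %lld us\n", (long long)us(t0, t1));
    std::printf("std::set:    %lld us\n", (long long)us(t1, t2));
    return 0;
}
```

Which side wins depends on N, the element size, and the comparison cost; the point is that the asymptotically better structure is not automatically the faster one at small sizes.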
Author here. This almost got a mention in the article. Every once in a while I'd have a friend look at a piece of my algorithm and suggest swapping out a sort() call for a priority queue or the like (without looking at the performance data), and it would make no difference, or sometimes even come out slightly worse! This is why having performance data is so critical: the performance of a complex algorithm can be counter-intuitive, and most performance "gotchas" are straight-up mistakes or redundant recomputation rather than fundamental algorithmic issues. (Remember the GTA Online bug that added minutes to the loading screen and was simple enough for a random outside dev to patch? That is the most common type of performance issue in development!)
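For illustration only (this is not the article's code): two interchangeable ways to pull the k smallest values out of a buffer, the full sort and the heap-based version a reviewer might suggest instead. Both are correct, and the O(n log k) heap is the textbook "better" choice, but whether the swap actually helps at realistic sizes is exactly the kind of thing only profiling can tell you.

```cpp
#include <algorithm>
#include <cstdio>
#include <queue>
#include <vector>

// Top-k via full sort: O(n log n), simple and very cache-friendly.
std::vector<int> smallest_k_sort(std::vector<int> v, size_t k) {
    std::sort(v.begin(), v.end());
    v.resize(std::min(k, v.size()));
    return v;
}

// Top-k via a bounded max-heap: O(n log k), the asymptotically better option.
std::vector<int> smallest_k_heap(const std::vector<int>& v, size_t k) {
    std::priority_queue<int> heap;  // max-heap holding the k smallest seen so far
    for (int x : v) {
        if (heap.size() < k) {
            heap.push(x);
        } else if (x < heap.top()) {
            heap.pop();
            heap.push(x);
        }
    }
    std::vector<int> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    std::reverse(out.begin(), out.end());  // ascending order, to match the sort version
    return out;
}

int main() {
    std::vector<int> data = {9, 4, 7, 1, 8, 3, 6, 2, 5, 0};
    for (int x : smallest_k_sort(data, 3)) std::printf("%d ", x);  // prints: 0 1 2
    std::printf("\n");
    for (int x : smallest_k_heap(data, 3)) std::printf("%d ", x);  // prints: 0 1 2
    std::printf("\n");
    return 0;
}
```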