When is optimisation premature?
Usually. The exception is perhaps in your design, or in well-encapsulated code that is heavily used. In the past I've worked on some time-critical code (an RSA implementation) where looking at the assembler that the compiler produced and removing a single unnecessary instruction in an inner loop gave a 30% speedup. But the speedup from using more sophisticated algorithms was orders of magnitude greater than that.

Another question to ask yourself when optimising is "am I doing the equivalent of optimising for a 300 baud modem here?". In other words, will Moore's law make your optimisation irrelevant before long? Many scaling problems can be solved simply by throwing more hardware at them.

Last but not least, it's premature to optimise before the program is actually running too slowly. If it's a web application you're talking about, you can run it under load to see where the bottlenecks are – but the likelihood is that you will have the same scaling problems as most other sites, and the same solutions will apply.
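The original RSA code isn't shown, but to illustrate why an algorithmic change dwarfs shaving an instruction off an inner loop, here's a hedged sketch (in Python, not the original assembler) comparing naive modular exponentiation with square-and-multiply, the standard trick at the heart of RSA-style arithmetic. The function names are my own for illustration:

```python
def modexp_naive(base, exp, mod):
    # O(exp) multiplications: multiply base into the result exp times.
    result = 1
    for _ in range(exp):
        result = (result * base) % mod
    return result

def modexp_square_multiply(base, exp, mod):
    # O(log exp) multiplications: square the base each step and
    # multiply it in only when the current bit of exp is set.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result

print(modexp_naive(7, 128, 13))            # 3
print(modexp_square_multiply(7, 128, 13))  # 3
```

For a real RSA-sized exponent (hundreds of bits), the naive loop would take longer than the age of the universe, while square-and-multiply finishes in a few hundred multiplications. No amount of instruction-level tuning of the naive loop closes that gap.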