A common situation I run into: I have some testcase, I profile it, then I optimize one of the things that looks slow and reprofile. If I'm lucky, the fraction of time that part takes drops from B to A (A < B of course, with B standing for "before" and A for "after"). What does that mean for the overall time of the testcase?
Obviously, what it means depends on both A and B. If we assume that the time taken by the part that wasn't optimized hasn't changed, then that part accounts for (1-B) of the old total and (1-A) of the new total, so the new total is (1-B)/(1-A) times the old one. That makes the overall speedup (1-A)/(1-B).
So B - A = 5% would mean a 2x speedup if B is 95% and A is 90%, and only a 1.05x speedup or so if B is 5% and A is 0%. If B is something like 53% and A something like 48%, then you get a 1.11x speedup.
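As a quick sanity check on those numbers, here's a small Python sketch of the formula (the function name is mine, not anything standard):

```python
def overall_speedup(before, after):
    """Overall speedup of the testcase when one part's share of the
    profile drops from `before` to `after`, assuming the time spent
    in everything else is unchanged.  Both are fractions in [0, 1)."""
    return (1 - after) / (1 - before)

# The examples from the post:
print(overall_speedup(0.95, 0.90))  # ≈ 2.0
print(overall_speedup(0.05, 0.00))  # ≈ 1.05
print(overall_speedup(0.53, 0.48))  # ≈ 1.11
```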
All of which is to say that focusing on the hotspots is _really_ the way to go, if at all possible.

Posted by bzbarsky at February 4, 2010 3:21 PM