A common situation I run into is that I have some testcase, I profile it, then I optimize one of the things that looks slow a bit and reprofile. If I'm lucky, then the fraction of time it takes drops from B to A (A < B of course, with B standing for "before" and A for "after"). What does that mean for the overall time of the testcase?

Obviously, what it means depends on both A and B. If we assume that the time taken for the part that wasn't optimized hasn't changed, then the overall speedup is (1-A)/(1-B).

So B - A = 5% would mean a 2x speedup if B is 95% and A is 90%, and only a 1.05x speedup or so if B is 5% and A is 0%. If B is something like 53% and A something like 48%, then you get a 1.11x speedup.
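To make the arithmetic concrete, here is a small sketch (the helper name `overall_speedup` is mine, not from the post), assuming the time taken by the unoptimized part stays fixed:

```python
def overall_speedup(b, a):
    """Overall speedup when a hotspot drops from fraction b to fraction a
    of total time, with the rest of the program's time unchanged."""
    return (1 - a) / (1 - b)

print(overall_speedup(0.95, 0.90))  # roughly 2.0
print(overall_speedup(0.05, 0.00))  # roughly 1.05
print(overall_speedup(0.53, 0.48))  # roughly 1.11
```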

All of which is to say that focusing on the hotspots is _really_ the way to go, if at all possible.

Posted by bzbarsky at February 4, 2010 3:21 PM

Comments

Honestly bz, your example itself is incomprehensible. How can B - A be a percentage? Based on what?

Now the logic is obvious. You can make a bit of code 10 times faster, but if it only runs 5% of the time, you will gain just 4.5% of the total time. Conversely, you can win only 20% inside an inner loop and make your whole app 10% faster, if the app is constantly running that inner loop so that it accounts for 50% of all the CPU you are using.
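The commenter's two examples can be checked with a small sketch (the helper `time_saved` and its name are mine, not from the comment):

```python
def time_saved(fraction, speedup):
    """Fraction of total time saved when a part taking `fraction` of the
    total runs `speedup` times faster, with the rest unchanged."""
    return fraction * (1 - 1 / speedup)

print(time_saved(0.05, 10))    # roughly 0.045: 4.5% of the total
print(time_saved(0.50, 1.25))  # roughly 0.10: 10% of the total
```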

> How can B - A be a percentage

B is a fraction (typically shown as a percentage by your profiler). A is a fraction. Their difference is a fraction. 5% is a fraction (0.05 to be exact).

I think the key here is that even if the profiler is not showing much of an improvement in the _fraction_ of time a function takes, you might still be doing OK if the fraction was big enough to start with.

Posted by: Boris on February 5, 2010 9:14 AM

http://en.wikipedia.org/wiki/Amdahl%27s_law#Speedup_in_a_sequential_program

> All of which is to say that focusing on the hotspots is _really_ the way to go, if at all possible.

Er, yeah. You only just realised this?

Posted by: njn on February 5, 2010 9:49 PM

No, but I finally sat down and convinced myself that going from a function taking 65% of the time to it taking 55% of the time really was giving me the much-more-than-10% speedup that I was seeing. It's just something I have to keep reminding myself of, is all.

Posted by: Boris on February 5, 2010 9:55 PM

Thanks for pointing that out. The numbers are deceptive.

It may be helpful for people to look at it in arbitrary units of time instead of %. If B is 19 seconds and the rest of the program takes 1 second, that's 20 seconds total. If B is reduced from 19 seconds to 9 seconds, the total is now only 10 seconds, even though you have just gone from 95% to 90%!
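That worked example in absolute units, as a quick sketch (variable names are mine):

```python
other = 1.0        # time for the rest of the program, unchanged
before_hot = 19.0  # hotspot before: 19 s of a 20 s total, i.e. 95%
after_hot = 9.0    # hotspot after: 9 s of a 10 s total, i.e. 90%

total_before = other + before_hot  # 20.0
total_after = other + after_hot    # 10.0
print(total_before / total_after)  # 2.0: a 2x overall speedup, even though
                                   # the hotspot's share only fell 95% -> 90%
```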

Posted by: VanillaMozilla on February 10, 2010 1:21 PM

It's a lot clearer now, both what you mean and that it can be confusing, yes.

You must make the calculation based on the value that stayed constant (the time spent doing other things), not on the value that varies (the fraction of time spent in the function, which became a smaller part of a total that itself became smaller).

The fraction of time spent elsewhere, OB = 1-B and OA = 1-A, changes when expressed as a fraction but still represents the same duration. That gives TB*OB = TA*OA, which leads to TB = TA*(1-A)/(1-B).

So with B=0.65 and A=0.55, everything is about 1.286x faster.
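As a numeric sanity check of that derivation (variable names mine, matching the comment's symbols):

```python
B, A = 0.65, 0.55
TA = 1.0                     # pick the "after" total arbitrarily
TB = TA * (1 - A) / (1 - B)  # from TB*(1-B) == TA*(1-A)

# The time spent elsewhere really is unchanged:
assert abs(TB * (1 - B) - TA * (1 - A)) < 1e-12

print(round(TB / TA, 3))     # roughly 1.286
```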
