We've had a few bug reports about GitHub's search boxes looking all wrong on trunk. The culprit is broken browser-sniffing masquerading as object detection on GitHub's end.
In particular, GitHub assumes that only browsers whose input elements have an "onsearch" property (apparently some sort of WebKit proprietary extension) support the "placeholder" attribute on inputs. Of course Gecko 2.0 supports the "placeholder" attribute, but its inputs have no "onsearch" property.
The right way to do this detection is to check whether an input has a "placeholder" property; if it does, it's reasonable to assume the "placeholder" attribute is supported. But using the presence of one object as a proxy for the presence or absence of an unrelated one is just broken.
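A minimal sketch of the direct check described above (the helper name `supportsPlaceholder` is mine, not GitHub's code; in a browser you'd pass it `document.createElement("input")`, and the plain objects below just simulate elements from different engines):

```javascript
// Test for the feature you actually care about: the "placeholder"
// property on the input element itself, not an unrelated "onsearch" one.
function supportsPlaceholder(input) {
  return "placeholder" in input;
}

// Simulated elements standing in for real DOM nodes:
const geckoLikeInput = { placeholder: "" }; // placeholder but no onsearch
const legacyInput = {};                     // supports neither

console.log(supportsPlaceholder(geckoLikeInput)); // true
console.log(supportsPlaceholder(legacyInput));    // false
```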
If someone has contacts at GitHub, please let them know about this problem?
The default value should probably be "As needed", though I can see someone making a case for one of the other values.
Update: To be clear, the "ask every time" pref would ask per potential garbage collection opportunity, just like the cookie pref asks per cookie and the image pref asks per image. All power to the users!
The Peacekeeper benchmark runs its tests by performing the operation 10,000 times, then dividing one million by the time spent to produce a runs/second number. Unfortunately, the accuracy of this approach is terrible: the operations being timed are fast enough that all 10,000 iterations complete in the 0–20ms range. Given the millisecond granularity of JS timers, that introduces a lot of noise; furthermore, some browsers don't actually update the value returned by Date.now() every millisecond, and those browsers would look better in this benchmark than they really are.
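To see how bad the noise is, here's a back-of-the-envelope sketch (the `score` function mimics the divide-one-million-by-elapsed-milliseconds scoring described above; it is not Peacekeeper's actual code). With a true elapsed time of 10ms, a timer that's off by just 1ms swings the score by roughly 10% in either direction:

```javascript
// Peacekeeper-style score: one million divided by elapsed milliseconds.
function score(elapsedMs) {
  return 1000000 / elapsedMs;
}

const trueMs = 10;
console.log(score(trueMs - 1)); // timer read 1ms low: ~111,111
console.log(score(trueMs));     // exact: 100,000
console.log(score(trueMs + 1)); // timer read 1ms high: ~90,909
```

So a single millisecond of timer quantization is worth more than ten percent of the final score when the measured interval is that short.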
It looks like the Futuremark folks had some code that tries to run each test for 2 seconds instead of a fixed 10,000 iterations (only in IE, although the comment says in non-IE), but they messed up the scoping so that the code is a no-op.
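For reference, a time-boxed measurement loop of the kind that comment describes might look like this (a sketch under my own assumptions, not Futuremark's actual code; the `measure` helper and its parameters are hypothetical):

```javascript
// Run the operation until roughly budgetMs have elapsed, then derive
// runs/second from the actual iteration count and actual elapsed time.
// A longer measured interval makes 1ms of timer noise proportionally tiny.
function measure(op, budgetMs = 2000) {
  const start = Date.now();
  let iterations = 0;
  let elapsed = 0;
  do {
    op();
    iterations++;
    elapsed = Date.now() - start;
  } while (elapsed < budgetMs);
  return (iterations * 1000) / elapsed; // runs per second
}

// Example: time a cheap array operation for a short budget.
const rate = measure(() => [1, 2, 3].slice(0), 100);
console.log(rate > 0);
```

Over a ~2000ms window, 1ms of timer quantization is a 0.05% error instead of the ~10% error you get on a 10ms window.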
On a separate note, for their Array.splice benchmark even 10,000 iterations is nonsense. The benchmark starts with an array of 100,000 elements and removes 20 elements per call. After 5,000 calls the array is empty, every subsequent call is a no-op, and the benchmark spends the rest of its time timing those no-ops.
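The degenerate case is easy to demonstrate (this is my own illustration of the arithmetic above, not the benchmark's code):

```javascript
// Build the 100,000-element array the benchmark starts with.
const arr = [];
for (let i = 0; i < 100000; i++) arr.push(i);

// Removing 20 elements per call drains it in exactly 100000 / 20 calls.
let calls = 0;
while (arr.length > 0) {
  arr.splice(0, 20);
  calls++;
}
console.log(calls);             // → 5000

// Every call after that removes nothing at all.
console.log(arr.splice(0, 20)); // → []
```

So of the 10,000 timed iterations, the second half measures splice calls that have no work to do.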
If anyone knows how to contact Futuremark about this benchmark, I'd really appreciate it; I have yet to find useful contact info for them.