What constitutes a floating point operation (a FLOP)?
Obviously, any operation on a floating-point value, right? Well, yes and no. Computing sin(x) is really done by evaluating a series expansion, so does sin(x) count as one FLOP, or as however many addition, subtraction, multiplication, and division operations are in the series expansion used?

The opposite effect occurs with things like the “Cray FLOP” measure: an absolute-value operation on an old Cray was implemented as a three-instruction sequence, so it counted as 3 FLOPs. However, everybody knows all you really have to do is zero the sign bit, which takes only a single (integer) bitwise-AND instruction, i.e., no FLOPs at all.

How you count can make a huge difference. If your code does only addition and multiplication, there is general agreement on how to count those… but even a subtraction raises the question of whether it is one subtraction or an addition of a negated value. The Top500 list essentially defines FLOPs by a formula that counts the additions and multiplications of one particular assumed algorithm, but in the past …
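
To make the sin(x) point concrete, here is a minimal sketch (not any real libm implementation) of a truncated Taylor-series sine with a running tally of the floating-point operations it performs; the function name, the term count, and the decision about what to tally are all assumptions made up for illustration:

    #include <stdio.h>

    static long flops = 0;              /* running tally of counted operations */

    static double sin_series(double x, int terms)
    {
        double x2   = x * x;            /* 1 multiply */
        double term = x;                /* first term of the series is x itself */
        double sum  = x;
        flops += 1;
        for (int n = 1; n < terms; n++) {
            /* next term = previous term * -x^2 / ((2n)*(2n+1));
               the denominator is integer arithmetic, so tally 1 divide + 1 multiply */
            term *= -x2 / (double)((2 * n) * (2 * n + 1));
            sum  += term;               /* 1 add */
            flops += 3;                 /* ...or 2? or 4? is the divide one FLOP? the negation? */
        }
        return sum;
    }

    int main(void)
    {
        double s = sin_series(1.0, 10); /* roughly 30 counted operations for one "sin" */
        printf("sin(1.0) ~= %.15f, tallied as %ld FLOPs\n", s, flops);
        return 0;
    }

So the same call is one FLOP by one convention and a few dozen by another, before you even argue about whether a divide or a negation should count the same as an add.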
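
And for the absolute-value example, a sketch of the “zero the sign bit” trick, assuming the usual 64-bit IEEE-754 double layout; it uses one integer AND and no floating-point instructions at all (memcpy is just the portable way to get at the bits):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Absolute value by clearing the IEEE-754 sign bit: one integer AND, zero FLOPs. */
    static double abs_by_bitmask(double x)
    {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);     /* reinterpret the double's bit pattern */
        bits &= 0x7FFFFFFFFFFFFFFFULL;      /* zero the sign bit */
        memcpy(&x, &bits, sizeof x);
        return x;
    }

    int main(void)
    {
        printf("%f\n", abs_by_bitmask(-3.5));   /* prints 3.500000 */
        return 0;
    }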
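
For reference, the convention the Top500 relies on (a widely documented detail of the HPL benchmark, not something spelled out above) is to credit the solution of a dense n×n linear system with a fixed count of

    2n³/3 + 2n²

floating-point operations, regardless of how many operations the implementation actually executes.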