a bit offtopic - float vs. double performance?
Hello, I have looked around and could not find an answer to my question.
Why don't you experiment and see?
That's a highly technical discussion, but to cut several corners: the speed difference is negligible for 99% of all applications.
I've seen too many people being proud of gaining 5% speed in a rarely used method by doing some time consuming floating point math optimizations, while at the same time they were reading data from disk instead of memory in an inner loop.
Start with doubles because they've got more precision. On x86 the FPU treats floats & doubles the same internally, so there is no speed difference (aside from the extra memory bandwidth for double loads/stores and larger cache footprint). If profiling suggests that memory bandwidth is a problem, consider switching to floats.
What about fixed point integer math ?
To answer the original question: floats are usually 2 to 4 times faster than doubles.
Julian: what architecture are you on? (I assume a Pentium-class system, but just checking).
Also did a small floating point test using Delphi 6:
Not all languages or language implementations actually distinguish between floats and doubles in storage; they tend to use the 80 bits that the processor uses internally. The distinction they make is in the presentation of those numbers, which is actually more likely to give you rounding errors.
The answer to this question is platform dependent. Assuming you are working with the recent crop of x86 processors (anything from the Pentium II onwards), it really makes little difference: floats and doubles are treated much the same. If you really are shooting for performance and do not care about precision, then you can tell the coprocessor to work in low-precision mode and use floats instead of doubles. A tiny chunk of assembler (that I don't seem to find right now ;) will do that for you.