Hi,
kutkloon7 wrote:
Why are the floating point functions implemented in such a weird way? Why aren't the FPU opcodes the standard way? And even if there is a reason to not use the FPU, wouldn't it be better to let the OS provide the floating point functions (as you wouldn't need to do a lot of checks that way)?
As far as I can tell, the problem is precision.
The first step of computing "sin(x)" is argument reduction - finding "x % (2 * PI)". For the FPU, both PI itself and the result of "x % (2 * PI)" can't be more precise than an 80-bit floating point register, which means that (especially for large values of 'x') the result of "x % (2 * PI)" loses precision, and therefore the result of "sin(x % (2 * PI))" loses precision too.
For doing it in software, you can use as much precision as you like for "x % (2 * PI)" (and also for "sin(x % (2 * PI))"). Basically, you're not limited to the precision of 80-bit floating point for the intermediate steps, so the final "80-bit precision" result is more precise.
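To illustrate the reduction problem, here's a rough Python sketch (not how any particular libm actually does it - the arbitrary-precision modulus below just stands in for the extended-precision reduction a software implementation can use):

```python
import math
from decimal import Decimal, getcontext

# 2*PI with far more digits than a double (or 80-bit register) can hold
getcontext().prec = 50
TWO_PI_HI = Decimal("6.28318530717958647692528676655900576839433879875")

x = 1e16  # a large argument (exactly representable as a double)

# Naive reduction: remainder of x divided by the *rounded* 2*PI.
# fmod itself is exact, but the modulus is only accurate to ~2e-16,
# and that error gets multiplied by the huge number of periods in x.
naive = math.fmod(x, 2 * math.pi)

# High-precision reduction, rounded back to a double only at the end.
precise = float(Decimal(x) % TWO_PI_HI)

# The two reduced arguments differ by a sizable fraction of a radian,
# so sin() of them differs too.
print(naive, precise)
print(math.sin(naive), math.sin(precise))
```

The point is that the final rounding to double is harmless; it's rounding 2*PI *before* the division that destroys the result for large 'x'.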
For performance (instead of precision), your compiler might have some sort of "fast math" option (e.g. "-ffast-math" for GCC). If this is enabled, I'd expect it to generate an "fsin" instruction with no library call at all.
Cheers,
Brendan