Probably floating point precision annoyances. With optimisations disabled, the compiler spills intermediate results to memory after each operation, rounding them to their declared 32-bit (or 64-bit) type; with optimisations on, x87 code can keep intermediates in registers at 80-bit extended precision.
That difference means the same calculation can produce slightly different results under different compilers and platforms (e.g. Linux servers vs Windows clients), which shows up as prediction errors.