I tried to run this with both Eclipse (CDT) + MinGW and Cygwin + GCC:
[code lang="c"]
#include <float.h>
#include <stdio.h>
#include <stdlib.h>

int main() {
    puts("The range of ");
    printf("\tlong double is [%Le, %Le]∪[%Le, %Le]\n",
           -LDBL_MAX, -LDBL_MIN, LDBL_MIN, LDBL_MAX);
    return EXIT_SUCCESS;
}
[/code]
but got different results:
- In Eclipse (CDT) + MinGW:
  The range of
  long double is [-1.#QNAN0e+000, 3.237810e-319]∪[6.953674e-310, 0.000000e+000]
- In Cygwin + GCC:
  The range of
  long double is [-1.189731e+4932, -3.362103e-4932]∪[3.362103e-4932, 1.189731e+4932]
This is weird. I googled it and only found this thread: http://www.thescripts.com/forum/thread498535.html

LDBL_MAX is machine-dependent, but why does it differ like this on the same machine? My guess is that it's a problem with MinGW. Does anyone have any idea?
Here is another related test/thread:
http://groups.google.com/group/comp.lang.c/browse_thread/thread/4f81864ab1ca1b92/794fa914463b18c0?lnk=raot#794fa914463b18c0