Maybe I'm losing my mind...
If I write
double x;

void fn0(void) {
    x = ...;
}
where x is a global, I get the expected results. But if I write
void fn0(void) {
    double x;
    x = ...;
}
where x is a local variable, the debugger and the compiler disagree about where x is located, so the value the debugger displays for x is wrong. I've single-stepped through the assembly code and watched which address gets loaded into the registers that access the variable; it's not the address the debugger attributes to the variable.
A similar problem occurs with values passed as function arguments, e.g.,
void fn1(double t) {
    ... = t...;
}
My code is based on uTasker SP 7. In particular, in the ColdFire Processor configuration panel:
- Parameter Passing is set to Register
- Integers are 4 bytes
- A6 Stack Frames are turned off
- the CPU is a 52233 (so Floating Point is automatically selected to be Software)
- both the Code and Data Models are set to Far (32 bit)
I believe all the other choices are the usual ColdFire ones.
I'm using CodeWarrior 7.1, build 14. I've made the recommended additions and changes to use floating point variables --
1) include (and link) the following files in my project:
fp_coldfire.a in C:\Program Files\Freescale\CodeWarrior for ColdFire V7.1\ColdFire_Support\Libraries
C_4i_CF_RegABI_Runtime.a in C:\Program Files\Freescale\CodeWarrior for ColdFire V7.1\ColdFire_Support\Runtime
C_4i_CF_RegABI_SZ_MSL.a in C:\Program Files\Freescale\CodeWarrior for ColdFire V7.1\ColdFire_Support\msl\MSL_C\MSL_ColdFire\Lib
2) modify ansi_prefix.CF.size.h in C:\Program Files\Freescale\CodeWarrior for ColdFire V7.1\ColdFire_Support\msl\MSL_C\MSL_ColdFire\Include by changing a couple of #defines to give
#if !((__COLDFIRE__ == __MCF5475__ || __COLDFIRE__ == __MCF5485__) && !__option(fp_library))
    #define _MSL_FLOATING_POINT 1   // was 0
    #undef  _MSL_NO_MATH_LIB        // was #define'd
#endif
Contrary to Neil's suggestion (topic 399, message 1646), I did not change _MSL_FLOATING_POINT_IO from 0 to 1, because I'm not trying to format or print floating-point numbers.
With the exception of this debugging problem, floating point code seems to work just fine with this setup.
My questions are these:
1) has anyone else observed this problem? and
2) is there some setting I missed that would make the debugger correctly recognize the location of local variables of type double?
and, of course,
3) am I losing my mind?
Cheers,
Richard