Floating Point Representation in Debugger in C vs C++ (CLI)
A little background: I was working on some data conversion from C
to C#
by using a C++/CLI
midlayer, and I noticed a peculiarity with the way the debugger shows floats
and doubles
, depending on which dll the code is executing in (see code and images below). At first I thought it had something to do with managed/unmanaged differences, but then I realized that if I completely left the C#
layer out of it and only used unmanaged data types, the same behaviour was exhibited.
Test Case: To further explore the issue, I created an isolated test case to clearly identify the strange behaviour. I am assuming that anyone who may be testing this code already has a working Solution and dllimport
/dllexport
/ macros set up. Mine is called DLL_EXPORT
. If you need a minimal working header file, let me know. Here the main application is in C
and calling a function from a C++/CLI
dll. I am using Visual Studio 2015 and both assemblies are 32-bit.
I am a bit concerned, as I am not sure if this is something I need to worry about or it's just something the debugger is doing (I am leaning towards the latter). And to be quite honest, I am just outright curious as to what's happening here.
Question: Can anyone explain the observed behaviour or at least point me in the right direction?
C - Calling Function
void floatTest()
{
float floatValC = 42.42f;
double doubleValC = 42.42;
//even if passing the address, behaviour is same as all others.
float retFloat = 42.42f;
double retDouble = 42.42;
int sizeOfFloatC = sizeof(float);
int sizeOfDoubleC = sizeof(double);
floatTestCPP(floatValC, doubleValC, &retFloat, &retDouble);
//do some dummy math to make compiler happy (i.e. no unused variable warnings)
sizeOfFloatC = sizeOfFloatC + sizeOfDoubleC;//break point here
}
C++/CLI Header
DLL_EXPORT void floatTestCPP(float floatVal, double doubleVal,
float *floatRet, double *doubleRet);
C++/CLI Source
//as you can see, there are no managed types in this function
void floatTestCPP(float floatVal, double doubleVal, float *floatRet, double *doubleRet)
{
float floatLocal = floatVal;
double doubleLocal = doubleVal;
int sizeOfFloatCPP = sizeof(float);
int sizeOfDoubleCPP = sizeof(double);
*floatRet = 42.42f;
*doubleRet = 42.42;
//do some dummy math to make compiler happy (no warnings)
floatLocal = (float)doubleLocal;//break point here
sizeOfDoubleCPP = sizeOfFloatCPP;
}
Debugger in C - break point on last line of floatTest()
Debugger in C++/CLI - break point on the second to last line of floatTestCPP()
Consider that the debugger for C++/CLI is not necessarily itself coded in C, C#, or C++.
MS libraries support the "R" format: a string that can round-trip to an identical number. I suspect this or a g-style format was used.
Without MS source code, the following is only a good supposition:
The debug output only needs enough digits to distinguish the double from other nearby doubles. So the code need not print "42.420000000000002"; "42.42" is sufficient - whatever format is used.
42.42 as an IEEE double
is about 42.4200000000000017053025658242404460906982...
and the debugger certainly need not print the exact value.
Potentially similar C code:
#include <math.h>
#include <stdio.h>

int main(void) {
puts("12.34567890123456");
double d = 42.42;
printf("%.16g\n", nextafter(d,0));
printf("%.16g\n", d);
printf("%.17g\n", d);
printf("%.16g\n", nextafter(d,2*d));
d = 1 / 3.0f;
printf("%.9g\n", nextafterf(d,0));
printf("%.9g\n", d);
printf("%.9g\n", nextafterf(d,2*d));
d = 1 / 3.0f;
printf("%.16g\n", nextafter(d,0));
printf("%.16g\n", d);
printf("%.16g\n", nextafter(d,2*d));
}
Output
12.34567890123456
42.41999999999999
42.42
42.420000000000002 // this level of precision not needed.
42.42000000000001
0.333333313
0.333333343
0.333333373
0.3333333432674407
0.3333333432674408
0.3333333432674409
For your code to convert a double
to text with sufficient textual precision and back to double
to "round-trip" the number, see Printf width specifier to maintain precision of floating-point value.