Does integer overflow cause undefined behavior because of memory corruption?

Question
I recently read that signed integer overflow in C and C++ causes undefined behavior:
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for its type, the behavior is undefined.
I am currently trying to understand the reason of the undefined behavior here. I thought undefined behavior occurs here because the integer starts manipulating the memory around itself when it gets too big to fit the underlying type.
So I decided to write a little test program in Visual Studio 2015 to test that theory with the following code:
#include <stdio.h>
#include <limits.h>
#include <string.h> /* needed for memset */

struct TestStruct
{
    char pad1[50];
    int testVal;
    char pad2[50];
};

int main()
{
    TestStruct test;
    memset(&test, 0, sizeof(test));

    for (test.testVal = 0; ; test.testVal++)
    {
        if (test.testVal == INT_MAX)
            printf("Overflowing\r\n");
    }

    return 0;
}
I used a structure here to prevent any protective measures of Visual Studio in debugging mode, like the padding of stack variables and so on. The endless loop should cause several overflows of test.testVal, and it does indeed, though without any consequences other than the overflow itself.

I took a look at the memory dump while running the overflow tests, with the following result (test.testVal had a memory address of 0x001CFAFC):

0x001CFAE5  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x001CFAFC  94 53 ca d8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
As you can see, the memory around the int that is continuously overflowing remained "undamaged". I tested this several times with similar output; never was any memory around the overflowing int damaged.

What happens here? Why is there no damage done to the memory around the variable test.testVal? How can this cause undefined behavior?

I am trying to understand my mistake and why there is no memory corruption during an integer overflow.
Solution

You misunderstand the reason for undefined behavior. The reason is not memory corruption around the integer (an int will always occupy the same amount of storage, whatever value it holds) but the underlying arithmetic.
Since signed integers are not required to be encoded in two's complement, there cannot be specific guidance on what happens when they overflow. Different encodings or CPU behavior can cause different outcomes of overflow, including, for example, the program being killed due to a trap.
And as with all undefined behavior, even if your hardware uses two's complement for its arithmetic and has defined rules for overflow, compilers are not bound by them. For example, for a long time GCC optimized away any checks that would only hold true in a two's-complement environment. For instance,

if (x > x + 1) f();
is going to be removed from optimized code, as signed overflow is undefined behavior and therefore never happens (from the compiler's point of view, programs never contain code producing undefined behavior), meaning x can never be greater than x + 1.