Why would a C++ program allocate more memory for local variables than it would need in the worst case?


Problem description


Inspired by this question.

Apparently in the following code:

#include <Windows.h>

int _tmain(int argc, _TCHAR* argv[])
{
    if( GetTickCount() > 1 ) {
        char buffer[500 * 1024];
        SecureZeroMemory( buffer, sizeof( buffer ) );
    } else {
        char buffer[700 * 1024];
        SecureZeroMemory( buffer, sizeof( buffer ) );
    }
    return 0;
}

compiled with Visual C++ 10 with optimizations enabled (/O2) and the default stack size (1 megabyte), a stack overflow occurs because the program tries to allocate 1200 kilobytes on the stack.

The code above is of course slightly exaggerated to show the problem: it uses lots of stack in a rather dumb way. Yet in real scenarios the stack can be smaller (say 256 kilobytes) and there could be more branches with smaller objects whose combined allocation size is enough to overflow the stack.

That makes no sense. The worst case would be 700 kilobytes - it would be the codepath that constructs the set of local variables with the largest total size along the way. Detecting that path during compilation should not be a problem.

So the compiler produces a program that tries to allocate even more memory than the worst case. According to this answer, LLVM does the same.

That could be a deficiency in the compiler, or there could be some real reason for doing it this way. Maybe I just don't understand something about compiler design that would explain why allocating this way is necessary.

Why would the compiler make a program allocate more memory than the code needs in the worst case?

Solution

The following code, when compiled using GCC 4.5.1 on ideone, places the two arrays at the same address:

#include <iostream>

int main()
{
  int x;
  std::cin >> x;

  if (x % 2 == 0)
  {
    char buffer[500 * 1024]; 
    std::cout << static_cast<void*>(buffer) << std::endl;
  }

  if (x % 3 == 0)
  {
    char buffer[700 * 1024]; 
    std::cout << static_cast<void*>(buffer) << std::endl;
  }
}

input: 6

output:
0xbf8e9b1c
0xbf8e9b1c

The answer is probably "use another compiler" if you want this optimization.
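
Not part of the original answer: if switching compilers is not an option, one common workaround is to move each large buffer into its own non-inlined helper function, so each buffer lives in a frame that is released when the helper returns and the frames can reuse the same stack region. The sketch below assumes MSVC honors __declspec(noinline); the helper names are illustrative.

#include <Windows.h>

// Each helper owns its buffer in a separate stack frame; __declspec(noinline)
// keeps the compiler from merging the buffers back into the caller's frame.
__declspec(noinline) static void zeroSmallBuffer()
{
    char buffer[500 * 1024];
    SecureZeroMemory(buffer, sizeof(buffer));
}

__declspec(noinline) static void zeroLargeBuffer()
{
    char buffer[700 * 1024];
    SecureZeroMemory(buffer, sizeof(buffer));
}

int main()
{
    if (GetTickCount() > 1)
        zeroSmallBuffer();   // only one helper frame is live at any time
    else
        zeroLargeBuffer();
    return 0;
}

With this layout the peak stack usage is roughly the size of the largest single buffer (about 700 kilobytes plus per-frame overhead) rather than the sum of all buffers, so it stays within the default 1 megabyte stack.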
