How do I get .NET to garbage collect aggressively?

Question

I have an application that is used in image processing, and I find myself typically allocating arrays in the 4000x4000 ushort size, as well as the occasional float and the like. Currently, the .NET framework tends to crash in this app apparently randomly, almost always with an out of memory error. 32 MB is not a huge allocation, but if .NET is fragmenting memory, then it's very possible that such large contiguous allocations aren't behaving as expected.

Is there a way to tell the garbage collector to be more aggressive, or to defragment memory (if that's the problem)? I realize that there are the GC.Collect and GC.WaitForPendingFinalizers calls, and I've sprinkled them pretty liberally through my code, but I'm still getting the errors. It may be because I'm calling DLL routines that use native code a lot, but I'm not sure. I've gone over that C++ code and made sure that any memory I allocate is deleted, but I still get these C# crashes, so I'm pretty sure the leak isn't there. I wonder if the C++ calls could be interfering with the GC, making it leave memory behind because it once interacted with a native call -- is that possible? If so, can I turn that functionality off?
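For context, the conventional idiom for forcing as thorough a collection as possible looks like this (a generic sketch of the pattern, not an excerpt from the actual app):

//force a full blocking collection: collect once, let any pending
//finalizers run, then collect again to reclaim objects that were
//kept alive only so their finalizers could run
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();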

EDIT: Here is some very specific code that will cause the crash. According to this SO question, I do not need to dispose of the BitmapSource objects here. Here is the naive version, with no GC.Collect calls in it. It generally crashes on iteration 4 to 10 of the undo procedure. This code replaces the constructor in a blank WPF project, since I'm using WPF. I do the wackiness with the BitmapSource because of the limitations I explained in my answer to @dthorpe below, as well as the requirements listed in this SO question.

public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        //Attempts to create an OOM crash
        //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
        int theRows = 4000, currRows;
        int theColumns = 4000, currCols;
        int theMaxChange = 30;
        int i;
        List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
        byte[] displayBuffer = null;//the buffer used as a bitmap source
        BitmapSource theSource = null;
        for (i = 0; i < theMaxChange; i++) {
            currRows = theRows - i;
            currCols = theColumns - i;
            theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create(currCols, currRows,
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
        //should get here.  If not, then theMaxChange is too large.
        //Now, go back up the undo stack.
        for (i = theMaxChange - 1; i >= 0; i--) {
            displayBuffer = new byte[theList[i].Length];
            theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                    96, 96, PixelFormats.Gray8, null, displayBuffer,
                    ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            System.Console.WriteLine("Got to undo change " + i.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}

Now, if I'm explicit in calling the garbage collector, I have to wrap the entire code in an outer loop to cause the OOM crash. For me, this tends to happen around x = 50 or so:

public partial class Window1 : Window {
    public Window1() {
        InitializeComponent();
        //Attempts to create an OOM crash
        //to do so, mimic minute croppings of an 'image' (ushort array), and then undoing the crops
        for (int x = 0; x < 1000; x++){
            int theRows = 4000, currRows;
            int theColumns = 4000, currCols;
            int theMaxChange = 30;
            int i;
            List<ushort[]> theList = new List<ushort[]>();//the list of images in the undo/redo stack
            byte[] displayBuffer = null;//the buffer used as a bitmap source
            BitmapSource theSource = null;
            for (i = 0; i < theMaxChange; i++) {
                currRows = theRows - i;
                currCols = theColumns - i;
                theList.Add(new ushort[(theRows - i) * (theColumns - i)]);
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create(currCols, currRows,
                        96, 96, PixelFormats.Gray8, null, displayBuffer,
                        (currCols * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
            }
            //should get here.  If not, then theMaxChange is too large.
            //Now, go back up the undo stack.
            for (i = theMaxChange - 1; i >= 0; i--) {
                displayBuffer = new byte[theList[i].Length];
                theSource = BitmapSource.Create((theColumns - i), (theRows - i),
                        96, 96, PixelFormats.Gray8, null, displayBuffer,
                        ((theColumns - i) * PixelFormats.Gray8.BitsPerPixel + 7) / 8);
                GC.WaitForPendingFinalizers();//force gc to collect, because we're in scenario 2, lots of large random changes
                GC.Collect();
            }
            System.Console.WriteLine("Got to changelist " + x.ToString());
            System.Threading.Thread.Sleep(100);
        }
    }
}

If I'm mishandling memory in either scenario, or if there's something I should spot with a profiler, let me know. It's a pretty simple routine.

Unfortunately, it looks like @Kevin's answer is right -- this is a bug in .NET, and in how .NET handles objects larger than 85k. This situation strikes me as exceedingly strange; could PowerPoint, or any of the other Office suite applications, be rewritten in .NET with this kind of limitation? 85k does not seem to me to be a whole lot of space, and I'd also think that any program that frequently uses so-called 'large' allocations would become unstable within a matter of days to weeks when using .NET.

EDIT: It looks like Kevin is right: this is a limitation of .NET's GC. For those who don't want to follow the entire thread, .NET has four GC heaps: gen0, gen1, gen2, and the LOH (Large Object Heap). Everything that's 85k or smaller goes on one of the first three heaps, depending on its age (it moves from gen0 to gen1 to gen2 as it survives collections). Objects larger than 85k are placed on the LOH. The LOH is never compacted, so allocations of the type I'm doing will eventually cause an OOM error as objects get scattered about that memory space. We've found that moving to .NET 4.0 does help the problem somewhat, delaying the exception, but not preventing it. To be honest, this feels a bit like the 640k barrier -- 85k ought to be enough for any user application (to paraphrase this video of a discussion of the GC in .NET). For the record, Java's GC does not exhibit this behavior.
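A quick way to see that threshold in action (a minimal sketch; the cutoff counts the total object size including the object header, which is why 84,000 elements is a safely 'small' example):

//sketch: arrays whose total size reaches 85,000 bytes go straight to
//the LOH, and GC.GetGeneration reports LOH objects as generation 2
byte[] smallArray = new byte[84000];
byte[] largeArray = new byte[85000];
System.Console.WriteLine(GC.GetGeneration(smallArray));//0: a fresh gen0 object
System.Console.WriteLine(GC.GetGeneration(largeArray));//2: the LOH is collected with gen2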

Solution

Here are some articles detailing problems with the Large Object Heap. It sounds like this may be what you're running into.

http://connect.microsoft.com/VisualStudio/feedback/details/521147/large-object-heap-fragmentation-causes-outofmemoryexception

Dangers of the large object heap:
http://www.simple-talk.com/dotnet/.net-framework/the-dangers-of-the-large-object-heap/

Here is a link on how to collect data on the Large Object Heap (LOH):
http://msdn.microsoft.com/en-us/magazine/cc534993.aspx
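If you'd rather watch the LOH from inside the process than from a profiler, the CLR also publishes it as a Windows performance counter. A minimal sketch (this assumes the .NET Framework on Windows; the instance name may carry a "#1"-style suffix if several copies of the process are running):

using System.Diagnostics;//for Process and PerformanceCounter

class LohMonitor {
    //reads the current LOH size for this process from the
    //".NET CLR Memory" performance counter category
    public static float GetLohSizeBytes() {
        string instanceName = Process.GetCurrentProcess().ProcessName;
        using (PerformanceCounter lohSize = new PerformanceCounter(
                ".NET CLR Memory", "Large Object Heap size", instanceName, true)) {
            return lohSize.NextValue();
        }
    }
}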

According to the post below, there seems to be no way to compact the LOH. I can't find anything newer that explicitly says how to do it, so it seems that this hasn't changed in the 2.0 runtime:
http://blogs.msdn.com/maoni/archive/2006/04/18/large-object-heap.aspx
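For readers on newer runtimes: this did eventually change. .NET Framework 4.5.1 and later support an opt-in, one-time LOH compaction, which was not available when the links above were written. A minimal sketch:

using System.Runtime;//for GCSettings

//opt in to compacting the LOH during the next blocking gen2 collection
//(.NET Framework 4.5.1 and later; the setting resets itself afterwards)
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();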

The simple way of handling the issue is to make small objects if at all possible. Your other option is to create only a few large objects and reuse them over and over. Not an ideal situation, but it might be better than rewriting the object structure. Since you did say that the created objects (arrays) are of different sizes this might be difficult, but it could keep the application from crashing.
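One way to apply that reuse idea to the undo-stack scenario above (a hypothetical sketch, not code from the question): rent max-size buffers from a trivial pool and use only the first rows * cols elements, so the LOH only ever sees identically sized blocks that can always be reused:

using System.Collections.Generic;

//hypothetical sketch: hand out max-size ushort buffers and reuse them,
//so every LOH allocation has the same size and freed blocks can always
//be recycled instead of fragmenting the heap
class ImageBufferPool {
    private readonly Stack<ushort[]> free = new Stack<ushort[]>();
    private readonly int maxLength;//e.g. 4000 * 4000 for this app

    public ImageBufferPool(int maxLength) {
        this.maxLength = maxLength;
    }

    public ushort[] Rent() {
        //callers use only the first rows * cols elements of the buffer
        return free.Count > 0 ? free.Pop() : new ushort[maxLength];
    }

    public void Return(ushort[] buffer) {
        free.Push(buffer);
    }
}

The undo stack would then store a buffer plus its logical dimensions, and Return each buffer as it's popped; the cost is some wasted memory for crops smaller than the maximum.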
