Decompressing a very large serialized object and managing memory


Question


I have an object that contains tons of data used for reports. In order to get this object from the server to the client, I first serialize the object to a memory stream, then compress it using .NET's GZipStream. I then send the compressed object to the client as a byte[].


The problem is that on some clients, when they get the byte[] and try to decompress and deserialize the object, a System.OutOfMemory exception is thrown. I've read that this exception can be caused by new()-ing a large number of objects, or by holding on to a large number of strings. Both of these happen during the deserialization process.


So my question is: how do I prevent the exception (any good strategies)? The client needs all of the data, and I've trimmed down the number of strings as much as I can.


Edit: here is the code I am using to serialize/compress (implemented as extension methods):

public static byte[] SerializeObject<T>(this object obj, T serializer) where T: XmlObjectSerializer
{
    Type t = obj.GetType();

    if (!Attribute.IsDefined(t, typeof(DataContractAttribute)))
        return null;

    byte[] initialBytes;

    using (MemoryStream stream = new MemoryStream())
    {
        serializer.WriteObject(stream, obj);
        initialBytes = stream.ToArray();
    }

    return initialBytes;
}

public static byte[] CompressObject<T>(this object obj, T serializer) where T : XmlObjectSerializer
{
    Type t = obj.GetType();

    if(!Attribute.IsDefined(t, typeof(DataContractAttribute)))
        return null;

    byte[] initialBytes = obj.SerializeObject(serializer);

    byte[] compressedBytes;

    using (MemoryStream stream = new MemoryStream(initialBytes))
    {
        using (MemoryStream output = new MemoryStream())
        {
            using (GZipStream zipper = new GZipStream(output, CompressionMode.Compress))
            {
                Pump(stream, zipper);
            }

            compressedBytes = output.ToArray();
        }
    }

    return compressedBytes;
}

internal static void Pump(Stream input, Stream output)
{
    byte[] bytes = new byte[4096];
    int n;
    while ((n = input.Read(bytes, 0, bytes.Length)) != 0)
    {
        output.Write(bytes, 0, n);
    }
}


And here is my code for decompressing/deserializing:

public static T DeSerializeObject<T,TU>(this byte[] serializedObject, TU deserializer) where TU: XmlObjectSerializer
{
    using (MemoryStream stream = new MemoryStream(serializedObject))
    {
        return (T)deserializer.ReadObject(stream);
    }
}

public static T DecompressObject<T, TU>(this byte[] compressedBytes, TU deserializer) where TU: XmlObjectSerializer
{
    byte[] decompressedBytes;

    using(MemoryStream stream = new MemoryStream(compressedBytes))
    {
        using(MemoryStream output = new MemoryStream())
        {
            using(GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
            {
                ObjectExtensions.Pump(zipper, output);
            }

            decompressedBytes = output.ToArray();
        }
    }

    return decompressedBytes.DeSerializeObject<T, TU>(deserializer);
}
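One way to cut the client's peak memory use (a sketch of my own, not part of the original code): deserialize directly from the GZipStream instead of buffering the fully decompressed bytes into an intermediate array first. `XmlObjectSerializer.ReadObject` accepts any readable `Stream`, so the decompressed XML never has to exist as a single byte[]; the class and method names here are illustrative:

```csharp
using System.IO;
using System.IO.Compression;
using System.Runtime.Serialization;

// Sketch: deserialize straight from the decompression stream, so the fully
// decompressed XML never has to exist as one large byte[] in memory.
// StreamingExtensions / DecompressObjectStreamed are illustrative names.
public static class StreamingExtensions
{
    public static T DecompressObjectStreamed<T, TU>(this byte[] compressedBytes, TU deserializer)
        where TU : XmlObjectSerializer
    {
        using (MemoryStream stream = new MemoryStream(compressedBytes))
        using (GZipStream zipper = new GZipStream(stream, CompressionMode.Decompress))
        {
            // ReadObject pulls from the stream incrementally; only the
            // object graph itself is materialized, not the raw bytes.
            return (T)deserializer.ReadObject(zipper);
        }
    }
}
```

This removes one full-size allocation (`decompressedBytes`) from the hot path, which matters most when the decompressed payload is tens of megabytes.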


The object that I am passing is a wrapper object; it just contains all the relevant objects that hold the data. The number of objects can be large (depending on the report's date range), and I've seen as many as 25k strings.


One thing I forgot to mention is that I am using WCF, and since the inner objects are passed individually through other WCF calls, I am using the DataContract serializer, and all my objects are marked with the DataContract attribute.

Answer


A developer I work with encountered a similar problem, where the large streams used for the serialization fragmented the memory heap and the garbage collector was unable to compact it sufficiently to allow him to reallocate the memory.


If you are serializing many objects repetitively, I would allocate a single buffer and then clear it each time you finish, as opposed to disposing it and creating a new one. That way you only need to allocate the memory once, and your app should continue to work effectively.
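A minimal sketch of that idea, assuming a hypothetical helper class that the question's code does not contain: keep one `MemoryStream` alive and reset it between serializations with `SetLength(0)`, which clears the contents but keeps the underlying capacity allocated.

```csharp
using System.IO;
using System.Runtime.Serialization;

// Hypothetical helper (not part of the original code): reuses one
// MemoryStream across many serializations so the large backing buffer
// is allocated once instead of once per call.
public sealed class ReusableSerializer
{
    private readonly MemoryStream _buffer = new MemoryStream();

    public byte[] Serialize(object obj, XmlObjectSerializer serializer)
    {
        _buffer.SetLength(0);           // clear contents, keep the capacity
        serializer.WriteObject(_buffer, obj);
        return _buffer.ToArray();       // copy out; the buffer stays allocated
    }
}
```

Because the same buffer grows to the size of the largest payload and then stays there, the large-object heap sees one long-lived allocation rather than a churn of huge short-lived ones, which is what drives the fragmentation described above.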


I'd also mention @yetapb's comment that the data might be paged and written in a streamed fashion. That way you shouldn't need an enormous buffer in memory to store the data.
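That suggestion could be sketched like this, assuming the report can be broken into pages. `PagedWriter`, the page enumerable, and the framing are all placeholders I've introduced for illustration; the receiving side would need a matching paged reader that understands the same framing.

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Runtime.Serialization;

// Sketch of the paged idea (illustrative names, not the original code):
// write each page of report objects through the compression stream as it
// is produced, so no single buffer ever holds the whole payload.
public static class PagedWriter
{
    public static void WritePaged<T>(IEnumerable<T> pages, Stream destination,
                                     XmlObjectSerializer pageSerializer)
    {
        using (GZipStream zipper = new GZipStream(destination, CompressionMode.Compress))
        {
            foreach (T page in pages)
            {
                // Each page is serialized and compressed independently,
                // keeping peak memory proportional to one page.
                pageSerializer.WriteObject(zipper, page);
            }
        }
    }
}
```

Peak memory then scales with the page size rather than the full report, at the cost of defining a framing convention so the reader knows where one page's XML ends and the next begins.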
