Memory management extremely poor in C# when manipulating strings


Question


I have a fairly simple C# program that just needs to open up a fixed-width file, convert each record to tab-delimited, and append a field to the end of it.

The input files are between 300M and 600M. I've tried every memory conservation trick I know in my conversion program, and a bunch I picked up from reading some of the MSDN C# blogs, but still my program ends up using hundreds and hundreds of megs of RAM. It is also taking excessively long to process the files (between 10 and 25 minutes). Also, with each successive file I process in the same program, performance goes way down, so that by the 3rd file, the program comes to a complete halt and never completes.

I ended up rewriting the process in Perl, which takes only a couple of minutes and never really gets above a 40M footprint.

What gives?

I'm noticing this very poor memory handling in all my programs that need to do any kind of intensive string processing.

I have a 2nd program that just implements the LZW decompression algorithm (pretty much copied straight out of the manuals). It works great on files less than 100K, but if I try to run it on a file that's just 4.5M compressed, it runs up to a 200+ meg footprint and then starts throwing Out of Memory exceptions.

I was wondering if somebody could look at what I've got down and see if I'm missing something important? I'm an old-school C programmer, so I may be doing something that is bad.

Would appreciate any help anybody can give.

Regards,

Seg

Answers

Segfahlt <Se******@discussions.microsoft.com> wrote:
> I have a fairly simple C# program that just needs to open up a fixed width
> file, convert each record to tab delimited and append a field to the end of it.
> [rest of question snipped]
It's very hard to say without seeing any of your code. It sounds like you don't actually need to load the whole file into memory at any time, so the memory usage should be relatively small (aside from the overhead for the framework itself).

Could you post a short but complete program which demonstrates the
problem?

See http://www.pobox.com/~skeet/csharp/complete.html for details of
what I mean by that.

--
Jon Skeet - <sk***@pobox.com>
http://www.pobox.com/~skeet
If replying to the group, please do not mail me too
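
Jon's streaming point is worth illustrating. Below is a minimal sketch of the conversion the question describes, reading and writing one record at a time so memory stays flat regardless of file size. The field widths (10/20/8), the file paths, and the appended NEW_FIELD value are all invented for illustration, since the original code was never posted:

using System;
using System.IO;

class FixedWidthToTsv
{
    static void Main(string[] args)
    {
        Convert(args[0], args[1]);
    }

    static void Convert(string inputPath, string outputPath)
    {
        // StreamReader/StreamWriter process one line at a time, so only
        // the current record is ever held in memory.
        using (StreamReader reader = new StreamReader(inputPath))
        using (StreamWriter writer = new StreamWriter(outputPath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Hypothetical fixed-width layout: three fields of 10, 20
                // and 8 characters. Adjust to the real record format.
                string f1 = line.Substring(0, 10).TrimEnd();
                string f2 = line.Substring(10, 20).TrimEnd();
                string f3 = line.Substring(30, 8).TrimEnd();

                // Write the tab-delimited record plus the appended field
                // directly to the output; no per-record concatenation.
                writer.Write(f1);
                writer.Write('\t');
                writer.Write(f2);
                writer.Write('\t');
                writer.Write(f3);
                writer.Write('\t');
                writer.WriteLine("NEW_FIELD");
            }
        }
    }
}

With this shape there is nothing for the garbage collector to accumulate between files, so throughput should stay constant from the 1st file to the 3rd.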




" Segfahlt" <硒****** @ discussions.microsoft.com>在消息中写道

新闻:0D ********************************** @ microsof t.com ...

"Segfahlt" <Se******@discussions.microsoft.com> wrote in message
news:0D**********************************@microsoft.com...
> I have a fairly simple C# program that just needs to open up a fixed width
> file, convert each record to tab delimited and append a field to the end of it.
> [rest of question snipped]



It's really hard to answer such a broad question without a clear description of the algorithm used or without seeing any code, so I'll have to guess:

1. You read the whole input file into memory.
2. You store each modified record into an array of strings or into a StringArray, and write it to the file when done with the input file.
3. 1 + 2
.....

Anyway, you seem to hold too many strings in memory before writing to the output file.

Willy.
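
Willy's guesses point at the most common trap here: building large results by repeated string concatenation. .NET strings are immutable, so every `+=` allocates a brand-new string and copies everything accumulated so far; building an n-character result that way is O(n^2) and churns enormous amounts of short-lived garbage, which would explain both the memory footprint and the slowdown, and is a classic mistake in an LZW decompressor that appends decoded output to a string. A small sketch of the pattern and the usual fix (not the poster's code, just an illustration):

using System;
using System.Text;

class ConcatDemo
{
    static void Main()
    {
        int n = 20000;

        // Anti-pattern: each += allocates a new string and copies all
        // previous characters, so this loop does O(n^2) work overall.
        string slow = "";
        for (int i = 0; i < n; i++)
        {
            slow += "x";
        }

        // Fix: StringBuilder appends into a growable buffer, making the
        // same work O(n), with one final allocation in ToString().
        StringBuilder fast = new StringBuilder(n);
        for (int i = 0; i < n; i++)
        {
            fast.Append('x');
        }
        string result = fast.ToString();

        Console.WriteLine("{0} {1}", slow.Length, result.Length);
    }
}

For the file-conversion program, better still is to skip accumulating a result at all and write each record to the output stream as it is produced, as in the sketch above.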

