How can I know the ACTUAL maximum number of elements a .net array of a given type can be allocated?


Problem Description


I know that all arrays in .net are limited to 2 GB; under this premise, I try not to allocate more than n = ((2^31) - 1) / 8 doubles in an array. Nevertheless, even that number of elements doesn't seem to be valid. Does anyone know how I can determine at run time the maximum number of elements, given sizeof(T)?
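
For concreteness, this is roughly the kind of allocation that fails for me (a minimal sketch, assuming double as the element type):

// Minimal repro (C++/CLI) of the allocation I mean: the naive bound derived
// from the 2 GB limit still fails with an OutOfMemoryException on my machine.
using namespace System;

int main()
{
    int n = (int)(((1LL << 31) - 1) / sizeof(double));   // 268 435 455 elements
    array<double>^ data = gcnew array<double>(n);        // throws OutOfMemoryException here
    Console::WriteLine(data->Length);
    return 0;
}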

I know that whatever quantity approaching that number is just a lot of elements but, for all intents and purposes, let's say I need it.

Note: I'm in a 64-bit environment, with the target platform of my application set to AnyCPU, and at least 3100 MB of free RAM.

Update: Thank you all for your contributions, and sorry I was so quiet. I apologise for the inconvenience. I have not been able to rephrase my question, but I can add that what I am looking for is a way to solve something like this:

template <class T>
array<T>^ allocateAnUsableArrayWithTheMostElementsPossible(){
    return gcnew array<T>( ... );
}

The results in my own answer are kinda satisfactory but not good enough. Furthermore, I haven't tested it on another machine (it's kind of hard to find another machine with more than 4 GB). Besides, I have been doing some research on my own and it seems there's no cheap way to calculate this at run time. Anyhow, that was just a plus; none of the users of what-I-am-trying-to-accomplish can expect to use the feature I am trying to implement without having the capacity.

So, in other words, I just want to understand why the maximum number of elements of an array doesn't add up to 2 GB, ceteris paribus. A top maximum is all I need for now.

Solution

Update: answer COMPLETELY rewritten. The original answer contained methods to find the largest possible addressable array on any system by divide and conquer; see the history of this answer if you're interested. The new answer attempts to explain the 56-byte gap.
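
For reference, here is a rough sketch of that divide-and-conquer idea, written against the question's C++/CLI snippet. This is not the original code, the function name is mine, and an OutOfMemoryException can of course also come from genuine memory pressure, so treat the result as an estimate:

// Binary-search the largest element count for which allocation still succeeds.
// Usage: int maxDoubles = probeLargestArray<double>();
using namespace System;

template <class T>
int probeLargestArray()
{
    int lo = 0, hi = Int32::MaxValue;
    while (lo < hi)
    {
        int mid = lo + (hi - lo + 1) / 2;    // round up so the loop terminates
        try
        {
            array<T>^ probe = gcnew array<T>(mid);
            GC::KeepAlive(probe);
            lo = mid;                        // succeeded: try a larger count
        }
        catch (OutOfMemoryException^)
        {
            hi = mid - 1;                    // failed: shrink the search range
        }
    }
    return lo;                               // largest count that allocated
}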

In his own answer, AZ explained that the maximum array size is limited to less than the 2GB cap and, with some trial and error (or another method?), found the following (summary):

  • If the size of the type is 1, 2, 4 or 8 bytes, the maximum occupiable size is 2GB - 56 bytes;
  • If the size of the type is 16 bytes, the max is 2GB - 48 bytes;
  • If the size of the type is 32 bytes, the max is 2GB - 32 bytes.

I'm not entirely sure about the 16-byte and 32-byte situations. The total available size for the array might be different if it's an array of structs rather than a built-in type. I'll focus on type sizes of 1-8 bytes (of which I'm not that sure either; see the conclusion).
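
Before digging into the layout, a quick sanity check: if the numbers above hold, a rough upper bound on the element count can be computed directly. This is back-of-the-envelope arithmetic over AZ's observations, not a documented CLR formula, and the helper name is mine:

// Rough bound, assuming the observed 56-byte gap for 1-8 byte element types:
//   maxElements = (2^31 - 56) / elementSize
using namespace System;

long long roughMaxElementCount(long long elementSize)
{
    const long long cap = (1LL << 31) - 56;   // 2 GB minus the observed gap
    return cap / elementSize;
}

int main()
{
    Console::WriteLine(roughMaxElementCount(sizeof(double)));   // prints 268435449
    Console::WriteLine(roughMaxElementCount(sizeof(char)));     // prints 2147483592
    return 0;
}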

Data layout of an array

To understand why the CLR does not allow exactly 2GB / IntPtr.Size elements we need to know how an array is structured. A good starting point is this SO article, but unfortunately, some of the information seems false, or at least incomplete. This in-depth article on how the .NET CLR creates runtime objects proved invaluable, as well as this Arrays Undocumented article on CodeProject.

Taking all the information in these articles together, it comes down to the following layout for an array on 32-bit systems:

Single dimension, built-in type
SSSSTTTTLLLL[...data...]0000
^ sync block
    ^ type handle
        ^ length array
                        ^ NULL 

Each part is one system DWORD in size. On 64-bit Windows, this looks as follows:

Single dimension, built-in type
SSSSSSSSTTTTTTTTLLLLLLLL[...data...]00000000
^ sync block
        ^ type handle
                ^ length array
                                    ^ NULL 

The layout looks slightly different when it's an array of objects (i.e., strings, class instances). As you can see below, a type handle for the array's element type is added.

Single dimension, object type
SSSSSSSSTTTTTTTTLLLLLLLLtttttttt[...data...]00000000
^ sync block
        ^ type handle
                ^ length array
                        ^ type handle array element type
                                            ^ NULL 

Looking further, we find that a built-in type, or actually, any struct type, gets its own specific type handler (all uint arrays share the same one, but an int array has a different type handler than a uint or byte array). All arrays of objects share the same type handler, but have an extra field that points to the type handler of the objects.
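
You can observe the distinct array types from managed code, too. A quick sketch (the handle values themselves are opaque, the point is only that they differ):

// int[] and uint[] are distinct runtime types with distinct type handles.
using namespace System;

int main()
{
    Console::WriteLine(array<int>::typeid);                            // System.Int32[]
    Console::WriteLine(array<unsigned int>::typeid);                   // System.UInt32[]
    Console::WriteLine(array<int>::typeid->TypeHandle.Value);          // differs from...
    Console::WriteLine(array<unsigned int>::typeid->TypeHandle.Value); // ...this one
    return 0;
}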

A note on struct types: padding may not always be applied, which may make it hard to predict the actual size of a struct.

Still not 56 bytes...

To account for the 56 bytes in AZ's answer, I have to make a few assumptions. I assume that:

  1. the syncblock and type handle count towards the size of an object;
  2. the variable holding the array reference (object pointer) counts towards the size of an object;
  3. the array's null terminator counts towards the size of an object.

A syncblock is placed before the address the variable points at, which makes it look like it's not part of the object. But in fact, I believe it is, and it counts towards the internal 2GB limit. Adding all these up, we get, for 64-bit systems:

ObjectRef + 
Syncblock +
Typehandle +
Length +
Null pointer +
--------------
40  (5 * 8 bytes)

Not 56 yet. Perhaps someone can have a look with the Memory view during debugging to check what the layout of an array looks like under 64-bit Windows.
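
Short of the debugger's Memory view, here is a small and decidedly unsupported sketch that pins an array and reads the two words in front of the data, where the length and type handle should sit according to the layout above. It assumes a 64-bit process and relies entirely on undocumented details, so treat it as a debugging aid only:

// Peek at the array header by pinning the first element and stepping backwards.
using namespace System;

int main()
{
    array<int>^ arr = gcnew array<int>(10);
    pin_ptr<int> pinned = &arr[0];                        // keep the array from moving
    int* raw = pinned;                                    // native pointer to the data
    long long* p = reinterpret_cast<long long*>(raw);

    Console::WriteLine("length field: {0}", p[-1]);       // element count (10)
    Console::WriteLine("type handle : {0:X}", p[-2]);     // method table pointer
    return 0;
}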

My guess is something along these lines (take your pick, mix and match):

  • 2GB will never be possible, as that is one byte into the next segment. The largest block should be 2GB - sizeof(int). But this is silly, as mem indexes should start at zero, not one;

  • Any object larger than 85016 bytes will be put on the LOH (large object heap). This may include an extra pointer, or even a 16-byte struct holding LOH information. Perhaps this counts towards the limit;

  • Aligning: assuming the objectref does not count (it is in another mem segment anyway), the total gap is 32 bytes. It may well be that the system prefers 32-byte boundaries. Take another look at the memory layout. If the starting point needs to be on a 32-byte boundary, and it needs room for the syncblock before it, the syncblock will end up at the end of the first 32-byte block. Something like this:

      XXXXXXXXXXXXXXXXXXXXXXXXSSSSSSSSTTTTTTTTLLLLLLLLtttttttt[...data...]00000000
    

    where XXX.. stands for skipped bytes.

  • Multi-dimensional arrays: if you create your arrays dynamically with Array.CreateInstance with 1 or more dimensions, a single-dimension array will be created with two extra DWORDs containing the size and the lower bound of the dimension (even if you have only one dimension, but only if the lower bound is specified as non-zero). I find this highly unlikely, as you would probably have mentioned it if this were the case in your code. But it would bring the total to 56 bytes of overhead ;) (see the sketch below).
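
For completeness, that last case is easy to observe from managed code: a one-dimensional array created with a non-zero lower bound gets a different runtime type than a plain vector. A small sketch:

// A lower-bound-1 array reports type "System.Int32[*]" instead of "System.Int32[]",
// which hints at the extra bookkeeping described above.
using namespace System;

int main()
{
    Array^ plain   = Array::CreateInstance(int::typeid, 10);
    Array^ shifted = Array::CreateInstance(int::typeid,
                                           gcnew array<int> { 10 },   // lengths
                                           gcnew array<int> { 1 });   // lower bounds
    Console::WriteLine(plain->GetType());            // System.Int32[]
    Console::WriteLine(shifted->GetType());          // System.Int32[*]
    Console::WriteLine(shifted->GetLowerBound(0));   // 1
    return 0;
}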

Conclusion

From all I gathered during this little research, I think that the Overhead + Aligning - Objectref is the most likely and most fitting conclusion. However, a "real" CLR guru might be able to shed some extra light on this peculiar subject.

None of these conclusions explains why the 16- or 32-byte data types have a 48-byte and 32-byte gap, respectively.

Thanks for a challenging subject; I learned something along the way. Perhaps some people can remove their downvote once they find this new answer more relevant to the question (which I originally misunderstood, and apologies for the clutter this may have caused).

