How to get around the "long" 32/64-bit mess?


Problem Description

I have a question that might have been asked before, but I have not been
able to find anything via groups.google.com or any web search that is
definitive.

I am writing an evolutionary AI application in C++. I want to use 32-bit
integers, and I want the application to be able to save its state in a
portable fashion.

The obvious choice would be to use the "long" type, as it is defined to be
at least 32 bits in size. However, the definition says "at least" and I
believe that on some platforms (gcc/AMD64???) long ends up being a 64-bit
integer instead. In other words, there are cases where sizeof(long) ==
sizeof(long long) if I'm not mistaken.

The problem is this: it's an evolutionary AI application, so if I use long
as the data type and run it on a platform where sizeof(long) == 8, then it
will *evolve* to take advantage of this. Then, if I save its state, this
state will either be invalid or will get truncated into 32-bit ints if I
then reload this state on a 32-bit platform where sizeof(long) == 4. I
am trying to make this program *fast*, and so I want to avoid having to
waste cycles doing meaningless "& 0xffffffff" operations all over the
place or other nasty hacks to get around this type madness.

The nasty solution that I am trying to avoid is a typedefs.h type file
that defines a bunch of architecture-dependent typedefs. Yuck. Did I
mention that this C++ code is going to be compiled into a library that
will then get linked with other applications? I don't want the other
applications to have to compile with -DARCH_WHATEVER to get this library
to link correctly with its typedefs.h file nastiness.

So I'd like to use standard types. Is there *any* way to do this, or
do I have to create a typedefs.h file like every cryptography or other
integer-size-sensitive C/C++ program has to do?
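As a rough, self-contained illustration of the mismatch described above
(a sketch only: the printed sizes are platform-dependent, and long long is
assumed to be accepted by the compiler, as in the question itself). Where a
C99-style <stdint.h> or Boost's <boost/cstdint.hpp> is available, an
exact-width type such as uint32_t addresses the width question directly;
the snippet below just shows the problem as stated.

// Illustration only: the platform-dependent width of "long", and the
// "& 0xffffffff" masking the question wants to avoid paying for.
#include <climits>   // CHAR_BIT
#include <cstdio>

int main() {
    std::printf("long      : %lu bits\n",
                (unsigned long)(sizeof(long) * CHAR_BIT));
    std::printf("long long : %lu bits\n",
                (unsigned long)(sizeof(long long) * CHAR_BIT));

    // On an LP64 platform (sizeof(long) == 8) this addition does not wrap
    // at 32 bits, so evolved state can come to depend on 64-bit behaviour
    // unless every operation is masked like this:
    unsigned long x = 0xFFFFFFFFul;
    unsigned long wrapped = (x + 1) & 0xFFFFFFFFul;  // 0 on every platform
    std::printf("(0xFFFFFFFF + 1) & 0xFFFFFFFF = %lu\n", wrapped);
    return 0;
}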

Solutions

Adam Ierymenko wrote:

....

Well, you can do many things. For example, save its state by converting
the values to string representations, or use some other trick. On most
32-bit platforms, however, int is 32 bits, so you can use that type on
that assumption.

Since you are looking for saved-data portability, you will *have to*
save in text mode always, since the binary representation of a type in
one implementation can be different from that in another implementation.

And place the various checks on loading the data, so that the stored
values do not exceed the limits of the ranges supported on the current
platform (or place the checks when you save the data, so that the data
do not exceed the 32-bit limitation).

Or create a class. In any case, for data portability the data must be
stored in text mode.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
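A minimal sketch of the text-mode save/load with range checks suggested
above, assuming the state is just a sequence of values that must stay
within 32 bits; the type and function names (state_word, save_state,
load_state) are invented for illustration.

// Sketch only: store state as decimal text, one value per line, and check
// both when saving and when loading that every value fits in 32 bits.
#include <climits>     // CHAR_BIT
#include <cstddef>
#include <fstream>
#include <stdexcept>
#include <vector>

typedef unsigned long state_word;               // at least 32 bits everywhere
const state_word WORD_MAX_32 = 0xFFFFFFFFul;    // range the evolved state may use

// Separate illustration of the "int is 32 bits on most 32-bit platforms"
// remark: this typedef fails to compile (negative array size) where the
// assumption does not hold.
typedef char int_is_32_bits_assumption[(sizeof(int) * CHAR_BIT == 32) ? 1 : -1];

void save_state(const std::vector<state_word>& state, const char* path) {
    std::ofstream out(path);                    // text mode
    for (std::size_t i = 0; i < state.size(); ++i) {
        if (state[i] > WORD_MAX_32)             // catch 64-bit "evolution" early
            throw std::range_error("value does not fit in 32 bits");
        out << state[i] << '\n';
    }
}

std::vector<state_word> load_state(const char* path) {
    std::ifstream in(path);
    std::vector<state_word> state;
    for (state_word v; in >> v; ) {
        if (v > WORD_MAX_32)                    // reject out-of-range saved values
            throw std::range_error("stored value exceeds the 32-bit range");
        state.push_back(v);
    }
    return state;
}

Decimal text reads back identically regardless of sizeof(long) or byte
order, at the cost of larger files and slower I/O, which is the trade-off
raised in the next reply.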


Ioannis Vranos wrote:

> Adam Ierymenko wrote:
> ....
>
> Well, you can do many things. For example, save its state by converting
> the values to string representations, or use some other trick. On most
> 32-bit platforms, however, int is 32 bits, so you can use that type on
> that assumption.
>
> Since you are looking for saved-data portability, you will *have to*
> save in text mode always, since the binary representation of a type in
> one implementation can be different from that in another implementation.

In practice, all you need is something that will convert to a "known"
format. ASCII numeric strings are a "known" format, but they could be
terribly inefficient.

See:
http://groups.google.com/groups?hl=e...iani.ws&rnum=9

for an example of how to deal with endianness issues; it could easily be
extended to deal with other formats. However, it's highly unlikely that
anyone will ever need anything different.

> And place the various checks on loading the data, so that the stored
> values do not exceed the limits of the ranges supported on the current
> platform (or place the checks when you save the data, so that the data
> do not exceed the 32-bit limitation).
>
> Or create a class. In any case, for data portability the data must be
> stored in text mode.

....
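The linked post is not quoted here, but the "known format" idea for a
binary route can be sketched generically: write each 32-bit value byte by
byte in a fixed order, so the file depends neither on the host's
endianness nor on sizeof(long). The function names below are illustrative,
and the streams are assumed to be opened in std::ios::binary mode.

// Generic sketch: a fixed (big-endian) byte order for 32-bit values,
// independent of host endianness and of sizeof(long).
#include <istream>
#include <ostream>

void put_u32(std::ostream& out, unsigned long v) {
    out.put(static_cast<char>((v >> 24) & 0xFF));   // highest byte first
    out.put(static_cast<char>((v >> 16) & 0xFF));
    out.put(static_cast<char>((v >> 8) & 0xFF));
    out.put(static_cast<char>(v & 0xFF));
}

unsigned long get_u32(std::istream& in) {
    unsigned long v = 0;
    for (int i = 0; i < 4; ++i)
        v = (v << 8) | static_cast<unsigned char>(in.get());
    return v & 0xFFFFFFFFul;                        // clamp to 32 bits
}

Whether the shifting and masking beats formatting decimal text depends on
the platform; the point is only that the on-disk layout is fixed.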


Gianni Mariani wrote:

> In practice,

Or better: In theory.

> all you need is something that will convert to a "known" format. ASCII
> numeric strings are a "known" format, but they could be terribly
> inefficient.
>
> See:
> http://groups.google.com/groups?hl=e...iani.ws&rnum=9
>
> for an example of how to deal with endianness issues; it could easily
> be extended to deal with other formats. However, it's highly unlikely
> that anyone will ever need anything different.

Yes; however, in most cases ASCII is sufficient.

If in a particular case this is inefficient, one can use other formats.
For example, one can use a library to save the data in XML.


Regards,

Ioannis Vranos

http://www23.brinkster.com/noicys
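To make the XML suggestion concrete, here is a tiny hand-rolled sketch.
The element names are invented, and a real XML library would also handle
escaping, parsing and validation; this only shows the shape of such a
state file.

// Sketch only: dump state as XML-style text.
#include <cstddef>
#include <fstream>
#include <vector>

void save_state_xml(const std::vector<unsigned long>& state, const char* path) {
    std::ofstream out(path);
    out << "<state version=\"1\">\n";
    for (std::size_t i = 0; i < state.size(); ++i)
        out << "  <word index=\"" << i << "\">" << state[i] << "</word>\n";
    out << "</state>\n";
}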

