Why is it bad to use short


Problem Description

It is very common, even in scripts where the developer can guarantee that a variable will never exceed one byte (or at most two), for people to decide to use the int type for every possible variable, even ones used to represent numbers only in the range 0-1.

Why does it hurt so much to use char or short instead?

I think I heard someone say that int is the "more standard" type. What does that mean? My question is: does the data type int have any definite advantages over short (or other smaller data types), because of which people almost always resort to int?

Solution

As a general rule, most arithmetic in C is performed using type int (that is, plain int, not short or long). This is because (a) the definition of C says so, which is related to the fact that (b) that's the way many processors (at least, the ones C's designers had in mind) prefer to work.

So if you try to "save space" by using short ints instead, and you write something like

short a = 1, b = 2;
short c = a + b;

the compiler has to emit code to, in effect, convert a from short to int, convert b from short to int, do the addition, and convert the sum back to short. You may have saved a little bit of space on the storage for a, b, and c, but your code is likely to be bigger (and slower).

If you instead write

int a = 1, b = 2;
int c = a + b;

you spend a little more storage space in a, b, and c, but the code is probably smaller and quicker.

This is somewhat of an oversimplified argument, but it's behind your observation that usage of type short is rare, and plain int is generally recommended. Basically, since it's the machine's "natural" size, it's presumed to be the most straightforward type to do arithmetic in, without extra conversions to and from less-natural types. It's sort of a "When in Rome, do as the Romans do" argument, but it generally does make using plain int advantageous.

If you have lots of not-so-large integers to store, on the other hand (a large array of them, or a large array of structures containing not-so-large integers), the storage savings for the data might be large, and worth it as traded off against the (relatively smaller) increase in the code size, and the potential speed increase.

See also this previous SO question and this C FAQ list entry.


Addendum: like any optimization problem, if you really care about data space usage, code space usage, and code speed, you'll want to perform careful measurements using your exact machine and processor. Your processor might not end up requiring any "extra conversion instructions" to convert to/from the smaller types, after all, so using them might not be so much of a disadvantage. But at the same time you can probably confirm that, for isolated variables, using them might not yield any measurable advantage, either.


Addendum 2. Here's a data point. I experimented with the code

extern short a, b, c;

void f()
{
    c = a + b;
}

I compiled with two compilers, gcc and clang (compiling for an Intel processor on a Mac). I then changed short to int and compiled again. The int-using code was 7 bytes smaller under gcc, and 10 bytes smaller under clang. Inspection of the assembly language output suggests that the difference was in truncating the result so as to store it in c; fetching short as opposed to int doesn't seem to change the instruction count.

However, I then tried calling the two different versions, and discovered that it made virtually no difference in the run time, even after 10000000000 calls. So the "using short might make the code bigger" part of the answer is confirmed, but maybe not "and also make it slower".
