Why are flag enums usually defined with hexadecimal values
Question
A lot of times I see flag enum declarations that use hexadecimal values. For example:
[Flags]
public enum MyEnum
{
None = 0x0,
Flag1 = 0x1,
Flag2 = 0x2,
Flag3 = 0x4,
Flag4 = 0x8,
Flag5 = 0x10
}
When I declare an enum, I usually declare it like this:
[Flags]
public enum MyEnum
{
None = 0,
Flag1 = 1,
Flag2 = 2,
Flag3 = 4,
Flag4 = 8,
Flag5 = 16
}
Is there a reason or rationale for why some people choose to write the value in hexadecimal rather than decimal? The way I see it, it's easier to get confused when using hex values and accidentally write Flag5 = 0x16 instead of Flag5 = 0x10.
Answer
Rationales may differ, but an advantage I see is that hexadecimal reminds you: "Okay, we're not dealing with numbers in the arbitrary human-invented world of base ten anymore. We're dealing with bits - the machine's world - and we're gonna play by its rules." Hexadecimal is rarely used unless you're dealing with relatively low-level topics where the memory layout of data matters. Using it hints at the fact that that's the situation we're in now.
Also, I'm not sure about C#, but I know that in C, x << y is a valid compile-time constant. Using bit shifts seems the clearest:
[Flags]
public enum MyEnum
{
None = 0,
Flag1 = 1 << 0, //1
Flag2 = 1 << 1, //2
Flag3 = 1 << 2, //4
Flag4 = 1 << 3, //8
Flag5 = 1 << 4 //16
}