Better casts?


Problem description



Regarding numerical types, in my view, casts fall in one of two
categories:
1. Casts that change the value of an object
2. Casts that are actually redundant, but get rid of compiler/lint
warnings

As an example, consider this code:

unsigned int ui;
...
unsigned char uc = (unsigned char)ui;

Here, it is not clear from the code what the developer wanted to
achieve:
1. It is possible that ui exceeds the unsigned char range and the
programmer only wants to look at the lower (e.g. 8) bits. Basically,
he wants to cut off significant bits and hence wants to change the
original value.
2. The developer knows that ui cannot hold values that exceed the
unsigned char range, so assigning ui to uc is safe and doesn't lose
bits (i.e. the original value is preserved). He only casts to shut up
the compiler/Lint.

Would it make sense to introduce cast macros that clearly indicate what
the programmer wants to do, as in:

#define VALUE_CAST(type, e) ( (type)(e) )
#define WARNING_CAST(type, e) ( (type)(e) )

In the code below the purpose of the cast would be self-explanatory:

unsigned char uc = WARNING_CAST(unsigned char, ui);

Maybe WARNING_CAST could even be augmented by an assert checking that
the source value is in the range of the target type.
Any comments?

Solution

Ralf wrote:

> Regarding numerical types, in my view, casts fall in one of two
> categories:
> 1. Casts that change the value of an object
> 2. Casts that are actually redundant, but get rid of compiler/lint
> warnings

couldn't you just get a better compiler?

<snip>

--
Nick Keighley


Ralf wrote:

> Regarding numerical types, in my view, casts fall in one of
> two categories:
> 1. Casts that change the value of an object

These, I'd call bugs... Casts are decidedly not the language
constructs one should use to change the value of an object.

> 2. Casts that are actually redundant, but get rid of
> compiler/lint warnings

These may not be redundant at all, e.g. where there's a genuine
need to change the type of the object containing the value in
question.

> As an example, consider this code:
>
> unsigned int ui;
> ...
> unsigned char uc = (unsigned char)ui;
>
> Here, it is not clear from the code what the developer wanted
> to achieve:
> 1. It is possible that ui exceeds the unsigned char range
> and the programmer only wants to look at the lower (e.g. 8)
> bits. Basically, he wants to cut off significant bits and
> hence wants to change the original value.

No, the programmer does not really /want/ to change the value;
rather she's just hoping that the high-order bits will all be
zero. It is entirely possible to check for that before casting.
> 2. The developer knows that ui cannot hold values that
> exceed the unsigned char range, so assigning ui to uc is safe
> and doesn't lose bits (i.e. the original value is preserved).
> He only casts to shut up the compiler/Lint.

He may actually cast because the design requires that the value
from this point on resides in an object of a different type.
Think hardware registers in an embedded design here.

> Would it make sense to introduce cast macros that clearly
> indicate what the programmer wants to do, as in:
>
> #define VALUE_CAST(type, e) ( (type)(e) )
> #define WARNING_CAST(type, e) ( (type)(e) )
>
> In the code below the purpose of the cast would be
> self-explanatory:
>
> unsigned char uc = WARNING_CAST(unsigned char, ui);

These are certainly possible, but I don't think they bring much
to the party...

> Maybe WARNING_CAST could be even augmented by an assert
> checking if the source object is in the range of the target
> type. Any comments?



And adding asserts to these would certainly be death by
defensive programming. ;-)

My tuppence, anyway...

Cheers

Vladimir


Nick Keighley wrote:

> Ralf wrote:
>
> > Regarding numerical types, in my view, casts fall in one of
> > two categories:
> > 1. Casts that change the value of an object
> > 2. Casts that are actually redundant, but get rid of
> > compiler/lint warnings
>
> couldn't you just get a better compiler?
>
> <snip>



A better programmer would be preferable... ;-)

Cheers

Vladimir

