How to make my data types independent of compiler in C


Question



I was studying uC/OS and read this article:

Because different microprocessors have different word lengths, the port of μC/OS-II includes a series of type definitions that ensure portability. Specifically, μC/OS-II's code never makes use of C's short, int, and long data types because they are inherently non-portable. Instead, I defined integer data types that are both portable and intuitive, as shown in Listing 1.1. Also, for convenience, I have included floating-point data types even though μC/OS-II doesn't make use of floating-point. The following is Listing 1.1:

typedef unsigned char  BOOLEAN;   /* Boolean flag: 0 (false) or non-zero (true) */
typedef unsigned char  INT8U;     /* Unsigned  8-bit quantity */
typedef signed   char  INT8S;     /* Signed    8-bit quantity */
typedef unsigned int   INT16U;    /* Unsigned 16-bit quantity */
typedef signed   int   INT16S;    /* Signed   16-bit quantity */
typedef unsigned long  INT32U;    /* Unsigned 32-bit quantity */
typedef signed   long  INT32S;    /* Signed   32-bit quantity */
typedef float          FP32;      /* Single-precision floating point */
typedef double         FP64;      /* Double-precision floating point */
#define BYTE   INT8S              /* Convenience aliases for the types above */
#define UBYTE  INT8U
#define WORD   INT16S
#define UWORD  INT16U
#define LONG   INT32S
#define ULONG  INT32U

My questions are:

1- What does the writer mean by word length (the first bold words in my question body)?

2- Why are the short, int and long data types inherently non-portable?

3- Is typedef a microprocessor directive, and if it is, what is its function?

4- Can I write typedef unsigned char (anything) instead of typedef unsigned char INT8U;?

5- Why did the author write typedef unsigned char INT8U; and then #define UBYTE INT8U? Can't I use typedef unsigned char UBYTE; directly?

6- typedef unsigned char is used twice: once as typedef unsigned char INT8U; and again as typedef unsigned char BOOLEAN;. Why did he do that?

Solution

1- What does the writer mean by word length?

A word is a fundamental unit of memory, like a page -- actually, there's an article on word too, which I won't regurgitate. The significance to C is, like your author says, that it is not always the same but is determined by hardware characteristics. This may be one reason the C standard doesn't dictate the literal size of basic types; the most obvious one to contemplate is the size of pointers, which will be 4 bytes on 32-bit systems and 8 bytes on 64-bit systems, to reflect the size of the address space.
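For example, here is a minimal sketch you can compile on any hosted platform to see this for yourself:

#include <stdio.h>

int main(void)
{
    /* Pointer size tracks the platform's word size: typically 4 bytes
       on a 32-bit system and 8 bytes on a 64-bit system. */
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    return 0;
}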

2- Why are the short, int and long data types inherently non-portable?

More accurately: they're as portable as C itself, but their sizes are not standardized (the standard guarantees only minimum ranges), which makes them useless for the many applications where a fixed, specific size is required.
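For instance, this sketch prints the widths your compiler actually chose; the output differs between, say, a 16-bit microcontroller toolchain and a 64-bit desktop:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The standard only guarantees minimums (int at least 16 bits,
       long at least 32); the real widths are implementation-defined. */
    printf("short: %zu, int: %zu, long: %zu bytes\n",
           sizeof(short), sizeof(int), sizeof(long));
    printf("INT_MAX = %d, LONG_MAX = %ld\n", INT_MAX, LONG_MAX);
    return 0;
}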

3- Is typedef a microprocessor directive, and if it is, what is its function?

No, it's not a processor directive. It's a nice piece of syntactic sugar that lets you declare a new name (an alias) for an existing type.
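A minimal sketch (the alias names here are only examples):

/* typedef is consumed by the compiler, not the preprocessor, and it
   creates an alias for an existing type, not a distinct new type. */
typedef unsigned char INT8U;    /* INT8U now means unsigned char */
typedef INT8U BYTE_COUNT;       /* aliases can themselves be aliased */

INT8U a = 200;                  /* identical to: unsigned char a = 200; */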

4- Can I write typedef unsigned char (anything) instead of typedef unsigned char INT8U;?

Yep, that's the idea. Beware that the C standard doesn't fix the exact size of a char either (it only requires at least 8 bits), although I've never heard of an implementation where it is anything but 8 bits [but someone in the comments has].
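If your code depends on 8-bit chars, you can make that assumption explicit with a compile-time guard; CHAR_BIT comes from <limits.h>:

#include <limits.h>

/* Refuse to compile on any implementation with wider chars. */
#if CHAR_BIT != 8
#error "This code assumes 8-bit chars"
#endif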

5- Why did the author write typedef unsigned char INT8U; and then #define UBYTE INT8U? Can't I use typedef unsigned char UBYTE; directly?

You could, yes. Possibly the author wanted to restrict the number of places where the underlying type is actually chosen. Note that #define is a preprocessor directive: UBYTE is substituted with INT8U textually before compilation ever starts. Neither form affects the generated executable, so the difference is purely one of style and maintainability.
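Side by side, the two styles look like this (UBYTE_ALT is a hypothetical name used only to avoid redefining the author's UBYTE):

/* The author's two-step scheme: the real type is chosen in one place... */
typedef unsigned char INT8U;
#define UBYTE INT8U             /* ...and UBYTE is a textual alias for it */

/* The direct one-step alternative works just as well: */
typedef unsigned char UBYTE_ALT;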

6- typedef unsigned char is used twice: once as typedef unsigned char INT8U; and again as typedef unsigned char BOOLEAN;. Why did he do that?

Again, use of typedefs is a lot about "sugar"; they can make your code cleaner, easier to read, and (presuming they are done properly) more robust. "Boolean" is a CS term derived from mathematics for a type with only two meaningful values: zero (false) or non-zero (true). So in theory it could be implemented with just one bit, but that is neither easy nor, in the end, efficient (there are no processors with 1-bit registers, so they would have to slice, dice, and fake it anyway). Defining a "bool" or "boolean" type is common in C and indicates that the significance of the value is simply true or false -- it works well with, e.g., if (var) (true) and if (!var) (false), since C already evaluates expressions that way (0 and NULL are the only values that pass if (!var)).

Using something like INT8U, by contrast, indicates you are dealing with a value that ranges from 0 to 255 decimal, since it is unsigned. I think putting the U up front (UINT8) is the more common practice, but if you are used to the concepts it is reasonably clear. And of course the typedef/define is not hard to check.
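In use it might look like this (the TRUE/FALSE macros are a common convention, not something the quoted listing defines):

typedef unsigned char BOOLEAN;

#define TRUE  1
#define FALSE 0

static BOOLEAN buffer_full = FALSE;

void producer(void)
{
    if (!buffer_full) {   /* reads naturally: "if the buffer is not full" */
        /* enqueue; C treats 0 as false and any non-zero value as true */
    }
}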


About stdint.h

Integer types are the ones with the greatest range of variation, and in fact the ISO C standard does require that an implementation include definitions for various integer types with certain minimum sizes in stdint.h. These have names like int_least8_t. Of course, types with a real fixed size (not just a minimum) are needed for many things, and most common implementations do provide them. The C99 standard dictates that if they are available, they should be accessible via names following the pattern intN_t (signed) and uintN_t (unsigned), where N is the number of bits. The signed types are also specified as two's complement, so one can work with such values in all kinds of highly portable ways.
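A short sketch of the fixed-width and minimum-width types (the PRI* macros for printing them live in <inttypes.h>):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t      u = 255;       /* exactly 8 bits, unsigned, where available */
    int32_t      s = -123456;   /* exactly 32 bits, two's complement */
    int_least8_t l = -5;        /* at least 8 bits: always provided */

    printf("%" PRIu8 " %" PRId32 " %" PRIdLEAST8 "\n", u, s, l);
    return 0;
}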

As a final note, while I'm not familiar with MicroC, I would not take that documentation as representative of C generally -- it is intended for use in a somewhat restrictive and specialized environment (a 16-bit int, implied by the typedefs, is unusual, so if you ran that code elsewhere, INT16U could be 32 bits, etc.). I'd guess MicroC only conforms to ANSI C, which is the oldest and most minimal standard; evidently it has no stdint.h.
