Why C Is Not My Favourite Programming Language


Question


I've been utilising C for lots of small and a few medium-sized personal
projects over the course of the past decade, and I've realised lately
just how little progress it's made since then. I've increasingly been
using scripting languages (especially Python and Bourne shell) which
offer the same speed and yet are far more simple and safe to use. I can
no longer understand why anyone would willingly use C to program
anything but the lowest of the low-level stuff. Even system utilities,
text editors and the like could be trivially written with no loss of
functionality or efficiency in Python. Anyway, here are my reasons. I'd
be interested to hear some intelligent advantages (not
rationalisations) for using C.

No string type
--------------

C has no string type. Huh? Most sane programming languages have a
string type which allows one to just say "this is a string" and let the
compiler take care of the rest. Not so with C. It's so stubborn and
dumb that it only has three types of variable; everything is either a
number, a bigger number, a pointer or a combination of those three.
Thus, we don't have proper strings but "arrays of unsigned integers".
"char" is basically just a really small number. And now we have to
start using unsigned ints to represent multibyte characters.

What. A. Crock. An ugly hack.

Functions for insignificant operations
--------------------------------------

Copying one string from another requires including <string.h> in your
source code, and there are two functions for copying a string. One
could even conceivably copy strings using other functions (if one
wanted to, though I can't imagine why). Why does any normal language
need two functions just for copying a string? Why can't we use the
assignment operator ('=') like for the other types? Oh, I forgot.
There's no such thing as strings in C; just a big continuous stick of
memory. Great! Better still, there's no syntax for:

* string concatenation
* string comparison
* substrings

Ditto for converting numbers to strings, or vice versa. You have to use
something like atol(), or strtod(), or a variant on printf(). Three
families of functions for variable type conversion. Hello? Flexible
casting? Hello?
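
To make this concrete, a rough sketch of what those library calls look
like in place of ordinary operators, using only the standard
<string.h>, <stdio.h> and <stdlib.h> routines:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char buf[64];

    strcpy(buf, "Hello");        /* "assignment": no '=' for char arrays */
    strcat(buf, ", world");      /* concatenation: no '+' operator either */

    if (strcmp(buf, "Hello, world") == 0)   /* comparison: no '==' */
        puts("they match");

    long n = atol("12345");                 /* string to long */
    double d = strtod("3.14", NULL);        /* string to double */
    snprintf(buf, sizeof buf, "%ld %g", n, d);  /* numbers back to text */
    puts(buf);

    return 0;
}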

And don't even get me started on the lack of an exponentiation
operator.

No string type: the redux
-------------------------

Because there's no real string type, we have two options: arrays or
pointers. Array sizes can only be constants. This means we run the risk
of buffer overflow since we have to try (in vain) to guess in advance
how many characters we need. Pathetic. The only alternative is to use
malloc(), which is just filled with pitfalls. The whole concept of
pointers is an accident waiting to happen. You can't free the same
pointer twice. You have to always check the return value of malloc()
and you mustn't cast it. There's no builtin way of telling if a spot of
memory is in use, or if a pointer's been freed, and so on and so forth.
Having to resort to low-level memory operations just to be able to
store a line of text is asking for...
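
A rough sketch of those low-level memory operations: storing a single
line of text safely already involves malloc(), a length calculation, a
NULL check and a matching free():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *line = "just one line of text";

    char *copy = malloc(strlen(line) + 1);   /* +1 for the trailing '\0' */
    if (copy == NULL) {                      /* must check, must not cast */
        perror("malloc");
        return 1;
    }
    strcpy(copy, line);

    puts(copy);
    free(copy);    /* free exactly once, then never touch the pointer again */
    copy = NULL;

    return 0;
}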

The encouragement of buffer overflows
-------------------------------------

Buffer overflows abound in virtually any substantial piece of C code.
This is caused by programmers accidentally putting too much data in one
space or leaving a pointer pointing somewhere because a returning
function ballsed up somewhere along the line. C includes no way of
telling when the end of an array or allocated block of memory is
overrun. The only way of telling is to run, test, and wait for a
segfault. Or a spectacular crash. Or a slow, steady leakage of memory
from a program, agonisingly 'bleeding' it to death.

Functions which encourage buffer overflows
------------------------------------------

* gets()
* strcat()
* strcpy()
* sprintf()
* vsprintf()
* bcopy()
* scanf()
* fscanf()
* sscanf()
* getwd()
* getopt()
* realpath()
* getpass()

The list goes on and on and on. Need I say more? Well, yes I do.

You see, even if you're not writing any memory you can still access
memory you're not supposed to. C can't be bothered to keep track of the
ends of strings; the end of a string is indicated by a null '\0'
character. All fine, right? Well, some functions in your C library,
such as strlen(), perhaps, will just run off the end of a 'string' if
it doesn't have a null in it. What if you're using a binary string?
Careless programming this may be, but we all make mistakes and so the
language authors have to take some responsibility for being so
intolerant.
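
A small illustrative sketch of the kind of trap the list above is
about: gets() takes no length at all and happily writes past the end of
the buffer, while the bounded fgets() alternative at least cannot
overrun it (it merely truncates silently):

#include <stdio.h>

int main(void)
{
    char buf[16];

    /* gets(buf);  -- any input longer than 15 characters overruns buf;
       the function was finally removed from the language in C11 */

    if (fgets(buf, sizeof buf, stdin) != NULL)  /* bounded, but truncates */
        fputs(buf, stdout);

    return 0;
}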

No builtin boolean type
-----------------------

If you don't believe me, just watch:

$ cat > test.c
int main(void)
{
bool b;
return 0;
}

$ gcc -ansi -pedantic -Wall -W test.c
test.c: In function 'main':
test.c:3: 'bool' undeclared (first use in this function)

Not until the 1999 ISO C standard were we finally able to use 'bool' as
a data type. But guess what? It's implemented as a macro and one
actually has to include a header file to be able to use it!
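
For reference, the C99 workaround being complained about looks like
this (a sketch; the underlying keyword is _Bool, and 'bool' itself only
exists once you include the header):

#include <stdbool.h>   /* defines bool, true and false on top of _Bool */
#include <stdio.h>

int main(void)
{
    bool b = true;     /* without <stdbool.h>, only the ugly '_Bool' works */
    if (b)
        puts("finally, a boolean");
    return 0;
}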

High-level or low-level?
------------------------

On the one hand, we have the fact that there is no string type and
little automatic memory management, implying a low-level language. On
the other hand, we have a mass of library functions, a preprocessor and
a plethora of other things which imply a high-level language. C tries
to be both, and as a result spreads itself too thinly.

The great thing about this is that when C is lacking a genuinely useful
feature, such as reasonably strong data typing, the excuse "C's a
low-level language" can always be used, functioning as a perfect
'reason' for C to remain unhelpfully and fatally sparse.

The original intention for C was for it to be a portable assembly
language for writing UNIX. Unfortunately, from its very inception C has
had extra things packed into it which make it fail as an assembly
language. Its kludgy strings are a good example. If it were at least
portable these failings might be forgivable, but C is not portable.

Integer overflow without warning
--------------------------------

Self explanatory. One minute you have a fifteen digit number, then try
to double or triple it and - boom - its value is suddenly
-234891234890892 or something similar. Stupid, stupid, stupid. How hard
would it have been to give a warning or overflow error or even just
reset the variable to zero?

This is widely known as bad practice. Most competent developers
acknowledge that silently ignoring an error is a bad attitude to have;
this is especially true for such a commonly used language as C.
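
A hedged sketch of the silent wrap described above (note that signed
overflow is formally undefined behaviour in C, which is arguably even
worse than a guaranteed wrap):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int n = INT_MAX;    /* 2147483647 where int is 32 bits */

    n = n + 1;          /* no warning, no error: formally undefined,
                           in practice it usually just wraps around */

    printf("%d\n", n);  /* commonly prints -2147483648 */
    return 0;
}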

Portability?!
-------------

Please. There are at least four official specifications of C I could
name from the top of my head and no compiler has properly implemented
all of them. They conflict, and they grow and grow. The problem isn't
subsiding; it's increasing each day. New compilers and libraries are
developed and proprietary extensions are being developed. GNU C isn't
the same as ANSI C isn't the same as K&R C isn't the same as Microsoft
C isn't the same as POSIX C. C isn't portable; all kinds of machine
architectures are totally different, and C can't properly adapt because
it's so muttonheaded. It's trapped in The Unix Paradigm.

If it weren't for the C preprocessor, then it would be virtually
impossible to get C to run on multiple families of processor hardware,
or even just slightly differing operating systems. A programming
language should not require a C preprocessor so that it can run on both
FreeBSD, Linux or Windows without failing to compile.

C is unable to adapt to new conditions for the sake of "backward
compatibility", throwing away the opportunity to get rid of stupid,
utterly useless and downright dangerous functions for a nonexistent
goal. And yet C is growing new tentacles and unnecessary features
because of idiots who think adding seven new functions to their C
library will make life easier. It does not.

Even the C89 and C99 standards conflict with each other in ridiculous
ways. Can you use the long long type or can't you? Is a certain
constant defined by a preprocessor macro hidden deep, deep inside my C
library? Is using a function in this particular way going to be
undefined, or acceptable? What do you mean, getch() isn't a proper
function but getc() and getchar() are?

The implications of this false 'portability'
--------------------------------------------

Because C pretends to be portable, even professional C programmers can
be caught out by hardware and an unforgiving programming language;
almost anything like comparisons, character assignments, arithmetic, or
string output can blow up spectacularly for no apparent reason because
of endianness or because your particular processor treats all chars as
unsigned or silly, subtle, deadly traps like that.

Archaic, unexplained conventions
--------------------------------

In addition to the aforementioned problems, C also has various
idiosyncrasies (invariably unreported) which not even some teachers of
C are aware of:

* "Don't use fflush(stdin)."
* "gets() is evil."
* "main() must return an integer."
* "main() can only take one of three sets of arguments."
* "main() can only return either EXIT_SUCCESS or EXIT_FAILURE."
* "You mustn't cast the return value of malloc()."
* "fileno() isn't an ANSI compliant function."
* "A preprocessor macro oughtn't use any of its arguments more than
once."

....all these unnecessary and unmentioned quirks mean buggy code. Death
by a thousand cuts. Ironic when you consider that Kernighan thinks of
Pascal in the same way when C has just as many little gotchas that
bleed you to death gradually and painfully.

Blaming The Programmer
---------------------

Due to the fact that C is pretty difficult to learn and even harder to
actually use without breaking something in a subtle yet horrific way
it's assumed that anything which goes wrong is the programmer's fault.
If your program segfaults, it's your fault. If it crashes, mysteriously
returning 184 with no error message, it's your fault. When one single
condition you'd just happened to have forgotten about whilst coding
screws up, it's your fault.

Obviously the programmer has to shoulder most of the responsibility for
a broken program. But as we've already seen, C positively tries to make
the programmer fail. This increases the failure rate and yet for some
reason we don't blame the language when yet another buffer overflow is
discovered. C programmers try to cover up C's inconsistencies and
inadequacies by creating a culture of 'tua culpa'; if something's
wrong, it's your fault, not that of the compiler, linker, assembler,
specification, documentation, or hardware.

Compilers have to take some of the blame. Two reasons. The first is
that most compilers have proprietary extensions built into them. Let me
remind you that half of the point of using C is that it should be
portable and compile anywhere. Adding extensions violates the original
spirit of C and removes one of its advantages (albeit an already
diminished advantage).

The other (and perhaps more pressing) reason is the lack of anything
beyond minimal error checking which C compilers do. For every ten types
of errors your compiler catches, another fifty will slip through.
Beyond variable type and syntax checking the compiler does not look for
anything else. All it can do is give warnings on unusual behaviour,
though these warnings are often spurious. On the other hand, a single
error can cause a ridiculous cascade, or make the compiler fall over
and die because of a misplaced semicolon, or, more accurately and
incriminatingly, a badly constructed parser and grammar. And yet,
despite this, it's your fault.

To quote The Unix Haters' Handbook:

"If you make even a small omission, like a single semicolon, a C
compiler tends to get so confused and annoyed that it bursts into tears
and complains that it just can't compile the rest of the file since one
missing semicolon has thrown it off so much."

So C compilers may well give literally hundreds of errors stating that
half of your code is wrong if you miss out a single semicolon. Can it
get worse? Of course it can! This is C!

You see, a compiler will often not deluge you with error information
when compiling. Sometimes it will give you no warning whatsoever even
if you write totally foolish code like this:

#include <stdio.h>

int main()
{
char *p;
puts(p);
return 0;
}

When we compile this with our 'trusty' compiler gcc, we get no errors
or warnings at all. Even when using the '-W' and '-Wall' flags to make
it watch out for dangerous code it says nothing.

$ gcc -W -Wall stupid.c
$

In fact, no warning is given ever unless you try to optimise the
program with a '-O' flag. But what if you never optimise your program?
Well, you now have a dangerous program. And unless you check the code
again you may well never notice that error.

What this section (and entire document) is really about is the sheer
unfriendliness of C and how it is as if it takes great pains to be as
difficult to use as possible. It is flexible in the wrong way; it can
do many, many different things, but this makes it impossible to do any
single thing with it.

Trapped in the 1970s
--------------------

C is over thirty years old, and it shows. It lacks features that modern
languages have such as exception handling, many useful data types,
function overloading, optional function arguments and garbage
collection. This is hardly surprising considering that it was
constructed from an assembler language with just one data type on a
computer from 1970.

C was designed for the computer and programmer of the 1970s,
sacrificing stability and programmer time for the sake of memory.
Despite the fact that the most recent standard is just half a decade
old, C has not been updated to take advantage of increased memory and
processor power to implement such things as automatic memory
management. What for? The illusion of backward compatibility and
portability.

Yet more missing data types
---------------------------

Hash tables. Why was this so difficult to implement? C is intended for
the programming of things like kernels and system utilities, which
frequently use hash tables. And yet it didn't occur to C's creators
that maybe including hash tables as a type of array might be a good
idea when writing UNIX? Perl has them. PHP has them. With C you have to
fake hash tables, and even then it doesn't really work at all.

Multidimensional arrays. Before you tell me that you can do stuff like
int multiarray[50][50][50] I think that I should point out that that's
an array of arrays of arrays. Different thing. Especially when you
consider that you can also use it as a bunch of pointers. C programmers
call this "flexibility". Others call it "redundancy", or, more
accurately, "mess".

Complex numbers. They may be in C99, but how many compilers support
that? It's not exactly difficult to get your head round the concept of
complex numbers, so why weren't they included in the first place? Were
complex numbers not discovered back in 1989?

Binary strings. It wouldn't have been that hard just to make a
compulsory struct with a mere two members: a char * for the string of
bytes and a size_t for the length of the string. Binary strings have
always been around on Unix, so why wasn't C more accommodating?
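
The two-member struct in question might look something like this (a
sketch of the idea, not an actual standard C type):

#include <stddef.h>

/* a counted ("binary-safe") string: the length travels with the bytes,
   so embedded '\0' bytes stop being a problem */
struct bstring {
    char   *bytes;   /* the data, not necessarily NUL-terminated */
    size_t  len;     /* number of bytes actually stored */
};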

Library size
------------

The actual core of C is admirably small, even if some of the syntax
isn't the most efficient or readable (case in point: the combined '? :'
statement). One thing that is bloated is the C library. The number of
functions in a full C library which complies with all significant
standards runs into four digit figures. There's a great deal of
redundancy, and code which really shouldn't be there.

This has knock-on effects, such as the large number of configuration
constants which are defined by the preprocessor (which shouldn't be
necessary), the size of libraries (the GNU C library almost fills a
floppy disk and its documentation, three) and inconsistently named
groups of functions in addition to duplication.

For example, a function for converting a string to a long integer is
atol(). One can also use strtol() for exactly the same thing. Boom -
instant redundancy. Worse still, both functions are included in the
C99, POSIX and SUSv3 standards!
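
Both calls below turn the same text into the same long; the only
practical difference is that strtol() can also report where parsing
stopped (an illustrative sketch):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *text = "42kg";
    char *end;

    long a = atol(text);              /* 42; errors silently become 0 */
    long b = strtol(text, &end, 10);  /* 42; end now points at "kg" */

    printf("%ld %ld leftover=\"%s\"\n", a, b, end);
    return 0;
}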

Can it get worse? Of course it can! This is C!

As a result it's only logical that there's an equivalent pair of atod()
and strtod() functions for converting a string to a double. As you've
probably guessed, this isn't true. They are called atof() and strtod().
This is very foolish. There are yet more examples scattered through the
standard C library like a dog's smelly surprises in a park.

The Single Unix Specification version three specifies 1,123 functions
which must be available to the C programmer of the compliant system. We
already know about the redundancies and unnecessary functions, but
across how many header files are these 1,123 functions spread out? 62.
That's right, on average a C library header will define approximately
eighteen functions. Even if you only need to use maybe one function
from each of, say, five libraries (a common occurrence) you may well
wind up including 90, 100 or even 150 function definitions you will
never need. Bloat, bloat, bloat. Python has the right idea; its import
statement allows you to define exactly the functions (and global
variables!) you need from each library if you prefer. But C? Oh, no.

Specifying structure members
----------------------------

Why does this need two operators? Why do I have to pick between '.' and
'->' for a ridiculous, arbitrary reason? Oh, I forgot; it's just yet
another of C's gotchas.
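
A quick sketch of the two operators in question: '.' for a struct
object, '->' for a pointer to one, even though (*p).x and p->x mean
exactly the same thing:

#include <stdio.h>

struct point { int x, y; };

int main(void)
{
    struct point p = { 1, 2 };
    struct point *pp = &p;

    printf("%d %d\n", p.x, pp->y); /* '.' on the object, '->' via the pointer */
    printf("%d\n", (*pp).x);       /* identical to pp->x, just uglier */
    return 0;
}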

Limited syntax
--------------

A couple of examples should illustrate what I mean quite nicely. If
you've ever programmed in PHP for a substantial period of time, you're
probably aware of the 'break' keyword. You can use it to break out from
nested loops of arbitrary depth by using an integer, like so:

for ($i = 0; $i < 10; $i++) {

for ($j = 0; $j < 10; $j++) {

for ($k = 0; $k < 10; $k++) {
break 2;
}
}

/* breaks out to here */

}

There is no way of doing this in C. If you want to break out from a
series of nested for or while loops then you have to use a goto. This
is what is known as a crude hack.
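
The crude hack in question looks roughly like this in C (a sketch
rendering the PHP example above, not code from the original post):

#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < 10; j++) {
            for (int k = 0; k < 10; k++) {
                goto after_j;     /* stand-in for PHP's "break 2" */
            }
        }
after_j:
        ;                         /* resumes here, still inside the i loop */
    }
    puts("done");
    return 0;
}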

In addition to this, there is no way to compare any non-numerical data
type using a switch statement. Not even strings. In the programming
language D, one can do:

char s[];

switch (s) {

case "hello":
/* something */
break;

case "goodbye":
/* something else */
break;

case "maybe":
/* another action */
break;

default:
/* something */
break;

}

C does not allow you to use switch and case statements for strings. One
must use several variables to iterate through an array of case strings
and compare them to the given string with strcmp(). This reduces
performance and is just yet another hack.
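
The workaround just described might be sketched like this: scan a table
of case strings with strcmp(), then switch on the index you found (an
illustration, not code from the post):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "goodbye";
    const char *cases[] = { "hello", "goodbye", "maybe" };
    size_t i;

    for (i = 0; i < sizeof cases / sizeof cases[0]; i++)
        if (strcmp(s, cases[i]) == 0)
            break;

    switch (i) {
    case 0:  puts("said hello");   break;
    case 1:  puts("said goodbye"); break;
    case 2:  puts("said maybe");   break;
    default: puts("no match");     break;
    }
    return 0;
}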

In fact, this is an example of gratuitous library functions running
wild once again. Even comparing one string to another requires use of
the strcmp() function:

char string[] = "Blah, blah, blah\n";

if (strcmp(string, "something") == 0) {

/* do something */

}

Flushing standard I/O
---------------------

A simple microcosm of the "you can do this, but not that" philosophy of
C; one has to do two different things to flush standard input and
standard output.

To flush the standard output stream, the fflush() function is used
(defined by <stdio.h>). One doesn't usually need to do this after every
bit of text is printed, but it's nice to know it's there, right?

Unfortunately, fflush() can't be used to flush the contents of standard
input. Some C standards explicitly define it as having undefined
behaviour, but this is so illogical that even textbook authors
sometimes mistakenly use fflush(stdin) in examples and some compilers
won't bother to warn you about it. One shouldn't even have to flush
standard input; you ask for a character with getchar(), and the program
should just read in the first character given and disregard the rest.
But I digress...

There is no 'real' way to flush standard input up to, say, the end of a
line. Instead one has to use a kludge like so:

int c;

do {

errno = 0;
c = getchar();

if (errno) {
fprintf(stderr,
"Error flushing standard input buffer: %s\n",
strerror(errno));
}

} while ((c != '\n') && (!feof(stdin)));

That's right; you need to use a variable, a looping construct, two
library functions and several lines of exception handling code to flush
the standard input buffer.

Inconsistent error handling
---------------------------

A seasoned C programmer will be able to tell what I'm talking about
just by reading the title of this section. There are many incompatible
ways in which a C library function indicates that an error has
occurred:

* Returning zero.
* Returning nonzero.
* Returning EOF.
* Returning a NULL pointer.
* Setting errno.
* Requiring a call to another function.
* Outputting a diagnostic message to the user.
* Triggering an assertion failure.
* Crashing.

Some functions may actually use up to three of these methods. (For
instance, fread().) But the thing is that none of these are compatible
with each other and error handling does not occur automatically; every
time a C programmer uses a library function they must check manually
for an error. This bloats code which would otherwise be perfectly
readable without if-blocks for error handling and variables to keep
track of errors. In a large software project one must write a section
of code for error handling hundreds of times. If you forget, something
can go horribly wrong. For example, if you don't check the return value
of malloc() you may accidentally try to use a null pointer. Oops...
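
A sketch of that "check manually every single time" pattern, showing
two of the conventions from the list above in one short function (a
NULL pointer from malloc(), and a short count plus ferror() from
fread()):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(4096);
    if (buf == NULL) {                   /* convention: NULL return */
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    size_t n = fread(buf, 1, 4096, stdin);
    if (n < 4096 && ferror(stdin)) {     /* convention: short count + ferror() */
        fprintf(stderr, "read error\n");
        free(buf);
        return 1;
    }

    printf("read %zu bytes\n", n);
    free(buf);
    return 0;
}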

Commutative array subscripting
------------------------------

"Hey, Thompson, how can I make C''s syntax even more obfuscated and
difficult to understand?"

"How about you allow 5[var] to mean the same as var[5]?"

"Wow; unnecessary and confusing syntactic idiocy! Thanks!"

"You''re welcome, Dennis."

Yes, I understand that array subscripting is just a form of addition
and so it should be commutative, but doesn't it seem just a bit foolish
to say that 5[var] is the same as var[5]? How on earth do you take the
var'th value of 5?
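
If that seems improbable, a quick sketch: because a[i] is defined as
*(a + i), both lines below compile and print the same element:

#include <stdio.h>

int main(void)
{
    int var[10] = { 0, 10, 20, 30, 40, 50, 60, 70, 80, 90 };

    printf("%d\n", var[5]);   /* *(var + 5) -> 50 */
    printf("%d\n", 5[var]);   /* *(5 + var) -> 50, the very same thing */
    return 0;
}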

Variadic anonymous macros
-------------------------

In case you don't understand what variadic anonymous macros are,
they're macros (i.e. pseudofunctions defined by the preprocessor) which
can take a variable number of arguments. Sounds like a simple thing to
implement. I mean, it's all done by the preprocessor, right? And
besides, you can define proper functions with variable numbers of
arguments even in the original K&R C, right?

In that case, why can't I do:

#define error(...) fprintf(stderr, ...)

without getting a warning from GCC?

warning: anonymous variadic macros were introduced in C99

That's right, folks. Not until late 1999, 30 years after development on
the C programming language began, have we been allowed to do such a
simple task with the preprocessor.
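
Once the C99 feature is finally available, the working form uses
__VA_ARGS__ rather than a bare '...' in the replacement text (a sketch
for illustration):

#include <stdio.h>

/* C99 variadic macro: __VA_ARGS__ stands in for the extra arguments */
#define error(...) fprintf(stderr, __VA_ARGS__)

int main(void)
{
    error("something broke: %s (code %d)\n", "no such file", 2);
    return 0;
}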

The C standards don't make sense
--------------------------------

Only one simple quote from the ANSI C standard - nay, a single footnote
- is needed to demonstrate the immense idiocy of the whole thing.
Ladies, gentlemen, and everyone else, I present to you...footnote 82:

All whitespace is equivalent except in certain situations.

I'd make a cutting remark about this, but it'd be too easy.

Too much preprocessor power
---------------------------

Rather foolishly, half of the actual C language is reimplemented in the
preprocessor. (This should be a concern from the start; redundancy
usually indicates an underlying problem.) We can #define fake
variables, fake conditions with #ifdef and #ifndef, and look, there's
even #if, #endif and the rest of the crew! How useful!

Erm, sorry, no.

Preprocessors are a good idea for a language like C. As has been
iterated, C is not portable. Preprocessors are vital to bridging the
gap between different computer architectures and libraries and allowing
a program to compile on multiple machines without having to rely on
external programs. The #define statement, in this case, can be used
perfectly validly to set 'flags' that can be used by a program to
determine all sorts of things: which C standard is being used, which
library, who wrote it, and so on and so forth.

Now, the situation isn't as bad as for C++. In C++, the preprocessor is
so packed with unnecessary rubbish that one can actually use it to
calculate an arbitrary series of Fibonacci numbers at compile-time.
However, C comes dangerously close; it allows the programmer to define
fake global variables with wacky values which would not otherwise be
proper code, and then compare values of these variables. Why? It's not
needed; the C language of the Plan 9 operating system doesn't let you
play around with preprocessor definitions like this. It's all just
bloat.

"But what about when we want to use a constant throughout a program? We
don't want to have to go through the program changing the value each
time we want to change the constant!" some may complain. Well, there's
these things called global variables. And there's this keyword, const.
It makes a constant variable. Do you see where I'm going with this?
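
The alternative being pointed at is simply this (a sketch; the caveat,
skipped over here, is that a const object in C still cannot be used
everywhere a true compile-time constant can):

#include <stdio.h>

#define MAX_USERS 100              /* the preprocessor way: textual substitution */
static const int max_users = 100; /* the language way: a typed, scoped constant */

int main(void)
{
    printf("%d %d\n", MAX_USERS, max_users);
    return 0;
}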

You can do search and replace without the preprocessor, too. In fact,
they were able to do it back in the seventies on the very first
versions of Unix. They called it sed. Need something more like cpp? Use
m4 and stop complaining. It's the Unix way.
