Py 2.5 on Language Shootout


Question


The Computer Language Shootout has just published results for
Python 2.5 and Psyco 1.5.2. Comparing the old (Python 2.4) Gentoo
Pentium 4 results (no longer visible) with the new results, I have seen
that all the tests with Python 2.5 are faster than the ones with Python
2.4 (some results can't be compared because N has changed):

Gentoo Pentium 4, Python 2.4.3 measurements:
Program & Logs CPU Time Memory KB GZip N
binary-trees 99.26 15,816 402 16
chameneos Timeout 5.000.000
cheap-concurrency 23,13 5.252 160 15.000
fannkuch 66,38 2.200 395 10
fasta 81,62 9.884 861 2.500.000
k-nucleotide 15,52 15.580 459 250.000
mandelbrot 363,86 2.412 472 3.000
n-body Timeout 20.000.000
nsieve 9,79 34.416 269 9
nsieve-bits 164,72 42.412 320 11
partial-sums 38,64 2.300 410 2.500.000
pidigits 9,22 2.388 391 2.500
recursive 701,64 14.360 344 11
regex-dna 6,21 24.160 326 500.000
reverse-complement 2,7 46.032 272 2.500.000
spectral-norm 696,76 2.456 266 2.500
startup 6,38 29 200
sum-file 8,08 2.364 61 8.000
Regarding Psyco, only two tests are worse (the source code, CPU and OS
are the same):

Old (Python 2.4.3, older Psyco):
nsieve 4.22 22,680 211 9
reverse-complement 1.66 49,336 330 2,500,000

New (Python 2.5, Psyco 1.5.2):
nsieve 4.26 22,904 211 9
reverse-complement 1.75 52,056 330 2,500,000

Bye,
bearophile
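
For context on the Psyco numbers above: a Shootout-style Python entry typically enables Psyco with one call at the top of the script. A minimal sketch, assuming Python 2 (which Psyco requires); the sieve is only an illustrative stand-in, not the actual nsieve benchmark source:

# Minimal sketch: enabling Psyco in a benchmark-style script (Python 2 only).
# The sieve below is an illustrative stand-in, not the real nsieve source.
import sys

try:
    import psyco
    psyco.full()       # JIT-specialize every function Psyco can handle
except ImportError:
    pass               # without Psyco the script still runs on plain CPython

def count_primes(limit):
    # Count primes below limit with a simple sieve of Eratosthenes.
    flags = [True] * limit
    count = 0
    for i in xrange(2, limit):
        if flags[i]:
            count += 1
            for j in xrange(i + i, limit, i):
                flags[j] = False
    return count

if __name__ == '__main__':
    n = int(sys.argv[1]) if len(sys.argv) > 1 else 1000000
    print 'primes below %d: %d' % (n, count_primes(n))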

Answer



Alioth is a great site for selecting the language in which to implement
primitives. Usually it's C.

Two of the alioth benchmarks, Partial-sums and Spectral-norm, could be
done using Numarray, or would be done with Numarray if most of the
program was in Python and there was a need to implement a similar
numerical procedure. The speed would be up near the compiled language
benchmarks. However the specific wording of these benchmarks prohibits
this approach. Spectral-norm must pretend the dataset is infinite, and
Partial-sums has to be implemented in a simple dumb loop.
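
To make the distinction concrete, here is a small sketch of a partial-sums style series computed both ways. It is not the benchmark's actual source; NumPy is used as a stand-in for the older Numarray API, and the series choice is illustrative:

# Sketch only: one partial-sums style series done as the rules require
# (a simple dumb loop) and as a vectorized array expression.  NumPy stands
# in for the older Numarray API; the series is illustrative.
import numpy as np

def zeta2_loop(n):
    # The "simple dumb loop" the benchmark wording asks for.
    total = 0.0
    for k in range(1, n + 1):
        total += 1.0 / (k * k)
    return total

def zeta2_vectorized(n):
    # The disallowed shortcut: the whole loop runs in compiled code inside NumPy.
    k = np.arange(1, n + 1, dtype=np.float64)
    return (1.0 / (k * k)).sum()

if __name__ == '__main__':
    n = 2500000
    print(zeta2_loop(n))        # roughly n interpreted bytecode iterations
    print(zeta2_vectorized(n))  # one array expression, near compiled speed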

Looking over the benchmarks, one gains the impression that Python is a
slow language.

My first serious Python programming exercise involved converting a 900-line
Bash shell program to a 500-line Python program, with a speedup factor
of 17. Using Python allowed an OO structure and advanced containers,
meaning the program was more maintainable and portable, which were the
main aims of the exercise. The speedup was a surprising and welcome side
benefit. I think it was mostly because the Python byte-code interpreter
is probably an order of magnitude faster than Bash's direct
interpretation, and because in Python the system calls to recurse
directories and create symbolic links were not forked to separate
processes. In fact I would guess that the overall speed of the Python
program would be only a little less than that of a C program, given that
most of the time would be spent in system calls.
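
As a rough sketch of the kind of work that stays in-process in Python (the paths and the mirror-a-tree behaviour below are hypothetical placeholders, since the original program is not shown), directory recursion and symlink creation go through direct system calls rather than forking mkdir and ln:

# Rough sketch: recurse a directory tree and create symbolic links without
# forking any external processes.  The paths and the mirroring behaviour
# are hypothetical placeholders, not the original program from this post.
import os

def mirror_with_symlinks(src_root, dst_root):
    # Recreate src_root's directory layout under dst_root, replacing each
    # file with a symlink back to the original, all via os.makedirs and
    # os.symlink -- no fork/exec of mkdir or ln for every file.
    for dirpath, dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.normpath(os.path.join(dst_root, rel))
        if not os.path.isdir(target_dir):
            os.makedirs(target_dir)
        for name in filenames:
            link_path = os.path.join(target_dir, name)
            if not os.path.lexists(link_path):
                os.symlink(os.path.join(dirpath, name), link_path)

if __name__ == '__main__':
    mirror_with_symlinks('/data/source', '/data/mirror')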

It's almost possible to make a large Python program arbitrarily fast by
profiling it and implementing slow bits as primitives. Size is probably
of greater concern.
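
A minimal sketch of that profile-first workflow; the myapp module and its run() function are hypothetical placeholders for whatever large program is being tuned:

# Minimal sketch of the profile-then-replace workflow.  "myapp" and its
# run() function are hypothetical placeholders for the real program.
import cProfile
import pstats

import myapp  # hypothetical: the large Python program being tuned

def profile_run():
    profiler = cProfile.Profile()
    profiler.enable()
    myapp.run()                      # exercise a representative workload
    profiler.disable()

    stats = pstats.Stats(profiler)
    stats.sort_stats('cumulative')   # show where the time actually goes
    stats.print_stats(10)            # the top handful is usually enough
    # Only the few functions dominating this listing are candidates for
    # rewriting as primitives (a C extension, or a call into an existing
    # C-backed library); the rest can stay as ordinary Python.

if __name__ == '__main__':
    profile_run()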



pg******@acay.com.au wrote:

> Alioth is a great site for selecting the language in which to implement
> primitives. Usually it's C.




And for selecting a language for which you might need to implement
primitives in C :-)



> Two of the alioth benchmarks, Partial-sums and Spectral-norm, could be
> done using Numarray, or would be done with Numarray if most of the
> program was in Python and there was a need to implement a similar
> numerical procedure. The speed would be up near the compiled language
> benchmarks. However the specific wording of these benchmarks prohibits
> this approach. Spectral-norm must pretend the dataset is infinite, and
> Partial-sums has to be implemented in a simple dumb loop.




And we wouldn't use a naïve recursive algorithm to find Fibonacci
numbers ... unless we were interested in recursion for its own sake.

Maybe the author of spectral-norm was interested in function calls.
Maybe the author of partial-sums was interested in simple dumb loops
and simple dumb math.
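
For the sake of that example, the naive doubly recursive Fibonacci next to its obvious linear rewrite; this is a generic illustration, not code taken from the benchmarks:

# Generic illustration, not Shootout source: the naive doubly recursive
# Fibonacci is mostly a function-call benchmark, while the iterative
# rewrite gives the same answers in linear time.
def fib_recursive(n):
    # Exponential time: roughly 2.7 million calls for n = 30.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # The straightforward linear-time version.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == '__main__':
    print(fib_iterative(30))   # 832040
    print(fib_recursive(30))   # same answer, vastly more work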


> Looking over the benchmarks, one gains the impression that Python is a
> slow language.




What does that even mean - a slow language?


> My first serious Python programming exercise involved converting a 900-line
> Bash shell program to a 500-line Python program, with a speedup factor
> of 17. Using Python allowed an OO structure and advanced containers,
> meaning the program was more maintainable and portable, which were the
> main aims of the exercise. The speedup was a surprising and welcome side
> benefit. I think it was mostly because the Python byte-code interpreter
> is probably an order of magnitude faster than Bash's direct
> interpretation, and because in Python the system calls to recurse
> directories and create symbolic links were not forked to separate
> processes. In fact I would guess that the overall speed of the Python
> program would be only a little less than that of a C program, given that
> most of the time would be spent in system calls.




/I would guess/



> It's almost possible to make a large Python program arbitrarily fast by
> profiling it and implementing slow bits as primitives. Size is probably
> of greater concern.



我们可以简单地读到 - /它是不可能/任意快速地制作一个大的

Python程序。这是你的意思吗?

We could read that simply as - /it's not possible/ to make a large
Python program arbitrarily fast. Is that what you meant?


Isaac Gouy wrote:
> pg******@acay.com.au wrote:

>> Alioth is a great site for selecting the language in which to implement
>> primitives. Usually it's C.





> And for selecting a language for which you might need to implement
> primitives in C :-)




Well if you like C so much, just do it in C. ":-)"





>> Two of the alioth benchmarks, Partial-sums and Spectral-norm, could be
>> done using Numarray, or would be done with Numarray if most of the
>> program was in Python and there was a need to implement a similar
>> numerical procedure. The speed would be up near the compiled language
>> benchmarks. However the specific wording of these benchmarks prohibits
>> this approach. Spectral-norm must pretend the dataset is infinite, and
>> Partial-sums has to be implemented in a simple dumb loop.





> And we wouldn't use a naïve recursive algorithm to find Fibonacci
> numbers ... unless we were interested in recursion for its own sake.
>
> Maybe the author of spectral-norm was interested in function calls.
> Maybe the author of partial-sums was interested in simple dumb loops
> and simple dumb math.




I am not disputing this. I think you take my point though.



>> Looking over the benchmarks, one gains the impression that Python is a
>> slow language.





> What does that even mean - a slow language?




The alioth benchmarks provide a set of numbers by which
languages may be compared.



>> My first serious Python programming exercise involved converting a 900-line
>> Bash shell program to a 500-line Python program, with a speedup factor
>> of 17. Using Python allowed an OO structure and advanced containers,
>> meaning the program was more maintainable and portable, which were the
>> main aims of the exercise. The speedup was a surprising and welcome side
>> benefit. I think it was mostly because the Python byte-code interpreter
>> is probably an order of magnitude faster than Bash's direct
>> interpretation, and because in Python the system calls to recurse
>> directories and create symbolic links were not forked to separate
>> processes. In fact I would guess that the overall speed of the Python
>> program would be only a little less than that of a C program, given that
>> most of the time would be spent in system calls.





> /I would guess/




I don't have the time, or interest, to recode it in C to find out.
In reality the choice would be C++ because of OO and the STL.
Perhaps traversing and linking a tree containing about 1000 files would
not take a full second. I might be wrong. All I know is, it's a lot
faster than Bash.




>> It's almost possible to make a large Python program arbitrarily fast by
>> profiling it and implementing slow bits as primitives. Size is probably
>> of greater concern.





> We could read that simply as - /it's not possible/ to make a large
> Python program arbitrarily fast. Is that what you meant?




No. I meant that if my Python program is too big in terms of its
execution memory requirements, then that would be a difficult issue to
deal with. Rather than optimizing execution hotspots, I might have to
use another language.

Cheers,
P.S. Alioth is a great site.

