Python: Is there a way to keep an automatic conversion from int to long int from happening?


Problem description

Python is more strongly typed than other scripting languages. For example, in Perl:

perl -E '$c=5; $d="6"; say $c+$d'   #prints 11

But in Python:

>>> c="6"
>>> d=5
>>> print c+d
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: cannot concatenate 'str' and 'int' objects

Perl will inspect a string and convert to a number, and the + - / * ** operators work as you expect with a number. PHP is similar.

Python uses + to concatenate strings, so the attempted operation c+d fails because c is a string and d an int. Python has a stronger sense of numeric types than Perl does. OK -- I can deal with that.

But consider:

>>> from sys import maxint
>>> type(maxint)
<type 'int'>
>>> print maxint
9223372036854775807
>>> type(maxint+2)
<type 'long'>
>>> print maxint+2
9223372036854775809
>>> type((maxint+2)+maxint)
<type 'long'>
>>> print ((maxint+2)+maxint)
18446744073709551616

Now Python will autopromote from an int, which in this case is 64 bits long (OS X, Python 2.6.1), to a Python long int, which has arbitrary precision. Even though the types are not the same, they are similar, and Python allows the usual numeric operators to be used. Usually this is helpful -- it smooths over the differences between 32-bit and 64-bit platforms, for example.
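
For reference, the opposite behavior -- C-style two's-complement wraparound instead of autopromotion -- can be emulated by masking results back into 64 bits. This is a minimal sketch of my own (the wrap64 helper is not from the post), written for modern Python where all ints are arbitrary precision:

```python
MASK64 = (1 << 64) - 1

def wrap64(n):
    """Reduce n modulo 2**64 and reinterpret as a signed 64-bit value,
    mimicking C's two's-complement wraparound."""
    n &= MASK64
    return n - (1 << 64) if n >= (1 << 63) else n

maxint = (1 << 63) - 1           # the value sys.maxint has on a 64-bit build
print(wrap64(maxint + 2))        # wraps negative, as a C int64 would
```

Masking after every operation keeps intermediate values small, which is exactly what prevents the slow arbitrary-precision path described below.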

The conversion from int to long is one-way:

>>> type((maxint+2)-2)
<type 'long'>

Once the conversion is made, all operations on that variable are done in arbitrary precision. The arbitrary-precision operations are orders of magnitude slower than native int operations. On a script I am working on, some executions were snappy while others stretched into hours because of this. Consider:

>>> print maxint**maxint        # execution so long it is essentially a crash

So my question: Is there a way to defeat or not allow the auto-promotion of a Python int to a Python long?

Edit, follow-up:

I received several comments of the form 'why on earth would you want C-style overflow behavior?' The issue was that this particular piece of code worked OK on 32 bits in C and in Perl (with use int), relying on C's overflow behavior. There was a failed attempt to port this code to Python, and Python's different overflow behavior turned out to be (part of) the problem. The code has many different idioms (C, Perl, some Python) mixed in, along with comments in the same mix, so it was challenging.
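
If the goal during such a port is literally C's fixed-width behavior, one option (my suggestion, not from the original answers) is ctypes: its fixed-width integer types do no overflow checking and simply truncate on construction:

```python
import ctypes

maxint = 2**63 - 1
# c_int64 keeps only the low 64 bits, giving two's-complement wraparound
print(ctypes.c_int64(maxint + 2).value)
```

The cost is that every operation needs an explicit round-trip through `.value`, so this suits a few hot spots better than a whole codebase.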

Essentially, the image analysis being done is a disc-based high-pass filter to perform similar-image comparison. Part of the high-pass filter involves an integer-based multiplication of two large polynomials. The overflow was essentially a "don't-care, it's big..." kind of logic, so the result was as intended with C-based overflow. So the use of Horner's rule with O(n^2) time was a waste, since the larger polynomials would just be "big" -- a rough-justice form of carrot-top saturation arithmetic.

Changing the loop-based polynomial multiplication to a form of FFT would probably be significantly faster: FFT runs in close to linear time versus O(n^2) for Horner's-rule polynomial multiplication. Going from disc-based to in-memory processing would also speed things up. The images are not terribly big, but the original code was written at a time when they were considered "huge!!!" The code owner is not quite ready to trash his beloved code, so we'll see. The 'right answer' for him is probably just to keep Perl or C if he wants that code.
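
To make the FFT idea concrete, here is a self-contained sketch of polynomial multiplication via a recursive radix-2 FFT (my own illustration; note the caveat that double-precision rounding limits how large exact integer coefficients can get, so the genuinely huge coefficients above would need an exact method such as a number-theoretic transform):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul_fft(a, b):
    """Multiply two integer polynomials (coefficient lists, lowest degree
    first) in O(n log n) time via FFT-based convolution."""
    n = 1
    while n < len(a) + len(b) - 1:
        n <<= 1
    fa = fft([complex(x) for x in a] + [0j] * (n - len(a)))
    fb = fft([complex(x) for x in b] + [0j] * (n - len(b)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    # Divide by n for the inverse transform and round back to integers
    return [int(round((x / n).real)) for x in prod[:len(a) + len(b) - 1]]

print(poly_mul_fft([1, 2], [3, 4]))   # (1+2x)(3+4x) = 3 + 10x + 8x^2
```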

Thanks for the answers. I did not know about Python's decimal module, which seems to be closest to what I was asking for -- even though there are other issues to solve in this case!

Accepted answer

So you want to throw out the One True Way and go retro on overflows. Silly you.

There is no good upside to the C / C++ / C# / Java style of overflow. It does not reliably raise an error condition. For C and C99, signed overflow is "undefined behavior" in ANSI and POSIX (C++ mandates a modulo return for unsigned types), and it is a known security risk. Why do you want this?

The Python method of seamlessly overflowing to a long is the better way. I believe the same behavior is being adopted by Perl 6.

You can use the decimal module to get more finite overflows:

>>> from decimal import *
>>> from sys import maxint
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999, capitals=1,
flags=[], traps=[DivisionByZero, Overflow, InvalidOperation])

>>> d=Decimal(maxint)
>>> d
Decimal('9223372036854775807')
>>> e=Decimal(maxint)
>>> f=d**e
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 2225, in __pow__
    ans = ans._fix(context)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 1589, in _fix
    return context._raise_error(Overflow, 'above Emax', self._sign)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 3680, in _raise_error
    raise error(explanation)
decimal.Overflow: above Emax

You can set your precision and boundary conditions with Decimal contexts, and the overflow is nearly immediate. You can set which conditions you trap. You can set your max and min. Really -- how does it get better than this? (I don't know about relative speed, to be honest, but I suspect it is faster than numpy yet obviously slower than native ints...)
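
As a concrete illustration of a custom context (mine, not the answerer's), a small Emax turns a huge power into an immediate exception instead of an hours-long computation:

```python
from decimal import Context, Decimal, Overflow

# Bounds roughly the size of a 64-bit int: values above 10**18 overflow
ctx = Context(prec=19, Emax=18, Emin=-18)

try:
    ctx.power(Decimal(2), Decimal(200))
except Overflow:
    print("overflowed immediately instead of grinding away")
```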

For your specific issue of image processing, this sounds like a natural application for some form of saturation arithmetic. You might also, if you are getting overflows in 32-bit arithmetic, check operands along the way in the obvious cases: pow, **, *. You might consider overloaded operators that check for the conditions you don't want.
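
A minimal sketch of the overloaded-operator idea, clamping to the 64-bit range (the class name and exact behavior are my own illustration):

```python
INT64_MAX = 2**63 - 1
INT64_MIN = -2**63

class SatInt(object):
    """Saturating 64-bit integer: results clamp at the bounds instead of
    promoting to an ever-growing arbitrary-precision value."""
    def __init__(self, value):
        self.value = max(INT64_MIN, min(INT64_MAX, value))

    def __add__(self, other):
        return SatInt(self.value + other.value)

    def __mul__(self, other):
        return SatInt(self.value * other.value)

    def __repr__(self):
        return 'SatInt(%d)' % self.value

print(SatInt(INT64_MAX) + SatInt(2))   # clamps at INT64_MAX
```

The intermediate sum or product is still computed in arbitrary precision, but clamping immediately keeps every stored value small, so chains of operations never wander into the slow long-arithmetic regime.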

If Decimal, saturation, or overloaded operators don't work -- you can write an extension. Heaven help you if you want to throw out the Python way of overflow to go retro...
