PHP trying to allocate 127 TB of memory and memory leak with preg_match and preg_replace


Question


I think I have found a problem that seems to create a memory leak in Apache / PHP when unicode characters are used as a delimiter, or sometimes anywhere in a regular expression, with preg_match and preg_replace. It is possible that this happens with other preg_* functions as well.

Test case 1

Create a new PHP file test.php with the following contents:

<?php
    preg_match( '°test°i', 'test', $matches );

Test case 2

Create a new PHP file test.php with the following contents:

<?php
    preg_match( '°', 'test', $matches );


The unicode character ° used as a delimiter is the degree sign. Try any other unicode character to see what happens, if you like.
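
For comparison, here is a minimal sketch of the same match written with a conventional ASCII delimiter and the u (UTF-8) modifier, plus an explicit error check. This is not part of the original report, just the style that avoids putting a multibyte character in the delimiter position:

<?php
    // Same test, but with an ASCII delimiter (~) and the u modifier,
    // so the pattern and subject are treated as UTF-8.
    $result = preg_match( '~test~iu', 'test', $matches );

    if ( $result === false ) {
        // preg_match() returns false on failure; preg_last_error()
        // reports why (e.g. PREG_INTERNAL_ERROR, PREG_BAD_UTF8_ERROR).
        var_dump( preg_last_error() );
    } else {
        var_dump( $matches );
    }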


Having uploaded the file to a webserver with Apache 2.4.10 (Debian) and PHP 5.6.0-1+b1, run it from your favourite browser. Expect to see a blank page or a message saying either "invalid response" or "this page could not be loaded".


This will result in the following two lines in your Apache error.log (usually /var/log/error.log):

[Mon Dec 15 10:31:09.941622 2014] [:error] [pid 6292] [client ###.###.###.###:64413] PHP Warning:  preg_match():  in /path/to/test.php on line 2
[Mon Dec 15 10:31:09.941796 2014] [:error] [pid 6292] [client ###.###.###.###:64413] PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 139979759686648 bytes) in Unknown on line 0


Note that the number of bytes PHP tried to allocate is just over 127 terabytes.
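
For reference, the conversion from the logged byte count to terabytes is simple arithmetic (this calculation is mine, not from the original post):

<?php
    // Bytes from the log entry divided by 1024^4 bytes per TiB:
    echo 139979759686648 / pow( 1024, 4 ), "\n";   // ~127.31
    // The exhausted memory_limit, for comparison:
    echo 134217728 / pow( 1024, 2 ), "\n";         // 128 (MB)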


Running PHP scripts after trying out the above script will result in all kinds of notices or fatal errors that pop up, even in code that shouldn't be able to produce them. For instance, autoloading extended classes does not seem to work correctly anymore and may display errors like the following:

Class 'MyClass' not found in MyExtendingClass.php on line 3


And the file MyExtendingClass.php would look like this:

<?php
    class MyExtendingClass extends MyClass
    {
    }


As you can see, MyClass is clearly referenced on line 2, and even though it does exist and the autoloader has been set up correctly, PHP can't find it anymore.
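
For context, a minimal sketch of the kind of autoloader setup being described; the directory layout and registration code are my assumptions for illustration, not taken from the original post:

<?php
    // Hypothetical autoloader: map a class name to a file in the same
    // directory, e.g. MyClass -> MyClass.php, MyExtendingClass -> MyExtendingClass.php.
    spl_autoload_register( function ( $className ) {
        $file = __DIR__ . '/' . $className . '.php';
        if ( is_file( $file ) ) {
            require $file;
        }
    } );

    // With a correctly working autoloader this loads MyExtendingClass and,
    // through the extends clause, MyClass as well.
    $instance = new MyExtendingClass();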


Obviously, don't use unicode characters in regular expressions. But why does PHP leak memory when using certain unicode characters? Is there an explanation for this behavior? I'd like to know why PHP thinks it should allocate such a vast amount of bytes.


Apache/2.4.10 (Debian) PHP/5.6.0-1+b1 OpenSSL/1.0.1i configured

Answer


I'm having a similar error, with PHP trying to load into RAM something slightly bigger than 127 TB, but for me it happens AFTER the script is finished.

PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 140683487706824 bytes) in Unknown on line 0


I'm using PHP version 5.4.39-0+deb7u2, and I see it happen after the script execution because the script itself works just fine and shows no issues. I actually tried logging the script execution time and RAM usage and it was all just fine, until the end of the script.


My issue is most likely related to using the dblib + FreeTDS combination to connect to a remote MSSQL server from Debian, which is a known bug (#64511). It's actually reported as fixed, but in my case it doesn't seem to be.


It's actually very hard to pin down any rules for this behavior, but here's what I see.


  • I connect to the remote MSSQL server

  • I execute the query (which is actually a call to a stored procedure, immediately followed by a SELECT query); a rough sketch of this setup is shown below
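
To make the snippets below easier to follow, here is a rough sketch of that setup; the DSN, credentials and procedure name are placeholders of mine, not values from the answer:

<?php
    // Hypothetical dblib/FreeTDS connection; host, database, user, password
    // and procedure name are placeholders.
    $pdo = new PDO( 'dblib:host=mssql.example.com;dbname=mydb', 'user', 'pass' );
    $pdo->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

    // One statement that calls the stored procedure and then immediately
    // runs a SELECT, so the result contains more than one rowset.
    $PDOquery = $pdo->prepare( 'EXEC dbo.my_procedure; SELECT 1 AS exit_code;' );
    $PDOquery->execute();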


After execution, if I would use something like this code to fetch the results:

$sprocResultSet = $PDOquery->fetchAll(PDO::FETCH_ASSOC);
$PDOquery->nextRowset();
$sprocExitCode = $PDOquery->fetchAll(PDO::FETCH_ASSOC);


in most cases, I'd get that weird RAM allocation error, with supposedly terabytes being loaded.


When, instead of specifying the fetch style inside fetchAll(), I used setFetchMode(), like here:

$PDOquery->setFetchMode(PDO::FETCH_ASSOC);
$sprocResultSet = $PDOquery->fetchAll();
$PDOquery->nextRowset();
$sprocExitCode = $PDOquery->fetchAll();


then I never noticed the same error with allocating RAM.


I did try closing the cursors and nulling the $PDOquery variable, or just letting it auto-close at the end of the script - none of that helped. Also, maybe important to mention - both the sproc and the additional SELECT query after it return just one row of data, so there's definitely no big result set coming back.
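
The cleanup that was tried would look roughly like this (my reconstruction; the answer does not show this code):

$PDOquery->closeCursor();   // release any pending result sets
$PDOquery = null;           // drop the statement object explicitly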


So... I'm not sure if this helps in all cases, but if you have a similar situation to mine, try setting that default fetch mode for PDO in advance.
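
A minimal sketch of "setting the default fetch mode in advance"; doing it once on the connection is my assumption, the answer itself only shows setFetchMode() on the statement:

// Per statement, as in the working example above:
$PDOquery->setFetchMode( PDO::FETCH_ASSOC );

// Or once per connection, so every statement defaults to FETCH_ASSOC:
$pdo->setAttribute( PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_ASSOC );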


And just two things to add. First, the code above uses PDO's nextRowset(), which is also reported as a bug that causes memory problems: nextRowset causes memory corruption. Second, PDO's fetch() is NOT used, because whenever I used it, it would die with a DBLIB error.
