Profiling Code on Production


Question

I'm toying around with the idea of implementing something that profiles code on the production server and wanted some best-practice advice. Obviously it's a bad idea to profile ALL requests because of the added overhead so I was looking into some techniques that will randomly invoke the profiler per request. Something like 1 profile per every 10,000 requests.

I know there is a way to achieve such a task with Facebook's XHProf Profiler but was hoping for a similar solution using xdebug.

So my questions are (assuming xdebug is the profiler):

  1. Is this kind of functionality even advisable? I'd like to get some real-world data from the production environment, but not if it means ruining the user experience because of the overhead.
  2. Does having xdebug installed in a production environment open the server up to attackers/exploits in any way (assuming the debugger is not enabled)? Is there a boilerplate configuration for this kind of setup?
  3. What is the best way to trigger the profiler at a suitable sample rate?

Any other insight into the matter would be much appreciated.

Recommended answer

Don't reinvent the wheel. XHProf Profiler is definitely the best tool for the job when it comes to profiling code within a production environment.
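
As a rough sketch, the 1-in-10,000 sampling you describe could be wired up around XHProf itself. The xhprof_enable()/xhprof_disable() functions and the XHProfRuns_Default helper ship with the xhprof extension and its bundled xhprof_lib; the include paths, the sample rate and the "production_sample" namespace below are placeholders you would adapt to your own setup:

    <?php
    // Profile roughly 1 in every 10,000 requests (placeholder rate).
    $profiling = (mt_rand(1, 10000) === 1);

    if ($profiling) {
        // Collect call graph, CPU and memory data for this request.
        xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
    }

    register_shutdown_function(function () use ($profiling) {
        if (!$profiling) {
            return;
        }

        // Stop profiling and persist the run so it can be inspected later
        // (e.g. with XHGUI or the viewer bundled with xhprof).
        $data = xhprof_disable();

        include_once '/path/to/xhprof_lib/utils/xhprof_lib.php';
        include_once '/path/to/xhprof_lib/utils/xhprof_runs.php';

        $runs = new XHProfRuns_Default();
        $runs->save_run($data, 'production_sample');
    });

    // ... normal request handling continues here ...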

Your options for enabling profiling within xdebug are limited to either having profiling always on via a php.ini file or .htaccess file via xdebug.profiler_enable = 1 or selectively turning on profiling via xdebug.profiler_enable_trigger = 1. In the latter case you must have an XDEBUG_PROFILE GET or POST parameter set or send a cookie with the name XDEBUG_PROFILE. This means that should someone mischievous want to, they could slow your server to a crawl by simply appending that GET parameter to a bunch of requests.
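
For reference, a minimal php.ini sketch of those two Xdebug 2.x modes (the output directives are optional and shown here with example values):

    ; Always-on profiling: too much overhead for production.
    ; xdebug.profiler_enable = 1

    ; Selective profiling: only requests carrying the XDEBUG_PROFILE
    ; GET/POST parameter or cookie are profiled.
    xdebug.profiler_enable = 0
    xdebug.profiler_enable_trigger = 1

    ; Where cachegrind output is written (example values).
    xdebug.profiler_output_dir = /tmp
    xdebug.profiler_output_name = cachegrind.out.%t.%p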

The only option I could see that would profile a relatively random sample of requests is to have a cron script place an .htaccess file in the appropriate directory, periodically, and then move it out of the directory. Still, that is less than desirable.
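
To illustrate that workaround, assuming mod_php (so php_value directives in .htaccess take effect, as described above), an .htaccess used only for this purpose, and hypothetical paths:

    # /opt/profiling/htaccess-profile -- copied into place by cron
    php_value xdebug.profiler_enable 1

    # crontab: profile traffic for the first five minutes of every hour
    0 * * * * cp /opt/profiling/htaccess-profile /var/www/app/.htaccess
    5 * * * * rm -f /var/www/app/.htaccess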

If you do decide to go with XHProf take a look at XHGUI.

http://phpadvent.org/2010/profiling-with-xhgui-by-paul-reinheimer
