Python interpreter uses up to 130% of my CPU. How is that possible?

Question

I am currently doing some I/O intensive load-testing using python. All my program does is to send HTTP requests as fast as possible to my target server.

To manage this, I use up to 20 threads as I'm essentially bound to I/O and remote server limitations.
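A minimal sketch of that kind of threaded load generator, using only the standard library (the target URL, thread count, and request count here are placeholders, not the asker's actual setup):

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8000/"  # placeholder target server
NUM_THREADS = 20
NUM_REQUESTS = 100

def hit(url):
    # One HTTP GET; urllib blocks on the socket here, and the
    # interpreter releases the GIL while waiting on I/O.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status

def load_test(url=TARGET_URL, workers=NUM_THREADS, total=NUM_REQUESTS):
    # Fire `total` requests across a pool of `workers` threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(hit, [url] * total))
```

Because each worker spends most of its time blocked on the network, adding threads up to the point where the server or the link saturates is what raises throughput here, not raw CPU.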

According to 'top', CPython uses a peak of 130% CPU on my dual-core computer.

How is that possible? I thought the GIL prevented this? Or is it just the way Linux 'counts' the resources consumed by each application?

Answer

100 percent in top refers to a single core. On a dual-core machine, you have up to 200 percent available.
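In top's default (Irix) mode, each fully busy core contributes 100% to a process's figure, so the theoretical ceiling is simply 100 times the core count; a trivial way to compute it on your own machine (a sketch, assuming `os.cpu_count()` reports the cores top sees):

```python
import os

def cpu_ceiling_percent():
    # top's default mode counts 100% per core, so a process's
    # maximum reported CPU usage is 100 * number_of_cores.
    return 100 * os.cpu_count()
```

On the asker's dual-core machine this ceiling is 200%, which is why a 130% reading is perfectly plausible.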

A single-threaded process can only make use of a single core, so it is limited to 100 percent. Since your process has several threads, nothing stops it from making use of both cores.

The GIL only prevents pure-Python bytecode from executing concurrently. Many library calls (including most I/O operations) release the GIL, so there is no problem here either. Contrary to much of the FUD on the internet, the GIL rarely reduces real-world performance, and when it does, there are usually better solutions to the problem than using threads.
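You can observe the GIL being released during blocking calls with a tiny experiment (a sketch, not the asker's program): `time.sleep` drops the GIL while it waits, so N sleeps spread over N threads finish in roughly one sleep interval instead of N intervals.

```python
import threading
import time

def timed_sleeps(n_threads, interval):
    # Each thread blocks inside sleep(); sleep releases the GIL,
    # so all threads wait concurrently rather than one at a time.
    threads = [threading.Thread(target=time.sleep, args=(interval,))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start
```

The same effect applies to socket reads and writes in the asker's load test: while one thread waits on the network, others are free to run, which is how the process climbs past 100% CPU.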
