Process niceness (priority) setting has no effect on Linux


Problem Description


I wrote a test program which consists of just an infinite loop with some computations inside, and performs no I/O operations. I tried starting two instances of the program, one with a high niceness value, and the other with a low niceness value:

sudo nice -n 19 taskset 1 ./test
sudo nice -n -20 taskset 1 ./test

The taskset command ensures that both programs execute on the same core. Contrary to my expectation, top reports that both programs get about 50% of the computation time. Why is that? Does the nice command even have an effect?
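
The test program itself is not shown; any CPU-bound loop that performs no I/O reproduces the situation. A shell stand-in for ./test, purely for illustration (not the asker's actual code):

sh -c 'while :; do :; done'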

Solution

The behavior you are seeing is almost certainly because of the autogroup feature that was added in Linux 2.6.38 (in 2010). Presumably when you described running the two commands, they were run in different terminal windows. If you had run them in the same terminal window, then you should have seen the nice value have an effect. The rest of this answer elaborates the story.

The kernel provides a feature known as autogrouping to improve interactive desktop performance in the face of multiprocess, CPU-intensive workloads such as building the Linux kernel with large numbers of parallel build processes (i.e., the make(1) -j flag).

A new autogroup is created when a new session is created via setsid(2); this happens, for example, when a new terminal window is started. A new process created by fork(2) inherits its parent's autogroup membership. Thus, all of the processes in a session are members of the same autogroup.
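
This can be seen from a shell with the setsid(1) utility (util-linux) and the /proc/[pid]/autogroup file described further below; a quick check, assuming autogrouping is enabled (the autogroup numbers themselves will vary from system to system):

$ cat /proc/self/autogroup                     # autogroup of the current shell's session
$ sh -c 'cat /proc/self/autogroup'             # ordinary fork/exec: same autogroup as the shell
$ setsid sh -c 'cat /proc/self/autogroup'      # new session via setsid(2): a new autogroup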

When autogrouping is enabled, all of the members of an autogroup are placed in the same kernel scheduler "task group". The Linux kernel scheduler employs an algorithm that equalizes the distribution of CPU cycles across task groups. The benefits of this for interactive desktop performance can be described via the following example.

Suppose that there are two autogroups competing for the same CPU (i.e., presume either a single CPU system or the use of taskset(1) to confine all the processes to the same CPU on an SMP system). The first group contains ten CPU-bound processes from a kernel build started with make -j10. The other contains a single CPU-bound process: a video player. The effect of autogrouping is that the two groups will each receive half of the CPU cycles. That is, the video player will receive 50% of the CPU cycles, rather than just 9% of the cycles, which would likely lead to degraded video playback. The situation on an SMP system is more complex, but the general effect is the same: the scheduler distributes CPU cycles across task groups such that an autogroup that contains a large number of CPU-bound processes does not end up hogging CPU cycles at the expense of the other jobs on the system.

The nice value and group scheduling

When scheduling non-real-time processes (e.g., those scheduled under the default SCHED_OTHER policy), the scheduler employs a technique known as "group scheduling", under which threads are scheduled in "task groups". Task groups are formed in various circumstances, with the relevant case here being autogrouping.

If autogrouping is enabled, then all of the threads that are (implicitly) placed in an autogroup (i.e., the same session, as created by setsid(2)) form a task group. Each new autogroup is thus a separate task group.

Under group scheduling, a thread's nice value has an effect for scheduling decisions only relative to other threads in the same task group. This has some surprising consequences in terms of the traditional semantics of the nice value on UNIX systems. In particular, if autogrouping is enabled (which is the default in various Linux distributions), then employing nice(1) on a process has an effect only for scheduling relative to other processes executed in the same session (typically: the same terminal window).

Conversely, for two processes that are (for example) the sole CPU-bound processes in different sessions (e.g., different terminal windows, each of whose jobs are tied to different autogroups), modifying the nice value of the process in one of the sessions has no effect in terms of the scheduler's decisions relative to the process in the other session. This presumably is the scenario you saw, though you don't explicitly mention using two terminal windows.
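
One quick way to confirm this is to launch both instances from a single terminal window, so that they share a session and therefore an autogroup; their nice values are then weighed against each other and top shows a heavily skewed split instead of roughly 50%/50% (the same commands as in the question, just backgrounded from one shell; sudo is only strictly needed for the negative nice value):

sudo nice -n 19 taskset 1 ./test &
sudo nice -n -20 taskset 1 ./test &
top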

If you want to prevent autogrouping from interfering with the traditional nice behavior described here, you can disable the feature:

echo 0 > /proc/sys/kernel/sched_autogroup_enabled

Be aware, though, that this will also have the effect of disabling the benefits for desktop interactivity that the autogroup feature was intended to provide (see above).
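
The echo above takes effect immediately but is not persistent across reboots. On systems with sysctl(8), the same knob can be read and set by name (a sketch; making it persistent would typically go through /etc/sysctl.d/, with details varying by distribution):

sysctl kernel.sched_autogroup_enabled          # 1 = enabled, 0 = disabled
sudo sysctl -w kernel.sched_autogroup_enabled=0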

The autogroup nice value

A process's autogroup membership can be viewed via the file /proc/[pid]/autogroup:

$ cat /proc/1/autogroup
/autogroup-1 nice 0

This file can also be used to modify the CPU bandwidth allocated to an autogroup. This is done by writing a number in the "nice" range to the file to set the autogroup's nice value. The allowed range is from +19 (low priority) to -20 (high priority).
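
For example, to lower the priority of an entire session relative to other autogroups, write the desired value into the file of any process in that session (PID 2836 is hypothetical; raising the value generally needs no special privileges, while lowering it may require CAP_SYS_NICE, as with nice(1)):

$ echo 10 > /proc/2836/autogroup    # run as root if this fails with "Permission denied"
$ cat /proc/2836/autogroup          # the nice field should now read 10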

The autogroup nice setting has the same meaning as the process nice value, but applies to distribution of CPU cycles to the autogroup as a whole, based on the relative nice values of other autogroups. For a process inside an autogroup, the CPU cycles that it receives will be a product of the autogroup's nice value (compared to other autogroups) and the process's nice value (compared to other processes in the same autogroup).
