Write a C program to measure time spent in context switch in Linux OS


Question

Can we write a C program to find out the time spent in a context switch in Linux? Could you please share code if you have one? Thanks.

Answer

Profiling the switching time is very difficult, but the in-kernel latency profiling tools, as well as oprofile (which can profile the kernel itself), will help you there.
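As a rough complement to those profiling tools, a common back-of-the-envelope technique (not part of the original answer) is the two-process pipe "ping-pong": both processes are pinned to the same CPU, so every byte bounced between them forces two context switches. The sketch below assumes a Linux system with sched_setaffinity; the file name, the ROUNDS constant, and the choice of CPU 0 are arbitrary, and the figure it prints includes pipe read/write and syscall overhead, so treat it as an upper-bound estimate rather than a pure context-switch time.

/* Compile with: gcc -O2 ctxswitch.c -o ctxswitch
 * (older glibc may need -lrt for clock_gettime)
 * Minimal sketch of the classic pipe ping-pong context-switch estimate. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>

#define ROUNDS 100000

static void pin_to_cpu0(void) {
    /* Pin the calling process to CPU 0 so both sides share one core. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

int main(void) {
    int p2c[2], c2p[2];           /* parent->child and child->parent pipes */
    char byte = 'x';
    if (pipe(p2c) != 0 || pipe(c2p) != 0) {
        perror("pipe");
        return EXIT_FAILURE;
    }
    pin_to_cpu0();

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {               /* child: echo each byte straight back */
        pin_to_cpu0();
        for (int i = 0; i < ROUNDS; i++) {
            if (read(p2c[0], &byte, 1) != 1) exit(EXIT_FAILURE);
            if (write(c2p[1], &byte, 1) != 1) exit(EXIT_FAILURE);
        }
        exit(EXIT_SUCCESS);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ROUNDS; i++) {
        if (write(p2c[1], &byte, 1) != 1) return EXIT_FAILURE;
        if (read(c2p[0], &byte, 1) != 1) return EXIT_FAILURE;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    waitpid(pid, NULL, 0);

    double ns = (end.tv_sec - start.tv_sec) * 1e9
              + (end.tv_nsec - start.tv_nsec);
    /* Each round trip costs two context switches plus pipe overhead. */
    printf("~%.0f ns per context switch (including pipe overhead)\n",
           ns / (2.0 * ROUNDS));
    return 0;
}

Subtracting the cost of the same loop run in a single process (writing and reading its own pipes without a partner) gives a somewhat tighter estimate, since that isolates the pipe overhead from the switching itself.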

For benchmarking interactive application performance, I have written a small tool called latencybench that measures unexpected latency spikes:

// Compile with g++ latencybench.cc -o latencybench -lboost_thread-mt
// Should also work on MSVC and other platforms supported by Boost.

#include <boost/format.hpp>
#include <boost/thread/thread.hpp>
#include <boost/date_time.hpp>
#include <boost/lexical_cast.hpp>
#include <algorithm>
#include <cstdlib>
#include <csignal>
#include <iostream>
#include <map>
#include <string>

volatile bool m_quit = false;

extern "C" void sighandler(int) {
    m_quit = true;
}

std::string num(unsigned val) {
    if (val == 1) return "one occurrence";
    return boost::lexical_cast<std::string>(val) + " occurrences";
}

int main(int argc, char** argv) {
    using namespace boost::posix_time;
    std::signal(SIGINT, sighandler);
    std::signal(SIGTERM, sighandler);
    time_duration duration = milliseconds(10);
    if (argc > 1) {
        try {
            if (argc != 2) throw 1;
            unsigned ms = boost::lexical_cast<unsigned>(argv[1]);
            if (ms > 1000) throw 2;
            duration = milliseconds(ms);
        } catch (...) {
            std::cerr << "Usage: " << argv[0] << " milliseconds" << std::endl;
            return EXIT_FAILURE;
        }
    }
    typedef std::map<long, unsigned> Durations;
    Durations durations;
    unsigned samples = 0, wrongsamples = 0;
    unsigned max = 0;
    long last = -1;
    std::cout << "Measuring actual sleep delays when requesting " << duration.total_milliseconds() << " ms: (Ctrl+C when done)" << std::endl;
    ptime begin = boost::get_system_time();
    while (!m_quit) {
        ptime start = boost::get_system_time();
        boost::this_thread::sleep(start + duration);
        long actual = (boost::get_system_time() - start).total_milliseconds();
        ++samples;
        unsigned num = ++durations[actual];
        if (actual != last) {
            std::cout << "\r  " << actual << " ms " << std::flush;
            last = actual;
        }
        if (actual != duration.total_milliseconds()) {
            ++wrongsamples;
            if (num > max) max = num;
            std::cout << "spike at " << start - begin << std::endl;
            last = -1;
        }
    }
    if (samples == 0) return 0;
    std::cout << "\rTotal measurement duration:  " << boost::get_system_time() - begin << "\n";
    std::cout << "Number of samples collected: " << samples << "\n";
    std::cout << "Incorrect delay count:       " << wrongsamples << boost::format(" (%.2f %%)") % (100.0 * wrongsamples / samples) << "\n\n";
    std::cout << "Histogram of actual delays:\n\n";
    unsigned correctsamples = samples - wrongsamples;
    const unsigned line = 60;
    double scale = 1.0;
    char ch = '+';
    if (max > line) {
        scale = double(line) / max;
        ch = '*';
    }
    double correctscale = 1.0;
    if (correctsamples > line) correctscale = double(line) / correctsamples;
    for (Durations::const_iterator it = durations.begin(); it != durations.end(); ++it) {
        std::string bar;
        if (it->first == duration.total_milliseconds()) bar = std::string(correctscale * it->second, '>');
        else bar = std::string(scale * it->second, ch);
        std::cout << boost::format("%5d ms | %s %d") % it->first % bar % it->second << std::endl;
    }
    std::cout << "\n";
    std::string indent(30, ' ');
    std::cout << indent << "+-- Legend ----------------------------------\n";
    std::cout << indent << "|  >  " << num(1.0 / correctscale) << " (of " << duration.total_milliseconds() << " ms delay)\n";
    if (wrongsamples > 0) std::cout << indent << "|  " << ch << "  " << num(1.0 / scale) << " (of any other delay)\n";
}

Results on the Ubuntu 2.6.32-14-generic kernel. While measuring, I was compiling C++ code on four cores and playing a game with OpenGL graphics at the same time (to make it more interesting):

Total measurement duration:  00:01:45.191465
Number of samples collected: 10383
Incorrect delay count:       196 (1.89 %)

Histogram of actual delays:

   10 ms | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 10187
   11 ms | *************************************************** 70
   12 ms | ************************************************************ 82
   13 ms | ********* 13
   14 ms | ********* 13
   15 ms | ** 4
   17 ms | *** 5
   18 ms | * 2
   19 ms | **** 6
   20 ms |  1

                              +-- Legend ----------------------------------
                              |  >  169 occurrences (of 10 ms delay)
                              |  *  one occurrence (of any other delay)

With rt-patched kernels I get much better results, pretty much 10-12 ms only.

The legend in the printout appears to suffer from a rounding error or something (and the source code pasted is not exactly the same version). I never really polished this application for release...

