Why is tzset() a lot slower after forking on Mac OS X?


Question

Calling tzset() after forking appears to be very slow. I only see the slowness if I first call tzset() in the parent process before forking. My TZ environment variable is not set. I dtruss'd my test program and it revealed the child process reads /etc/localtime for every tzset() invocation, while the parent process only reads it once. This file access seems to be the source of the slowness, but I wasn't able to determine why it's accessing it every time in the child process.

Here is my test program foo.c:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

void check(char *msg);

int main(int argc, char **argv) {
  check("before");

  pid_t c = fork();
  if (c == 0) {
    check("fork");
    exit(0);
  }

  wait(NULL);

  check("after");
}

void check(char *msg) {
  struct timeval tv;

  gettimeofday(&tv, NULL);
  time_t start = tv.tv_sec;
  suseconds_t mstart = tv.tv_usec;

  /* time 10,000 consecutive tzset() calls */
  for (int i = 0; i < 10000; i++) {
    tzset();
  }

  gettimeofday(&tv, NULL);
  double delta = (double)(tv.tv_sec - start);
  delta += (double)(tv.tv_usec - mstart)/1000000.0;

  printf("%s took: %fs\n", msg, delta);
}

I compiled and executed foo.c like this:

[muir@muir-work-mb scratch]$ clang -o foo foo.c
[muir@muir-work-mb scratch]$ env -i ./foo
before took: 0.002135s
fork took: 1.122254s
after took: 0.001120s
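
For reference, the per-call /etc/localtime reads in the child can be observed by tracing the program. The exact invocation below is my own suggestion (dtruss's -f flag follows forked children), and the reported syscall names may vary by macOS version:

[muir@muir-work-mb scratch]$ sudo dtruss -f ./foo 2>&1 | grep localtime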

I'm running Mac OS X 10.10.1 (also reproduced on 10.9.5).

I originally noticed the slowness via ruby (Time#localtime slow in child process).

Answer

Ken Thomases's response may be correct, but I was curious about a more specific answer, because the slowness still seems like unexpected behavior for a single-threaded program performing such a simple, common operation after forking. After examining http://opensource.apple.com/source/Libc/Libc-997.1.1/stdtime/FreeBSD/localtime.c (not 100% sure this is the correct source), I think I have an answer.

The code uses passive notifications to determine whether the time zone has changed (as opposed to stat()-ing /etc/localtime every time). It appears that the registered notification token becomes invalid in the child process after forking. Furthermore, the code treats the error from using an invalid token as a positive notification that the time zone has changed, and proceeds to read /etc/localtime every time. I guess this is the kind of undefined behavior you can get after forking? It would be nice if the library noticed the error and re-registered for the notification, though.

Here is the snippet of code from localtime.c that mixes the error value with the status value:

nstat = notify_check(p->token, &ncheck);
if (nstat || ncheck) {

I demonstrated that the registration token becomes invalid after forking, using this program:

#include <notify.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

void bail(char *msg) {
  printf("Error: %s\n", msg);
  exit(1);
}

int main(int argc, char **argv) {
  int token, something_changed, ret;

  if (notify_register_check("com.apple.system.timezone", &token) != NOTIFY_STATUS_OK)
    bail("notify_register_check failed");

  ret = notify_check(token, &something_changed);
  if (ret)
    bail("notify_check #1 failed");
  if (!something_changed)
    bail("expected change on first call");

  ret = notify_check(token, &something_changed);
  if (ret)
    bail("notify_check #2 failed");
  if (something_changed)
    bail("expected no change");

  pid_t c = fork();
  if (c == 0) {
    ret = notify_check(token, &something_changed);
    if (ret) {
      if (ret == NOTIFY_STATUS_INVALID_TOKEN)
        printf("ret is invalid token\n");

      if (!notify_is_valid_token(token))
        printf("token is not valid\n");

      bail("notify_check in fork failed");
    }

    if (something_changed)
      bail("expected not changed");

    exit(0);
  }

  wait(NULL);
}

And ran it like this:

muir-mb:projects muir$ clang -o notify_test notify_test.c 
muir-mb:projects muir$ ./notify_test 
ret is invalid token
token is not valid
Error: notify_check in fork failed
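
As an aside, here is a minimal sketch of what "noticing the error and re-registering" could look like, built from the same notify(3) calls as the demonstration above. This is my own illustration, not a patch to the actual Libc code; the check_tz_changed helper and its conservative "report one change right after re-registering" behavior are assumptions:

#include <notify.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper: if the token has gone stale (e.g. across fork()),
 * re-register for the notification instead of failing forever.  It still
 * reports one conservative "changed" right after re-registering, so a
 * caller like tzset() would re-read /etc/localtime once, not every time. */
static int check_tz_changed(int *token) {
  int changed = 0;
  uint32_t status = notify_check(*token, &changed);

  if (status == NOTIFY_STATUS_INVALID_TOKEN || !notify_is_valid_token(*token)) {
    notify_cancel(*token);  /* harmless if the token is already dead */
    notify_register_check("com.apple.system.timezone", token);
    notify_check(*token, &changed);  /* first check after registering reports a change */
  }

  return changed;
}

int main(void) {
  int token;

  notify_register_check("com.apple.system.timezone", &token);
  check_tz_changed(&token);  /* consume the initial "changed" report */

  pid_t c = fork();
  if (c == 0) {
    /* The inherited token is invalid; the first call recovers and reports
     * one change, and later calls are cheap, valid checks again. */
    printf("child first check:  %d\n", check_tz_changed(&token));
    printf("child second check: %d\n", check_tz_changed(&token));
    exit(0);
  }

  wait(NULL);
  return 0;
}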
