Using named pipes with bash - Problem with data loss


Problem description

I did some searching online and found simple 'tutorials' on using named pipes. However, when I do anything with background jobs I seem to lose a lot of data.

Using Ubuntu 10.04 with Linux 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux

GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu).

My bash function is:

function jqs
{
  pipe=/tmp/__job_control_manager__
  trap "rm -f $pipe; exit"  EXIT SIGKILL

  if [[ ! -p "$pipe" ]]; then
      mkfifo "$pipe"
  fi

  while true
  do
    if read txt <"$pipe"
    then
      echo "$(date +'%Y'): new text is [[$txt]]"

      if [[ "$txt" == 'quit' ]]
      then
        break
      fi
    fi
  done
}

I run this in the background:

> jqs&
[1] 5336

Now I feed it:

for i in 1 2 3 4 5 6 7 8
do
  (echo aaa$i > /tmp/__job_control_manager__ && echo success$i &)
done

The output is inconsistent. I frequently don't get all the success echoes. I get at most as many new-text echoes as success echoes, sometimes fewer.

If I remove the '&' from the feed, it seems to work, but then I am blocked until the output is read. Hence I want to let the sub-processes get blocked, but not the main process.

The aim is to write a simple job-control script so I can run, say, at most 10 jobs in parallel, queue the rest for later processing, and reliably know that they do run.

The full task manager follows:

function jq_manage
{
  export __gn__="$1"

  pipe=/tmp/__job_control_manager_"$__gn__"__
  trap "rm -f $pipe"    EXIT
  trap "break"      SIGKILL

  if [[ ! -p "$pipe" ]]; then
      mkfifo "$pipe"
  fi

  while true
  do
    date
    jobs
    if (($(jobs | egrep "Running.*echo '%#_Group_#%_$__gn__'" | wc -l) < $__jN__))
    then
      echo "Waiting for new job"
      if read new_job <"$pipe"
      then
    echo "new job is [[$new_job]]"

    if [[ "$new_job" == 'quit' ]]
    then
      break
    fi

    echo "In group $__gn__, starting job $new_job"
    eval "(echo '%#_Group_#%_$__gn__' > /dev/null; $new_job) &"
      fi
    else
      sleep 3
    fi
  done
}

function jq
{
  # __gn__ = first parameter to this function, the job group name (the pool within which to allocate __jN__ jobs)
  # __jN__ = second parameter to this function, the maximum of job numbers to run concurrently

  export __gn__="$1"
  shift
  export __jN__="$1"
  shift

  export __jq__=$(jobs | egrep "Running.*echo '%#_GroupQueue_#%_$__gn__'" | wc -l)
  if (($__jq__ < 1))
  then
    eval "(echo '%#_GroupQueue_#%_$__gn__' > /dev/null; jq_manage $__gn__) &"
  fi

  pipe=/tmp/__job_control_manager_"$__gn__"__

  echo $@ >$pipe
}

Call it like this:

jq <name> <max processes> <command>
jq abc 2 sleep 20

will start one process. That part works fine. Starting a second one, fine. One by one by hand they seem to work fine. But starting 10 in a loop seems to lose the system, as in the simpler example above.
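Presumably the failing case is a loop along these lines, reusing the example job from above:

for i in 1 2 3 4 5 6 7 8 9 10
do
  jq abc 2 sleep 20
done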

Any hints as to what I can do to solve this apparent loss of IPC data would be greatly appreciated.

Regards, Alain.

Answer

Your problem is this if statement:

while true
do
    if read txt <"$pipe"
    ....
done

What is happening is that your job queue server is opening and closing the pipe each time around the loop. This means that some of the clients are getting a "broken pipe" error when they try to write to the pipe - that is, the reader of the pipe goes away after the writer opens it.

To fix this, change the loop in the server to open the pipe once for the entire loop:

while true
do
    if read txt
    ....
done < "$pipe"

Done this way, the pipe is opened once and kept open.

You will need to be careful of what you run inside the loop, as all processing inside the loop will have stdin attached to the named pipe. You will want to make sure you redirect stdin of all your processes inside the loop from somewhere else, otherwise they may consume the data from the pipe.

EDIT: With the problem now being that you are getting EOF on your reads when the last client closes the pipe, you can use jilles' method of duplicating the file descriptors, or you can just make sure you are a client too and keep the write side of the pipe open:

while true
do
    if read txt
    ....
done < "$pipe" 3> "$pipe"

This will hold the write side of the pipe open on fd 3. The same caveat applies to this file descriptor as to stdin: you will need to close it so that child processes don't inherit it. It probably matters less than with stdin, but it would be cleaner.

