How to find out when subprocess has terminated after using os.kill()?
I have a Python program (precisely, a Django application) that starts a subprocess using subprocess.Popen. Due to architectural constraints of my application, I'm not able to use Popen.terminate() to terminate the subprocess or Popen.poll() to check when the process has terminated, because I cannot hold a reference to the started subprocess in a variable. Instead, I have to write the process id pid to a file pidfile when the subprocess starts. When I want to stop the subprocess, I open this pidfile and use os.kill(pid, signal.SIGTERM) to stop it.
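The pidfile mechanism described above can be sketched as follows. This is a minimal illustration, not the question's actual code: the helper names start_and_record and stop_recorded are made up for this sketch, the pidfile path is taken from the question, and a POSIX system is assumed for os.kill.

```python
import os
import signal
import subprocess

PIDFILE = 'scrapy_crawler_process.pid'  # path taken from the question

def start_and_record(command):
    """Start the subprocess and write its pid to the pidfile."""
    process = subprocess.Popen(command)
    with open(PIDFILE, 'w') as pidfile:
        pidfile.write(str(process.pid))
    return process.pid

def stop_recorded():
    """Read the pid back and ask that process to terminate (POSIX only)."""
    with open(PIDFILE) as pidfile:
        pid = int(pidfile.read().strip())
    os.kill(pid, signal.SIGTERM)
    return pid
```

The pidfile decouples the starter from the stopper, which is exactly why the stopper later has no Popen object to poll.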
My question is: how can I find out when the subprocess has really terminated? With signal.SIGTERM, the subprocess takes approximately 1-2 minutes to finally terminate after os.kill() is called. First I thought that os.waitpid() would be the right tool for this task, but when I call it after os.kill() it gives me OSError: [Errno 10] No child processes.
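That error is expected here: os.waitpid() can only reap direct children of the calling process, and the pid read from the pidfile belongs to a process the Django worker did not spawn itself. A small demonstration of both cases (assuming a POSIX system; pid 1 is used as a guaranteed non-child):

```python
import errno
import os
import subprocess
import sys

# os.waitpid() works for a process we spawned ourselves...
child = subprocess.Popen([sys.executable, '-c', 'pass'])
pid, status = os.waitpid(child.pid, 0)  # blocks until the child exits, then reaps it
assert pid == child.pid

# ...but for a pid we did not spawn -- such as one read back from a pidfile
# written by another process -- it raises OSError with errno ECHILD, which
# is exactly "[Errno 10] No child processes" from the question.
try:
    os.waitpid(1, os.WNOHANG)  # pid 1 (init) is never our child
except OSError as exc:
    assert exc.errno == errno.ECHILD
```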
By the way, I'm starting and stopping the subprocess from an HTML template using two forms, and the program logic is inside a Django view. The exception gets displayed in my browser when my application is in debug mode. It's probably also important to know that the subprocess I call in my view (python manage.py crawlwebpages) itself calls another subprocess, namely an instance of a Scrapy crawler. I write the pid of this Scrapy instance to the pidfile, and this is the process I want to terminate.
Here is the relevant code:
def process_main_page_forms(request):
    if request.method == 'POST':
        if request.POST['form-type'] == u'webpage-crawler-form':
            template_context = _crawl_webpage(request)
        elif request.POST['form-type'] == u'stop-crawler-form':
            template_context = _stop_crawler(request)
    else:
        template_context = {
            'webpage_crawler_form': WebPageCrawlerForm(),
            'stop_crawler_form': StopCrawlerForm()}
    return render(request, 'main.html', template_context)
def _crawl_webpage(request):
    webpage_crawler_form = WebPageCrawlerForm(request.POST)
    if webpage_crawler_form.is_valid():
        url_to_crawl = webpage_crawler_form.cleaned_data['url_to_crawl']
        maximum_pages_to_crawl = webpage_crawler_form.cleaned_data['maximum_pages_to_crawl']
        program = 'python manage.py crawlwebpages' + ' -n ' + str(maximum_pages_to_crawl) + ' ' + url_to_crawl
        p = subprocess.Popen(program.split())
    template_context = {
        'webpage_crawler_form': webpage_crawler_form,
        'stop_crawler_form': StopCrawlerForm()}
    return template_context
def _stop_crawler(request):
    stop_crawler_form = StopCrawlerForm(request.POST)
    if stop_crawler_form.is_valid():
        with open('scrapy_crawler_process.pid', 'rb') as pidfile:
            process_id = int(pidfile.read().strip())
        print 'PROCESS ID:', process_id
        os.kill(process_id, signal.SIGTERM)
        os.waitpid(process_id, os.WNOHANG)  # This gives me the OSError
        print 'Crawler process terminated!'
    template_context = {
        'webpage_crawler_form': WebPageCrawlerForm(),
        'stop_crawler_form': stop_crawler_form}
    return template_context
What can I do? Thank you very much!
EDIT:
According to the great answer given by Jacek Konieczny, I could solve my problem by changing the code in the function _stop_crawler(request) to the following:
def _stop_crawler(request):
    stop_crawler_form = StopCrawlerForm(request.POST)
    if stop_crawler_form.is_valid():
        with open('scrapy_crawler_process.pid', 'rb') as pidfile:
            process_id = int(pidfile.read().strip())
        # These are the essential lines
        os.kill(process_id, signal.SIGTERM)
        while True:
            try:
                time.sleep(10)
                os.kill(process_id, 0)
            except OSError:
                break
        print 'Crawler process terminated!'
    template_context = {
        'webpage_crawler_form': WebPageCrawlerForm(),
        'stop_crawler_form': stop_crawler_form}
    return template_context
The usual way to check whether a process is still running is to kill() it with signal 0. That does nothing to a running process, and raises an OSError exception with errno=ESRCH if the process does not exist:
[jajcus@lolek ~]$ sleep 1000 &
[1] 2405
[jajcus@lolek ~]$ python
Python 2.7.3 (default, May 11 2012, 11:57:22)
[GCC 4.6.3 20120315 (release)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.kill(2405, 0)
>>> os.kill(2405, 15)
>>> os.kill(2405, 0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 3] No such process
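The signal-0 check from the session above can be wrapped in a small helper. This is a sketch for POSIX systems; the function name pid_exists is made up here, and the EPERM branch covers the case where the process exists but belongs to another user, so signal 0 is refused rather than failing with ESRCH:

```python
import errno
import os

def pid_exists(pid):
    """Check for a running process via signal 0 (POSIX only)."""
    try:
        os.kill(pid, 0)  # signal 0: nothing is delivered, only error checking is done
    except OSError as exc:
        if exc.errno == errno.ESRCH:  # no such process
            return False
        if exc.errno == errno.EPERM:  # process exists, but is owned by another user
            return True
        raise
    return True
```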
But whenever possible, the caller should remain the parent of the called process and use the wait() family of functions to handle its termination. That is what the Popen object does.
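For contrast, here is a minimal sketch of that parent-side approach (assuming a POSIX system, with sys.executable standing in for any long-running command): when the caller keeps the Popen object, terminate() and wait() replace both the pidfile and the signal-0 polling loop.

```python
import signal
import subprocess
import sys

# The Popen object preserves the parent/child relationship, so no pidfile is needed.
process = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(60)'])

assert process.poll() is None   # still running; poll() never blocks

process.terminate()             # same effect as os.kill(process.pid, signal.SIGTERM)
return_code = process.wait()    # blocks until the child exits, then reaps it

assert return_code == -signal.SIGTERM  # negative value: killed by signal 15
```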