How do I restart the scrapyd daemon?


Question

I've installed the scrapyd daemon on an EC2 server exactly as described in the documentation. Now I've changed some of the configuration variables in /etc/scrapyd/conf.d/000-default.

How do I get scrapyd to recognize those changes? I assume it involves restarting the daemon, but I can't find any good guidance on how to do so.

One complicating factor: I have a bunch of crawls queued up, and I'd rather not lose them. I think scrapy knows how to quit and resume them gracefully, but this feature isn't well-documented. Any guidance?
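On the resume question: Scrapy itself (independent of scrapyd) supports pausing and resuming crawls by persisting scheduler state to a job directory through the `JOBDIR` setting. A minimal sketch as a settings fragment, with a placeholder directory name:

```python
# settings.py fragment (the crawls/ path is a placeholder): with a job
# directory configured, Scrapy persists the pending-request queue and
# seen-request fingerprints to disk, so a crawl that is stopped cleanly
# can be resumed later instead of starting over. Each crawl needs its
# own directory, so in practice this is more often passed per run with
# `-s JOBDIR=...` on the command line than hard-coded here.
JOBDIR = "crawls/myspider-1"
```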

Answer

It turns out this is pretty simple.

Kill the process like this:

kill -INT $(cat /var/run/scrapyd.pid)

Then restart it like this:

/usr/bin/python /usr/local/bin/twistd -ny /usr/share/scrapyd/scrapyd.tac -u scrapy -g nogroup --pidfile /var/run/scrapyd.pid -l /var/log/scrapyd/scrapyd.log &

As far as I can tell, both commands need to be run as root.
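The two commands can be wrapped into one hypothetical restart helper: stop the process recorded in a pidfile, wait for it to actually exit, then start a replacement and record its pid. For scrapyd you would call it with `INT` and the `twistd` command line from the answer; the demo below uses `TERM` and a dummy `sleep` process so the logic can be exercised without scrapyd installed.

```shell
# Hypothetical restart helper (a sketch, not part of scrapyd itself).
# Usage: restart_daemon <pidfile> <signal> <command...>
restart_daemon() {
  pidfile=$1
  sig=$2
  shift 2
  if [ -f "$pidfile" ]; then
    oldpid=$(cat "$pidfile")
    kill -s "$sig" "$oldpid" 2>/dev/null || true
    # Poll until the old process has actually gone away.
    while kill -0 "$oldpid" 2>/dev/null; do
      sleep 1
    done
  fi
  "$@" &                       # launch the replacement daemon
  echo $! > "$pidfile"
}

# Demo: start a dummy daemon detached from this shell, then restart it.
sh -c 'sleep 300 & echo $! > /tmp/restart-demo.pid'
cp /tmp/restart-demo.pid /tmp/restart-demo.oldpid

restart_daemon /tmp/restart-demo.pid TERM sleep 300
echo "old pid: $(cat /tmp/restart-demo.oldpid), new pid: $(cat /tmp/restart-demo.pid)"
```

For the real scrapyd case you would replace the dummy command with the full `twistd` invocation above (and run the helper as root, as noted).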
