python scrapy conversion to exe file using pyinstaller


Problem Description

I am trying to convert a Scrapy script into an exe file. The main.py file looks like this:

from scrapy.crawler import CrawlerProcess
from amazon.spiders.amazon_scraper import Spider

process = CrawlerProcess({
    'FEED_FORMAT': 'csv',
    'FEED_URI': 'data.csv',
    'DOWNLOAD_DELAY': 3,
    'RANDOMIZE_DOWNLOAD_DELAY': True,
    'ROTATING_PROXY_LIST_PATH': 'proxies.txt',
    'USER_AGENT_LIST': 'useragents.txt',
    'DOWNLOADER_MIDDLEWARES':
    {
        'rotating_proxies.middlewares.RotatingProxyMiddleware': 610,
        'rotating_proxies.middlewares.BanDetectionMiddleware': 620,
        'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
        'random_useragent.RandomUserAgentMiddleware': 400
    }
})

process.crawl(Spider)  # pass the spider class, not an instance; CrawlerProcess instantiates it
process.start()  # the script will block here until the crawling is finished

The Scrapy script looks like any other. I am using pyinstaller.exe --onefile main.py to convert it to an exe file. When I try to open the main.exe file inside the dist folder, it starts outputting errors:

FileNotFoundError: [Errno 2] No such file or directory: '...\\scrapy\\VERSION'

I can fix it by creating a scrapy folder inside the dist folder and copying the VERSION file over from lib/site-packages/scrapy. After that, many other errors occur, but I can fix them by copying over more pieces of the scrapy library.

Finally it starts outputting this error:

ModuleNotFoundError: No module named 'email.mime'

I don't even know what it means. I have never seen it before.

I am using:

Python 3.6.5
Scrapy 1.5.0
pyinstaller 3.3.1

Recommended Answer

I had the same situation.
Instead of trying to make PyInstaller include this file (all of my attempts failed), I decided to change part of the scrapy code to avoid this error.

I noticed that there is only one place where the \scrapy\VERSION file is used: \scrapy\__init__.py.
I decided to hardcode that value from scrapy\VERSION by changing scrapy\__init__.py:

# import pkgutil
# Hardcode the version so the bundled VERSION file is never read
__version__ = "1.5.0"  # pkgutil.get_data(__package__, 'VERSION').decode('ascii').strip()
version_info = tuple(int(v) if v.isdigit() else v
                     for v in __version__.split('.'))
# del pkgutil
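As a quick sanity check (not part of the original answer), the hardcoded parsing above produces the same tuple Scrapy would have derived from the VERSION file:

```python
# Reproduce the version parsing with the hardcoded string
__version__ = "1.5.0"
version_info = tuple(int(v) if v.isdigit() else v
                     for v in __version__.split('.'))
print(version_info)  # prints (1, 5, 0)
```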

After this change there is no need to store the version in an external file. Since there is no longer any reference to the \scrapy\VERSION file, that error does not occur.

After that I had the same FileNotFoundError: [Errno 2], this time with the \scrapy\mime.types file.
The situation with \scrapy\mime.types is the same: it is used only in \scrapy\responsetypes.py:

...
#from pkgutil import get_data
...
    def __init__(self):
        self.classes = {}
        self.mimetypes = MimeTypes()
        #mimedata = get_data('scrapy', 'mime.types').decode('utf8')
        mimedata = """
        Copypaste all 750 lines of \scrapy\mime.types here
"""
        self.mimetypes.readfp(StringIO(mimedata))
        for mimetype, cls in six.iteritems(self.CLASSES):
            self.classes[mimetype] = load_object(cls)

This change resolved the FileNotFoundError: [Errno 2] for the \scrapy\mime.types file. I agree that hardcoding 750 lines of text into Python code is not the best decision.
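A less invasive alternative (a sketch, not from the original answer) is to let PyInstaller bundle the data files instead of editing scrapy's source, via its --add-data option. The helper below builds the platform-specific SRC;DEST arguments; the site-packages path is a placeholder you would adjust to your environment:

```python
import os

def add_data_args(pkg_dir, filenames, dest="scrapy"):
    """Build PyInstaller --add-data arguments for data files in pkg_dir.

    PyInstaller separates source and destination with ';' on Windows
    and ':' everywhere else.
    """
    sep = ";" if os.name == "nt" else ":"
    return ["--add-data=%s%s%s" % (os.path.join(pkg_dir, name), sep, dest)
            for name in filenames]

# Example: bundle the two files scrapy reads at runtime
args = add_data_args("lib/site-packages/scrapy", ["VERSION", "mime.types"])
```

With these arguments added to the pyinstaller command line, pkgutil.get_data should find both files inside the frozen executable, so neither hardcode would be needed.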

After that I started to receive ModuleNotFoundError: No module named 'scrapy.spiderloader'. I added "scrapy.spiderloader" to PyInstaller's hidden-imports parameter.
The next issue was ModuleNotFoundError: No module named 'scrapy.statscollectors'.
The final version of the pyinstaller command for my scrapy script contained 46 hidden imports; after that I got a working .exe file.
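The answer does not list the exact 46 modules, but a long command like that is easier to maintain if it is assembled from a list. The module names below are an illustrative subset (an assumption, not the author's full list):

```python
# Assemble a pyinstaller command with multiple hidden imports.
# The module list is an illustrative subset, not the full 46.
hidden = [
    "scrapy.spiderloader",
    "scrapy.statscollectors",
    "email.mime",
]
cmd = ["pyinstaller", "--onefile"]
cmd += ["--hidden-import=%s" % mod for mod in hidden]
cmd.append("main.py")
```

The same list can also go into the hiddenimports argument of the Analysis object in a .spec file, which avoids repeating the flags on every build.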
