
Steps to Package a Scrapy Project with PyInstaller

1. Install PyInstaller: https://www.jb51.net/article/177160.htm

2. Install pywin32: https://www.jb51.net/article/187388.htm

3. Install any other modules your project needs.

Note:

A Scrapy project packaged with PyInstaller cannot be launched with

cmdline.execute('scrapy crawl douban -o test.csv --nolog'.split())

so I use the CrawlerProcess approach to run the spider and produce the output instead.

Here is an example:

1. In the root directory of the Scrapy project, create a crawl.py (you can name it whatever you like), as shown in the figure below.

[Figure: crawl.py placed in the root of the Scrapy project]

The crawl.py code is as follows:

# -*- coding: utf-8 -*-
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from douban.spiders.douban_spider import Douban_spider

# imports needed for packaging: Scrapy loads these modules dynamically,
# so PyInstaller cannot detect them and they must be imported explicitly
import urllib.robotparser
import scrapy.spiderloader
import scrapy.statscollectors
import scrapy.logformatter
import scrapy.dupefilters
import scrapy.squeues
import scrapy.extensions.spiderstate
import scrapy.extensions.corestats
import scrapy.extensions.telnet
import scrapy.extensions.logstats
import scrapy.extensions.memusage
import scrapy.extensions.memdebug
import scrapy.extensions.feedexport
import scrapy.extensions.closespider
import scrapy.extensions.debug
import scrapy.extensions.httpcache
import scrapy.extensions.statsmailer
import scrapy.extensions.throttle
import scrapy.core.scheduler
import scrapy.core.engine
import scrapy.core.scraper
import scrapy.core.spidermw
import scrapy.core.downloader
import scrapy.downloadermiddlewares.stats
import scrapy.downloadermiddlewares.httpcache
import scrapy.downloadermiddlewares.cookies
import scrapy.downloadermiddlewares.useragent
import scrapy.downloadermiddlewares.httpproxy
import scrapy.downloadermiddlewares.ajaxcrawl
import scrapy.downloadermiddlewares.chunked
import scrapy.downloadermiddlewares.decompression
import scrapy.downloadermiddlewares.defaultheaders
import scrapy.downloadermiddlewares.downloadtimeout
import scrapy.downloadermiddlewares.httpauth
import scrapy.downloadermiddlewares.httpcompression
import scrapy.downloadermiddlewares.redirect
import scrapy.downloadermiddlewares.retry
import scrapy.downloadermiddlewares.robotstxt
import scrapy.spidermiddlewares.depth
import scrapy.spidermiddlewares.httperror
import scrapy.spidermiddlewares.offsite
import scrapy.spidermiddlewares.referer
import scrapy.spidermiddlewares.urllength
import scrapy.pipelines
import scrapy.core.downloader.handlers.http
import scrapy.core.downloader.contextfactory

from douban.pipelines import DoubanPipeline
from douban.items import DoubanItem
import douban.settings

if __name__ == '__main__':
  setting = get_project_settings()           # reads scrapy.cfg to locate settings.py
  process = CrawlerProcess(settings=setting)
  process.crawl(Douban_spider)               # pass the spider class directly
  process.start()                            # blocks until the crawl finishes
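Before packaging, it is worth running python crawl.py once from the project root to confirm that the spider, settings, and pipelines all load correctly without going through the scrapy command.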

2. In the directory containing crawl.py, run pyinstaller crawl.py. This generates dist, along with build and crawl.spec (both of which can be deleted).
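Rather than deleting crawl.spec, you can also move the long import block out of crawl.py by declaring those modules as hidden imports in the spec file (which is itself a Python file) and rebuilding with pyinstaller crawl.spec. A minimal sketch of just the Analysis(...) part, to be merged into the spec that PyInstaller generated, with the module list taken from crawl.py above:

# crawl.spec (excerpt) -- declare Scrapy's dynamically loaded modules
# as hidden imports so PyInstaller bundles them
a = Analysis(
    ['crawl.py'],
    hiddenimports=[
        'scrapy.spiderloader',
        'scrapy.statscollectors',
        'scrapy.logformatter',
        # ...and the remaining scrapy modules imported in crawl.py above
    ],
)

PyInstaller's --hidden-import command-line option does the same thing one module at a time; either way the bundler picks up modules that Scrapy only loads by name at run time.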

3. In the directory containing crawl.exe, create a folder named scrapy, then go to the folder where scrapy is installed and copy the VERSION and mime.types files into the scrapy folder you just created.
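If you would rather not copy the two files by hand, the same result can in principle be achieved with the datas option of crawl.spec, so that PyInstaller places them in a scrapy folder inside the output automatically. A sketch of the Analysis(...) part, assuming VERSION and mime.types exist in your installed Scrapy version:

# crawl.spec (excerpt) -- bundle Scrapy's data files into a "scrapy"
# folder inside the build output
import os
import scrapy

scrapy_dir = os.path.dirname(scrapy.__file__)  # path of the installed scrapy package

a = Analysis(
    ['crawl.py'],
    datas=[
        (os.path.join(scrapy_dir, 'VERSION'), 'scrapy'),
        (os.path.join(scrapy_dir, 'mime.types'), 'scrapy'),
    ],
)

Each datas entry is a (source file, destination folder) pair, so both files end up in a scrapy folder next to the rest of the bundled files.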

4. Release the program, including douban/dist and douban/scrapy.cfg.

Without scrapy.cfg, the program cannot read the configuration in settings.py and pipelines.py.


5. Tested successfully on another machine.

6. For custom pipelines and settings, the exe packaged by PyInstaller apparently cannot read the settings and pipelines. Can any expert take a look and solve this problem?
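One workaround that may be worth trying: get_project_settings() locates the settings module through scrapy.cfg (or the SCRAPY_SETTINGS_MODULE environment variable), and that lookup is fragile inside a frozen exe. Since crawl.py already imports douban.settings, the Settings object can instead be built from that module directly, bypassing the scrapy.cfg lookup entirely. A sketch of an alternative main block (untested against this exact project):

# -*- coding: utf-8 -*-
# Alternative entry point: feed the bundled settings module to the
# crawler directly instead of relying on the scrapy.cfg lookup
from scrapy.crawler import CrawlerProcess
from scrapy.settings import Settings

import douban.settings
from douban.spiders.douban_spider import Douban_spider

if __name__ == '__main__':
  settings = Settings()
  settings.setmodule(douban.settings, priority='project')  # load project settings
  process = CrawlerProcess(settings=settings)
  process.crawl(Douban_spider)
  process.start()

Because the settings (and through them the pipelines they register) come from a module PyInstaller has bundled, the exe no longer depends on finding scrapy.cfg on disk.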

This concludes the article on the steps to package a Scrapy project with PyInstaller. For more on packaging Scrapy with PyInstaller, please search our earlier articles or browse the related articles below, and we hope you will continue to support us in the future!