[Scrapy Framework Translation] Item Pipeline

Version: Scrapy 2.4

Overview

Item pipelines process the data scraped by Scrapy.
Typical uses:

  1. Cleaning HTML data
  2. Validating scraped data (checking that items contain certain fields)
  3. Checking for duplicates (and dropping them)
  4. Storing scraped items in a database

Basic pipeline methods

Each item pipeline component is a Python class that implements one or more of the following methods:

process_item(self, item, spider): called for every scraped item; this is where the pipeline processes the item's content. It must either return an item or raise DropItem to discard it.

open_spider(self, spider): called when the spider is opened.

close_spider(self, spider): called when the spider is closed.

from_crawler(cls, crawler): a class method called when a pipeline instance is created; it must return a new pipeline instance, and is typically used to read values configured in the project's settings (settings.py).
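As a rough sketch of how these four methods fit together (ExamplePipeline and the EXAMPLE_TAG setting are hypothetical names, not part of the Scrapy API):

class ExamplePipeline:

    def __init__(self, tag):
        self.tag = tag

    @classmethod
    def from_crawler(cls, crawler):
        # EXAMPLE_TAG is a hypothetical custom setting read from settings.py.
        return cls(tag=crawler.settings.get('EXAMPLE_TAG', 'default'))

    def open_spider(self, spider):
        # Called when the spider opens: set up files, connections, counters, etc.
        self.items_seen = 0

    def close_spider(self, spider):
        # Called when the spider closes: release any resources acquired above.
        spider.logger.info(f"{self.items_seen} items passed through {self.tag}")

    def process_item(self, item, spider):
        # Called for every item: must return the item (or raise DropItem).
        self.items_seen += 1
        return item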

Simple pipeline examples

An example that validates and adjusts the price attribute of scraped items:

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class PricePipeline:

    vat_factor = 1.15

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter.get('price'):
            if adapter.get('price_excludes_vat'):
                adapter['price'] = adapter['price'] * self.vat_factor
            return item
        else:
            raise DropItem(f"Missing price in {item}")
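When process_item raises DropItem, the item is discarded and is not processed by any further pipeline components.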

Writing data to a JSON file

import json

from itemadapter import ItemAdapter

class JsonWriterPipeline:

    def open_spider(self, spider):
        # Open the output file once, when the spider starts.
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        # Close the file when the spider finishes.
        self.file.close()

    def process_item(self, item, spider):
        # Write each item as one JSON object per line (JSON Lines format).
        line = json.dumps(ItemAdapter(item).asdict()) + "\n"
        self.file.write(line)
        return item
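This pipeline is mainly illustrative: to export all scraped items to a JSON file, Scrapy's built-in Feed exports are usually the better choice.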

Writing data to MongoDB

import pymongo
from itemadapter import ItemAdapter

class MongoPipeline:

    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(ItemAdapter(item).asdict())
        return item
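The MONGO_URI and MONGO_DATABASE values are read from settings.py via from_crawler. A minimal sketch, assuming a MongoDB instance running locally (the values below are placeholders):

# settings.py
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'items'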

Taking page screenshots

import hashlib
from urllib.parse import quote

import scrapy
from itemadapter import ItemAdapter

class ScreenshotPipeline:
    """
    每個Scrapy專案使用Splash渲染螢幕截圖的管道
	"""

    SPLASH_URL = "http://localhost:8050/render.png?url={}"

    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        encoded_item_url = quote(adapter["url"])
        screenshot_url = self.SPLASH_URL.format(encoded_item_url)
        request = scrapy.Request(screenshot_url)
        response = await spider.crawler.engine.download(request, spider)

        if response.status != 200:
            # Error happened, return item.
            return item

        # Save screenshot to file, filename will be hash of url.
        url = adapter["url"]
        url_hash = hashlib.md5(url.encode("utf8")).hexdigest()
        filename = f"{url_hash}.png"
        with open(filename, "wb") as f:
            f.write(response.body)

        # Store filename in item.
        adapter["screenshot_filename"] = filename
        return item
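This example assumes a Splash instance is running at localhost:8050, and the async def / await syntax in process_item requires Scrapy 2.0 or later.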

Filtering duplicate items

from itemadapter import ItemAdapter
from scrapy.exceptions import DropItem

class DuplicatesPipeline:

    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        if adapter['id'] in self.ids_seen:
            raise DropItem(f"Duplicate item found: {item!r}")
        else:
            self.ids_seen.add(adapter['id'])
            return item
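This filter assumes every item carries a unique id field. Because ids_seen is held in memory, deduplication only applies within a single crawl.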

Activating a pipeline

Register the pipeline class in settings.py; otherwise the scraped data will not pass through it:

ITEM_PIPELINES = {
    'myproject.pipelines.PricePipeline': 300,
    'myproject.pipelines.JsonWriterPipeline': 800,
}
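The integer values determine the order in which pipelines run: items flow through lower-valued pipelines first, and values are conventionally chosen in the 0 to 1000 range.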