
Using and Dissecting scrapy-redis

scrapy-redis is a Redis-based Scrapy component that makes it quick to build a simple distributed crawler. At its core it provides three features:

  • scheduler - request scheduler
  • dupefilter - URL deduplication rules (used by the scheduler)
  • pipeline - data persistence

Scrapy-redis provides the following four components (meaning all four of these modules need corresponding modifications):

  • Scheduler
  • Duplication Filter
  • Item Pipeline
  • Base Spider

scrapy-redis components

scrapy-redis architecture
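Wiring these components in mostly comes down to a handful of lines in settings.py. A minimal sketch (the values are illustrative; the fully annotated configuration appears at the end of this post):

# settings.py -- minimal scrapy-redis setup (illustrative values)

# Share one deduplication filter across all spider processes via Redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Store the request queue in Redis instead of in memory.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Keep the queue and dupefilter between runs (pause/resume, multiple workers).
SCHEDULER_PERSIST = True

# Persist scraped items into a Redis list named "<spider>:items".
ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 300,
}

# Redis connection (placeholders).
REDIS_HOST = "localhost"
REDIS_PORT = 6379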

URL deduplication
Define the deduplication rules (called and applied by the scheduler)
 
    a. Internally, the following settings are used to connect to Redis
 
        # REDIS_HOST = 'localhost'                            # Host
        # REDIS_PORT = 6379                                   # Port
        # REDIS_URL = 'redis://user:pass@hostname:9001'       # Connection URL (takes precedence over the settings above)
        # REDIS_PARAMS  = {}                                  # Redis connection parameters. Default: REDIS_PARAMS = {'socket_timeout': 30, 'socket_connect_timeout': 30, 'retry_on_timeout': True, 'encoding': REDIS_ENCODING}
        # REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient' # Class used for the Redis connection. Default: redis.StrictRedis
        # REDIS_ENCODING = "utf-8"                            # Redis encoding. Default: 'utf-8'
     
    b. Deduplication is implemented with a Redis set; the set's key is:
     
        key = defaults.DUPEFILTER_KEY % {'timestamp': int(time.time())}
        Default setting:
            DUPEFILTER_KEY = 'dupefilter:%(timestamp)s'
              
    c. The dedup rule converts the URL into a unique fingerprint, then checks whether it already exists in the Redis set
     
        from scrapy.utils import request
        from scrapy.http import Request
         
        req = Request(url='http://www.cnblogs.com/wupeiqi.html')
        result = request.request_fingerprint(req)
        print(result) # 8ea4fd67887449313ccc12e5b6b92510cc53675c
         
         
        PS:
            - When only the order of URL parameters differs, the computed fingerprint is the same;
            - By default, request headers are not included in the calculation; include_headers can specify which headers to include
            Example:
                from scrapy.utils import request
                from scrapy.http import Request
                 
                req = Request(url='http://www.baidu.com?name=8&id=1',callback=lambda x:print(x),cookies={'k1':'vvvvv'})
                result = request.request_fingerprint(req,include_headers=['cookies',])
                 
                print(result)
                 
                req = Request(url='http://www.baidu.com?id=1&name=8',callback=lambda x:print(x),cookies={'k1':666})
                 
                result = request.request_fingerprint(req,include_headers=['cookies',])
                 
                print(result)
         
"""
# Ensure all spiders share same duplicates filter through redis.
# DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
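Since the fingerprints live in an ordinary Redis set, they can be inspected directly with redis-py while a crawl is running. A rough sketch, assuming the scheduler-managed key '<spider>:dupefilter' for a spider named 'chouti' and the connection values shown above:

import redis

# Same connection parameters the crawler uses (placeholders).
r = redis.StrictRedis(host='localhost', port=6379, encoding='utf-8')

key = 'chouti:dupefilter'          # SCHEDULER_DUPEFILTER_KEY with %(spider)s filled in
print(r.scard(key))                # number of request fingerprints seen so far
for fp in r.sscan_iter(key):       # peek at one stored fingerprint
    print(fp)
    break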
Scheduler
"""
The scheduler stores requests in Redis using PriorityQueue (sorted set), FifoQueue (list), or LifoQueue (list), and uses RFPDupeFilter to deduplicate URLs.

    a. Scheduler settings
        SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'          # Default: priority queue (sorted set); alternatives: FifoQueue (list), LifoQueue (list)
        SCHEDULER_QUEUE_KEY = '%(spider)s:requests'                         # Redis key under which the scheduler stores requests
        SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"                  # Serializer for data stored in Redis; pickle by default
        SCHEDULER_PERSIST = True                                            # Whether to keep the queue and dupefilter records on close; True = keep, False = clear
        SCHEDULER_FLUSH_ON_START = True                                     # Whether to clear the queue and dupefilter records on start; True = clear, False = keep
        SCHEDULER_IDLE_BEFORE_CLOSE = 10                                    # Maximum time to wait when fetching from the scheduler and it is empty (if still nothing, no data is fetched)
        SCHEDULER_DUPEFILTER_KEY = '%(spider)s:dupefilter'                  # Redis key under which the dedup fingerprints are stored
        SCHEDULER_DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'# Class that implements the dedup rule


"""
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Default requests serializer is pickle, but it can be changed to any module
# with loads and dumps functions. Note that pickle is not compatible between
# python versions.
# Caveat: In python 3.x, the serializer must return strings keys and support
# bytes as values. Because of this reason the json or msgpack module will not
# work by default. In python 2.x there is no such issue and you can use
# 'json' or 'msgpack' as serializers.
# SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"

# Don't cleanup redis queues, allows to pause/resume crawls.
# SCHEDULER_PERSIST = True

# Schedule requests using a priority queue. (default)
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'

# Alternative queues.
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'

# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if queue class is SpiderQueue or SpiderStack,
# and may also block the same time when your spider start at the first time (because the queue is empty).
# SCHEDULER_IDLE_BEFORE_CLOSE = 10
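Which Redis data type backs SCHEDULER_QUEUE_KEY depends on the queue class: PriorityQueue uses a sorted set, while FifoQueue and LifoQueue use plain lists. A small sketch for checking how many requests are still pending, e.g. while a crawl is paused (the spider name 'chouti' and the connection values are assumptions):

import redis

r = redis.StrictRedis(host='localhost', port=6379)
key = 'chouti:requests'   # SCHEDULER_QUEUE_KEY with %(spider)s filled in

if r.type(key) == b'zset':
    # PriorityQueue stores serialized requests in a sorted set,
    # scored by the request priority.
    print('pending requests:', r.zcard(key))
else:
    # FifoQueue / LifoQueue store serialized requests in a plain list.
    print('pending requests:', r.llen(key))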
Data persistence
Persistence: when the spider yields an Item object, RedisPipeline is executed

    a. When persisting items to Redis, specify the key and the serialization function

        REDIS_ITEMS_KEY = '%(spider)s:items'
        REDIS_ITEMS_SERIALIZER = 'json.dumps'

    b. Item data is stored in a Redis list (a minimal pipeline/consumer sketch follows below)
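To enable this, scrapy_redis.pipelines.RedisPipeline must be registered in ITEM_PIPELINES; each yielded item is then serialized and pushed onto the <spider>:items list, from which any separate process can drain it. A minimal sketch (the pipeline priority, key name and connection values are assumptions):

# settings.py -- route yielded items into Redis
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
REDIS_ITEMS_KEY = '%(spider)s:items'
REDIS_ITEMS_SERIALIZER = 'json.dumps'

# consumer.py -- drain persisted items from the Redis list (sketch)
import json
import redis

r = redis.StrictRedis(host='localhost', port=6379)

while True:
    # blpop blocks until an item is available and returns (key, value)
    _, raw = r.blpop('chouti:items')
    print(json.loads(raw))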
Start URLs
"""

    a. When fetching start URLs, should they be taken from a set or from a list? True = set; False = list
        REDIS_START_URLS_AS_SET = False    # If True, start URLs are fetched with self.server.spop; if False, with self.server.lpop
    b. In the spider, start URLs are read from this Redis key (a seeding example follows after this block)
        REDIS_START_URLS_KEY = '%(name)s:start_urls'

"""
# If True, it uses redis' ``spop`` operation. This could be useful if you
# want to avoid duplicates in your start urls list. In this cases, urls must
# be added via ``sadd`` command or you will get a type error from redis.
# REDIS_START_URLS_AS_SET = False

# Default start urls key for RedisSpider and RedisCrawlSpider.
# REDIS_START_URLS_KEY = '%(name)s:start_urls'
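Seeding a crawl then simply means pushing URLs under that key from any Redis client. A minimal sketch with redis-py (the key assumes a spider named 'chouti' and the default REDIS_START_URLS_KEY pattern; connection values are placeholders):

import redis

r = redis.StrictRedis(host='localhost', port=6379)

# With REDIS_START_URLS_AS_SET = False (the default) the key is a list,
# so seed it with lpush; with True it would be a set and need sadd.
r.lpush('chouti:start_urls', 'http://www.chouti.com/')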
scrapy-redis example
# DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
#
#
# from scrapy_redis.scheduler import Scheduler
# from scrapy_redis.queue import PriorityQueue
# SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'          # Default: priority queue (sorted set); alternatives: FifoQueue (list), LifoQueue (list)
# SCHEDULER_QUEUE_KEY = '%(spider)s:requests'                         # Redis key under which the scheduler stores requests
# SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"                  # Serializer for data stored in Redis; pickle by default
# SCHEDULER_PERSIST = True                                            # Whether to keep the queue and dupefilter records on close; True = keep, False = clear
# SCHEDULER_FLUSH_ON_START = False                                    # Whether to clear the queue and dupefilter records on start; True = clear, False = keep
# SCHEDULER_IDLE_BEFORE_CLOSE = 10                                    # Maximum time to wait when fetching from the scheduler and it is empty
# SCHEDULER_DUPEFILTER_KEY = '%(spider)s:dupefilter'                  # Redis key under which the dedup fingerprints are stored
# SCHEDULER_DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'# Class that implements the dedup rule
#
#
#
# REDIS_HOST = '10.211.55.13'                           # Host
# REDIS_PORT = 6379                                     # Port
# # REDIS_URL = 'redis://user:pass@hostname:9001'       # Connection URL (takes precedence over the settings above)
# # REDIS_PARAMS  = {}                                  # Redis connection parameters. Default: REDIS_PARAMS = {'socket_timeout': 30, 'socket_connect_timeout': 30, 'retry_on_timeout': True, 'encoding': REDIS_ENCODING}
# # REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient' # Class used for the Redis connection. Default: redis.StrictRedis
# REDIS_ENCODING = "utf-8"                              # Redis encoding. Default: 'utf-8'

Configuration file (settings.py)
import scrapy
from scrapy.http import Request


class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = (
        'http://www.chouti.com/',
    )

    def parse(self, response):
        # Yield follow-up requests; with scrapy-redis enabled they go into the
        # Redis queue (chouti:requests) and are deduplicated via chouti:dupefilter.
        # The pagination URL pattern below is only illustrative.
        for i in range(0, 10):
            yield Request(url='http://www.chouti.com/?page=%s' % i, callback=self.parse)

Spider file
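For a fully distributed crawl, the spider itself can inherit from scrapy_redis.spiders.RedisSpider, so that start URLs are read from the Redis key (REDIS_START_URLS_KEY) instead of a hard-coded start_urls tuple, and every worker process running the same spider pulls from the shared queue. A sketch along those lines (the redis_key value and the parse body are illustrative):

from scrapy_redis.spiders import RedisSpider


class ChoutiRedisSpider(RedisSpider):
    # Start URLs are popped from the "chouti:start_urls" key in Redis
    # (see REDIS_START_URLS_KEY) rather than defined on the class.
    name = "chouti"
    allowed_domains = ["chouti.com"]
    redis_key = "chouti:start_urls"

    def parse(self, response):
        # Yield items as usual; with RedisPipeline enabled they are
        # serialized and pushed onto the chouti:items list.
        yield {"url": response.url, "title": response.css("title::text").get()}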