scrapy-redis usage and analysis
阿新 · Published 2018-04-01
scrapy-redis is a Redis-based Scrapy component that makes it quick to build a simple distributed crawler. In essence it provides three pieces of functionality:
- scheduler - the request scheduler
- dupefilter - URL deduplication rules (used by the scheduler)
- pipeline - item persistence
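Before looking at each piece in detail, here is the minimal settings.py needed to switch all three on; the Redis URL is an assumption for a local instance:

    # settings.py
    SCHEDULER = "scrapy_redis.scheduler.Scheduler"                   # request queue lives in Redis
    DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"       # fingerprints shared via a Redis set
    ITEM_PIPELINES = {"scrapy_redis.pipelines.RedisPipeline": 300}   # yielded items pushed to a Redis list
    REDIS_URL = "redis://localhost:6379"                             # assumed local Redis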
scrapy-redis components
1. URL deduplication
"""
Deduplication rules (invoked and applied by the scheduler)

a. Internally, the following settings are used to connect to Redis:

    # REDIS_HOST = 'localhost'                          # host name
    # REDIS_PORT = 6379                                 # port
    # REDIS_URL = 'redis://user:pass@hostname:9001'     # connection URL (takes precedence over the settings above)
    # REDIS_PARAMS = {}                                 # extra Redis connection parameters
    #   default: REDIS_PARAMS = {'socket_timeout': 30, 'socket_connect_timeout': 30, 'retry_on_timeout': True, 'encoding': REDIS_ENCODING}
    # REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'  # Python class used for the Redis connection, default: redis.StrictRedis
    # REDIS_ENCODING = "utf-8"                          # Redis encoding, default: 'utf-8'

b. Deduplication is implemented with a Redis set; the set's key is:

    key = defaults.DUPEFILTER_KEY % {'timestamp': int(time.time())}

    default setting:
    DUPEFILTER_KEY = 'dupefilter:%(timestamp)s'

c. Each URL is turned into a unique fingerprint, which is then checked against the set in Redis:

    from scrapy.utils import request
    from scrapy.http import Request

    req = Request(url='http://www.cnblogs.com/wupeiqi.html')
    result = request.request_fingerprint(req)
    print(result)  # 8ea4fd67887449313ccc12e5b6b92510cc53675c

    PS:
    - the same URL with its query parameters in a different order yields the same fingerprint;
    - request headers are not part of the fingerprint by default; include_headers can name specific headers to include.

    Example:

    from scrapy.utils import request
    from scrapy.http import Request

    req = Request(url='http://www.baidu.com?name=8&id=1', callback=lambda x: print(x), cookies={'k1': 'vvvvv'})
    result = request.request_fingerprint(req, include_headers=['cookies'])
    print(result)

    req = Request(url='http://www.baidu.com?id=1&name=8', callback=lambda x: print(x), cookies={'k1': 666})
    result = request.request_fingerprint(req, include_headers=['cookies'])
    print(result)
"""
# Ensure all spiders share same duplicates filter through redis.
# DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
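In practice the duplicate check reduces to a single SADD against that set: a return value of 0 means the fingerprint was already there and the request is dropped. A minimal sketch of the idea (not the exact scrapy-redis source; the key name and local connection are assumptions):

    import time
    import redis
    from scrapy.http import Request
    from scrapy.utils import request as request_utils

    server = redis.StrictRedis(host='localhost', port=6379)              # assumed local Redis
    key = 'dupefilter:%(timestamp)s' % {'timestamp': int(time.time())}   # same shape as DUPEFILTER_KEY

    def request_seen(req):
        # SADD returns 1 if the fingerprint was newly added, 0 if it already existed
        fp = request_utils.request_fingerprint(req)
        return server.sadd(key, fp) == 0

    print(request_seen(Request('http://www.cnblogs.com/wupeiqi.html')))  # False: first time seen
    print(request_seen(Request('http://www.cnblogs.com/wupeiqi.html')))  # True: duplicate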
2. Scheduler
"""
調度器,調度器使用PriorityQueue(有序集合)、FifoQueue(列表)、LifoQueue(列表)進行保存請求,並且使用RFPDupeFilter對URL去重
a. 調度器
SCHEDULER_QUEUE_CLASS = ‘scrapy_redis.queue.PriorityQueue‘ # 默認使用優先級隊列(默認),其他:PriorityQueue(有序集合),FifoQueue(列表)、LifoQueue(列表)
SCHEDULER_QUEUE_KEY = ‘%(spider)s:requests‘ # 調度器中請求存放在redis中的key
SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat" # 對保存到redis中的數據進行序列化,默認使用pickle
SCHEDULER_PERSIST = True # 是否在關閉時候保留原來的調度器和去重記錄,True=保留,False=清空
SCHEDULER_FLUSH_ON_START = True # 是否在開始之前清空 調度器和去重記錄,True=清空,False=不清空
SCHEDULER_IDLE_BEFORE_CLOSE = 10 # 去調度器中獲取數據時,如果為空,最多等待時間(最後沒數據,未獲取到)。
SCHEDULER_DUPEFILTER_KEY = ‘%(spider)s:dupefilter‘ # 去重規則,在redis中保存時對應的key
SCHEDULER_DUPEFILTER_CLASS = ‘scrapy_redis.dupefilter.RFPDupeFilter‘# 去重規則對應處理的類
"""
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Default requests serializer is pickle, but it can be changed to any module
# with loads and dumps functions. Note that pickle is not compatible between
# python versions.
# Caveat: In python 3.x, the serializer must return strings keys and support
# bytes as values. Because of this reason the json or msgpack module will not
# work by default. In python 2.x there is no such issue and you can use
# 'json' or 'msgpack' as serializers.
# SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"
# Don't cleanup redis queues, allows to pause/resume crawls.
# SCHEDULER_PERSIST = True
# Schedule requests using a priority queue. (default)
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
# Alternative queues.
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
# SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'
# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if queue class is SpiderQueue or SpiderStack,
# and may also block the same time when your spider start at the first time (because the queue is empty).
# SCHEDULER_IDLE_BEFORE_CLOSE = 10
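It helps to see why the default PriorityQueue maps onto a Redis sorted set: pushing stores the serialized request with its negated priority as the score, and popping takes the lowest-scored member, so higher-priority requests come out first. A simplified sketch of that idea (the key name and local connection are assumptions, and real scrapy-redis serializes full Request objects rather than plain dicts):

    import pickle
    import redis

    server = redis.StrictRedis(host='localhost', port=6379)   # assumed local Redis
    key = 'myspider:requests'                                  # matches SCHEDULER_QUEUE_KEY = '%(spider)s:requests'

    def push(request_dict, priority=0):
        # higher Scrapy priority must pop first, so store the negated priority as the score
        data = pickle.dumps(request_dict)
        server.zadd(key, {data: -priority})

    def pop():
        # atomically read and remove the member with the lowest score
        pipe = server.pipeline()
        pipe.multi()
        pipe.zrange(key, 0, 0).zremrangebyrank(key, 0, 0)
        results, _ = pipe.execute()
        if results:
            return pickle.loads(results[0])

    push({'url': 'http://example.com/a'}, priority=10)
    push({'url': 'http://example.com/b'}, priority=0)
    print(pop())   # the priority=10 request comes out first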
3. Data persistence
"""
Persistence: when the spider yields an Item, RedisPipeline is executed.

a. When persisting items to Redis, the key and the serializer function can be configured:

    REDIS_ITEMS_KEY = '%(spider)s:items'
    REDIS_ITEMS_SERIALIZER = 'json.dumps'

b. Items are stored in a Redis list.
"""
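The pipeline itself is switched on through ITEM_PIPELINES; after that, any process can drain the items list independently of the crawler. A sketch, assuming a spider named 'myspider', JSON serialization and a local Redis:

    # settings.py
    ITEM_PIPELINES = {
        'scrapy_redis.pipelines.RedisPipeline': 300,
    }

    # consumer.py - drain items pushed by RedisPipeline
    import json
    import redis

    server = redis.StrictRedis(host='localhost', port=6379)   # assumed local Redis
    while True:
        # BLPOP blocks until an item shows up in the 'myspider:items' list
        _, raw = server.blpop('myspider:items')
        print(json.loads(raw))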
4. Start URLs
"""
起始URL相關
a. 獲取起始URL時,去集合中獲取還是去列表中獲取?True,集合;False,列表
REDIS_START_URLS_AS_SET = False # 獲取起始URL時,如果為True,則使用self.server.spop;如果為False,則使用self.server.lpop
b. 編寫爬蟲時,起始URL從redis的Key中獲取
REDIS_START_URLS_KEY = ‘%(name)s:start_urls‘
"""
# If True, it uses redis‘ ``spop`` operation. This could be useful if you
# want to avoid duplicates in your start urls list. In this cases, urls must
# be added via ``sadd`` command or you will get a type error from redis.
# REDIS_START_URLS_AS_SET = False
# Default start urls key for RedisSpider and RedisCrawlSpider.
# REDIS_START_URLS_KEY = ‘%(name)s:start_urls‘
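Putting it together, a spider that waits for its start URLs in Redis subclasses RedisSpider and sets redis_key; the spider name, parse logic and seeded URL below are placeholders:

    # myspider.py
    from scrapy_redis.spiders import RedisSpider

    class MySpider(RedisSpider):
        name = 'myspider'
        # key the spider reads from; matches REDIS_START_URLS_KEY = '%(name)s:start_urls'
        redis_key = 'myspider:start_urls'

        def parse(self, response):
            # placeholder parse logic
            yield {'url': response.url, 'title': response.css('title::text').extract_first()}

The crawl is then seeded from any Redis client, for example with redis-cli:

    redis-cli lpush myspider:start_urls http://www.cnblogs.com/wupeiqi.html

Use sadd instead of lpush when REDIS_START_URLS_AS_SET = True.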