13. The Scrapy framework: log levels and passing parameters between requests
Today's overview
- Log levels
- Passing parameters between requests
- How to improve Scrapy's crawling efficiency
Today's details
1. Scrapy log levels
- When you run a spider with scrapy crawl spiderFileName, what gets printed in the terminal is Scrapy's log output.
- Log levels:
ERROR: errors
WARNING: warnings
INFO: general information
DEBUG: debugging information
- Restricting which log messages are shown:
In the settings.py configuration file, add
LOG_LEVEL = '<desired level>' to set the lowest level that will be printed.
LOG_FILE = 'log.txt' writes the log output to the specified file instead of the terminal. For example:
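A minimal settings.py sketch (the level and filename here are only illustrative values, not part of the case studies below):

# settings.py
LOG_LEVEL = 'ERROR'    # only ERROR messages are printed
LOG_FILE = 'log.txt'   # write the log to log.txt instead of the terminal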
2. Passing parameters between requests
- In some cases the data we want is not all on one page. For example, when crawling a movie site, the movie name and score sit on the first-level page, while the remaining details sit on a second-level detail page. That is when we need to pass parameters along with the request (a minimal sketch of the mechanism follows this list).
- Case study: crawl the movie site www.id97.com, scraping the movie name, genre and score from the first-level page, and the release date, director and running time from the second-level page.
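The core of the technique is to attach the partially filled item to a Request through its meta dict and read it back from response.meta in the callback. A minimal, site-agnostic sketch (the URLs and field names here are placeholders, not part of the case study):

import scrapy


class DemoSpider(scrapy.Spider):
    name = 'demo'
    start_urls = ['http://example.com/list']   # placeholder listing page

    def parse(self, response):
        item = {'title': 'parsed from the listing page'}   # partially filled item
        detail_url = 'http://example.com/detail'           # placeholder detail URL
        # carry the item into the next callback through the meta argument
        yield scrapy.Request(url=detail_url, callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        item = response.meta['item']   # read the item back out of meta
        item['extra'] = 'parsed from the detail page'
        yield item

The full case study below does exactly this, only with real XPath expressions and a scrapy.Item instead of a plain dict.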
Spider file:
# -*- coding: utf-8 -*-
import scrapy
from moviePro.items import MovieproItem
class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="col-xs-1-5 movie-item"]')
        for div in div_list:
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            item['score'] = div.xpath('.//h1/em/text()').extract_first()
            # xpath('string(.)') extracts the text of the current node (.) and all of its descendants
            item['kind'] = div.xpath('.//div[@class="otherinfo"]').xpath('string(.)').extract_first()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # request the second-level detail page and pass the item to its callback through the meta argument
            yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # retrieve the item that was passed in through meta
        item = response.meta['item']
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        item['time'] = response.xpath('//div[@class="row"]//table/tr[7]/td[2]/text()').extract_first()
        item['long'] = response.xpath('//div[@class="row"]//table/tr[8]/td[2]/text()').extract_first()
        # hand the item to the pipeline
        yield item
Items file:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    score = scrapy.Field()
    time = scrapy.Field()
    long = scrapy.Field()
    actor = scrapy.Field()
    kind = scrapy.Field()
    detail_url = scrapy.Field()
Pipeline file:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json


class MovieproPipeline(object):
    def __init__(self):
        self.fp = open('data.txt', 'w')

    def process_item(self, item, spider):
        dic = dict(item)
        print(dic)
        json.dump(dic, self.fp, ensure_ascii=False)
        return item

    def close_spider(self, spider):
        self.fp.close()
3. How to improve Scrapy's crawling efficiency
Increase concurrency:
By default Scrapy issues 16 concurrent requests, and this can be raised. In the settings configuration file, set CONCURRENT_REQUESTS = 100 to allow up to 100 concurrent requests.
Lower the log level:
Running Scrapy produces a large amount of log output; to reduce CPU usage, restrict logging to INFO or ERROR messages. In the configuration file: LOG_LEVEL = 'INFO'
Disable cookies:
If cookies are not actually needed, disable them during the crawl to reduce CPU usage and improve efficiency. In the configuration file: COOKIES_ENABLED = False
Disable retries:
Re-requesting (retrying) failed HTTP requests slows the crawl down, so retries can be turned off. In the configuration file: RETRY_ENABLED = False
Reduce the download timeout:
When crawling very slow links, lowering the download timeout lets stuck requests be abandoned quickly, which improves efficiency. In the configuration file: DOWNLOAD_TIMEOUT = 10 sets the timeout to 10 seconds. A combined sketch of these settings follows.
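Putting the above together, a throughput-oriented settings.py might contain (a minimal sketch; the concrete values mirror the ones discussed and should be tuned per site):

# settings.py - throughput-oriented settings
CONCURRENT_REQUESTS = 100   # raise concurrency from the default of 16
LOG_LEVEL = 'ERROR'         # keep logging overhead low
COOKIES_ENABLED = False     # skip cookie handling
RETRY_ENABLED = False       # do not retry failed requests
DOWNLOAD_TIMEOUT = 10       # give up on slow responses after 10 seconds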
Test case: crawl the photo gallery site www.521609.com and download the images.
# -*- coding: utf-8 -*-
import scrapy
from xiaohua.items import XiaohuaItem
class XiahuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    pageNum = 1
    url = 'http://www.521609.com/daxuemeinv/list8%d.html'

    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            school = li.xpath('./a/img/@alt').extract_first()
            img_url = li.xpath('./a/img/@src').extract_first()
            item = XiaohuaItem()
            item['school'] = school
            item['img_url'] = 'http://www.521609.com' + img_url
            yield item

        if self.pageNum < 10:
            self.pageNum += 1
            url = format(self.url % self.pageNum)
            # print(url)
            yield scrapy.Request(url=url, callback=self.parse)
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class XiaohuaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    school = scrapy.Field()
    img_url = scrapy.Field()
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json
import os
import urllib.request


class XiaohuaPipeline(object):
    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('spider started')
        self.fp = open('./xiaohua.txt', 'w')

    def download_img(self, item):
        url = item['img_url']
        fileName = item['school'] + '.jpg'
        if not os.path.exists('./xiaohualib'):
            os.mkdir('./xiaohualib')
        filepath = os.path.join('./xiaohualib', fileName)
        urllib.request.urlretrieve(url, filepath)
        print(fileName + ' downloaded successfully')

    def process_item(self, item, spider):
        obj = dict(item)
        json_str = json.dumps(obj, ensure_ascii=False)
        self.fp.write(json_str + '\n')
        # download the image
        self.download_img(item)
        return item

    def close_spider(self, spider):
        print('spider finished')
        self.fp.close()
Configuration file:
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'ERROR'
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 3

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
DOWNLOAD_DELAY = 3
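The configuration above does not show the pipeline being registered; without it, process_item is never called and no images are downloaded. A minimal addition, assuming the pipeline class lives in the project's default pipelines.py module, after which the crawl is started with scrapy crawl xiaohua:

# settings.py (continued) - register the pipeline so items reach XiaohuaPipeline
ITEM_PIPELINES = {
    'xiaohua.pipelines.XiaohuaPipeline': 300,
}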