
Web Scraping with Scrapy


Contents

  • 1 Introduction
  • 2 Installation
  • 3 Command-Line Tool
  • 4 Project Structure and Spider Basics
  • 5 Spiders
  • 6 Selectors
  • 7 Items
  • 8 Item Pipeline
  • 9 Downloader Middleware
  • 10 Spider Middleware
  • 11 Scraping Amazon Product Data

1 Introduction

Scrapy is an open-source, collaborative framework. It was originally designed for page scraping (more precisely, web scraping) and lets you extract the data you need from websites in a fast, simple, and extensible way. Today Scrapy is used far more broadly: for data mining, monitoring, and automated testing, for consuming data returned by APIs (such as Amazon Associates Web Services), and for general-purpose web crawlers.

Scrapy is built on top of Twisted, a popular event-driven Python networking framework, so it achieves concurrency with non-blocking (i.e. asynchronous) code. The overall architecture looks roughly like this:
[Scrapy architecture diagram]

The data flow in Scrapy is controlled by the execution engine, and goes like this:

  1. The Engine gets the initial Requests to crawl from the Spider.
  2. The Engine schedules the Requests in the Scheduler and asks for the next Requests to crawl.
  3. The Scheduler returns the next Requests to the Engine.
  4. The Engine sends the Requests to the Downloader, passing through the Downloader Middlewares (see process_request()).
  5. Once the page finishes downloading the Downloader generates a Response (with that page) and sends it to the Engine, passing through the Downloader Middlewares (see process_response()).
  6. The Engine receives the Response from the Downloader and sends it to the Spider for processing, passing through the Spider Middleware (see process_spider_input()).
  7. The Spider processes the Response and returns scraped items and new Requests (to follow) to the Engine, passing through the Spider Middleware (see process_spider_output()).
  8. The Engine sends processed items to Item Pipelines, then sends processed Requests to the Scheduler and asks for possible next Requests to crawl.
  9. The process repeats (from step 1) until there are no more requests from the Scheduler.

Components:

  1. Engine (ENGINE)

    The engine is responsible for controlling the data flow between all components of the system and for triggering events when certain actions occur. See the data flow section above for details.

  2. Scheduler (SCHEDULER)
    Accepts requests from the engine, pushes them onto a queue, and hands them back when the engine asks again. Think of it as a priority queue of URLs: it decides which URL to crawl next and also removes duplicate URLs.
  3. Downloader (DOWNLOADER)
    Downloads page content and returns it to the ENGINE. The downloader is built on Twisted, an efficient asynchronous model.
  4. Spiders (SPIDERS)
    SPIDERS are classes written by the developer to parse responses, extract items, or issue new requests.
  5. Item Pipelines (ITEM PIPELINES)
    Process items after they have been extracted: mainly cleaning, validation, and persistence (e.g. saving to a database).
  6. Downloader Middlewares
    Sit between the Scrapy engine and the downloader. They mainly process requests passed from the ENGINE to the DOWNLOADER and responses passed from the DOWNLOADER back to the ENGINE. You can use this middleware to:
    1. process a request just before it is sent to the Downloader (i.e. right before Scrapy sends the request to the website);
    2. change a received response before passing it to a spider;
    3. send a new Request instead of passing the received response to a spider;
    4. pass a response to a spider without fetching a web page;
    5. silently drop some requests.
  7. Spider Middlewares
    Sit between the ENGINE and the SPIDERS. Their main job is to process spider input (responses) and spider output (requests).

Official docs: https://docs.scrapy.org/en/latest/topics/architecture.html
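To make the data flow above concrete, here is a minimal sketch of a spider against http://quotes.toscrape.com/, the test site also used in the command examples below (the CSS selectors assume that site's markup). The parse callback yields items, which the ENGINE hands to the ITEM PIPELINES, and new Requests, which go back to the SCHEDULER:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    start_urls = ['http://quotes.toscrape.com/']   # step 1: the initial Requests come from the Spider

    def parse(self, response):
        # step 7: the spider parses the Response into items ...
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
            }
        # ... and into new Requests, which are sent back to the Scheduler
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)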

2 Installation

#Windows
    1. pip3 install wheel  #enables installing packages from .whl files; wheel downloads: https://www.lfd.uci.edu/~gohlke/pythonlibs
    2. pip3 install lxml
    3. pip3 install pyopenssl
    4. download and install pywin32: https://sourceforge.net/projects/pywin32/files/pywin32/
    5. download the Twisted wheel: http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    6. run: pip3 install <download dir>\Twisted-17.9.0-cp36-cp36m-win_amd64.whl
    7. pip3 install scrapy

#Linux
    1. pip3 install scrapy

3 Command-Line Tool

#1 View help
    scrapy -h
    scrapy <command> -h

#2 There are two kinds of commands: Project-only commands must be run from inside a project directory, Global commands can be run anywhere
    Global commands:
        startproject #create a project
        genspider    #create a spider
        settings     #if run inside a project directory, shows that project's settings
        runspider    #run a standalone python spider file without creating a project
        shell        #scrapy shell <url>  interactive debugging, e.g. to check whether selector rules are correct
        fetch        #fetch a single page on its own, independent of any project; useful for inspecting request headers
        view         #download a page and open it in the browser, which helps identify which data is loaded via ajax
        version      #scrapy version shows the scrapy version; scrapy version -v also shows the versions of its dependencies
    Project-only commands:
        crawl        #run a spider; requires a project, and make sure ROBOTSTXT_OBEY = False in the settings
        check        #check the project for syntax errors
        list         #list the spiders contained in the project
        edit         #open a spider in an editor; rarely used
        parse        #scrapy parse <url> --callback <callback>  verifies that a callback behaves as expected
        bench        #scrapy bench  quick benchmark/stress test

#3 Official docs
    https://docs.scrapy.org/en/latest/topics/commands.html
#1 Global commands: make sure you are NOT inside a project directory, so that project's settings do not affect the result
scrapy startproject MyProject

cd MyProject
scrapy genspider baidu www.baidu.com

scrapy settings --get XXX #if run from inside a project directory, this shows that project's settings

scrapy runspider baidu.py

scrapy shell https://www.baidu.com
    response
    response.status
    response.body
    view(response)

scrapy view https://www.taobao.com #if parts of the page are missing, those parts are loaded via ajax; a quick way to locate the problem

scrapy fetch --nolog --headers https://www.taobao.com

scrapy version #scrapy version

scrapy version -v #versions of the dependencies


#2 Project commands: run from inside the project directory
scrapy crawl baidu
scrapy check
scrapy list
scrapy parse http://quotes.toscrape.com/ --callback parse
scrapy bench

Example usage

4 Project Structure and Spider Basics

project_name/
   scrapy.cfg
   project_name/
       __init__.py
       items.py
       pipelines.py
       settings.py
       spiders/
           __init__.py
           spider1.py
           spider2.py
           spider3.py

File descriptions:

  • scrapy.cfg  the project's main configuration, used when deploying scrapy; spider-related settings live in settings.py.
  • items.py  defines the data storage templates used for structured data, similar to Django's Model.
  • pipelines.py  data-processing behaviour, e.g. persisting the structured data.
  • settings.py  configuration file, e.g. recursion depth, concurrency, download delay. Note: option names in the settings file must be UPPERCASE or they are ignored; the correct form is USER_AGENT = 'xxxx'.
  • spiders/  the spider directory, where you create spider files and write crawling rules.

Note: spider files are usually named after the site's domain.

import scrapy
 
class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                   # spider name *****
    allowed_domains = ["xiaohuar.com"]  # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/", # starting URL
    ]
 
    def parse(self, response):
        # callback invoked once the start URL has been fetched
        pass
spider1.py
import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')  # make print() work with the Windows console encoding
About Windows encoding

5 Spiders

#Create entrypoint.py in the project root directory
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'xiaohua'])
By default a spider is run from the command line; to run it from PyCharm, create this entry point and run it instead.

Note: option names in the settings file must be UPPERCASE, e.g. X = '1'

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class BaiduSpider(CrawlSpider):
    name = 'xiaohua'
    allowed_domains = ['www.xiaohuar.com']
    start_urls = ['http://www.xiaohuar.com/v/']
    # download_delay = 1

    rules = (
        Rule(LinkExtractor(allow=r'p\-\d\-\d+\.html$'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # extract the video URL from the matched page (this selector is illustrative)
        url = response.xpath('//video/@src').extract_first()
        if url:
            print('======downloading video==============================', url)
            yield scrapy.Request(url, callback=self.save)

    def save(self, response):
        print('======saving video==============================', response.url, len(response.body))

        import time
        import hashlib
        m = hashlib.md5()
        m.update(str(time.time()).encode('utf-8'))
        m.update(response.url.encode('utf-8'))

        filename = r'E:\mv\%s.mp4' % m.hexdigest()
        with open(filename, 'wb') as f:
            f.write(response.body)
Template: CrawlSpider

https://docs.scrapy.org/en/latest/topics/spiders.html

6 Selectors

#1  // vs /
#2  text
#3  extract and extract_first: pull the content out of Selector objects
#4  attributes: prefix attribute names with @ in xpath
#5  nested lookups
#6  default values
#7  selecting by attribute
#8  fuzzy matching on an attribute
#9  regular expressions
#10 relative xpath
#11 xpath with variables
response.selector.css()
response.selector.xpath()
can be shortened to
response.css()
response.xpath()

#1  // vs /
response.xpath('//body/a/')
response.css('div a::text')

>>> response.xpath('//body/a')   #the leading // searches the whole document; the / after body selects direct children of body
[]
>>> response.xpath('//body//a')  #the leading // searches the whole document; the // after body selects all descendants of body
[<Selector xpath='//body//a' data='<a href="image1.html">Name: My image 1 <'>, <Selector xpath='//body//a' data='<a href="image2.html">Name: My image 2 <'>, <Selector xpath='//body//a' data='<a href="image3.html">Name: My image 3 <'>, <Selector xpath='//body//a' data='<a href="image4.html">Name: My image 4 <'>, <Selector xpath='//body//a' data='<a href="image5.html">Name: My image 5 <'>]

#2  text
>>> response.xpath('//body//a/text()')
>>> response.css('body a::text')

#3  extract and extract_first: pull the content out of Selector objects
>>> response.xpath('//div/a/text()').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']
>>> response.css('div a::text').extract()
['Name: My image 1 ', 'Name: My image 2 ', 'Name: My image 3 ', 'Name: My image 4 ', 'Name: My image 5 ']

>>> response.xpath('//div/a/text()').extract_first()
'Name: My image 1 '
>>> response.css('div a::text').extract_first()
'Name: My image 1 '

#4  attributes: prefix attribute names with @ in xpath
>>> response.xpath('//div/a/@href').extract_first()
'image1.html'
>>> response.css('div a::attr(href)').extract_first()
'image1.html'

#5  nested lookups
>>> response.xpath('//div').css('a').xpath('@href').extract_first()
'image1.html'

#6  default values
>>> response.xpath('//div[@id="xxx"]').extract_first(default="not found")
'not found'

#7  selecting by attribute
response.xpath('//div[@id="images"]/a[@href="image3.html"]/text()').extract()
response.css('#images a[href="image3.html"]::text').extract()

#8  fuzzy matching on an attribute
response.xpath('//a[contains(@href,"image")]/@href').extract()
response.css('a[href*="image"]::attr(href)').extract()

response.xpath('//a[contains(@href,"image")]/img/@src').extract()
response.css('a[href*="imag"] img::attr(src)').extract()

response.xpath('//*[@href="image1.html"]')
response.css('*[href="image1.html"]')

#9  regular expressions
response.xpath('//a/text()').re(r'Name: (.*)')
response.xpath('//a/text()').re_first(r'Name: (.*)')

#10 relative xpath
>>> res = response.xpath('//a[contains(@href,"3")]')[0]
>>> res.xpath('img')
[<Selector xpath='img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('./img')
[<Selector xpath='./img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('.//img')
[<Selector xpath='.//img' data='<img src="image3_thumb.jpg">'>]
>>> res.xpath('//img') #an absolute // scans the whole document again
[<Selector xpath='//img' data='<img src="image1_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image2_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image3_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image4_thumb.jpg">'>, <Selector xpath='//img' data='<img src="image5_thumb.jpg">'>]

#11 xpath with variables
>>> response.xpath('//div[@id=$xxx]/a/text()', xxx='images').extract_first()
'Name: My image 1 '
>>> response.xpath('//div[count(a)=$yyy]/@id', yyy=5).extract_first() #find the id of the div that contains 5 <a> tags
'images'

https://docs.scrapy.org/en/latest/topics/selectors.html

7 Items

https://docs.scrapy.org/en/latest/topics/items.html
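The link above is the reference; as a minimal sketch, an Item is a container that declares its fields up front (the field names below are illustrative, mirroring the GoodsItem used in section 11):

import scrapy


class ProductItem(scrapy.Item):
    # one Field per piece of data the spider collects
    name = scrapy.Field()
    price = scrapy.Field()

A spider callback then fills it like a dict and yields it, e.g. item = ProductItem(); item['name'] = response.xpath('//h1/text()').extract_first(); yield item. Assigning to an undeclared key raises a KeyError, which is the main advantage over a plain dict.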

8 Item Pipeline

https://docs.scrapy.org/en/latest/topics/item-pipeline.html
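Section 11 below stores items in MySQL from a pipeline; as a smaller, hedged sketch (the 'name' field, class name, and priority 300 are illustrative), a pipeline is just a class with a process_item method, enabled via ITEM_PIPELINES in settings.py:

from scrapy.exceptions import DropItem


class DedupPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts; open files/DB connections here
        self.seen = set()

    def process_item(self, item, spider):
        # called for every item the spider yields; return the item to pass it on
        if item['name'] in self.seen:
            raise DropItem('duplicate item: %s' % item['name'])
        self.seen.add(item['name'])
        return item

    def close_spider(self, spider):
        # called once when the spider finishes; close resources here
        pass

# settings.py (lower numbers run earlier):
# ITEM_PIPELINES = {'MyProject.pipelines.DedupPipeline': 300}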

9 Downloader Middleware

https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
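A hedged sketch of the hooks listed under component 6 in section 1 (the header value, status codes, module path, and priority 543 are arbitrary choices): process_request runs on the way out to the downloader, process_response on the way back, and the middleware is enabled via DOWNLOADER_MIDDLEWARES.

class CustomHeadersMiddleware(object):
    def process_request(self, request, spider):
        # runs just before the request is sent to the Downloader;
        # returning None lets the request continue through the chain
        request.headers['User-Agent'] = 'Mozilla/5.0'
        return None

    def process_response(self, request, response, spider):
        # runs on every response coming back; must return a Response or a Request
        if response.status in (403, 503):
            spider.logger.warning('possibly blocked: %s' % request.url)
        return response

# settings.py:
# DOWNLOADER_MIDDLEWARES = {'MyProject.middlewares.CustomHeadersMiddleware': 543}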

10 Spider Middleware

https://docs.scrapy.org/en/latest/topics/spider-middleware.html
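A hedged sketch of a spider middleware using the process_spider_input/process_spider_output hooks from the data flow in section 1 (the domain check, class name, module path, and priority are illustrative):

import scrapy


class OffsiteRequestFilterMiddleware(object):
    def process_spider_input(self, response, spider):
        # called for each response before it reaches the spider; return None to continue
        return None

    def process_spider_output(self, response, result, spider):
        # called with everything the spider yields (items and requests); must return an iterable
        for r in result:
            if isinstance(r, scrapy.Request) and 'example.com' not in r.url:
                continue  # drop follow-up requests pointing outside the target domain
            yield r

# settings.py:
# SPIDER_MIDDLEWARES = {'MyProject.middlewares.OffsiteRequestFilterMiddleware': 543}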

11 Scraping Amazon Product Data

1. Create the project and the spider
scrapy startproject Amazon
cd Amazon
scrapy genspider spider_goods www.amazon.cn

2. settings.py
ROBOTSTXT_OBEY = False
#request headers
DEFAULT_REQUEST_HEADERS = {
    'Referer': 'https://www.amazon.cn/',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.75 Safari/537.36'
}
#uncomment the HTTP cache settings
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

3. items.py
class GoodsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # product name
    goods_name = scrapy.Field()
    # price
    goods_price = scrapy.Field()
    # delivery method
    delivery_method = scrapy.Field()

4. spider_goods.py
# -*- coding: utf-8 -*-
import scrapy

from Amazon.items import GoodsItem
from scrapy.http import Request
from urllib.parse import urlencode

class SpiderGoodsSpider(scrapy.Spider):
    name = 'spider_goods'
    allowed_domains = ['www.amazon.cn']
    # start_urls = ['http://www.amazon.cn/']


    def __init__(self, keyword=None, *args, **kwargs):
        super(SpiderGoodsSpider, self).__init__(*args, **kwargs)
        self.keyword = keyword

    def start_requests(self):
        url = 'https://www.amazon.cn/s/ref=nb_sb_noss_1?'
        params = {
            '__mk_zh_CN': '亞馬遜網站',
            'url': 'search-alias=aps',
            'field-keywords': self.keyword
        }
        url = url + urlencode(params, encoding='utf-8')
        yield Request(url, callback=self.parse_index)


    def parse_index(self, response):
        print('parsing index page: %s' % response.url)

        urls = response.xpath('//*[contains(@id,"result_")]/div/div[3]/div[1]/a/@href').extract()
        for url in urls:
            yield Request(url, callback=self.parse_detail)

        next_url = response.urljoin(response.xpath('//*[@id="pagnNextLink"]/@href').extract_first())
        print('next page url:', next_url)
        yield Request(next_url, callback=self.parse_index)

    def parse_detail(self, response):
        print('parsing detail page: %s' % response.url)

        item = GoodsItem()
        # product name
        item['goods_name'] = response.xpath('//*[@id="productTitle"]/text()').extract_first().strip()
        # price
        item['goods_price'] = response.xpath('//*[@id="priceblock_ourprice"]/text()').extract_first().strip()
        # delivery method
        item['delivery_method'] = ''.join(response.xpath('//*[@id="ddmMerchantMessage"]//text()').extract())
        return item

5. Custom pipelines
#sql.py
import pymysql
from Amazon import settings


MYSQL_HOST = settings.MYSQL_HOST
MYSQL_PORT = settings.MYSQL_PORT
MYSQL_USER = settings.MYSQL_USER
MYSQL_PWD = settings.MYSQL_PWD
MYSQL_DB = settings.MYSQL_DB

conn = pymysql.connect(
    host=MYSQL_HOST,
    port=int(MYSQL_PORT),
    user=MYSQL_USER,
    password=MYSQL_PWD,
    db=MYSQL_DB,
    charset='utf8'
)
cursor = conn.cursor()

class Mysql(object):
    @staticmethod
    def insert_table_goods(goods_name, goods_price, delivery_method):
        sql = 'insert into goods(goods_name,goods_price,delivery_method) values(%s,%s,%s)'
        cursor.execute(sql, args=(goods_name, goods_price, delivery_method))
        conn.commit()

    @staticmethod
    def is_repeat(goods_name):
        sql = 'select count(1) from goods where goods_name=%s'
        cursor.execute(sql, args=(goods_name,))
        if cursor.fetchone()[0] >= 1:
            return True

if __name__ == '__main__':
    cursor.execute('select * from goods;')
    print(cursor.fetchall())


#pipelines.py
from Amazon.mysqlpipelines.sql import Mysql


class AmazonPipeline(object):
    def process_item(self, item, spider):
        goods_name = item['goods_name']
        goods_price = item['goods_price']
        delivery_method = item['delivery_method']
        # only insert products that have not been stored yet
        if not Mysql.is_repeat(goods_name):
            Mysql.insert_table_goods(goods_name, goods_price, delivery_method)
        return item



6. Create the database table
create database amazon charset utf8;
create table goods(
    id int primary key auto_increment,
    goods_name char(30),
    goods_price char(20),
    delivery_method varchar(50)
);

7. settings.py
MYSQL_HOST = 'localhost'
MYSQL_PORT = 3306
MYSQL_USER = 'root'
MYSQL_PWD = '123'
MYSQL_DB = 'amazon'


#the number is the priority (anywhere from 1-1000; lower numbers run first)
ITEM_PIPELINES = {
   'Amazon.mysqlpipelines.pipelines.AmazonPipeline': 1,
}


#8. Create entrypoint.py in the project root directory
from scrapy.cmdline import execute
execute(['scrapy', 'crawl', 'spider_goods', '-a', 'keyword=iphone8'])

https://pan.baidu.com/s/1boCEBT1
