
Crawler Notes: A Brief Introduction to Scrapy

1. Installation: when it comes to crawlers, we have to mention a big, all-in-one crawler component/framework: Scrapy. Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a whole range of programs such as data mining, information processing, or archiving historical data. Let's get straight to the point and look at the two ways to install the framework.
First: installing on Windows takes the following steps
1. Download the Twisted wheel: http://www.lfd.uci.edu/~gohlke/pythonlibs/
2. pip3 install wheel
3. pip3 install Twisted-18.4.0-cp36-cp36m-win_amd64.whl  # pick the wheel that matches your own Python version and architecture
4. pip3 install pywin32
5. pip3 install scrapy
Second: installing on Linux (the same command works on macOS)
pip3 install scrapy

 

2. Basic usage of Scrapy: comparing how Django and Scrapy are used

Django:

# create a django project
django-admin startproject mysite

cd mysite

# create apps
python manage.py startapp app01
python manage.py startapp app02

# start the project
python manage.py runserver

Scrapy:

# create a scrapy project
  scrapy startproject cjk

  cd cjk

# create spiders
  scrapy genspider chouti chouti.com
  scrapy genspider cnblogs cnblogs.com

# run a spider
  scrapy crawl chouti
After installing Scrapy, open a command prompt and run scrapy to check whether the installation succeeded. If you see output like the following, it is installed:
Last login: Sat Jan  5 18:14:13 on ttys000
chenjunkandeMBP:~ chenjunkan$ scrapy
Scrapy 1.5.1 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

  [ more ]      More commands available when run from project directory

Use "scrapy <command> -h" to see more info about a command
A look at the Scrapy project directory:

Create a project:
  scrapy startproject <project name>
    <project name>/
      <project name>/
        - spiders/           # spider files
          - chouti.py
          - cnblogs.py
        - items.py           # persistence
        - pipelines.py       # persistence
        - middlewares.py     # middleware
        - settings.py        # configuration (crawling)
      scrapy.cfg             # configuration (deployment)
How do we run a spider? Let's look at the simple example below:

# -*- coding: utf-8 -*-
import scrapy


class ChoutiSpider(scrapy.Spider):
    # name of the spider
    name = 'chouti'
    # targeted spider: only pages under the dig.chouti.com domain will be crawled
    allowed_domains = ['dig.chouti.com']
    # starting urls
    start_urls = ['http://dig.chouti.com/']

    # callback: called automatically once the start url has been downloaded
    def parse(self, response):
        # <200 https://dig.chouti.com/> <class 'scrapy.http.response.html.HtmlResponse'>
        print(response, type(response))
Reading the source shows that HtmlResponse (from scrapy.http.response.html import HtmlResponse) inherits from TextResponse, which in turn inherits from Response, so HtmlResponse also has the parent classes' methods and attributes (text, xpath, etc.).
Note: the example above simply prints the returned response and its type, but there are a few points to be aware of (see the settings sketch after this list):
  • The chouti site publishes a robots.txt (anti-crawler) policy, so in settings we need to change ROBOTSTXT_OBEY to False (the default is True); once it is False, the spider no longer obeys the robots protocol.
  • The response passed to the callback is actually an instance of HtmlResponse, i.e. an object that wraps all the response information for the request.
  • What is the difference between scrapy crawl chouti and scrapy crawl chouti --nolog? The former prints the log output, the latter does not.
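A minimal settings.py fragment for the points above might look like this (assuming the project is named cjkscrapy; adjust the names to your own project):

# settings.py (sketch) -- only the lines relevant to the notes above
BOT_NAME = 'cjkscrapy'      # assumed project name

# Do not obey robots.txt; Scrapy's default is True
ROBOTSTXT_OBEY = False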

 

3. A first look at how Scrapy works (covered in detail later)

Scrapy is an asynchronous, non-blocking framework built on an event loop: it is implemented on top of Twisted, and internally it uses the event-loop mechanism to crawl concurrently.

This is how I used to crawl multiple urls: the requests are sent one after another.

import requests

url_list = ['http://www.baidu.com', 'http://www.baidu.com', 'http://www.baidu.com']

for item in url_list:
    response = requests.get(item)
    print(response.text)
Now I can do it this way instead:
from twisted.web.client import getPage, defer
from twisted.internet import reactor


# part one: the agent starts accepting tasks
def callback(contents):
    print(contents)


deferred_list = []
# the list of urls I want to request
url_list = ['http://www.bing.com', 'https://segmentfault.com/', 'https://stackoverflow.com/']
for url in url_list:
    # the request is not sent immediately; this just creates a deferred object
    deferred = getPage(bytes(url, encoding='utf8'))
    # callback invoked when the request finishes
    deferred.addCallback(callback)
    # collect all the deferred objects in one list
    deferred_list.append(deferred)

# part two: once the agent has finished every task, stop
dlist = defer.DeferredList(deferred_list)


def all_done(arg):
    reactor.stop()


dlist.addBoth(all_done)

# part three: tell the agent to start working
reactor.run()

 

4. Persistence

Traditionally, persistence means writing the crawled data straight to a file, which has several drawbacks, for example:
  • You cannot open a connection once when the spider starts and close it once when the spider finishes;
  • The division of responsibilities is unclear.
Traditional persistence by writing to a file:

# -*- coding: utf-8 -*-
import scrapy


class ChoutiSpider(scrapy.Spider):
    # name of the spider
    name = 'chouti'
    # targeted spider: only pages under the dig.chouti.com domain will be crawled
    allowed_domains = ['dig.chouti.com']
    # starting urls
    start_urls = ['http://dig.chouti.com/']

    # callback: called automatically once the start url has been downloaded
    def parse(self, response):
        f = open('news.log', mode='a+')
        item_list = response.xpath('//div[@id="content-list"]/div[@class="item"]')
        for item in item_list:
            text = item.xpath('.//a/text()').extract_first()
            href = item.xpath('.//a/@href').extract_first()
            print(href, text.strip())
            f.write(href + '\n')
        f.close()
This is where two important Scrapy pieces come on stage: Item and Pipelines.

Steps to implement a pipeline with Scrapy:

a. Configure ITEM_PIPELINES in the settings file: several pipelines can be listed here; the number after each one is its priority, and the smaller the number, the earlier it runs.

ITEM_PIPELINES = {
    'cjkscrapy.pipelines.CjkscrapyPipeline': 300,
}
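For instance, if the project also had a second, purely hypothetical database pipeline, a configuration like the following would run CjkscrapyPipeline first, because 300 is smaller than 400:

ITEM_PIPELINES = {
    'cjkscrapy.pipelines.CjkscrapyPipeline': 300,    # runs first (lower number = higher priority)
    'cjkscrapy.pipelines.CjkscrapyDbPipeline': 400,  # hypothetical second pipeline, runs afterwards
}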
b. Once ITEM_PIPELINES is configured, the def process_item(self, item, spider) method in pipelines.py is triggered automatically; but if that is all we change, running the spider will not yet print the chenjunkan message we expect:
class CjkscrapyPipeline(object):
    def process_item(self, item, spider):
        print("chenjunkan")
        return item

c. So first we need to add two fields to the item class in items.py, which constrains the item to carry exactly these two fields:

import scrapy


class CjkscrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    href = scrapy.Field()
    title = scrapy.Field()
Then, in the spider, add: yield CjkscrapyItem(href=href, title=text). This instantiates the CjkscrapyItem class, passing in two arguments, and the two fields defined on the item class are what receive them.
# -*- coding: utf-8 -*-
import scrapy
from cjkscrapy.items import CjkscrapyItem


class ChoutiSpider(scrapy.Spider):
    # name of the spider
    name = 'chouti'
    # targeted spider: only pages under the dig.chouti.com domain will be crawled
    allowed_domains = ['dig.chouti.com']
    # starting urls
    start_urls = ['http://dig.chouti.com/']

    # callback: called automatically once the start url has been downloaded
    def parse(self, response):
        item_list = response.xpath('//div[@id="content-list"]/div[@class="item"]')
        for item in item_list:
            text = item.xpath('.//a/text()').extract_first()
            href = item.xpath('.//a/@href').extract_first()
            yield CjkscrapyItem(href=href, title=text)

d. Next, process_item in pipelines is triggered automatically: every time the spider yields an item, process_item is called once, and the item argument is the object we created with CjkscrapyItem. Why do we need CjkscrapyItem at all? Because the item class lets us constrain, for CjkscrapyPipeline, exactly which data gets persisted: whatever fields the item defines are the fields we read. And what is the spider argument in def process_item(self, item, spider)? It is the instantiated spider: for class ChoutiSpider(scrapy.Spider) to run it must be instantiated first, so spider is simply the current spider object, carrying attributes such as name, allowed_domains and so on.

e. So when a spider yields an item object, it is handed to our pipelines for processing; when it yields a Request, the request goes back to be downloaded again. A small sketch of the two cases follows below.
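A minimal sketch of what that looks like inside one parse method (names taken from the example above; treat it as an illustration rather than the finished spider):

from scrapy.http import Request
from cjkscrapy.items import CjkscrapyItem

# inside ChoutiSpider (sketch):
def parse(self, response):
    for item in response.xpath('//div[@id="content-list"]/div[@class="item"]'):
        text = item.xpath('.//a/text()').extract_first()
        href = item.xpath('.//a/@href').extract_first()
        # handed to the pipelines: process_item is called once per yield
        yield CjkscrapyItem(href=href, title=text)

    for page in response.xpath('//div[@id="dig_lcpage"]//a/@href').extract():
        # handed back to the scheduler/downloader and eventually re-enters parse()
        yield Request(url="https://dig.chouti.com" + page, callback=self.parse)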

總結:

a. Write the pipeline class


class XXXPipeline(object):
    def process_item(self, item, spider):
        return item


b. Write the Item class


class XdbItem(scrapy.Item):
    href = scrapy.Field()
    title = scrapy.Field()


c. Configure it
ITEM_PIPELINES = {
    'xdb.pipelines.XdbPipeline': 300,
}

d. In the spider, every time yield runs, process_item is called once.

yield Item object
Now we have a basic grasp of persistence, but there is still a problem: process_item is triggered once per yield, and if we open and close a connection inside process_item every time, the performance cost is considerable. Example:
def process_item(self, item, spider):
    f = open("xx.log", "a+")
    f.write(item["href"] + "\n")
    f.close()
    return item
The CjkscrapyPipeline class actually has two more methods, open_spider and close_spider, so we can put the open-connection logic in open_spider and the close-connection logic in close_spider, and avoid opening and closing repeatedly:
class CjkscrapyPipeline(object):

    def open_spider(self, spider):
        print("spider started")
        self.f = open("new.log", "a+")

    def process_item(self, item, spider):
        self.f.write(item["href"] + "\n")

        return item

    def close_spider(self, spider):
        self.f.close()
        print("spider finished")
This does work, but looking closely the code above is not very tidy (instance attributes should be declared in __init__), so:

class CjkscrapyPipeline(object):
    def __init__(self):
        self.f = None

    def open_spider(self, spider):
        print("spider started")
        self.f = open("new.log", "a+")

    def process_item(self, item, spider):
        self.f.write(item["href"] + "\n")

        return item

    def close_spider(self, spider):
        self.f.close()
        print("spider finished")
Looking at the code again, we have hard-coded the output file path in the program. Can we put it in the settings file instead? For that, the CjkscrapyPipeline class can define a from_crawler method, which is a classmethod:

@classmethod
def from_crawler(cls, crawler):
    print('File.from_crawler')
    path = crawler.settings.get('HREF_FILE_PATH')
    return cls(path)
Explanation: crawler.settings.get('HREF_FILE_PATH') looks HREF_FILE_PATH up across the configuration; cls inside this method refers to the current class, CjkscrapyPipeline, and the return value is cls(path), so path is passed to __init__ when the object is created:

class CjkscrapyPipeline(object):
    def __init__(self, path):
        self.f = None
        self.path = path

    @classmethod
    def from_crawler(cls, crawler):
        print('File.from_crawler')
        path = crawler.settings.get('HREF_FILE_PATH')
        return cls(path)

    def open_spider(self, spider):
        print("spider started")
        self.f = open(self.path, "a+")

    def process_item(self, item, spider):
        self.f.write(item["href"] + "\n")

        return item

    def close_spider(self, spider):
        self.f.close()
        print("spider finished")
Now we know there are five methods on the CjkscrapyPipeline class. In what order do they execute?

"""
原始碼內容:
    1. 判斷當前CjkPipeline類中是否有from_crawler
        有:
            obj = CjkPipeline.from_crawler(....)
        否:
            obj = CjkPipeline()
    2. obj.open_spider()
    
    3. obj.process_item()/obj.process_item()/obj.process_item()/obj.process_item()/obj.process_item()
    
    4. obj.close_spider()
"""
Explanation: Scrapy first checks whether from_crawler exists. If it does not, CjkscrapyPipeline is instantiated directly; if it does, from_crawler runs first and reads the settings, and because its return value is an instance of the class, __init__ is called at that point.

What is the return item at the end of process_item for?

from scrapy.exceptions import DropItem
# return item       # hands the item to the next pipeline's process_item method
# raise DropItem()  # subsequent pipelines' process_item methods are not executed

Note: a pipeline is shared by all spiders; if you want behaviour specific to one spider, use the spider argument to handle it yourself, for example (see the sketch after the next line):

# if spider.name == 'chouti':
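Below is a hedged sketch (not from the original post) that combines the pieces above: it reads the path from settings via from_crawler, only persists items coming from the chouti spider, drops items with no href so later pipelines skip them, and otherwise passes the item on:

from scrapy.exceptions import DropItem


class CjkscrapyPipeline(object):
    def __init__(self, path):
        self.f = None
        self.path = path

    @classmethod
    def from_crawler(cls, crawler):
        # HREF_FILE_PATH is the custom setting name assumed earlier in this post
        return cls(crawler.settings.get('HREF_FILE_PATH'))

    def open_spider(self, spider):
        self.f = open(self.path, 'a+')

    def process_item(self, item, spider):
        # per-spider handling: only persist items produced by the chouti spider
        if spider.name == 'chouti':
            self.f.write(item['href'] + '\n')

        if not item.get('href'):
            # later pipelines' process_item will NOT run for this item
            raise DropItem()

        # hand the item to the next pipeline's process_item
        return item

    def close_spider(self, spider):
        self.f.close()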

 

5. Deduplication rules

Using the earlier code as an example:

# -*- coding: utf-8 -*-
import scrapy
from cjk.items import CjkItem


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['dig.chouti.com']
    start_urls = ['http://dig.chouti.com/']

    def parse(self, response):
        print(response.request.url)

        # item_list = response.xpath('//div[@id="content-list"]/div[@class="item"]')
        # for item in item_list:
        #     text = item.xpath('.//a/text()').extract_first()
        #     href = item.xpath('.//a/@href').extract_first()

        page_list = response.xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        for page in page_list:
            from scrapy.http import Request
            page = "https://dig.chouti.com" + page
            yield Request(url=page, callback=self.parse)  # https://dig.chouti.com/all/hot/recent/2
Explanation: print(response.request.url) prints the URL of the request currently being handled. Running the code above, we find that pages 1 through 120 are each fetched exactly once, with no repeats: Scrapy deduplicates internally, essentially keeping a set in memory, and a set cannot contain duplicates.

Looking at the source: import it with from scrapy.dupefilter import RFPDupeFilter. RFPDupeFilter is a class:

class RFPDupeFilter(BaseDupeFilter):
    """Request Fingerprint duplicates filter"""

    def __init__(self, path=None, debug=False):
        self.file = None
        self.fingerprints = set()
        self.logdupes = True
        self.debug = debug
        self.logger = logging.getLogger(__name__)
        if path:
            self.file = open(os.path.join(path, 'requests.seen'), 'a+')
            self.file.seek(0)
            self.fingerprints.update(x.rstrip() for x in self.file)

    @classmethod
    def from_settings(cls, settings):
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(job_dir(settings), debug)

    def request_seen(self, request):
        fp = self.request_fingerprint(request)
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        if self.file:
            self.file.write(fp + os.linesep)

    def request_fingerprint(self, request):
        return request_fingerprint(request)

    def close(self, reason):
        if self.file:
            self.file.close()

    def log(self, request, spider):
        if self.debug:
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
        elif self.logdupes:
            msg = ("Filtered duplicate request: %(request)s"
                   " - no more duplicates will be shown"
                   " (see DUPEFILTER_DEBUG to show all duplicates)")
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
            self.logdupes = False

        spider.crawler.stats.inc_value('dupefilter/filtered', spider=spider)
The crucial method in this class is request_seen; it alone decides whether a request has already been seen. When we yield a Request, Scrapy internally calls request_seen first to check whether the URL has been visited. Let's look at this method on its own:

def request_seen(self, request):
    fp = self.request_fingerprint(request)
    if fp in self.fingerprints:
        return True
    self.fingerprints.add(fp)
    if self.file:
        self.file.write(fp + os.linesep)
Explanation:
a. First fp = self.request_fingerprint(request) runs; request is the request we passed in, and it carries the URL. Since URLs vary in length, you can think of fp as something like an md5 digest of the URL produced by request_fingerprint, so every URL ends up as a fixed-length string that is easy to work with.
b. Then if fp in self.fingerprints: runs; self.fingerprints was initialised in __init__ as a set (self.fingerprints = set()). If fp is already in the set, the method returns True, which means the URL has been visited and will not be requested again.
c. If it has not been visited, self.fingerprints.add(fp) adds fp to the set.
d. We keep the visited fingerprints in memory, but they can also be written to a file by: if self.file: self.file.write(fp + os.linesep)

How do we get them written to a file?
1. First, from_settings runs and looks the path up in the configuration; DUPEFILTER_DEBUG can also be set to True in the settings:
@classmethod
def from_settings(cls, settings):
    debug = settings.getbool('DUPEFILTER_DEBUG')
    return cls(job_dir(settings), debug)
2. It returns cls(job_dir(settings), debug), where:
import os

def job_dir(settings):
    path = settings['JOBDIR']
    if path and not os.path.exists(path):
        os.makedirs(path)
    return path
If JOBDIR is set in the settings, the path is taken from there; if the directory does not exist it is created, and the returned path is the JOBDIR value from the configuration file.
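To make point (a) above concrete, here is a small sketch showing that request_fingerprint normalizes requests, so two URLs that differ only in query-parameter order produce the same fixed-length fingerprint (this uses the public helper from scrapy.utils.request, which is what RFPDupeFilter itself calls in Scrapy 1.x):

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

r1 = Request(url='https://dig.chouti.com/all/hot/recent/1?a=1&b=2')
r2 = Request(url='https://dig.chouti.com/all/hot/recent/1?b=2&a=1')

fp1 = request_fingerprint(r1)
fp2 = request_fingerprint(r2)

print(fp1)          # a 40-character sha1 hex digest, the same length for every URL
print(fp1 == fp2)   # True: query-parameter order does not change the fingerprint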

In general, though, writing the fingerprints to a file is of limited use; later I will cover storing them in Redis instead.

Custom deduplication rules: the built-in RFPDupeFilter inherits from BaseDupeFilter, so our own filter can inherit the same base class:

from scrapy.dupefilter import BaseDupeFilter
from scrapy.utils.request import request_fingerprint


class CjkDupeFilter(BaseDupeFilter):

    def __init__(self):
        self.visited_fd = set()

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        fd = request_fingerprint(request=request)
        if fd in self.visited_fd:
            return True
        self.visited_fd.add(fd)

    def open(self):  # can return deferred
        print('dupefilter opened')

    def close(self, reason):  # can return a deferred
        print('dupefilter closed')

    # def log(self, request, spider):  # log that a request has been filtered
    #     print('log')

The default deduplication rule is configured as: DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'

We need to point the setting at our custom rule in the settings file: DUPEFILTER_CLASS = 'cjkscrapy.dupefilters.CjkDupeFilter'
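In settings.py this is a single line (assuming the class above lives in a file named dupefilters.py inside the cjkscrapy package):

# settings.py
# point Scrapy at the custom filter instead of the built-in RFPDupeFilter
DUPEFILTER_CLASS = 'cjkscrapy.dupefilters.CjkDupeFilter'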

(Later we will implement the deduplication rule in Redis.)

Note: the spider itself can also opt out of deduplication per request. Request has a dont_filter parameter, which defaults to False, meaning the dedup filter is applied; setting dont_filter=True means that request bypasses the deduplication rule (see the sketch after the signature below).

class Request(object_ref):

    def __init__(self, url, callback=None, method='GET', headers=None, body=None,
                 cookies=None, meta=None, encoding='utf-8', priority=0,
                 dont_filter=False, errback=None, flags=None):
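A minimal sketch of opting a single request out of deduplication; everything else is the same as in the earlier spider:

from scrapy.http import Request

# inside a spider callback:
# dont_filter=True: this request is never checked against the dupefilter,
# so it will be downloaded even if the same URL has been seen before
yield Request(url='https://dig.chouti.com/all/hot/recent/1',
              callback=self.parse,
              dont_filter=True)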

 

6. Controlling crawl depth (covered in detail later)

# -*- coding: utf-8 -*-
import scrapy
from scrapy.dupefilter import RFPDupeFilter


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['dig.chouti.com']
    start_urls = ['http://dig.chouti.com/']

    def parse(self, response):
        print(response.request.url, response.meta.get("depth", 0))

        # item_list = response.xpath('//div[@id="content-list"]/div[@class="item"]')
        # for item in item_list:
        #     text = item.xpath('.//a/text()').extract_first()
        #     href = item.xpath('.//a/@href').extract_first()

        page_list = response.xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        for page in page_list:
            from scrapy.http import Request
            page = "https://dig.chouti.com" + page
            yield Request(url=page, callback=self.parse, dont_filter=False)  # https://dig.chouti.com/all/hot/recent/2

Depth means how many link hops deep the spider crawls. To limit it, set DEPTH_LIMIT = 3 in the settings file.
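A settings sketch for limiting depth (DEPTH_LIMIT is a real Scrapy setting; 3 is just an example value):

# settings.py
# Stop following links more than 3 hops away from the start_urls;
# response.meta.get("depth", 0) is 0 for start_urls responses and grows by 1 per followed link.
DEPTH_LIMIT = 3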

 

7. Handling cookies manually

Looking at the parameters of the Request object:

class Request(object_ref):

    def __init__(self, url, callback=None, method='GET', headers=None, body=None,
                 cookies=None, meta=None, encoding='utf-8', priority=0,
                 dont_filter=False, errback=None, flags=None):

For now we only care about these Request parameters: url, callback=None, method='GET', headers=None, body=None, cookies=None. Sending a request is essentially request headers + request body + cookies.

Example: logging in to Chouti automatically.

The first time we request Chouti we are given a cookie. How do we get that cookie out of the response?

First import the cookie-handling class: from scrapy.http.cookies import CookieJar

# -*- coding: utf-8 -*-
import scrapy
from scrapy.dupefilter import RFPDupeFilter
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['dig.chouti.com']
    start_urls = ['http://dig.chouti.com/']

    def parse(self, response):
        cookie_dict = {}

        # get the cookie from the response headers; it is stored on the cookie_jar object
        cookie_jar = CookieJar()
        # extract the cookies from response and response.request
        cookie_jar.extract_cookies(response, response.request)

        # walk the object and unpack the cookies into a dict
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    cookie_dict[m] = n.value
        print(cookie_dict)

Run the code above and we get the cookie:

chenjunkandeMBP:cjkscrapy chenjunkan$ scrapy crawl chouti --nolog
/Users/chenjunkan/Desktop/scrapytest/cjkscrapy/cjkscrapy/spiders/chouti.py:3: ScrapyDeprecationWarning: Module `scrapy.dupefilter` is deprecated, use `scrapy.dupefilters` instead
  from scrapy.dupefilter import RFPDupeFilter
{'gpsd': 'd052f4974404d8c431f3c7c1615694c4', 'JSESSIONID': 'aaaUxbbxMYOWh4T7S7rGw'}
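As an aside, Scrapy's built-in CookiesMiddleware (enabled by default) can also track cookies for you, and the cookiejar meta key lets you keep separate sessions without unpacking CookieJar._cookies by hand. A minimal sketch, with the credentials elided:

from scrapy.http import Request

# inside parse(): let CookiesMiddleware carry the session cookies for us
yield Request(
    url='https://dig.chouti.com/login',
    method='POST',
    body="phone=...&password=...&oneMonth=1",
    headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
    meta={'cookiejar': 1},   # later requests that pass the same key reuse this cookie jar
    callback=self.check_login,
)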

A complete example that logs in to Chouti and upvotes posts automatically:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.dupefilter import RFPDupeFilter
from scrapy.http.cookies import CookieJar
from scrapy.http import Request


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['dig.chouti.com']
    start_urls = ['http://dig.chouti.com/']
    cookie_dict = {}

    def parse(self, response):

        # get the cookie from the response headers; it is stored on the cookie_jar object
        cookie_jar = CookieJar()
        # extract the cookies from response and response.request
        cookie_jar.extract_cookies(response, response.request)

        # walk the object and unpack the cookies into a dict
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        print(self.cookie_dict)

        yield Request(
            url='https://dig.chouti.com/login',
            method='POST',
            body="phone=8618357186730&password=cjk139511&oneMonth=1",
            cookies=self.cookie_dict,
            headers={
                'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'
            },
            callback=self.check_login
        )

    def check_login(self, response):
        print(response.text)
        yield Request(
            url='https://dig.chouti.com/all/hot/recent/1',
            cookies=self.cookie_dict,
            callback=self.index
        )

    def index(self, response):
        news_list = response.xpath('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            link_id = new.xpath('.//div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' % (link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.check_result
            )
        # upvote on every page
        page_list = response.xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        for page in page_list:
            page = "https://dig.chouti.com" + page
            yield Request(url=page, callback=self.index)  # https://dig.chouti.com/all/hot/recent/2

    def check_result(self, response):
        print(response.text)

Run result:

chenjunkandeMBP:cjkscrapy chenjunkan$ scrapy crawl chouti --nolog
/Users/chenjunkan/Desktop/scrapytest/cjkscrapy/cjkscrapy/spiders/chouti.py:3: ScrapyDeprecationWarning: Module `scrapy.dupefilter` is deprecated, use `scrapy.dupefilters` instead
  from scrapy.dupefilter import RFPDupeFilter
{'gpsd': '78613f08c985435d5d0eedc08b0ed812', 'JSESSIONID': 'aaaTmxnFAJJGn9Muf8rGw'}
{"result":{"code":"9999", "message":"", "data":{"complateReg":"0","destJid":"cdu_53587312848"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112533000","lvCount":"6","nick":"chenjunkan","uvCount":"26","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112720000","lvCount":"28","nick":"chenjunkan","uvCount":"27","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112862000","lvCount":"24","nick":"chenjunkan","uvCount":"33","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112849000","lvCount":"29","nick":"chenjunkan","uvCount":"33","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112872000","lvCount":"48","nick":"chenjunkan","uvCount":"33","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112877000","lvCount":"23","nick":"chenjunkan","uvCount":"33","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112877000","lvCount":"69","nick":"chenjunkan","uvCount":"33","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112877000","lvCount":"189","nick":"chenjunkan","uvCount":"33","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112926000","lvCount":"98","nick":"chenjunkan","uvCount":"35","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692112951000","lvCount":"61","nick":"chenjunkan","uvCount":"35","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692113086000","lvCount":"13","nick":"chenjunkan","uvCount":"37","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692113097000","lvCount":"17","nick":"chenjunkan","uvCount":"38","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692113118000","lvCount":"21","nick":"chenjunkan","uvCount":"41","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692113155000","lvCount":"86","nick":"chenjunkan","uvCount":"41","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692113140000","lvCount":"22","nick":"chenjunkan","uvCount":"41","voteTime":"小於1分鐘前"}}}
{"result":{"code":"9999", "message":"推薦成功", "data":{"jid":"cdu_53587312848","likedTime":"1546692