
Day20-3 Integrating Selenium into Scrapy (Extension)

Tags: web scraping

Preface

-----------------------How do we integrate Selenium into Scrapy?-----------------
-----------------------Implementing automatic page turning------------

  • First write out the Selenium operations the normal way
  • Then fold that logic into Scrapy
  • Create a Scrapy CrawlSpider project (commands sketched below)
    scrapy genspider -t crawl <spider_name> <domain>
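
Concretely, assuming the project is named jianshu and the spider js (the names used in the code below), the commands would be:

scrapy startproject jianshu
cd jianshu
scrapy genspider -t crawl js jianshu.com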

---------------------------------Code----------------------------------------------------------



--------Spider code-----------

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class JsSpider(CrawlSpider):
    name = 'js'
    allowed_domains = ['jianshu.com']
    start_urls = ['https://www.jianshu.com/p/06574d4490ba']
    #https://www.jianshu.com/p/6db79ccbb18d
    rules = (
        Rule(LinkExtractor(allow=r'.*/p/\w+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # item = {}
        # item['domain_id'] = response.xpath('//input[@id="sid"]/@value').get()
        # item['name'] = response.xpath('//div[@id="name"]').get()
        # item['description'] = response.xpath('//div[@id="description"]').get()
        # return item
        pass
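
As a sketch of what parse_item could do with the Selenium-rendered page, the commented-out template could be filled in along these lines (the //h1/text() XPath is an assumption, not the actual Jianshu markup):

    def parse_item(self, response):
        # response is the HtmlResponse built by the downloader middleware,
        # so it already contains the page as rendered by Selenium.
        yield {
            'url': response.url,
            'title': response.xpath('//h1/text()').get(),
        }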

-----------Downloader middleware code------------

In the project's middlewares file, delete everything inside the downloader middleware class, keep only the class name, and redefine the class as follows.

from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from scrapy.http.response.html import HtmlResponse

class JianshuDownloaderMiddleware:
    def __init__(self):
        self.driver = webdriver.Chrome()

    def process_request(self, request, spider):
        # Intercept the request: fetch the page with Selenium and wrap the rendered HTML in a Response object
        self.driver.get(request.url)

        more_btn_xpath="//div[@role='main']/div[position()=1]/section[last()]/div[position()=1]/div[last()]"

        WebDriverWait(self.driver, 12).until(
            EC.element_to_be_clickable((By.XPATH, more_btn_xpath))
        )
        while True:
            try:
                # Keep clicking the "load more" button until it can no longer be found.
                # find_element(By.XPATH, ...) works on both Selenium 3 and 4;
                # the older find_element_by_xpath was removed in Selenium 4.
                more_btn = self.driver.find_element(By.XPATH, more_btn_xpath)
                self.driver.execute_script('arguments[0].click();', more_btn)  # trigger the click via JavaScript
                # more_btn.click()
            except Exception as e:
                break

        response = HtmlResponse(request.url, body=self.driver.page_source, request=request, encoding='utf-8')
        return response

To integrate Selenium we implement the logic in the downloader middleware, and it has to return a Response object. Jianshu obfuscates its tag and class names, so the workaround here is to locate the "load more" button purely by position.

more_btn_xpath="//div[@role=‘main’]/div[position()=1]/section[last()]/div[position()=1]/div[last()]"
position()=1標籤第一個
last() 最後一個
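
To see how position()=1 and last() walk down to the "load more" button, here is a toy example with made-up HTML (not the real Jianshu markup) run through lxml:

from lxml import html

# Made-up structure that mirrors the shape the XPath expects.
doc = html.fromstring("""
<div role="main">
  <div>
    <section>older content</section>
    <section>
      <div>
        <div>comments</div>
        <div>load more</div>
      </div>
    </section>
  </div>
</div>
""")

btn = doc.xpath("//div[@role='main']/div[position()=1]/section[last()]/div[position()=1]/div[last()]")
print(btn[0].text)  # -> load more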

response = HtmlResponse(request.url, body=self.driver.page_source, request=request, encoding='utf-8')
return response
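
One thing the middleware above never does is close the browser when the crawl ends. A possible addition (not in the original code) is to hook Scrapy's spider_closed signal, using the signals module already imported at the top of the file:

    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this when building the middleware; register the cleanup hook here.
        middleware = cls()
        crawler.signals.connect(middleware.spider_closed, signal=signals.spider_closed)
        return middleware

    def spider_closed(self, spider):
        # Quit Chrome once the spider has finished.
        self.driver.quit()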

Enable the downloader middleware in the Scrapy settings:

--------------Scrapy settings code---------------

# Scrapy settings for jianshu project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jianshu'

SPIDER_MODULES = ['jianshu.spiders']
NEWSPIDER_MODULE = 'jianshu.spiders'
LOG_LEVEL='WARNING'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jianshu (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)      Chrome/86.0.4240.111 Safari/537.36"
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jianshu.middlewares.JianshuSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'jianshu.middlewares.JianshuDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'jianshu.pipelines.JianshuPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
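
With DOWNLOADER_MIDDLEWARES pointing at JianshuDownloaderMiddleware, the spider defined above can be run by its name:

scrapy crawl js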