
# Crawlers: Getting to Know Scrapy (16)

## Installing Scrapy

Under the hood Scrapy depends on lxml, Twisted, and OpenSSL, which involve system C libraries, so the installation can fail if the build dependencies are missing.

```bash
# Install from PyPI
pip install scrapy
# or install the distribution package on Debian/Ubuntu
apt install python3-scrapy
```
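If the installation succeeds, `scrapy version -v` prints the versions of Scrapy and its key dependencies (lxml, Twisted, pyOpenSSL, Python), which is a quick way to confirm the C extensions were built correctly:

```bash
# Verify the installation and inspect dependency versions
scrapy version -v
```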

## Scrapy commands

### Creating a project

```bash
scrapy startproject qianmu
```

### Generating a spider file

Note: the spider name must not duplicate the project name.

```bash
# scrapy genspider [spider name] [target site domain]
scrapy genspider usnews qianmu.iguye.com
```

### Running a spider

```bash
# Run the spider named usnews
scrapy crawl usnews
# Export the scraped data to a JSON file
scrapy crawl usnews -o usnews.json
# Export to a CSV file
scrapy crawl usnews -o usnews.csv -t csv
# Run a single spider file without a project
scrapy runspider usnews.py
```

### Debugging a spider

```bash
# Enter the Scrapy console, using the project's environment
scrapy shell
# With a URL argument, the URL is requested automatically and the console opens once the request succeeds
scrapy shell http://url.com
```

Once inside the console, the following functions and objects are available:

| Name | Description |
| -------- | ------------------------------------------------------------ |
| fetch | Requests a URL or a Request object. Note: after a successful request, the request and response objects in the current scope are reassigned |
| view | Opens the page held in the response object in a browser |
| shelp | Prints help information |
| spider | Instance of the relevant Spider class |
| settings | Settings object holding all configuration |
| crawler | The current Crawler object |
| scrapy | The scrapy module |

```bash
# Download a page using the project settings, then open it in a browser
scrapy view url
# Download a page using the project settings, then print it to the console
scrapy fetch url
```
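For illustration, a typical debugging session inside `scrapy shell` might look like the sketch below; the URL and XPath expression are placeholder examples, not taken from the original article:

```python
# Inside the Scrapy shell (an interactive Python prompt)
fetch('http://quotes.toscrape.com')                 # placeholder URL; re-binds request/response
response.status                                     # HTTP status of the last fetch, e.g. 200
response.xpath('//title/text()').extract_first()    # extract the page <title>
view(response)                                      # open the downloaded page in a browser
```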

Creating the scrapy_test project generates the following layout.

- items.py defines the containers (fields) that hold the scraped data
- pipelines.py handles processing of the scraped items
- settings.py is where the project configuration lives
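For reference, the generated tree looks roughly like this (exact files vary slightly between Scrapy versions; middlewares.py, for instance, only appears in newer releases):

```
scrapy_test/
    scrapy.cfg             # deploy/configuration file
    scrapy_test/
        __init__.py
        items.py           # item definitions
        middlewares.py     # spider / downloader middlewares
        pipelines.py       # item processing
        settings.py        # project settings
        spiders/
            __init__.py
```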

The following project scrapes the page information of the Douban Top 250:

```python
# Run the crawl from a script via Scrapy's command-line interface
from scrapy import cmdline

cmdline.execute("scrapy crawl doubanMovie".split())
```
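This assumes the runner script sits at the project root next to scrapy.cfg (for example as run.py), so that `python run.py` behaves the same as invoking `scrapy crawl doubanMovie` from that directory.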

Configure settings.py:

```python
# -*- coding: utf-8 -*-

# Scrapy settings for douban project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douban'

SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'

FEED_URI = u'doubanFile.csv'
FEED_FORMAT = 'CSV'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douban (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'douban.middlewares.DoubanSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'douban.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'douban.pipelines.DoubanPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
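Note that FEED_URI and FEED_FORMAT are the older feed-export settings; on Scrapy 2.1 and later they are deprecated in favour of the FEEDS dictionary. An equivalent configuration on newer versions would look roughly like this:

```python
# settings.py on newer Scrapy versions (2.1+) -- replaces FEED_URI / FEED_FORMAT
FEEDS = {
    'doubanFile.csv': {'format': 'csv'},
}
```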

Write the item definition (items.py):

```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy import Item, Field


class doubanItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = Field()
    movieInfo = Field()
    star = Field()
    quote = Field()
```

The main spider file:

```python
# coding=utf-8
from scrapy.spiders import CrawlSpider
from scrapy.http import Request
from scrapy.selector import Selector

from douban.items import doubanItem

'''Crawl plan
* Target site: Douban Movie Top 250
* Target URL: http://movie.douban.com/top250
* Target content, for each of the top 250 films:
    * movie title
    * movie info
    * movie rating
* Output: a CSV file
'''

class Douban(CrawlSpider):
    name = "doubanMovie"
    # redis_key is only meaningful when running with scrapy-redis
    redis_key = 'douban:start_urls'
    start_urls = ['http://movie.douban.com/top250']
    url = 'http://movie.douban.com/top250'

    def parse(self, response):
        item = doubanItem()
        selector = Selector(response)
        Movies = selector.xpath('//div[@class="info"]')
        print('Movies', Movies)
        for eachMovie in Movies:
            print('eachMovie', eachMovie)
            # The title can be split across several <span> elements; join them into one string
            title = eachMovie.xpath('div[@class="hd"]/a/span/text()').extract()
            fullTitle = ''
            print('title', title)
            for each in title:
                fullTitle += each
                print('eachtitle', each)
            movieInfo = eachMovie.xpath('div[@class="bd"]/p/text()').extract()
            star = eachMovie.xpath('div[@class="bd"]/div[@class="star"]/span[@class="rating_num"]/text()').extract()[0]
            quote = eachMovie.xpath('div[@class="bd"]/p[@class="quote"]/span/text()').extract()
            if quote:
                quote = quote[0]
            else:
                quote = ''
            print('fullTitle', fullTitle)
            print('movieInfo', movieInfo)
            print('star', star)
            print('quote', quote)
            item['title'] = fullTitle
            item['movieInfo'] = ';'.join(movieInfo)
            item['star'] = star
            item['quote'] = quote
            yield item
        # Follow the "next page" link, if present
        nextLink = selector.xpath('//span[@class="next"]/link/@href').extract()
        if nextLink:
            nextLink = nextLink[0]
            print(nextLink)
            yield Request(self.url + nextLink, callback=self.parse)
```
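The article does not modify pipelines.py, but the commented-out ITEM_PIPELINES entry in settings.py ('douban.pipelines.DoubanPipeline': 300) hints at where item post-processing would go. A minimal sketch, assuming a hypothetical DoubanPipeline that only cleans up the title, might look like this:

```python
# pipelines.py -- minimal sketch, not part of the original project
class DoubanPipeline(object):
    def process_item(self, item, spider):
        # Example clean-up: strip stray whitespace from the title before export
        item['title'] = item['title'].strip()
        return item
```

To activate it, uncomment the ITEM_PIPELINES block in settings.py.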