Scrapy Framework: Chuanzhi (itcast) Project Notes
===============================================================
The Scrapy crawler framework
===============================================================
1.scrapy-project: itcast (the spider does not use yield, i.e. the pipeline is not enabled)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project ---- scrapy startproject itcast
| itcast/
| ├── scrapy.cfg
| └── itcast
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── chuanzhi.py
|
| 2. Define the scrape targets ---- vim items.py
| vim items.py
| import scrapy
|
| class ItcastItem(scrapy.Item): # define the Item model class and declare the target fields to scrape in it
| name = scrapy.Field()
| level = scrapy.Field()
| info = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider chuanzhi "itcast.cn" # note: the spider name must differ from the project name, and both the spider name and the crawl domain must be given
| (2) Edit the spider --- vim chuanzhi.py
| vim chuanzhi.py
| import scrapy
| from itcast.items import ItcastItem
|
| class ChuanzhiSpider(scrapy.Spider):
| name = "chuanzhi"
| allowed_domains = ["itcast.cn"]
| start_urls=["http://www.itcast.cn/",]
|
| def parse(self,response):
| items = []
| for each in response.xpath("//div[@class='li_txt']"):
| item = ItcastItem() # instantiate the ItcastItem class defined in items.py --- note the "from itcast.items import ItcastItem" at the top of the spider
| item['name']=each.xpath("h3/text()").extract()[0] # extract() returns a list of unicode strings; [0] takes the first one
| item['level']=each.xpath("h4/text()").extract()[0]
| item['info']=each.xpath("p/text()").extract()[0]
| items.append(item)
| return items # return hands the whole list back at once and nothing goes to the pipeline; yielding inside the for loop hands each processed item to the pipeline instead
|
| 4. Run the spider --- scrapy crawl chuanzhi # mind the spider name when running; the -o option saves the scraped results to a file in the chosen format
|
| The four simplest ways scrapy can save the output (see the optional encoding note below):
| scrapy crawl chuanzhi -o teachers.json # save as a JSON file; non-ASCII text is escaped (\uXXXX) by default
| scrapy crawl chuanzhi -o teachers.jsonlines # save as a JSON Lines file; non-ASCII text is escaped by default
| scrapy crawl chuanzhi -o teachers.csv # save as comma-separated CSV, can be opened in Excel
| scrapy crawl chuanzhi -o teachers.xml # save as an XML file
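| Note: an optional tweak, not part of the original notes -- on Scrapy 1.2 or newer a single setting makes the -o JSON exports write readable UTF-8 instead of \uXXXX escapes:
| vim settings.py
| FEED_EXPORT_ENCODING = "utf-8" # assumption: Scrapy >= 1.2; the JSON feed exporter then emits UTF-8 directly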
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
2.scrapy-project: itcast (the spider uses yield, i.e. the pipeline is enabled)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project ---- scrapy startproject itcast
| itcast/
| ├── scrapy.cfg
| └── itcast
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── chuanzhi.py
|
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class ItcastItem(scrapy.Item):
| name = scrapy.Field()
| level = scrapy.Field()
| info = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider chuanzhi "itcast.cn"
| (2) Edit the spider --- vim chuanzhi.py
| vim chuanzhi.py
| import scrapy
| from itcast.items import ItcastItem
|
| class ChuanzhiSpider(scrapy.Spider):
| name = "chuanzhi"
| allowed_domains = ["itcast.cn"]
| start_urls = ["http://www.itcast.cn/",]
|
| def parse(self,response):
| for each in response.xpath("//div[@class='li_txt']"):
| item = ItcastItem()
| item['name'] = each.xpath('h3/text()').extract()[0]
| item['level'] = each.xpath('h4/text()').extract()[0]
| item['info'] = each.xpath('p/text()').extract()[0]
| yield item # yield hands each loop iteration's item to the pipeline
|
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
|
| class ItcastJsonPipeline(object): # a pipeline class must be defined to handle the items the spider yields; it must implement process_item()
| def __init__(self): # overriding __init__() is optional
| self.filename = 'teachers.json'
| def open_spider(self,spider): # open_spider() is optional and must take a spider argument; it is called when the spider starts
| self.f = open(self.filename,"wb")
| def process_item(self,item,spider): # process_item() is mandatory; it receives each yielded item plus the spider argument
| content = json.dumps(dict(item),ensure_ascii=False) + ",\n"
| self.f.write(content.encode('utf-8'))
| return item
| def close_spider(self,spider): # close_spider() is optional and must take a spider argument; it is called when the spider closes
| self.f.close()
|
| 5. Enable the pipeline component above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"itcast.pipelines.ItcastJsonPipeline":300}
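| # The number (300) is the pipeline's priority: values range 0-1000 and lower numbers run first.
| # A minimal sketch of chaining a second pipeline (the CSV pipeline class name is hypothetical, for illustration only):
| ITEM_PIPELINES = {
| "itcast.pipelines.ItcastJsonPipeline": 300, # runs first
| "itcast.pipelines.ItcastCsvPipeline": 400, # hypothetical second pipeline, runs after the JSON one
| }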
|
| 6. Run the spider --- scrapy crawl chuanzhi # a teachers.json file is generated in the current working directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
3.scrapy-project: tencent (Tencent recruitment, scrapy.Spider version)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project ---- scrapy startproject tencent
| tencent/
| ├── scrapy.cfg
| └── tencent
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── tt.py
|
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class TencentItem(scrapy.Item):
| name=scrapy.Field()
| detail_link = scrapy.Field()
| position_info = scrapy.Field()
| people_number = scrapy.Field()
| work_location = scrapy.Field()
| publish_time = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider tt "tencent.com"
| (2) Edit the spider --- vim tt.py
| vim tt.py
| import scrapy
| import re
| from tencent.items import TencentItem
|
| class TtSpider(scrapy.Spider):
| name = "tt"
| allowed_domains = ["tencent.com"]
| start_urls = ["http://hr.tencent.com/position.php?&start=0#a"]
|
| def parse(self,response):
| for each in response.xpath('//*[@class="even"]'):
| item = TencentItem()
| item['name']=each.xpath('./td[1]/a/text()').extract()[0].encode('utf-8')
| item['detail_link']=each.xpath('./td[1]/a/@href').extract()[0].encode('utf-8')
| item['position_info']=each.xpath('./td[2]/a/text()').extract()[0].encode('utf-8')
| item['people_number']=each.xpath('./td[3]/a/text()').extract()[0].encode('utf-8')
| item['work_location']=each.xpath('./td[4]/a/text()').extract()[0].encode('utf-8')
| item['publish_time']=each.xpath('./td[5]/a/text()').extract()[0].encode('utf-8')
| current_page = re.search(r'\d+',response.url).group() # take the first number matched in the current page URL (i.e. the current page offset)
| next_page = int(current_page) + 10 # next page offset = current offset + 10
| next_url = re.sub(r'\d+',str(next_page),response.url) # replace the number in the current URL with the next offset to get the next page's URL
| yield scrapy.Request(next_url,callback=self.parse) # yield a scrapy.Request() to put the next page URL on the request queue, with parse as the callback for its response
| yield item # yield the item collected in this iteration to the pipeline
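| # A quick standalone worked example of the regex pagination above (a sketch; the values follow from the start URL):
| # >>> import re
| # >>> url = "http://hr.tencent.com/position.php?&start=0#a"
| # >>> re.search(r'\d+', url).group() # -> '0', the current offset
| # >>> re.sub(r'\d+', str(0 + 10), url) # -> 'http://hr.tencent.com/position.php?&start=10#a'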
|
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
|
| class TencentJsonPipeline(object):
| def __init__(self):
| self.filename = "tencent.json"
| def open_spider(self,spider):
| self.f = open(self.filename,"wb")
| def process_item(self,item,spider):
| content = json.dumps(dict(item),ensure_ascii=False) + ",\n"
| self.f.write(content.encode('utf-8'))
| return item
| def close_spider(self,spider):
| self.f.close()
|
| 5. Enable the pipeline component above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES={"tencent.pipelines.TencentJsonPipeline":300}
|
| 6. Run the spider --- scrapy crawl tt # running the spider generates a tencent.json file in the current directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
4.scrapy-project: tencent (Tencent recruitment, CrawlSpider version)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project ---- scrapy startproject tencent
| tencent/
| ├── scrapy.cfg
| └── tencent
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── tt.py
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class TencentItem(scrapy.Item):
| name=scrapy.Field()
| detail_link = scrapy.Field()
| position_info = scrapy.Field()
| people_number = scrapy.Field()
| work_location = scrapy.Field()
| publish_time = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider -t crawl tt "tencent.com" # -t selects the CrawlSpider template
| (2) Edit the spider --- vim tt.py
| vim tt.py
| import scrapy
| from scrapy.spiders import CrawlSpider,Rule # the CrawlSpider version needs the CrawlSpider/Rule classes
| from scrapy.linkextractors import LinkExtractor # link extraction also needs the LinkExtractor class
| from tencent.items import TencentItem # plus the project's own Item
| class TtSpider(CrawlSpider):
| name = "tt"
| allowed_domains = ["tencent.com"]
| start_urls = ["http://hr.tencent.com/position.php?&start=0#a"]
| page_link = LinkExtractor(allow=(r'start=\d+')) # LinkExtractor() automatically collects the matching links (here, links containing "start=<number>")
| rules = [
| Rule(page_link,callback='parse_tencent',follow=True) # Rule() automatically queues requests for the matched page links, hands each response to the parse_tencent() callback, and follow=True keeps following links found in those responses
| ] # several Rule() entries can be listed to match different links and route them to different callbacks/handlers
| def parse_tencent(self,response):
| for each in response.xpath('//tr[@class="even"]|//tr[@class="odd"]'):
| item = TencentItem()
| item['name']=each.xpath('./td[1]/a/text()').extract()[0].encode('utf-8')
| item['detail_link']=each.xpath('./td[1]/a/@href').extract()[0].encode('utf-8')
| item['position_info']=each.xpath('./td[2]/a/text()').extract()[0].encode('utf-8')
| item['people_number']=each.xpath('./td[3]/a/text()').extract()[0].encode('utf-8')
| item['work_location']=each.xpath('./td[4]/a/text()').extract()[0].encode('utf-8')
| item['publish_time']=each.xpath('./td[5]/a/text()').extract()[0].encode('utf-8')
| yield item
| # With CrawlSpider there is no need to extract/build the next-page URL yourself, send the new request, or designate a callback for it; LinkExtractor() and Rule() together handle the whole URL-extraction, request-sending and follow-up process
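| # A quick way to check what a LinkExtractor will match (a sketch, run inside "scrapy shell <listing page URL>"):
| # >>> from scrapy.linkextractors import LinkExtractor
| # >>> le = LinkExtractor(allow=r'start=\d+')
| # >>> le.extract_links(response) # lists the Link objects the Rule above would follow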
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
|
| class TencentJsonPipeline(object):
| def __init__(self):
| self.filename = 'tencent.json'
| def open_spider(self,spider):
| self.f = open(self.filename,"w")
| def process_item(self,item,spider):
| content = json.dumps(dict(item),ensure_ascii=False) + ',\n'
| self.f.write(content.encode('utf-8'))
| return item
| def close_spider(self,spider):
| self.f.close()
|
| 5. Enable the pipeline above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"tencent.pipelines.TencentJsonPipeline":300}
|
| 6. Run the spider --- scrapy crawl tt
|
| # Note the differences described above between the scrapy.Spider class and the CrawlSpider class!
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
5.scrapy-project: dongguan (Dongguan "Sunshine" government Q&A site, CrawlSpider version --- multiple Rules)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project --- scrapy startproject dongguan
| dongguan/
| ├── scrapy.cfg
| └── dongguan
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── sun.py
|
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class DongguanItem(scrapy.Item):
| title = scrapy.Field()
| content = scrapy.Field()
| url = scrapy.Field()
| number = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider -t crawl sun "wz.sun0769.com"
| (2) Edit the spider --- vim sun.py
| vim sun.py
| import scrapy
| from scrapy.spiders import CrawlSpider,Rule
| from scrapy.linkextractors import LinkExtractor
| from dongguan.items import DongguanItem
| class SunSpider(CrawlSpider):
| name = "sun"
| allowed_domains = ["wz.sun0769.com"]
| start_urls = ["http://wz.sun0769.com/index.php/question/questionType?type=4&page=0"]
| rules = [ # note: with no callback and no follow, follow defaults to True (keep following); with a callback but no follow, follow defaults to False (do not follow)
| Rule(LinkExtractor(allow=r'type=4&page=\d+'),follow=True), # first Rule: matches every listing page, keeps following, no callback
| Rule(LinkExtractor(allow=r'/html/question/\d+/\d+.shtml'),callback='parse_item') # second Rule: matches each post page, handled by the parse_item() callback, no following
| ]
| def parse_item(self,response):
| item=DongguanItem()
| item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
| item['number'] = item['title'].split(' ')[-1].split(':')[-1] # pull the post number out of the title
| item['content'] = response.xpath('//div[@class="c1 text14_2"]/text()').extract()[0]
| item['url'] = response.url
| yield item
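| # Standalone sketch of the number extraction above -- the sample title string is hypothetical, the real one comes from the page:
| # >>> title = u"諮詢XX問題 編號:191166"
| # >>> title.split(' ')[-1].split(':')[-1] # -> u'191166'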
|
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
| class DongguanJsonPipeline(object):
| def __init__(self):
| self.f = open("dongguan.json","w")
| def process_item(self,item,spider):
| text = json.dumps(dict(item),ensure_ascii=False) + ',\n'
| self.f.write(text.encode('utf-8'))
| return item
| def close_spider(self,spider):
| self.f.close()
|
| 5. Enable the pipeline component above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"dongguan.pipelines.DongguanJsonPipeline":300}
|
| 6. Run the spider --- scrapy crawl sun
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
6.scrapy-project: dongguan (Dongguan "Sunshine" Q&A site, CrawlSpider version that copes with the site's anti-crawler measures)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project --- scrapy startproject dongguan
| dongguan/
| ├── scrapy.cfg
| └── dongguan
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── new_dongguan.py
|
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class DongguanItem(scrapy.Item):
| title = scrapy.Field()
| content = scrapy.Field()
| url = scrapy.Field()
| number = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider -t crawl new_dongguan "wz.sun0769.com"
| (2) Edit the spider --- vim new_dongguan.py
| vim new_dongguan.py
| import scrapy
| from scrapy.spiders import CrawlSpider,Rule
| from scrapy.linkextractors import LinkExtractor
| from dongguan.items import DongguanItem
| class New_dongguanSpider(CrawlSpider):
| name = "new_dongguan"
| allowed_domains = ["wz.sun0769.com"]
| start_urls = ["http://wz.sun0769.com/index.php/question/questionType?type=4&page=0"]
| page_link = LinkExtractor(allow=("type=4")) # extracts the listing page URLs
| content_link = LinkExtractor(allow=r'/html/question/\d+/\d+.shtml') # extracts the post URLs
| rules= [
| Rule(page_link,process_links='deal_links'), # first Rule: matches the listing page URLs; the process_links argument routes the extracted link list through the deal_links() method
| Rule(content_link,callback='parse_item') # second Rule: matches each post URL and handles the response with the parse_item() callback (callback given, follow omitted, so follow defaults to False)
| ]
| def deal_links(self,links):
| for each in links:
| each.url = each.url.replace("?","&").replace("Type&","Type?")
| return links # each URL is fixed up one by one, then the corrected link list is returned
| def parse_item(self,response):
| item=DongguanItem()
| item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
| item['number'] = item['title'].split(' ')[-1].split(':')[-1] # pull the post number out of the title
| #item['content'] = response.xpath('//div[@class="c1 text14_2"]/text()').extract()[0] this only works for posts without pictures, so the code is refined below:
| content = response.xpath('//div[@class="contentext"]/text()').extract() # matches the text when the post has pictures
| if len(content) == 0: # empty result means no pictures, so fall back to the rule below for the text
| content = response.xpath('//div[@class="c1 text14_2"]/text()').extract() # matches the text when the post has no pictures
| item['content'] = "".join(content).strip() # join the text fragments with an empty string and strip trailing whitespace
| else:
| item['content'] = "".join(content).strip() # join the text fragments with an empty string and strip trailing whitespace
| item['url'] = response.url
| yield item
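| # Standalone sketch of what deal_links() does to one extracted link (the "rand" query parameter is hypothetical, just to show the effect):
| # >>> link = "http://wz.sun0769.com/index.php/question/questionType?type=4&page=30?rand=123"
| # >>> link.replace("?","&").replace("Type&","Type?")
| # 'http://wz.sun0769.com/index.php/question/questionType?type=4&page=30&rand=123'
| # every "?" is first turned into "&", then the legitimate one after "questionType" is restored,
| # so a well-formed link passes through unchanged and any extra "?" is neutralised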
|
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
| import codecs
| class New_dongguanJsonPipeline(object):
| def __init__(self):
| self.f = codecs.open("new_dongguan.json","w",encoding="utf-8")
| def process_item(self,item,spider):
| text = json.dumps(dict(item),ensure_ascii=False) + ',\n'
| self.f.write(text)
| return item
| def close_spider(self,spider):
| self.f.close()
|
| 5. Enable the pipeline component above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"dongguan.pipelines.New_dongguanJsonPipeline":300}
|
| 6. Run the spider --- scrapy crawl new_dongguan
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
7.scrapy-project: dongguan (Dongguan "Sunshine" Q&A site, CrawlSpider version ---> rewritten as a plain Spider)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 1. Create the project --- scrapy startproject dongguan
| dongguan/
| ├── scrapy.cfg
| └── dongguan
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── xixi.py
|
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class DongguanItem(scrapy.Item):
| title = scrapy.Field()
| content = scrapy.Field()
| url = scrapy.Field()
| number = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider xixi "wz.sun0769.com"
| (2) Edit the spider --- vim xixi.py
| vim xixi.py
| import scrapy
| from dongguan.items import DongguanItem
|
| class XixiSpider(scrapy.Spider):
| name = "xixi"
| allowed_domains = ["wz.sun0769.com"]
| url = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
| offset = 0
| start_urls = [url + str(offset)]
|
| def parse(self,response):
| tiezi_link_list = response.xpath('//div[@class="greyframe"]/table//td/a[@class="news14"]/@href').extract()
| for tiezi_link in tiezi_link_list: # loop over the extracted post links and yield a scrapy.Request() for each, putting it on the request queue; the response is handled by the parse_item() callback
| yield scrapy.Request(tiezi_link,callback=self.parse_item)
| if self.offset <= 71160:
| self.offset += 30 # bump the listing offset by 30 to build the next page's URL, then yield a scrapy.Request() for it with parse() as the callback
| yield scrapy.Request(self.url+str(self.offset),callback=self.parse)
|
| def parse_item(self,response):
| item=DongguanItem()
| item['title'] = response.xpath('//div[contains(@class,"pagecenter p3")]//strong/text()').extract()[0]
| item['number'] = item['title'].split(' ')[-1].split(':')[-1] # pull the post number out of the title
| #item['content'] = response.xpath('//div[@class="c1 text14_2"]/text()').extract()[0] this only works for posts without pictures, so the code is refined below:
| content = response.xpath('//div[@class="contentext"]/text()').extract() # matches the text when the post has pictures
| if len(content) == 0: # empty result means no pictures, so fall back to the rule below for the text
| content = response.xpath('//div[@class="c1 text14_2"]/text()').extract() # matches the text when the post has no pictures
| item['content'] = "".join(content).strip() # join the text fragments with an empty string and strip trailing whitespace
| else:
| item['content'] = "".join(content).strip() # join the text fragments with an empty string and strip trailing whitespace
| item['url'] = response.url
| yield item
|
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
| import codecs
| class XixiJsonPipeline(object):
| def __init__(self):
| self.f = codecs.open("Xixi.json","w",encoding="utf-8")
| def process_item(self,item,spider):
| text = json.dumps(dict(item),ensure_ascii=False) + ',\n'
| self.f.write(text)
| return item
| def close_spider(self,spider):
| self.f.close()
|
| 5. Enable the pipeline component above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"dongguan.pipelines.XixiJsonPipeline":300}
|
| 6. Run the spider --- scrapy crawl xixi
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
8.scrapy-project: renren (three ways to simulate login to Renren with Scrapy ---- use yield scrapy.FormRequest(url, formdata, callback) to send a POST request that carries the login data)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
| Method 1: the most laborious way -- capture all the cookies of a successful login with Fiddler, then POST every one of them; success rate 100%
| yield scrapy.FormRequest(url, cookies=<captured with Fiddler>, callback)
|
| Method 2: for sites that only need the POST form data, this is enough
| yield scrapy.FormRequest(url, formdata=<just the data to POST>, callback)
|
| Method 3: the canonical Scrapy login -- first request the login page, pull the required parameters out of it (e.g. _xsrf), then POST them together with the username/password (the other related fields are carried over by default); login succeeds
| yield scrapy.FormRequest.from_response(response, formdata=<data to POST plus the extracted parameters>, callback)
|
| 1. Create the project ---- scrapy startproject renren
| renren/
| ├── scrapy.cfg
| └── renren
| ├── __init__.py
| ├── __init__.pyc
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| ├── settings.pyc
| └── spiders
| ├── __init__.py
| ├── __init__.pyc
| └── renren1/renren2/renren3.py
|
| 2. Define the scrape targets --- vim items.py (the data is saved directly inside the spiders here, so this step is skipped)
|
| 3. Build the spiders
| ****************************************************************************************************************
| Method 1: the most laborious way -- capture all the cookies of a successful login with Fiddler, then POST every one of them; success rate 100%
| yield scrapy.FormRequest(url, cookies=<captured with Fiddler>, callback)
| (1) Generate the spider --- scrapy genspider renren1 "renren.com"
| (2) Edit the spider --- vim renren1.py
| vim renren1.py
| import scrapy
|
| class Renren1Spider(scrapy.Spider):
| name = "renren1"
| allowed_domains = ["renren.com"]
| access_urls = ( # note: these are not real start_urls, but a list of friends' profile pages that can only be visited after a successful login!
| "http://www.renren.com/54323456/profile",
| "http://www.renren.com/54334456/profile",
| "http://www.renren.com/54366456/profile"
| )
| cookies = { # cookies of a successful login captured with Fiddler; copy them all here and send them along when logging in
| "anonymid" : "ixrna3fysufnwv",
| "_r01_" : "1",
| "ap" : "327550029",
| "JSESSIONID" : "abciwg61A_RvtaRS3GjOv",
| "depovince" : "GW",
| "springskin" : "set",
| "jebe_key" : "f6fb270b-d06d-42e6-8b53-e67c3156aa7e%7Cc13c37f53bca9e1e7132d4b58ce00fa3%7C1484060607478%7C1%7C1486198628950",
| "t" : "691808127750a83d33704a565d8340ae9",
| "societyguester" : "691808127750a83d33704a565d8340ae9",
| "id" : "327550029",
| "xnsid" : "f42b25cf",
| "loginfrom" : "syshome"
| }
|
| def start_requests(self): # to send the POST requests as soon as the spider starts, override start_requests(); start_urls is then no longer used
| for url in self.access_urls: # visit the profile pages that need a login, attaching the cookie data filled in above; responses are handled by the parse_page() callback
| yield scrapy.FormRequest(url,cookies=self.cookies,callback=self.parse_page)
|
| def parse_page(self,response):
| print "======" + str(response.url) + "======"
| with open("renren1.html","w") as f:
| f.write(response.body)
|
| ****************************************************************************************************************
| Method 2: for sites that only need the POST form data, this is enough
| yield scrapy.FormRequest(url, formdata=<just the data to POST>, callback)
| (1) Generate the spider --- scrapy genspider renren2 "renren.com"
| (2) Edit the spider --- vim renren2.py
| vim renren2.py
| import scrapy
|
| class Renren2Spider(scrapy.Spider):
| name = "renren2"
| allowed_domains = ["renren.com"]
|
| def start_requests(self): # to send the POST request as soon as the spider starts, override start_requests(); start_urls is then no longer used
| url = "http://www.renren.com/PLogin.do" # nothing else is needed here, only the form data to POST (the username and password in this case)
| yield scrapy.FormRequest(url=url,formdata={"email":"[email protected]","password":"alarmachine"},callback=self.parse_page)
|
| def parse_page(self,response):
| with open("renren2.html","w") as f:
| f.write(response.body)
|
| ****************************************************************************************************************
| Method 3: the canonical Scrapy login -- first request the login page, pull the required parameters out of it (e.g. _xsrf), then POST them together with the username/password (the other related fields are carried over by default); login succeeds
| yield scrapy.FormRequest.from_response(response, formdata={data to POST plus the extracted parameters}, callback)
|
| (1) Generate the spider --- scrapy genspider renren3 "renren.com"
| (2) Edit the spider --- vim renren3.py
| vim renren3.py
| import scrapy
|
| class Renren3Spider(scrapy.Spider):
| name = "renren3"
| allowed_domains=["renren.com"]
| start_urls = ["http://www.renren.com/PLogin.do"]
|
| def parse(self,response):
| _xsrf = response.xpath('//div[@class="...."].....') # pull the required parameters (e.g. _xsrf) out of the response; here the response is "http://www.renren.com/PLogin.do"
| yield scrapy.FormRequest.from_response(response,formdata={"email":"[email protected]","password":"123456","_xsrf":_xsrf,.....},callback=self.parse_page)
| # the start_url is fetched first to get the login page; then the username/password/required parameters are POSTed together with that page's form data as a new login request, and the page returned on success is handled by the parse_page() callback
|
| def parse_page(self,response):
| print "===== 1 =====" + str(response.url)
| url = "http://www.renren.com/4234553/profile" # in this callback: visit a friend's profile page, carrying the session of the successful login, and handle it with the parse_new_page() callback
| yield scrapy.Request(url,callback=self.parse_new_page)
|
| def parse_new_page(self,response):
| print "===== 2 =====" + str(response.url)
| with open("renren3.html","w") as f:
| f.write(response.body)
|
|
| 4. Write the item pipeline (skipped)
| 5. Enable the pipeline component (skipped)
| 6. Run the spiders --- scrapy crawl renren1/renren2/renren3 # running them generates renren1.html/renren2.html/renren3.html in the current directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
9.scrapy-project: zhihu (simulate login to Zhihu with Scrapy ---- CrawlSpider + the canonical login method (use yield scrapy.FormRequest.from_response(response, formdata, callback) to send a POST request carrying the login data))
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
| 1. Create the project --- scrapy startproject zhihu
| zhihu/
| ├── scrapy.cfg
| └── zhihu
| ├── __init__.py
| ├── items.py
| ├── middlewares.py
| ├── pipelines.py
| ├── settings.py
| └── spiders
| ├── __init__.py
| └── zh.py
|
| 2. Define the scrape targets --- vim items.py
| vim items.py
| import scrapy
|
| class ZhihuItem(scrapy.Item):
| url = scrapy.Field()
| title = scrapy.Field()
| description = scrapy.Field()
| answer = scrapy.Field()
| name = scrapy.Field()
|
| 3. Build the spider
| (1) Generate the spider --- scrapy genspider -t crawl zh "www.zhihu.com"
| (2) Edit the spider --- vim zh.py
| vim zh.py
| import scrapy
| from scrapy.selector import Selector
| from scrapy.spiders import CrawlSpider,Rule
| from scrapy.linkextractors import LinkExtractor
| from zhihu.items import ZhihuItem
|
| class ZhSpider(CrawlSpider):
| name = "zh"
| allowed_domains = ["www.zhihu.com"]
| start_urls = ["http://www.zhihu.com"]
| rules = [ Rule(LinkExtractor(allow=('/question/\d+#.*?',)),callback='parse_page',follow=True),
| Rule(LinkExtractor(allow=('/question/\d+',)),callback='parse_page',follow=True),
| ]
| headers = {
| "Accept": "*/*",
| "Accept-Encoding": "gzip,deflate",
| "Accept-Language": "en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4",
| "Connection": "keep-alive",
| "Content-Type":" application/x-www-form-urlencoded; charset=UTF-8",
| "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.111 Safari/537.36",
| "Referer": "http://www.zhihu.com/"
| }
| def start_requests(self): # override start_requests() so the very first request carries the cookiejar meta key; its response is handled by post_login()
| return [scrapy.Request("http://www.zhihu.com/login",meta={"cookiejar":1},callback=self.post_login)]
|
| def post_login(self,response):
| print "-------preparing login---------"
| xsrf = Selector(response).xpath('//input[@name="_xsrf"]/@value').extract()[0]
| return [ scrapy.FormRequest.from_response( response, # here the response is "http://www.zhihu.com/login"
| meta = {'cookiejar' : response.meta['cookiejar']},
| headers = self.headers, # note the headers defined above
| formdata = {
| '_xsrf': xsrf,
| 'email': '[email protected]', # fill in the account / password / required parameters to POST
| 'password': '123456'
| },
| callback = self.after_login, # the page returned after this POST login succeeds is handled by after_login()
| dont_filter = True
| ) ]
| def after_login(self,response):
| for url in self.start_urls:
| yield self.make_requests_from_url(url)
| # once logged in, the start URLs are requested again to fetch the Zhihu front page; the Rules then pick up the question URLs, those requests are sent, and their responses are handled by parse_page()
| def parse_page(self,response):
| problem = Selector(response)
| item = ZhihuItem()
| item['url'] = response.url
| item['title'] = problem.xpath('//h2[@class="zm-item-title zm-editable-content"]/text()').extract()
| item['description'] = problem.xpath('//div[@class="zm-editable-content"]/text()').extract()
| item['answer'] = problem.xpath('//div[@class="zm-editable-content clearfix"]/text()').extract()
| item['name'] = problem.xpath('//span[@class="name"]/text()').extract()
| yield item
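| # Note on meta={'cookiejar': ...}: Scrapy's cookies middleware keeps one cookie session per cookiejar value, and the
| # key is not "sticky" -- it has to be passed along explicitly on follow-up requests, as post_login() does above, e.g.:
| # yield scrapy.Request(url, meta={'cookiejar': response.meta['cookiejar']}, callback=self.parse_page)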
|
| 4. Write the item pipeline --- vim pipelines.py
| vim pipelines.py
| import json
| import codecs
|
| class ZhihuJsonPipeline(object):
| def __init__(self):
| self.f = codecs.open("zhihu.json","w",encoding='utf-8')
| def process_item(self,item,spider):
| text = json.dumps(dict(item),ensure_ascii=False) + ",\n"
| self.f.write(text)
| return item
| def close_spider(self,spider):
| self.f.close()
|
| 5. Enable the pipeline above --- vim settings.py
| vim settings.py
| ITEM_PIPELINES = {"zhihu.pipelines.ZhihuJsonPipeline":300}
| DOWNLOAD_DELAY = 0.25
|
| 6. Run the spider --- scrapy crawl zh # on success a zhihu.json file is generated in the working directory
|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
10.scrapy-project: douban (scrape the Douban Movie Top 250 with Scrapy and store it in MongoDB ---- scrapy.Spider)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|
| 1. Create the project --- scrapy startproject douban
| douban/
| ├── scrapy.cfg
| └── douban
|