
Accessing the Taobao shopping cart with cookies in the Scrapy framework

As we know, some pages require a login before their content can be accessed. Logging in with Scrapy is generally implemented in one of three ways.

1. Carry the username and password directly in the first request.

2. Visit the target address once first; the server returns some parameters, such as a captcha or certain encrypted strings. Analyze and extract them by the appropriate means, then carry them along in the second request. See https://www.cnblogs.com/bertwu/p/13210539.html for an example.

3. No tricks at all: log in manually in the browser, extract the cookie, and add it to the request headers.
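For methods 1 and 2, the login data is usually assembled as a form dict and submitted with scrapy.FormRequest. A minimal sketch of the payload side (the URL and field names below are hypothetical placeholders, not Taobao's real login endpoint):

```python
# Sketch of methods 1/2: build a login form payload and submit it.
# login_url and the field names are made-up placeholders.
login_url = "https://example.com/login"
payload = {
    "username": "your-name",
    "password": "your-password",
    # Method 2: a server-issued parameter extracted from the first response
    "token": "value-extracted-from-first-response",
}
# Inside a Spider, this would typically be submitted as:
#   yield scrapy.FormRequest(login_url, formdata=payload, callback=self.after_login)
```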

This article uses the third approach as its example: having Scrapy carry a cookie when visiting the shopping cart.

1. First log in to your Taobao account manually, and extract the cookie from the browser, as shown in the figure below.

2. In cmd, activate your virtual environment with workon, then create the project (scrapy startproject taobao).

3. Open the project directory in PyCharm and run (scrapy genspider itaobao taobao.com) in the terminal, which gives the following directory structure.

4. Configure the relevant options in settings.py.
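The screenshot of my settings is not reproduced here; the options typically adjusted for this kind of crawl look like the following (a sketch with illustrative values, not a copy of my exact configuration):

```python
# settings.py (fragment) -- illustrative values
ROBOTSTXT_OBEY = False   # Taobao's robots.txt would otherwise block the crawl
COOKIES_ENABLED = True   # let Scrapy manage the cookies we pass to Request
DOWNLOAD_DELAY = 1       # be gentle with the server

DEFAULT_REQUEST_HEADERS = {
    # A plausible browser User-Agent so the request does not look like a bot
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
}
```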

5. Write the business logic in itaobao.py. First, let's try visiting the shopping cart directly, without adding the cookie:

import scrapy


class ItaobaoSpider(scrapy.Spider):
    name = 'itaobao'
    allowed_domains = ['taobao.com']
    # Visit the shopping cart directly on the very first request
    start_urls = [
        'https://cart.taobao.com/cart.htm?spm=a1z02.1.a2109.d1000367.OOeipq&nekot=1470211439694']

    def parse(self, response):
        print(response.text)

The response comes back as follows:

It is clearly a redirect to the login page.

6. Back to the point. The correct code is below. We need to override the start_requests() method, which returns the requests the spider starts from; the requests it returns take the place of those built from start_urls.

import scrapy


class ItaobaoSpider(scrapy.Spider):
    name = 'itaobao'
    allowed_domains = ['taobao.com']
    # start_urls = ['https://cart.taobao.com/cart.htm?spm=a1z02.1.a2109.d1000367.OOeipq&nekot=1470211439694']

    # Override start_requests so the first request already carries the cookie
    def start_requests(self):
        url = "https://cart.taobao.com/cart.htm?spm=a1z02.1.a2109.d1000367.OOeipq&nekot=1470211439694"
        # The cookie value below was pasted from the browser after logging in manually
        cookie = "thw=cn; cookie2=16b0fe13709f2a71dc06ab1f15dcc97b; _tb_token_=fe3431e5fe755;" \
                 " _samesite_flag_=true; ubn=p; ucn=center; t=538b39347231f03177d588275aba0e2f;" \
                 " tk_trace=oTRxOWSBNwn9dPyorMJE%2FoPdY8zfvmw%2Fq5hoqmmiKd74AJ%2Bt%2FNCZ%" \
                 "2FSIX9GYWSRq4bvicaWHhDMtcR6rWsf0P6XW5ZT%2FgUec9VF0Ei7JzUpsghuwA4cBMNO9EHkGK53r%" \
                 "2Bb%2BiCEx98Frg5tzE52811c%2BnDmTNlzc2ZBkbOpdYbzZUDLaBYyN9rEdp9BVnFGP1qVAAtbsnj35zfBVfe09E%" \
                 "2BvRfUU823q7j4IVyan1lagxILINo%2F%2FZK6omHvvHqA4cu2IaVAhy5MzzodyJhmXmOpBiz9Pg%3D%3D; " \
                 "cna=5c3zFvLEEkkCAW8SYSQ2GkGo; sgcookie=E3EkJ6LRpL%2FFRZIBoXfnf; unb=578051633; " \
                 "uc3=id2=Vvl%2F7ZJ%2BJYNu&nk2=r7kpR6Vbl9KdZe14&lg2=URm48syIIVrSKA%3D%3D&vt3=F8dBxGJsy36E3EwQ%2BuQ%3D;" \
                 " csg=c99a3c3d; lgc=%5Cu5929%5Cu4ED9%5Cu8349%5Cu5929%5Cu4ED9%5Cu8349; cookie17=Vvl%2F7ZJ%2BJYNu;" \
                 " dnk=%5Cu5929%5Cu4ED9%5Cu8349%5Cu5929%5Cu4ED9%5Cu8349; skt=4257a8fa00b349a7; existShop=MTU5MzQ0MDI0MQ%3D%3D;" \
                 " uc4=nk4=0%40rVtT67i5o9%2Bt%2BQFc65xFQrUP0rGVA%2Fs%3D&id4=0%40VH93OXG6vzHVZgTpjCrALOFhU4I%3D;" \
                 " tracknick=%5Cu5929%5Cu4ED9%5Cu8349%5Cu5929%5Cu4ED9%5Cu8349; _cc_=W5iHLLyFfA%3D%3D; " \
                 "_l_g_=Ug%3D%3D; sg=%E8%8D%893d; _nk_=%5Cu5929%5Cu4ED9%5Cu8349%5Cu5929%5Cu4ED9%5Cu8349;" \
                 " cookie1=VAmiexC8JqC30wy9Q29G2%2FMPHkz4fpVNRQwNz77cpe8%3D; tfstk=cddPBI0-Kbhyfq5IB_1FRmwX4zaRClfA" \
                 "_qSREdGTI7eLP5PGXU5c-kQm2zd2HGhcE; mt=ci=8_1; v=0; uc1=cookie21=VFC%2FuZ9ainBZ&cookie15=VFC%2FuZ9ayeYq2g%3D%3D&cookie" \
                 "16=WqG3DMC9UpAPBHGz5QBErFxlCA%3D%3D&existShop=false&pas=0&cookie14=UoTV75eLMpKbpQ%3D%3D&cart_m=0;" \
                 " _m_h5_tk=cbe3780ec220a82fe10e066b8184d23f_1593451560729; _m_h5_tk_enc=c332ce89f09d49c68e13db9d906c8fa3; " \
                 "l=eBxAcQbPQHureJEzBO5aourza7796IRb8sPzaNbMiInca6MC1hQ0PNQD5j-MRdtjgtChRe-PWBuvjdeBWN4dbNRMPhXJ_n0xnxvO.; " \
                 "isg=BJ2drKVLn8Ww-Ht9N195VKUWrHmXutEMHpgqKF9iKfRAFrxIJAhD3DbMRAoQ1unE"
        # The cookies argument of Request must be a dict, so convert the raw
        # cookie string into key/value pairs
        cookies = {}
        for pair in cookie.split(';'):
            key, value = pair.strip().split("=", 1)
            cookies[key] = value
        yield scrapy.Request(url=url, cookies=cookies, callback=self.parse)

    def parse(self, response):
        print(response.text)
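The cookie-string-to-dict conversion inside the spider can be pulled out and checked on its own. A minimal standalone version (the sample cookie string here is shortened and illustrative):

```python
def cookie_string_to_dict(raw):
    """Turn a browser 'k1=v1; k2=v2' cookie header into a dict.

    split('=', 1) keeps values intact even when they themselves contain '=',
    as URL-encoded cookie values often do (e.g. '%3D%3D').
    """
    cookies = {}
    for pair in raw.split(';'):
        key, value = pair.strip().split('=', 1)
        cookies[key] = value
    return cookies

# Shortened example input, not a real Taobao cookie
print(cookie_string_to_dict("thw=cn; t=538b39; existShop=MTU5%3D%3D"))
# → {'thw': 'cn', 't': '538b39', 'existShop': 'MTU5%3D%3D'}
```

Note the .strip(): without it, every key after the first would keep a leading space from the "; " separator.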

The response (a partial fragment) is as follows:

This is clearly the real source code of my own shopping cart.

Done! From here you can use XPath (my preferred method) to extract whatever information your business needs.
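In Scrapy the extraction would be done with response.xpath(...); the same idea can be previewed with the standard library's limited XPath support. The snippet and class names below are made up for illustration, not Taobao's real markup:

```python
import xml.etree.ElementTree as ET

# A made-up, cart-like snippet; the real page's structure differs.
html = """
<div>
  <div class="item"><span class="title">Keyboard</span><span class="price">199</span></div>
  <div class="item"><span class="title">Mouse</span><span class="price">99</span></div>
</div>
"""

root = ET.fromstring(html)
items = []
# ElementTree supports a subset of XPath, enough for this pattern
for item in root.findall(".//div[@class='item']"):
    title = item.find("span[@class='title']").text
    price = item.find("span[@class='price']").text
    items.append((title, price))
print(items)  # → [('Keyboard', '199'), ('Mouse', '99')]
```

In the spider itself, the equivalent would be something like response.xpath("//div[@class='item']"), which additionally tolerates real-world (non-well-formed) HTML.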