
Scraping Dynamic Web Pages with Python

You may remember that in an earlier post, "Building a Movie Search System in Python (Part 1): implementing the back-end data", the movie download addresses could not be scraped, because they never appear in the page source: they live in JavaScript and are loaded dynamically. So when it came to scraping them, I noted in that article:
(screenshot: the note from the earlier article)

Now we have found a way to get around the site's anti-scraping measures. Let me walk through it in detail.

What the robobrowser library does is simulate a real browser and load those dynamically built JS pages, so we can scrape the data from them. Pretty powerful, isn't it?

1. Downloading and installing robobrowser

Install it directly with Python's pip:

pip3 install robobrowser
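
To confirm the installation succeeded, you can check it with pip (an optional sanity check, not in the original post):

pip3 show robobrowser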

2. Usage

Once it is installed, use help() to see how the library is used.
(screenshot: output of help(robobrowser))
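
For orientation, here is a minimal usage sketch (the URL is just a placeholder, not from the original post): open() fetches a page through the underlying requests session, and select() runs a CSS selector against the parsed HTML.

# -*- coding: utf-8 -*-
import robobrowser

help(robobrowser.RoboBrowser)              # prints the class documentation

# minimal usage: open a page and query it with a CSS selector
rb = robobrowser.RoboBrowser(parser="html.parser")
rb.open("http://example.com")              # placeholder URL
print(rb.response.status_code)             # the underlying requests.Response
print(rb.select("title")[0].text)          # select() behaves like BeautifulSoup's select()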

  • Once you are on the page, press F12 to open the developer tools and look at the page source. Refresh the page and watch the Network tab.
    (screenshot: the Network tab in DevTools)

    Copy the General and Request Headers sections; they go straight into the session headers in the snippet below.
# -*- coding: utf-8 -*-
import robobrowser
import time
from requests import Session

urls = []
ua = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'
session = Session()
# Headers copied straight from the browser's F12 panel. Without them the site's
# anti-scraping kicks in: after a few dozen pages it starts returning empty pages.
# film_url is the page we are about to scrape (defined elsewhere).
session.headers = {
    "Request URL": film_url,
    "Request Method": "GET",
    #"Remote Address": "",
    "Referrer Policy": "no-referrer-when-downgrade",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Cache-Control": "max-age=0",
    "Cookie": "Hm_lvt_0fae9a0ed120850d7a658c2cb0783b55=1527565708,1527653577,1527679892,1527729123; Hm_lvt_cdce8cda34e84469b1c8015204129522=1527565709,1527653577,1527679892,1527729124; _site_id_cookie=1; clientlanguage=zh_CN; JSESSIONID=5AA866B8CDCDC49CA4B13D041E02D5E1; yunsuo_session_verify=c1b9cd7af99e39bbeaf2a6e4127803f1; Hm_lpvt_0fae9a0ed120850d7a658c2cb0783b55=1527731668; Hm_lpvt_cdce8cda34e84469b1c8015204129522=1527731668",
    "Host": "www.bd-film.co",
    "Proxy-Connection": "keep-alive",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": ua
}
  • Inspect the source of each download URL and, using the element inspector, copy each link's CSS selector.
    (screenshot: copying a download link's selector)

Copy a few of them and compare:

#downlist > div > div > div:nth-child(1) > div
#downlist > div > div > div:nth-child(2) > div
#downlist > div > div > div:nth-child(3) > div

A pattern emerges: every download address's selector contains downlist, which is why the code below handles the page the way it does.

rb = robobrowser.RoboBrowser(parser="html.parser", session=session)
rb.open(url=film_url)
r = rb.select('#downlist')  # the container that holds all the download links
if not r:
    # print(rb.response.content.decode())
    raise RuntimeError("failed to fetch page content")
  • Using the copied selector addresses (whose pattern we have now found), grab the concrete download links (Thunder, Xiaomi, and so on) that sit behind them.
    Let's see how they map to the actual Thunder, Xiaomi and Baidu Cloud download links.
    (screenshot: the Thunder / Xiaomi / Baidu Cloud download links)

The code:


r = r[0]
for v in range(128):  # adjust the loop count to however many links you want to scrape
    id_name = '#real_address_%d' % v
    dl = r.select(id_name)
    if not dl:
        break
    dl = dl[0].select('.form-control')[0].text
    # dl now holds the actual download address

OK, here is the complete code:

# -*- coding: utf-8 -*-
import robobrowser
import time

def get_bd_film_download_urls(film_url):
    from requests import Session
    urls = []
    try:
        ua = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'
        session = Session()
        # Headers copied straight from the browser's F12 panel. Without them the site's
        # anti-scraping kicks in: after a few dozen pages it starts returning empty pages.
        session.headers = {
            "Request URL": film_url,
            "Request Method": "GET",
            #"Remote Address": "",
            "Referrer Policy": "no-referrer-when-downgrade",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
            "Accept-Encoding": "gzip, deflate",
            "Accept-Language": "zh-CN,zh;q=0.9",
            "Cache-Control": "max-age=0",
            "Cookie": "Hm_lvt_0fae9a0ed120850d7a658c2cb0783b55=1527565708,1527653577,1527679892,1527729123; Hm_lvt_cdce8cda34e84469b1c8015204129522=1527565709,1527653577,1527679892,1527729124; _site_id_cookie=1; clientlanguage=zh_CN; JSESSIONID=5AA866B8CDCDC49CA4B13D041E02D5E1; yunsuo_session_verify=c1b9cd7af99e39bbeaf2a6e4127803f1; Hm_lpvt_0fae9a0ed120850d7a658c2cb0783b55=1527731668; Hm_lpvt_cdce8cda34e84469b1c8015204129522=1527731668",
            "Host": "www.bd-film.co",
            "Proxy-Connection": "keep-alive",
            "Upgrade-Insecure-Requests": "1",
            "User-Agent": ua
        }
        rb = robobrowser.RoboBrowser(parser="html.parser", session=session)
        rb.open(url=film_url)
        if rb.response.status_code != 200:
            return urls
        r = rb.select('#downlist')  # CSS selector for the block that contains the download links
        if not r:
            # print(rb.response.content.decode())
            raise RuntimeError("failed to fetch page content")

        r = r[0]
        for v in range(128):
            id_name = '#real_address_%d' % v
            dl = r.select(id_name)
            if not dl:
                break
            dl = dl[0].select('.form-control')[0].text
            urls.append(dl)
    except Exception as err:
        print('error:', film_url, err)
    return urls
if __name__ == '__main__':

    for i in range(25000, 25700):
        ul = 'http://www.bd-film.co/zx/%d.htm' % i
        down_urls = get_bd_film_download_urls(ul)
        if down_urls:
            s = '-->'
            print(ul, s, ','.join(down_urls))
        time.sleep(1)
        # break

The result:
(screenshot: sample output)

Copy the address after the --> into Thunder and you can start downloading. Go give it a try!
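
If you would rather not copy links out of the console by hand, here is a small optional sketch (my addition, not part of the original script; it assumes the complete code above has been saved as bd_film.py) that writes each result to a text file instead of only printing it:

# -*- coding: utf-8 -*-
import time
from bd_film import get_bd_film_download_urls  # assumes the code above was saved as bd_film.py

# append every page's download links to a plain text file
with open('bd_film_links.txt', 'w', encoding='utf-8') as f:
    for i in range(25000, 25700):
        ul = 'http://www.bd-film.co/zx/%d.htm' % i
        down_urls = get_bd_film_download_urls(ul)
        if down_urls:
            f.write('%s --> %s\n' % (ul, ','.join(down_urls)))
        time.sleep(1)  # be polite: one request per second, same as the original script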