
Crawler notes 3: advanced requests: proxy usage, simulated login, single-thread + multi-task async coroutines

- HttpConnectionPool errors:
- Causes:
- 1. A burst of high-frequency requests got the IP banned
- 2. The connections in the HTTP connection pool were exhausted
- Fixes:
- 1. Use a proxy
- 2. Add Connection: "close" to the headers
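A minimal sketch of fix 2: attach `Connection: "close"` so each request tears its connection down instead of returning it to the (possibly exhausted) pool. The request is built but not sent here, just to show the header is attached; the URL is a placeholder.

```python
import requests

# Sketch: mark the request "Connection: close" so the connection is not
# kept alive and returned to the pool.
headers = {
    'User-Agent': 'Mozilla/5.0',
    'Connection': 'close',
}
# Build the request without sending it, to inspect the final headers.
prepared = requests.Request('GET', 'https://www.baidu.com', headers=headers).prepare()
print(prepared.headers['Connection'])  # close
```

In a real crawl you would pass the same `headers` dict to `requests.get` as in the examples below.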

- Proxy: a proxy server, which accepts requests and forwards them.
- Anonymity levels
- Elite (high-anonymity): the target server learns nothing
- Anonymous: the target knows you use a proxy, but not your real IP
- Transparent: the target knows you use a proxy and also knows your real IP
- Types:
- http
- https
- Free proxy sources:
- www.goubanjia.com
- Kuaidaili (快代理)
- Xici proxy (西祠代理)
- http://http.zhiliandaili.cn/ (Zhilian HTTP "Proxy Genie" 代理精靈)

- Cookie handling

Example of sending a request through a proxy:

import requests
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
}
url = 'https://www.baidu.com/s?wd=ip'
page_text1 = requests.get(url,headers=headers,proxies={'https':'183.166.171.51:8888'}).text
with open('ip.html','w',encoding='utf-8') as fp:
    fp.write(page_text1)

A single proxy gets banned easily, so we build a proxy pool.

#proxy pool: a list of proxy dicts
import random
proxy_list = [
    {'https':'121.231.94.44:8888'},
    {'https':'131.231.94.44:8888'},
    {'https':'141.231.94.44:8888'}
]
url = 'https://www.baidu.com/s?wd=ip'
page_text = requests.get(url,headers=headers,proxies=random.choice(proxy_list)).text
with open('ip.html','w',encoding='utf-8') as fp:
    fp.write(page_text)

How do we build the proxy pool? One approach:

from lxml import etree
ip_url = 'http://t.11jsq.com/index.php/api/entry?method=proxyServer.generate_api_url&packid=1&fa=0&fetch_key=&groupid=0&qty=4&time=1&pro=&city=&port=1&format=html&ss=5&css=&dt=1&specialTxt=3&specialJson=&usertype=2'
page_text = requests.get(ip_url,headers=headers).text
tree = etree.HTML(page_text)
ip_list = tree.xpath('//body//text()')
print(ip_list)
#extract proxy IPs from the Proxy Genie API

Then:

import random
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36',
    'Connection':"close"
}
url = 'https://www.xicidaili.com/nn/%d'
proxy_list_http = []
proxy_list_https = []
for page in range(1,20):
    new_url = format(url%page)
    ip_port = random.choice(ip_list)
    page_text = requests.get(new_url,headers=headers,proxies={'https':ip_port}).text
    tree = etree.HTML(page_text)
    #tbody must not appear in the XPath expression
    tr_list = tree.xpath('//*[@id="ip_list"]//tr')[1:]
    for tr in tr_list:
        ip = tr.xpath('./td[2]/text()')[0]
        port = tr.xpath('./td[3]/text()')[0]
        t_type = tr.xpath('./td[6]/text()')[0]
        ips = ip+':'+port
        # requests expects lowercase scheme keys in the proxies dict
        if t_type == 'HTTP':
            proxy_list_http.append({'http': ips})
        else:
            proxy_list_https.append({'https': ips})
print(len(proxy_list_http),len(proxy_list_https))
#scrape the Xici proxy list
#check which of the collected proxies actually work
for proxy in proxy_list_https:
    try:
        response = requests.get('https://www.sogou.com',headers=headers,proxies=proxy,timeout=5)
        if response.status_code == 200:
            print('found a usable proxy:',proxy)
    except requests.exceptions.RequestException:
        pass  # dead proxy, skip it
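A complementary sketch of a validity check: ask an IP-echo service which address it sees. It should report the proxy's IP, not yours (the proxy address below is a placeholder from the pool above, and httpbin.org/ip is just one such echo endpoint).

```python
import requests

proxy = {'https': '121.231.94.44:8888'}   # placeholder proxy from the pool above
try:
    # httpbin.org/ip returns the caller's IP as JSON, e.g. {"origin": "1.2.3.4"}
    origin = requests.get('https://httpbin.org/ip', proxies=proxy, timeout=5).json()['origin']
    print('the server sees:', origin)     # should be the proxy IP, not yours
except requests.exceptions.RequestException:
    print('proxy is dead or unreachable')
```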

Simulated login!!!

Cookie handling

  • Manual handling: put the cookie into the headers
  • Automatic handling: use a session object. A session object can send requests just like the requests module. The difference is that if cookies are produced while requests are sent through the session, they are stored in the session object automatically.

Adding the cookie manually:

#scrape the news data from Xueqiu: https://xueqiu.com/
headers = {
    'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36',
#     'Cookie':'aliyungf_tc=AQAAAAl2aA+kKgkAtxdwe3JmsY226Y+n; acw_tc=2760822915681668126047128e605abf3a5518432dc7f074b2c9cb26d0aa94; xq_a_token=75661393f1556aa7f900df4dc91059df49b83145; xq_r_token=29fe5e93ec0b24974bdd382ffb61d026d8350d7d; u=121568166816578; device_id=24700f9f1986800ab4fcc880530dd0ed'
}
url = 'https://xueqiu.com/v4/statuses/public_timeline_by_category.json?since_id=-1&max_id=20349203&count=15&category=-1'
page_text = requests.get(url=url,headers=headers).json()
page_text

Adding the cookie automatically:

#create a session object
session = requests.Session()
session.get('https://xueqiu.com',headers=headers)  # the first visit stores the cookie in the session

url = 'https://xueqiu.com/v4/statuses/public_timeline_by_category.json?since_id=-1&max_id=20349203&count=15&category=-1'
page_text = session.get(url=url,headers=headers).json()
page_text

- Captcha recognition
- Chaojiying (超級鷹):
- Register (a user-center account)
- Log in:
- Create a software ID: 899370
- Download the sample code
- Damatu (打碼兔)
- Yundama (雲打碼)

Chaojiying example:

import requests
from hashlib import md5

class Chaojiying_Client(object):

    def __init__(self, username, password, soft_id):
        self.username = username
        password =  password.encode('utf8')
        self.password = md5(password).hexdigest()
        self.soft_id = soft_id
        self.base_params = {
            'user': self.username,
            'pass2': self.password,
            'softid': self.soft_id,
        }
        self.headers = {
            'Connection': 'Keep-Alive',
            'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)',
        }

    def PostPic(self, im, codetype):
        """
        im: 圖片位元組
        codetype: 題目型別 參考 http://www.chaojiying.com/price.html
        """
        params = {
            'codetype': codetype,
        }
        params.update(self.base_params)
        files = {'userfile': ('ccc.jpg', im)}
        r = requests.post('http://upload.chaojiying.net/Upload/Processing.php', data=params, files=files, headers=self.headers)
        return r.json()

    def ReportError(self, im_id):
        """
        im_id:報錯題目的圖片ID
        """
        params = {
            'id': im_id,
        }
        params.update(self.base_params)
        r = requests.post('http://upload.chaojiying.net/Upload/ReportError.php', data=params, headers=self.headers)
        return r.json()

#recognize the captcha on gushiwen.org
def tranformImgData(imgPath,t_type):
    chaojiying = Chaojiying_Client('bobo328410948', 'bobo328410948', '899370')
    im = open(imgPath, 'rb').read()
    return chaojiying.PostPic(im, t_type)['pic_str']

url = 'https://so.gushiwen.org/user/login.aspx?from=http://so.gushiwen.org/user/collect.aspx'
page_text = requests.get(url,headers=headers).text
tree = etree.HTML(page_text)
img_src = 'https://so.gushiwen.org'+tree.xpath('//*[@id="imgCode"]/@src')[0]
img_data = requests.get(img_src,headers=headers).content
with open('./code.jpg','wb') as fp:
    fp.write(img_data)
    
tranformImgData('./code.jpg',1004)

Now we can log in to gushiwen.org easily! (Mind the captcha refresh mechanism and the dynamically changing request parameters.)

    - Dynamically changing request parameters
        - The dynamically changing request parameters are usually hidden in the front-end page source

        (Here, just search the page source for the __VIEWSTATE value, grab it, and use it.)

      (Send the requests through a session to keep the captcha consistent!)

s = requests.Session()
url = 'https://so.gushiwen.org/user/login.aspx?from=http://so.gushiwen.org/user/collect.aspx'
page_text = s.get(url,headers=headers).text
tree = etree.HTML(page_text)
img_src = 'https://so.gushiwen.org'+tree.xpath('//*[@id="imgCode"]/@src')[0]
img_data = s.get(img_src,headers=headers).content
with open('./code.jpg','wb') as fp:
    fp.write(img_data)
    
#dynamically fetch the changing request parameters
__VIEWSTATE = tree.xpath('//*[@id="__VIEWSTATE"]/@value')[0]
__VIEWSTATEGENERATOR = tree.xpath('//*[@id="__VIEWSTATEGENERATOR"]/@value')[0]
    
code_text = tranformImgData('./code.jpg',1004)
print(code_text)
login_url = 'https://so.gushiwen.org/user/login.aspx?from=http%3a%2f%2fso.gushiwen.org%2fuser%2fcollect.aspx'
data = {
    '__VIEWSTATE': __VIEWSTATE,
    '__VIEWSTATEGENERATOR': __VIEWSTATEGENERATOR,
    'from':'http://so.gushiwen.org/user/collect.aspx',
    'email': '[email protected]',
    'pwd': 'bobo328410948',
    'code': code_text,
    'denglu': '登入',  # the login button's value; keep the Chinese text the form expects
}
page_text = s.post(url=login_url,headers=headers,data=data).text
with open('login.html','w',encoding='utf-8') as fp:
    fp.write(page_text)

# speed comparison: plain single thread vs. a thread pool

import time
from multiprocessing.dummy import Pool
start = time.time()
urls = [
    'http://www.baidu.com',
    'http://www.sougou.com',
    'http://www.qq.com',
    'https://www.iqiyi.com/'
]
def get_request(url):
    print('downloading:',url)
    time.sleep(2)
    print('done:',url)

# thread pool version for comparison:
# pool = Pool(3)
# pool.map(get_request,urls)
for url in urls:
    get_request(url)

print('total time:',time.time()-start)

Single thread + multi-task async coroutines

  • Coroutine
    • If a (special) function is defined with the async modifier, calling it returns a coroutine object, and the statements in the function body are not executed immediately
  • Task object
    • A task object is a further wrapper around a coroutine object. Task object == advanced coroutine object == special function
    • Task objects must be registered with the event loop object
    • Bind a callback to the task object: used for data parsing in a crawler
  • Event loop
    • Think of it as a container that holds task objects.
    • When the event loop object is started, it executes the task objects stored in it asynchronously.
  • aiohttp: a module that supports asynchronous network requests

Template:

import asyncio
def callback(task): # callback function bound to the task object
    print('i am callback and ',task.result())   # task.result() is the return value of the special function

async def test():
    print('i am test()')
    return 'bobo'

c = test()
#wrap the coroutine in a task object
task = asyncio.ensure_future(c)
task.add_done_callback(callback)  # bind the callback
#create an event loop object
loop = asyncio.get_event_loop()
loop.run_until_complete(task)

Asynchronous I/O

asyncio is a library for writing concurrent code using the async/await syntax.

asyncio serves as the foundation for multiple high-performance Python async frameworks, including network and web servers, database connection libraries, distributed task queues, and more.

asyncio is often the best choice for building IO-bound, high-level structured network code.

import asyncio
import time
start = time.time()
#code from modules that do not support async must not appear inside the body of a special function
async def get_request(url):
    await asyncio.sleep(2)
    print('downloaded:',url)

urls = [
    'www.1.com',
    'www.2.com'
]
tasks = []
for url in urls:
    c = get_request(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)

loop = asyncio.get_event_loop()
#note: the suspend operation must be handled manually, hence wrapping the tasks in asyncio.wait
loop.run_until_complete(asyncio.wait(tasks))
print(time.time()-start)
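As an alternative sketch to wrapping the task list in `asyncio.wait`, `asyncio.gather` (with `asyncio.run` on Python 3.7+) schedules the coroutines concurrently and collects their return values in order, using the same dummy download:

```python
import asyncio
import time

async def get_request(url):
    await asyncio.sleep(2)            # stand-in for an async download
    return 'downloaded ' + url

async def main():
    urls = ['www.1.com', 'www.2.com']
    # gather runs the coroutines concurrently and returns results in list order
    return await asyncio.gather(*(get_request(u) for u in urls))

start = time.time()
results = asyncio.run(main())
print(results)
print(time.time() - start)            # about 2 seconds total, not 4
```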

Crawler application:

import requests
import aiohttp
import time
import asyncio
s = time.time()
urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay'
]

# blocking version: requests is not async, so the tasks would run serially
# async def get_request(url):
#     page_text = requests.get(url).text
#     return page_text
async def get_request(url):
    async with aiohttp.ClientSession() as s:    # requests does not support async; use aiohttp here
        async with s.get(url=url) as response:
            page_text = await response.text()
            print(page_text)
    return page_text
tasks = []
for url in urls:
    c = get_request(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(tasks))

print(time.time()-s)

The multiprocessing package is Python's multi-process management package.

Example:

Crawler script:

import requests
import time
from multiprocessing.dummy import Pool
start = time.time()
urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay'
]
def get_request(url):
    page_text = requests.get(url).text
    print(page_text)
pool = Pool(3)
pool.map(get_request,urls)

print('total time:',time.time()-start)

Example server:

from flask import Flask
from time import sleep
app = Flask(__name__)

# routes match the URLs used by the crawler scripts above
@app.route('/bobo')
def bobo():
    sleep(2)
    return 'hello bobo'

@app.route('/jay')
def jay():
    sleep(2)
    return 'hello jay'

if __name__ == '__main__':
    app.run()