
A Summary of Python Crawler Performance

Here we work through web-request examples to understand crawler performance step by step.

When we have a list of URLs to fetch data from, the first thing that comes to mind is a simple loop.

A simple sequential loop

This approach is the slowest: the requests run one after another, so the total time is the sum of every request's time.
The code is as follows:

import requests

url_list = [
  'http://www.baidu.com',
  'http://www.pythonsite.com',
  'http://www.cnblogs.com/'
]

for url in url_list:
  result = requests.get(url)
  print(result.text)
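A quick way to check this claim is to time the loop itself; a minimal sketch (it simply wraps the same loop with a timer):

import time
import requests

url_list = [
  'http://www.baidu.com',
  'http://www.pythonsite.com',
  'http://www.cnblogs.com/'
]

start = time.perf_counter()
for url in url_list:
  requests.get(url)
# The elapsed time is roughly the sum of the individual request times
print('sequential total: %.2fs' % (time.perf_counter() - start))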

Using a thread pool

With a thread pool, the overall time is roughly the time of the slowest connection, which is much faster than the sequential loop.

import requests
from concurrent.futures import ThreadPoolExecutor

def fetch_request(url):
  result = requests.get(url)
  print(result.text)

url_list = [
  'http://www.baidu.com',
  'http://www.bing.com',
  'http://www.cnblogs.com/'
]
pool = ThreadPoolExecutor(10)

for url in url_list:
  # Grab a thread from the pool and have it run fetch_request
  pool.submit(fetch_request, url)

pool.shutdown(True)
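For reference, ThreadPoolExecutor also works as a context manager, which waits for all submitted tasks and shuts the pool down automatically; a small variation on the code above:

from concurrent.futures import ThreadPoolExecutor
import requests

def fetch_request(url):
  result = requests.get(url)
  print(result.text)

url_list = [
  'http://www.baidu.com',
  'http://www.bing.com',
  'http://www.cnblogs.com/'
]

# Leaving the with-block behaves like pool.shutdown(True)
with ThreadPoolExecutor(10) as pool:
  for url in url_list:
    pool.submit(fetch_request, url)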

Thread pool + callback function

Here we define a callback function, callback:

from concurrent.futures import ThreadPoolExecutor
import requests


def fetch_async(url):
  response = requests.get(url)

  return response


def callback(future):
  print(future.result().text)


url_list = [
  'http://www.baidu.com',
  'http://www.cnblogs.com/'
]

pool = ThreadPoolExecutor(5)

for url in url_list:
  v = pool.submit(fetch_async, url)
  # Register the callback; it fires once the future completes
  v.add_done_callback(callback)

pool.shutdown()
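One caveat: future.result() re-raises any exception that fetch_async raised, so if some URLs may fail it is safer to guard the callback, e.g. this drop-in replacement:

def callback(future):
  # result() re-raises exceptions from fetch_async (timeouts, DNS errors, ...)
  try:
    print(future.result().text)
  except Exception as e:
    print('request failed:', e)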

Using a process pool

With a process pool, the total time again depends on the slowest request, but processes consume more resources than threads. Since fetching URLs is an I/O-bound operation, a thread pool is the better choice here.

import requests
from concurrent.futures import ProcessPoolExecutor

def fetch_request(url):
  result = requests.get(url)
  print(result.text)

url_list = [
  'http://www.baidu.com',
  'http://www.cnblogs.com/'
]
pool = ProcessPoolExecutor(10)

for url in url_list:
  # Grab a worker from the pool; a child process runs fetch_request
  pool.submit(fetch_request, url)

pool.shutdown(True)
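Note that on platforms that start worker processes with spawn (Windows, and macOS on recent Pythons), the submitting code must live behind a main guard or the script fails at startup:

import requests
from concurrent.futures import ProcessPoolExecutor

def fetch_request(url):
  result = requests.get(url)
  print(result.text)

url_list = [
  'http://www.baidu.com',
  'http://www.cnblogs.com/'
]

if __name__ == '__main__':
  # Required so child processes can import this module safely
  pool = ProcessPoolExecutor(10)
  for url in url_list:
    pool.submit(fetch_request, url)
  pool.shutdown(True)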

Process pool + callback function

This works the same way as the thread pool + callback version, but spawning processes wastes more resources than spawning threads.

from concurrent.futures import ProcessPoolExecutor
import requests


def fetch_async(url):
  response = requests.get(url)

  return response


def callback(future):
  print(future.result().text)


url_list = [
  'http://www.baidu.com',
  'http://www.cnblogs.com/'
]

pool = ProcessPoolExecutor(5)

for url in url_list:
  v = pool.submit(fetch_async, url)
  # Register the callback; it fires once the future completes
  v.add_done_callback(callback)

pool.shutdown()
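One difference from the thread pool version: the worker's return value must be pickled and sent back to the parent process, and the registered callback then runs in the parent process rather than in the worker.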

Mainstream ways to implement concurrency in a single thread:

  1. asyncio
  2. gevent
  3. Twisted
  4. Tornado

Below are example implementations of each of the four:

asyncio example 1:

import asyncio


@asyncio.coroutine # mark this generator as a coroutine with the decorator
def func1():
  print('before...func1......')
  # Must use yield from here, and it must be asyncio.sleep, not time.sleep
  yield from asyncio.sleep(2)
  print('end...func1......')


tasks = [func1(),func1()]

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*tasks))
loop.close()

The effect: both 'before' lines print at essentially the same time, then after the 2-second wait both 'end' lines print.
asyncio itself does not provide a method for sending HTTP requests, but we can construct the HTTP request ourselves around yield from.
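For reference, the @asyncio.coroutine/yield from style used above is the legacy coroutine syntax (the decorator was removed in Python 3.11); on Python 3.5+ the same example reads more naturally with async/await:

import asyncio


async def func1():
  print('before...func1......')
  # Still must be asyncio.sleep so the event loop can switch tasks
  await asyncio.sleep(2)
  print('end...func1......')


async def main():
  await asyncio.gather(func1(), func1())

# asyncio.run creates and closes the event loop for us (Python 3.7+)
asyncio.run(main())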

asyncio example 2:

import asyncio


@asyncio.coroutine
def fetch_async(host, url='/'):
  print("----", host, url)
  reader, writer = yield from asyncio.open_connection(host, 80)

  # Build the raw request headers (both url and host must be filled in)
  request_header_content = """GET %s HTTP/1.0\r\nHost: %s\r\n\r\n""" % (url, host)
  request_header_content = bytes(request_header_content, encoding='utf-8')
  # Send the request
  writer.write(request_header_content)
  yield from writer.drain()
  text = yield from reader.read()
  print(host, url, text)
  writer.close()

tasks = [
  fetch_async('www.cnblogs.com', '/zhaof/'),
  fetch_async('dig.chouti.com', '/pic/show?nid=4073644713430508&lid=10273091')
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
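Note that reader.read() returns the raw response bytes, status line and headers included; a real crawler would split the headers from the body before using it.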

asyncio + aiohttp code example:

import aiohttp
import asyncio


# Newer aiohttp releases dropped the old aiohttp.request/yield from style;
# a ClientSession with async/await is the current client API
async def fetch_async(url):
  print(url)
  async with aiohttp.ClientSession() as session:
    async with session.get(url) as response:
      print(url, response.status)


tasks = [fetch_async('http://baidu.com/'), fetch_async('http://www.chouti.com/')]

event_loop = asyncio.get_event_loop()
results = event_loop.run_until_complete(asyncio.gather(*tasks))
event_loop.close()

asyncio + requests code example

import asyncio
import requests


@asyncio.coroutine
def fetch_async(func, *args):
  loop = asyncio.get_event_loop()
  # requests is blocking, so hand the call to the loop's default executor
  future = loop.run_in_executor(None, func, *args)
  response = yield from future
  print(response.url,response.content)


tasks = [
  fetch_async(requests.get, 'http://www.cnblogs.com/wupeiqi/'),
  fetch_async(requests.get, 'http://dig.chouti.com/pic/show?nid=4073644713430508&lid=10273091')
]

loop = asyncio.get_event_loop()
results = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()
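Passing None as the first argument to run_in_executor uses the event loop's default thread pool, so under the hood this is still thread-based concurrency wrapped in an asyncio interface.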

gevent + requests code example

from gevent import monkey

# Patch the standard library before requests is imported
monkey.patch_all()

import gevent
import requests


def fetch_async(method, url, req_kwargs):
  print(method, url, req_kwargs)
  response = requests.request(method=method, url=url, **req_kwargs)
  print(response.url, response.content)

# ##### Send the requests #####
gevent.joinall([
  gevent.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
  gevent.spawn(fetch_async, method='get', url='https://www.yahoo.com/', req_kwargs={}),
  gevent.spawn(fetch_async, method='get', url='https://github.com/', req_kwargs={}),
])

# ##### Send the requests (a coroutine pool caps the number of greenlets) #####
# from gevent.pool import Pool
# pool = Pool(None)
# gevent.joinall([
#   pool.spawn(fetch_async, method='get', url='https://www.python.org/', req_kwargs={}),
#   pool.spawn(fetch_async, method='get', url='https://www.github.com/', req_kwargs={}),
# ])
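Pool(None) places no limit on the number of greenlets; pass a number such as Pool(5) to bound how many requests run at once.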

grequests code example
This wraps requests + gevent into a single package:

import grequests


request_list = [
  grequests.get('http://httpbin.org/delay/1', timeout=0.001),
  grequests.get('http://fakedomain/'),
  grequests.get('http://httpbin.org/status/500')
]


# ##### Run and collect the list of responses #####
# response_list = grequests.map(request_list)
# print(response_list)


# ##### Run and collect the list of responses (handling exceptions) #####
# def exception_handler(request, exception):
#   print(request, exception)
#   print("Request failed")

# response_list = grequests.map(request_list, exception_handler=exception_handler)
# print(response_list)
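grequests.map also accepts a size argument that, like the gevent Pool above, caps how many requests run concurrently.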

Twisted code example

#getPage plays the role of the requests module; defer supplies the Deferred return values; reactor runs the event loop
from twisted.web.client import getPage,defer
from twisted.internet import reactor

def all_done(arg):
  reactor.stop()

def callback(contents):
  print(contents)

deferred_list = []

url_list = ['http://www.bing.com', 'http://www.baidu.com']
for url in url_list:
  deferred = getPage(bytes(url,encoding='utf8'))
  deferred.addCallback(callback)
  deferred_list.append(deferred)
# DeferredList watches all the deferreds and fires once every request has finished
dlist = defer.DeferredList(deferred_list)
dlist.addBoth(all_done)

reactor.run()
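Note that getPage has long been deprecated in Twisted; new code should use twisted.web.client.Agent or the treq package instead.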

Tornado code example

from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from tornado import ioloop


# Counter used to stop the IO loop once every request has completed
COUNT = 0


def handle_response(response):
  """
  Handle the response (the counter lets us stop the IO loop by
  calling ioloop.IOLoop.current().stop() after the last request)
  :param response:
  :return:
  """
  global COUNT
  if response.error:
    print("Error:", response.error)
  else:
    print(response.body)
  COUNT -= 1
  if COUNT == 0:
    ioloop.IOLoop.current().stop()


def func():
  global COUNT
  url_list = [
    'http://www.baidu.com',
  ]
  COUNT = len(url_list)
  for url in url_list:
    print(url)
    http_client = AsyncHTTPClient()
    # The callback form of fetch requires Tornado < 6;
    # Tornado 6+ removed it in favour of async/await
    http_client.fetch(HTTPRequest(url), handle_response)


ioloop.IOLoop.current().add_callback(func)
ioloop.IOLoop.current().start()

That concludes this detailed summary of Python crawler performance. For more material on Python crawler performance, please follow our other related articles!