
Using proxies in Scrapy

Scrapy ships with several built-in downloader middlewares; HttpProxyMiddleware is the one that handles proxies.
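For reference, the simplest way to use that built-in support is to put a proxy URL into request.meta['proxy'] yourself, for example in a spider. The sketch below is illustrative only; the spider name, URL, and proxy address are placeholders.

import scrapy


class DemoSpider(scrapy.Spider):
    """Minimal sketch: attach a proxy to a single request via request.meta."""
    name = 'demo'

    def start_requests(self):
        # 'http://127.0.0.1:8888' is a placeholder proxy address.
        yield scrapy.Request(
            'http://example.com',
            meta={'proxy': 'http://127.0.0.1:8888'},
        )

    def parse(self, response):
        self.logger.info('fetched %s via proxy', response.url)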

Using your own proxy middleware in Scrapy takes two main steps.

    1: Write your own proxy middleware:

# -*- coding: utf-8 -*-

import base64
import logging
import random

from dcs.settings import PROXIES


class ProxyMiddleware(object):
    """Runs before Scrapy's built-in HttpProxyMiddleware.
    If 'proxy' is already set in request.meta, the built-in
    HttpProxyMiddleware does nothing.
    """
    def process_request(self, request, spider):
        """Attach a randomly chosen proxy to every outgoing request."""
        if 'proxy' in request.meta:
            return
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        # base64.encodestring was removed in Python 3; b64encode expects bytes.
        encoded_user_pass = base64.b64encode(
            proxy['user_pass'].encode('utf-8')).decode('ascii')
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
        logging.info('[ProxyMiddleware] proxy:%s is used', proxy)
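The PROXIES list imported from dcs.settings is not shown in the original. A minimal sketch of what it might look like, assuming each entry is a dict with an ip_port string and a user_pass string in user:password form (addresses and credentials below are placeholders):

# dcs/settings.py (sketch; these entries are placeholders)
PROXIES = [
    {'ip_port': '111.11.228.75:80', 'user_pass': 'user1:pass1'},
    {'ip_port': '120.198.243.22:80', 'user_pass': 'user2:pass2'},
]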

    2: Enable your proxy middleware in settings.py, giving it an execution order that puts it before HttpProxyMiddleware. (The setting is a dict: the key is the class path, the value is the execution order, and lower numbers run first. Since your middleware sets 'proxy' in request.meta, the built-in proxy middleware will then do nothing. Built-in middlewares are enabled by default.)

DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110,
    'pythontab.middlewares.ProxyMiddleware': 100,
}
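Note that the scrapy.contrib path above is the pre-1.0 location of the middleware. In Scrapy 1.0 and later the built-in middleware lives under scrapy.downloadermiddlewares, so the equivalent configuration would look like this (keep your own project's path for ProxyMiddleware):

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'pythontab.middlewares.ProxyMiddleware': 100,
}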