Scrapy: AttributeError: 'HtmlResponse' object has no attribute 'follow' when calling response.follow()
Posted by 阿新 on 2019-02-03
Running a Scrapy crawl fails with AttributeError: 'HtmlResponse' object has no attribute 'follow'.
Full error output:
2017-05-20 22:58:44 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: my_project)
2017-05-20 22:58:44 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'my_project', 'FEED_FORMAT': 'jl', 'FEED_URI': 'author.jl', 'NEWSPIDER_MODULE': 'my_project.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['my_project.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0'}
Traceback (most recent call last):
  File "I:\Anaconda3\lib\site-packages\scrapy\spiderloader.py", line 53, in load
    return self._spiders[spider_name]
KeyError: 'author'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "I:\Anaconda3\Scripts\scrapy-script.py", line 5, in <module>
    sys.exit(scrapy.cmdline.execute())
  File "I:\Anaconda3\lib\site-packages\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "I:\Anaconda3\lib\site-packages\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "I:\Anaconda3\lib\site-packages\scrapy\cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "I:\Anaconda3\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "I:\Anaconda3\lib\site-packages\scrapy\crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "I:\Anaconda3\lib\site-packages\scrapy\crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "I:\Anaconda3\lib\site-packages\scrapy\crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "I:\Anaconda3\lib\site-packages\scrapy\spiderloader.py", line 55, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: author'
(That first run aborted with a separate "Spider not found: author" error before the spider even loaded; the run below gets past that and reproduces the actual AttributeError.)

E:\python\my_project>scrapy crawl author -o author.jl
2017-05-20 22:59:30 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: my_project)
2017-05-20 22:59:30 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'my_project', 'FEED_FORMAT': 'jl', 'FEED_URI': 'author.jl', 'NEWSPIDER_MODULE': 'my_project.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['my_project.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0'}
2017-05-20 22:59:30 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2017-05-20 22:59:31 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-05-20 22:59:31 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-05-20 22:59:31 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-05-20 22:59:31 [scrapy.core.engine] INFO: Spider opened
2017-05-20 22:59:31 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-20 22:59:31 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-20 22:59:32 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2017-05-20 22:59:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/> (referer: None)
2017-05-20 22:59:32 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/> (referer: None)
Traceback (most recent call last):
  File "I:\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "I:\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "E:\python\my_project\spiders\author_spider.py", line 12, in parse
    yield response.follow(href, self.parse_author)
AttributeError: 'HtmlResponse' object has no attribute 'follow'
2017-05-20 22:59:32 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-20 22:59:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 504,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 2701,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/404': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 5, 20, 14, 59, 32, 573099),
'log_count/DEBUG': 3,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/AttributeError': 1,
'start_time': datetime.datetime(2017, 5, 20, 14, 59, 31, 285241)}
2017-05-20 22:59:32 [scrapy.core.engine] INFO: Spider closed (finished)
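For reference, the traceback points at a parse() method that tries to follow author links with response.follow(). A minimal sketch of what such an author_spider.py looks like, modeled on the official Scrapy tutorial (the selectors and the parse_author body are assumptions; only the response.follow(href, self.parse_author) line comes from the traceback):

import scrapy

class AuthorSpider(scrapy.Spider):
    name = 'author'
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Selector assumed from the Scrapy tutorial: each quote links to its author page
        for href in response.css('.author + a::attr(href)').extract():
            # This is the line from the traceback; it fails on Scrapy < 1.4
            yield response.follow(href, self.parse_author)

    def parse_author(self, response):
        # Fields assumed for illustration only
        yield {
            'name': response.css('h3.author-title::text').extract_first(),
            'birthdate': response.css('.author-born-date::text').extract_first(),
        }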
Solution:
Check which version of Scrapy you are running. Comparing the official documentation across releases shows that response.follow() was only introduced in Scrapy 1.4, so on any version below 1.4 the method simply does not exist. Either upgrade Scrapy, or consult the documentation for your installed version to see the older way of writing the same thing.
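To confirm the installed version (both are standard commands; the output line is just illustrative):

E:\python\my_project>scrapy version
Scrapy 1.3.3

or, from Python:

import scrapy
print(scrapy.__version__)  # response.follow() requires 1.4.0 or newer

If upgrading is an option, pip install --upgrade scrapy makes the error go away directly.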
For example, since I am on 1.3.3, the equivalent has to be written with scrapy.Request() and response.urljoin():
# Pagination: follow the "next" link (Scrapy < 1.4 style)
for next_page in response.css('li.next a::attr(href)').extract():
    next_page = response.urljoin(next_page)  # build an absolute URL
    yield scrapy.Request(next_page, callback=self.parse)
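Applied to the parse() method from the traceback, the pre-1.4 version would look roughly like this (same assumed selector as in the sketch above):

def parse(self, response):
    for href in response.css('.author + a::attr(href)').extract():
        # response.urljoin() resolves relative URLs against the current page,
        # which response.follow() does automatically in Scrapy >= 1.4
        yield scrapy.Request(response.urljoin(href), callback=self.parse_author)

From Scrapy 1.4 onwards, response.follow() also accepts relative URLs (and even link selectors) directly, which is why the urljoin() step disappears in newer tutorial code.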