It has been almost a month now, and my Python is making modest progress. I am currently working through a Scrapy project.
Posted by 阿新 • 2017-11-17
I just studied and learned a new trick, and I feel my skill went up a level: scraping a single item across multiple pages. I never understood how to do this before. The code is below!
The item is declared as follows:
import scrapy

class QuotesItem(scrapy.Item):
    quote = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()
    author_born_date = scrapy.Field()
    author_born_location = scrapy.Field()
    author_description = scrapy.Field()
    author_full_url = scrapy.Field()
spider.py is as follows:
import scrapy
from quotes_2.items import QuotesItem

class QuotesSpider(scrapy.Spider):
    name = 'quotes_2_6'
    start_urls = ['http://quotes.toscrape.com']
    allowed_domains = ['toscrape.com']

    def parse(self, response):
        for quote in response.css('div.quote'):
            item = QuotesItem()
            item['quote'] = quote.css('span.text::text').extract_first()
            item['author'] = quote.css('small.author::text').extract_first()
            item['tags'] = quote.css('div.tags a.tag::text').extract()
            # select the "(about)" link inside this quote, not the first one on the page
            author_page = quote.css('small.author + a::attr(href)').extract_first()
            item['author_full_url'] = response.urljoin(author_page)
            # hand the partially-filled item to the author-page callback via meta
            yield scrapy.Request(url=item['author_full_url'],
                                 meta={'item': item},
                                 callback=self.parse_author,
                                 dont_filter=True)
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_full_url = response.urljoin(next_page)
            yield scrapy.Request(next_full_url, callback=self.parse)

    def parse_author(self, response):
        item = response.meta['item']
        item['author_born_date'] = response.css('.author-born-date::text').extract_first()
        item['author_born_location'] = response.css('.author-born-location::text').extract_first()
        item['author_description'] = response.css('.author-description::text').extract_first()
        yield item
"""Via the meta parameter, the item dict is assigned to the 'item' key inside meta
(remember that meta itself is also a dict). scrapy.Request builds a Request object
for the given URL, and this meta dict (whose 'item' key holds the item dict)
is carried along with the Request object and delivered to the parse_author() callback."""

item = response.meta['item']  # the response carries the meta dict from the request; this line pulls the item back out
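The hand-off above can be sketched without a network or a running crawler. This is a minimal sketch with plain dicts standing in for Scrapy's Request/Response objects; the URL and field values are placeholders, not data from the real site:

```python
# Plain-dict sketch of passing a partial item between two callbacks via meta.

def parse(url):
    """First callback: build a partial item and attach it to the request's meta."""
    item = {'quote': 'A witty quote', 'author': 'Some Author'}
    # Equivalent of: scrapy.Request(url, meta={'item': item}, callback=parse_author)
    return {'url': url, 'meta': {'item': item}}

def parse_author(response):
    """Second callback: pull the partial item back out of meta and finish it."""
    item = response['meta']['item']
    item['author_born_date'] = 'January 01, 1900'  # would come from response.css(...)
    return item

request = parse('http://quotes.toscrape.com/author/Some-Author')
# Pretend the downloader turned the request into a response with the same meta:
finished = parse_author(request)
```

The key point is that meta is just a dict riding along with the request, so the second callback finishes the very same item object the first callback started.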
dont_filter=True turns off duplicate filtering for the request, so repeated requests to the same author page are not silently dropped.