Scrapy's Item is an indispensable step in saving data: it structures the scraped data, which the Pipelines then use for database storage, image downloading, and so on. It has only one field type, scrapy.Field().
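As a quick illustration of how an Item behaves (a minimal sketch; DemoItem is a made-up name, and the JobBoleArticleItem defined later works the same way), every field is declared as scrapy.Field() and the instance is filled and read like a dict:

import scrapy

class DemoItem(scrapy.Item):
    # all fields, regardless of the data they hold, use the same scrapy.Field() type
    title = scrapy.Field()
    url = scrapy.Field()

item = DemoItem()
item['title'] = "hello"                     # dict-style assignment
print(item['title'], item.get('url', ''))  # dict-style access, with a default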
Since we need a cover image, add a front_image_url field to the spider above and modify the parse function accordingly:
def parse(self, response):
    """
    1. Extract the article URLs on the list page and hand them to Scrapy for parsing.
    2. Extract the next article list page.
    """
    # requires: from scrapy.http import Request and from urllib import parse
    article_list = response.css('#archive .floated-thumb .post-thumb a')
    for article in article_list:
        image_url = article.css("img::attr(src)").extract_first("")
        article_url = article.css("::attr(href)").extract_first("")
        yield Request(url=parse.urljoin(response.url, article_url),
                      meta={"front_image_url": image_url},
                      callback=self.parse_detail)
        print(parse.urljoin(response.url, article_url))  # debug output
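Here parse.urljoin (from the standard library's urllib) resolves the extracted href against the current page URL, so both relative and absolute links come out as full URLs. A quick worked example (the URLs are illustrative only):

from urllib import parse

# a root-relative href is resolved against the page URL
parse.urljoin("http://blog.jobbole.com/all-posts/", "/114638/")
# -> 'http://blog.jobbole.com/114638/'

# an already-absolute href passes through unchanged
parse.urljoin("http://blog.jobbole.com/all-posts/", "http://blog.jobbole.com/114638/")
# -> 'http://blog.jobbole.com/114638/'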
The meta argument is how values are passed along with the Request. When debugging, you will see the meta content on the returned response; it is a dict, so the value can be read directly as response.meta['front_image_url'] (or with the get method plus a default value to guard against a missing key):
def parse_detail(self, response):
    front_image_url = response.meta.get("front_image_url", "")  # article cover image
    title = response.css('div.entry-header h1::text').extract_first("0")
    create_date = response.css('p.entry-meta-hide-on-mobile::text').extract()[0].replace('·', '').strip()
    fav_nums = response.css("span.bookmark-btn::text").extract_first("0")
    praise_nums = response.css("div.post-adds h10::text").extract_first("0")
In items.py, define an Item and declare its fields:
import scrapy

class JobBoleArticleItem(scrapy.Item):
    title = scrapy.Field()
    create_date = scrapy.Field()
    url = scrapy.Field()               # the article URL
    url_object_id = scrapy.Field()     # MD5 of the URL, for deduplication etc.
    front_image_url = scrapy.Field()   # cover image URL
    front_image_path = scrapy.Field()  # local path of the downloaded cover image
    praise_nums = scrapy.Field()
    comment_nums = scrapy.Field()
    fav_nums = scrapy.Field()
    tags = scrapy.Field()
    content = scrapy.Field()
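The url_object_id field is meant to hold the MD5 of the URL for deduplication. A minimal helper for computing it might look like this (get_md5 is a hypothetical name, not something Scrapy provides; put it wherever your project keeps utility code):

import hashlib

def get_md5(url):
    """Return the hex MD5 digest of a URL string."""
    if isinstance(url, str):
        url = url.encode("utf-8")  # hashlib works on bytes
    return hashlib.md5(url).hexdigest()

In parse_detail it would then be used as article_item['url_object_id'] = get_md5(response.url).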
In jobbole.py, import it with from articlespider.items import JobBoleArticleItem, instantiate it inside parse_detail with article_item = JobBoleArticleItem(), and fill in the fields. The full code:
def parse_detail(self, response):
    article_item = JobBoleArticleItem()
    front_image_url = response.meta.get("front_image_url", "")  # article cover image
    title = response.css('div.entry-header h1::text').extract_first("0")
    create_date = response.css('p.entry-meta-hide-on-mobile::text').extract()[0].replace('·', '').strip()
    fav_nums = response.css("span.bookmark-btn::text").extract_first("0")
    praise_nums = response.css("div.post-adds h10::text").extract_first("0")
    match_re = re.match(r".*?(\d+).*", fav_nums)  # requires: import re
    if match_re:
        fav_nums = match_re.group(1)
    else:
        fav_nums = 0
    comment_nums = response.css("a[href='#article-comment'] span::text").extract_first("0")
    match_re = re.match(r".*?(\d+).*", comment_nums)
    if match_re:
        comment_nums = match_re.group(1)
    else:
        comment_nums = 0
    content = response.css(".entry").extract_first("0")
    tag_list = response.css('p.entry-meta-hide-on-mobile a::text').extract()
    tag_list = [e for e in tag_list if not e.strip().endswith("评论")]  # drop the "N 评论" (comments) link
    tags = ",".join(tag_list)
    article_item['title'] = title
    article_item['url'] = response.url
    article_item['create_date'] = create_date
    article_item['front_image_url'] = front_image_url
    article_item['praise_nums'] = praise_nums
    article_item['comment_nums'] = comment_nums
    article_item['fav_nums'] = fav_nums
    article_item['tags'] = tags
    article_item['content'] = content
    yield article_item
The final yield article_item hands the item over to the pipelines registered under ITEM_PIPELINES in settings for processing.
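For that to happen, ITEM_PIPELINES must be enabled in settings.py. A sketch, assuming the pipeline class that scrapy startproject auto-generates for this project (adjust the class name to your own):

# settings.py
ITEM_PIPELINES = {
    # the number is the execution order: lower runs first
    'articlespider.pipelines.ArticlespiderPipeline': 300,
}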
If you now set a breakpoint in pipelines.py and run the debugger, you can see that the values in article_item have been passed through to this point.
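The pipeline itself is just a class with a process_item method; a sketch of the auto-generated skeleton, with the breakpoint spot marked (the class name follows the default scrapy startproject naming and may differ in your project):

class ArticlespiderPipeline(object):
    def process_item(self, item, spider):
        # set the breakpoint here: item already contains title, tags, content, etc.
        return item  # must return the item so any later pipeline also receives it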