Now that you know a bit about selection and extraction, let's complete our spider by writing the code to extract the quotes from the web page.
Each quote on http://quotes.toscrape.com is represented by HTML elements that look like this:
<div class="quote">
<span class="text">“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”</span>
<span>
by <small class="author">Albert Einstein</small>
<a href="/author/Albert-Einstein">(about)</a>
</span>
<div class="tags">
Tags:
<a class="tag" href="/tag/change/page/1/">change</a>
<a class="tag" href="/tag/deep-thoughts/page/1/">deep-thoughts</a>
<a class="tag" href="/tag/thinking/page/1/">thinking</a>
<a class="tag" href="/tag/world/page/1/">world</a>
</div>
</div>
Let's open the Scrapy shell and play around a bit to find out how to extract the data we want:
$ scrapy shell "http://quotes.toscrape.com"
We get a list of selectors for the quote HTML elements with:
>>> response.css("div.quote")
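In the shell, this query prints a SelectorList. The exact representation varies across Scrapy versions, but it should look roughly like this (truncated here for readability):

[<Selector xpath="descendant-or-self::div[@class='quote']" data='<div class="quote" itemscope itemtype...'>,
 <Selector xpath="descendant-or-self::div[@class='quote']" data='<div class="quote" itemscope itemtype...'>,
 ...]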
Each of the selectors returned by the query above allows us to run further queries over its sub-elements. Let's assign the first selector to a variable, so that we can run our CSS selectors directly on a particular quote:
>>> quote = response.css("div.quote")[0]
Now, let's extract the title, author, and tags from the quote object we just created:
>>> title = quote.css("span.text::text").extract_first()
>>> title
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> author = quote.css("small.author::text").extract_first()
>>> author
'Albert Einstein'
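As an aside, .extract_first() is forgiving: if the selector matches nothing, it returns None instead of raising an IndexError. You can verify this in the shell with a query that matches no element (the class name nosuchclass below is made up for illustration):

>>> quote.css("span.nosuchclass::text").extract_first() is None
True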
Given that the tags are a list of strings, we can use the .extract() method to get all of them:
>>> tags = quote.css("div.tags a.tag::text").extract()
>>> tags
['change', 'deep-thoughts', 'thinking', 'world']
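CSS is not the only way to query: Scrapy selectors also support XPath, and the same values can be extracted with equivalent expressions. The queries below are a sketch of that equivalence rather than part of the original shell session:

>>> quote.xpath('.//span[@class="text"]/text()').extract_first()
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> quote.xpath('.//div[@class="tags"]/a[@class="tag"]/text()').extract()
['change', 'deep-thoughts', 'thinking', 'world']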
Having figured out how to extract each bit, we can now iterate over all the quote elements and put them together into a Python dictionary:
>>> for quote in response.css("div.quote"):
...     text = quote.css("span.text::text").extract_first()
...     author = quote.css("small.author::text").extract_first()
...     tags = quote.css("div.tags a.tag::text").extract()
...     print(dict(text=text, author=author, tags=tags))
{'tags': ['change', 'deep-thoughts', 'thinking', 'world'], 'author': 'Albert Einstein', 'text': '“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'}
{'tags': ['abilities', 'choices'], 'author': 'J.K. Rowling', 'text': '“It is our choices, Harry, that show what we truly are, far more than our abilities.”'}
... a few more of these, omitted for brevity
>>>
Extracting data in our spider:
Let's get back to our spider. Until now, it doesn't extract any data in particular; it just saves the whole HTML page to a local file. Let's integrate the extraction logic above into our spider.
A Scrapy spider typically generates many dictionaries containing the data extracted from the page. To do that, we use the yield Python keyword in the callback, as you can see below:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }
If you run this spider, it will output the extracted data along with the log:
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['life', 'love'], 'author': 'André Gide', 'text': '“It is better to be hated for what you are than to be loved for what you are not.”'}
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['edison', 'failure', 'inspirational', 'paraphrased'], 'author': 'Thomas A. Edison', 'text': "“I have not failed. I've just found 10,000 ways that won't work.”"}
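If you would rather store these items than just read them off the log, Scrapy's built-in feed export can serialize them to a file: pass -o to the crawl command, and the output format is inferred from the file extension (the filename quotes.json here is just an example):

$ scrapy crawl quotes -o quotes.json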