Date: 2016-12-01
By: Black Crow
Preface:
This assignment is for part four of the course: scraping dynamically loaded data. The scraped data is stored as a CSV file, from which I made a few simple charts in Excel. The crawl collected 6,800+ listings for the Taobao keyword 'bra'.
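The "dynamic loading" here boils down to requesting Taobao's AJAX search endpoint with an item offset that advances by 44 per page. As a minimal sketch of that offset scheme (the helper name `build_search_urls` is mine, not part of the assignment code):

```python
def build_search_urls(keyword, start_page, end_page):
    """Build one AJAX search URL per page; each page holds 44 items,
    so the data-value offset is page * 44."""
    base = ('https://s.taobao.com/search?data-key=s&data-value={offset}'
            '&ajax=true&callback=&q={kw}')
    return [base.format(offset=page * 44, kw=keyword)
            for page in range(start_page, end_page + 1)]

urls = build_search_urls('bra', 1, 3)
# three URLs, with offsets 44, 88, 132
```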
Results:
My code:
import requests, json, csv

class Taobao():
    '''Fetch Taobao search pages for a keyword and extract product info from the JSON in each page.'''
    def __init__(self, start_page, end_page, keyword):
        self.urls = ['https://s.taobao.com/search?data-key=s&data-value={}&ajax=true&_ksTS=1480429993551_840&callback=&q={}'.format(str(i * 44), keyword) for i in range(start_page, end_page + 1)]
        self.write_file_head()  # write the header row once
        for url in self.urls:  # loop over the pages, appending each page's rows
            data = self.get_data(url)
            product_info_list = self.get_product_info(data)
            self.write_file(product_info_list)

    def get_data(self, url):
        r = requests.get(url)
        data = json.loads(r.text)  # parse the response body into a dict
        return data

    def get_product_info(self, data):
        product_info_list = []
        for i in data['mods']['itemlist']['data']['auctions']:  # deeply nested JSON; an online JSON viewer helps to explore it
            product_info = {
                'detail_url': i['detail_url'].replace('//', ''),
                'location': i['item_loc'].replace('//', ''),
                'shoplink': i['shopLink'].replace('//', ''),
                'reserve_price': i['reserve_price'],
                'fee': i['view_fee'],
                'raw_title': i['raw_title'],
                'view_price': i['view_price'],
                'pic_url': i['pic_url'].replace('//', ''),
                'shop_owner': i['nick'],
                'user_id': i['user_id'],
            }
            product_info_list.append(product_info)
        return product_info_list

    def write_file_head(self):
        with open('product_info.csv', 'a+', newline='') as file:
            fieldnames = ['raw_title', 'view_price', 'reserve_price', 'location',
                          'fee', 'detail_url', 'shoplink', 'pic_url', 'shop_owner', 'user_id']
            writer = csv.DictWriter(file, fieldnames=fieldnames)
            writer.writeheader()  # written once up front; repeating it on every page would duplicate the header

    def write_file(self, product_info_list):
        with open('product_info.csv', 'a+', newline='') as file:  # 'a+' appends at the end; without newline='' extra blank rows appear (cause unclear to me so far)
            fieldnames = ['raw_title', 'view_price', 'reserve_price', 'location',
                          'fee', 'detail_url', 'shoplink', 'pic_url', 'shop_owner', 'user_id']
            writer = csv.DictWriter(file, fieldnames=fieldnames)
            writer.writerows(product_info_list)  # unlike writerow, writerows takes the whole list of row dicts at once

if __name__ == '__main__':
    keyword = input('Keyword:')  # the search keyword that drives the returned data
    start_page = int(input('From page:'))  # each page holds 44 items, so page numbers must be ints
    end_page = int(input('To page:'))
    file = Taobao(start_page, end_page, keyword)
    print('Done!')
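The charts were made in Excel, but the same kind of aggregation can be sanity-checked back in Python by reading the CSV with `csv.DictReader`. A sketch, using a made-up in-memory sample instead of the real `product_info.csv`:

```python
import csv, collections, io

# Hypothetical mini-sample standing in for the scraped product_info.csv
sample = io.StringIO(
    "raw_title,view_price,location\n"
    "item A,59.00,上海\n"
    "item B,99.00,广东 深圳\n"
    "item C,39.00,上海\n"
)
reader = csv.DictReader(sample)
by_location = collections.Counter(row['location'] for row in reader)
print(by_location.most_common(1))  # [('上海', 2)]
```

With the real file, `open('product_info.csv', newline='')` would replace the `StringIO` sample.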
#### Summary:
1. Taobao only serves 100 result pages (44 items per page). In testing I tried 300 pages, and the crawl errored out after roughly 6.8K items.
2. The link contains `&callback=json*`; if it is not removed from the request URL, the response comes back as JSONP rather than plain JSON.
3. The CSV file is written with `a+` append mode, which is why the code handles the header row separately. Before I added `newline=''`, the output file contained garbled/blank rows; I have not yet tracked down the exact cause.
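On point 2 above: instead of editing the URL, the JSONP wrapper can also be stripped after the fact. A sketch (the helper name and regex are mine; this assumes the wrapper has the usual `callbackName({...});` shape):

```python
import json, re

def parse_maybe_jsonp(text):
    """Parse a body that may be plain JSON or JSONP like jsonp840({...});"""
    m = re.match(r'^\s*[\w$]+\((.*)\)\s*;?\s*$', text, re.S)
    if m:  # wrapped in a callback: keep only the payload between the parens
        text = m.group(1)
    return json.loads(text)

parse_maybe_jsonp('jsonp840({"ok": true});')  # {'ok': True}
parse_maybe_jsonp('{"a": 1}')                 # plain JSON passes through
```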