Python crawler program for Taobao and DGBB sales analysis

Taobao crawler and report

A month ago I decided to write an article about Python, the first new language I have met since I left IT five years ago. Because I am familiar with Java, C++, C# and so on, I know how to pick up a new language quickly, so I decided to develop a web crawler program instead of the standard "hello world". At the same time, I wanted to do some sales analysis for DGBB (deep groove ball bearings), the catalogue bearing for the retail market. So I combined these two things and built an analysis tool for this business from Taobao data.

The contents are listed below:

1) Results: data analysis and reports.

2) The logic of the tool.

3) Python source code.

1) Data analysis and reports.

Step 1: Data crawled from the Taobao mobile app.

At the beginning I wanted to get the data from the Taobao website, but it is protected against crawlers. So I searched the internet for other people's experience, and some suggested the data could be fetched through the mobile app, which also serves HTML documents. In the end this worked. The data may not be fully complete, but it is enough for some statistical analysis.
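As a taste of the approach, here is a minimal sketch of such a request, using only the endpoint and parameters that appear later in this article (the endpoint may well have changed since this was written):

import urllib.parse
import urllib.request

# Query the Taobao mobile search endpoint; 'q' is the search keyword.
params = urllib.parse.urlencode({'q': 'DGBB', 'page': 0, 'm': 'api4h5', 'style': 'list'})
url = 'http://s.m.taobao.com/search?' + params
req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us)'})
print(urllib.request.urlopen(req).read()[:200])  # first bytes of the JSON reply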

Step 2: Reports

Report 1

You can see that the top three are Shanghai, Zhejiang and Jiangsu, which means the major market is in the Yangtze River Delta (YRD).

Report 2

This market share report is based on sales. We can see that most sales also happen in the YRD. Why is almost nothing sold in the Pearl River Delta (PRD)? My guess is that the YRD focuses on the upstream and midstream industry while the PRD focuses on the downstream industry.

Report 3

From this brand report, we can see that the biggest local brand (HRB) already holds a 39% market share. NSK is second.

2) The logic of the tool

Process

Step 1: Use the browser's developer tools to identify how it talks to the server, i.e. which header fields the browser sends along with the URL. To avoid being rejected by the server, the Python program prepares the same header information in advance.

Step 2: Generate the Taobao URLs that return the necessary DGBB data, such as location, product name, store name and sales quantity. If the program needs to dig deeper, the URLs are kept in a list (in Excel or JSON).

Step 3: While the URL list is not finished, the program takes one URL and downloads the page. All new URLs found on a page are appended to the URL list if they are not included yet, as sketched below. The program parses the pages as JSON and writes the necessary fields into the Excel file.
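A minimal sketch of that crawl loop, assuming a hypothetical extract_urls() helper that pulls links out of a downloaded page (the real program below simply iterates over numbered search result pages instead):

pending = ['http://s.m.taobao.com/search?q=DGBB&page=0']  # seed URL
seen = set(pending)
while pending:
    url = pending.pop(0)
    page = getHtml(url)                 # download one page (defined in step 2 below)
    for new_url in extract_urls(page):  # hypothetical link extractor
        if new_url not in seen:         # only queue URLs not included yet
            seen.add(new_url)
            pending.append(new_url)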

Step 4: Create the sales reports with Python.
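To give an idea of how such reports can be produced, here is a minimal sketch that aggregates units sold per shipping location with collections.Counter; the (location, sold) pairs are sample data standing in for the records the crawler writes to Excel:

import collections

# Aggregate units sold per shipping location for the regional report.
rows = [('上海', 120), ('浙江', 95), ('江苏', 80), ('上海', 40)]  # sample data
by_province = collections.Counter()
for location, sold in rows:
    by_province[location] += sold
for province, total in by_province.most_common():
    print(province, total)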

3) Python source code.

Step 1: Preconditions of the program.

1. The Excel package xlsxwriter. Install it with:

pip3 install xlsxwriter

2. If you want to store the images, include an image package as well. The installation failed for me, so I skip the product pictures.
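The snippets below also rely on a few standard library imports; the alias wx for xlsxwriter is inferred from the wx.Workbook call in step 3:

import http.cookiejar
import json
import time
import urllib.parse
import urllib.request

import xlsxwriter as wx  # Excel writer, used as wx in the code below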

Step 2: Define a function to download the pages.

def getHtml(url, pro='', postdata={}):
    # Download the HTML with cookie support.
    # The first argument is the URL, the third is the POST data.
    filename = 'cookie.txt'
    # Declare a MozillaCookieJar object backed by that file.
    cj = http.cookiejar.MozillaCookieJar(filename)
    handlers = [urllib.request.HTTPCookieProcessor(cj)]
    if pro:
        # Route through an HTTP proxy only when one is supplied.
        handlers.append(urllib.request.ProxyHandler({'http': 'http://' + pro}))
    opener = urllib.request.build_opener(*handlers)
    # Set the header information to look like a mobile browser to Taobao's server.
    opener.addheaders = [('User-Agent', 'Mozilla/5.0 (iPad; U; CPU OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5'),
                         ('Referer', 'http://s.m.taobao.com'),
                         ('Host', 'h5.m.taobao.com')]
    # Install the opener and open the URL.
    urllib.request.install_opener(opener)
    if postdata:
        postdata = urllib.parse.urlencode(postdata)
        html_bytes = urllib.request.urlopen(url, postdata.encode()).read()
    else:
        html_bytes = urllib.request.urlopen(url).read()
    cj.save(ignore_discard=True, ignore_expires=True)
    return html_bytes
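A quick usage example (the query string is the one the main program builds below):

# Fetch one search result page and keep the raw bytes.
page_bytes = getHtml('http://s.m.taobao.com/search?q=bearing&page=0')
print(len(page_bytes), 'bytes downloaded')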

Step 3: Define a function to write the data to an Excel file.

def writeexcel(path, dealcontent):
    workbook = wx.Workbook(path)
    worksheet = workbook.add_worksheet()
    for i in range(0, len(dealcontent)):            # rows
        for j in range(0, len(dealcontent[i])):     # columns
            if i != 0 and j == len(dealcontent[i]) - 1:
                # The last column of a data row holds the image path, if any.
                if dealcontent[i][j] == '':
                    worksheet.write(i, j, ' ')
                else:
                    try:
                        worksheet.insert_image(i, j, dealcontent[i][j])
                    except Exception:
                        worksheet.write(i, j, ' ')
            else:
                if dealcontent[i][j]:
                    worksheet.write(i, j, dealcontent[i][j].replace(' ', ''))
                else:
                    worksheet.write(i, j, '')
    workbook.close()
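A small usage example: a header row plus one data row whose last field is the (here empty) image path; the file name is only illustrative:

rows = [['Store', 'Title', 'Image'],
        ['SomeStore', 'DGBB 6204', '']]  # empty string: no image available
writeexcel('demo.xlsx', rows)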

Step 4: Write the main program.

def begin():
    today = time.strftime('%Y%m%d', time.localtime())
    a = time.time()  # start timestamp
    keyword = input('Key words:')
    sort = input('Sort by sales 1, price ascending 2, price descending 3, credit 4, overall 5:')
    try:
        pages = int(input('Pages to crawl (default 100 pages):'))
        if pages > 100 or pages <= 0:
            print('Page number should be between 1 and 100')
            pages = 100
    except:
        pages = 100
    try:
        man = int(input('Seconds to suspend between pages (default 4):'))
        if man <= 0:
            man = 4
    except:
        man = 4
    if sort == '1':
        sortss = '_sale'
    elif sort == '2':
        sortss = 'bid'
    elif sort == '3':
        sortss = '_bid'
    elif sort == '4':
        sortss = '_ratesum'
    elif sort == '5':
        sortss = ''
    else:
        sortss = '_sale'
    namess = time.strftime('%Y%m%d%H%S', time.localtime())
    root = '../data/' + today + '/' + namess + keyword
    roota = '../excel/' + today
    mulu = '../image/' + today + '/' + namess + keyword  # image folder, unused since pictures are skipped
    createjia(root)
    createjia(roota)
    for page in range(0, pages):
        time.sleep(man)
        print('Suspend ' + str(man) + ' seconds')
        postdata = {
            'event_submit_do_new_search_auction': 1,
            'search': 'provide the search',
            '_input_charset': 'utf-8',
            'topSearch': 1,
            'atype': 'b',
            'searchfrom': 1,
            'action': 'home:redirect_app_action',
            'from': 1,
            'q': keyword,
            'sst': 1,
            'n': 20,
            'buying': 'buyitnow',
            'm': 'api4h5',
            'abtest': 16,
            'wlsort': 16,
            'style': 'list',
            'closeModues': 'nav,selecthot,onesearch',
            'page': page
        }
        if sortss != '':
            # Add the sort parameter only when a sort order was chosen.
            postdata['sort'] = sortss
        postdata = urllib.parse.urlencode(postdata)
        taobao = 'http://s.m.taobao.com/search?' + postdata
        print(taobao)
        try:
            content1 = getHtml(taobao)
            file = open(root + '/' + str(page) + '.json', 'wb')
            file.write(content1)
            file.close()
        except Exception as e:
            if hasattr(e, 'code'):
                print('Page does not exist or the request timed out.')
                print('Error code:', e.code)
            elif hasattr(e, 'reason'):
                print("Can't connect to the server.")
                print('Reason: ', e.reason)
            else:
                print(e)
    files = listfiles(root, '.json')
    total = []
    total.append(['Page', 'Store name', 'Product title', 'Discounted price', 'Shipping location',
                  'Comments', 'Original price', 'Units sold', 'Promotion', 'Payers',
                  'Coin discount', 'URL', 'Image URL', 'Image'])
    for filename in files:
        try:
            doc = open(filename, 'rb')
            doccontent = doc.read().decode('utf-8', 'ignore')
            product = doccontent.replace(' ', '').replace('\n', '')
            product = json.loads(product)
            onefile = product['listItem']
        except:
            print("Can't parse file " + filename)
            continue
        for item in onefile:
            itemlist = [filename, item['nick'], item['title'], item['price'],
                        item['location'], item['commentCount']]
            itemlist.append(item['originalPrice'])
            # itemlist.append(item['mobileDiscount'])
            itemlist.append(item['sold'])
            itemlist.append(item['zkType'])
            itemlist.append(item['act'])
            itemlist.append(item['coinLimit'])
            itemlist.append('http:' + item['url'])
            itemlist.append('')  # image path left empty: product pictures are skipped
            total.append(itemlist)
    if len(total) > 1:
        writeexcel(roota + '/' + namess + keyword + 'taobao.xlsx', total)
    else:
        print('Nothing was received from the server.')
    b = time.time()
    print('Run time: ' + timetochina(b - a))

if __name__ == '__main__':
    begin()
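The script also calls three helpers that are never shown in the post: createjia (create a folder), listfiles (collect the files with a given extension) and timetochina (format a duration for printing). Minimal sketches under those assumptions:

import os

def createjia(path):
    # Create the folder if it does not exist yet.
    os.makedirs(path, exist_ok=True)

def listfiles(dirname, ext):
    # Return the full paths of all files in dirname that end with ext.
    return [os.path.join(dirname, f) for f in os.listdir(dirname) if f.endswith(ext)]

def timetochina(seconds):
    # Format a duration in seconds (the original presumably printed it in Chinese).
    return '%d min %.1f s' % (seconds // 60, seconds % 60)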

The source code is adapted from "一只尼玛".
