How embarrassing! I haven't kept up with writing a diary entry every day, breaking the promise I made to myself. I went completely wild over the National Day / Mid-Autumn holiday, and a single scholarship application form took me seven days; even I am speechless at myself. Still, I climbed Mount Tai and visited the Confucius temple on the mountain, so it counts as a pilgrimage (?) of sorts! Someday I'll sit down and write a proper travelogue; for today, here are some study notes on Python for financial big-data analysis.
Python I/O: a comparison of several approaches
When I first started doing Tianchi competitions last year, I noticed just how slow Python's I/O can be (still faster than R, of course). Partly this was because I did not know Python and the pandas library well, and partly it was a limitation of the hardware itself. Later I ran into some genuinely large datasets, such as Tencent's product-recommendation problem, which inevitably required more memory and more frequent I/O, and only then did I start thinking about how to optimise performance. My solution was honestly rather crude: use SAS instead of Python for the initial data processing, then load the result into Python with the most basic I/O, since SAS is fast and light on memory! Looking back at myself then, I hardly know whether to laugh or cry.
In fact, Python was perfectly capable of handling all of that work; my skills were just too weak and my knowledge too shallow. As the saying goes, thought without learning is perilous. Below I walk through a variety of Python I/O approaches with examples and compare their performance.
We use the following data for writing out and reading back in:
import numpy as np
import pandas as pd
n = 1000000
# one timestamp every 2 minutes, one million rows
c1 = pd.date_range(start='2000-01-01', periods=n, freq='2min')
# two integer columns and two float columns, indexed by the timestamps
ci = pd.DataFrame(np.random.randint(0, 100, (n, 2)), c1).add_prefix('Int')
cf = pd.DataFrame(np.random.randn(n, 2), c1).add_prefix('Float')
# combine and turn the timestamp index into an ordinary 'Date' column
c = pd.concat([ci, cf], axis=1).reset_index().rename(columns={'index': 'Date'})
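For orientation, here is a quick peek at the generated DataFrame. This is a small illustrative check of my own, assuming the cell above has been run; the exact memory figure will vary, and the integer columns may come out as int32 on Windows.
# quick sanity check of the test data (not part of the benchmark)
print(c.shape)                                  # (1000000, 5)
print(c.dtypes)                                 # Date datetime64[ns], Int* integer, Float* float64
print(c.memory_usage(deep=True).sum() / 1000**2, 'MB in memory')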
For convenience later on, we first build two ndarray versions of the DataFrame c. The ndarray format can be converted back into a DataFrame with pd.DataFrame(), but that takes extra steps and resources, so it can feel a bit redundant; in any case, here are the ndarray versions.
# version 1: structured array with a real datetime64 'Date' field
dtimes = np.arange('2000-01-01 00:00:00','2003-10-20 21:20:00',dtype='datetime64[2m]')
dty = np.dtype([('Date','datetime64[m]'),('Int0','int64'),('Int1','int64'),('Float0','float64'),('Float1','float64')])
d = np.zeros(len(dtimes),dtype=dty)
for i in ['Int0','Int1','Float0','Float1']: d[i] = c[i]
d['Date'] = dtimes

# version 2: same data, but 'Date' stored as a 19-character string (used for PyTables later)
dtimes = np.arange('2000-01-01 00:00:00','2003-10-20 21:20:00',dtype='datetime64[2m]')
dty = np.dtype([('Date','S19'),('Int0','int64'),('Int1','int64'),('Float0','float64'),('Float1','float64')])
dl = np.zeros(len(dtimes),dtype=dty)
for i in ['Int0','Int1','Float0','Float1']: dl[i] = c[i]
dl['Date'] = dtimes
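For completeness, here is a minimal sketch of the "extra operation" mentioned above: going from the structured ndarray d back to a DataFrame. The variable name d_df is just for illustration, and pandas normally normalises the datetime unit to nanoseconds on construction.
# sketch: structured ndarray -> DataFrame (the extra conversion step mentioned above)
d_df = pd.DataFrame(d)                                  # field names become column names
d_df['Date'] = d_df['Date'].astype('datetime64[ns]')    # make sure the unit matches pandas' default
print(d_df.dtypes)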
The pickle module
pickle is part of Python's standard library. Its advantage is that it can store most Python objects to disk; here is a simple example that stores the DataFrame c directly. Since the object you read back is exactly the object you wrote out, there are no intermediate steps and no pre- or post-processing, which makes it extremely convenient.
import pickle
pkl_file = open('data.pkl','wb')
%time pickle.dump(c,pkl_file)
pkl_file.close()
pkl_file = open('data.pkl','rb')
%time b = pickle.load(pkl_file)
pkl_file.close()
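As a side note, pandas wraps the same round trip in DataFrame.to_pickle and pd.read_pickle, which saves you from managing the file handle yourself. A minimal sketch with an arbitrary file name follows; the timings should be in the same ballpark as above, since it uses pickle underneath.
# the same pickle round trip via pandas' own wrappers
%time c.to_pickle('data2.pkl')
%time b2 = pd.read_pickle('data2.pkl')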
Reading and writing text files
Here we read and write with .write and .readlines, fixing the output format of the floats to 17 decimal places so that no precision is lost. This approach is fiddly and slow, but because the output is .csv-style text it can be opened and inspected in any text editor, which makes it highly portable, so it is still acceptable.
csv_file = open('data.txt','wb')
csv_file.write(b'Date,Int0,Int1,Float0,Float1\n')
ca = c.values
%time for x in ca: csv_file.write(('%s,%g,%g,%.17f,%.17f\n'%tuple(x)).encode())
csv_file.close()
csv_file = open('data.txt','rb')
%time ba = csv_file.readlines()
csv_file.close()
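Keep in mind that readlines only returns raw byte strings; to actually get a usable DataFrame again you still have to parse them yourself. Below is a rough, unoptimised sketch of that step (my own illustration, not included in the timed comparison; it is slow for a million rows).
# parse the raw lines back into a DataFrame (illustrative only, not timed below)
header = ba[0].decode().strip().split(',')
rows = [line.decode().strip().split(',') for line in ba[1:]]
parsed = pd.DataFrame(rows, columns=header)
parsed['Date'] = pd.to_datetime(parsed['Date'])
parsed[['Int0','Int1']] = parsed[['Int0','Int1']].astype('int64')
parsed[['Float0','Float1']] = parsed[['Float0','Float1']].astype('float64')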
SQL databases
Queries go through Python's built-in sqlite3 database. Reading back can also be done with the pandas.io.sql module. Writing is slow, and reading back is also very slow.
import sqlite3 as sq3
query = 'create table data (Date date, Int0 int, Int1 int, Float0 float, Float1 float)'
!rm -rf data.db
con = sq3.connect('data.db')
con.execute(query)
con.commit()
%time for x in ca: con.execute('insert into data values(?,?,?,?,?)',(str(x[0]),x[1],x[2],x[3],x[4]))
con.commit()
pointer = con.execute('select * from data')
%time da = pointer.fetchall()
da = np.array(da)
con.close()
import pandas.io.sql as pds
con = sq3.connect('data.db')
%time data = pds.read_sql('select * from data',con)
con.close()
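Most of the write cost here is the Python-level insert loop. If you stay with sqlite3, executemany (or DataFrame.to_sql) is the more idiomatic route. A minimal sketch follows, written against a separate file name and not included in the timings below.
# sketch: bulk insert with executemany instead of a Python loop (not timed below)
con = sq3.connect('data2.db')
con.execute('create table data (Date date, Int0 int, Int1 int, Float0 float, Float1 float)')
con.executemany('insert into data values(?,?,?,?,?)',
                ((str(x[0]), int(x[1]), int(x[2]), float(x[3]), float(x[4])) for x in ca))
con.commit()
con.close()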
Using pandas' HDF5 format
It is faster and simpler than the pickle module; the drawback is that it can only store DataFrame-type data.
h5s = pd.HDFStore('data.h5s','w')
%time h5s['data'] = c
h5s.close()
h5s = pd.HDFStore('data.h5s','r')
%time f = h5s['data']
h5s.close()
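By default HDFStore writes a fixed-format store, which is what is timed above. If you also want to query subsets without loading everything, the table format supports where clauses, at some cost in write speed. A minimal sketch follows; the file name, indexed column and threshold are just picked for illustration.
# sketch: queryable 'table' format store (slower to write than the fixed format above)
h5s = pd.HDFStore('data_t.h5s','w')
h5s.put('data', c, format='table', data_columns=['Int0'])
sub = h5s.select('data', where='Int0 > 95')     # reads only the matching rows
h5s.close()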
Using pandas' to_csv and to_excel
Previously .to_csv was the only method I knew. Constrained by the text format, it is about as slow as .write above. .to_excel was only tested on 1/10 of the data, because it is simply too slow.
%time c.to_csv('data.csv',index=False)
%time b = pd.read_csv('data.csv')
%time c.iloc[:int(n/10),:].to_excel('data.xlsx',index=False)
%time g = pd.read_excel('data.xlsx')
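One caveat: read_csv brings Date back as plain strings unless you ask it to parse dates, so a fairer round trip would look like the sketch below; parsing a million timestamps typically adds noticeably to the read time.
# round trip with the Date column actually restored to datetime
%time b = pd.read_csv('data.csv', parse_dates=['Date'])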
Using numpy's save
numpy's performance is plain for everyone to see. The pity is that what gets written out and read back is an ndarray, which then needs converting, which is a bit of a hassle. It beats pickle but falls short of pandas' HDF5 storage.
%time np.save('data',d)
%time ea = np.load('data.npy')
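If disk space matters more than speed, numpy also has a compressed variant. A minimal sketch follows; the file name is arbitrary, it is not part of the timed comparison, and the gain depends on how compressible the data is.
# compressed variant of np.save: smaller file, slower to write and read
%time np.savez_compressed('data_c', data=d)
%time ea_c = np.load('data_c.npz')['data']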
Using PyTables
This is Python combined with HDF5 directly, and it is fast. With complevel=0 it is on a par with pandas' HDF5 store. Again, what gets written out and read back is an ndarray and needs converting, which is a bit of a hassle. There are also data-analysis operations built on top of PyTables, which I will not cover here.
import tables as tb
h5 = tb.open_file('data.h5','w')
filters = tb.Filters(complevel=0)
%time tab = h5.create_table('/','data',dl,title='data',expectedrows=n,filters=filters)
h5.close()
h5 = tb.open_file('data.h5','r')
%time h = h5.get_node('/','data').read()
h = pd.DataFrame(h)
h['Date'] = h['Date'].astype('datetime64[m]')
h5.close()
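Since the timings use complevel=0, it is worth noting that PyTables can also compress transparently. Below is a minimal sketch with the blosc compressor (which ships with PyTables), using a separate file name; it is not included in the comparison below.
# sketch: the same table written with blosc compression enabled
h5c = tb.open_file('data_c.h5','w')
filters_c = tb.Filters(complevel=5, complib='blosc')
tab_c = h5c.create_table('/','data',dl,title='data',expectedrows=n,filters=filters_c)
h5c.close()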
Comparing the I/O operations
Timing here uses the time module: each operation is repeated a number of times and the median is taken. Four metrics are reported: byte (size on disk), easy (code complexity), read (read time) and write (write time). easy takes only the values 1, 2 and 3, scored as follows: 1, no format conversion and no loop; 2, format conversion needed but no loop; 3, format conversion needed and a loop used.
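The appendix below writes this timing logic out longhand for every method; the pattern itself is just the following (a small sketch of the idea, not the code actually used):
import time
# repeat an operation, collect wall-clock times, report the median
def time_median(fn, repeat=3):
    times = []
    for _ in range(repeat):
        t0 = time.time()
        fn()
        times.append(time.time() - t0)
    return np.median(times)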
Excluding the far-too-slow .to_excel leaves seven I/O methods. Taking into account the practical demands for concise code and fast reads, these seven methods are ranked; the result is shown in the table below:
As you can see, the pickle module has a clear advantage. If the ndarray format is convenient for you, np.save is also a good choice. For text-format reading and writing, you should use .write and .readlines rather than pandas' .to_csv.
The comparison code is attached below.
import matplotlib.pyplot as plt
import re
import time
tl = {}
!mkdir tryio
cd tryio
# pickle
tl[('write','pickle')] = []
tl[('read','pickle')] = []
import pickle
for i in range(50):
    pkl_file = open('data.pkl','wb')
    tt = time.time()
    pickle.dump(c,pkl_file)
    tl[('write','pickle')].append(time.time()-tt)
    pkl_file.close()
    pkl_file = open('data.pkl','rb')
    tt = time.time()
    b = pickle.load(pkl_file)
    tl[('read','pickle')].append(time.time()-tt)
    pkl_file.close()
plt.plot(tl[('write','pickle')])
plt.plot(tl[('read','pickle')])
tmp = !powershell dir
tl[('byte','pickle')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','pickle')] = 1
!rm -rf *.*
# txt
tl[('write','txt')] = []
tl[('read','txt')] = []
for i in range(3):
    csv_file = open('data.txt','wb')
    csv_file.write(b'Date,Int0,Int1,Float0,Float1\n')
    ca = c.values
    tt = time.time()
    for x in ca: csv_file.write(('%s,%g,%g,%.17f,%.17f\n'%tuple(x)).encode())
    tl[('write','txt')].append(time.time()-tt)
    csv_file.close()
    csv_file = open('data.txt','rb')
    tt = time.time()
    b = csv_file.readlines()
    tl[('read','txt')].append(time.time()-tt)
    csv_file.close()
plt.plot(tl[('write','txt')])
plt.plot(tl[('read','txt')])
tmp = !powershell dir
tl[('byte','txt')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','txt')] = 3
!rm -rf *.*
# sql
tl[('write','sql')] = []
tl[('read','sql')] = []
import sqlite3 as sq3
for i in range(3):
    query = 'create table data (Date date, Int0 int, Int1 int, Float0 float, Float1 float)'
    !rm -rf data.db
    con = sq3.connect('data.db')
    con.execute(query)
    con.commit()
    tt = time.time()
    for x in ca: con.execute('insert into data values(?,?,?,?,?)',(str(x[0]),x[1],x[2],x[3],x[4]))
    tl[('write','sql')].append(time.time()-tt)
    con.commit()
    pointer = con.execute('select * from data')
    tt = time.time()
    da = pointer.fetchall()
    tl[('read','sql')].append(time.time()-tt)
    da = np.array(da)
    con.close()
# import pandas.io.sql as pds
# con = sq3.connect('data.db')
# %time data = pds.read_sql('select * from data',con)
# con.close()
plt.plot(tl[('write','sql')])
plt.plot(tl[('read','sql')])
tmp = !powershell dir
tl[('byte','sql')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','sql')] = 3
!rm -rf *.*
# pdHDF5
tl[('write','pdHDF5')] = []
tl[('read','pdHDF5')] = []
for i in range(50):
    h5s = pd.HDFStore('data.h5s','w')
    tt = time.time()
    h5s['data'] = c
    tl[('write','pdHDF5')].append(time.time()-tt)
    h5s.close()
    h5s = pd.HDFStore('data.h5s','r')
    tt = time.time()
    f = h5s['data']
    tl[('read','pdHDF5')].append(time.time()-tt)
    h5s.close()
plt.plot(tl[('write','pdHDF5')])
plt.plot(tl[('read','pdHDF5')])
tmp = !powershell dir
tl[('byte','pdHDF5')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','pdHDF5')] = 1
!rm -rf *.*
# pdcsv
tl[('write','pdcsv')] = []
tl[('read','pdcsv')] = []
for i in range(3):
    tt = time.time()
    c.to_csv('data.csv',index=False)
    tl[('write','pdcsv')].append(time.time()-tt)
    tt = time.time()
    b = pd.read_csv('data.csv')
    tl[('read','pdcsv')].append(time.time()-tt)
plt.plot(tl[('write','pdcsv')])
plt.plot(tl[('read','pdcsv')])
tmp = !powershell dir
tl[('byte','pdcsv')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','pdcsv')] = 1
!rm -rf *.*
# pdexcel
tl[('write','pdexcel')] = []
tl[('read','pdexcel')] = []
for i in range(3):
    tt = time.time()
    c.iloc[:int(n/10),:].to_excel('data.xlsx',index=False)
    tl[('write','pdexcel')].append(time.time()-tt)
    tt = time.time()
    g = pd.read_excel('data.xlsx')
    tl[('read','pdexcel')].append(time.time()-tt)
plt.plot(tl[('write','pdexcel')])
plt.plot(tl[('read','pdexcel')])
tmp = !powershell dir
tl[('byte','pdexcel')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','pdexcel')] = 1
!rm -rf *.*
# npsave
tl[('write','npsave')] = []
tl[('read','npsave')] = []
for i in range(50):
    tt = time.time()
    np.save('data',d)
    tl[('write','npsave')].append(time.time()-tt)
    tt = time.time()
    ea = np.load('data.npy')
    tl[('read','npsave')].append(time.time()-tt)
plt.plot(tl[('write','npsave')])
plt.plot(tl[('read','npsave')])
tmp = !powershell dir
tl[('byte','npsave')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','npsave')] = 2
!rm -rf *.*
# pytable
tl[('write','pytable')] = []
tl[('read','pytable')] = []
import tables as tb
for i in range(50):
    h5 = tb.open_file('data.h5','w')
    filters = tb.Filters(complevel=0)
    tt = time.time()
    tab = h5.create_table('/','data',dl,title='data',expectedrows=n,filters=filters)
    tl[('write','pytable')].append(time.time()-tt)
    h5.close()
    h5 = tb.open_file('data.h5','r')
    tt = time.time()
    h = h5.get_node('/','data').read()
    tl[('read','pytable')].append(time.time()-tt)
    h = pd.DataFrame(h)
    h['Date'] = h['Date'].astype('datetime64[m]')
    h5.close()
plt.plot(tl[('write','pytable')])
plt.plot(tl[('read','pytable')])
tmp = !powershell dir
tl[('byte','pytable')] = int(re.findall(r' ([0-9]+) ',' '.join(tmp))[0])
tl[('easy','pytable')] = 2
!rm -rf *.*
# comparison
ptl = pd.Series(tl)
# collapse the lists of repeated timings to their medians
ptl.loc[['write','read']] = ptl[['write','read']].apply(np.median)
# report file size in MB
ptl.loc[['byte']] = ptl[['byte']]/1000**2
# rows: methods; columns: byte / easy / read / write
ptl = ptl.unstack(level=0)
ptl.drop('pdexcel',inplace=True)
ptl.sort_values(by=['easy','read'])
cd ..
!rm -rf tryio