Background:
The DMP project writes 20-30 MB of data per second at peak, sustained for roughly two hours. Because of MongoDB's performance limits, query efficiency was very low, so we considered replacing the DMP's Mongo warehouse with Hive.
Monday: set up Hive in the production environment and debugged it.
Pitfall: to integrate with Mongo, a few extra jars are needed; they must be placed under $hive/lib and $hadoop/share/hadoop/yarn/lib:
mongo-hadoop-core
mongo-hadoop-hive
mongo-java-driver
json-serde
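A quick sanity check can confirm the jars actually landed in a given lib directory. This is a convenience sketch, not part of the original workflow; the jar name prefixes match the list above, but the paths you pass in depend on your environment.

```python
import glob
import os

# Jar name prefixes required for the Hive <-> Mongo integration (versions vary).
REQUIRED = ["mongo-hadoop-core", "mongo-hadoop-hive", "mongo-java-driver", "json-serde"]

def missing_jars(lib_dir, required=REQUIRED):
    """Return the required jar prefixes that have no matching jar in lib_dir."""
    return [name for name in required
            if not glob.glob(os.path.join(lib_dir, name + "*.jar"))]

# Usage (paths are assumptions; substitute your $hive/lib and yarn/lib dirs):
# for d in ["/opt/hive/lib", "/opt/hadoop/share/hadoop/yarn/lib"]:
#     print(d, missing_jars(d))
```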
Tuesday: export the full result set from the DMP Mongo database.
Pitfalls:
1) The ops team's export of the Mongo result collections was extremely slow, estimated to take 2-3 days.
Instead of exporting from the Mongo result database, we modified the DMP Spark job: the step that used to write to Mongo now writes to HDFS instead. We re-ran the DMP job; because intermediate data from earlier runs already existed, it could be read and reprocessed directly, taking about 1.5 hours.
2) Converting the RDD to JSON
Used the Jackson library. Note that ObjectMapper lives in jackson-databind (which pulls in jackson-core transitively), so the dependency should be:
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.4.4</version>
</dependency>
// Put each K,V pair into a java.util.HashMap; if V is an array,
// convert it to a java.util.ArrayList so Jackson serializes it as a JSON array.
import com.fasterxml.jackson.databind.ObjectMapper

val mapper = new ObjectMapper()
val maps = new java.util.HashMap[String, java.lang.Object]()
maps.put("uuid", s._1)
// Traverse the record and load each K,V pair into the map (via maps.put),
// delegating to the custom FormatUser function.
s._2.foreach(v => FormatUser(maps, titlesets, v))
val jstring = mapper.writeValueAsString(maps)
// Printing jstring directly yields the JSON string.
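The same idea in a minimal Python sketch (field names are illustrative): each user becomes one flat dict, array-valued fields stay lists so they serialize as JSON arrays, and each dict becomes one JSON line, which is the one-object-per-line layout the JsonSerDe external table expects.

```python
import json

def to_json_line(uuid, fields):
    """Flatten one user's (key, value) pairs into a single JSON line.

    Array-valued fields are kept as Python lists so they serialize
    as JSON arrays (matching Hive ARRAY<STRING> columns).
    """
    record = {"uuid": uuid}
    record.update(fields)
    return json.dumps(record, separators=(",", ":"))

# Illustrative record; real ones carry up to 149 fields.
line = to_json_line("u123", {"isreg": 1, "province": ["guangdong"]})
```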
3) Importing into Hive and creating the external table
The Mongo data is not very uniform; there are 149 key fields.
Hive table DDL:
create external table if not exists user_profile_dmp_all(
uuid STRING,
isreg INT,
isalive INT,
ispaid INT,
isintent INT,
province ARRAY<STRING>,
city ARRAY<STRING>,
online_m INT,
online_pc INT,
online_o INT,
os_win INT,
os_linux INT,
os_mac INT,
os_ios INT,
os_android INT,
os_o INT,
activity INT,
xf_last_time INT,
xf_ut_news INT,
xf_ut_house INT,
xf_ut_regv INT,
xf_ut_paid INT,
xf_ut_act INT,
xf_ubt_91 INT,
xf_ubt_yche INT,
xf_ubt_im INT,
xf_ubt_400 INT,
xf_ubt_ejq INT,
xf_ubt_kft INT,
xf_hp_a INT,
xf_hp_b INT,
xf_hp_c INT,
xf_hp_d INT,
xf_hp_e INT,
xf_hp_f INT,
xf_hp_g INT,
xf_hp_h INT,
xf_hp_i INT,
xf_province ARRAY<STRING>,
xf_city ARRAY<STRING>,
xf_district ARRAY<STRING>,
xf_bt_1 INT,
xf_bt_2 INT,
xf_bt_3 INT,
xf_bt_4 INT,
xf_bt_5 INT,
xf_bt_6 INT,
xf_bt_7 INT,
xf_bt_8 INT,
xf_bt_9 INT,
xf_bt_10 INT,
xf_bt_11 INT,
xf_bt_12 INT,
xf_op_1 INT,
xf_op_2 INT,
xf_op_3 INT,
xf_op_4 INT,
xf_op_5 INT,
xf_ht_1 INT,
xf_ht_2 INT,
xf_ht_3 INT,
xf_ht_4 INT,
xf_ht_5 INT,
xf_ht_6 INT,
xf_ht_7 INT,
xf_ht_8 INT,
xf_ht_9 INT,
xf_ht_10 INT,
xf_ht_11 INT,
xf_ht_12 INT,
xf_fitment_1 INT,
xf_fitment_2 INT,
xf_fitment_3 INT,
xf_fitment_4 INT,
xf_fitment_5 INT,
xf_dt_1 INT,
xf_dt_2 INT,
xf_dt_3 INT,
xf_dt_4 INT,
e_last_time INT,
e_ut_news INT,
e_ut_house INT,
e_ut_reg INT,
e_ut_paid INT,
e_ut_act INT,
e_ubt_im INT,
e_ubt_400 INT,
e_ubt_kft INT,
e_tt_lease INT,
e_tt_sale INT,
e_hp_a INT,
e_hp_b INT,
e_hp_c INT,
e_hp_d INT,
e_hp_e INT,
e_hp_f INT,
e_hp_g INT,
e_hp_h INT,
e_area_a INT,
e_area_b INT,
e_area_c INT,
e_area_d INT,
e_area_e INT,
e_area_f INT,
e_area_g INT,
e_area_h INT,
e_province ARRAY<STRING>,
e_city ARRAY<STRING>,
e_district ARRAY<STRING>,
e_room_0 INT,
e_room_1 INT,
e_room_2 INT,
e_room_3 INT,
e_room_4 INT,
e_room_5 INT,
e_room_6 INT,
e_hall_1 INT,
e_hall_2 INT,
e_hall_3 INT,
e_hall_4 INT,
e_balcony_1 INT,
e_balcony_2 INT,
e_balcony_3 INT,
e_toilet_1 INT,
e_toilet_2 INT,
e_toilet_3 INT,
e_toilet_4 INT,
e_propertype_1 INT,
e_propertype_2 INT,
e_propertype_3 INT,
e_propertype_4 INT,
e_propertype_5 INT,
e_propertype_6 INT,
e_propertype_7 INT,
e_propertype_8 INT,
e_propertype_9 INT,
e_fitment_1 INT,
e_fitment_2 INT,
e_fitment_3 INT,
e_deliverdate_1 INT,
e_deliverdate_2 INT,
e_deliverdate_3 INT,
e_deliverdate_4 INT,
e_deliverdate_5 INT,
ju_last_time INT,
ju_ut_reg INT,
ju_ut_act INT,
ju_ut_paid INT,
ju_ubt_order INT
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
STORED AS TEXTFILE
location '/WareHouse/HiveSource/DMP/user_profile/';
Be sure to create the table as EXTERNAL, so that an accidental DROP TABLE does not delete the underlying data.
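With 149 fields, many of them in numbered runs (xf_bt_1..12, xf_ht_1..12, e_room_0..6, ...), the column list is tedious to write by hand. A small generator can emit the repetitive runs; this is a convenience sketch, not part of the original workflow:

```python
def int_columns(prefix, start, end):
    """Emit 'prefix_N INT' column definitions for N in [start, end]."""
    return [f"{prefix}_{i} INT" for i in range(start, end + 1)]

# A few of the numbered runs from the DDL above.
cols = int_columns("xf_bt", 1, 12) + int_columns("xf_ht", 1, 12) + int_columns("e_room", 0, 6)
ddl_fragment = ",\n".join(cols)
```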
4) Testing query performance
Queries are issued from the JS side.
Hive statement:
select count(1) from user_profile_dmp_all where (online_m = 1 or online_pc = 1 or online_o = 1) and (os_win = 1 or os_linux = 1 or os_mac = 1 or os_ios = 1 or os_android = 1);
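The query counts users seen on at least one device type AND with at least one known OS. The same predicate can be mirrored in Python over sample records (illustrative data, not from the real table) to sanity-check the WHERE clause logic:

```python
ONLINE = ["online_m", "online_pc", "online_o"]
OS = ["os_win", "os_linux", "os_mac", "os_ios", "os_android"]

def matches(row):
    """Mirror of the Hive WHERE clause: any online flag AND any OS flag set."""
    return any(row.get(k) == 1 for k in ONLINE) and any(row.get(k) == 1 for k in OS)

rows = [
    {"online_m": 1, "os_ios": 1},  # counted
    {"online_pc": 1},              # no OS flag -> not counted
    {"os_win": 1},                 # no online flag -> not counted
]
count = sum(1 for r in rows if matches(r))  # -> 1
```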