HBase Compression Benchmark
Author: sel-fish
Date: 2016/05/16
Background
- Use HBase to store cold data while still providing a realtime search service
- Less disk space is better; to get there, I need to enable compression in HBase and erasure coding in HDFS
- The benchmark results I can find on Google are too old, from 3 or 4 years ago
Compression Algorithm
create 'table1', {NAME => 'cf', COMPRESSION => 'ZLIB'}, {SPLITS => (1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
ERROR: Compression ZLIB is not supported. Use one of LZ4 SNAPPY LZO GZ NONE
When I tried to use ZLIB as the compression algorithm, I got the error above. I'd like to test all of those options eventually, but I chose SNAPPY/GZ/NONE first to get a workable solution ASAP:
- NONE
- SNAPPY
- GZ
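As a quick sanity check before benchmarking, HBase ships a small utility for verifying that a codec actually works on a node; a minimal sketch, using an arbitrary local test path (on a real cluster you would point it at an HDFS path):
# verify that the snappy and gz codecs can write and read back a test file
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/hbase-comp-test snappy
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/hbase-comp-test gz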
First
I used YCSB to insert 1,000,000 rows into 'table1'; each row has 8 fields, and each field is 256 bytes, so every row is 2 KB.
So the total size of raw user data is 1000000 * 2 KB ≈ 2 GB.
The following is my YCSB workload:
# 1,000,000 records, 8 fields x 256 bytes = 2 KB per row
recordcount=1000000
operationcount=5000000
workload=com.yahoo.ycsb.workloads.CoreWorkload
fieldlength=256
fieldcount=8
readallfields=true
# read-only operation mix with a zipfian request distribution
readproportion=1
updateproportion=0
scanproportion=0
insertproportion=0
requestdistribution=zipfian
# target HBase table and column family
columnfamily=cf
table=table1
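The load itself is driven with the standard YCSB client; a command along these lines should reproduce it (the binding name hbase10, the workload file name, and the thread count are my assumptions, so adjust them to your YCSB version and setup):
# load 1,000,000 rows into table1 through the HBase binding
bin/ycsb load hbase10 -P workloads/compression_test -threads 16 -s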
The command I used to create the table with GZ compression:
n_splits = 40
create 'table1', {NAME => 'cf', COMPRESSION => 'GZ'}, {SPLITS => (1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
and with no compression:
n_splits = 40
create 'table1', {NAME => 'cf'}, {SPLITS => (1..n_splits).map {|i| "user#{1000+i*(9999-1000)/n_splits}"}}
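As an aside, the SPLITS expression just spreads 40 split points evenly over the YCSB key range user1000..user9999. Evaluating the first few terms in the HBase shell (which is JRuby) shows what the region boundaries look like:
(1..3).map {|i| "user#{1000+i*(9999-1000)/40}"}
# => ["user1224", "user1449", "user1674"]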
The result I got:
| Compression algorithm | DFS used | Storage amplification factor |
|---|:---:|:---:|
| NONE | 15.21 GB | 7.1 |
| GZ | 12.66 GB | 6.3 |
I was confused by that result: even with 3 replicas in DFS, the amplification is far beyond acceptable.
I wondered whether my dataset is too small and the meta information occupies a lot of space.
The following questions came to mind:
- After I inserted 1M rows with no compression, the DFS usage was only 7 GB, but after I disabled the table and restarted HBase it grew to 15.22 GB.
- After I dropped the table, the usage was not released until some time had passed.
- Why is the storage amplification factor so big?
I hope I can find the answers.
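To see where the space actually goes, the HBase root directory on HDFS can be broken down; a minimal sketch, assuming the default /hbase root dir and the HBase 1.x layout (namespace 'default'):
# per-subdirectory usage: data, archive, WALs, oldWALs, ...
hdfs dfs -du -h /hbase
# just the table's HFiles
hdfs dfs -du -h /hbase/data/default/table1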
Second
After I increased the row count to 10,000,000 the problem was still there, so I started to wonder whether I should stop controlling the splits.
$ du -sh hbase
42G hbase
DFS Used: 124.5 GB (3.46%)
I restarted the test and created the table without pre-splitting:
create 'table1', {NAME => 'cf'}
The problem still exists. Then I tested dd directly on HDFS:
dd if=/dev/zero bs=1024 count=1000000 of=file_1GB
The DFS usage is precisely 3 GB, i.e. the ~1 GB written by dd times 3 replicas, so I think the problem is inside HBase and has nothing to do with DFS.
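To confirm that the 3x comes purely from HDFS replication, the replication factor and size of the file can be queried directly (the path /file_1GB is my assumption about where the dd output ends up on HDFS; adjust it to wherever the fuse mount actually maps):
# %r = replication factor, %b = file size in bytes
hdfs dfs -stat "replication=%r size=%b" /file_1GB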
But after a very long period of time:
$ du -sh hbase
2.4G hbase
DFS Used: 7.13 GB (0.2%)
So maybe I should just wait for a while after the insert?
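Rather than only waiting, the cleanup can probably be nudged along from the HBase shell; a sketch of what I would try (as far as I know, how long oldWALs and archive are kept is also governed by hbase.master.logcleaner.ttl and hbase.master.hfilecleaner.ttl, so those settings are worth checking):
# flush memstores and rewrite HFiles so stale files can be archived and cleaned up
flush 'table1'
major_compact 'table1'
Next, the same test with GZ compression: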
create 'table1', {NAME => 'cf', COMPRESSION => 'GZ'}
Right after inserting the rows, I got the usage:
[fenqi@guomai031119 /home/fenqi/hdfs_mount_point] 13:40
$ du -sh hbase/
6.8G hbase/
[fenqi@guomai031119 /home/fenqi/hdfs_mount_point] 13:40
$ cd hbase/
[fenqi@guomai031119 /home/fenqi/hdfs_mount_point/hbase] 13:40
$ du -sh *
1.5G archive
3.1G data
0 hbase.id
0 hbase.version
7.0K MasterProcWALs
2.1G oldWALs
244M WALs
Then it seems some data moved from 'data' to 'archive':
3.4G archive
1.6G data
0 hbase.id
0 hbase.version
4.0K MasterProcWALs
2.1G oldWALs
244M WALs
$ du -sh hbase/
1.9G hbase/
DFS Used: 5.86 GB (0.16%)
Continuing with SNAPPY:
create 'table1', {NAME => 'cf', COMPRESSION => 'SNAPPY'}
But the disk usage is almost the same as with no compression:
$ du -sh hbase/
2.2G hbase/
DFS Used: 6.85 GB (0.19%)
| Compression algorithm | DFS used | Storage amplification factor |
|---|:---:|:---:|
| NONE | 7.43 GB | 3.7 |
| GZ | 5.86 GB | 2.9 |
| SNAPPY | 6.85 GB | 3.4 |
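For reference, the amplification factor in this table is simply DFS used divided by the ~2 GB of raw user data, e.g. 7.43 GB / 2 GB ≈ 3.7 for the uncompressed case.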