Druid Cluster Setup

The test environment uses three virtual machines with 4 GB of RAM each.
Software used:

mysql, hadoop-2.7.1, kafka_2.11-0.9.0.1, imply-2.2.3 (a Druid distribution that bundles dsql, which lets you query data with SQL much like MySQL, and Pivot, a data-visualization component)
Java 8 or better
Node.js 4.x or better

Master Server (master)

1527 (Derby; not needed if you are using a separate metadata store like MySQL or PostgreSQL)
2181 (ZooKeeper; not needed if you are using a separate ZooKeeper cluster)
8081 (Druid Coordinator)
8090 (Druid Overlord)

# start the master server
bin/supervise -c conf/supervise/master-with-zk.conf

Query Server (slave1)

8082 (Druid Broker)
9095 (Pivot)
# start the query server
bin/supervise -c conf/supervise/query.conf

Data Server (slave2)
8083 (Druid Historical)
8091 (Druid Middle Manager)
8100–8199 (Druid Task JVMs, spawned by Middle Managers)
8200 (Tranquility Server; optional)
# start the data server
bin/supervise -c conf/supervise/data.conf
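As a quick reference, the host/port layout above can be captured in a small lookup table. This is an illustrative sketch only; the hostnames master/slave1/slave2 are the ones used in this article:

```python
# Node/port layout from the sections above, as a lookup table.
SERVICES = {
    "coordinator":   ("master", 8081),
    "overlord":      ("master", 8090),
    "broker":        ("slave1", 8082),
    "pivot":         ("slave1", 9095),
    "historical":    ("slave2", 8083),
    "middlemanager": ("slave2", 8091),
}

def service_url(name: str) -> str:
    """Return the base HTTP URL for one of the cluster's services."""
    host, port = SERVICES[name]
    return f"http://{host}:{port}"

print(service_url("overlord"))  # http://master:8090
```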

Download the distribution

tar -xzf imply-2.2.3.tar.gz
cd imply-2.2.3

[Global common configuration]

vim conf/druid/_common/common.runtime.properties
#
# Extensions
#

druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-caffeine-cache","druid-lookups-cached-global","druid-histogram","druid-datasketches","mysql-metadata-storage","druid-hdfs-storage","druid-kafka-indexing-service"]
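A malformed loadList is a common cause of startup failures: although it lives in a .properties file, the value itself must parse as a JSON array. A quick way to sanity-check it before starting the nodes:

```python
import json

# The loadList value from common.runtime.properties; it must be valid JSON.
raw = ('["druid-caffeine-cache","druid-lookups-cached-global","druid-histogram",'
       '"druid-datasketches","mysql-metadata-storage","druid-hdfs-storage",'
       '"druid-kafka-indexing-service"]')
load_list = json.loads(raw)
print(len(load_list))  # 7
```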

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#
druid.zk.service.host=192.168.31.162
druid.zk.paths.base=/druid

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://192.168.31.162:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=root

#
# Deep storage 
#
# For local disk (only viable in a cluster if this is a network mount):
# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://master:9000/druid/segments

Configure the Master server
Coordinator node:
vim conf/druid/coordinator/jvm.config

-server
-Xms500m
-Xmx500m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log

vim conf/druid/coordinator/runtime.properties

druid.service=druid/coordinator
druid.host=master
druid.port=8081

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S
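Values like PT30S are ISO-8601 durations (30 seconds here). A minimal parser for the simple forms used above, just to make the notation concrete (illustrative only, not Druid code):

```python
import re

def iso_duration_seconds(period: str) -> int:
    """Parse simple ISO-8601 durations such as PT30S or PT5M into seconds."""
    m = re.fullmatch(r"PT(?:(\d+)M)?(?:(\d+)S)?", period)
    if m is None:
        raise ValueError(f"unsupported duration: {period}")
    minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return minutes * 60 + seconds

print(iso_duration_seconds("PT30S"))  # 30
```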

Overlord node:
vim conf/druid/overlord/jvm.config

-server
-Xms500m
-Xmx500m
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

vim conf/druid/overlord/runtime.properties

druid.service=druid/overlord
druid.host=192.168.31.162
druid.port=8090
druid.indexer.queue.startDelay=PT30S
druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata

Configure the Query server
Broker node:
vim conf/druid/broker/jvm.config

-server
-Xms1g
-Xmx1g
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:MaxDirectMemorySize=1g
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

vim conf/druid/broker/runtime.properties

druid.service=druid/broker
druid.host=slave1
druid.port=8082

# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing

# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false
#druid.broker.cache.unCacheable=[]
# Size of the JVM on-heap LRU cache, in bytes
#druid.cache.sizeInBytes=60000000

# SQL
druid.sql.enable=true

# Query config
# How the Broker selects connections to Historical nodes; the two strategies are random and connectionCount
druid.broker.balancer.type=connectionCount
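connectionCount routes each query to the Historical node with the fewest open connections, while random picks one at random. The selection amounts to something like the following (an illustrative sketch, not Druid's actual implementation):

```python
def pick_server(open_connections: dict) -> str:
    """connectionCount strategy: choose the server with the fewest open connections."""
    return min(open_connections, key=open_connections.get)

print(pick_server({"historical-1": 5, "historical-2": 2}))  # historical-2
```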

Pivot configuration
vim conf/pivot/config.yaml

# The port on which the Pivot server will listen on.
port: 9095

# Pivot runtime directory
varDir: var/pivot

settingsLocation:
  location: file
  format: 'json-pretty'
initialSettings:
  clusters:
    - name: druid
      type: druid
      host: localhost:8082

Configure the Data server
Historical node:
vim conf/druid/historical/jvm.config

-server
-Xms1g
-Xmx1g
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:MaxDirectMemorySize=4096m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

vim conf/druid/historical/runtime.properties

druid.service=druid/historical
druid.host=slave2
druid.port=8083

# HTTP server threads
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing

# Segment storage
# Local segment-cache path and its maximum size, in bytes
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
# Maximum storage size, in bytes; used only by the Coordinator when deciding
# which segments to assign to this server
druid.server.maxSize=130000000000
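druid.server.maxSize is advisory: the Coordinator uses it to decide whether this Historical can take another segment, roughly like this (a sketch under assumed semantics, not Druid's actual code):

```python
def can_assign(loaded_bytes: int, segment_bytes: int,
               max_size: int = 130_000_000_000) -> bool:
    """Coordinator-style check: assign a segment only if it still fits under
    the server's advertised druid.server.maxSize."""
    return loaded_bytes + segment_bytes <= max_size

print(can_assign(129_999_999_999, 500_000_000))  # False
```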

# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000

#Tier
# Custom tier name; defaults to _default_tier. Segment data cannot be replicated across tiers.
#druid.server.tier=hot
# Custom tier priority; defaults to 0. Higher values take priority. Used to split hot and cold data tiers.
#druid.server.priority=10

MiddleManager configuration
vim conf/druid/middleManager/jvm.config

-server
-Xms64m
-Xmx64m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

vim conf/druid/middleManager/runtime.properties

druid.service=druid/middlemanager
druid.host=slave2
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true

# HTTP server threads
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=100000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.1"]

Start the services on each of the three nodes

master:
    cd /opt/zookeeper-3.4.6
    bin/zkServer.sh start
    cd /opt/kafka_2.11-0.9.0.1/
    bin/kafka-server-start.sh config/server.properties &
    nohup bin/supervise -c conf/supervise/master-no-zk.conf > master.log &
slave1:
    nohup bin/supervise -c conf/supervise/query.conf > query.log &
slave2:
    nohup bin/supervise -c conf/supervise/data.conf > data.log &

Create a Kafka topic

./bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic wikiticker

Enable Druid Kafka ingestion

curl -XPOST -H'Content-Type: application/json' -d @quickstart/wikiticker-kafka-supervisor.json http://master:8090/druid/indexer/v1/supervisor
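The same submission can be scripted. The sketch below builds (but does not send) the equivalent POST request with Python's standard library, assuming the Overlord address used in this setup:

```python
import json
import urllib.request

OVERLORD = "http://master:8090"  # Overlord address from this article's setup

def supervisor_request(spec: dict, overlord: str = OVERLORD) -> urllib.request.Request:
    """Build the POST that submits a Kafka supervisor spec, equivalent to the
    curl command above (pass the result to urlopen() to actually send it)."""
    return urllib.request.Request(
        overlord + "/druid/indexer/v1/supervisor",
        data=json.dumps(spec).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```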

Load historical data

export KAFKA_OPTS="-Dfile.encoding=UTF-8"
/opt/kafka_2.11-0.9.0.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wikiticker < /opt/imply-2.2.3/quickstart/wikiticker-2016-06-27-sampled.json

Load real-time data

curl -O https://static.imply.io/quickstart/wikiticker-0.4.tar.gz
tar -xzf wikiticker-0.4.tar.gz
cd wikiticker-0.4
bin/wikiticker -J-Dfile.encoding=UTF-8 -out kafka -topic wikiticker

Query data

 bin/dsql
 dsql> SELECT FLOOR(__time TO DAY) AS "Day", SUM("count") AS Edits FROM "wikiticker-kafka" GROUP BY FLOOR(__time TO DAY);

┌──────────────────────────┬───────┐
│ Day                      │ Edits │
├──────────────────────────┼───────┤
│ 2016-06-27T00:00:00.000Z │ 24433 │
│ 2017-03-07T00:00:00.000Z │   642 │
└──────────────────────────┴───────┘
Retrieved 2 rows in 0.04s.
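The query's logic, FLOOR(__time TO DAY) followed by SUM("count"), can be mirrored in plain Python to make the grouping explicit (the sample rows below are made up for illustration):

```python
from collections import defaultdict

def edits_per_day(rows):
    """Equivalent of SELECT FLOOR(__time TO DAY), SUM("count") ... GROUP BY day."""
    totals = defaultdict(int)
    for row in rows:
        day = row["__time"][:10] + "T00:00:00.000Z"  # truncate ISO timestamp to day
        totals[day] += row["count"]
    return dict(totals)

sample = [
    {"__time": "2016-06-27T13:00:00.000Z", "count": 3},
    {"__time": "2016-06-27T14:00:00.000Z", "count": 2},
    {"__time": "2017-03-07T01:00:00.000Z", "count": 1},
]
print(edits_per_day(sample))
# {'2016-06-27T00:00:00.000Z': 5, '2017-03-07T00:00:00.000Z': 1}
```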

Notes:

1. Not enough direct memory.  Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[268,435,456], memoryNeeded[1,342,177,280] = druid.processing.buffer.sizeBytes[268,435,456] * (druid.processing.numMergeBuffers[2] + druid.processing.numThreads[2] + 1)
   If you hit this error, size the direct (off-heap) memory according to the configured number of processing threads and the number and size of the merge buffers.
2. Because the metadata is stored in MySQL, be sure to create the database with UTF-8 encoding:
   CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
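The direct-memory requirement in note 1 is a simple product; a quick sketch to verify a configuration before starting a node:

```python
def required_direct_memory(buffer_size_bytes: int,
                           num_merge_buffers: int,
                           num_threads: int) -> int:
    """Minimum -XX:MaxDirectMemorySize Druid demands, per the error message:
    sizeBytes * (numMergeBuffers + numThreads + 1)."""
    return buffer_size_bytes * (num_merge_buffers + num_threads + 1)

# The values from the error message above:
print(required_direct_memory(268435456, 2, 2))  # 1342177280
```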