Helm is the package manager for Kubernetes clusters; using Helm greatly lowers the barrier to deploying applications on Kubernetes. This article does not cover Helm itself in detail; interested readers can refer to 《Helm 入门指南》 (Helm Getting Started Guide).
Download and Install Helm
Run the following commands in a terminal to install Helm:
$ wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
$ tar -zxvf helm-v3.5.2-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/bin/helm
Check the Helm Version
Run the helm version command to check the installed Helm version. For the setup in this article, the output is:
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
Set Up Local Disks
Perform the following configuration on every machine.
Create the mount directory
$ sudo mkdir -p /mnt/disks
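Before formatting anything, double-check which block devices are your data disks, since mkfs wipes them. Note that /dev/diskA and /dev/diskB in the commands below are placeholders for your actual device names; a quick way to list the candidates:
# List block devices along with any existing filesystems and mountpoints
$ lsblk -f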
Format the data disks
$ sudo mkfs.ext4 /dev/diskA
$ sudo mkfs.ext4 /dev/diskB
Mount the data disks
$ DISKA_UUID=$(blkid -s UUID -o value /dev/diskA)
$ DISKB_UUID=$(blkid -s UUID -o value /dev/diskB)
$ sudo mkdir /mnt/disks/$DISKA_UUID
$ sudo mkdir /mnt/disks/$DISKB_UUID
$ sudo mount -t ext4 /dev/diskA /mnt/disks/$DISKA_UUID
$ sudo mount -t ext4 /dev/diskB /mnt/disks/$DISKB_UUID
$ echo UUID=`sudo blkid -s UUID -o value /dev/diskA` /mnt/disks/$DISKA_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
$ echo UUID=`sudo blkid -s UUID -o value /dev/diskB` /mnt/disks/$DISKB_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
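As a sanity check before moving on, you can confirm that both mounts and their fstab entries are in place. A minimal sketch:
# Both disks should appear with mountpoints under /mnt/disks
$ lsblk -f | grep /mnt/disks
# Re-apply every fstab entry; an error here indicates a bad line
$ sudo mount -a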
Deploy the Local Volume Provisioner
$ curl -L -O https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/archive/v2.4.0.zip
$ unzip v2.4.0.zip
Edit the classes section of sig-storage-local-static-provisioner-2.4.0/helm/provisioner/values.yaml:
change hostDir: /mnt/fast-disks to hostDir: /mnt/disks, and uncomment # storageClass: true so it reads storageClass: true (this lets the chart create the StorageClass for you). You can make the two edits in any editor, or script them as shown below.
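A minimal sketch of scripting those edits with sed, assuming the archive unpacked to sig-storage-local-static-provisioner-2.4.0 (the default directory name for a GitHub release zip):
# Point the discovery directory at the disks mounted earlier
$ sed -i 's|hostDir: /mnt/fast-disks|hostDir: /mnt/disks|' sig-storage-local-static-provisioner-2.4.0/helm/provisioner/values.yaml
# Uncomment storageClass: true so the chart creates the StorageClass
$ sed -i 's|# storageClass: true|storageClass: true|' sig-storage-local-static-provisioner-2.4.0/helm/provisioner/values.yaml
Then run: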
$ helm install local-volume-provisioner --namespace default sig-storage-local-static-provisioner-2.4.0/helm/provisioner
# Check the local-volume-provisioner deployment
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
local-volume-provisioner default 1 2021-02-10 11:06:34.3540341 +0800 CST deployed provisioner-2.4.0 2.4.0
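You can also confirm that the provisioner discovered the disks mounted under /mnt/disks; it should create one PersistentVolume per disk. A quick check (the StorageClass name depends on the name field in values.yaml):
# One local PV is expected per mounted disk
$ kubectl get pv
# The StorageClass created by the chart
$ kubectl get storageclass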
Deploy the nebula Cluster
Download the nebula helm chart
# Download the nebula chart
$ helm repo add nebula-charts https://vesoft-inc.github.io/nebula-docker-compose
$ helm pull nebula-charts/nebula
$ tar -zxvf nebula-v2.0.0.tgz
Label the Kubernetes Nodes
Below is the list of Kubernetes nodes. We need to add a scheduling label to the worker nodes: 192.168.0.2, 192.168.0.3, and 192.168.0.4 each get the label nebula: "cloud".
Server IP     Kubernetes role  nodeName
192.168.0.1   master           192.168.0.1
192.168.0.2   node             192.168.0.2
192.168.0.3   node             192.168.0.3
192.168.0.4   node             192.168.0.4
The commands are as follows:
$ kubectl label node 192.168.0.2 nebula="cloud" --overwrite
$ kubectl label node 192.168.0.3 nebula="cloud" --overwrite
$ kubectl label node 192.168.0.4 nebula="cloud" --overwrite
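To verify the labels took effect, you can filter the node list by that label; a quick sketch:
# Only the three labeled worker nodes should be listed
$ kubectl get nodes -l nebula=cloud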
Adjust the values Defaults
The nebula helm chart directory layout is as follows:
nebula
├── Chart.yaml
├── README.md
├── templates
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── NOTES.txt
│   ├── pdb.yaml
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── statefulset.yaml
└── values.yaml
1 directory, 11 files
You can modify the defaults in nebula/values.yaml to suit your environment.
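To review every tunable key before editing, you can dump the chart's defaults to the terminal, assuming the chart was extracted to ./nebula as shown in the tree above:
# Print the chart's default values.yaml to stdout
$ helm show values ./nebula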
Install nebula via helm
$ helm install nebula ./nebula
# Check the deployment status
$ helm status nebula
NAME: nebula
LAST DEPLOYED: Fri Feb 19 12:58:16 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Nebula Graph Cluster installed!
1. Watch all containers come up.
$ kubectl get pods --namespace=default -l app.kubernetes.io=nebula -w
# Check the nebula deployment on the Kubernetes cluster
$ kubectl get pods --namespace=default -l app.kubernetes.io=nebula
NAME READY STATUS RESTARTS AGE
nebula-graphd-676cfcf797-4q7mk 1/1 Running 0 6m
nebula-graphd-676cfcf797-whwqp 1/1 Running 0 6m
nebula-graphd-676cfcf797-zn5l6 1/1 Running 0 6m
nebula-metad-0 1/1 Running 0 6m
nebula-metad-1 1/1 Running 0 6m
nebula-metad-2 1/1 Running 0 6m
nebula-storaged-0 1/1 Running 0 6m
nebula-storaged-1 1/1 Running 0 6m
nebula-storaged-2 1/1 Running 0 6m
Access the nebula cluster:
$ kubectl get service nebula-graphd
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nebula-graphd NodePort 10.105.47.116 <none> 9669:31646/TCP,19669:30554/TCP,19670:32386/TCP 22m
# Test with nebula-console
$ docker run --rm -ti --entrypoint=/bin/sh vesoft/nebula-console:v2-nightly
# Connect to the graphd component through its NodePort
/ $ nebula-console -addr 192.168.0.4 -port 31646 -u root -p vesoft
2021/02/19 05:04:55 [INFO] connection pool is initialized successfully
Welcome to Nebula Graph v2.0.0-rc1!
(root@nebula) [(none)]> show hosts;
+---------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| Host | Port | Status | Leader count | Leader distribution | Partition distribution |
+---------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-0.nebula-storaged.default.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+---------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-1.nebula-storaged.default.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+---------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "nebula-storaged-2.nebula-storaged.default.svc.cluster.local" | 9779 | "ONLINE" | 0 | "No valid partition" | "No valid partition" |
+---------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
| "Total" | | | 0 | | |
+---------------------------------------------------------------+------+----------+--------------+----------------------+------------------------+
Got 4 rows (time spent 2608/4258 us)
FAQ
How do I set up a Kubernetes cluster?
To set up a highly available Kubernetes cluster, refer to the community documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
How do I adjust the nebula cluster's deployment parameters?
When running helm install, use --set to set deployment parameters, overriding the corresponding variables in the chart's values.yaml. See the documentation: https://helm.sh/docs/intro/using_helm/
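For example, a hypothetical override of the graphd replica count at install time; the key replication.graphd.replicas is illustrative only, so check the chart's values.yaml for the actual key names:
# --set values take precedence over the defaults in values.yaml
$ helm install nebula ./nebula --set replication.graphd.replicas=4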
Can it also deploy nebula v1.0.0+ clusters?
v1.0.0+ does not support internal DNS resolution, so the pods must use the host network and the metad endpoints must be listed explicitly. Modify nebula/values.yaml as follows:
hostNetwork: true
metadEndpoints:
- 192.168.0.2:9559
- 192.168.0.3:9559
- 192.168.0.4:9559