I. Background
1. The target environment is an isolated intranet with no Internet access. Ceph has to be set up there to provide the CephFS file system to a distributed cluster.
2. The installation should be automatable with a shell script or an Ansible playbook, without using the ceph-deploy tool.
The plan is to download the Ceph packages and all of their dependencies in one pass on an Internet-connected staging machine, write the installation scripts, and then build a local yum repository on the target machines so Ceph can be installed completely offline.
In this article we first build the local repository and walk through a manual installation on the target machines.
II. Environment
OS: CentOS 7.5 Minimal
Internet-connected staging machine: 192.168.1.101
cephServer(node01): 192.168.1.103
cephServer(node01) data disk: /dev/sdb, 100 GB
cephClient: 192.168.1.106
III. Download the Ceph packages and their dependencies on the Internet-connected machine
Add a yum repository pointing to a Ceph Luminous mirror
# vi /etc/yum.repos.d/ceph.repo
##################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
##################################################
# yum clean all
# yum repolist
# yum list all |grep ceph
# yum -y install epel-release
# yum -y install yum-utils
# yum -y install createrepo
# mkdir /root/cephDeps
# repotrack ceph ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw -p /root/cephDeps
# createrepo -v /root/cephDeps
# tar -zcf cephDeps.tar.gz /root/cephDeps
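Since the stated goal is a scripted installation, these download steps can themselves be wrapped into a small script on the staging machine. A minimal sketch, assuming the ceph.repo above is already in place and yum-utils/createrepo are installed (the script name fetch_ceph_deps.sh and the variable names are my own):
##################################################
#!/bin/bash
# fetch_ceph_deps.sh - download the Ceph packages plus their full
# dependency closure and bundle them for offline use.
set -e
OUTDIR=/root/cephDeps
PKGS="ceph ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw"

mkdir -p "$OUTDIR"
repotrack $PKGS -p "$OUTDIR"        # pull the packages and all dependencies
createrepo -v "$OUTDIR"             # generate repodata so yum can use the directory
tar -zcf /root/cephDeps.tar.gz -C /root cephDeps
##################################################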
IV. Build the local yum repository on cephServer(node01)
Copy cephDeps.tar.gz to the cephServer(node01) machine
# tar -zxf cephDeps.tar.gz
# vim build_localrepo.sh
##################################################
#!/bin/bash
parent_path=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
cd "$parent_path"
# back up the existing repo files so only the local repo stays active
mkdir -p /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
# create the local repository under /tmp/localrepo
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[localrepo]
name=Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
##################################################
# sh -x build_localrepo.sh
# yum repolist
V. Install single-node Ceph offline on cephServer(node01)
Set SELinux to permissive
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Configure the firewall and open the Ceph ports
# systemctl start firewalld
# systemctl enable firewalld
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload
Install the Ceph components from the local yum repository
# yum -y install ceph ceph-mds ceph-mgr ceph-osd ceph-mon
# yum list installed | grep ceph
# ll /etc/ceph/
# ll /var/lib/ceph/
Configure the Ceph components
Generate a cluster ID (fsid)
# uuidgen
uuidgen prints a UUID, for example ee741368-4233-4cbc-8607-5d36ab314dab
Create the main Ceph configuration file
# vim /etc/ceph/ceph.conf
######################################
[global]
fsid = ee741368-4233-4cbc-8607-5d36ab314dab
mon_initial_members = node01
mon_host = 192.168.1.103
mon_max_pg_per_osd = 300
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_journal_size = 1024
osd_crush_chooseleaf_type = 0
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
[mon]
mon allow pool delete = true
###################################
1. Deploy the mon
Create the mon keyring
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# cat /tmp/ceph.mon.keyring
Create the client.admin and bootstrap-osd keyrings
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
# cat /etc/ceph/ceph.client.admin.keyring
# cat /var/lib/ceph/bootstrap-osd/ceph.keyring
Import both keys into the mon keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# cat /tmp/ceph.mon.keyring
Create the monitor map
# monmaptool --create --add node01 192.168.1.103 --fsid ee741368-4233-4cbc-8607-5d36ab314dab /tmp/monmap
Create the mon data directory and initialize the mon
# mkdir /var/lib/ceph/mon/ceph-node01
# chown -R ceph:ceph /var/lib/ceph/
# chown ceph:ceph /tmp/monmap /tmp/ceph.mon.keyring
# sudo -u ceph ceph-mon --mkfs -i node01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# ll /var/lib/ceph/mon/ceph-node01/
Start the mon service
# systemctl start ceph-mon@node01.service
# systemctl enable ceph-mon@node01.service
# systemctl status ceph-mon@node01.service
# ceph -s
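Since the background section calls for a scripted install, the mon bootstrap above lends itself to being collected into one parametrized script. A minimal sketch that simply mirrors the manual steps, assuming the same fsid, hostname and IP as in this walkthrough (the script name deploy_mon.sh is my own):
##################################################
#!/bin/bash
# deploy_mon.sh - bootstrap a single monitor, mirroring the manual steps above
set -e
FSID=ee741368-4233-4cbc-8607-5d36ab314dab
HOST=node01
IP=192.168.1.103

# keyrings for mon, client.admin and bootstrap-osd
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key \
    -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

# monmap, mon data directory, first start
monmaptool --create --add "$HOST" "$IP" --fsid "$FSID" /tmp/monmap
mkdir -p /var/lib/ceph/mon/ceph-$HOST
chown -R ceph:ceph /var/lib/ceph
chown ceph:ceph /tmp/monmap /tmp/ceph.mon.keyring
sudo -u ceph ceph-mon --mkfs -i "$HOST" --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@$HOST.service
systemctl enable ceph-mon@$HOST.service
##################################################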
2. Deploy the osd
cephServer(node01) data disk: /dev/sdb, 100 GB
# lsblk
Create the OSD
# ceph-volume lvm create --data /dev/sdb
# ll /dev/mapper/
# ll /var/lib/ceph/osd/ceph-0/
# ceph auth list
Check the OSD service (ceph-volume lvm create normally activates the OSD already; the commands below just confirm it is enabled and running)
# systemctl start ceph-osd@0.service
# systemctl enable ceph-osd@0.service
# systemctl status ceph-osd@0.service
# ceph -s
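If a node had several data disks, the same step could be scripted as a loop. A minimal sketch (the extra device names /dev/sdc and /dev/sdd are hypothetical):
##################################################
#!/bin/bash
# create one OSD per data disk; ceph-volume also activates each OSD
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    ceph-volume lvm create --data "$dev"
done
##################################################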
3. Deploy the mgr
Create its keyring
# mkdir /var/lib/ceph/mgr/ceph-node01
# ceph auth get-or-create mgr.node01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node01/keyring
# chown -R ceph:ceph /var/lib/ceph/mgr
Start the mgr service
# systemctl start ceph-mgr@node01.service
# systemctl enable ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service
# ceph -s
List the mgr modules
# ceph mgr module ls
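As an optional extra, the Luminous mgr ships a dashboard module that could be enabled at this point; a sketch, assuming the Luminous default dashboard port of 7000 (check ceph mgr services for the actual URL and open the port in firewalld only if you keep the module):
# ceph mgr module enable dashboard
# ceph mgr services
# firewall-cmd --zone=public --add-port=7000/tcp --permanent
# firewall-cmd --reload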
4. Deploy the mds
Create the mds data directory
# mkdir -p /var/lib/ceph/mds/ceph-node01
Create its keyring
# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
Import the key into the cluster
# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
# chown -R ceph:ceph /var/lib/ceph/mds
# ceph auth list
Start the mds service
# systemctl start ceph-mds@node01.service
# systemctl enable ceph-mds@node01.service
# systemctl status ceph-mds@node01.service
# ceph osd tree
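Until a CephFS filesystem is created in the next step, the new MDS will report itself as standby; its state can be checked with:
# ceph mds stat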
5. Create the Ceph pools
A Ceph cluster can contain multiple pools. Each pool is a logical unit of isolation, and different pools can be handled in completely different ways, e.g. replica size, number of placement groups, CRUSH rules, snapshots, ownership, and so on.
For choosing pg_num see https://ceph.com/pgcalc; a worked example of the rule of thumb follows below.
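The rule of thumb behind pgcalc is roughly: total PGs ≈ (number of OSDs × 100) / replica size, rounded up to the next power of two and then shared out across the pools. A minimal sketch of that arithmetic in shell, using the OSD count and osd_pool_default_size of this walkthrough (adjust both for a real cluster):
##################################################
#!/bin/bash
# rough PG estimate following the pgcalc rule of thumb
OSDS=1     # OSDs in this cluster
SIZE=1     # replica count (osd_pool_default_size)
TARGET=$(( OSDS * 100 / SIZE ))
PG=1
while [ "$PG" -lt "$TARGET" ]; do PG=$(( PG * 2 )); done   # round up to a power of two
echo "suggested total pg_num across all pools: $PG"
##################################################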
# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 128
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
# ceph -s
# ceph --show-config | grep mon_max_pg_per_osd
This cluster has very few OSDs, and every pool consumes some PGs, so creating many pools quickly runs into Ceph's per-OSD PG limit (250 PGs per OSD by default here). The limit can be tuned via mon_max_pg_per_osd, but making it far too large or too small can both hurt cluster performance. Raise it in /etc/ceph/ceph.conf (the line is already present in the [global] section shown earlier) and restart the mgr:
# vim /etc/ceph/ceph.conf
################################
mon_max_pg_per_osd = 300
################################
# systemctl restart ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service
# ceph --show-config | grep "mon_max_pg_per_osd"
# ceph osd lspools
Overview of the cephServer node after the services are up: service status, processes, log files, and listening ports
# ll /etc/ceph/
# ll /var/lib/ceph/
# tree /var/lib/ceph/
# cd /var/lib/ceph/
# ll bootstrap-*
VI. Install and configure cephClient
A client can mount and use a CephFS directory in two ways:
1. the Linux kernel client
2. ceph-fuse
Each has its pros and cons. The kernel client does most of its communication with Ceph in kernel space, so it performs better, but the Luminous version of CephFS requires the client kernel to support some newer features. ceph-fuse is simpler to use and also supports quotas, but its performance is worse; in testing on an all-SSD cluster it achieved roughly half the throughput of the kernel client.
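To see whether a given client kernel even ships the CephFS module before choosing, it can be probed directly; a quick sketch:
# modprobe ceph
# modinfo ceph | head -n 3
# uname -r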
Set SELinux to permissive
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Method 1: the Linux kernel client
Get the admin key on the cephServer machine
# cat /etc/ceph/ceph.client.admin.keyring
With cephx authentication enabled (the default, as in a ceph-deploy deployment), the mount needs the admin secret, i.e. the "key" value from /etc/ceph/ceph.client.admin.keyring on the mon node. The secretfile option would avoid exposing the key on the command line, but there is a bug that makes the mount keep failing with: libceph: bad option at 'secretfile=/etc/ceph/admin.secret'
Bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1030402
# mount -t ceph 192.168.1.103:6789:/ /mnt -o name=admin,secret=AQDZRfJcn4i0BRAAAHXMjFmkEZX2oO/ron1mRA==
# mount -l | grep ceph
# df -hT
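To make the kernel mount persistent across reboots, an /etc/fstab entry along these lines can be added (a sketch; it embeds the same admin key as the mount command above, with the same exposure caveat, and the secretfile alternative may hit the bug mentioned earlier):
# vim /etc/fstab
##################################################
192.168.1.103:6789:/  /mnt  ceph  name=admin,secret=AQDZRfJcn4i0BRAAAHXMjFmkEZX2oO/ron1mRA==,noatime,_netdev  0  2
##################################################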
Method 2: ceph-fuse
Set up the local yum repository on cephClient
Copy cephDeps.tar.gz to the cephClient machine and create the same build_localrepo.sh script used in section IV
# tar -zxf cephDeps.tar.gz
# vim build_localrepo.sh
# sh -x build_localrepo.sh
# yum repolist
Install ceph-fuse
# yum -y install ceph-fuse
# rpm -ql ceph-fuse
Create the Ceph configuration directory and copy the config file and keyring from cephServer
# mkdir /etc/ceph
# scp root@192.168.1.103:/etc/ceph/ceph.client.admin.keyring /etc/ceph
# scp root@192.168.1.103:/etc/ceph/ceph.conf /etc/ceph
# chmod 600 /etc/ceph/ceph.client.admin.keyring
Create a systemd service file for ceph-fuse
# cp /usr/lib/systemd/system/ceph-fuse@.service /etc/systemd/system/ceph-fuse.service
# vim /etc/systemd/system/ceph-fuse.service
##############################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
########################################################
This mounts CephFS at /mnt on the client.
# systemctl daemon-reload
# systemctl start ceph-fuse.service
# systemctl enable ceph-fuse.service
# systemctl status ceph-fuse.service
# systemctl start ceph-fuse.target
# systemctl enable ceph-fuse.target
# systemctl status ceph-fuse.target
# df -hT
Test by writing a large file
# dd if=/dev/zero of=/mnt/test bs=1M count=10000
# df -hT
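Back on cephServer, the space consumed by the test file can be cross-checked against the pools:
# ceph df
# ceph -s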
Mounting a CephFS subdirectory
So far the mount source has always been /. Handing the whole filesystem to a single user would be wasteful, so can the filesystem be split into multiple directories, with each user mounting and using only its own directory?
# ceph-fuse --help
Yes. After mounting / as admin, simply create directories under /; they become subtrees of CephFS, and other users, once configured, can mount those subtrees directly (a sketch of such a restricted user follows after the steps below). Concretely:
1. Mount / as admin and create /ceph
# mkdir -p /opt/tmp
# ceph-fuse /opt/tmp
# mkdir /opt/tmp/ceph
# umount /opt/tmp
# rm -rf /opt/tmp
2. Update ceph-fuse.service to mount the subdirectory
# vim /etc/systemd/system/ceph-fuse.service
################################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt -r /ceph
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
###################################################################
# systemctl daemon-reload
# systemctl start ceph-fuse.service
# systemctl enable ceph-fuse.service
# systemctl status ceph-fuse.service
# systemctl start ceph-fuse.target
# systemctl enable ceph-fuse.target
# systemctl status ceph-fuse.target
# df -hT
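To give another user access to only the /ceph subtree instead of reusing client.admin, a path-restricted key can be created on cephServer; a sketch, assuming the ceph fs authorize subcommand available since Luminous (client.cephuser is a hypothetical name):
# ceph fs authorize cephfs client.cephuser /ceph rw
# ceph auth get client.cephuser
The resulting keyring would then be copied to the client and used with ceph-fuse -n client.cephuser together with -r /ceph.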
Overview of the cephClient node after the services are up: service status, processes, log files, and listening ports
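For example, the ceph-fuse process, its log files, and its network connections can be inspected with:
# ps -ef | grep ceph-fuse
# ls -l /var/log/ceph/
# ss -tnp | grep ceph-fuse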
This article has only covered CephFS, the Ceph file system; for the other two storage types, block storage and object storage, please consult the relevant documentation.
VII. References
Ceph basics
https://www.cnblogs.com/zywu-king/p/9064032.html
Offline deployment of Ceph block and object storage on CentOS 7
https://pianzong.club/2018/11/05/install-ceph-offline/
The Ceph distributed file system
https://blog.csdn.net/dapao123456789/article/category/2197933
Initializing disks with ceph-deploy v2.0.0
https://blog.51cto.com/3168247/2088865
Handling the Ceph warning "too many PGs per OSD"
https://www.jianshu.com/p/f2b20a175702
Pool management in Ceph (Luminous)
https://blog.csdn.net/signmem/article/details/78594340
An OSD added to a Ceph cluster stays in the down state
https://blog.51cto.com/xiaowangzai/2173309?source=dra
Single-node Ceph deployment and CephFS usage on CentOS 7.x
https://www.jianshu.com/p/736fc03bd164
Ceph BlueStore and ceph-volume
http://xcodest.me/ceph-bluestore-and-ceph-volume.html
Ceph PGs per Pool Calculator
https://ceph.com/pgcalc
MANUAL DEPLOYMENT
http://docs.ceph.com/docs/master/install/manual-deployment/#manager-daemon-configuration
CEPH-MGR ADMINISTRATOR’S GUIDE
http://docs.ceph.com/docs/master/mgr/administrator/#mgr-administrator-guide
CREATE A CEPH FILESYSTEM
http://docs.ceph.com/docs/master/cephfs/createfs
http://docs.ceph.org.cn/cephfs/createfs
Redhat/MANUALLY INSTALLING RED HAT CEPH STORAGE
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/manually-installing-red-hat-ceph-storage
WHAT IS RED HAT CEPH STORAGE?
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/what_is_red_hat_ceph_storage