Ceph FS is a file system with a POSIX interface that stores its data in a Ceph storage cluster. Clients can conveniently mount the file system and use it locally.
Ceph FS is built on top of RADOS and inherits RADOS's fault tolerance and scalability, supporting redundant replicas and highly reliable data.
一、Lab environment
OS: CentOS 7.5 Minimal
cephServer (ceph01): 192.168.1.106  /dev/sda /dev/sdb /dev/sdc
cephClient: 192.168.1.104  /dev/sda
The Ceph in this lab is a single-node setup deployed with ceph-deploy, which means the storage itself is not highly available; it is only meant for experimenting with CephFS.
Later we will build on this setup, turn the storage into a proper cluster, and test Ceph's other storage types.
The Ceph version installed here is: ceph version 12.2.11 luminous (stable)
二、Install and configure cephServer
Change the hostname and add a hostname mapping
# hostnamectl set-hostname ceph01
# echo "192.168.1.106 ceph01" >>/etc/hosts
Partition /dev/sdc; it will serve as the OSD journal disk
# parted -s /dev/sdc "mklabel gpt"
# parted -s /dev/sdc "mkpart primary 0% 100%"
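Before continuing, it is worth a quick sanity check that the journal partition was actually created (this check is my addition, not part of the original steps):
# parted -s /dev/sdc print
# lsblk /dev/sdc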
Set up passwordless SSH login to the local host
# ssh-keygen
# ssh-copy-id root@192.168.1.106
Disable SELinux and firewalld
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld
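If you would rather keep firewalld running, opening the Ceph ports is an alternative to disabling it outright; this is my own hedged suggestion and assumes your firewalld build ships the predefined ceph/ceph-mon service definitions:
# firewall-cmd --permanent --add-service=ceph-mon    # monitor port 6789/tcp
# firewall-cmd --permanent --add-service=ceph        # 6800-7300/tcp for OSD/MGR/MDS daemons
# firewall-cmd --reload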
Add the Ceph yum repository
# vim /etc/yum.repos.d/ceph.repo
####################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
#####################################################
# yum -y install epel-release
# yum clean all
# yum repolist
Install the Ceph components
# yum -y install ceph-deploy
# ceph-deploy --version
# yum -y install ceph-mds ceph-mgr ceph-osd ceph-mon
# mkdir mycluster
# cd mycluster
# ceph-deploy new ceph01
# vim ceph.conf
Add the following fields:
#############################
osd_pool_default_size = 1
osd_pool_default_min_size = 1
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
################################
# ceph-deploy mon create ceph01
# ceph-deploy mon create-initial
# ceph-deploy admin ceph01
# ceph-deploy disk list ceph01
# ceph-deploy disk zap ceph01 /dev/sdb
# ceph-deploy osd create --data /dev/sdb --journal /dev/sdc ceph01
# ceph-deploy mgr create ceph01
# ceph-deploy mds create ceph01
# cd mycluster/
# ll
# lsblk
# ll /etc/ceph/
# systemctl status ceph*
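One caveat on the osd create step above: on Luminous, ceph-deploy creates a BlueStore OSD by default, and in that case the --journal option is not actually used (BlueStore takes --block-db/--block-wal instead). If you specifically want a FileStore OSD journaling to the sdc partition, the invocation would look roughly like this (a hedged sketch, not the command used in this walkthrough):
# ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc1 ceph01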
Create storage pools for CephFS
# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 128
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
# ceph status
# ceph -w
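The pg_num of 128 used above is simply a convenient power of two for a lab box. A common rule of thumb (my addition, not from the original) is roughly (number of OSDs × 100) / replica size for the total PG budget, rounded up to the next power of two:
# osds=1; size=1
# echo $(( osds * 100 / size ))    # ≈ 100, round up to the next power of two -> 128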
Configure the MGR dashboard
# ceph mgr module enable dashboard
# vim /etc/ceph/ceph.conf
Add the following fields:
#######################
[mgr]
mgr modules = dashboard
########################
Set the dashboard's IP address and port
# ceph config-key put mgr/dashboard/server_addr 192.168.1.106
# ceph config-key put mgr/dashboard/server_port 7000
# systemctl restart ceph-mgr@ceph01.service
# systemctl status ceph-mgr@ceph01.service
# ss -tan
Open http://192.168.1.106:7000 in a browser
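If a browser is not handy, you can confirm from the shell that the dashboard is listening (my addition):
# ss -tlnp | grep 7000
# curl -I http://192.168.1.106:7000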
If something goes wrong during deployment that you simply cannot resolve, you can also tear everything down and start over:
# cd mycluster
# ceph-deploy purge ceph01
# ceph-deploy purgedata ceph01
# ceph-deploy forgetkeys
# rm -rf ./*
三、Install and configure cephClient
Disable SELinux
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
There are two ways for a client to mount and use a CephFS directory:
1. the Linux kernel client
2. ceph-fuse
Each has its trade-offs. The kernel client talks to Ceph mostly in kernel space, so it performs better; the downside is that a Luminous (L) CephFS expects the client to support some newer features. ceph-fuse is simpler to use and also supports quotas, but its performance is noticeably worse; on an all-SSD cluster we measured it at roughly half the throughput of the kernel client.
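Before choosing a method, it can help to check which kernel the client runs and whether the ceph kernel module is even available (a quick check of my own, not in the original):
# uname -r
# modinfo ceph | head -n 3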
Method 1: the Linux kernel client
To let clients with older kernels use CephFS, some features have to be disabled manually on the cephServer side.
Fetch the CRUSH map and decompile it:
# ceph osd getcrushmap -o crush.map
# crushtool -d crush.map -o crush.txt
Disable the following two tunables in the CRUSH map (crush.txt):
tunable chooseleaf_vary_r 0
tunable chooseleaf_stable 0
Recompile the CRUSH map and inject it back:
# crushtool -c crush.txt -o newcrush.map
# ceph osd setcrushmap -i newcrush.map
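As an alternative to hand-editing the CRUSH map, the whole tunables profile can be switched in one command; choosing the bobtail profile should leave chooseleaf_vary_r and chooseleaf_stable at 0 (this is my hedged shortcut, not the method used here, and it resets the other tunables to that profile as well):
# ceph osd crush tunables bobtail
# ceph osd crush show-tunables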
# cat /etc/ceph/ceph.client.admin.keyring
A cluster deployed with ceph-deploy has cephx authentication enabled by default, so the mount needs the secret from the keyring, i.e. the "key" value in /etc/ceph/ceph.client.admin.keyring on the cluster's mon node. Using secretfile would avoid exposing the key on the command line, but there is a bug that makes it fail with: libceph: bad option at 'secretfile=/etc/ceph/admin.secret'
Bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1030402
# mount -t ceph 192.168.1.106:6789:/ /mnt -o name=admin,secret=AQDalqxcT2yKOBAAJvtN5ARgpcZLXWN6rdhovw==
# df -hT
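To make the kernel mount persistent across reboots, an /etc/fstab entry along these lines should work (my addition, reusing the same admin secret shown above):
192.168.1.106:6789:/    /mnt    ceph    name=admin,secret=AQDalqxcT2yKOBAAJvtN5ARgpcZLXWN6rdhovw==,noatime,_netdev    0 0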
Method 2: ceph-fuse
Add the Ceph yum repository
# vim /etc/yum.repos.d/ceph.repo
####################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=0
#####################################################
# yum -y install epel-release
# yum clean all
# yum repolist
Install the ceph-fuse components
# yum -y install ceph-fuse
# rpm -ql ceph-fuse
Create the ceph-fuse config directory and copy the cluster files from the server
# mkdir /etc/ceph
# scp root@192.168.1.106:/etc/ceph/ceph.client.admin.keyring /etc/ceph
# scp root@192.168.1.106:/etc/ceph/ceph.conf /etc/ceph
# chmod 640 /etc/ceph/ceph.client.admin.keyring
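Before wiring this into systemd, a one-off manual mount is a quick way to confirm that ceph.conf and the keyring copied above actually work (my addition); unmount it again before starting the service:
# ceph-fuse -m 192.168.1.106:6789 /mnt
# df -hT /mnt
# fusermount -u /mnt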
Create a systemd service file for ceph-fuse
# cp /usr/lib/systemd/system/ceph-fuse@.service /etc/systemd/system/ceph-fuse.service
# vim /etc/systemd/system/ceph-fuse.service
##############################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
########################################################
We mount CephFS under /mnt on the client:
# systemctl daemon-reload
# systemctl start ceph-fuse
# systemctl status ceph-fuse
# df -hT
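To have the mount come back after a reboot, the unit can also be enabled; since the unit file above is WantedBy=ceph-fuse.target, that target needs to be enabled as well (my addition, assuming the ceph-fuse package shipped ceph-fuse.target):
# systemctl enable ceph-fuse.service
# systemctl enable ceph-fuse.target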
Test by writing a large file
# dd if=/dev/zero of=/mnt/test bs=1M count=10000
# df -hT
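Back on ceph01 you can confirm that the data actually landed in the cephfs_data pool (my addition):
# ceph df
# ceph osd pool stats cephfs_data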