[Ceph] Notes on Installing and Configuring a Ceph Cluster on VMware Workstation Virtual Machines


1. Introduction

This article documents the process of building a three-node Ceph cluster.

I spent the last couple of days working with Ceph in preparation for building a NAS backed by Ceph storage. I followed several online tutorials and cross-checked them against a book, but still ran into quite a few problems while installing and configuring Ceph, so I am writing this post to record the issues I hit and the installation process, and hopefully to help others set up their own Ceph cluster.

Deployment in this article is done with the official ceph-deploy tool.

2. Installation Environment

Linux version (minimal install)

[root@node1 ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)

Ceph version: Giant

[root@node1 ~]# ceph -v
ceph version 0.87.2 (87a7cec9ab11c677de2ab23a7668a77d2f5b955e)

VMware Workstation version

Network (bridged to the physical host)

node1 192.168.68.11

node2 192.168.68.12

node3 192.168.68.13

Gateway 192.168.68.1

3. Installation Steps

3.1 Creating the Virtual Machines in VMware Workstation

3.2 CentOS 7 Installation

Set up node2 and node3 the same way, changing the 1 to 2 or 3 where it appears (or use VMware Workstation's clone feature; not demonstrated here).

Add three additional disks to each of the three node VMs; a quick check is shown below.
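Once the disks are attached, you can confirm inside each guest that they are visible. This assumes they show up as sdb, sdc and sdd, which is how they are referenced in the OSD steps later on:

lsblk -d -o NAME,SIZE,TYPE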

3.3 VM Configuration Before Installing Ceph (required on all three nodes)

Configure hostname resolution in /etc/hosts

vi /etc/hosts
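Each node needs to resolve all three hostnames. Based on the addresses listed in section 2, /etc/hosts on every node should contain entries like:

192.168.68.11 node1
192.168.68.12 node2
192.168.68.13 node3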

Configure passwordless SSH from node1 to node2 and node3

ssh-keygen

ssh-copy-id node2

ssh-copy-id node3
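As an optional check (not one of the required steps), confirm that node1 can log in without a password prompt:

ssh node2 hostname
ssh node3 hostname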

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

setenforce 0
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
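Optionally, verify the change; after setenforce 0 this should report Permissive (and Disabled after the next reboot, thanks to the config edit):

getenforce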

Install and configure the NTP time service

yum install -y ntp ntpdate
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
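To check that the daemon is talking to its upstream time servers (ntpq ships with the ntp package), you can run:

ntpq -p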

Add the Ceph Giant release repository and update yum

rpm -Uhv ...
vi /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=...

[Ceph-noarch]
name=Ceph noarch packages
baseurl=...

[ceph-source]
name=Ceph source packages
baseurl=...

yum -y update
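The repository URLs are left as gaps above. Purely as an illustration (the baseurl values below are assumptions based on the standard layout of the Ceph Giant package archive, so verify them against the mirror you actually use), a complete ceph.repo might look like this:

[Ceph]
name=Ceph packages for $basearch
# baseurl is an assumption; adjust to your mirror
baseurl=http://download.ceph.com/rpm-giant/el7/$basearch
enabled=1
gpgcheck=0

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-giant/el7/noarch
enabled=1
gpgcheck=0

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-giant/el7/SRPMS
enabled=1
gpgcheck=0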

3.4 Installing and Configuring Ceph

Taking snapshots of all three VMs before this step is strongly recommended.

3.4.1 Creating the Ceph Cluster on node1

Install ceph-deploy

yum install ceph-deploy -y

Create a Ceph cluster

mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new node1
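Running ceph-deploy new writes an initial ceph.conf into the current directory. It should look roughly like the sketch below (the fsid is generated per cluster, and the exact set of lines varies with the ceph-deploy version; the values here reuse the addresses from this setup):

[global]
fsid = 5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4
mon_initial_members = node1
mon_host = 192.168.68.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx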

From node1, install Ceph on every node

ceph-deploy install node1 node2 node3

Check the installed Ceph version

ceph -v

Create the first Ceph monitor on node1

ceph-deploy mon create-initial

Check the cluster status

ceph -s

Create OSDs on node1 (run from the /etc/ceph directory)

List the available disks

ceph-deploy disk list node1

Erase the partition tables and contents of the selected disks

ceph-deploy disk zap node1:sdb node1:sdc node1:sdd

Create the OSDs

ceph-deploy osd create node1:sdb node1:sdc node1:sdd

Check the cluster status

ceph -s

At this point the single-node setup is complete.

3.4.2 Expanding the Ceph Cluster

On node1, add the public network address to /etc/ceph/ceph.conf

public network = 192.168.68.0/24

Create monitors on node2 and node3

ceph-deploy mon create node2
ceph-deploy mon create node3

Check the cluster status

ceph -s

At this point the other two nodes have joined the cluster.

Prepare the disks on node2 and node3 and create their OSDs

ceph-deploy disk zap node2:sdb node2:sdc node2:sdd
ceph-deploy disk zap node3:sdb node3:sdc node3:sdd

ceph-deploy osd create node2:sdb node2:sdc node2:sdd
ceph-deploy osd create node3:sdb node3:sdc node3:sdd

Adjust the pg_num and pgp_num values of the rbd pool so that the cluster reaches HEALTH_OK

ceph osd pool set rbd pg_num 256
ceph osd pool set rbd pgp_num 256
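As an optional check, you can read the values back to confirm they took effect:

ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num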

Check the cluster status again; if it reports HEALTH_OK, the cluster is fully built and healthy.

4. Errors Encountered

Cause: the ceph.conf file in the ceph working directory was modified, but the updated file was never pushed to the other nodes, so the configuration has to be pushed out first:

ceph-deploy --overwrite-conf config push node2

ceph-deploy --overwrite-conf mon create node2
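node3 can run into the same problem; if it does, the same push-then-create sequence applies (shown here for node3 by analogy):

ceph-deploy --overwrite-conf config push node3
ceph-deploy --overwrite-conf mon create node3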

5. Common Commands

Check the Ceph cluster status

ceph -s

ceph status

[root@node1 ceph]# ceph -s
    cluster 5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4
     health HEALTH_OK
     monmap e3: 3 mons at {node1=192.168.68.11:6789/0,node2=192.168.68.12:6789/0,node3=192.168.68.13:6789/0}, election epoch 4, quorum 0,1,2 node1,node2,node3
     osdmap e53: 9 osds: 9 up, 9 in
      pgmap v122: 256 pgs, 1 pools, 0 bytes data, 0 objects
            318 MB used, 134 GB / 134 GB avail
                 256 active+clean

Check the Ceph version

ceph -v

[root@node1 ceph]# ceph -v
ceph version 0.87.2 (87a7cec9ab11c677de2ab23a7668a77d2f5b955e)

Watch the cluster health in real time

ceph -w

[root@node1 ceph]# ceph -w
    cluster 5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4
     health HEALTH_OK
     monmap e3: 3 mons at {node1=192.168.68.11:6789/0,node2=192.168.68.12:6789/0,node3=192.168.68.13:6789/0}, election epoch 4, quorum 0,1,2 node1,node2,node3
     osdmap e53: 9 osds: 9 up, 9 in
      pgmap v122: 256 pgs, 1 pools, 0 bytes data, 0 objects
            318 MB used, 134 GB / 134 GB avail
                 256 active+clean

2022-04-25 10:27:09.830678 mon.0 [INF] pgmap v122: 256 pgs: 256 active+clean; 0 bytes data, 318 MB used, 134 GB / 134 GB avail

Check the Ceph monitor quorum status

ceph quorum_status --format json-pretty

[root@node1 ceph]# ceph quorum_status --format json-pretty
{ "election_epoch": 4,
  "quorum": [
        0,
        1,
        2],
  "quorum_names": [
        "node1",
        "node2",
        "node3"],
  "quorum_leader_name": "node1",
  "monmap": { "epoch": 3,
      "fsid": "5e563d2f-94e6-4d9b-8aaf-6b2c76f856e4",
      "modified": "2022-04-25 10:06:48.209985",
      "created": "0.000000",
      "mons": [
            { "rank": 0,
              "name": "node1",
              "addr": "192.168.68.11:6789\/0"},
            { "rank": 1,
              "name": "node2",
              "addr": "192.168.68.12:6789\/0"},
            { "rank": 2,
              "name": "node3",
              "addr": "192.168.68.13:6789\/0"}]}}

List PGs

ceph pg dump
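ceph pg dump prints every placement group, which gets long; for a one-line summary, ceph pg stat is a lighter alternative:

ceph pg stat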

List the Ceph storage pools

ceph osd lspools

[root@node1 ceph]# ceph osd lspools
0 rbd,

List the cluster's authentication keys

ceph auth list

[root@node1 ceph]# ceph auth list
installed auth entries:

osd.0
        key: AQA7GGViMOKvBhAApRSMC8DDLnlOQXmAD7UUDQ==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQBEGGVioIu6IxAAtFI6GkzHH86f5DbZcFLP+Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQBMGGViMHbdKxAAlahPljoMpYC5gRoJBPwmcg==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.3
        key: AQB9BWZiwNzrOhAADsHBX/QZgBgZ/5SbJ9wFlg==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.4
        key: AQCHBWZiECT2IBAApALFn7F7IDMW/ctkL8BAsA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.5
        key: AQCPBWZisNn3NxAAiDcZGUPWY+e3lflW+7c6AQ==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.6
        key: AQCdBWZiQLATHxAAc7z2NE3FmFUx28dIXeHN2g==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.7
        key: AQCnBWZiIPh+CRAA/hDBfG/iwChiNcVvB4lw2Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.8
        key: AQCvBWZiQE9vARAAroqxul/dQRsDnN7Cz9pkAA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQCRF2ViOAyLMBAAip1+6gqV2wJmxYUlrBzdFQ==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQCSF2ViiMcjChAAncKTNo7o5sGaKFvoJHEFmA==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQCSF2ViUB/CABAAPXRQtcShw39kI6xYr51Cdw==
        caps: [mon] allow profile bootstrap-osd

Check cluster usage

ceph df

[root@node1 ceph]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    134G     134G      318M         0.23
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0      0        0         45938M        0

Check the OSD CRUSH map

ceph osd tree

[root@node1 ceph]# ceph osd tree
# id    weight     type name       up/down reweight
-1      0.08995    root default
-2      0.02998            host node1
0       0.009995                   osd.0   up      1
1       0.009995                   osd.1   up      1
2       0.009995                   osd.2   up      1
-3      0.02998            host node2
3       0.009995                   osd.3   up      1
4       0.009995                   osd.4   up      1
5       0.009995                   osd.5   up      1
-4      0.02998            host node3
6       0.009995                   osd.6   up      1
7       0.009995                   osd.7   up      1
8       0.009995                   osd.8   up      1

View the OSD blacklist

ceph osd blacklist ls

[root@node1 ceph]# ceph osd blacklist ls
listed 0 entries

(The author's knowledge is limited; if there are mistakes in this article, corrections are welcome.)
