Ceph Environment Setup 01

1. Initial environment

Prepare four nodes (set each node's hostname and /etc/hosts entries accordingly; the short names ceph01–ceph04 match the names cephadm reports later in this walkthrough):

ceph01 172.16.172.101
ceph02 172.16.172.102
ceph03 172.16.172.103
ceph04 172.16.172.104
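The /etc/hosts entries for the four nodes can be generated with a short loop instead of typed by hand (a sketch; the IPs follow the list above, and `ceph_hosts.txt` is just a scratch file to review before appending it on each node with `sudo tee -a /etc/hosts`):

```shell
# Sketch: generate the /etc/hosts lines for the four nodes.
# Review ceph_hosts.txt, then on each node run:
#   cat ceph_hosts.txt | sudo tee -a /etc/hosts
gen_ceph_hosts() {
  for i in 1 2 3 4; do
    printf '172.16.172.10%d ceph0%d\n' "$i" "$i"
  done
}
gen_ceph_hosts > ceph_hosts.txt
```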

Run on every node:

sudo apt-get update
sudo apt-get install docker.io

2. Install cephadm on the primary node

# The officially recommended method fails here:
# sudo ./cephadm add-repo --release octopus
# The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.

So add the repository manually instead (this assumes the cephadm script has already been downloaded into the current directory and made executable, as described in the Octopus docs):

sudo rm /etc/apt/trusted.gpg.d/ceph.release.gpg
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update

sudo ./cephadm install

sudo cephadm install ceph-common
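Before bootstrapping, it is worth confirming the required binaries are actually on PATH. A small sketch (`check_cmds` is a hypothetical helper, not part of cephadm):

```shell
# Sketch: report any missing commands; returns non-zero if any are absent.
check_cmds() {
  rc=0
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd"
      rc=1
    fi
  done
  return $rc
}
# On the bootstrap node you would run:
#   check_cmds docker cephadm ceph
```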

3. Bootstrap the cluster

sudo mkdir -p /etc/ceph

sudo cephadm bootstrap --mon-ip 172.16.172.101 --allow-overwrite
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 7bffaaf6-9688-11ea-ac24-080027b4217f
INFO:cephadm:Verifying IP 172.16.172.101 port 3300 ...
INFO:cephadm:Verifying IP 172.16.172.101 port 6789 ...
INFO:cephadm:Mon IP 172.16.172.101 is in CIDR network 172.16.172.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph01...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

URL: https://localhost:8443/
User: admin
Password: mdbewc14gq

INFO:cephadm:You can access the Ceph CLI with:

sudo /usr/sbin/cephadm shell --fsid 7bffaaf6-9688-11ea-ac24-080027b4217f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

4. At this point you can log in to the management dashboard using the URL and credentials printed at the end of the bootstrap output.

5. Edit the configuration file

sudo vi /etc/ceph/ceph.conf

[global]
fsid = 7bffaaf6-9688-11ea-ac24-080027b4217f
mon_initial_members = ceph01
# The generated mon_host entry appears to be broken
# (note the mismatched hostname in the v1 address):
#mon_host = [v2:ceph01:3300/0,v1:ceph:6789/0]
mon_host = 172.16.172.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.16.172.0/24
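If you want to keep the messenger-v2 style address list instead of a bare IP, the bracketed form should reference the same monitor in both entries (a sketch, assuming ceph01 resolves to 172.16.172.101; port 3300 is the msgr2 port, 6789 the legacy msgr1 port):

```
mon_host = [v2:172.16.172.101:3300/0,v1:172.16.172.101:6789/0]
```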

6. Check the Ceph status (the HEALTH_WARN is expected at this stage, since no OSDs have been deployed yet)

sudo ceph status
  cluster:
    id:     7bffaaf6-9688-11ea-ac24-080027b4217f
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph01 (age 35m)
    mgr: ceph01.lreqdw(active, since 33m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown

7. Prepare the three remaining nodes

# On ceph01:
# copy ceph.pub to the other three nodes (ceph02 shown; repeat for ceph03 and ceph04)
scp /etc/ceph/ceph.pub  neohope@ceph02:~/authorized_keys
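Copying the key to each node one at a time can be wrapped in a loop (a sketch; `DRY_RUN` is a hypothetical guard so the commands can be reviewed before anything is copied, and the `neohope` user is assumed to exist on every node):

```shell
# Sketch: distribute the cluster public key to all three nodes.
# With DRY_RUN set, the scp commands are only printed, not executed.
DRY_RUN=1
for host in ceph02 ceph03 ceph04; do
  cmd="scp /etc/ceph/ceph.pub neohope@${host}:~/authorized_keys"
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```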

# On ceph02 (repeat on ceph03 and ceph04):
# unlock the root account
sudo passwd -u root
# install authorized_keys for root (sudo is required to write under /root)
sudo mkdir -p /root/.ssh
sudo mv ~/authorized_keys /root/.ssh/
sudo chown root:root /root/.ssh/authorized_keys
sudo chmod 0600 /root/.ssh/authorized_keys
# allow root to log in over SSH
sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service ssh restart
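The sed substitution above can fail silently if the source line differs; a quick way to sanity-check the expression is to run it against a scratch copy of the expected line first (a sketch; on the node itself you would inspect /etc/ssh/sshd_config directly):

```shell
# Sketch: verify the sed expression rewrites the stock Ubuntu line as intended.
printf '#PermitRootLogin prohibit-password\n' > sshd_config.sample
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' sshd_config.sample
grep 'PermitRootLogin' sshd_config.sample
```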

# On ceph01:
# extract the orchestrator's SSH private key
sudo ceph config-key get mgr/cephadm/ssh_identity_key > ceph.pem
chmod 0600 ceph.pem
# test that root login works
ssh -i ceph.pem root@ceph02

8. Add the three nodes to the cluster

sudo ceph orch host add ceph02
Added host 'ceph02'

sudo ceph orch host add ceph03
Added host 'ceph03'

sudo ceph orch host add ceph04
Added host 'ceph04'

sudo ceph orch host ls
HOST    ADDR    LABELS  STATUS
ceph01  ceph01
ceph02  ceph02
ceph03  ceph03
ceph04  ceph04

9. Set up the monitors

# set the target monitor count, then pin the placement to explicit hosts
sudo ceph orch apply mon 4
sudo ceph orch apply mon ceph01,ceph02,ceph03,ceph04
sudo ceph status
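The two `ceph orch apply mon` commands above (a count, then an explicit host list) can also be expressed as a single service spec and applied with `sudo ceph orch apply -i mon.yaml` (a sketch in cephadm's service-spec format):

```
service_type: mon
placement:
  hosts:
    - ceph01
    - ceph02
    - ceph03
    - ceph04
```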
