deploy
install cephadm
```
dnf search release-ceph
```
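The search only lists the release packages; installing one of them sets up the repo that ships cephadm. A sketch, assuming CentOS Stream 9 and the reef release (the package name `centos-release-ceph-reef` may differ on your distribution):

```
# assumption: CentOS Stream 9, Ceph reef
dnf install --assumeyes centos-release-ceph-reef
dnf install --assumeyes cephadm
```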
enable ceph cli
```
cephadm add-repo --release reef
cephadm install ceph-common
```
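An optional sanity check that both the cephadm binary and the ceph CLI respond:

```
cephadm version
ceph -v
```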
bootstrap
```
cephadm bootstrap --mon-ip 172.20.7.232
```
- log
```
...
Ceph Dashboard is now available at:

             URL: https://dingo7232.com:8443/
            User: admin
        Password: <password>

Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/6a65c746-e532-11ef-8ac2-fa7c097efb00/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /sbin/cephadm shell --fsid 6a65c746-e532-11ef-8ac2-fa7c097efb00 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.
```
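The seed node now runs a monitor and a manager. A quick check that the cluster answers, using the single-cluster shell form from the log above:

```
sudo cephadm shell -- ceph -s
```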
add hosts
Install the cluster’s public SSH key in the new host’s root user’s authorized_keys file:
```
ssh-copy-id -f -i /etc/ceph/ceph.pub root@dingo7233
ssh-copy-id -f -i /etc/ceph/ceph.pub root@dingo7234
```
Tell Ceph that the new node is part of the cluster:
```
ceph orch host add *<newhost>* [*<ip>*] [*<label1> ...*]
ceph orch host add dingo7233 172.20.7.233
ceph orch host add dingo7234 172.20.7.234
```
or, applying the _admin label at add time:
```
ceph orch host add dingo7233 172.20.7.233 --labels _admin
ceph orch host add dingo7234 172.20.7.234 --labels _admin
```
add label (optional)
```
ceph orch host label add dingo7233 _admin
ceph orch host label add dingo7234 _admin
```
list hosts
```
ceph orch host ls --detail
```
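If a host later has to leave the cluster, the orchestrator can move its daemons off first. A sketch, reusing this post's host names:

```
# reschedule all daemons off the host, then remove it
ceph orch host drain dingo7234
ceph orch host rm dingo7234
```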
add storage
check available devices
```
ceph orch device ls
```
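A device is only reported as available if it has no partitions, no LVM state, and no existing filesystem. A disk left dirty by an earlier deployment can be wiped; a sketch with an assumed host and device path (this destroys all data on the disk):

```
# assumption: /dev/sdb on dingo7233 is the leftover disk
ceph orch device zap dingo7233 /dev/sdb --force
```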
apply osd
```
ceph orch apply osd --all-available-devices
```
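This spec is declarative, so cephadm keeps creating OSDs on any disk that becomes available later. To create an OSD on one specific device instead, or to switch the automatic behavior off:

```
# OSD on a single device (hypothetical path)
ceph orch daemon add osd dingo7233:/dev/sdb

# stop consuming new available devices automatically
ceph orch apply osd --all-available-devices --unmanaged=true
```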
check
```
ceph status
```
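Besides the overall status, two more views help confirm every host contributed OSDs:

```
ceph orch ps    # all daemons managed by cephadm, per host
ceph osd tree   # OSD placement across hosts
```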
troubleshooting
redeploy cluster
To remove an existing Ceph cluster deployed using cephadm
and redeploy a new one, follow these steps:
- Step 1: Stop All Ceph Services
First, stop all Ceph services on each host in the cluster.
```
sudo systemctl stop ceph.target
```
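ceph.target has to be stopped on every host, not just the seed node. A sketch with this post's host names:

```
for host in dingo7232 dingo7233 dingo7234; do
    ssh root@"$host" systemctl stop ceph.target
done
```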
- Step 2: Remove Ceph Configuration and Data
Remove the Ceph configuration directory (the on-disk cluster data lives under /var/lib/ceph and is handled by cephadm rm-cluster, sketched below).
```
sudo rm -rf /etc/ceph
```
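cephadm ships a teardown command that removes the cluster's daemons and data in one step; a sketch reusing the fsid from the bootstrap log above (--zap-osds also wipes the OSD disks, so be certain before running it):

```
sudo cephadm rm-cluster --fsid 6a65c746-e532-11ef-8ac2-fa7c097efb00 --force --zap-osds
```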
- Step 3: Redeploy the cluster by repeating the deployment steps above.
- Step 4: Verify Cluster Health
```
ceph -s
```
If you encounter any issues during the redeployment, check the logs:
```
sudo journalctl -u 'ceph-*' -f
```
Or check the Ceph logs directly:
```
sudo less /var/log/ceph/ceph.log
```
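cephadm can also pull the journal for a single daemon by name; a sketch (daemon names come from ceph orch ps, mon.dingo7232 is an assumed example):

```
ceph orch ps                       # list daemon names
cephadm logs --name mon.dingo7232  # journal for one daemon
```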