This article shares the day-to-day operations and maintenance commands for a Ceph cluster running in Docker. The editor finds them quite practical, so they are shared here for reference; follow along.

First check which Ceph systemd units exist on a node and whether they are enabled:

[root@k8s-node1 ceph]# systemctl list-unit-files | grep ceph
ceph-disk@.service        static
ceph-mds@.service         disabled
ceph-mgr@.service         disabled
ceph-mon@.service         enabled
ceph-osd@.service         enabled
ceph-radosgw@.service     disabled
ceph-mds.target           enabled
ceph-mgr.target           enabled
ceph-mon.target           enabled
ceph-osd.target           enabled
ceph-radosgw.target       enabled
ceph.target               enabled
Start all daemons of one type through the target units:

systemctl start ceph-osd.target
systemctl start ceph-mon.target
systemctl start ceph-mds.target

Or start a single daemon instance:
systemctl start ceph-osd@{id}
systemctl start ceph-mon@{hostname}
systemctl start ceph-mds@{hostname}
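As a concrete example, restarting and checking one daemon might look like the following (a minimal sketch; osd.0 and k8s-node1 are taken from the cluster shown below, and the mgr instance name is assumed to be the hostname, so substitute your own ids and hostnames):

systemctl restart ceph-osd@0          # restart osd.0 on this host
systemctl status ceph-osd@0           # confirm it is active (running)
systemctl enable ceph-mgr@k8s-node1   # make the mgr daemon start at boot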
Check the overall cluster state:

[root@k8s-node1 ceph]# ceph -s
cluster 2e6519d9-b733-446f-8a14-8622796f83ef
health HEALTH_OK
monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
osdmap e31: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v13640: 64 pgs, 1 pools, 0 bytes data, 0 objects
35913 MB used, 21812 MB / 57726 MB avail
64 active+clean
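When the health state needs to be watched from cron or a monitoring hook rather than by eye, a check can be scripted around ceph health. A minimal sketch, assuming the ceph CLI and admin keyring are available on the node (the alerting step is a placeholder, not from the article):

#!/bin/bash
# Print a message and exit non-zero when the cluster leaves HEALTH_OK.
status=$(ceph health)
if [ "$status" != "HEALTH_OK" ]; then
    echo "ceph cluster unhealthy on $(hostname): $status"
    exit 1    # hook your mail/alerting command in here
fi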
The same information is available from the interactive ceph shell:

[root@k8s-node1 ceph]# ceph
ceph> status
cluster 2e6519d9-b733-446f-8a14-8622796f83ef
health HEALTH_OK
monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
osdmap e31: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v13670: 64 pgs, 1 pools, 0 bytes data, 0 objects
35915 MB used, 21810 MB / 57726 MB avail
64 active+clean
ceph> health
HEALTH_OK
ceph> mon_status
{"name":"k8s-node1","rank":0,"state":"leader","election_epoch":26,"quorum":[0,1,2],"features":{"required_con":"9025616074522624","required_mon":["kraken"],"quorum_con":"1152921504336314367","quorum_mon":["kraken"]},"outside_quorum":[],"extra_probe_peers":["172.16.22.202:6789\/0","172.16.22.203:6789\/0"],"sync_provider":[],"monmap":{"epoch":4,"fsid":"2e6519d9-b733-446f-8a14-8622796f83ef","modified":"2018-10-28 21:30:09.197608","created":"2018-10-28 09:49:11.509071","features":{"persistent":["kraken"],"optional":[]},"mons":[{"rank":0,"name":"k8s-node1","addr":"172.16.22.201:6789\/0","public_addr":"172.16.22.201:6789\/0"},{"rank":1,"name":"k8s-node2","addr":"172.16.22.202:6789\/0","public_addr":"172.16.22.202:6789\/0"},{"rank":2,"name":"k8s-node3","addr":"172.16.22.203:6789\/0","public_addr":"172.16.22.203:6789\/0"}]}}ceph 日志默認(rèn)的位置保存在節(jié)點/var/log/ceph/ceph.log 里面可以使用 ceph -w 查看實時的日志記錄情況
Ceph logs are saved by default in /var/log/ceph/ceph.log on each node, and log entries can be watched in real time with ceph -w. Whichever node is reporting errors, log in to that node and follow its log with the command below:
[root@k8s-node1 ceph]# ceph -w
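Two alternatives to ceph -w are worth knowing, assuming the default log path and the systemd units listed earlier (the osd.0 instance is just an example):

tail -f /var/log/ceph/ceph.log    # follow the cluster log file directly
journalctl -u ceph-osd@0 -f       # follow one daemon's journal, e.g. osd.0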
Each ceph mon also continuously runs checks against its own state; when a check fails, the mon writes the details into the cluster log.
[root@k8s-node1 ceph]# ceph mon stat
e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}, election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
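When quorum itself is in question, quorum_status reports more detail than mon stat, including which mons are inside and outside quorum (a standard command, not shown in the original session):

[root@k8s-node1 ceph]# ceph quorum_status --format json-pretty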
Check the OSD map and the placement of OSDs across hosts:

[root@k8s-node1 ceph]# ceph osd stat
osdmap e31: 3 osds: 3 up, 3 in
       flags sortbitwise,require_jewel_osds,require_kraken_osds

[root@k8s-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4 0.01839     host k8s-node3
 2 0.01839         osd.2           up  1.00000          1.00000
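Before taking a node down for maintenance, it is common to stop Ceph from marking its OSDs out and rebalancing while they are offline (standard flags; a sketch, not from the original session):

ceph osd set noout          # suppress automatic out-marking during maintenance
systemctl stop ceph-osd@0   # stop the OSD on this host, do the maintenance
systemctl start ceph-osd@0  # bring the OSD back
ceph osd unset noout        # restore normal recovery behaviour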
Check cluster-wide and per-pool capacity usage:

[root@k8s-node1 ceph]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    57726M     21811M       35914M         62.21
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0         5817M           0
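If per-pool I/O counters are wanted in addition to capacity, two closely related commands exist (both standard, not shown in the original session):

ceph df detail   # per-pool usage with quota and object detail
rados df         # per-pool usage plus read/write operation counters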
Thank you for reading! That concludes this article on day-to-day operations for a Ceph cluster in Docker. Hopefully the content above is of some help and lets you learn more; if you found the article useful, feel free to share it so more people can see it.