This article walks through deploying a highly available K8s cluster architecture. Many people run into questions when setting up an HA control plane in day-to-day work, so the steps below collect a simple, workable procedure. Follow along and try it yourself!

Environment
System      Role            IP
centos7.4   master-1        10.10.25.149
centos7.4   master-2        10.10.25.112
centos7.4   node-1          10.10.25.150
centos7.4   node-2          10.10.25.151
centos7.4   lb-1 (backup)   10.10.25.111
centos7.4   lb-2 (master)   10.10.25.110
centos7.4   VIP             10.10.25.113
Deploy the master02 node
Copy the /opt/kubernetes/ directory from master01
scp -r /opt/kubernetes/ root@10.10.25.112:/opt
Copy the related service units from master01
scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@10.10.25.112:/usr/lib/systemd/system
Edit the copied apiserver unit so that --bind-address points at master02's own IP (10.10.25.112):
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--bind-address=10.10.25.112 \
--insecure-bind-address=127.0.0.1 \
--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1 \
--kubelet-https=true \
--anonymous-auth=false \
--basic-auth-file=/opt/kubernetes/ssl/basic-auth.csv \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
--service-cluster-ip-range=10.1.0.0/16 \
--service-node-port-range=20000-40000 \
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://10.10.25.149:2379,https://10.10.25.150:2379,https://10.10.25.151:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/log/api-audit.log \
--event-ttl=1h \
--v=2 \
--logtostderr=false \
--log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start the apiserver
systemctl start kube-apiserver
# ps -aux | grep kube
systemctl start kube-scheduler kube-controller-manager
Add the binaries to the system PATH
vim /root/.bash_profile
Append:
PATH=$PATH:$HOME/bin:/opt/kubernetes/bin
source .bash_profile
Check component status
# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
This confirms master02 can reach the etcd cluster. Now check node status:
# /opt/kubernetes/bin/kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
10.10.25.150   NotReady   <none>   14d   v1.10.3
10.10.25.151   NotReady   <none>   14d   v1.10.3
This shows master02 cannot yet communicate with the nodes.
Configure single-node LB load balancing
Note: for an HA cluster, the nodes' clocks must be synchronized.
lb02 node configuration
Configure the nginx yum repo; we will use nginx as a layer-4 (stream) proxy.
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=https://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

yum install -y nginx
Modify the Nginx configuration file
vim /etc/nginx/nginx.conf
stream {
log_format main "$remote_addr $upstream_addr $time_local $status";
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.10.25.149:6443;
server 10.10.25.112:6443;
}
server {
listen 10.10.25.110:6443;
proxy_pass k8s-apiserver;
}
}
Modify the node configuration
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
# change server: https://10.10.25.149:6443 to server: https://10.10.25.110:6443
vim kubelet.kubeconfig
# change server: https://10.10.25.149:6443 to server: https://10.10.25.110:6443
vim kube-proxy.kubeconfig
# change server: https://10.10.25.149:6443 to server: https://10.10.25.110:6443
systemctl restart kubelet
systemctl restart kube-proxy
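The three kubeconfig edits above all follow the same pattern, so they can be applied in one pass with `sed` instead of editing each file by hand (a sketch, assuming the files live in /opt/kubernetes/cfg; the helper name `update_kubeconfigs` is ours):

```shell
#!/bin/sh
# update_kubeconfigs <cfg_dir> <old_url> <new_url>
# Rewrite the apiserver address on each kubeconfig's "server:" line.
update_kubeconfigs() {
    dir=$1; old=$2; new=$3
    for f in bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig; do
        [ -f "$dir/$f" ] || continue                      # skip absent files
        sed -i "s#server: $old#server: $new#" "$dir/$f"   # in-place replace
    done
}

# On a node this would be followed by the restarts shown above:
# update_kubeconfigs /opt/kubernetes/cfg https://10.10.25.149:6443 https://10.10.25.110:6443
# systemctl restart kubelet kube-proxy
```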
After restarting, you will find that neither master01 nor master02 can communicate with the nodes. The node logs show a certificate error: in essence, the certificate presented to kube-proxy was issued for master01, not for the LB node. So we need to regenerate the certificate with the LB addresses included.
Regenerate the api-server certificate on master01
Edit the certificate JSON file
[root@master ssl]# cat kubernetes-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.10.25.149",
"10.10.25.112",
"10.10.25.110",
"10.10.25.111",
"10.10.25.113",
"10.1.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
Note: the IPs in the JSON file include both master node IPs, all LB node IPs, and the VIP, because the final goal is an Nginx + Keepalived load-balancing architecture with no single point of failure.
Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
-ca-key=/opt/kubernetes/ssl/ca-key.pem \
-config=/opt/kubernetes/ssl/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
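Before distributing the new kubernetes.pem, it is worth confirming that every master, LB, and VIP address from the CSR actually landed in the certificate's Subject Alternative Names (a sketch using openssl; the helper name `cert_sans` is ours):

```shell
#!/bin/sh
# cert_sans <cert.pem>
# Print the Subject Alternative Name line of an x509 certificate.
cert_sans() {
    openssl x509 -in "$1" -noout -text \
        | grep -A1 'Subject Alternative Name' \
        | tail -n1
}

# On master01 this should list 10.10.25.149, 10.10.25.112, 10.10.25.110,
# 10.10.25.111 and 10.10.25.113:
# cert_sans kubernetes.pem
```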
Copy it to the relevant nodes
cp kubernetes*.pem /opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.112:/opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.150:/opt/kubernetes/ssl/
scp kubernetes*.pem 10.10.25.151:/opt/kubernetes/ssl/
Restart the services on the master nodes
systemctl restart kube-scheduler kube-controller-manager kube-apiserver
Restart the node services
systemctl restart kube-proxy kubelet
Verify
# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
10.10.25.150   Ready    <none>   15d   v1.10.3
10.10.25.151   Ready    <none>   15d   v1.10.3
This shows single-node load balancing is working. One thing to note: even when all of the above is configured correctly, the nodes may still come up NotReady. The node logs may reveal that they cannot register; in that case, approve their pending certificate requests manually by running the following on master01:
kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve
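The approval one-liner above hinges on extracting the names of Pending requests from `kubectl get csr`; that parsing step can be factored out and checked on its own (a sketch; `pending_csrs` is our name, and the sample CSR names are hypothetical):

```shell
#!/bin/sh
# pending_csrs: read `kubectl get csr` output on stdin and print the
# names (first column) of requests whose condition is Pending.
pending_csrs() {
    grep 'Pending' | awk '{print $1}'
}

# Typical use on master01:
# kubectl get csr | pending_csrs | xargs -r kubectl certificate approve
```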
lb01 node configuration
Install nginx the same way (not repeated here); the nginx configuration is also identical except for the bound IP.
vim /etc/nginx/nginx.conf
stream {
log_format main "$remote_addr $upstream_addr $time_local $status";
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.10.25.149:6443;
server 10.10.25.112:6443;
}
server {
listen 10.10.25.111:6443;
proxy_pass k8s-apiserver;
}
}
Use Keepalived to make the LB nodes highly available
Install keepalived (needed on both nodes)
yum install keepalived -y
Make lb02 the keepalived master node
Modify the keepalived configuration on lb02
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"   # script that checks nginx status
}
vrrp_instance VI_1 {
state MASTER
interface ens192
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.10.25.113/24
}
track_script {
check_nginx
}
}
Write the nginx status-check script
cat /etc/keepalived/check_nginx.sh
#!/bin/sh
# count nginx processes, excluding the grep itself and this script's pid
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

Grant the script execute permission:
chmod +x /etc/keepalived/check_nginx.sh
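The counting logic in check_nginx.sh can be exercised against captured `ps` output before trusting it to stop keepalived (a sketch; `count_nginx` mirrors the script's pipeline but reads stdin, so the logic is testable without nginx running):

```shell
#!/bin/sh
# count_nginx: read `ps -ef`-style output on stdin and count the nginx
# processes, excluding grep itself and this shell's own pid, exactly as
# check_nginx.sh does.
count_nginx() {
    grep nginx | egrep -cv "grep|$$"
}

# Live use, equivalent to the check in check_nginx.sh:
# count=$(ps -ef | count_nginx)
```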
Start keepalived
systemctl start keepalived
Check whether the VIP is active
# ip addr
2: ens192: mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0c:29:2e:86:82 brd ff:ff:ff:ff:ff:ff
    inet 10.10.25.110/24 brd 10.10.25.255 scope global dynamic ens192
       valid_lft 71256sec preferred_lft 71256sec
    inet 10.10.25.113/32 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::58b8:49be:54a7:4c43/64 scope link
       valid_lft forever preferred_lft forever
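Rather than eyeballing the `ip addr` output, the VIP check can be scripted (a sketch; `holds_vip` is our helper and reads the output on stdin, so the parsing works on any machine):

```shell
#!/bin/sh
# holds_vip <ip>: exit 0 if the given address appears as an "inet" entry
# in `ip addr` output read from stdin.
holds_vip() {
    grep -q "inet $1/"
}

# On an LB node:
# ip addr show ens192 | holds_vip 10.10.25.113 && echo "this node holds the VIP"
```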
Configure keepalived on lb01
Modify the keepalived configuration for the backup role
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens192
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.10.25.113
}
track_script {
check_nginx
}
}
Write the nginx status-check script
Create the same /etc/keepalived/check_nginx.sh script as on lb02, then grant it execute permission:
chmod +x /etc/keepalived/check_nginx.sh
Start keepalived on lb01
systemctl start keepalived
Keepalived failover
To test keepalived failover:
1. Open a window and keep pinging the VIP.
2. Kill nginx on the keepalived master node.
3. Watch whether the VIP migrates to the backup, and how many pings to the VIP are lost.
4. Start nginx and keepalived again on the master node.
5. Watch whether the VIP floats back to the master node.
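Step 1 of the test above can report a concrete loss figure instead of a scrolling ping window (a sketch; `ping_loss` just extracts the summary figure, and the VIP and packet count are this article's values):

```shell
#!/bin/sh
# ping_loss: read ping output on stdin and print its "% packet loss" figure,
# so a failover blip shows up as a small, concrete number.
ping_loss() {
    grep -o '[0-9.]*% packet loss' | head -n1
}

# While killing nginx on the keepalived master:
# ping -c 30 10.10.25.113 | ping_loss
```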
Connect the K8s cluster through the VIP
Point the nodes at the VIP
cd /opt/kubernetes/cfg/
vim bootstrap.kubeconfig
# change server: https://10.10.25.110:6443 to server: https://10.10.25.113:6443
vim kubelet.kubeconfig
# change server: https://10.10.25.110:6443 to server: https://10.10.25.113:6443
vim kube-proxy.kubeconfig
# change server: https://10.10.25.110:6443 to server: https://10.10.25.113:6443
Restart the services
systemctl restart kubelet
systemctl restart kube-proxy
Modify the nginx configuration file (on both LB nodes; note the stream server now listens on 0.0.0.0:6443, so whichever node holds the VIP can accept connections)
cat /etc/nginx/nginx.conf
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
stream {
log_format main "$remote_addr $upstream_addr $time_local $status";
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.10.25.149:6443;
server 10.10.25.112:6443;
}
server {
listen 0.0.0.0:6443;
proxy_pass k8s-apiserver;
}
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
Restart Nginx
systemctl restart nginx
Verify access through the VIP
kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
10.10.25.150   Ready    <none>   15d   v1.10.3
10.10.25.151   Ready    <none>   15d   v1.10.3
This shows the nodes are now connecting through the VIP successfully.
That concludes this walkthrough of deploying a highly available K8s cluster architecture. Hopefully it has cleared up the common questions; combining the theory with hands-on practice is the best way to learn, so go give it a try!