17. Kubernetes High-Availability Cluster Setup (v1.19)



1. Overview

As a container cluster system, Kubernetes already provides application-level high availability: health checks plus restart policies give Pods self-healing, the scheduler spreads Pods across nodes, the desired replica count is monitored, and when a Node fails its Pods are automatically started on healthy Nodes.

For the Kubernetes cluster itself, high availability must also cover two further layers: the etcd database and the Kubernetes Master components. We have already built a three-node etcd cluster for high availability, so this section explains and implements high availability for the Master nodes.

The Master node acts as the control center of the cluster: it keeps the whole cluster healthy by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master node fails, the cluster can no longer be managed through kubectl or the API.

A Master node runs three main services: kube-apiserver, kube-controller-manager and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through leader election, so Master high availability is mainly about kube-apiserver. Since kube-apiserver exposes an HTTP API, making it highly available is much like making a web server highly available: put a load balancer in front of it, and it can also be scaled out horizontally.

A highly available control plane needs at least three master nodes (an odd number keeps etcd quorum intact when one node fails).

Multi-master architecture (diagram omitted; the image did not survive the conversion of the original post): Keepalived provides the virtual IP (VIP), and HAProxy load-balances the kube-apiserver instances behind it.

2. Environment for the HA Setup

Host            IP              Components
k8s-master01    192.168.10.3    keepalived + haproxy + docker, kubelet, kubeadm, kubectl, network plugin
k8s-master02    192.168.10.4    keepalived + haproxy + docker, kubelet, kubeadm, kubectl, network plugin
k8s-master03    192.168.10.7    keepalived + haproxy + docker, kubelet, kubeadm, kubectl, network plugin
k8s-node01      192.168.10.5    docker, kubelet, kubeadm
k8s-node02      192.168.10.6    docker, kubelet, kubeadm
VIP             192.168.10.2    virtual IP managed by Keepalived

Set the hostnames and add /etc/hosts entries

# Run the matching command on each node
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02

cat >>/etc/hosts <<-EOF
192.168.10.3 k8s-master01
192.168.10.4 k8s-master02
192.168.10.7 k8s-master03
192.168.10.5 k8s-node01
192.168.10.6 k8s-node02
192.168.10.2 master.k8s.io   # VIP; needed so master.k8s.io (the control-plane endpoint used below) resolves
EOF

Switch the firewall to iptables with empty rules, and disable swap

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
# Disable swap (required by kubelet)
swapoff -a
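Note that swapoff -a only disables swap until the next reboot. A common companion step (not shown in the original) is to comment out the swap entry in /etc/fstab so it stays off:

sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out any swap line in fstab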

Disable SELinux
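The commands for this step are not included in the original; the usual approach on CentOS 7 (a sketch, not the author's exact commands) is:

setenforce 0                                              # disable immediately
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # keep it disabled after reboot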

Tune kernel parameters for Kubernetes

cat > kubernetes.conf <<EOF    # the heredoc body was cut off in the original post; see the sketch below
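The body of the kubernetes.conf heredoc did not survive the conversion of the original post. The settings below are the ones commonly used in kubeadm tutorials of this vintage, together with the usual commands to apply them; treat this as a sketch rather than the author's exact file:

modprobe br_netfilter
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf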

Set the time zone

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

Stop services the system does not need

systemctl stop postfix && systemctl disable postfix

Upgrade the system kernel to 4.4

The 3.10.x kernel shipped with CentOS 7.x has known bugs that make Docker and Kubernetes unstable, so upgrade to the long-term kernel from ELRepo:

rpm -Uvh <elrepo-release-rpm-url>    # the ELRepo release RPM URL was stripped from the original post
# After installation, check that the kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if it does not, install again.
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
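After setting the new default kernel, the node needs a reboot; you can then confirm the running kernel version (commands assumed, not shown in the original):

reboot
# after the node comes back up
uname -r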

3. Deploy Keepalived on All Master Nodes

3.1 Install dependencies and Keepalived

yum -y install conntrack-tools libseccomp libtool-ltdl keepalived

3.2 Add the Keepalived configuration file

master01 node configuration

[root@k8s-master01 ~]# cat > /etc/keepalived/keepalived.conf <<-EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.10.2    # VIP address
    }
    track_script {
        check_haproxy
    }
}
EOF

master02 node configuration

[root@k8s-master02 ~]# cat > /etc/keepalived/keepalived.conf <<-EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.10.2
    }
    track_script {
        check_haproxy
    }
}
EOF
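The original only shows Keepalived configuration for master01 and master02, although the environment lists three masters. If master03 also runs Keepalived, a plausible configuration (an assumption, mirroring master02 with a lower priority) would be:

[root@k8s-master03 ~]# cat > /etc/keepalived/keepalived.conf <<-EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 90          # lower than master01 (250) and master02 (100)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.10.2
    }
    track_script {
        check_haproxy
    }
}
EOF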

3.3 Start and verify

Run on all master nodes:

systemctl start keepalived
systemctl enable keepalived
# Check the service status
systemctl status keepalived
# Check whether the VIP is bound to eth1
ip addr show eth1

4. Deploy HAProxy

4.1 Install

yum -y install haproxy

4.2 Configure

[root@k8s-master01 ~]# cat > /etc/haproxy/haproxy.cfg <<-EOF
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode        tcp
    bind        *:16433
    option      tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server master01.didi.cn 192.168.10.3:6443 check
    server master02.didi.cn 192.168.10.4:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:awesomePassword
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats
EOF
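Note that the backend above only load-balances master01 and master02. If master03 (192.168.10.7) is also made a control-plane node, a third server line would presumably be added to the kubernetes-apiserver backend:

    server master03.didi.cn 192.168.10.7:6443 check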

4.3 Start and verify

Start HAProxy on all master nodes:

systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
# Check the listening port
netstat -antup | grep haproxy
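You can also check the statistics page defined in the listen stats section (assuming port 1080 is reachable from the node; the credentials come from the config above):

curl -u admin:awesomePassword 'http://192.168.10.3:1080/admin?stats'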

5. Install Docker, kubeadm and kubelet on All Nodes

5.1 Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo <docker-ce-repo-url>    # the repository URL was stripped from the original post
yum install -y docker-ce
# Create the /etc/docker directory
mkdir /etc/docker
# Configure the daemon
cat > /etc/docker/daemon.json <<EOF    # the heredoc body was cut off in the original post; see the sketch below
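The body of daemon.json was stripped from the original post. A typical configuration for a kubeadm cluster (an assumption; the registry mirror is a placeholder, not the author's value), together with the usual restart commands, is:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["<your-registry-mirror-url>"]
}
EOF
systemctl daemon-reload && systemctl restart docker && systemctl enable docker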

5.2 Add the Alibaba Cloud YUM repository

cat > /etc/yum.repos.d/kubernetes.repo <<-EOF    # the baseurl and the rest of this repo file were stripped from the original post; see the sketch below
[kubernetes]
name=kubernetes
baseurl=
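The repo definition above lost its URLs in the conversion. The Alibaba Cloud mirror commonly used in tutorials like this one (an assumption, not necessarily the author's exact file) is:

cat > /etc/yum.repos.d/kubernetes.repo <<-EOF
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF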

5.3 Install kubeadm, kubelet and kubectl (masters and workers)

yum -y install kubeadm-1.19.0 kubectl-1.19.0 kubelet-1.19.0
systemctl enable kubelet.service

6. Deploy the Kubernetes Master

6.1 Create the kubeadm configuration file

Run this on the master node that currently holds the VIP; here that is k8s-master01.
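Before running kubeadm init, you can confirm which master currently holds the VIP (a quick check that is not part of the original steps):

ip addr show eth1 | grep 192.168.10.2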

mkdir /usr/local/kubernetes/manifests -p
cd /usr/local/kubernetes/manifests/
vim kubeadm-config.yaml    # or create the file with the heredoc below

[root@k8s-master01 manifests]# cat > kubeadm-config.yaml <<-EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.3
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - k8s-master01
  - k8s-master02
  - master.k8s.io
  - 192.168.10.2
  - 192.168.10.3
  - 192.168.10.4
  - 192.168.10.7
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16433"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF

kubeadm init --config kubeadm-config.yaml | tee kubeadm-config.log

Output:
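The actual kubeadm init output was not preserved in the original post. It normally ends with the kubectl setup commands below, plus the two join commands that are reused in sections 8.2 and 9:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config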

7. Deploy the Flannel Network

kubectl apply -f <kube-flannel-manifest-url>    # the manifest URL was stripped from the original post; see the sketch below
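An assumption about which manifest the author meant: at the time, the Flannel manifest was commonly applied from the coreos/flannel repository. After applying it, the flannel and coredns Pods should reach Running:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system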

8. Join master02 and master03 to the Cluster

8.1 Copy certificates and related files

Copy the certificates and related files from master01 to master02 and master03:

ssh root@k8s-master02 'mkdir -p /etc/kubernetes/pki/etcd'
scp /etc/kubernetes/admin.conf root@k8s-master02:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@k8s-master02:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@k8s-master02:/etc/kubernetes/pki/etcd/
# Repeat the same commands for k8s-master03

8.2 master02 and master03 join the cluster

Run the join command printed by kubeadm init on master01, adding the --control-plane flag so that the node joins as a control-plane (master) node:

kubeadm join master.k8s.io:16433 --token d0mqe3.ym9zyzq6p3yezava \
    --discovery-token-ca-cert-hash sha256:2e9319f545e4f2380338fd22e5e18c27c6b01e75e0556c07199123942fcfef96 \
    --control-plane

Check the status:

kubectl get nodes

9. Join the Kubernetes Worker Nodes

Run on all worker nodes.

To add new worker nodes to the cluster, run the kubeadm join command printed by kubeadm init:

kubeadm join master.k8s.io:16433 --token d0mqe3.ym9zyzq6p3yezava \
    --discovery-token-ca-cert-hash sha256:2e9319f545e4f2380338fd22e5e18c27c6b01e75e0556c07199123942fcfef96

Check the status:

kubectl get nodes

10. Test and Verify

Shut down the master01 node and confirm that the cluster remains fully usable.
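A minimal way to verify the failover (commands assumed, not given in the original): shut down master01, then check from master02 that the VIP has moved and that the API is still reachable through it:

# on k8s-master01
shutdown -h now

# on k8s-master02
ip addr show eth1    # the VIP 192.168.10.2 should now be bound here
kubectl get nodes    # the API should still respond through the VIP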
