Kubernetes (7): Binary Deployment of k8s (v1.18.4)


Deployment Overview

| Software | Download URL | Notes |
| --- | --- | --- |
| CentOS 7.7+ | https://mirrors.aliyun.com/centos/7.7.1908/isos/x86_64/CentOS-7-x86_64-Minimal-1908.iso | Host operating system |
| kubernetes-server | https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz | k8s server binaries |
| etcd | https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz | k8s data store |
| cfssl | https://pkg.cfssl.org/R1.2/cfssl_linux-amd64, https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64, https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 | Certificate issuing tool |
| docker | https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz | Container runtime |
| cni | https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz | CNI network plugins |

Deployment Plan

| Host | IP | Role | Software | Notes |
| --- | --- | --- | --- | --- |
| centos7-node4 | 192.168.56.14 | master | kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd | master scale-out covered later |
| centos7-node5 | 192.168.56.15 | node | kubelet, kube-proxy, docker, etcd | |
| centos7-node6 | 192.168.56.16 | node | kubelet, kube-proxy, docker, etcd | |

System Initialization (run on all nodes)

Software is installed under /data by default.

```bash
# Update yum repos (the repo URL was dropped in the original post; the Aliyun
# mirror repo file is the usual choice here and is an assumption of this writeup)
yum -y install wget
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install epel-release

# Disable selinux, firewalld and swap
sed -i 's/enforcing/disabled/' /etc/selinux/config
systemctl disable firewalld && systemctl stop firewalld
sed -ri 's/.*swap.*/#&/' /etc/fstab && swapoff -a

# Set hostnames and name resolution
cat >> /etc/hosts << EOF
192.168.56.14 centos7-node4
192.168.56.15 centos7-node5
192.168.56.16 centos7-node6
192.168.56.14 k8s-master
192.168.56.15 k8s-node1
192.168.56.16 k8s-node2
192.168.56.17 k8s-master2
EOF

# Pass bridged IPv4 traffic to the iptables chains
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf   # apply

# Time synchronization
yum install chrony -y && systemctl enable chronyd && systemctl start chronyd
timedatectl set-timezone Asia/Shanghai && timedatectl set-ntp yes
```

Deploy the etcd Cluster

etcd is a distributed key-value store that Kubernetes uses as its backing store, so prepare an etcd database first. To avoid a single point of failure, deploy it as a cluster: a three-node cluster tolerates one machine failure; a five-node cluster tolerates two.
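Those tolerance numbers follow from etcd's quorum rule: an n-member cluster needs a majority of ⌊n/2⌋+1 healthy members to commit writes. A quick check of the arithmetic:

```bash
# an n-member etcd cluster needs a quorum of n/2+1 members,
# so it tolerates floor((n-1)/2) failed members
for n in 3 5 7; do
  echo "members=$n  quorum=$(( n/2 + 1 ))  tolerated_failures=$(( (n-1)/2 ))"
done
```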

| Hostname | etcd name | IP |
| --- | --- | --- |
| centos7-node4 | etcd-1 | 192.168.56.14 |
| centos7-node5 | etcd-2 | 192.168.56.15 |
| centos7-node6 | etcd-3 | 192.168.56.16 |

Note: to save machines, etcd is co-located with the k8s nodes here. It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.

Generate etcd Certificate Configuration

Prepare the cfssl certificate tooling. It issues certificates from JSON definitions, which is more convenient than openssl.

```bash
# Install cfssl (URLs from the download table above)
wget -O /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /usr/local/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/local/bin/cfssl*
```

Prepare the CA and certificate configuration

```bash
mkdir ~/TLS/{etcd,k8s} && cd ~/TLS/etcd

# Self-signed CA config (the profile name was garbled in the original; "www"
# is assumed here and must match the -profile flag used when signing)
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# Self-signed CA CSR config
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "ca": {
    "expiry": "87600h"
  },
  "names": [
    {
      "C": "CN",
      "L": "BJ",
      "ST": "BeiJing"
    }
  ]
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem

# CSR for the etcd server (HTTPS) certificate
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.56.14",
    "192.168.56.15",
    "192.168.56.16"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
```

Note: the hosts field above must contain the cluster-internal IP of every etcd node; not one can be missing! To simplify future scale-out, you can list a few spare IPs as well.

Issue the certificate

```bash
# the profile name must match the one defined in ca-config.json ("www" above)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls server*pem   # the issued certificate files
```

Install and configure etcd

Configure a single node

```bash
# Prepare the install paths
mkdir /data/etcd/{bin,cfg,ssl,data} -p

# Fetch the binaries
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
tar xf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/etcd* /data/etcd/bin/

# Config file for the current node, 192.168.56.14 (the heredoc body was cut
# off in the original; reconstructed from the field list and the template below)
cat > /data/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.56.14:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.14:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.14:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.14:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.56.14:2380,etcd-2=https://192.168.56.15:2380,etcd-3=https://192.168.56.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Configuration fields:

- ETCD_NAME: node name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
- ETCD_LISTEN_CLIENT_URLS: client listen address
- ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
- ETCD_ADVERTISE_CLIENT_URLS: advertised client address
- ETCD_INITIAL_CLUSTER: cluster member addresses
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one
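The etcd systemd unit itself does not appear in the post, although the systemctl commands below assume it exists on every node. A unit consistent with the paths and certificate files above would look roughly like this; the exact flags are this writeup's assumption, not taken from the original:

```bash
# copy the issued certificates into place first
cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /data/etcd/ssl/

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/data/etcd/cfg/etcd.conf
ExecStart=/data/etcd/bin/etcd \\
  --cert-file=/data/etcd/ssl/server.pem \\
  --key-file=/data/etcd/ssl/server-key.pem \\
  --peer-cert-file=/data/etcd/ssl/server.pem \\
  --peer-key-file=/data/etcd/ssl/server-key.pem \\
  --trusted-ca-file=/data/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/data/etcd/ssl/ca.pem \\
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```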

Configure the other two nodes

Distribute the files (from 192.168.56.14 to nodes .15 and .16)

```bash
scp -rp /data/etcd 192.168.56.15:/data
scp -rp /data/etcd 192.168.56.16:/data
# also copy the systemd unit (see the unit file above)
scp /usr/lib/systemd/system/etcd.service 192.168.56.15:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 192.168.56.16:/usr/lib/systemd/system/
```

Modify the configuration files

On node 2 and node 3, change the node name and the server IPs in etcd.conf:

```bash
vi /data/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"                                             # change: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.56.14:2380"             # change to the current server IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.56.14:2379"           # change to the current server IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.56.14:2380"  # change to the current server IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.56.14:2379"        # change to the current server IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.56.14:2380,etcd-2=https://192.168.56.15:2380,etcd-3=https://192.168.56.16:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```

Finally, start etcd and enable it at boot:

```bash
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
```

Verify the etcd Deployment

Run the following on any node:

```bash
ETCDCTL_API=3 /data/etcd/bin/etcdctl \
  --cacert=/data/etcd/ssl/ca.pem \
  --cert=/data/etcd/ssl/server.pem \
  --key=/data/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.56.14:2379,https://192.168.56.15:2379,https://192.168.56.16:2379" \
  endpoint health
```

Expected output:

```
https://192.168.56.14:2379 is healthy: successfully committed proposal: took = 11.567437ms
https://192.168.56.15:2379 is healthy: successfully committed proposal: took = 11.946454ms
https://192.168.56.16:2379 is healthy: successfully committed proposal: took = 13.121313ms
```
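Cluster membership can be inspected the same way; `member list` is a standard etcdctl v3 subcommand:

```bash
ETCDCTL_API=3 /data/etcd/bin/etcdctl \
  --cacert=/data/etcd/ssl/ca.pem \
  --cert=/data/etcd/ssl/server.pem \
  --key=/data/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.56.14:2379" \
  member list --write-out=table
```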

Troubleshooting

1. Check /var/log/messages or `journalctl -xe -f -u etcd`.
2. If the configuration files are correct, the most common remaining causes are network connectivity and firewall rules; open the required ports (2379/2380) between the nodes.

Install Docker on All Nodes

```bash
# Download and unpack the Docker binaries
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar xf docker-19.03.9.tgz

# Distribute the binaries
scp docker/* 192.168.56.15:/usr/bin/
scp docker/* 192.168.56.16:/usr/bin/
mv docker/* /usr/bin/

# Manage docker with systemd (create this unit on the other two nodes as well;
# \$MAINPID is escaped so the shell does not expand it inside the heredoc)
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
```

Configure and start Docker

```bash
# Configure the Aliyun registry mirror and the storage path (graph)
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "graph": "/data/docker",
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

# Start the service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
```
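A quick sanity check that the mirror and storage path took effect:

```bash
# "Docker Root Dir" should show /data/docker; the mirror is listed under "Registry Mirrors"
docker info | grep -A1 -E "Docker Root Dir|Registry Mirrors"
```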

Install and Deploy the k8s Master Node

Master node being deployed: 192.168.56.14

Generate the k8s certificate configuration

```bash
cd ~/TLS/k8s

# CA config (the heredoc bodies were lost in the original; reconstructed to
# mirror the etcd CA above, with the profile named "kubernetes" to match the
# -profile=kubernetes flag used below)
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# CA CSR config
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
```

Self-sign the apiserver certificate

```bash
cd ~/TLS/k8s
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "172.0.0.1",
    "127.0.0.1",
    "192.168.56.13",
    "192.168.56.14",
    "192.168.56.15",
    "192.168.56.16",
    "192.168.56.17",
    "192.168.56.18",
    "192.168.56.19",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```

Note: the hosts field above must contain every Master/LB/VIP IP; not one can be missing! List a few spare IPs to simplify future scale-out.

Generate the apiserver certificate

```bash
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls *pem
```

Install kube-apiserver

```bash
# Create the install directories
mkdir -p /data/kubernetes/{cfg,bin,ssl,logs}

# Download and copy the binaries
wget https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kube-apiserver /data/kubernetes/bin/
cp kubernetes/server/bin/kube-controller-manager /data/kubernetes/bin/
cp kubernetes/server/bin/kube-scheduler /data/kubernetes/bin/
cp kubernetes/server/bin/kubectl /usr/bin/
```

Create the apiserver configuration file

```bash
# Create the configuration file
cat > /data/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--etcd-servers=https://192.168.56.14:2379,https://192.168.56.15:2379,https://192.168.56.16:2379 \\
--bind-address=192.168.56.14 \\
--secure-port=6443 \\
--advertise-address=192.168.56.14 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/data/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/data/kubernetes/ssl/server.pem \\
--kubelet-client-key=/data/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/data/kubernetes/ssl/server.pem \\
--tls-private-key-file=/data/kubernetes/ssl/server-key.pem \\
--client-ca-file=/data/kubernetes/ssl/ca.pem \\
--service-account-key-file=/data/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/data/etcd/ssl/ca.pem \\
--etcd-certfile=/data/etcd/ssl/server.pem \\
--etcd-keyfile=/data/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/data/kubernetes/logs/k8s-audit.log"
EOF

# Copy the certificates into place
mv ~/TLS/k8s/*pem /data/kubernetes/ssl/
```

Notes on the flags:

- --logtostderr: log to stderr (disabled here so logs go to files)
- --v: log level
- --log-dir: log directory
- --etcd-servers: etcd cluster addresses
- --bind-address: listen address
- --secure-port: https secure port
- --service-cluster-ip-range: Service virtual IP range
- --enable-admission-plugins: admission control plugins
- --authorization-mode: authorization mode; enable RBAC and Node self-management
- --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
- --token-auth-file: bootstrap token file
- --service-node-port-range: default NodePort range for Services
- --kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
- --tls-xxx-file: apiserver https certificates
- --etcd-xxxfile: certificates for connecting to the etcd cluster
- --audit-log-xxx: audit log settings

Enable TLS Bootstrapping

TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present valid CA-signed client certificates to communicate with it. Issuing those client certificates by hand is a lot of work when there are many nodes, and it complicates cluster scale-out. To simplify this, Kubernetes introduced TLS bootstrapping: the kubelet automatically requests a certificate from the apiserver as a low-privileged user, and the apiserver signs it dynamically. This approach is strongly recommended on nodes; it currently applies to the kubelet, while kube-proxy still uses a certificate we issue centrally. TLS bootstrapping workflow: (diagram not reproduced here).

Create the token file referenced in the configuration above

```bash
cat > /data/kubernetes/cfg/token.csv << EOF
2b4b65d2e33e24dc0beafddda6dd4b23,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
```

Format: token,user,UID,group. You can also generate your own token and substitute it: `head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
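For example, to regenerate the file with a fresh random token (reuse the same value later when building bootstrap.kubeconfig):

```bash
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /data/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo "bootstrap token: ${TOKEN}"
```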

Manage apiserver with systemd

Create the unit file

```bash
# \$KUBE_APISERVER_OPTS is escaped so the shell does not expand it in the heredoc
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-apiserver.conf
ExecStart=/data/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Start and enable at boot

```bash
systemctl daemon-reload && systemctl start kube-apiserver && systemctl enable kube-apiserver
```
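If the service does not come up, check the unit status and the klog files under --log-dir; the .INFO symlink name below is klog's convention, assumed here rather than shown in the original:

```bash
systemctl status kube-apiserver --no-pager
tail -n 30 /data/kubernetes/logs/kube-apiserver.INFO
```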

Authorize the kubelet-bootstrap user to request certificates

```bash
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
```

Deploy kube-controller-manager

Create the configuration file

```bash
cat > /data/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/data/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/data/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/data/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/data/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
```

- --master: connect to the apiserver over the local insecure port 8080.
- --leader-elect: elect a leader automatically when multiple instances run (HA).
- --cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically signs kubelet certificates; must match the apiserver's CA.

Manage controller-manager with systemd

```bash
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/data/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Start and enable at boot

```bash
systemctl daemon-reload && systemctl start kube-controller-manager && systemctl enable kube-controller-manager
```

Deploy kube-scheduler

Create the configuration file

```bash
cat > /data/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
```

- --master: connect to the apiserver over the local insecure port 8080.
- --leader-elect: elect a leader automatically when multiple instances run (HA).

Manage kube-scheduler with systemd

```bash
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-scheduler.conf
ExecStart=/data/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```

Start and enable at boot

```bash
systemctl daemon-reload && systemctl start kube-scheduler && systemctl enable kube-scheduler
```

The master deployment is now complete. Check the cluster status:

```bash
kubectl get cs
```

Output like the following means the master is healthy:

```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
```

Deploy Worker Nodes

Current node: 192.168.56.14 (this master doubles as a worker node)

Software required: kubelet, kube-proxy

Prepare the base packages

```bash
# Create the install directories (already present on this host from the master setup)
mkdir -p /data/kubernetes/{cfg,bin,ssl,logs}

# Copy the binaries from the server tarball downloaded earlier
wget https://dl.k8s.io/v1.18.4/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kube-proxy /data/kubernetes/bin/
cp kubernetes/server/bin/kubelet /data/kubernetes/bin/
```

Deploy kubelet

Create the kubelet configuration file

```bash
cat > /data/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/data/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/data/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/data/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/data/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
```

- --hostname-override: display name, unique in the cluster
- --network-plugin: enable CNI
- --kubeconfig: empty path; generated automatically, later used to connect to the apiserver
- --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
- --config: parameter configuration file
- --cert-dir: directory for generated kubelet certificates
- --pod-infra-container-image: image of the pause container that manages the Pod network

Create the parameter configuration file

```bash
cat > /data/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
```

Generate bootstrap.kubeconfig

```bash
KUBE_APISERVER="https://192.168.56.14:6443"   # apiserver IP:PORT
TOKEN="2b4b65d2e33e24dc0beafddda6dd4b23"      # must match token.csv

# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Copy the generated kubeconfig into cfg
cp bootstrap.kubeconfig /data/kubernetes/cfg
```

Manage kubelet with systemd

Create the unit file

```bash
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/data/kubernetes/cfg/kubelet.conf
ExecStart=/data/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Start kubelet and enable it at boot

```bash
systemctl daemon-reload && systemctl start kubelet && systemctl enable kubelet
```

If anything goes wrong, check the logs promptly; most problems come from formatting mistakes in /data/kubernetes/cfg/kubelet-config.yml.

Approve the kubelet Certificate Request and Join the Cluster

View kubelet certificate requests

```bash
kubectl get csr
```

Output:

```
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-zJmrG00TW4zKRNPKoNo3ag0ojgPwEM2M3ARCsvVVyiI   60s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
```

Approve the request and join the node to the cluster

```bash
kubectl certificate approve node-csr-zJmrG00TW4zKRNPKoNo3ag0ojgPwEM2M3ARCsvVVyiI
```

View nodes

```bash
kubectl get node
```

Output:

```
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   2s    v1.18.4
```

Note: the node shows NotReady because the network plugin has not been deployed yet.

Deploy kube-proxy

Create the configuration file

```bash
cat > /data/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/data/kubernetes/logs \\
--config=/data/kubernetes/cfg/kube-proxy-config.yml"
EOF
```

Configure the parameter file

```bash
cat > /data/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /data/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF
```

Generate kube-proxy.kubeconfig

Issue the certificate

```bash
cd ~/TLS/k8s/
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```

Generate the kubeconfig file

```bash
KUBE_APISERVER="https://192.168.56.14:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy the kubeconfig into cfg
cp kube-proxy.kubeconfig /data/kubernetes/cfg/
```

Manage kube-proxy with systemd

Create the unit file

```bash
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/data/kubernetes/cfg/kube-proxy.conf
ExecStart=/data/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Start and enable at boot

```bash
systemctl daemon-reload && systemctl start kube-proxy && systemctl enable kube-proxy
```
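kube-proxy exposes its effective proxy mode on the metrics port (10249 in the config above), which gives a quick sanity check; with no mode set it defaults to iptables:

```bash
curl -s 127.0.0.1:10249/proxyMode; echo   # expected output: iptables
```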

Deploy the CNI Network

```bash
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
```

Deploy flannel

```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
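Give the DaemonSet a moment to pull the image, then confirm the flannel pods are Running before checking node status (this revision of the manifest deploys into kube-system; adjust the namespace if yours differs):

```bash
kubectl get pods -n kube-system -o wide | grep flannel
```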

Check deployment status

```
$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    <none>   79m   v1.18.4
```

Authorize apiserver Access to kubelet

Create the configuration

```bash
# The manifest body was truncated in the original post; the standard RBAC
# manifest used for this purpose grants the user "kubernetes" (the CN of the
# apiserver certificate) access to the kubelet APIs.
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
```

Add Worker Nodes

Sync files and configuration

Copy the node-related files from 192.168.56.14 to 192.168.56.15 and 192.168.56.16:

```bash
# On the new nodes first: mkdir -p /data/kubernetes/{cfg,bin,ssl,logs}

# Copy the kubelet and kube-proxy binaries
scp /data/kubernetes/bin/kubelet root@192.168.56.15:/data/kubernetes/bin/
scp /data/kubernetes/bin/kube-proxy root@192.168.56.15:/data/kubernetes/bin/
scp /data/kubernetes/bin/kubelet root@192.168.56.16:/data/kubernetes/bin/
scp /data/kubernetes/bin/kube-proxy root@192.168.56.16:/data/kubernetes/bin/

# Copy the CNI plugins
scp -rp /opt/cni/ root@192.168.56.15:/opt
scp -rp /opt/cni/ root@192.168.56.16:/opt

# Copy the certificates
scp /data/kubernetes/ssl/ca.pem 192.168.56.15:/data/kubernetes/ssl/
scp /data/kubernetes/ssl/ca.pem 192.168.56.16:/data/kubernetes/ssl/

# Copy the configuration files
scp /data/kubernetes/cfg/kube-proxy* 192.168.56.15:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/kube-proxy* 192.168.56.16:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/kubelet* 192.168.56.15:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/kubelet* 192.168.56.16:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/bootstrap.kubeconfig 192.168.56.15:/data/kubernetes/cfg/
scp /data/kubernetes/cfg/bootstrap.kubeconfig 192.168.56.16:/data/kubernetes/cfg/

# Copy the systemd units
scp /usr/lib/systemd/system/kubelet.service 192.168.56.15:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kubelet.service 192.168.56.16:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.15:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.16:/usr/lib/systemd/system/
```

Delete the generated certificates and kubeconfig

On each new node (192.168.56.15 and 192.168.56.16), remove the kubelet state that was copied over from 192.168.56.14:

```bash
rm /data/kubernetes/cfg/kubelet.kubeconfig
rm -f /data/kubernetes/ssl/kubelet*
```

Note: these files are generated automatically once a certificate request is approved, and they are unique to each node, so they must be deleted and regenerated.

Configure the new nodes

Modify the kubelet and kube-proxy configuration files

```bash
vi /data/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1          # k8s-node2 on 192.168.56.16

vi /data/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1            # k8s-node2 on 192.168.56.16
```

Start kubelet and kube-proxy and enable them at boot

```bash
systemctl daemon-reload && systemctl start kubelet && systemctl start kube-proxy
systemctl enable kubelet && systemctl enable kube-proxy
```

Approve the nodes on the master

List the pending CSRs

```
$ kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-63HqXs5ifBWopOS6dZAO8bRJ8PImXljxbOt-2wV5hHg   7m57s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-t6XNO793xatm4gCwQiYH4QDOeIY4yMx8C0SUXSNye7c   38s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
```

Approve the nodes

```bash
kubectl certificate approve node-csr-63HqXs5ifBWopOS6dZAO8bRJ8PImXljxbOt-2wV5hHg
kubectl certificate approve node-csr-t6XNO793xatm4gCwQiYH4QDOeIY4yMx8C0SUXSNye7c
```

Check status

```
$ kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    <none>   3h32m   v1.18.4
k8s-node1    Ready    <none>   106s    v1.18.4
k8s-node2    Ready    <none>   105s    v1.18.4
```

If a newly added node does not become Ready, re-apply kube-flannel.yml.

Deploy Dashboard and CoreDNS

Deploy Dashboard

```bash
# Manifest source
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

# Expose the Service as NodePort
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

# Deploy the dashboard
kubectl apply -f recommended.yaml

# Check status
kubectl get pods,svc -n kubernetes-dashboard
```

Create a dashboard access token

```bash
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# Fetch the token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```

The dashboard can now be reached directly at https://NodeIP:30001, where NodeIP is any host's IP.

Chrome rejects the default self-signed certificate, so the certificate must be re-issued before the dashboard is accessible there.

```bash
# Issue a certificate with cfssl, still on master 192.168.56.14 (the heredoc
# body was truncated in the original; this is a minimal CSR consistent with
# the file names used in the next step)
cd ~/TLS/k8s/
cat > dashboard-csr.json << EOF
{
  "CN": "Dashboard",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare kubernetes-dashboard
mv kubernetes-dashboard.pem kubernetes-dashboard-key.pem /data/kubernetes/ssl/
```

Delete the default secret and create a new one from the self-signed certificate

```bash
# Delete the default secret
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard

# Create a new secret from the self-signed certificate
kubectl create secret generic kubernetes-dashboard-certs \
  --from-file=/data/kubernetes/ssl/kubernetes-dashboard-key.pem \
  --from-file=/data/kubernetes/ssl/kubernetes-dashboard.pem \
  -n kubernetes-dashboard
```

Modify recommended.yaml to use the new certificate

```bash
vim recommended.yaml
          args:
            - --auto-generate-certificates
            - --tls-key-file=kubernetes-dashboard-key.pem
            - --tls-cert-file=kubernetes-dashboard.pem
            - --namespace=kubernetes-dashboard

# Re-apply
kubectl apply -f recommended.yaml

# Fetch the token again; with it you can log in directly at https://NodeIP:30001
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
```

Deploy CoreDNS

CoreDNS provides Service name resolution inside the cluster.

```bash
# Fetch the coredns deployment scripts
git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes/

# Pin the cluster DNS IP in the deploy script; it must match the kubelet
# clusterDNS value (10.0.0.2)
vim deploy.sh
if [[ -z $CLUSTER_DNS_IP ]]; then
  # Default IP to kube-dns IP
  # CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
  CLUSTER_DNS_IP=10.0.0.2
fi

# Deploy
yum -y install epel-release jq
./deploy.sh | kubectl apply -f -
```

DNS resolution test

```
$ kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
```

High-Availability Architecture

As a container cluster system, Kubernetes already provides application-level high availability: health checks plus restart policies give Pods self-healing, the scheduler distributes Pods across nodes while maintaining the desired replica count, and Pods are automatically brought up on other nodes when a node fails.

For the cluster itself, high availability involves two further layers: the etcd database and the Kubernetes master components. etcd is already highly available as a three-node cluster; this section covers making the master highly available.

The master acts as the control center, maintaining the health of the whole cluster by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the master fails, no cluster management is possible through kubectl or the API.

The master runs three services: kube-apiserver, kube-controller-manager, and kube-scheduler. The latter two achieve high availability on their own via leader election, so master HA is mainly about kube-apiserver. Since it serves an HTTP API, it can be made highly available like any web server: put a load balancer in front of it and scale it horizontally.

Multi-master architecture diagram: (not reproduced here)

Scale-Out Procedure

New host: centos7-node7 (192.168.56.17), role: k8s-master2

Initialize the system and install Docker

```bash
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar xf docker-19.03.9.tgz
mv docker/* /usr/bin/
mkdir /data/docker
mkdir /etc/docker
```

Create the etcd certificate directory (mkdir /data/etcd/ssl -p), then copy files from master-1 to the new machine:

```bash
# Create the directories
mkdir /data/kubernetes/{ssl,bin,cfg,logs} -pv

# CNI plugins
scp -rp /opt/cni/ 192.168.56.17:/opt

# Certificates
scp -rp /data/etcd/ssl/* 192.168.56.17:/data/etcd/ssl/
scp -rp /data/kubernetes/ssl/* 192.168.56.17:/data/kubernetes/ssl/

# Binaries
scp -rp /data/kubernetes/bin/kube* 192.168.56.17:/data/kubernetes/bin

# Configuration files
scp /data/kubernetes/cfg/* 192.168.56.17:/data/kubernetes/cfg/

# systemd units
scp -rp /usr/lib/systemd/system/kube* 192.168.56.17:/usr/lib/systemd/system/
scp -rp /usr/lib/systemd/system/docker.service 192.168.56.17:/usr/lib/systemd/system/
```

Modify the configuration files

```bash
$ vim /data/kubernetes/cfg/kube-apiserver.conf
--bind-address=192.168.56.17 \
--advertise-address=192.168.56.17 \

$ vim /data/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master2

$ vim /data/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master2

# as on the worker nodes, remove the kubelet state copied from master-1 so it
# is regenerated for this host
rm -f /data/kubernetes/cfg/kubelet.kubeconfig /data/kubernetes/ssl/kubelet*
```

Start the services

```bash
systemctl daemon-reload
systemctl start docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
systemctl enable docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
```

Check the cluster status from master-1

```
$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
```

Approve the new master node on k8s-master1

```bash
$ kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-wAP8aDK22Olbn5G34KDaH9xvAn49UyE2DkacElw4SFE   2m19s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

$ kubectl certificate approve node-csr-wAP8aDK22Olbn5G34KDaH9xvAn49UyE2DkacElw4SFE
$ kubectl get node
```

At this point, logging in to the dashboard with the previously issued token will fail to fetch cluster resources.

```bash
# Create the authorization. The manifest body was truncated in the original
# post; the sketch below is an assumed, standard cluster-admin binding for a
# dashboard admin service account.
cat > admin.yml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOF

kubectl apply -f admin.yml
```

Deploy Nginx Load Balancing

Nginx is a mainstream web server and reverse proxy; here it provides layer-4 load balancing for the apiservers.

Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a VIP. In this topology, keepalived decides whether to fail over (move the VIP) based on nginx's state: if the nginx master node dies, the VIP is automatically bound on the nginx backup node, keeping the VIP reachable and nginx highly available.

Resource Plan

| Hostname | IP | Role | Software |
| --- | --- | --- | --- |
| centos7-node8 | 192.168.56.18 | nginx + keepalived | nginx, keepalived |
| centos7-node9 | 192.168.56.19 | nginx + keepalived | nginx, keepalived |

Install and Configure

Install the packages

```bash
yum install epel-release -y
yum install nginx keepalived -y
```

nginx configuration

```bash
$ vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two master apiservers
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.56.14:6443;
        server 192.168.56.17:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name _;
        location / {
        }
    }
}
```
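The stream block requires nginx's stream module, which CentOS packages separately; install it if `nginx -t` complains about an unknown stream directive, then validate the configuration:

```bash
yum -y install nginx-mod-stream   # provides the stream {} module on CentOS 7
nginx -t
```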

keepalived configuration

```bash
$ vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.56.111/24
    }
    track_script {
        check_nginx
    }
}
```

Notes: virtual_router_id is the VRRP instance ID and must be unique per instance. priority is 100 here; set it to 90 on the backup server (which conventionally also uses state BACKUP), so the two machines differ.

The health-check script referenced above:

```bash
$ vim /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ]; then
    exit 1
else
    exit 0
fi

$ chmod +x /etc/keepalived/check_nginx.sh
```
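To verify failover once both nodes are running, stop nginx on the active node and watch the VIP move to the backup (the interface is eth1, as configured above):

```bash
# on the active nginx node
systemctl stop nginx

# on the backup node, the VIP should appear within a few advert intervals
ip addr show eth1 | grep 192.168.56.111
```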

Start the services and enable them at boot

```bash
systemctl daemon-reload && systemctl start nginx && systemctl start keepalived && systemctl enable nginx && systemctl enable keepalived
```
