Deploying k8s on CentOS 8.0



I. Preparation on the master and node machines

1. Disable firewalld and SELinux

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

2. Set the system time zone and synchronize the system clock

timedatectl set-timezone Asia/Shanghai

systemctl enable --now chronyd

chronyc makestep

3. Set up SSH mutual trust between the hosts

ssh-keygen

ssh-copy-id root@<node-ip>    # run once per node; replace <node-ip> with each node's address

II. Disable swap

swapoff -a

sed -i '/swap/s/^/#/g' /etc/fstab
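Optional check (not in the original): with swap disabled, swapon prints nothing and free reports 0B of swap:

swapon --show

free -h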

III. Deploy Docker

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum list docker-ce --showduplicates | sort -r

yum install docker-ce-19.03.13-3.el8 docker-ce-cli-19.03.13-3.el8 containerd.io

systemctl enable --now docker

docker info

curl -L "https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose    # the release version was truncated in the original; replace <version> with a docker-compose release tag

chmod +x /usr/local/bin/docker-compose

ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

docker-compose --version

mkdir -p /etc/systemd/system/docker.service.d

# Optional: HTTP proxy for the Docker daemon. The file name and proxy address were truncated in the original; http-proxy.conf follows Docker's documented convention, and <proxy-host>:<port> is a placeholder.

cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://<proxy-host>:<port>" "HTTPS_PROXY=http://<proxy-host>:<port>" "NO_PROXY=localhost,127.0.0.1"
EOF

systemctl daemon-reload;systemctl restart docker

IV. Configure the container runtime

Runtime: to run containers in Pods, Kubernetes uses a container runtime. By default, Kubernetes talks to the runtime you choose through the Container Runtime Interface (CRI). A container runtime must be installed on every node in the cluster so that Pods can run there; if both Docker and containerd are detected, Docker takes precedence.
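Once the cluster is up (section IX onwards), you can confirm which runtime each node actually registered with; the CONTAINER-RUNTIME column of the wide output should show docker here (an optional check, not part of the original steps):

kubectl get nodes -o wide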

Prerequisites:

# Load the required kernel modules (the target file name was truncated in the original; any file under /etc/modules-load.d/ works):

cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay

modprobe br_netfilter

# Required sysctl parameters; these persist across reboots.

cat <<EOF | tee /etc/sysctl.d/k8s.conf   # file name truncated in the original
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the sysctl parameters without rebooting

sudo sysctl --system
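Optional check (not in the original): confirm the modules are loaded and the parameters took effect:

lsmod | grep br_netfilter

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables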

V. Configure the Docker daemon, in particular to use systemd to manage the containers' cgroups

(Cgroup drivers: control groups constrain the resources allocated to processes. The kubelet and the container runtime must use the same cgroup driver, and systemd is the recommended one on systemd-based hosts, so Docker is switched to it below.)

mkdir /etc/docker

# The registry mirror URL was truncated in the original; <mirror-url> is a placeholder, and the "insecure-registries" entry is reconstructed from the remaining fragment.

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["<mirror-url>"],
  "insecure-registries": ["myregistrydomain.com:5000"]
}
EOF

systemctl daemon-reload

systemctl restart docker

# For systems running Linux kernel 4.0 or later, or RHEL/CentOS with kernel 3.10.0-514 or later, overlay2 is the preferred storage driver.
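A quick check (not in the original) that the new settings are active after the restart; it should print systemd / overlay2:

docker info --format '{{.CgroupDriver}} / {{.Driver}}'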

VI. Install kubeadm, kubelet and kubectl

The following packages need to be installed on every machine:

kubeadm: the command used to bootstrap the cluster.

kubelet: the component that runs on every node in the cluster and starts Pods and containers.

kubectl: the command-line tool used to talk to the cluster.

# The baseurl was truncated in the original; the Aliyun mirror below is a common substitute inside China (the upstream repository is packages.cloud.google.com/yum).

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
exclude=kubelet kubeadm kubectl
EOF

yum -y install kubeadm-1.19.0 kubectl-1.19.0 kubelet-1.19.0 --disableexcludes=kubernetes

systemctl enable --now kubelet

kubelet will now restart every few seconds, crash-looping while it waits for instructions from kubeadm.
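The loop can be observed with the usual systemd tools (optional, not part of the original steps):

systemctl status kubelet

journalctl -u kubelet -f    # Ctrl-C to stop following the log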

VII. Configure command auto-completion

# Install the bash-completion package

yum install bash-completion -y

# Enable kubectl and kubeadm command completion; takes effect at the next login

kubectl completion bash >/etc/bash_completion.d/kubectl

kubeadm completion bash > /etc/bash_completion.d/kubeadm

VIII. Pre-pull the Kubernetes images

Because of network restrictions inside China, the Kubernetes images have to be pulled from mirror sites or from images pushed to Docker Hub by other users.

kubeadm config images list --kubernetes-version v1.19.0

Script: pull.sh

#!/bin/bash
# Script For Quick Pull K8S Docker Images

KUBE_VERSION=v1.19.0
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.7.0
ETCD_VERSION=3.4.9-1

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION

# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to the k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the underlying images are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

After running the script, 7 images are present: kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler, etcd, coredns and pause:

[root@k8smaster ~]# docker images |grep "k8s.gcr.io"

k8s.gcr.io/kube-proxy                                         v1.19.0             bc9c328f379c        10 months ago       118MB

k8s.gcr.io/kube-apiserver                                     v1.19.0             1b74e93ece2f        10 months ago       119MB

k8s.gcr.io/kube-controller-manager                            v1.19.0             09d665d529d0        10 months ago       111MB

k8s.gcr.io/kube-scheduler                                     v1.19.0             cbdc8369d8b1        10 months ago       45.7MB

k8s.gcr.io/etcd                                               3.4.9-1             d4ca8726196c        12 months ago       253MB

k8s.gcr.io/coredns                                            1.7.0               bfe3a36ebd25        12 months ago       45.2MB

k8s.gcr.io/pause                                              3.2                 80d28bedfe5d        17 months ago       683kB

IX. Initialize the k8s master node

# Run on the master node

kubeadm config print init-defaults >init.yaml

Contents of init.yaml (advertiseAddress needs to be modified for your environment, and podSubnet needs to be added):

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.23.10   # modified: the master node's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # Added manually: podSubnet is carved out of an internal range (for example 172.16.0.0/16 split into
  # 172.16.1.0/24, 172.16.2.0/24, ...). Each node hands out Pod IPs only from its slice of this range,
  # which avoids IP address conflicts.
  podSubnet: "10.244.0.0/16"
scheduler: {}

kubeadm init phase preflight    # dry-run the preflight checks

WARNINGs at this stage are normal.

10.244.0.0/16 is the Pod network that flannel uses by default; the value you pick depends on the network add-on's requirements. It can be changed, but then the Network field in kube-flannel.yml must be changed to match. It corresponds to the following part of kube-flannel.yml:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
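If a different Pod CIDR is wanted, the Network value above must be edited to match the --pod-network-cidr passed to kubeadm init. A minimal sketch, assuming a hypothetical 172.16.0.0/16 range (run it after downloading kube-flannel.yml in section XI):

sed -i 's|10.244.0.0/16|172.16.0.0/16|' kube-flannel.yml    # 172.16.0.0/16 is only an example value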

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.19.0 |tee init.log

The output captured in init.log is useful later; keep it.

Note down the command the worker nodes will use to join the master:

kubeadm join 192.168.23.10:6443 --token 2ax0m9.qbu5gri5c9rare3i     --discovery-token-ca-cert-hash sha256:ea68c3242205dfddb052d60b0d79dc552f5dda5aa9e6e367b6075b53a59dabc2

If you did not record it, it can be regenerated with:

kubeadm token create --print-join-command 2>&1|tail -n 1

X. Configure kubectl credentials on the master

# Run on the master node

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile

. /etc/profile
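A quick sanity check (not in the original): kubectl should now reach the API server; the master will report NotReady until the network add-on from the next section is installed:

kubectl get nodes

kubectl cluster-info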

XI. Install the network add-on

# Run on the master node

yum install -y wget

# Download the latest flannel manifest

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml    # the URL was truncated in the original; this is the flannel project's published manifest

kubectl apply -f kube-flannel.yml
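Optional check (not in the original): wait until the flannel and coredns Pods are Running and the master reports Ready:

kubectl get pods -n kube-system -o wide

kubectl get nodes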

XII. Export the images for the other nodes

docker save `docker images |egrep "(proxy|apiserver|controller-manager|scheduler|etcd|coredns|pause)"|awk '/k8s.gcr.io/{printf"%s ",$1}'` >k8s_imagesv1.19.0.tar

XIII. Copy the image archive to the node machines and import it
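For example, the archive can be copied with scp (the node host name is a placeholder) before running docker load on each node:

scp k8s_imagesv1.19.0.tar root@<node-hostname>:/root/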

docker load < k8s_imagesv1.19.0.tar

XIV. Run the join command on each node

kubeadm join 192.168.23.10:6443 --token 2ax0m9.qbu5gri5c9rare3i     --discovery-token-ca-cert-hash sha256:ea68c3242205dfddb052d60b0d79dc552f5dda5aa9e6e367b6075b53a59dabc2
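Back on the master, the new nodes should appear and turn Ready once their flannel and kube-proxy Pods are running (optional check, not in the original):

kubectl get nodes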

XV. Run application workloads on the cluster

Create nginx.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

kubectl apply -f nginx.yml

deployment.apps/nginx created

Scale out:

kubectl scale --current-replicas=3 --replicas=6 deployment/nginx

deployment.apps/nginx scaled
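Optional check (not in the original): confirm the Deployment now runs 6 replicas spread across the nodes:

kubectl get deployment nginx

kubectl get pods -l app=nginx -o wide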
