Installing a Single-Master Kubernetes Cluster with kubeadm



Machine preparation

Environment

| Hostname | Internal IP | External IP | OS | k8s version | Docker version |
| --- | --- | --- | --- | --- | --- |
| master01 | 10.0.0.106 | 192.168.1.9 | CentOS Linux release 7.8.2003 | v1.15.2 | 18.09.7 |
| node01 | 10.0.0.107 | 192.168.1.11 | CentOS Linux release 7.8.2003 | v1.15.2 | 18.09.7 |
| node02 | 10.0.0.108 | 192.168.1.15 | CentOS Linux release 7.8.2003 | v1.15.2 | 18.09.7 |

* Configure the hostnames

```
# Set the hostname on each machine
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

# Add name resolution entries
cat <<EOF >>/etc/hosts
10.0.0.106 master01
10.0.0.107 node01
10.0.0.108 node02
EOF
```
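A quick way to confirm the settings took effect on every machine (a small optional check, not part of the original steps):

```
# Verify the hostname and /etc/hosts resolution
hostname
for h in master01 node01 node02; do getent hosts "$h"; done
```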

* Check the firewall

```
systemctl status firewalld

# If firewalld is running, stop and disable it
systemctl stop firewalld
systemctl disable firewalld
```
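If you would rather keep firewalld running, an alternative sketch is to open only the ports Kubernetes needs (port list per the Kubernetes v1.15 documentation; adjust to your setup):

```
# On the master: open the control-plane ports instead of disabling firewalld
firewall-cmd --permanent --add-port=6443/tcp          # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd client/peer
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, scheduler, controller-manager
firewall-cmd --reload

# On the workers: kubelet and NodePort services
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
```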

* Check that SELinux is disabled

```
[root@node02 ~]# getenforce
Disabled
```

```
# Takes effect immediately, but is lost after a reboot
[root@master ~]# setenforce 0
# Disable permanently
[root@master ~]# vi /etc/selinux/config
SELINUX=disabled
```
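The permanent change can also be made without opening an editor; a one-line equivalent of the vi edit above:

```
# Switch SELINUX=enforcing to SELINUX=disabled in /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```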

* Disable swap

```
# Disable swap
swapoff -a; sed -i '/swap/s/^/#/' /etc/fstab
```

```
[root@node02 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1837         102        1571           8         164        1590
Swap:          2047           0        2047
[root@node02 ~]# swapoff -a
[root@node02 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1837         101        1572           8         164        1591
Swap:             0           0           0
[root@node02 ~]#
```

```
[root@node02 ~]# grep swap /etc/fstab
/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@node02 ~]# sed -i '/swap/s/^/#/' /etc/fstab
[root@node02 ~]# grep swap /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@node02 ~]#
```

* Kernel settings

```
# Create k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl --system
```
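On a stock CentOS 7 kernel the two bridge-nf keys only exist after the br_netfilter module is loaded, so sysctl may complain about unknown keys; loading the module first should fix that (a minimal sketch):

```
# Load br_netfilter now and on every boot, then re-apply the sysctls
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system
```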

* Install Docker

```
# Install Docker using the Aliyun mirror
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
```

```
# Set Docker's cgroup driver to systemd
[root@node01 ~]# cat <<EOF >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```

```
# Start Docker and enable it at boot
[root@node01 ~]# systemctl restart docker; systemctl enable docker; docker info | grep Cgroup
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Cgroup Driver: systemd
```

* Install kubeadm

```
# Add the Kubernetes yum repository (Aliyun mirror) and install the pinned versions
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum -y makecache
yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
```

```
# Verify the packages installed
rpm -qa kubeadm kubelet kubectl
# Enable kubelet at boot
systemctl enable kubelet
```
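Before initializing, it is worth confirming that all three machines ended up with the same pinned versions (an optional check):

```
# All of these should report v1.15.2
kubeadm version -o short
kubelet --version
kubectl version --client --short
```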

* Initialize the cluster with kubeadm
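kubeadm's preflight output below notes that the control-plane images can be pulled ahead of time; doing that from the same Aliyun mirror makes a slow or failed download easier to spot before init starts:

```
# Optional: pre-pull the control-plane images before kubeadm init
kubeadm config images pull \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.15.2
```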

```
# Create the first (control-plane) node on master01
kubeadm init --kubernetes-version=v1.15.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```

```
# Output:
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.9]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.1.9 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.1.9 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 39.007909 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[kubelet-check] Initial timeout of 40s passed.
[bootstrap-token] Using token: xknie1.dm76a39ntgnwkyid
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.9:6443 --token xknie1.dm76a39ntgnwkyid \
    --discovery-token-ca-cert-hash sha256:76896f39087f6fa66a43a0c336c081649ae65a781c80d140ba492b57bb038df9
```

```
# Configure kubectl as instructed above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
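At this point kubectl should be able to reach the new API server; a quick sanity check:

```
# Confirm connectivity and control-plane health
kubectl cluster-info
kubectl get cs   # scheduler, controller-manager and etcd-0 should report Healthy
```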

```
# The pulled images:
[root@master01 ~]# docker image ls
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.15.2   167bbf6c9338   12 months ago   82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.15.2   34a53be6c9a7   12 months ago   207MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.15.2   9f5df470155d   12 months ago   159MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.15.2   88fa9cb27bd2   12 months ago   81.1MB
registry.aliyuncs.com/google_containers/coredns                   1.3.1     eb516548c180   19 months ago   40.3MB
registry.aliyuncs.com/google_containers/etcd                      3.3.10    2c4adeb21b4f   20 months ago   258MB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   2 years ago     742kB
[root@master01 ~]#
```

```
# Check the nodes
[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   3m19s   v1.15.2
[root@master01 ~]#
```

```
# The node turns Ready once the flannel network add-on is installed
# Download the manifest and apply it
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master01 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@master01 ~]#
```

```
# The flannel image pull fails
[root@master01 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS              RESTARTS   AGE
coredns-bccdc95cf-pfgls            0/1     Pending             0          15m
coredns-bccdc95cf-qcb4d            0/1     Pending             0          15m
etcd-master01                      1/1     Running             0          14m
kube-apiserver-master01            1/1     Running             0          14m
kube-controller-manager-master01   1/1     Running             0          15m
kube-flannel-ds-amd64-jdmjs        0/1     Init:ErrImagePull   0          93s
kube-proxy-bx8jv                   1/1     Running             0          15m
kube-scheduler-master01            1/1     Running             0          15m
[root@master01 ~]#
```

```
# The pull from quay.io times out; the pod's events (kubectl describe) show:
Normal   Pulling   52s (x3 over 2m25s)   kubelet, master01   Pulling image "quay.io/coreos/flannel:v0.12.0-amd64"
Warning  Failed    39s (x3 over 119s)    kubelet, master01   Error: ErrImagePull
Warning  Failed    39s                   kubelet, master01   Failed to pull image "quay.io/coreos/flannel:v0.12.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get net/TLS handshake timeout
Normal   BackOff   1s (x5 over 118s)     kubelet, master01   Back-off pulling image "quay.io/coreos/flannel:v0.12.0-amd64"
Warning  Failed    1s (x5 over 118s)     kubelet, master01   Error: ImagePullBackOff
[root@master01 ~]#
```

```
# Workaround: pull the same image from an Aliyun mirror, then retag it to the name the pod expects
docker pull registry.cn-hangzhou.aliyuncs.com/chentging/flannel:v0.12.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/chentging/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
```
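Because kube-flannel runs as a DaemonSet, every node needs this image locally (the node02 image list later in this post shows the same quay.io tag). The pull-and-retag can be repeated per node or scripted; a sketch assuming root ssh access from master01:

```
# Pull from the mirror and retag on each worker node
for n in node01 node02; do
  ssh root@$n "docker pull registry.cn-hangzhou.aliyuncs.com/chentging/flannel:v0.12.0-amd64 && docker tag registry.cn-hangzhou.aliyuncs.com/chentging/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64"
done
```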

```
# All pods are now running
[root@master01 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-pfgls            1/1     Running   0          23m
coredns-bccdc95cf-qcb4d            1/1     Running   0          23m
etcd-master01                      1/1     Running   0          22m
kube-apiserver-master01            1/1     Running   0          22m
kube-controller-manager-master01   1/1     Running   0          22m
kube-flannel-ds-amd64-jdmjs        1/1     Running   0          8m44s
kube-proxy-bx8jv                   1/1     Running   0          23m
kube-scheduler-master01            1/1     Running   0          22m
```

```
# The master node is now Ready
[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   23m   v1.15.2
[root@master01 ~]#
```

* Join the worker nodes

```
# Join node01 and node02 to the cluster
kubeadm join 192.168.1.9:6443 --token xknie1.dm76a39ntgnwkyid \
    --discovery-token-ca-cert-hash sha256:76896f39087f6fa66a43a0c336c081649ae65a781c80d140ba492b57bb038df9
```
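Bootstrap tokens expire after 24 hours by default; if the token from kubeadm init is no longer valid when a node joins later, a fresh join command can be generated on the master:

```
# On master01: create a new token and print the full join command
kubeadm token create --print-join-command
```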

```
# Output:
[root@node01 ~]# kubeadm join 192.168.1.9:6443 --token xknie1.dm76a39ntgnwkyid \
    --discovery-token-ca-cert-hash sha256:76896f39087f6fa66a43a0c336c081649ae65a781c80d140ba492b57bb038df9
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node01 ~]#
```

```
# Check the cluster state
[root@master01 ~]# kubectl get node
NAME       STATUS   ROLES    AGE     VERSION
master01   Ready    master   33m     v1.15.2
node01     Ready    <none>   7m30s   v1.15.2
node02     Ready    <none>   5m20s   v1.15.2
[root@master01 ~]#
```

```
# After a successful join, the worker has pulled the images it needs
[root@node02 ~]# docker images
REPOSITORY                                           TAG             IMAGE ID       CREATED         SIZE
quay.io/coreos/flannel                               v0.12.0-amd64   4e9f801d2217   4 months ago    52.8MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.15.2         167bbf6c9338   12 months ago   82.4MB
registry.aliyuncs.com/google_containers/pause        3.1             da86e6ba6ca1   2 years ago     742kB
[root@node02 ~]#
```

* Verification

```
# Create a test deployment
[root@master01 ~]# kubectl create deployment nginx --image=nginx
```

```
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-bbrm5   1/1     Running   0          89s   10.244.1.2   node01   <none>           <none>
```

```
# Access test: the pod serves the default nginx welcome page
[root@master01 ~]# curl 10.244.1.2
...
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to ...
```
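Curling the pod IP works from cluster nodes because flannel routes the pod network; to reach nginx from outside, one option (not part of the original walkthrough) is to expose the deployment as a NodePort service:

```
# Expose port 80 on a NodePort and test it via a node IP
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx   # note the mapped port, e.g. 80:3xxxx/TCP
curl http://node01:$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
```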
