Installing k8s (Part 2)

User submission · 2022-09-12


1. Environment

OS: CentOS 7.9
master: 192.168.199.131 (2 cores, 4 GB)
node1: 192.168.199.129 (2 cores, 2 GB)
node2: 192.168.199.130 (2 cores, 2 GB)

1) Configure the Kubernetes yum repo (official install), /etc/yum.repos.d/kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=

2) Disable the firewall and SELinux:

systemctl stop firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

3) Set the hostname:

hostnamectl set-hostname master

4) Disable the swap partition:

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

2. Kernel parameter tuning

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Problem: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
Fix: modprobe br_netfilter

Problem: /proc/sys/net/ipv4/ip_forward contents are not set to 1
Fix: echo 1 > /proc/sys/net/ipv4/ip_forward
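The fixes above only last until the next reboot. A conventional way to persist them (not from the original post; the file names under /etc/modules-load.d and /etc/sysctl.d are the usual convention, and writing them requires root):

```shell
# Load br_netfilter at every boot:
cat > /etc/modules-load.d/k8s.conf <<'EOF'
br_netfilter
EOF

# Kernel parameters, picked up by `sysctl --system`:
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

modprobe br_netfilter
sysctl --system
```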

3. Installing via yum

# Install docker following the Aliyun mirror instructions
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce && systemctl enable docker.service && systemctl start docker

# Install the Kubernetes components (master and nodes)
yum install -y kubelet-1.19.1 kubeadm-1.19.1 kubectl-1.19.1
systemctl enable kubelet

kubelet: runs on every node in the cluster and is responsible for starting Pods and containers
kubeadm: bootstraps (initializes) the cluster
kubectl: the Kubernetes command-line tool; with kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components

4. kubeadm init (master)

kubeadm init --apiserver-advertise-address=192.168.199.131 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

On success it prints: Your Kubernetes control-plane has initialized successfully!

At the end, note the token and hash — the worker nodes need them to join later:

kubeadm join 192.168.199.131:6443 --token 2del1g.bvtimznd86php4v1 \
    --discovery-token-ca-cert-hash sha256:f0d1fdd4a5756a88a7d5f2b5e5067c5a6c4d13bd44be832a6df87bcc6b04c466

Note: the service CIDR must not overlap or conflict with the Pod CIDR or the host network. Pick a private address range that neither the host network nor the Pod CIDR uses; for example, with a Pod CIDR of 10.244.0.0/16, 10.96.0.0/12 works as the service CIDR. Any ranges are fine as long as they do not overlap.
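The overlap rule above can be checked mechanically before running kubeadm init. A minimal sketch (`ip2int` and `cidr_overlap` are ad-hoc helpers written for this post, not kubeadm functionality):

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a single integer.
ip2int() {
  set -- $(printf '%s' "$1" | tr '.' ' ')
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Print "overlap" or "no overlap" for two CIDR blocks.
cidr_overlap() {
  ip1=${1%/*}; p1=${1#*/}; ip2=${2%/*}; p2=${2#*/}
  # Network start = address with host bits cleared; end = start + block size - 1.
  s1=$(( $(ip2int "$ip1") >> (32 - p1) << (32 - p1) ))
  s2=$(( $(ip2int "$ip2") >> (32 - p2) << (32 - p2) ))
  e1=$(( s1 + (1 << (32 - p1)) - 1 ))
  e2=$(( s2 + (1 << (32 - p2)) - 1 ))
  if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then
    echo "overlap"
  else
    echo "no overlap"
  fi
}

cidr_overlap 10.96.0.0/12 10.244.0.0/16     # service vs pod CIDR used here → no overlap
cidr_overlap 10.96.0.0/12 192.168.199.0/24  # service CIDR vs host network → no overlap
```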

Alternative init method: initialize from an edited configuration file.

kubeadm config print init-defaults > kubeadm-init.yaml
kubeadm init --config kubeadm-init.yaml

List the images the installation needs to pull:

[root@master ~]# kubeadm config images list --config kubeadm-init.yaml
k8s.gcr.io/kube-apiserver:v1.19.0
k8s.gcr.io/kube-controller-manager:v1.19.0
k8s.gcr.io/kube-scheduler:v1.19.0
k8s.gcr.io/kube-proxy:v1.19.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

kubeadm init flags:

--apiserver-advertise-address string   IP address the API server binds to and advertises.
--apiserver-bind-port int32            Port the API server listens on (default 6443).
--apiserver-cert-extra-sans strings    Extra Subject Alternative Names (SANs) for the API server certificate; may be IPs or DNS names. The certificate is bound to its SANs.
--cert-dir string                      Directory to store certificates in (default "/etc/kubernetes/pki").
--certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string                        Path to a kubeadm configuration file.
--cri-socket string                    Path to the CRI socket; if empty, kubeadm auto-detects it. Only set this when the machine has multiple CRI sockets or a non-standard one.
--dry-run                              Don't apply any changes; just print what would be done.
--feature-gates string                 Extra features to enable, as key=value pairs.
-h, --help                             Help for init.
--ignore-preflight-errors strings      Pre-flight errors to downgrade to warnings, e.g. 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--image-repository string              Registry to pull control-plane images from (default "k8s.gcr.io").
--kubernetes-version string            Kubernetes version for the control plane (default "stable-1").
--node-name string                     Name for this node; defaults to the node's hostname.
--pod-network-cidr string              CIDR for the Pod network; the control plane automatically hands out ranges from it to the other nodes for the containers they start.
--service-cidr string                  IP range for Services (default "10.96.0.0/12").
--service-dns-domain string            DNS suffix for Services, e.g. "myorg.internal" (default "cluster.local").
--skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
--skip-phases strings                  Phases to skip.
--skip-token-print                     Don't print the default bootstrap token generated by kubeadm init.
--token string                         Token to establish mutual trust between nodes and the control plane; format [a-z0-9]{6}\.[a-z0-9]{16}, e.g. abcdef.0123456789abcdef.
--token-ttl duration                   How long before the token is automatically deleted (e.g. 1s, 2m, 3h). '0' means the token never expires (default 24h0m0s).
--upload-certs                         Upload the control-plane certificates to the kubeadm-certs Secret.

Errors:

Error 1:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL failed with error: Get "dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.

Fix:

mkdir /etc/systemd/system/kubelet.service.d
echo 'Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"' > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload

Error 2:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition
This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Cause: the docker images did not start.
Fix: work back through the steps above; in my case, downgrading from the latest version resolved it.

systemctl restart docker  # the process may simply have hung
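Both failures come down to node preparation. A small pre-flight sketch that encodes the checks from the steps above (the `check_*` helpers are hypothetical, written for this post; each takes command output as an argument so the logic is testable without root):

```shell
#!/bin/sh
# Pass "$(swapon --summary)"; empty output means swap is off.
check_swap() {
  [ -z "$1" ] && echo "ok" || echo "FAIL: swap is still on"
}

# Pass "$(getenforce)".
check_selinux() {
  case "$1" in
    Disabled|Permissive) echo "ok" ;;
    *) echo "FAIL: SELinux is $1" ;;
  esac
}

# Pass "$(lsmod)".
check_br_netfilter() {
  case "$1" in
    *br_netfilter*) echo "ok" ;;
    *) echo "FAIL: br_netfilter not loaded" ;;
  esac
}

# Deterministic examples:
check_swap ""                              # swap off → ok
check_selinux "Disabled"                   # → ok
check_br_netfilter "br_netfilter 24576 0"  # module loaded → ok
```

On a real node you would feed live output, e.g. `check_swap "$(swapon --summary)"`.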

5. Testing the kubectl command

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
master   NotReady   master   7m39s   v1.19.1

6. Installing the flannel Pod network plugin (master)

kubectl apply -f kube-flannel.yml

Result (run on master and nodes):

[root@master docker]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Then check the node status — the STATUS seen in step 7 has changed:

[root@master docker]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   57m   v1.19.1
node2   Ready    <none>   24m   v1.19.1
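Rather than eyeballing the STATUS column, the readiness check can be scripted. A sketch (`all_nodes_ready` is a hypothetical helper, not part of kubectl; it reads `kubectl get nodes` output on stdin):

```shell
#!/bin/sh
# Print "ready" if every node's STATUS column is exactly "Ready", else "not ready".
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { print (bad ? "not ready" : "ready") }'
}

# Deterministic example, using the output shown above:
printf '%s\n' \
  'NAME    STATUS   ROLES    AGE   VERSION' \
  'node1   Ready    master   57m   v1.19.1' \
  'node2   Ready    <none>   24m   v1.19.1' | all_nodes_ready   # prints "ready"
```

Against a live cluster you could poll: `until kubectl get nodes | all_nodes_ready | grep -qx ready; do sleep 5; done`.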

7. Joining node2 to the k8s master

On the node, run:

kubeadm join 192.168.199.131:6443 --token 2del1g.bvtimznd86php4v1 \
    --discovery-token-ca-cert-hash sha256:f0d1fdd4a5756a88a7d5f2b5e5067c5a6c4d13bd44be832a6df87bcc6b04c466

Check from the master:

[root@master docker]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   32m   v1.19.1
node2    NotReady   <none>   34s   v1.19.1

At this point the k8s setup is complete. Some follow-up questions:

1. What if the token expires — how do I generate a new one? Tokens expire after 24 hours.

[root@master docker]# kubeadm token create
W0223 15:44:03.687823 14187 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
eoen9s.xyp0r4wenn8qufr0

Or generate a token and print the matching hash in one step:

[root@master ~]# kubeadm token create --print-join-command
W0704 14:36:06.834121 9825 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.199.129:6443 --token ioou1p.21ezxs9l5crmgard --discovery-token-ca-cert-hash sha256:f0d1fdd4a5756a88a7d5f2b5e5067c5a6c4d13bd44be832a6df87bcc6b04c466

2. Listing existing tokens:

[root@ docker]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
2del1g.bvtimznd86php4v1   23h   2022-02-24T15:03:49+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
eoen9s.xyp0r4wenn8qufr0   23h   2022-02-24T15:44:03+08:00   authentication,signing                                                              system:bootstrappers:kubeadm:default-node-token

3. Getting the sha256 hash of the CA certificate:

[root@master docker]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
f0d1fdd4a5756a88a7d5f2b5e5067c5a6c4d13bd44be832a6df87bcc6b04c466

4. How do I bring a NotReady node back into the cluster?

[root@master ~]# kubectl describe nodes node2   # find out why it is NotReady
The Message under Conditions reads: Kubelet stopped posting node status
Restart the kubelet on that node and check again; it comes back to normal.

5. Deleting node2 and rejoining it (if the node already has images on it, look for the root cause first).

First, on the master:

[root@master ~]# kubectl drain node2 --delete-local-data --force --ignore-daemonsets
node/node2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-c72bc, kube-system/kube-proxy-gzwrf
node/node2 drained
[root@master ~]# kubectl get nodes
NAME     STATUS                     ROLES    AGE     VERSION
master   Ready                      master   130d    v1.19.1
node2    Ready,SchedulingDisabled   <none>   4m28s   v1.19.1
[root@master ~]# kubectl delete node node2
node "node2" deleted
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   130d   v1.19.1

Then, on the node2 client:

[root@node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
....

Rejoin node2 to the cluster:

[root@node2 ~]# kubeadm join 192.168.199.129:6443 --token l4tvtl.jr5twaznicjax20r --discovery-token-ca-cert-hash sha256:f0d1fdd4a5756a88a7d5f2b5e5067c5a6c4d13bd44be832a6df87bcc6b04c466
[preflight] Running pre-flight checks
....
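Before pasting a token and hash into kubeadm join, their formats can be sanity-checked against the patterns kubeadm documents ([a-z0-9]{6}\.[a-z0-9]{16} for tokens, sha256: plus 64 hex digits for the hash). A sketch (`valid_token` and `valid_ca_hash` are hypothetical helpers for this post):

```shell
#!/bin/sh
# Print "valid" if $1 matches the bootstrap-token format, else "invalid".
valid_token() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "valid" || echo "invalid"
}

# Print "valid" if $1 matches the --discovery-token-ca-cert-hash format.
valid_ca_hash() {
  printf '%s' "$1" | grep -Eq '^sha256:[a-f0-9]{64}$' && echo "valid" || echo "invalid"
}

valid_token 2del1g.bvtimznd86php4v1   # token from this guide → valid
valid_ca_hash sha256:f0d1fdd4a5756a88a7d5f2b5e5067c5a6c4d13bd44be832a6df87bcc6b04c466   # → valid
```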

8. Querying nodes

# List nodes
kubectl get nodes
# List pods; usually pass "-n" for the namespace. Without it, this is equivalent to -n default.
# List the runtime container pods
kubectl get pods -n kube-system
