Kubernetes and CSRs

Contributed post, 2022-09-28


Prologue:

Today a friend asked me why kubectl get csr in his Kubernetes cluster returned no output. I tried it, and my own cluster indeed had no CSRs either. So: what is a CSR? Why should kubectl get csr necessarily return anything? And when do CSRs appear in the first place (meaning the ones the system creates by default, not ones you create yourself)?

1. Kubernetes and CSRs

1. What is a CSR?

CSR is short for CertificateSigningRequest, that is, a certificate signing request. For the precise definition, see the official Kubernetes documentation. CSRs show up, for example, when a node joins a cluster: one way to run kubeadm join is to pass a bootstrap discovery token together with the API server's IP address, in the following form:

kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443

2. But it should be stressed that this is not the only way to join a cluster:

You can instead provide a file - a subset of a standard kubeconfig file. The file can be a local file or one downloaded over an HTTPS URL:

kubeadm join --discovery-file path/to/file.conf        (local file)
kubeadm join --discovery-file https://url/file.conf    (HTTPS URL)

Either way, a CSR is produced during the join. Let's see that below.
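For reference, a discovery file is just a stripped-down kubeconfig carrying the cluster endpoint and CA, in the same shape as the cluster-info ConfigMap that kubeadm publishes. A rough sketch only (the server address is this article's master; the certificate-authority-data value is a placeholder, not real data):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64-encoded cluster CA>
    server: https://10.0.4.2:6443
  name: ""
contexts: null
preferences: {}
users: null
```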

2. Watching a CSR get created in a real environment

Two Rocky 8.5 hosts serve as the example. This is a quick demo only, not for production use (the systems are not tuned at all; the point is just to run a join). Many steps here are test-only shortcuts!

Hostname        IP          Role
k8s-master-01   10.0.4.2    master node
k8s-work-01     10.0.4.36   worker node

1. Building a simple Kubernetes cluster on Rocky with kubeadm

1. Basic system settings

I will skip the kernel upgrade and other tuning since this is only a CSR demo; just run an update:

[root@k8s-master-01 ~]# yum update -y
[root@k8s-work-01 ~]# yum update -y

### Add the Aliyun Kubernetes repo
[root@k8s-master-01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Disable the firewall
[root@k8s-master-01 ~]# systemctl stop firewalld
[root@k8s-master-01 ~]# systemctl disable firewalld
# Disable SELinux
[root@k8s-master-01 ~]# sed -ie 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8s-master-01 ~]# setenforce 0
# Let iptables see bridged traffic
[root@k8s-master-01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

2. Install and configure containerd

1. Install containerd

[root@k8s-work-01 ~]# dnf install dnf-utils device-mapper-persistent-data lvm2
[root@k8s-work-01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Failed to set locale, defaulting to C.UTF-8
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-work-01 ~]# sudo yum update -y && sudo yum install -y containerd.io

2. Generate the config file and change the sandbox_image registry

Generate the default config file, then change the pause image registry to the Aliyun mirror:

[root@k8s-work-01 yum.repos.d]# containerd config default > /etc/containerd/config.toml
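The sandbox_image edit itself is a one-line sed. A sketch against a scratch copy so it is safe to run anywhere (the exact default pause tag is an assumption; check what your containerd version generates, and in practice the target file is /etc/containerd/config.toml):

```shell
# Scratch file standing in for /etc/containerd/config.toml
conf=$(mktemp)
echo '    sandbox_image = "k8s.gcr.io/pause:3.6"' > "$conf"

# Point the pause (sandbox) image at the Aliyun mirror registry
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' "$conf"
cat "$conf"
```

After editing the real file, restart containerd so the new sandbox image takes effect.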

3. Reload and restart the service

[root@k8s-work-01 yum.repos.d]# systemctl daemon-reload
[root@k8s-work-01 yum.repos.d]# systemctl restart containerd
[root@k8s-work-01 yum.repos.d]# systemctl status containerd

4. Configure the CRI client crictl

[root@k8s-master-01 ~]# VERSION="v1.23.0"
[root@k8s-master-01 ~]# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
[root@k8s-master-01 ~]# sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin crictl
[root@k8s-master-01 ~]# rm -f crictl-$VERSION-linux-amd64.tar.gz

[root@k8s-work-01 ~]# cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@k8s-work-01 ~]# crictl pull nginx:alpine
Image is up to date for sha256:51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
[root@k8s-work-01 ~]# crictl images
IMAGE                     TAG      IMAGE ID        SIZE
docker.io/library/nginx   alpine   51696c87e77e4   10.2MB

3. Install kubeadm

# List all installable versions
# yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Install with the command below (this installed the latest, 1.23.5; a specific version can also be pinned)
# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock"

[root@k8s-master-01 ~]# systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

4. Initialize k8s-master-01

1. Generate the config file

[root@k8s-master-01 ~]# kubeadm config print init-defaults > config.yaml

2. Edit the config file
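The article does not list the edits, but for a setup like this one the changes to config.yaml typically come down to a few fields. A hedged sketch of just the relevant snippets (values inferred from this environment; everything else keeps kubeadm's defaults):

```yaml
# Fragments of config.yaml likely edited here -- not the complete file
localAPIEndpoint:
  advertiseAddress: 10.0.4.2                   # this master's IP, replacing the 1.2.3.4 placeholder
nodeRegistration:
  criSocket: /run/containerd/containerd.sock   # use containerd, not dockershim
---
imageRepository: registry.aliyuncs.com/google_containers   # pull control-plane images via the Aliyun mirror
networking:
  podSubnet: 10.244.0.0/16                     # added so it matches flannel's default Network
```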

3. kubeadm init

[root@k8s-master-01 ~]# kubeadm init --config=config.yaml

This fails with: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

Run the following on both servers:

[root@k8s-master-01 ~]# modprobe br_netfilter
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-master-01 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
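One caveat: modprobe and the echo writes above do not survive a reboot. A common way to make them persistent (following the pattern in the official container-runtime prerequisites; the file names are conventional, not mandatory):

```
# /etc/modules-load.d/k8s.conf -- loads br_netfilter at boot
br_netfilter

# /etc/sysctl.d/k8s.conf -- picked up by "sysctl --system" at boot
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```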

[root@k8s-master-01 ~]# kubeadm init --config=config.yaml

[root@k8s-master-01 ~]# mkdir -p $HOME/.kube
[root@k8s-master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master-01 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-work-01   NotReady   <none>   46s   v1.23.5

5. Join the worker node to the cluster

[root@k8s-work-01 ~]# kubeadm join 10.0.4.2:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:abdffa455bed6eeda802563b826d042e6e855b30d2f2dbc9b6e0cd4515dfe1e2
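The --discovery-token-ca-cert-hash value is nothing magical: it is the SHA-256 of the cluster CA's DER-encoded public key. A self-contained demo that generates a throwaway certificate (standing in for /etc/kubernetes/pki/ca.crt, so this runs anywhere) and derives the hash with the same pipeline used later in this article:

```shell
# Throwaway self-signed cert as a stand-in for the cluster CA
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Extract the public key, DER-encode it, hash it, keep only the hex digest
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

Against the real CA file this produces exactly the value kubeadm join expects.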

[root@k8s-master-01 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
k8s-work-01   NotReady   <none>                 96m   v1.23.5
node          NotReady   control-plane,master   96m   v1.23.5

Note: the master shows up under the name "node" because, as mentioned above, that is the default node name left in the kubeadm config.

6. Install the flannel network plugin

[root@k8s-master-01 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master-01 ~]# kubectl apply -f kube-flannel.yml

The Network field in the flannel manifest defaults to 10.244.0.0/16; it must match the podSubnet in the kubeadm init config!

[root@k8s-master-01 ~]# kubectl get pods -n kube-system

7. Verify that CSRs were created:

[root@k8s-master-01 ~]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-8jq54   8m30s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:abcdef   <none>              Approved,Issued
csr-qpsst   8m54s   kubernetes.io/kube-apiserver-client-kubelet   system:node:node          <none>              Approved,Issued

Note the AGE column: a CSR has a lifetime.

2. CSRs created when scaling an existing kubeadm cluster

I found an existing cluster, built across interconnected Tencent Cloud networks:

[root@sh-master-01 ~]# kubectl get csr
No resources found
[root@sh-master-01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
bj-work-01     NotReady   <none>                 130d   v1.21.3
sh-master-01   Ready      control-plane,master   130d   v1.21.3
sh-master-02   Ready      control-plane,master   130d   v1.21.3
sh-master-03   Ready      control-plane,master   130d   v1.21.3
sh-work-01     Ready      <none>                 130d   v1.21.3
sh-work-02     Ready      <none>                 130d   v1.21.3

The plan: remove sh-work-02 from the cluster, generate a new bootstrap token and the SHA-256 CA-cert hash on a master node, then have sh-work-02 rejoin. This follows my old post on scaling a Kubernetes cluster, so I will just run straight through it.

[root@sh-master-01 ~]# kubectl delete nodes sh-work-02
node "sh-work-02" deleted
[root@sh-master-01 ~]# kubeadm token create
zu1fum.7wzn5rnj5kiz62mu
[root@sh-master-01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ccfd4e2b85a6a07fde8580422769c9e14113e8f05e95272e51cca2f13b0eb8c3
[root@sh-master-01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
bj-work-01     NotReady   <none>                 130d   v1.21.3
sh-master-01   Ready      control-plane,master   130d   v1.21.3
sh-master-02   Ready      control-plane,master   130d   v1.21.3
sh-master-03   Ready      control-plane,master   130d   v1.21.3
sh-work-01     Ready      <none>                 130d   v1.21.3

[root@sh-work-02 ~]# kubeadm reset
[root@sh-work-02 ~]# reboot
[root@sh-work-02 ~]# kubeadm join 10.10.2.4:6443 --token zu1fum.7wzn5rnj5kiz62mu --discovery-token-ca-cert-hash sha256:ccfd4e2b85a6a07fde8580422769c9e14113e8f05e95272e51cca2f13b0eb8c3

[root@sh-master-01 ~]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
csr-lz6wl   97s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:zu1fum   Approved,Issued

3. Create a TLS certificate and approve the certificate signing request

Following the official docs: Manage TLS Certificates in a Cluster.

[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master-01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master-01 ~]# chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
[root@k8s-master-01 ~]# mv cfssl_linux-amd64 /usr/bin/cfssl
[root@k8s-master-01 ~]# mv cfssljson_linux-amd64 /usr/bin/cfssljson
[root@k8s-master-01 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@k8s-master-01 ~]# cfssl version

[root@k8s-master-01 ~]# kubectl create ns my-namespace
namespace/my-namespace created
[root@k8s-master-01 ~]# kubectl run my-pod --image=nginx -n my-namespace
pod/my-pod created

kubectl apply -f service.yaml

[root@k8s-master-01 ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: my-namespace
  labels:
    run: my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-pod

[root@k8s-master-01 ~]# kubectl get all -n my-namespace -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
pod/my-pod   1/1     Running   0          51m   10.244.1.2   k8s-work-01   <none>           <none>

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
service/my-svc   ClusterIP   10.109.248.68   <none>        80/TCP    7m6s   run=my-pod

Create the certificate signing request

Substitute the domain names and IPs of your own service and pod:

[root@k8s-master-01 ~]# cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "10.109.248.68",
    "10.244.1.2"
  ],
  "CN": "my-svc.my-namespace.svc.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF

Create a CertificateSigningRequest object and send it to the Kubernetes API:

[root@k8s-master-01 ~]# cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kubelet-serving
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
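One detail worth calling out in the request field: it must be the base64 of the PEM-encoded CSR with all line breaks stripped, which is exactly what tr -d '\n' does. A tiny self-contained check of that encoding step (using a stand-in file, since no real server.csr exists outside the cluster):

```shell
# Stand-in for the server.csr produced by cfssl
csr=$(mktemp)
printf 'line1\nline2\n' > "$csr"

# base64 wraps its output by default; tr collapses it into one long string
req=$(base64 < "$csr" | tr -d '\n')
echo "$req"
```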

[root@k8s-master-01 ~]# kubectl describe csr my-svc.my-namespace

Approve the certificate signing request

[root@k8s-master-01 ~]# kubectl certificate approve my-svc.my-namespace
certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved
[root@k8s-master-01 ~]# kubectl get csr
NAME                  AGE   SIGNERNAME                      REQUESTOR          REQUESTEDDURATION   CONDITION
my-svc.my-namespace   87s   kubernetes.io/kubelet-serving   kubernetes-admin   <none>              Approved,Issued

Download the certificate and use it

[root@k8s-master-01 key]# kubectl get csr
[root@k8s-master-01 key]# kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
    | base64 --decode > server.crt
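Before wiring server.crt into a workload, it is worth checking that the subject and SANs match what the CSR asked for. A sketch that generates a throwaway certificate with the same names, so the inspection command is runnable anywhere (with the real file, simply point openssl at server.crt):

```shell
# Throwaway cert carrying the same CN/SANs this article requested
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -subj "/CN=my-svc.my-namespace.svc.cluster.local" \
  -addext "subjectAltName=DNS:my-svc.my-namespace.svc.cluster.local,IP:10.109.248.68" \
  -keyout "$tmp/server.key" -out "$tmp/server.crt" 2>/dev/null

# Print the subject and the SAN extension for a quick eyeball check
openssl x509 -in "$tmp/server.crt" -noout -subject
openssl x509 -in "$tmp/server.crt" -noout -text | grep -A1 'Subject Alternative Name'
```

(-addext needs OpenSSL 1.1.1 or newer; older builds have to pass SANs via a config file.)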

Summary:

I set up a kubeadm 1.23 cluster on Rocky and watched where CSRs come from: empty output from kubectl get csr is not, by itself, a problem. I also walked through signing and approving an internal TLS certificate (even though I don't use it in a real environment), and learned about the other join method, kubeadm join --discovery-file, which frankly I had never read closely before. CSRs can also be used for user authentication and authorization, see https://kubernetes.io/zh/docs/reference/access-authn-authz/certificate-signing-requests/#authorization; I did that earlier in my post on authorizing a regular user against a Kubernetes cluster via kubeconfig.

