2022-10-27
Kubernetes Cluster Deployment Steps
Prepare the environment

[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)

This walkthrough uses three machines to run Kubernetes: 1 master and 2 nodes.

10.10.21.8  k8s-master (Master)
10.10.21.28 k8s-node1  (Node1)
10.10.21.38 k8s-node2  (Node2)
Kubernetes cluster components:
• etcd - a highly available key/value store and service-discovery system
• flannel - provides cross-host container networking
• kube-apiserver - exposes the Kubernetes cluster API
• kube-controller-manager - keeps cluster services in their desired state
• kube-scheduler - schedules containers onto Nodes
• kubelet - starts containers on a Node according to the container spec in its config
• kube-proxy - provides network proxying
Set a permanent hostname on each machine.

On the Master: hostnamectl set-hostname k8s-master
On Node1: hostnamectl set-hostname k8s-node1
On Node2: hostnamectl set-hostname k8s-node2
On all three machines, add name resolution for each other:

[root@localhost ~]# vi /etc/hosts
10.10.21.8  k8s-master
10.10.21.28 k8s-node1
10.10.21.38 k8s-node2
Install dependencies. Every node needs these packages:

[root@k8s-master ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget net-tools git
Switch the firewall to iptables with an empty rule set. On every node, disable firewalld, enable iptables, and flush the iptables rules:

[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master ~]# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap on every node. Pods running on swapped-out memory perform very badly, so it is best to turn swap off entirely:

[root@k8s-master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
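The sed expression above comments out every fstab line whose mount-point field is "swap". A minimal sketch of that edit, run against a scratch copy instead of the real /etc/fstab (the sample entries are illustrative):

```shell
# Make a throwaway fstab with one root entry and one swap entry.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out any line containing " swap " (capture groups escaped
# as POSIX basic-regex sed requires).
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab"
cat "$fstab"
```

Only the swap line gains a leading `#`; the root filesystem entry is untouched, so the machine still boots normally.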
Disable SELinux on every node:

[root@k8s-master ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Tune kernel parameters for Kubernetes. Run on every node:

[root@k8s-master ~]# pwd
/root
[root@k8s-master ~]# vi kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1  # these two lines enable bridge mode and are mandatory
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0  # forbid swap; it may only be used when the system OOMs
vm.overcommit_memory=1  # do not check whether physical memory is sufficient
vm.panic_on_oom=0  # let the OOM killer run
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1  # disable IPv6; this is also mandatory
net.netfilter.nf_conntrack_max=2310720
Make it take effect on boot:

[root@k8s-master ~]# cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf  # apply it immediately
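Before handing a hand-typed file to sysctl -p, a quick format check catches typos. A sketch that writes a shortened version of the fragment with a here-doc into a scratch directory (the subset of keys is illustrative) and verifies every line is a key=value pair:

```shell
# Write a sample sysctl fragment to a scratch dir
# (the real target is /etc/sysctl.d/kubernetes.conf).
dir=$(mktemp -d)
cat > "$dir/kubernetes.conf" <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
# Count lines that are NOT of the form key=number.
bad=$(grep -cv '^[a-z0-9._-]*=[0-9]*$' "$dir/kubernetes.conf")
echo "malformed lines: $bad"
```

If `bad` is non-zero, fix the file before running sysctl -p, which would otherwise print an error per malformed line.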
Set the system timezone. Run on every node, adjusting for your own environment; if the timezone is already CST you can skip this step.

# Set the system timezone to Asia/Shanghai
[root@k8s-master ~]# timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
[root@k8s-master ~]# timedatectl set-local-rtc 0
# Restart services that depend on the system time
[root@k8s-master ~]# systemctl restart rsyslog
[root@k8s-master ~]# systemctl restart crond
Stop services the cluster does not need. On every node, disable the mail service:

[root@k8s-master ~]# systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd journald. Run on every node. Because CentOS 7 switched its init system to systemd, it ends up with two logging systems; here we configure the machine to use systemd journald:

[root@k8s-master ~]# mkdir /var/log/journal  # directory for persisted logs
[root@k8s-master ~]# mkdir /etc/systemd/journald.conf.d
[root@k8s-master ~]# vi /etc/systemd/journald.conf.d/99-prophet.conf
[Journal]
# Persist logs to disk
Storage=persistent
# Compress old logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single log file at 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
[root@k8s-master ~]# systemctl restart systemd-journald
Upgrade the kernel to the 4.4 series

The 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable. Install the ELRepo release package with rpm -Uvh, then check that the matching kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if it does not, install once more!

[root@k8s-master ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
[root@k8s-master ~]# grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'

Then reboot.
Kernel before the upgrade:
[root@k8s-master ~]# uname -r
3.10.0-1127.10.1.el7.x86_64

Kernel after the upgrade:
[root@k8s-master ~]# uname -r
4.4.245-1.el7.elrepo.x86_64
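A small helper can confirm the running kernel is new enough before continuing; this sketch compares version strings with GNU sort -V (the example versions are the ones shown above):

```shell
# ver_ge HAVE WANT -> exit 0 if HAVE is version >= WANT.
# sort -V puts the smaller version first, so if WANT sorts first,
# HAVE is at least as new.
ver_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
ver_ge "4.4.245-1.el7.elrepo.x86_64" "4.4" && echo "kernel ok"
ver_ge "3.10.0-1127.10.1.el7.x86_64" "4.4" || echo "kernel too old"
```

On a real node you would pass `"$(uname -r)"` as the first argument instead of a literal string.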
Install Kubernetes

Prerequisite for kube-proxy to use IPVS. Run on every node:

[root@k8s-master ~]# modprobe br_netfilter  # load the netfilter bridge module
[root@k8s-master ~]# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4      20480  0
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          114688  2 ip_vs,nf_conntrack_ipv4
libcrc32c              16384  2 xfs,ip_vs
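Instead of hand-writing one modprobe line per module, the ipvs.modules file can be generated from a list, which makes it easy to keep in sync. A sketch, writing to a scratch directory rather than /etc/sysconfig/modules:

```shell
# Generate an ipvs.modules script from a space-separated module list.
moddir=$(mktemp -d)
modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
{
    echo '#!/bin/bash'
    for m in $modules; do
        echo "modprobe -- $m"
    done
} > "$moddir/ipvs.modules"
chmod 755 "$moddir/ipvs.modules"
cat "$moddir/ipvs.modules"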
Install Docker. Run on every node:

[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Add the docker-ce repository, then update and install docker-ce:

[root@k8s-master ~]# yum-config-manager --add-repo ...
[root@k8s-master ~]# yum update -y && yum install -y docker-ce
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Checking the kernel again shows it reverted to the old one:
[root@k8s-master ~]# uname -r
3.10.0-1127.10.1.el7.x86_64
[root@k8s-master ~]# grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)' && reboot  # set the default entry again and reboot
## Start Docker
[root@k8s-master ~]# systemctl start docker && systemctl enable docker
# Configure the daemon
[root@k8s-master ~]# vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
[root@k8s-master ~]# mkdir -p /etc/systemd/system/docker.service.d  # directory for Docker drop-in configuration files
# Restart the Docker service
[root@k8s-master ~]# systemctl daemon-reload && systemctl restart docker && systemctl enable docker
Install kubeadm (master/worker setup). Run on every node.

## Add the Alibaba Cloud repository
[root@k8s-master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=

[root@k8s-master ~]# yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
[root@k8s-master ~]# systemctl enable kubelet.service  # enabling this on boot is essential; otherwise pods will not come back up after a node reboot
Initialize the nodes. The images were downloaded in advance, so here they are imported straight into Docker. Since there are quite a few images, a small script handles the loading; run it on every node.

[root@k8s-master ~]# pwd
/root
[root@k8s-master ~]# tar -zxvf kubeadm-basic.images.tar.gz
kubeadm-basic.images/
kubeadm-basic.images/coredns.tar
kubeadm-basic.images/etcd.tar
kubeadm-basic.images/pause.tar
kubeadm-basic.images/apiserver.tar
kubeadm-basic.images/proxy.tar
kubeadm-basic.images/kubec-con-man.tar
kubeadm-basic.images/scheduler.tar
[root@k8s-master ~]# vi load-images.sh
#!/bin/bash
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/image-list.txt )
do
    docker load -i $i
done
rm -rf /tmp/image-list.txt
[root@k8s-master ~]# chmod +x load-images.sh
[root@k8s-master ~]# ./load-images.sh
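The same loop can be written with a glob instead of parsing ls output, which is safer when filenames contain spaces. A sketch (not the author's script): `DRYRUN=echo` stands in for docker so the loop can be exercised on any machine, and the .tar files created here are empty placeholders:

```shell
# Dry-run version of the image-loading loop over a scratch directory.
imgdir=$(mktemp -d)
touch "$imgdir/coredns.tar" "$imgdir/etcd.tar" "$imgdir/pause.tar"
DRYRUN=echo   # drop this (set DRYRUN="") on a real node to actually load
loaded=0
for tarball in "$imgdir"/*.tar; do
    $DRYRUN docker load -i "$tarball"
    loaded=$((loaded + 1))
done
echo "processed $loaded archives"
```

On a real node, point `imgdir` at /root/kubeadm-basic.images and clear DRYRUN.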
Run kubeadm init on the master node.

[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm-config.yaml  # dump a template of the init configuration
[root@k8s-master ~]# vi kubeadm-config.yaml
localAPIEndpoint:
  advertiseAddress: 10.10.21.8
kubernetesVersion: v1.15.1
networking:
  podSubnet: "10.244.0.0/16"  # this line has to be added by hand
  serviceSubnet: 10.96.0.0/12
---  # this document also has to be added by hand; it switches the default proxy mode to IPVS
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
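Since both edits above are easy to mistype, it is worth grep-checking the file before feeding it to kubeadm. A minimal sketch that assembles the two hand-added pieces with a here-doc into a scratch file and verifies them (the surrounding default fields are omitted for brevity):

```shell
# Write a trimmed kubeadm config containing just the hand-added parts.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kubernetesVersion: v1.15.1
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF
# Sanity checks before running kubeadm init --config=...
grep -q 'podSubnet: "10.244.0.0/16"' "$cfg" && echo "podSubnet set"
grep -q '^mode: ipvs' "$cfg" && echo "ipvs mode set"
```

The podSubnet must match the network used by the flannel manifest deployed later (10.244.0.0/16 by default).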
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.13. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
Fix: give the VM 2 CPU cores.
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Error:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
Fix:
[root@k8s-master ~]# kubeadm reset
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!
[root@k8s-master ~]# cd /etc/kubernetes/pki
[root@k8s-master pki]# ls
apiserver.crt                 etcd
apiserver-etcd-client.crt     front-proxy-ca.crt
apiserver-etcd-client.key     front-proxy-ca.key
apiserver.key                 front-proxy-client.crt
apiserver-kubelet-client.crt  front-proxy-client.key
apiserver-kubelet-client.key  sa.key
ca.crt                        sa.pub
ca.key
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.21.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61cda12938fe5d2dc0bcd2acff29578eb45a0ec692bf77fd59cd647671be6a7d
Following the success message above, run on the master (using the values from your own kubeadm init output):

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   5m39s   v1.15.1
Deploy the pod network.

[root@k8s-master ~]# wget
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Cannot assign requested address.
Retrying.

Because of the error above, kube-flannel.yml was downloaded through *** and then copied to this machine.
[root@k8s-master ~]# pwd/root
[root@k8s-master ~]# ls
anaconda-ks.cfg       kubeadm-basic.images.tar.gz  kubeadm-init.log  kubernetes.conf
kubeadm-basic.images  kubeadm-config.yaml          kube-flannel.yml  load-images.sh
[root@k8s-master ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-2bkf8             1/1     Running   0          21m
coredns-5c98db65d4-5mgxd             1/1     Running   0          21m
etcd-k8s-master                      1/1     Running   0          20m
kube-apiserver-k8s-master            1/1     Running   0          20m
kube-controller-manager-k8s-master   1/1     Running   0          20m
kube-flannel-ds-clx8n                1/1     Running   0          116s
kube-proxy-rg2nm                     1/1     Running   0          21m
kube-scheduler-k8s-master            1/1     Running   0          20m
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   21m   v1.15.1
Join the remaining nodes. On each of them, run the following (with the values from your own kubeadm init output):

[root@k8s-node1 ~]# kubeadm join 10.10.21.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61cda12938fe5d2dc0bcd2acff29578eb45a0ec692bf77fd59cd647671be6a7d
[root@k8s-node2 ~]# kubeadm join 10.10.21.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61cda12938fe5d2dc0bcd2acff29578eb45a0ec692bf77fd59cd647671be6a7d
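The --discovery-token-ca-cert-hash value is not arbitrary: it is the SHA-256 digest of the cluster CA's DER-encoded public key, and can be recomputed from /etc/kubernetes/pki/ca.crt with the pipeline documented for kubeadm. A sketch that demonstrates the pipeline on a throwaway self-signed certificate (the demo-ca subject is made up; on a real master the input is ca.crt):

```shell
# Generate a throwaway CA cert to stand in for /etc/kubernetes/pki/ca.crt.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-ca" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# Extract the public key, convert it to DER, and hash it.
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}')
echo "sha256:$hash"
```

This is useful when the original join command has been lost: recompute the hash on the master and generate a fresh token with `kubeadm token create`.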
Result:
At this point the basic installation is complete; check the status with kubectl get nodes:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   41m   v1.15.1
k8s-node1    Ready
Install Harbor
Install Docker (the Docker installation shown earlier works here too). To get the most up-to-date version of Docker, there is a script meant for quick & easy install:
[root@linux-node0 ~]# curl -fsSL -o get-docker.sh
[root@linux-node0 ~]# sh get-docker.sh
[root@linux-node0 ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@linux-node0 ~]# docker version
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:48:22 2018
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:19:08 2018
OS/Arch: linux/amd64
Experimental: false
Enable on boot and start the service:
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker
Set a permanent hostname:
[root@localhost ~]# hostnamectl set-hostname harbor
On all nodes:

[root@localhost ~]# vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.atguigu.com"]
}
[root@localhost ~]# systemctl restart docker
[root@localhost ~]# echo "10.10.21.229 hub.atguigu.com" >> /etc/hosts

For the experiment, also add the same entry on your Windows workstation in C:\Windows\System32\drivers\etc\hosts.
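The plain `echo >> /etc/hosts` above appends a duplicate line every time it is re-run. A sketch of an idempotent variant, demonstrated on a scratch copy of the hosts file:

```shell
# Add the registry entry only if it is not already present.
hosts=$(mktemp)
add_host() {
    grep -q "hub.atguigu.com" "$1" || echo "10.10.21.229 hub.atguigu.com" >> "$1"
}
add_host "$hosts"
add_host "$hosts"   # second call is a no-op
grep -c "hub.atguigu.com" "$hosts"
```

On a real node, pass /etc/hosts instead of the scratch file.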
Install docker-compose on the Harbor host.

[root@localhost ~]# curl -L -`uname -s`-`uname -m` > /usr/local/bin/docker-compose
# Make docker-compose executable
[root@localhost ~]# chmod +x /usr/local/bin/docker-compose
# Verify the installation
[root@localhost ~]# docker-compose --version
docker-compose version 1.25.0, build 0a186604
Download the Harbor installer, then unpack and configure it:

[root@localhost ~]# cd /home/norman
[root@localhost norman]# tar -zxvf harbor-offline-installer-v1.10.6.tgz
[root@localhost norman]# mv harbor /usr/local/
[root@localhost norman]# cd /usr/local/harbor
[root@localhost harbor]# vi harbor.yml
hostname: hub.atguigu.com
certificate: /data/cert/server.crt
private_key: /data/cert/server.key
[root@localhost ~]# mkdir /data/cert
[root@localhost ~]# cd /data/cert
Generate the certificate private key server.key:

[root@localhost cert]# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
............................................................+++
......+++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
Generate a certificate signing request from the private key (enter the passphrase set above):

[root@localhost cert]# openssl req -new -key server.key -out server.csr
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:SH
Locality Name (eg, city) [Default City]:SH
Organization Name (eg, company) [Default Company Ltd]:atguigu
Organizational Unit Name (eg, section) []:atguigu
Common Name (eg, your name or your server's hostname) []:hub.atguigu.com
Email Address []:normanjin@163.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: (leave empty)
An optional company name []: (leave empty)
Back up the private key:
[root@localhost cert]# cp server.key server.key.org
Strip the passphrase from the private key:

[root@localhost cert]# openssl rsa -in server.key.org -out server.key
Enter pass phrase for server.key.org:
writing RSA key
Generate the certificate:

[root@localhost cert]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=CN/ST=SH/L=SH/O=atguigu/OU=atguigu/CN=hub.atguigu.com/emailAddress=normanjin@163.com
Getting Private key
[root@localhost cert]# chmod -R 777 /data/cert
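As an aside, the key/CSR/sign/strip-passphrase dance above can be collapsed into a single openssl invocation using -nodes (no passphrase) and -subj (non-interactive subject). A sketch writing to a scratch directory instead of /data/cert:

```shell
# One-shot self-signed certificate with the same subject as above.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/C=CN/ST=SH/L=SH/O=atguigu/OU=atguigu/CN=hub.atguigu.com" \
    -keyout "$certdir/server.key" -out "$certdir/server.crt" 2>/dev/null
# Inspect the resulting subject.
openssl x509 -in "$certdir/server.crt" -noout -subject
```

Either way the result is a self-signed cert, so clients will still need the trust fix shown later (copying server.crt into /etc/ssl/certs on each node).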
[root@localhost harbor]# ./install.sh
Starting and stopping Harbor:

docker-compose up -d    # start
docker-compose stop     # stop
docker-compose restart  # restart
harbor-offline-installer-v1.10.6.tgz installs successfully. The first attempt used harbor-offline-installer-v1.10.2.tgz, which kept failing with the errors below.

ERROR: for registryctl Cannot restart container de55d0f103c78e9b8dde7786305fca6c614eae226f261218fa0ebe0730e01eb4: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for redis Cannot restart container 24373a9084a1db768141feca4a0c117822347eb6245f1a9558e7f28d469b7524: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for registry Cannot restart container de09d7aaac8eb36c64af91ae3dbe5442b2d061b93806bb585dd539d28de4256d: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
ERROR: for harbor-db Cannot restart container d339aa6970b46d31be2a2606cb4372a8dffeaec60676e6c2feabd6a18310e1b2: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused
Browsing to the registry and trying to log in fails at first:

[root@k8s-node1 ~]# docker login hub.atguigu.com
Username: admin
Password:
Error response from daemon: Get x509: certificate signed by unknown authority
Fix: take /data/cert/server.crt from the Harbor server and copy it into /etc/ssl/certs on every node.

[root@k8s-master ~]# ls /etc/ssl/certs
ca-bundle.crt ca-bundle.trust.crt make-dummy-cert Makefile renew-dummy-cert server.crt
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker

Log in again:
[root@k8s-node1 ~]# docker login hub.atguigu.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
Login Succeeded
Pull an image and push it to Harbor:
[root@k8s-node1 ~]# docker pull wangyanglinux/myapp:v1
[root@k8s-node1 ~]# docker tag wangyanglinux/myapp:v1 hub.atguigu.com/library/myapp:v1
[root@k8s-node1 ~]# docker push hub.atguigu.com/library/myapp:v1
The push refers to a repository [hub.atguigu.com/library/myapp]
a0d2c4392b06: Pushed
05a9e65e2d53: Pushed
68695a6cfd7d: Pushed
c1dc81a64903: Pushed
8460a579ab63: Pushed
d39d92664027: Pushed
v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569
On the master, run the image pulled from Harbor:
[root@k8s-master ~]# kubectl run nginx-development --image=hub.atguigu.com/library/myapp:v1 --port=80 --replicas=1
[root@k8s-master ~]# kubectl get deployment
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-development   1/1     1            1           105s
[root@k8s-master ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-development-999b6fb7c   1         1         1       205s
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
nginx-development-999b6fb7c-25lft   1/1     Running   0          82s   10.244.2.2   k8s-node1
Check on k8s-node1 (whenever a pod is running, there is also a companion /pause container):

[root@k8s-node1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99ed4f791e82 d4a5e0eaa84f "nginx -g 'daemon of…" 21 minutes ago Up 21 minutes k8s_nginx-development_nginx-development-999b6fb7c-qbqcj_default_af416d91-f967-4242-b80f-5b65f6e52c14_0
56cceadea634 k8s.gcr.io/pause:3.1 "/pause" 21 minutes ago Up 21 minutes k8s_POD_nginx-development-999b6fb7c-qbqcj_default_af416
[root@k8s-master ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | Pod Name
[root@k8s-master ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-development-999b6fb7c-25lft   1/1     Running   0          2m15s
[root@k8s-master ~]# kubectl delete pod nginx-development-999b6fb7c-25lft
[root@k8s-master ~]# kubectl get pod  # after a pod is deleted, a new one is created automatically, because --replicas=1 was specified earlier
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-development-3133788093-vg4jd   1/1     Running   0          5s
Scale out:
[root@k8s-master ~]# kubectl scale --replicas=3 deployment/nginx-development
deployment.extensions/nginx-development scaled
[root@k8s-master ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-development-999b6fb7c-dbcjk   1/1     Running   0          29m
nginx-development-999b6fb7c-qbqcj   1/1     Running   0          29m
nginx-development-999b6fb7c-vtclg   1/1     Running   0          29m
Create a ClusterIP service:
[root@k8s-master ~]# kubectl expose deployment nginx-development --port=30000 --target-port=80
service "nginx-development" exposed
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
Access the ClusterIP:
[root@k8s-master ~]# curl 10.99.91.88:30000
Hello MyApp | Version: v1 | Pod Name
And the requests are served round-robin:
[root@k8s-node2 ~]# curl 10.99.91.88:30000/hostname.html
nginx-development-999b6fb7c-qbqcj
[root@k8s-node2 ~]# curl 10.99.91.88:30000/hostname.html
nginx-development-999b6fb7c-vtclg
[root@k8s-node2 ~]# curl 10.99.91.88:30000/hostname.html
nginx-development-999b6fb7c-dbcjk
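The round-robin check above can be automated by counting distinct backends across repeated requests. A sketch: `fetch` is a stub that cycles through three made-up pod names so the loop runs without a cluster; on the real cluster replace it with `curl -s 10.99.91.88:30000/hostname.html`:

```shell
# Count distinct responders across 6 requests.
i=0
fetch() {  # stand-in for curl, rotating through three pod names
    i=$((i % 3 + 1))
    echo "nginx-development-999b6fb7c-pod$i"
}
distinct=$( { for _ in 1 2 3 4 5 6; do fetch; done; } | sort -u | wc -l | tr -d ' ' )
echo "distinct backends: $distinct"
```

With 3 replicas and IPVS round-robin scheduling, the distinct count should equal the replica count.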
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port    Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
  -> 10.10.21.8:6443       Masq    1      3          0
TCP 10.96.0.10:53 rr
  -> 10.244.0.6:53         Masq    1      0          0
  -> 10.244.0.7:53         Masq    1      0          0
TCP 10.96.0.10:9153 rr
  -> 10.244.0.6:9153       Masq    1      0          0
  -> 10.244.0.7:9153       Masq    1      0          0
TCP 10.99.91.88:30000 rr
  -> 10.244.1.2:80         Masq    1      0          0
  -> 10.244.1.3:80         Masq    1      0          0
  -> 10.244.2.3:80         Masq    1      0          0
UDP 10.96.0.10:53 rr
  -> 10.244.0.6:53         Masq    1      0          0
  -> 10.244.0.7:53         Masq    1      0          0
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
nginx-development-999b6fb7c-dbcjk   1/1     Running   0          45m   10.244.1.2   k8s-node2
Expose the cluster to external access.

[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
Change the service type from ClusterIP to NodePort so the cluster can be reached from outside:
[root@k8s-master ~]# kubectl edit svc nginx-development
type: NodePort
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1
Browse to 10.10.21.8:31231.
Open another browser and refresh; you can see the responses rotate round-robin.