Kubernetes Single-Node Development Environment: Deployment Notes
The cluster layouts that Kubernetes officially recommends are not well suited to Helm chart development on a personal computer, so it is better to set up a single-node Kubernetes environment on the PC. There are several ways to do this: 1) deploy with the official minikube tool; 2) use the official kubeadm tool to deploy only a master node, then allow pods to be scheduled onto that master with: kubectl taint node k8s-master node-role.kubernetes.io/master- ; 3) download an offline Kubernetes binary package, deploy the master components by hand as needed, and again schedule pods onto the master. The Kubernetes 1.8 binary package I collected is stored at:

(In Zhengzhou, Henan.) One of them, acting as if the house were on fire, said they were facing a project to do a virtualization migration for a government agency and kept pressing me to explain how I, having run such a project, would approach the implementation. He fiddled with his phone, laid it face down on the table and pushed it toward where I was sitting (my guess is he had switched on the recorder; it was plainly an attempt to fish out a project implementation plan, the same routine as a certain "China-prefixed" state-owned company's Henan branch in Zhengzhou's CBD). To save face all round, I only walked him through the usual directions for migrating traditional IT applications and their respective trade-offs: VMware ESXi, PVE, Citrix XenServer, the Xen hypervisor, the KVM hypervisor, Microsoft Virtual Server, an OpenStack + Hadoop platform, or a Docker + Kubernetes platform. The scene felt all too familiar; it reminded me of my interview in 2018, when I had just returned to Henan, at that same "China-prefixed" company's Henan branch in Zhengzhou's CBD. The so-called interviewer picked a spot in a corridor by the lift and, cigarette in hand, asked me how to build a MySQL active-active cluster. After I described the basic working principles of an active-active HA cluster, he stubbed out the cigarette and told me I could go while he thought over whether to hire me. Close to midnight that night he phoned and told me to come in at 7 a.m. for a second technical interview. I hurried to the COFCO building in Zhengzhou's CBD first thing in the morning; he took me straight to his own desk and had me build a MySQL dual-master, dual-slave + Keepalived cluster. When the cluster verification finished that afternoon, he sent me off with "you can head back now, grab something to eat on the way", and that was the last I ever heard from them. Another fellow closed with a string of remarks that had nothing to do with either the job posting or my résumé: "we're all hands-on people here, we don't need fancy degrees for decoration"; "you're from Anyang, and I hear Anyang is full of shell companies, so your résumé wasn't dressed up in one of them, was it?"; "I see you drove here, the car is rented, right?"; "you talk a good game, solid theory, but have you actually done any of this hands-on?" (What I was thinking at the time: the projects I have done have experiment write-ups on my blog, the employers I served have implementation records, and my labour contracts carry the employer names and job titles; Zhengzhou's IT circle is the size of a palm, and anyone really in the trade could verify all of it with a few keystrokes and a couple of phone calls. And what exactly is wrong with being from Anyang, and how would I be cheating you Zhengzhou folk? Even the regional-slur doggerel about Henan swindlers names Zhumadian, Zhengzhou and Luoyang, and Anyang does not appear in it at all. To be honest, this character struck me as the real past master of cheating and swindling, just like the bitter saying that "deception is Henan's pillar industry and fraud is Zhengzhou's".)
After wasting the better part of a day on this bunch of clowns, I couldn't help thinking of a company in the Henan Information Building that recruits under a state-owned banner: projects done as an intern "do not count as work experience, just as minors do not enjoy the rights of adults"; nobody asks whether your technical ability matches most of the job requirements, they simply open with "less than ten years of experience means the résumé is fabricated". It suddenly dawned on me that, out in the sea of Zhengzhou's market economy, the so-called talent-recruiting companies are full of tricks: some fish for project implementation plans and ideas, some are laymen who know nothing of the trade yet lord it over those who do, some resort to personal attacks to push wages down, some drag out paying wages indefinitely, and some, to avoid paying severance, turn around and accuse you of stealing their trade secrets... In the struggle between labour and capital, the capital side stops at nothing!
I deeply regret not staying inside the system when I was young; by now I am worn out to the point of feeling numb...
Below is my single-node deployment process, pitfalls included; once I get my portable hard drive back from a friend, I will add the scripted deployment process.
Configure the VM's host/guest shared directory and install VMware Tools

[googlebigtable@localhost ~]$ su root
Password:
[root@localhost googlebigtable]# pwd -P
/home/googlebigtable
[root@localhost googlebigtable]# echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/googlebigtable/.local/bin:/home/googlebigtable/bin
[root@localhost googlebigtable]# ls -F
Desktop/ Documents/ Downloads/ Music/ Pictures/ Public/ Templates/ Videos/
[root@localhost googlebigtable]# mkdir -p DVD/temp
[root@localhost googlebigtable]# ls -F
Desktop/ Documents/ Downloads/ DVD/ Music/ Pictures/ Public/ Templates/ Videos/
[root@localhost googlebigtable]# cd DVD/
[root@localhost DVD]# ls -F
temp/
[root@localhost DVD]# cd temp/
[root@localhost temp]# ls -F
[root@localhost temp]# pwd -P
/home/googlebigtable/DVD/temp
[root@localhost temp]# ls /dev/ | grep cd
cdrom
[root@localhost temp]# mount /dev/cdrom /home/googlebigtable/DVD/temp
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost temp]# cd /home/googlebigtable/DVD/temp/
[root@localhost temp]# ls -F
manifest.txt run_upgrader.sh VMwareTools-10.3.10-13959562.tar.gz vmware-tools-upgrader-32 vmware-tools-upgrader-64
[root@localhost temp]# cp VMwareTools-10.3.10-13959562.tar.gz /home/googlebigtable/DVD/
[root@localhost temp]# cd ..
[root@localhost DVD]# ls -F
temp/ VMwareTools-10.3.10-13959562.tar.gz
[root@localhost DVD]# tar -xzvf VMwareTools-10.3.10-13959562.tar.gz
vmware-tools-distrib/
............................................................................................................
vmware-tools-distrib/vmware-install.pl
[root@localhost DVD]# ls -F
temp/ VMwareTools-10.3.10-13959562.tar.gz vmware-tools-distrib/
[root@localhost DVD]# cd vmware-tools-distrib/
[root@localhost vmware-tools-distrib]# ls -F
bin/ caf/ doc/ etc/ FILES INSTALL installer/ lib/ vgauth/ vmware-install.pl*
[root@localhost vmware-tools-distrib]# pwd -P
/home/googlebigtable/DVD/vmware-tools-distrib
[root@localhost vmware-tools-distrib]# /home/googlebigtable/DVD/vmware-tools-distrib/vmware-install.pl
The installer has detected an existing installation of open-vm-tools packages on this system and will not attempt to remove and replace these user-space applications. It is recommended to use the open-vm-tools packages provided by the operating system. If you do not want to use the existing installation of open-vm-tools packages and use VMware Tools, you must uninstall the open-vm-tools packages and re-run this installer.
...............................................................................................................
Ejecting device /dev/sr0 ...
Enjoy,
--the VMware team
[root@localhost vmware-tools-distrib]# init 6
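The installer output above recommends keeping the distribution's own open-vm-tools packages instead of the bundled tarball. A minimal sketch of that route, assuming the package is available from the configured repositories:

# Use the distribution's VMware guest tools instead of the tarball installer
yum install -y open-vm-tools
systemctl enable vmtoolsd
systemctl start vmtoolsd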
Configure a static IP

[googlebigtable@localhost ~]$ su root
Password:
[root@localhost googlebigtable]# pwd -P
/home/googlebigtable
[root@localhost googlebigtable]# echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/googlebigtable/.local/bin:/home/googlebigtable/bin
[root@localhost googlebigtable]# ls -F /etc/sysconfig/network-scripts/
ifcfg-ens33 ifdown-ib ifdown-ppp ifdown-tunnel ifup-ib ifup-plusb ifup-Team network-functions
ifcfg-lo ifdown-ippp ifdown-routes ifup@ ifup-ippp ifup-post ifup-TeamPort network-functions-ipv6
ifdown@ ifdown-ipv6 ifdown-sit ifup-aliases ifup-ipv6 ifup-ppp ifup-tunnel
ifdown-bnep ifdown-isdn@ ifdown-Team ifup-bnep ifup-isdn@ ifup-routes ifup-wireless
ifdown-eth ifdown-post ifdown-TeamPort ifup-eth ifup-plip ifup-sit init.ipv6-global
[root@localhost googlebigtable]# cp /etc/sysconfig/network-scripts/ifcfg-ens33{,.original}
[root@localhost googlebigtable]# ls -F /etc/sysconfig/network-scripts/
ifcfg-ens33 ifdown-eth ifdown-post ifdown-TeamPort ifup-eth ifup-plip ifup-sit init.ipv6-global
ifcfg-ens33.original ifdown-ib ifdown-ppp ifdown-tunnel ifup-ib ifup-plusb ifup-Team network-functions
ifcfg-lo ifdown-ippp ifdown-routes ifup@ ifup-ippp ifup-post ifup-TeamPort network-functions-ipv6
ifdown@ ifdown-ipv6 ifdown-sit ifup-aliases ifup-ipv6 ifup-ppp ifup-tunnel
ifdown-bnep ifdown-isdn@ ifdown-Team ifup-bnep ifup-isdn@ ifup-routes ifup-wireless
[root@localhost googlebigtable]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@localhost googlebigtable]# cat -n /etc/sysconfig/network-scripts/ifcfg-ens33
     1  TYPE="Ethernet"
     2  PROXY_METHOD="none"
     3  BROWSER_ONLY="no"
     4  BOOTPROTO="static"
     5  IPADDR=192.168.20.199
     6  NETMASK=255.255.255.0
     7  GATEWAY=192.168.20.1
     8  DEFROUTE="yes"
     9  IPV4_FAILURE_FATAL="no"
    10  IPV6INIT="yes"
    11  IPV6_AUTOCONF="yes"
    12  IPV6_DEFROUTE="yes"
    13  IPV6_FAILURE_FATAL="no"
    14  IPV6_ADDR_GEN_MODE="stable-privacy"
    15  NAME="ens33"
    16  UUID="174bc0f4-a139-4ec1-928a-611747463f29"
    17  DEVICE="ens33"
    18  ONBOOT="yes"
    19  DNS=8.8.8.8
[root@localhost googlebigtable]# service network restart
Restarting network (via systemctl): [ OK ]
[root@localhost googlebigtable]# systemctl restart network
[root@localhost googlebigtable]# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=2 ttl=128 time=173 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=128 time=438 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=128 time=123 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=128 time=150 ms
^C
--- 8.8.8.8 ping statistics ---
11 packets transmitted, 4 received, 63% packet loss, time 10007ms
rtt min/avg/max/mdev = 123.603/221.344/438.219/126.442 ms
[root@localhost googlebigtable]#
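One detail in the ifcfg file above: the CentOS network scripts read DNS1/DNS2 keys, so a bare DNS= entry may simply be ignored (this is my reading of the stock initscripts behaviour, not something verified in this session). A sketch of the keys that matter for the static address:

# /etc/sysconfig/network-scripts/ifcfg-ens33 (relevant keys only)
BOOTPROTO="static"
IPADDR=192.168.20.199
NETMASK=255.255.255.0
GATEWAY=192.168.20.1
DNS1=8.8.8.8
ONBOOT="yes"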
Configure the OS YUM repositories

[root@localhost googlebigtable]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.original0
[root@localhost googlebigtable]# ls -F /etc/yum.repos.d/
CentOS-Base.repo.original0 CentOS-CR.repo CentOS-Debuginfo.repo CentOS-fasttrack.repo CentOS-Media.repo CentOS-Sources.repo CentOS-Vault.repo
[root@localhost googlebigtable]# wget -O /etc/yum.repos.d/CentOS-Base.repo <163.com mirror repo URL>
--2020-05-24 12:00:45-- ...
Resolving mirrors.163.com (mirrors.163.com)... 59.111.0.251
Connecting to mirrors.163.com (mirrors.163.com)|59.111.0.251|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1572 (1.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’
100%[===========================================================================================================>] 1,572 --.-K/s in 0s
2020-05-24 12:00:45 (525 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [1572/1572]
[root@localhost googlebigtable]# ls -F /etc/yum.repos.d/CentOS-Base.repo CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repoCentOS-Base.repo.original0 CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo[root@localhost googlebigtable]# cat -n /etc/yum.repos.d/CentOS-Base.repo1 # CentOS-Base.repo2 #3 # The mirror system uses the connecting IP address of the client and the4 # update status of each mirror to pick mirrors that are updated to and5 # geographically close to the client. You should use this for CentOS updates6 # unless you are manually picking other mirrors.7 #8 # If the mirrorlist= does not work for you, as a fall back you can try the 9 # remarked out baseurl= line instead.10 #11 #12 [base]13 name=CentOS-$releasever - Base - 163.com14 #mirrorlist= baseurl= gpgcheck=117 gpgkey= 19 #released updates20 [updates]21 name=CentOS-$releasever - Updates - 163.com22 #mirrorlist= baseurl= gpgcheck=125 gpgkey= 27 #additional packages that may be useful28 [extras]29 name=CentOS-$releasever - Extras - 163.com30 #mirrorlist= baseurl= gpgcheck=133 gpgkey= 35 #additional packages that extend functionality of existing packages36 [centosplus]37 name=CentOS-$releasever - Plus - 163.com38 baseurl= gpgcheck=140 enabled=041 gpgkey=googlebigtable]# yum clean allLoaded plugins: fastestmirror, langpacksCleaning repos: base extras updatesCleaning up everythingMaybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed reposCleaning up list of fastest mirrors[root@localhost googlebigtable]# yum makecacheLoaded plugins: fastestmirror, langpacksDetermining fastest mirrorsbase | 3.6 kB 00:00:00 extras | 2.9 kB 00:00:00 updates | 2.9 kB 00:00:00 (1/10): base/7/x86_64/group_gz | 153 kB 00:00:00 (2/10): base/7/x86_64/primary_db | 6.1 MB 00:00:03 (3/10): extras/7/x86_64/filelists_db | 205 kB 00:00:00 (4/10): extras/7/x86_64/other_db | 122 kB 00:00:00 (5/10): extras/7/x86_64/primary_db | 194 kB 00:00:00 (6/10): updates/7/x86_64/filelists_db | 980 kB 00:00:01 (7/10): updates/7/x86_64/primary_db | 1.3 MB 00:00:01 (8/10): updates/7/x86_64/other_db | 183 kB 00:00:00 (9/10): base/7/x86_64/filelists_db | 7.1 MB 00:00:06 (10/10): base/7/x86_64/other_db | 2.6 MB 00:00:02 Metadata Cache Created[root@localhost googlebigtable]#[root@localhost googlebigtable]# yum update -yLoaded plugins: fastestmirror, langpacksLoading mirror speeds from cached hostfileResolving Dependencies--> Running transaction check.....................................................................................................Complete![root@localhost googlebigtable]#
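The baseurl= and gpgkey= values in the repo listing above did not survive archiving. For reference, a typical [base] entry for the 163.com mirror looks roughly like the following; the URLs are the commonly documented mirror paths, not values recovered from the original file:

# /etc/yum.repos.d/CentOS-Base.repo, [base] section only (sketch)
[base]
name=CentOS-$releasever - Base - 163.com
baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7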
Check the OS environment

[root@localhost googlebigtable]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@localhost googlebigtable]# uname -r
3.10.0-862.el7.x86_64
[root@localhost googlebigtable]# hostnamectl status
Static hostname: localhost.localdomain
Icon name: computer-vm
Chassis: vm
Machine ID: b42ee68190eb41aea794fc999eab1a65
Boot ID: 9f5106fc1c4a4a358105dc8dc0b0b87e
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-862.el7.x86_64
Architecture: x86-64
[root@localhost googlebigtable]# rpm -q centos-release
centos-release-7-8.2003.0.el7.centos.x86_64
[root@localhost googlebigtable]# ip addr
1: lo: ...
lo: flags=73 ...
virbr0: flags=4099 ...
virbr0-nic: flags=4098 ...
[root@localhost googlebigtable]#
Time synchronization

[root@localhost googlebigtable]# yum -y update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
No packages marked for update
[root@localhost googlebigtable]# yum install -y ntpdate
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package ntpdate-4.2.6p5-29.el7.centos.x86_64 already installed and latest version
Nothing to do
[root@localhost googlebigtable]# ntpdate time.windows.com
24 May 12:26:43 ntpdate[26764]: adjust time server 52.231.114.183 offset -0.018299 sec
[root@localhost googlebigtable]# ntpq -p
ntpq: read: Connection refused
[root@localhost googlebigtable]# ntpstat
synchronised to NTP server (162.159.200.123) at stratum 4
time correct to within 86 ms
polling server every 64 s
[root@localhost googlebigtable]# timedatectl status
Local time: Sun 2020-05-24 12:27:26 EDT
Universal time: Sun 2020-05-24 16:27:26 UTC
RTC time: Sun 2020-05-24 16:27:26
Time zone: America/New_York (EDT, -0400)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
                 Sun 2020-03-08 01:59:59 EST
                 Sun 2020-03-08 03:00:00 EDT
Next DST change: DST ends (the clock jumps one hour backwards) at
                 Sun 2020-11-01 01:59:59 EDT
                 Sun 2020-11-01 01:00:00 EST
[root@localhost googlebigtable]#
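ntpdate only sets the clock once, and the ntpq error above shows no NTP daemon is answering queries. A commonly used alternative on CentOS 7 is chrony, sketched below, assuming the chrony package is available in the configured repositories:

# Keep the clock continuously synchronized instead of a one-shot ntpdate run
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc tracking        # verify the offset and stratum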
Disable the firewall and SELinux

[root@localhost googlebigtable]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost googlebigtable]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete FORWAR...hain?).May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete FORWAR...t name.May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete FORWAR...t name.May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete OUTPUT...hain?).May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).May 24 12:16:06 localhost.localdomain firewalld[94641]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -w --table filter --delete INPUT ...hain?).May 24 12:29:08 localhost.localdomain systemd[1]: Stopping firewalld - dynamic firewall daemon...May 24 12:29:11 localhost.localdomain systemd[1]: Stopped firewalld - dynamic firewall daemon.Hint: Some lines were ellipsized, use -l to show in full.[root@localhost googlebigtable]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0[root@localhost googlebigtable]# sestatusSELinux status: enabledSELinuxfs mount: /sys/fs/selinuxSELinux root directory: /etc/selinuxLoaded policy name: targetedCurrent mode: permissiveMode from config file: disabledPolicy MLS status: enabledPolicy deny_unknown status: allowedMax kernel policy version: 31[root@localhost googlebigtable]# init 6[root@localhost googlebigtable]# sestatusSELinux status: disabled[root@localhost googlebigtable]#
Disable the swap partition

[root@localhost googlebigtable]# swapoff -a
[root@localhost googlebigtable]# sed -i '/ swap / s/^(.)$/#\1/g' /etc/fstab
sed: -e expression #1, char 23: invalid reference \1 on `s' command's RHS
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@localhost googlebigtable]# cat -n /etc/fstab
     1
     2  #
     3  # /etc/fstab
     4  # Created by anaconda on Sun May 24 10:11:42 2020
     5  #
     6  # Accessible filesystems, by reference, are maintained under '/dev/disk'
     7  # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
     8  #
     9  /dev/mapper/centos-root / xfs defaults 0 0
    10  UUID=ca495f7f-06a4-49bb-8b7b-c0a624209f2c /boot xfs defaults 0 0
    11  /dev/mapper/centos-home /home xfs defaults 0 0
    12  #/dev/mapper/centos-swap swap swap defaults 0 0
[root@localhost googlebigtable]#
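The first sed above fails because, with basic regular expressions, the grouping parentheses must be escaped before \1 is a valid back-reference. The corrected form would be the following, equivalent in effect to the s/.*swap.*/#&/ variant that was actually used:

# Comment out the swap entry in /etc/fstab; \( \) make the group that \1 refers to
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab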
Configure /etc/hosts

[root@localhost googlebigtable]# hostnamectl set-hostname kubernetes-single
[root@localhost googlebigtable]# hostnamectl status
Static hostname: kubernetes-single
Icon name: computer-vm
Chassis: vm
Machine ID: b42ee68190eb41aea794fc999eab1a65
Boot ID: 54bed94757bd43b4a77c599f98519fd2
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1127.8.2.el7.x86_64
Architecture: x86-64
[root@localhost googlebigtable]# hostname -i
fe80::ea69:80fc:6c2c:368d%ens33 192.168.20.199 192.168.20.199 192.168.122.1
[root@localhost googlebigtable]#
[root@localhost googlebigtable]# cat -n /etc/hosts
     1  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
     2  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@localhost googlebigtable]# cp /etc/hosts{,.original}
[root@localhost googlebigtable]# ls -F /etc/ | grep hosts
ghostscript/
hosts
hosts.allow
hosts.deny
hosts.original
[root@localhost googlebigtable]# cat >> /etc/hosts << EOF
192.168.20.199 kubernetes-master
192.168.20.199 kubernetes-node0
EOF
[root@localhost googlebigtable]# cat -n /etc/hosts
     1  127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
     2  ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
     3  192.168.20.199 kubernetes-master
     4  192.168.20.199 kubernetes-node0
[root@localhost googlebigtable]#
Passwordless SSH between master and nodes [this only needs to be done on the master; since everything runs on one machine, it effectively sets up passwordless SSH from 192.168.20.199 to itself]

[root@localhost googlebigtable]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:brmVbKFaiYV871rNxTANIaWPi1VaPSinuUIzBYaCwy8 root@kubernetes-single
The key's randomart image is:
+---[RSA 2048]----+
| . . .o ..+. |
| + . .. . o = |
| o . + B + |
| E .. . . @ + . |
| . o S B . o |
| * @ B . |
| . X X o |
| + B |
| . o.. |
+----[SHA256]-----+
[root@localhost googlebigtable]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.20.199
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.20.199 (192.168.20.199)' can't be established.
ECDSA key fingerprint is SHA256:WI8wxf0lYeC+E36wAGj+ydKWkIL2c/4tu5hUXbLkQ1k.
ECDSA key fingerprint is MD5:3d:0b:1a:6a:11:63:c6:db:c6:6b:a6:48:d9:3f:91:a3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.20.199's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.20.199'"and check to make sure that only the key(s) you wanted were added.
[root@localhost googlebigtable]# ssh 'root@192.168.20.199'Last login: Sun May 24 12:33:41 2020[root@kubernetes-single ~]# exitlogoutConnection to 192.168.20.199 closed.[root@localhost googlebigtable]#
Pass bridged IPv4 traffic to the iptables chains

[root@localhost googlebigtable]# modprobe br_netfilter
[root@localhost googlebigtable]# sysctl -p
[root@localhost googlebigtable]# sysctl --system
Applying /usr/lib/sysctl.d/00-system.conf ...net.bridge.bridge-nf-call-ip6tables = 0net.bridge.bridge-nf-call-iptables = 0net.bridge.bridge-nf-call-arptables = 0 Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...kernel.yama.ptrace_scope = 0 Applying /usr/lib/sysctl.d/50-default.conf ...kernel.sysrq = 16kernel.core_uses_pid = 1net.ipv4.conf.default.rp_filter = 1net.ipv4.conf.all.rp_filter = 1net.ipv4.conf.default.accept_source_route = 0net.ipv4.conf.all.accept_source_route = 0net.ipv4.conf.default.promote_secondaries = 1net.ipv4.conf.all.promote_secondaries = 1fs.protected_hardlinks = 1fs.protected_symlinks = 1 Applying /usr/lib/sysctl.d/60-libvirtd.conf ...fs.aio-max-nr = 1048576 Applying /etc/sysctl.d/99-sysctl.conf ... Applying /etc/sysctl.conf ...[root@localhost googlebigtable]# cat > /etc/sysctl.d/k8s.conf << EOF net.bridge.bridge-nf-call-ip6tables = 1net.bridge.bridge-nf-call-iptables = 1EOF[root@localhost googlebigtable]# modprobe br_netfilter[root@localhost googlebigtable]# sysctl -p[root@localhost googlebigtable]# sysctl --system Applying /usr/lib/sysctl.d/00-system.conf ...net.bridge.bridge-nf-call-ip6tables = 0net.bridge.bridge-nf-call-iptables = 0net.bridge.bridge-nf-call-arptables = 0 Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...kernel.yama.ptrace_scope = 0 Applying /usr/lib/sysctl.d/50-default.conf ...kernel.sysrq = 16kernel.core_uses_pid = 1net.ipv4.conf.default.rp_filter = 1net.ipv4.conf.all.rp_filter = 1net.ipv4.conf.default.accept_source_route = 0net.ipv4.conf.all.accept_source_route = 0net.ipv4.conf.default.promote_secondaries = 1net.ipv4.conf.all.promote_secondaries = 1fs.protected_hardlinks = 1fs.protected_symlinks = 1 Applying /usr/lib/sysctl.d/60-libvirtd.conf ...fs.aio-max-nr = 1048576 Applying /etc/sysctl.d/99-sysctl.conf ... Applying /etc/sysctl.d/k8s.conf ...net.bridge.bridge-nf-call-ip6tables = 1net.bridge.bridge-nf-call-iptables = 1 Applying /etc/sysctl.conf ...[root@localhost googlebigtable]#
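The transcript above writes /etc/sysctl.d/k8s.conf, which makes the two bridge sysctls persistent, but br_netfilter itself is loaded manually with modprobe and will not be loaded after a reboot. A small sketch of making the module load persistent as well (the file name under /etc/modules-load.d/ is my own choice, not from the original notes):

# Load br_netfilter automatically at boot so the bridge sysctls can take effect
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
modprobe br_netfilter
sysctl --system   # re-apply /etc/sysctl.d/k8s.conf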
Configure the Docker and Kubernetes YUM repositories

[root@localhost googlebigtable]# wget -O /etc/yum.repos.d/docker-ce.repo <Aliyun docker-ce repo URL>
--2020-05-24 14:03:31-- ...
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 111.6.206.244, 111.6.126.161, 111.6.206.242, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|111.6.206.244|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’
100%[===========================================================================================================>] 2,640 --.-K/s in 0s
2020-05-24 14:03:31 (1.20 GB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]
[root@localhost googlebigtable]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=...
EOF
[root@localhost googlebigtable]# ls -F /etc/yum.repos.d/
CentOS-Base.repo CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo CentOS-x86_64-kernel.repo kubernetes.repo
CentOS-Base.repo.original0 CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo docker-ce.repo
[root@localhost googlebigtable]#
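Most of the kubernetes.repo heredoc above was lost in archiving (the cat -n later in these notes shows it contained baseurl, enabled, gpgcheck, repo_gpgcheck and gpgkey entries). A sketch of what such a file typically looks like when the Aliyun mirror is used; the URLs are the commonly documented Aliyun paths, not values recovered from the original:

# /etc/yum.repos.d/kubernetes.repo (sketch)
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg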
Install Docker

[root@localhost googlebigtable]# yum -y update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
docker-ce-stable | 3.5 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
(1/3): docker-ce-stable/x86_64/primary_db | 42 kB 00:00:00
(2/3): docker-ce-stable/x86_64/updateinfo | 55 B 00:00:00
(3/3): kubernetes/primary | 69 kB 00:00:00
kubernetes 505/505
No packages marked for update
[root@localhost googlebigtable]# yum list installed | grep docker
[root@localhost googlebigtable]# curl -sSL <Docker convenience-script URL> | sh
Executing docker install script, commit: 26ff363bcf3b3f5a00498ac43694bf1c7d9ce16c
sh -c 'yum install -y -q yum-utils'Package yum-utils-1.1.31-54.el7_8.noarch already installed and latest version
sh -c 'yum-config-manager --add-repo plugins: fastestmirror, langpacksadding repo from: file to /etc/yum.repos.d/docker-ce.reporepo saved to /etc/yum.repos.d/docker-ce.repo
'[' stable '!=' stable ']'
sh -c 'yum makecache'Loaded plugins: fastestmirror, langpacksLoading mirror speeds from cached hostfilebase | 3.6 kB 00:00:00 docker-ce-stable | 3.5 kB 00:00:00 extras | 2.9 kB 00:00:00 kubernetes | 1.4 kB 00:00:00 updates | 2.9 kB 00:00:00 (1/4): kubernetes/other | 44 kB 00:00:00 (2/4): kubernetes/filelists | 23 kB 00:00:00 (3/4): docker-ce-stable/x86_64/filelists_db | 20 kB 00:00:00 (4/4): docker-ce-stable/x86_64/other_db | 114 kB 00:00:00 kubernetes 505/505kubernetes 505/505Metadata Cache Created
'[' -n '' ']'
sh -c 'yum install -y -q docker-ce'warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEYPublic key for containerd.io-1.2.13-3.2.el7.x86_64.rpm is not installedImporting GPG key 0x621E9F35:Userid : "Docker Release (CE rpm)
Remember that you will have to log out and back in for this to take effect!
WARNING: Adding a user to the "docker" group will grant the ability to runcontainers which can be used to obtain root privileges on thedocker host.Refer to more information.[root@localhost googlebigtable]# yum list installed | grep dockercontainerd.io.x86_64 1.2.13-3.2.el7 @docker-ce-stabledocker-ce.x86_64 3:19.03.9-3.el7 @docker-ce-stabledocker-ce-cli.x86_64 1:19.03.9-3.el7 @docker-ce-stable[root@localhost googlebigtable]# systemctl enable docker && systemctl start dockerCreated symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.[root@localhost googlebigtable]# docker --versionDocker version 19.03.9, build 9d988398e7[root@localhost googlebigtable]# docker infoClient:Debug Mode: false
Server:Containers: 0Running: 0Paused: 0Stopped: 0Images: 0Server Version: 19.03.9Storage Driver: overlay2Backing Filesystem: xfsSupports d_type: trueNative Overlay Diff: trueLogging Driver: json-fileCgroup Driver: cgroupfsPlugins:Volume: localNetwork: bridge host ipvlan macvlan null overlayLog: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslogSwarm: inactiveRuntimes: runcDefault Runtime: runcInit Binary: docker-initcontainerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429runc version: dc9208a3303feef5b3839f4323d9beb36df0a9ddinit version: fec3683Security Options:seccompProfile: defaultKernel Version: 3.10.0-1127.8.2.el7.x86_64Operating System: CentOS Linux 7 (Core)OSType: linuxArchitecture: x86_64CPUs: 4Total Memory: 7.62GiBName: kubernetes-singleID: U4GI:7OI3:B2AK:TA4C:EDHL:63L5:RFD6:NIDM:BCPA:ROWN:U5BQ:KKZADocker Root Dir: /var/lib/dockerDebug Mode: falseRegistry: falseInsecure Registries:127.0.0.0/8Live Restore Enabled: false
[root@localhost googlebigtable]#
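The convenience script used above installs whatever docker-ce is newest in the repository. If a specific version is wanted, for example to match a tested kubeadm/Docker combination, the usual yum route is sketched below; the exact version string is only an example:

# List the versions published in the docker-ce repo, newest first
yum list docker-ce --showduplicates | sort -r
# Install a pinned version (example version string)
yum install -y docker-ce-19.03.9 docker-ce-cli-19.03.9 containerd.io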
Install Kubernetes

[root@localhost googlebigtable]# yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
.....................................................................................................
Complete!
[root@localhost googlebigtable]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost googlebigtable]# cat >> /etc/docker/daemon.json << EOF
{"registry-mirrors": ["..."]}
EOF
[root@localhost googlebigtable]# cat /etc/docker/daemon.json
{"registry-mirrors": ["..."]}
[root@localhost googlebigtable]#
Deploy the Kubernetes master

[root@localhost googlebigtable]# kubeadm init --apiserver-advertise-address=192.168.20.199 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version stable --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
【Command output:
W0525 01:31:27.750870 26835 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at ...
[WARNING Hostname]: hostname "kubernetes-single" could not be reached
[WARNING Hostname]: hostname "kubernetes-single": lookup kubernetes-single on 192.168.20.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.3 not found: manifest unknown: manifest unknown, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.3 not found: manifest unknown: manifest unknown, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.3 not found: manifest unknown: manifest unknown, error: exit status 1
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-proxy:v1.18.3: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-proxy:v1.18.3 not found: manifest unknown: manifest unknown, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher】
【The root cause of this error is that the latest stable Kubernetes images cannot be downloaded: the official Kubernetes registry is blocked by the Great Firewall, and the Aliyun mirror we configured has not yet picked up the latest stable release.】
[root@localhost googlebigtable]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:49:29Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost googlebigtable]# cat -n /etc/yum.repos.d/kubernetes.repo
     1  [kubernetes]
     2  name=Kubernetes
     3  baseurl=...
     4  enabled=1
     5  gpgcheck=0
     6  repo_gpgcheck=0
     7  gpgkey=...
[root@localhost googlebigtable]#
【From the messages above, the installed kubeadm is v1.18.3 and by default it pulls the matching latest stable Kubernetes, also v1.18.3, but the latest stable version currently available on the Aliyun mirror is v1.18.0. The fix is to run kubeadm reset on the master node and then re-run kubeadm init with the Kubernetes version pinned explicitly.】
[root@localhost googlebigtable]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[preflight] Running pre-flight checksW0525 01:30:24.235511 26771 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory[reset] No etcd config found. Assuming external etcd[reset] Please, manually reset etcd to prevent further issues[reset] Stopping the kubelet service[reset] Unmounting mounted directories in "/var/lib/kubelet"W0525 01:30:24.238567 26771 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki][reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf][reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.Please, check the contents of the $HOME/.kube/config file.[root@localhost googlebigtable]#[root@localhost googlebigtable]# kubeadm init --apiserver-advertise-address=192.168.20.199 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16W0525 01:43:12.879595 27416 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io][init] Using Kubernetes version: v1.18.0[preflight] Running pre-flight checks[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at Hostname]: hostname "kubernetes-single" could not be reached[WARNING Hostname]: hostname "kubernetes-single": lookup kubernetes-single on 192.168.20.1:53: no such host[preflight] Pulling images required for setting up a Kubernetes cluster[preflight] This might take a minute or two, depending on the speed of your internet connection[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Starting the kubelet[certs] Using certificateDir folder "/etc/kubernetes/pki"[certs] Generating "ca" certificate and key[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [kubernetes-single kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.20.199][certs] Generating "apiserver-kubelet-client" certificate and key[certs] Generating "front-proxy-ca" certificate and key[certs] Generating "front-proxy-client" certificate and key[certs] Generating "etcd/ca" certificate and key[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [kubernetes-single localhost] and IPs [192.168.20.199 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [kubernetes-single localhost] and IPs [192.168.20.199 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and key[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"[kubeconfig] Writing "admin.conf" kubeconfig file[kubeconfig] Writing "kubelet.conf" kubeconfig file[kubeconfig] Writing "controller-manager.conf" kubeconfig file[kubeconfig] Writing "scheduler.conf" kubeconfig file[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"[control-plane] Creating static Pod manifest for "kube-controller-manager"W0525 01:44:35.823363 27416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"[control-plane] Creating static Pod manifest for "kube-scheduler"W0525 01:44:35.824368 27416 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s[kubelet-check] Initial timeout of 40s passed.[apiclient] All control plane components are healthy after 80.002404 seconds[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster[upload-certs] Skipping phase. Please see --upload-certs[mark-control-plane] Marking the node kubernetes-single as control-plane by adding the label "node-role.kubernetes.io/master=''"[mark-control-plane] Marking the node kubernetes-single as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule][bootstrap-token] Using token: 77a1kv.bx3qsxohrzit2vfa[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key[addons] Applied essential addon: CoreDNS[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kubesudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configsudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.20.199:6443 --token 77a1kv.bx3qsxohrzit2vfa \--discovery-token-ca-cert-hash sha256:c99cbda7e0094e70794ca9a4732118842e6086d1d2c16d06b2c0450da7475ba2 [root@localhost googlebigtable]# exitexit[googlebigtable@localhost ~]$ mkdir -p $HOME/.kube[googlebigtable@localhost ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
We trust you have received the usual lecture from the local SystemAdministrator. It usually boils down to these three things:
#1) Respect the privacy of others. #2) Think before you type. #3) With great power comes great responsibility.
[sudo] password for googlebigtable:
[googlebigtable@localhost ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[googlebigtable@localhost ~]$ su root
Password:
[root@kubernetes-single googlebigtable]# mkdir -p $HOME/.kube
[root@kubernetes-single googlebigtable]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubernetes-single googlebigtable]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubernetes-single googlebigtable]#
【At this point the master node has been initialized, but note the warning kubeadm init printed: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at ... The same warning appears again when running kubeadm join.】
【The Docker cgroup driver can be configured in either of two places: /usr/lib/systemd/system/docker.service or /etc/docker/daemon.json. The kubelet driver is configured in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.】
【Stop the Docker service before changing the Docker configuration, otherwise Docker may fail to restart.】
[root@kubernetes-single googlebigtable]# ps -aux | grep docker
root 3956 1.0 1.0 619092 81060 ? Ssl 03:22 0:03 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 6436 0.0 0.0 107688 6060 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/d3c3d042437eb6669a44ceb5cfe9aa15248dc16148c3797faf5bedfd804db300 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6501 0.0 0.0 107688 6128 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/d8a0564e736da715490a25eb2194b234d264f8def23d2fb323ee0fdd4c04d0d4 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6667 0.0 0.0 109096 6148 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/aea3a4a57bc4f47318707e5f483c34862bc8beeb88f03f77914db15786232eae -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6731 0.0 0.0 107688 6228 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/10e2beb886ef01013382d111efa87f71c2a4e1efd882dd9d749ffff399a08024 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6890 0.0 0.0 109096 6492 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/80172ec50dad5e8dbf247bb4a68eacf0d181dbcd17cb9c5fab106ff9e90ab604 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 6947 0.0 0.0 109096 6348 ? Sl 03:24 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/5480968b39c37f254c107f27fd3d5eec3669e717b21dd8ccd15aa95b85c59808 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7132 0.0 0.0 109096 6248 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/db1e49cab420764b0ff2c22c81647bdd79db5ed6f1fcf1246675fc617ff264a1 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7195 0.0 0.0 107688 6540 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/715c4ad9bd56cedc47ac9149efa04fb0242af29af7c49d7c026d68ab72fe7cdc -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7374 0.0 0.0 107688 6492 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/986d65617c6bf86d572ea79deca4887d1e51e78c790294e9ac7f1ca40b500434 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 7430 0.0 0.0 107688 6364 ? Sl 03:25 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/fd16858ea9604aff7e3664851399bbfe0a6ea04c41e2467618b919bbdd1ef2f8 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 8161 0.0 0.0 112816 968 pts/0 S+ 03:28 0:00 grep --color=auto docker
[root@kubernetes-single googlebigtable]# systemctl stop docker
[root@kubernetes-single googlebigtable]# ps -aux | grep docker
root 8408 0.0 0.0 112812 968 pts/0 S+ 03:28 0:00 grep --color=auto docker
[root@kubernetes-single googlebigtable]#
[root@kubernetes-single googlebigtable]# ls -F /etc/docker/
daemon.json key.json
[root@kubernetes-single googlebigtable]# cat -n /etc/docker/daemon.json
     1  {
     2  "registry-mirrors": ["..."]
     3  }
[root@kubernetes-single googlebigtable]# cp /etc/docker/daemon.json{,.original}
[root@kubernetes-single googlebigtable]# ls -F /etc/docker/
daemon.json daemon.json.original key.json
[root@kubernetes-single googlebigtable]# cat > /etc/docker/daemon.json << EOF
{"registry-mirrors": ["..."], "exec-opts": ["native.cgroupdriver=systemd"]}
EOF
[root@kubernetes-single googlebigtable]# cat -n /etc/docker/daemon.json
     1  {
     2  "registry-mirrors": ["..."],
     3  "exec-opts": ["native.cgroupdriver=systemd"]
     4  }
[root@kubernetes-single googlebigtable]#
Or, alternatively:
[root@kubernetes-single googlebigtable]# docker info | grep Cgroup
Cgroup Driver: cgroupfs
[root@kubernetes-single googlebigtable]# cat -n /usr/lib/systemd/system/docker.service
     1  [Unit]
     2  Description=Docker Application Container Engine
     3  Documentation=...
     4  BindsTo=containerd.service
     5  After=network-online.target firewalld.service containerd.service
     6  Wants=network-online.target
     7  Requires=docker.socket
     8
     9  [Service]
    10  Type=notify
    11  # the default is not to use systemd for cgroups because the delegate issues still
    12  # exists and systemd currently does not support the cgroup feature set required
    13  # for containers run by docker
    14  ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    15  ExecReload=/bin/kill -s HUP $MAINPID
    16  TimeoutSec=0
    17  RestartSec=2
    18  Restart=always
    19
    20  # Note that StartLimit options were moved from "Service" to "Unit" in systemd 229.
    21  # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    22  # to make them work for either version of systemd.
    23  StartLimitBurst=3
    24
    25  # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    26  # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    27  # this option work for either version of systemd.
    28  StartLimitInterval=60s
    29
    30  # Having non-zero Limits causes performance problems due to accounting overhead
    31  # in the kernel. We recommend using cgroups to do container-local accounting.
    32  LimitNOFILE=infinity
    33  LimitNPROC=infinity
    34  LimitCORE=infinity
    35
    36  # Comment TasksMax if your systemd version does not support it.
    37  # Only systemd 226 and above support this option.
    38  TasksMax=infinity
    39
    40  # set delegate yes so that systemd does not reset the cgroups of docker containers
    41  Delegate=yes
    42
    43  # kill only the docker process, not all processes in the cgroup
    44  KillMode=process
    45
    46  [Install]
    47  WantedBy=multi-user.target
[root@kubernetes-single googlebigtable]#
[root@kubernetes-single googlebigtable]# cp /usr/lib/systemd/system/docker.service{,.original}
【Append "--exec-opt native.cgroupdriver=systemd" after "ExecStart=/usr/bin/dockerd" and save the file.】
[root@kubernetes-single googlebigtable]# cat -n /usr/lib/systemd/system/docker.service
     1  [Unit]
     2  Description=Docker Application Container Engine
     3  Documentation=...
     4  BindsTo=containerd.service
     5  After=network-online.target firewalld.service containerd.service
     6  Wants=network-online.target
     7  Requires=docker.socket
     8
     9  [Service]
    10  Type=notify
    11  # the default is not to use systemd for cgroups because the delegate issues still
    12  # exists and systemd currently does not support the cgroup feature set required
    13  # for containers run by docker
    14  ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd -H fd:// --containerd=/run/containerd/containerd.sock
    15  ExecReload=/bin/kill -s HUP $MAINPID
    16  TimeoutSec=0
    17  RestartSec=2
    18  Restart=always
    19
    20  # Note that StartLimit options were moved from "Service" to "Unit" in systemd 229.
    21  # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    22  # to make them work for either version of systemd.
    23  StartLimitBurst=3
    24
    25  # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    26  # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    27  # this option work for either version of systemd.
    28  StartLimitInterval=60s
    29
    30  # Having non-zero Limits causes performance problems due to accounting overhead
    31  # in the kernel.
We recommend using cgroups to do container-local accounting.32 LimitNOFILE=infinity33 LimitNPROC=infinity34 LimitCORE=infinity35 36 # Comment TasksMax if your systemd version does not support it.37 # Only systemd 226 and above support this option.38 TasksMax=infinity39 40 # set delegate yes so that systemd does not reset the cgroups of docker containers41 Delegate=yes42 43 # kill only the docker process, not all processes in the cgroup44 KillMode=process45 46 [Install]47 WantedBy=multi-user.target[root@kubernetes-single googlebigtable]#[root@kubernetes-single googlebigtable]# systemctl disable dockerRemoved symlink /etc/systemd/system/multi-user.target.wants/docker.service.[root@kubernetes-single googlebigtable]# systemctl enable dockerCreated symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.[root@kubernetes-single googlebigtable]# systemctl daemon-reload[root@kubernetes-single googlebigtable]# systemctl restart docker[root@kubernetes-single googlebigtable]# docker info | grep CgroupCgroup Driver: systemd[root@kubernetes-single googlebigtable]#[root@kubernetes-single googlebigtable]# docker infoClient:Debug Mode: false Server:Containers: 18Running: 8Paused: 0Stopped: 10Images: 7Server Version: 19.03.9Storage Driver: overlay2Backing Filesystem: xfsSupports d_type: trueNative Overlay Diff: trueLogging Driver: json-fileCgroup Driver: cgroupfsPlugins:Volume: localNetwork: bridge host ipvlan macvlan null overlayLog: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslogSwarm: inactiveRuntimes: runcDefault Runtime: runcInit Binary: docker-initcontainerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429runc version: dc9208a3303feef5b3839f4323d9beb36df0a9ddinit version: fec3683Security Options:seccompProfile: defaultKernel Version: 3.10.0-1127.8.2.el7.x86_64Operating System: CentOS Linux 7 (Core)OSType: linuxArchitecture: x86_64CPUs: 4Total Memory: 7.62GiBName: kubernetes-singleID: U4GI:7OI3:B2AK:TA4C:EDHL:63L5:RFD6:NIDM:BCPA:ROWN:U5BQ:KKZADocker Root Dir: /var/lib/dockerDebug Mode: falseRegistry: falseInsecure Registries:127.0.0.0/8Registry Mirrors:Restore Enabled: false [root@kubernetes-single googlebigtable]# ps -aux | grep dockerroot 9022 3.7 1.0 873104 80920 ? Ssl 03:45 0:01 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sockroot 9201 0.0 0.0 108968 6656 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/89c8a3397bf326c5fca957a17073d9ffe30253956e08d3c962caefb086a828b8 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runcroot 9210 0.0 0.0 107560 6672 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/5aab15177951ed3ef9f85fe158fde0fbcfb2acede2ba48a4c4678d5fe0b7d2ca -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runcroot 9219 0.0 0.0 107560 6924 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/254fda158dba85349af7c4f38d84dde1ee500d18509f22ca6e3578d4b4aece4f -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runcroot 9228 0.0 0.0 108968 6660 ? 
Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/6086315ede5d1d128b6ecd5a66d3f6d630ab390f9bb643fa008fb5468dc76771 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9409 0.5 0.0 107560 6672 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/505859a22bc24ae459dd83ead5d056537572fe6eec8f1d88d03236551a36d4a4 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9424 0.0 0.0 108968 6656 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/011089c5e1a622d23d175a8419abb022603d99a2f283dee9106c9ca5a4bcfc50 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9459 0.0 0.0 108968 6656 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/fe96bf8ac43c42b0e7f1f4a8e0f459d91f140d91edbf29b3608e0cd7d6031538 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9467 0.0 0.1 107560 8712 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/445b75f79429ec62ae7e13cc0f346e8c62d42cdc39b4e7528b2a7fd5b0de31b2 -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9690 0.0 0.0 107560 6924 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/89bff4fac0d1e3e84141e87555c93f98bd2f7e9a422944fdcfeb30128853848c -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9748 0.0 0.0 107560 6672 ? Sl 03:45 0:00 containerd-shim -namespace moby -workdir /var/lib/containerd/io.containerd.runtime.v1.linux/moby/825e5af830edef5f41582b0f4a1be301db8483b123f0d807837bee9638e01e2a -address /run/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
root 9884 0.0 0.0 112816 968 pts/0 S+ 03:45 0:00 grep --color=auto docker
[root@kubernetes-single googlebigtable]# docker info | grep Cgroup
Cgroup Driver: systemd
[root@kubernetes-single googlebigtable]#
【For various reasons we planned to use Calico as the cluster's network plugin. Calico's default subnet is 192.168.0.0/16 and Flannel's default is 10.244.0.0/16. The VM's IP, 192.168.20.199, overlaps Calico's default subnet, so either Calico's default subnet or the VM's subnet has to be changed. Here I chose to change Calico's default subnet to 10.244.0.0/16, after which Calico can be deployed directly as the Kubernetes network plugin.】
[root@kubernetes-single googlebigtable]# wget ...
--2020-05-25 04:26:10-- ...
Resolving docs.projectcalico.org (docs.projectcalico.org)... 157.230.37.202, 2400:6180:0:d1::57a:6001
The connection to the server :443 was refused - did you specify the right host or port?
Summary: this approach did not work out. Consider deploying only the master node, running the node components on the master as well, and enabling single-machine mode with kubectl taint nodes --all node-role.kubernetes.io/master-.
孟伯, 20200526
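To finish the single-machine setup described in the summary, the master's NoSchedule taint has to be removed so ordinary pods can run on it. A minimal sketch, assuming kubectl is using the admin kubeconfig copied earlier:

# Allow normal workloads to be scheduled on the (only) master node
kubectl taint nodes --all node-role.kubernetes.io/master-
# Verify the node and the kube-system pods
kubectl get nodes -o wide
kubectl get pods -n kube-system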