Kubernetes Setup (Part 2)


Environment

OS:CentOS Linux release 7.4.1708 (Core)

Software versions:

docker-17.03.2-ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.10.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.10.0-0.x86_64
kubeadm-1.10.0-0.x86_64


Hostname        IP
k8s-master01    172.16.5.238
k8s-master02    172.16.5.239
k8s-master03    172.16.5.240
k8s-node01      172.16.5.241
vip             172.16.5.242


Each master should have at least 2 GB of RAM; otherwise the later steps will be painfully slow.

Part 1: Environment Preparation

1.1 Set each machine's hostname, to make mutual access and trust easier

hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01

1.2 Update the hosts file on each machine so the names resolve.

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.5.238 k8s-master01
172.16.5.239 k8s-master02
172.16.5.240 k8s-master03
172.16.5.241 k8s-node01

Once the file is edited on master01, simply copy it to the other machines:

scp /etc/hosts root@k8s-master02:/etc/
scp /etc/hosts root@k8s-master03:/etc/
scp /etc/hosts root@k8s-node01:/etc/

1.3 Set up SSH trust so every machine can reach the others without a password.

ssh-keygen
ssh-copy-id k8s-master02
ssh-copy-id k8s-master03
ssh-copy-id k8s-node01

1.4 Base environment (do this on all four machines)

# Stop the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# Disable SELinux
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config
# Load br_netfilter
modprobe br_netfilter
# Add the kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the configuration
sysctl -p /etc/sysctl.d/k8s.conf
# Check that the corresponding files exist
ls /proc/sys/net/bridge
# Add a domestic (Aliyun mirror) yum repo for Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
# Install base packages
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim lrzsz libseccomp libtool-ltdl
# Set the timezone to Asia/Shanghai and enable automatic time sync
timedatectl set-timezone Asia/Shanghai
timedatectl set-ntp yes
# /etc/security/limits.conf limits users' use of system resources
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

Part 2: Installing keepalived

Note: keepalived only needs to be installed on master01 through master03.

2.1 Install keepalived

yum install -y keepalived
systemctl enable keepalived

2.2 Edit the configuration file (this is k8s-master01's keepalived.conf; a few caveats are covered in the parameter notes below)

The configuration file lives at /etc/keepalived/keepalived.conf.

global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://172.16.5.242:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 61
    priority 100
    advert_int 1
    mcast_src_ip 172.16.5.238
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        172.16.5.239
        172.16.5.240
    }
    virtual_ipaddress {
        172.16.5.242/24
    }
    track_script {
        CheckK8sMaster
    }
}

k8s-master01

global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://172.16.5.242:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 61
    priority 90
    advert_int 1
    mcast_src_ip 172.16.5.239
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        172.16.5.238
        172.16.5.240
    }
    virtual_ipaddress {
        172.16.5.242/24
    }
    track_script {
        CheckK8sMaster
    }
}

k8s-master02

global_defs {
    router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://172.16.5.242:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 61
    priority 80
    advert_int 1
    mcast_src_ip 172.16.5.240
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        172.16.5.238
        172.16.5.239
    }
    virtual_ipaddress {
        172.16.5.242/24
    }
    track_script {
        CheckK8sMaster
    }
}

k8s-master03

Parameter notes:

state: the instance's role; one MASTER and one (or more) BACKUPs.

interface: the NIC the VIP is bound to, i.e. the interface that handles VRRP multicast packets.

priority: the initial priority used when electing the MASTER; valid range is 0-255.

auth_pass: any random string (keepalived only uses the first 8 characters).

virtual_ipaddress: the VIP.

mcast_src_ip: the local machine's IP address.

Detailed descriptions of the remaining keepalived parameters can be found online.

2.3 Start keepalived

Note: start k8s-master01 first, then master02 and master03 in that order.

systemctl restart keepalived

2.4 Verify (on k8s-master01)

[root@k8s-master01 new]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:80:cc:41 brd ff:ff:ff:ff:ff:ff
    inet 172.16.5.238/24 brd 172.16.5.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 172.16.5.242/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe80:cc41/64 scope link
       valid_lft forever preferred_lft forever
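To see at a glance which master currently holds the VIP, you can poll all three machines over the passwordless SSH set up in step 1.3. A minimal sketch, assuming the ens33 interface name and the hostnames used in this article:

for h in k8s-master01 k8s-master02 k8s-master03; do
  echo -n "$h: "
  # the node that prints "VIP" currently owns 172.16.5.242
  ssh "$h" "ip -4 addr show ens33 | grep -q '172.16.5.242' && echo 'VIP' || echo '-'"
done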

2.5 Check keepalived's detailed status on k8s-master01 through k8s-master03.

[root@k8s-master01 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 12:04:16 CST; 27min ago
  Process: 786 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 897 (keepalived)
   Memory: 5.2M
   CGroup: /system.slice/keepalived.service
           ├─897 /usr/sbin/keepalived -D
           ├─898 /usr/sbin/keepalived -D
           └─899 /usr/sbin/keepalived -D

May 31 12:05:00 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:00 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:00 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:00 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:05 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:05 k8s-master01 Keepalived_vrrp[899]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 172.16.5.242
May 31 12:05:05 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:05 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:05 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242
May 31 12:05:05 k8s-master01 Keepalived_vrrp[899]: Sending gratuitous ARP on ens33 for 172.16.5.242

k8s-master01

[root@k8s-master02 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 12:31:49 CST; 1s ago
  Process: 5715 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5718 (keepalived)
   Memory: 1.4M
   CGroup: /system.slice/keepalived.service
           ├─5718 /usr/sbin/keepalived -D
           ├─5719 /usr/sbin/keepalived -D
           └─5720 /usr/sbin/keepalived -D

May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: Truncating auth_pass to 8 characters
May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: (VI_1): Warning - nopreempt will not work with initial state MASTER
May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: Unable to access script `curl`
May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: Disabling track script CheckK8sMaster since not found
May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: VRRP_Instance(VI_1) removing protocol VIPs.
May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: Using LinkWatch kernel netlink reflector...
May 31 12:31:49 k8s-master02 Keepalived_vrrp[5720]: VRRP sockpool: [ifindex(2), proto(112), unicast(1), fd(10,11)]
May 31 12:31:50 k8s-master02 Keepalived_vrrp[5720]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 31 12:31:50 k8s-master02 Keepalived_vrrp[5720]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 90
May 31 12:31:50 k8s-master02 Keepalived_vrrp[5720]: VRRP_Instance(VI_1) Entering BACKUP STATE

k8s-master02

[root@k8s-master03 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 12:32:05 CST; 5s ago
  Process: 7001 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 7004 (keepalived)
   Memory: 1.4M
   CGroup: /system.slice/keepalived.service
           ├─7004 /usr/sbin/keepalived -D
           ├─7005 /usr/sbin/keepalived -D
           └─7006 /usr/sbin/keepalived -D

May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: Truncating auth_pass to 8 characters
May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: (VI_1): Warning - nopreempt will not work with initial state MASTER
May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: Unable to access script `curl`
May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: Disabling track script CheckK8sMaster since not found
May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: VRRP_Instance(VI_1) removing protocol VIPs.
May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: Using LinkWatch kernel netlink reflector...
May 31 12:32:05 k8s-master03 Keepalived_vrrp[7006]: VRRP sockpool: [ifindex(2), proto(112), unicast(1), fd(10,11)]
May 31 12:32:06 k8s-master03 Keepalived_vrrp[7006]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 31 12:32:06 k8s-master03 Keepalived_vrrp[7006]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 80
May 31 12:32:06 k8s-master03 Keepalived_vrrp[7006]: VRRP_Instance(VI_1) Entering BACKUP STATE

k8s-master03

Note: make sure the configuration file is correct. If keepalived ends up in the error state below, restart the service and check again; if the problem persists, go over the config carefully. (There is a gotcha here: my config looked perfectly fine yet refused to work, so I scp'd the file from a healthy machine and just changed the IP address.)

The error state:

Part 3: Installing and Configuring etcd

3.1 Create the etcd certificates (run on k8s-master01 only)

The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) certificate and key files. The CA certificate is self-signed and is used to sign all the TLS certificates created later.

3.2 Download cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3.4 Make the binaries executable and add the directory to PATH

chmod +x /usr/local/bin/cfssl*
export PATH=/usr/local/bin:$PATH

3.5 Create a working directory for the CA files

mkdir /root/ssl
cd /root/ssl

3.6 Create the ca-config.json file (a standard cfssl CA config; the profile name must match the one used in step 4.1, and the expiry values can be adjusted to taste).

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

3.7 Create the ca-csr.json file (a standard cfssl CSR; the names fields are informational).

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

3.8 Check that the installation succeeded

ls /usr/local/bin/cfssl*

3.9 Generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

[root@k8s-master01 ssl]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

4.0 Create the etcd TLS certificate signing request (note: no spaces between the quotes and the IPs, and no comma after the last entry in hosts)

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "172.16.5.238",
    "172.16.5.239",
    "172.16.5.240"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Parameter notes:

hosts: the etcd node IPs that are authorized to use this certificate.

4.1 Generate the etcd certificate and private key

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
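It is worth confirming that the IPs from the hosts list in step 4.0 really ended up in the certificate's SANs. cfssl-certinfo, installed in step 3.2, dumps the certificate as JSON; a quick check:

cd /root/ssl
# the "sans" field in the JSON output should list the three master IPs
cfssl-certinfo -cert etcd.pem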

4.2 Distribute the etcd certificates from k8s-master01 to k8s-master02 and k8s-master03

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n k8s-master02 "mkdir -p /etc/etcd/ssl && exit"
ssh -n k8s-master03 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem k8s-master02:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem k8s-master03:/etc/etcd/ssl/

4.3 Install and configure etcd (k8s-master01, k8s-master02, k8s-master03)

All certificates are now generated and copied to every master; next, install and configure etcd on each of them.

Install etcd:

yum install -y etcd
# The working directory must be created before etcd starts
mkdir -p /var/lib/etcd

4.4 Create the etcd systemd unit file

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \\
  --name k8s-master01 \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls https://172.16.5.238:2380 \\
  --listen-peer-urls https://172.16.5.238:2380 \\
  --listen-client-urls https://172.16.5.238:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://172.16.5.238:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster k8s-master01=https://172.16.5.238:2380,k8s-master02=https://172.16.5.239:2380,k8s-master03=https://172.16.5.240:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

k8s-master01

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \\
  --name k8s-master02 \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls https://172.16.5.239:2380 \\
  --listen-peer-urls https://172.16.5.239:2380 \\
  --listen-client-urls https://172.16.5.239:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://172.16.5.239:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster k8s-master01=https://172.16.5.238:2380,k8s-master02=https://172.16.5.239:2380,k8s-master03=https://172.16.5.240:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

k8s-master02

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \\
  --name k8s-master03 \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls https://172.16.5.240:2380 \\
  --listen-peer-urls https://172.16.5.240:2380 \\
  --listen-client-urls https://172.16.5.240:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://172.16.5.240:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster k8s-master01=https://172.16.5.238:2380,k8s-master02=https://172.16.5.239:2380,k8s-master03=https://172.16.5.240:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

k8s-master03

Parameter notes:

If your hostnames and IPs differ from this tutorial, replace k8s-master01, k8s-master02 and k8s-master03 with the corresponding IPv4 addresses, and set --name on each machine to its own hostname. The hostnames must be resolvable via DNS, or have entries added to /etc/hosts. The etcd working and data directory is /var/lib/etcd, which must be created before the service starts. To secure communication, the unit specifies etcd's own certificate and key (cert-file and key-file), the peer-communication certificate, key and CA (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file).

4.5 Enable the service at boot (if it fails to come up, check /var/log/messages)

# Move the unit file we just created into /usr/lib/systemd/system/
/bin/mv /etc/systemd/system/etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
systemctl status etcd

systemd reads configuration from /etc/systemd/system/ by default, but most files there are symlinks pointing into /usr/lib/systemd/system/, where the real unit files live. The first etcd process to start will hang for a while, waiting for the etcd processes on the other nodes to join the cluster; this is normal. Repeat the steps above on all etcd nodes until the etcd service is running on every machine.
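To confirm which unit file systemd actually loaded after the move, systemctl cat prints the resolved path together with the file's contents:

# the first line of the output is a comment giving the absolute path of the loaded unit
systemctl cat etcd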

4.6 Self-check. This step is important: verify on each of the three masters that etcd is running properly before continuing with the installation below.

[root@k8s-master01 ssl]# curl -L http://127.0.0.1:2379/health
{"health": "true"}

{"health": "true"}表示正常

Part 4: Installing Docker on All Master Nodes

Note: kubeadm currently supports Docker up to version 17.03.x. You can install a newer version; the 17.03 RPMs used here come from the Docker CE package repository.

yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

4.1 Edit the Docker unit file

vim /usr/lib/systemd/system/docker.service

# Comment out the original ExecStart=/usr/bin/dockerd line and add the line below,
# filling in your own registry mirror address
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=<your-mirror-url>

4.2 Copy the modified unit file to the other masters

scp /usr/lib/systemd/system/docker.service root@k8s-master02:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/docker.service root@k8s-master03:/usr/lib/systemd/system/

4.3 Start Docker

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker

Part 5: Installing kubelet, kubeadm, and kubectl on All Master Nodes

5.1.1 Install, start, and enable at boot.

yum install -y kubelet kubeadm kubectl
systemctl start kubelet
systemctl enable kubelet

5.1.2 Edit the kubelet configuration file on every node where it was installed

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

The changes:

# Change this line
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# Add this line at the very bottom
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"
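The --cgroup-driver value must match the driver Docker is actually using, otherwise the kubelet refuses to start. You can check Docker's side before committing to cgroupfs:

# prints "Cgroup Driver: cgroupfs" or "Cgroup Driver: systemd";
# the KUBELET_CGROUP_ARGS value above must match it
docker info 2>/dev/null | grep -i 'cgroup driver'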

5.1.3 Since the file is identical everywhere, I simply scp it to the other two masters.

scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf root@k8s-master02:/etc/systemd/system/kubelet.service.d/
scp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf root@k8s-master03:/etc/systemd/system/kubelet.service.d/

5.1.4 All masters must reload the configuration

systemctl daemon-reload

5.1.5 Add command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

5.1.6 Before initializing the cluster, check keepalived (has the VIP been created?) and etcd (is it healthy?)

# Make sure keepalived is running
systemctl restart keepalived
# Check that the VIP exists on k8s-master01
ip a
# Check that etcd is healthy
curl -L http://127.0.0.1:2379/health
# expected: {"health": "true"}

5.1.7 Create the kubeadm config file (we bring k8s-master01 in first)

cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://172.16.5.238:2379
  - https://172.16.5.239:2379
  - https://172.16.5.240:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "172.16.5.242"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- k8s-node01
- 172.16.5.238
- 172.16.5.239
- 172.16.5.240
- 172.16.5.241
- 172.16.5.242
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"
EOF

k8s-master01

Copy it to k8s-master02 and k8s-master03:

scp config.yaml root@k8s-master02:/root/
scp config.yaml root@k8s-master03:/root/

5.1.8 Initialize k8s-master01 first

kubeadm init --config config.yaml

The normal output looks like the screenshot below. Be sure to save the parts circled in red; the kubeadm join command in the last circled section is needed later when adding the node.

5.1.9 If the output is not as above, check whether the keepalived and etcd services are healthy. The init command can only be run once against a fresh state; if it fails, reset with the commands below before trying again.

kubeadm reset
# or:
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a | awk '{print $1}' | xargs docker rm -f
systemctl stop kubelet

5.2.1 Run the following commands on k8s-master01

# To make kubectl work for a non-root user, run these commands
# (they are also printed as part of the kubeadm init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Without this step, every kubectl command fails with:
# The connection to the server localhost:8080 was refused - did you specify the right host or port?

5.2.2 Distribute the certificate files generated by kubeadm to k8s-master02 and k8s-master03

scp -r /etc/kubernetes/pki k8s-master02:/etc/kubernetes/
scp -r /etc/kubernetes/pki k8s-master03:/etc/kubernetes/

5.2.3 Deploy the flannel network; this only needs to be done on master01.

Before running this step you can already inspect the cluster state. Until flannel is deployed, the node and the DNS pods cannot become Ready because there is no pod network yet, which is why this step matters (see the sketch after the listings below).

# Check the node and pod status

[root@k8s-master01 system]# kubectl get node
NAME           STATUS     ROLES     AGE       VERSION
k8s-master01   NotReady   master    47s       v1.10.3

[root@k8s-master01 system]# kubectl get pods --namespace="kube-system"
NAME                                    READY     STATUS    RESTARTS   AGE
coredns-7997f8864c-fgfph                0/1       Pending   0          44s
coredns-7997f8864c-ng2p9                0/1       Pending   0          44s
kube-controller-manager-k8s-master01   1/1       Running   0          59s
kube-proxy-v8f25                        1/1       Running   0          44s
kube-scheduler-k8s-master01             1/1       Running   0          52s
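To confirm that the Pending CoreDNS pods are waiting on the missing pod network and nothing else, their events can be inspected. A sketch, assuming the k8s-app=kube-dns label that kubeadm applies to the CoreDNS pods:

# the Events section typically shows the scheduler waiting for a ready node/network
kubectl -n kube-system describe pods -l k8s-app=kube-dns | grep -A5 Events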

5.2.4 Install flannel

Fetch kube-flannel.yml:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

Or do it all in one step:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once flannel is up, the node becomes Ready:

[root@k8s-master01 system]# kubectl get node
NAME           STATUS    ROLES     AGE       VERSION
k8s-master01   Ready     master    42m       v1.10.3

[root@k8s-master01 system]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-fgfph                1/1       Running   0          41m
kube-system   coredns-7997f8864c-ng2p9                1/1       Running   0          41m
kube-system   kube-apiserver-k8s-master01             1/1       Running   0          40m
kube-system   kube-controller-manager-k8s-master01   1/1       Running   0          41m
kube-system   kube-flannel-ds-w8xfx                   1/1       Running   0          39m
kube-system   kube-proxy-v8f25                        1/1       Running   0          41m
kube-system   kube-scheduler-k8s-master01             1/1       Running   0          41m

At this point a single-master Kubernetes cluster is up and running.
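Before moving on to the dashboard, a quick sanity check of the control plane does no harm:

# the API server endpoint should point at the VIP configured in config.yaml
kubectl cluster-info
# scheduler, controller-manager and etcd health as seen by the apiserver
kubectl get componentstatuses
# the master should now be Ready
kubectl get nodes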

Part 6: Deploying the Dashboard (installed on k8s-master01 only)

6.1 First create the dashboard YAML file

cat <<EOF > kubernetes-dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: siriuszg/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          #- --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          #- --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
---
# ------------------------------------------------------------
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-external
  namespace: kube-system
spec:
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30090
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
EOF

kubernetes-dashboard.yaml

6.2 The official source file is in the kubernetes/dashboard GitHub repository; the Deployment above uses a mirrored image, which can be pre-pulled:

docker pull siriuszg/kubernetes-dashboard-amd64:v1.8.3

6.3 With the edits done, create the pod:

kubectl create -f kubernetes-dashboard.yaml

6.4 When it works, kubernetes-dashboard shows as Running; if not, check the pod status.

[root@k8s-master kubernetes-dashboard-master]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-92xtj                1/1       Running   0          4h
kube-system   coredns-7997f8864c-cwvpl                1/1       Running   0          4h
kube-system   kube-apiserver-k8s-master               1/1       Running   0          3h
kube-system   kube-controller-manager-k8s-master     1/1       Running   0          3h
kube-system   kube-flannel-ds-2wclt                   1/1       Running   0          3h
kube-system   kube-proxy-mtcns                        1/1       Running   0          4h
kube-system   kube-scheduler-k8s-master               1/1       Running   0          3h
kube-system   kubernetes-dashboard-6699c65d5f-6k2w4   1/1       Running   0          3h

Check the dashboard's port with get svc:

[root@k8s-master01 kubernetes-dashboard-master]# kubectl -n kube-system get svc
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kube-dns                        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP    5d
kubernetes-dashboard-external   NodePort    10.96.220.208   <none>        9090:30090/TCP   1h

6.5 Test (browse straight to the VIP on port 30090)

The page initially reports an authorization error:

configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"

A quick search turned up a site with the fix — create a ClusterRoleBinding:

[root@k8s-master01 ~]# vim kube-dashboard-access.yaml

with the following content:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Apply it with the command below, then refresh the browser (or try a different one); the error is gone.

kubectl create -f kube-dashboard-access.yaml
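To verify the binding took effect without clicking through the UI, kubectl can impersonate the dashboard's service account and re-ask the exact question from the error message:

# prints "yes" once the cluster-admin binding is in place
kubectl auth can-i list configmaps --namespace default \
  --as system:serviceaccount:kube-system:kubernetes-dashboard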

Part 7: Initializing master02 and master03 (joining the cluster for a multi-master control plane)

7.1 We already created config.yaml on k8s-master01 and scp'd it to k8s-master02 and k8s-master03, so run the following on each of those two machines to join the cluster.

kubeadm init --config config.yaml
# the output is identical to the one on master01
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

7.2 Right after initialization, kubectl get node already shows the cluster, but the new masters are NotReady while their images are still being pulled; they become Ready once the images are in place. Keep an eye on /var/log/messages on each host.

kubectl get node
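The transition from NotReady to Ready can be watched rather than polled by hand:

# refreshes every 5 seconds; interrupt with Ctrl-C once all masters are Ready
watch -n 5 'kubectl get nodes'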

7.3 Check the status of every pod

[root@k8s-master01 ~]# kubectl get pods --all-namespaces

NAMESPACE     NAME                                    READY     STATUS                  RESTARTS   AGE
kube-system   coredns-7997f8864c-fgfph                1/1       Running                 0          1h
kube-system   coredns-7997f8864c-ng2p9                1/1       Running                 0          1h
kube-system   kube-apiserver-k8s-master01             1/1       Running                 0          1h
kube-system   kube-apiserver-k8s-master02             1/1       Running                 0          6m
kube-system   kube-apiserver-k8s-master03             1/1       Running                 0          4m
kube-system   kube-controller-manager-k8s-master01   1/1       Running                 0          1h
kube-system   kube-controller-manager-k8s-master02   1/1       Running                 0          6m
kube-system   kube-controller-manager-k8s-master03   1/1       Running                 0          4m
kube-system   kube-flannel-ds-6h4r8                   0/1       Init:ImagePullBackOff   0          11m
kube-system   kube-flannel-ds-sdww9                   0/1       Init:ImagePullBackOff   0          9m
kube-system   kube-flannel-ds-w8xfx                   1/1       Running                 0          1h
kube-system   kube-proxy-7nmgz                        1/1       Running                 0          9m
kube-system   kube-proxy-nzb5f                        1/1       Running                 0          11m
kube-system   kube-proxy-v8f25                        1/1       Running                 0          1h
kube-system   kube-scheduler-k8s-master01             1/1       Running                 0          1h
kube-system   kube-scheduler-k8s-master02             1/1       Running                 0          6m
kube-system   kube-scheduler-k8s-master03             1/1       Running                 0          4m
kube-system   kubernetes-dashboard-6699c65d5f-fr8jr   1/1       Running                 0          26m

When every pod is in Running state, all is well.

Open the dashboard page; all indicators green means everything is fine.

Part 8: Adding k8s-node01

8.1 Install Docker

yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

8.2 Edit the unit file (we already did this on the three masters; you can also just scp it over)

vim /usr/lib/systemd/system/docker.service

Comment out the original ExecStart=/usr/bin/dockerd line and replace it with:

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=<your-mirror-url>

8.3 Start Docker

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker

8.4 Install and configure kubeadm (we already did this on the three masters)

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet

8.5 Edit the kubelet configuration file

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Change this line
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# Add this line
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"

After the change:

Reload the configuration:

systemctl daemon-reload

8.6 Add command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

8.7 Create the node's config file

cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://172.16.5.238:2379
  - https://172.16.5.239:2379
  - https://172.16.5.240:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "172.16.5.242"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- k8s-master01
- k8s-master02
- k8s-master03
- k8s-node01
- 172.16.5.238
- 172.16.5.239
- 172.16.5.240
- 172.16.5.241
- 172.16.5.242
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"
EOF

8.8 Run the following on k8s-node01 (this is the join command printed when the cluster was first initialized successfully on k8s-master01)

kubeadm join 172.16.5.242:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:4af22c3aad6625dd24130d9bfa0b12b16696fb1147773b3fe5d33eb505e8d12b

8.9 Then wait for a series of images to be pulled (check the logs while you wait); once done, look at the nodes.

kubectl get node

Setup complete.

Part 9: Installing Heapster

9.1 Download the latest version from the kubernetes/heapster GitHub repository and unpack it

wget https://github.com/kubernetes/heapster/archive/master.zip
unzip master.zip

Check the directory layout:

Heapster uses the directories heapster-master/deploy/kube-config/influxdb and heapster-master/deploy/kube-config/rbac:

cd /root/heapster-master/deploy/kube-config
# ls
google  influxdb  rbac  standalone  standalone-test  standalone-with-apiserver
# cd influxdb/
# ls
grafana.yaml  heapster.yaml  influxdb.yaml
# cd ../rbac/
# ls
heapster-rbac.yaml

9.2 Now the key part: the image values in the three files under /root/heapster-master/deploy/kube-config/influxdb all point to Google's registry by default, which is unreachable unless you can get past the firewall.

# The official files reference images hosted on Google's registry:
image: k8s.gcr.io/heapster-amd64:v1.5.3
image: k8s.gcr.io/heapster-grafana-amd64:v4.4.3
image: k8s.gcr.io/heapster-influxdb-amd64:v1.3.3

9.3 Here we use Alibaba's registry instead, which needs no workarounds. First, edit the three files grafana.yaml, heapster.yaml and influxdb.yaml under /root/heapster-master/deploy/kube-config/influxdb/, changing every image value to the addresses below.

# heapster.yaml
registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-amd64:v1.5.1
# influxdb.yaml
registry.cn-hangzhou.aliyuncs.com/kube_containers/heapster_influxdb:v1.3.3
# grafana.yaml
registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-grafana-amd64:v4.4.3
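Instead of editing the three files by hand, the image references can be swapped with sed. A sketch, assuming the default k8s.gcr.io references from 9.2 are still in place:

cd /root/heapster-master/deploy/kube-config/influxdb
sed -i 's#k8s.gcr.io/heapster-amd64:v1.5.3#registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-amd64:v1.5.1#' heapster.yaml
sed -i 's#k8s.gcr.io/heapster-influxdb-amd64:v1.3.3#registry.cn-hangzhou.aliyuncs.com/kube_containers/heapster_influxdb:v1.3.3#' influxdb.yaml
sed -i 's#k8s.gcr.io/heapster-grafana-amd64:v4.4.3#registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-grafana-amd64:v4.4.3#' grafana.yaml
# confirm nothing still points at Google's registry
grep 'image:' *.yaml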

9.4 Pull the images

docker pull registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-amd64:v1.5.1
docker pull registry.cn-hangzhou.aliyuncs.com/kube_containers/heapster_influxdb:v1.3.3
docker pull registry.cn-shenzhen.aliyuncs.com/rancher_cn/heapster-grafana-amd64:v4.4.3

9.5 Every docker pull prints the exact reference the image was downloaded under; make a note of it, then go over grafana.yaml, heapster.yaml and influxdb.yaml once more and update each image value accordingly.

The arrow in the original screenshot points at the actual address of the pulled image; be sure to write it down.

9.6 With the actual image addresses recorded, edit grafana.yaml, heapster.yaml and influxdb.yaml again, updating the image value in each file.

9.7 Finally, create the resources

[root@k8s-master01 ~]# cd /root/heapster-master/deploy/kube-config
# Create the RBAC resources first
[root@k8s-master01 kube-config]# kubectl create -f rbac/
# Then create grafana.yaml, heapster.yaml and influxdb.yaml
[root@k8s-master01 kube-config]# kubectl create -f influxdb/

9.8 Verify

[root@k8s-master01 kube-config]# kubectl get pods --all-namespaces

The pods being created:

Creation complete:

Part 10: Testing High Availability

10.1 First, shut down k8s-master01.

10.2 Then run ip a on k8s-master02 to check whether the VIP has failed over automatically (a polling sketch follows).
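While master01 is going down, you can also watch the failover from any machine that can reach the VIP by polling the apiserver behind it. Any HTTP status code (even 401/403) proves the apiserver behind the VIP is alive; a minimal sketch:

while true; do
  # 000 means the VIP/apiserver is unreachable at that moment
  code=$(curl -k -s -o /dev/null -w '%{http_code}' https://172.16.5.242:6443/healthz)
  echo "$(date +%T) apiserver via VIP: HTTP $code"
  sleep 2
done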

10.3 Run kubectl get node on k8s-master03 to check the node status; at this point k8s-master01 shows as NotReady.

10.4 Create a simple pod on master03 (a minimal example manifest follows).
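The article does not show myapp-pod.yml itself; a minimal manifest along these lines works for the test (the image name is only an example — any image that stays running will do):

cat <<EOF > myapp-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1   # illustrative image; substitute your own
    ports:
    - containerPort: 80
EOF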

[root@k8s-master03 ~]# kubectl create -f myapp-pod.yml
pod "myapp-pod" created

10.5 Check the pod's status (it starts out as ContainerCreating; wait about a minute)

[root@k8s-master03 ~]# kubectl get pods myapp-pod -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-pod   1/1       Running   0          1m        10.244.3.50   k8s-node01

10.6 View detailed information

kubectl describe pod myapp-pod
