
Installing a Highly Available Kubernetes Cluster with kubeadm

Posted by NianShen Blog on April 12, 2020

Machine Planning

Notes:

  • All servers can reach each other over the network
  • Servers can access the public internet, or the intranet provides basic infrastructure such as an image registry, a yum mirror, and an NTP time source
  • The number of etcd nodes must be odd; etcd can be set up automatically by kubeadm or provided as an external cluster
  • Master high availability requires at least three master nodes
  • The load VIP provides apiserver high availability through keepalived and haproxy; it can be an unused IP on your intranet or a public cloud load balancer
  • Clocks on all servers must stay in sync (see the chrony sketch after the host table below)
Hostname         IP Address      Role          Deployed Services
k8s-master-001   172.17.113.80   master+etcd   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
k8s-master-002   172.17.113.81   master+etcd   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
k8s-master-003   172.17.113.82   master+etcd   kubeadm, kubelet, kubectl, docker
k8s-node-001     172.17.113.83   node          kubeadm, kubelet, docker
k8s-node-002     172.17.113.84   node          kubeadm, kubelet, docker
Load VIP         172.17.113.79   VIP
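
For time synchronization, a minimal sketch using chrony (assuming the default CentOS 7 pool servers; point /etc/chrony.conf at your internal NTP source if you have one):

$ yum -y install chrony
$ systemctl enable chronyd && systemctl start chronyd
$ chronyc sources        # confirm the time sources are reachable and in sync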

Component Version Planning

If installing with yum, you can list the available rpm package versions with:

yum list docker-ce --showduplicates | sort -r 

Docker and Kubernetes versions have dependency constraints; check the CHANGELOG at https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG to see which Docker versions are supported by the Kubernetes release you want to install.
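
Similarly, once the Kubernetes yum repository is configured (added later in this post), the available kubeadm/kubelet/kubectl versions can be listed the same way:

yum list kubeadm --showduplicates | sort -r
yum list kubelet --showduplicates | sort -r
yum list kubectl --showduplicates | sort -r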

Component    Version
docker-ce    19.03.8-3.el7
kubeadm      1.17.4-0
kubectl      1.17.4-0
kubelet      1.17.4-0
etcd
OS           CentOS 7.7.1908
kernel       3.10.0-1062.18.1.el7.x86_64
kube-proxy
flannel
keepalived   1.3.5-16.el7
haproxy      1.5.18-9

Docker version compatibility for some Kubernetes releases:

Kubernetes Version   Supported Docker Versions
Kubernetes 1.18.x    1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09, 19.03
Kubernetes 1.17.x    1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09, 19.03
Kubernetes 1.16.x    1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09
Kubernetes 1.15.x    1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09
Kubernetes 1.14.x    1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09
Kubernetes 1.13.x    1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06
Kubernetes 1.12.x    1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06
Kubernetes 1.11.x    1.11.2 through 1.13.1, 17.03
Kubernetes 1.10.x    1.11.2 through 1.13.1, 17.03

Preparation

Install the specified version of docker-ce on every server (see the earlier post on installing docker-ce)
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install -y docker-ce-19.03.8-3.el7 
Create or modify /etc/docker/daemon.json on every server and start Docker
$ mkdir -p /etc/docker/

$ cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://fz5yth0r.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF

$ systemctl start docker

$ systemctl enable docker
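
To confirm that Docker picked up the systemd cgroup driver and the overlay2 storage driver from daemon.json, a quick check (the output should show systemd and overlay2):

$ docker info | grep -iE 'cgroup driver|storage driver'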

Disable the swap partition

Since Kubernetes 1.8, swap must be disabled; otherwise kubelet will not start with its default configuration.

$  swapoff -a

Prevent the swap partition from being mounted automatically at boot
$  sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
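Verify that swap is fully off (both commands should report no swap devices and a swap total of 0):
$ swapon -s
$ free -m | grep -i swap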
Disable SELinux and firewalld
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld
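Verify both are off (getenforce prints Permissive until the next reboot and Disabled afterwards; firewalld should report inactive):
getenforce
systemctl is-active firewalld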
Adjust kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF

sysctl --system
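If sysctl --system reports that net.bridge.bridge-nf-call-iptables does not exist, the br_netfilter module is not loaded yet; a minimal sketch to load it now and persist it across reboots:
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system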
Enable IPVS

Load balancing of traffic to pods is implemented by kube-proxy, which supports two modes: the default iptables mode and IPVS mode. IPVS performs better than iptables (a separate post comparing IPVS and iptables will follow). Five kernel modules need to be loaded: ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4.

$ yum -y install ipvsadm ipset

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

If you see output like the figure below, the kernel modules have been loaded (figure: ipvs.png)
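
The /etc/sysconfig/modules mechanism above dates from older RHEL releases; on a systemd-based system such as CentOS 7, a sketch of an alternative (or additional) way to load the same modules on every boot via systemd-modules-load:

$ cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF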

Add /etc/hosts entries and configure passwordless SSH from master-001 to the other servers (details omitted; a sketch follows)
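Although the details are omitted, this step typically amounts to appending the entries from the planning table to /etc/hosts on every node and pushing an SSH key from master-001; a minimal sketch:
$ cat >> /etc/hosts <<EOF
172.17.113.80 k8s-master-001
172.17.113.81 k8s-master-002
172.17.113.82 k8s-master-003
172.17.113.83 k8s-node-001
172.17.113.84 k8s-node-002
EOF

$ ssh-keygen -t rsa      # on master-001, accept the defaults
$ for h in k8s-master-002 k8s-master-003 k8s-node-001 k8s-node-002; do ssh-copy-id root@$h; done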
Add the Kubernetes yum repository on every server and install the components
$ cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

$ yum -y install kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4
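
Enabling kubelet at this point is standard practice so it starts on boot; it will crash-loop until kubeadm init/join runs, which is expected:

$ systemctl enable kubelet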
haproxy and keepalived

Install and configure haproxy on master-001 and master-002; the haproxy configuration is identical on both servers. If you are running on a public cloud, use the cloud provider's load balancer instead.

$ yum -y install haproxy

[root@k8s-master-001 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfgbak
[root@k8s-master-001 ~]# vim /etc/haproxy/haproxy.cfg
[root@k8s-master-001 ~]# cat /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kubernetes
    bind *:8443                     # listen on 8443 so it does not conflict with the apiserver's port 6443
    mode tcp
    default_backend kubernetes-master

backend kubernetes-master           # requests to 172.17.113.79:8443 are forwarded to the two backend masters, providing load balancing
    balance roundrobin               
    server k8s-master-001  172.17.113.80:6443 check maxconn 2000
    server k8s-master-002  172.17.113.81:6443 check maxconn 2000

[root@k8s-master-001 ~]# systemctl restart haproxy && systemctl enable haproxy
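
A quick check that haproxy is listening on 8443 on both masters:

[root@k8s-master-001 ~]# ss -lntp | grep 8443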

[root@k8s-master-001 ~]# yum install -y keepalived
[root@k8s-master-002 ~]# yum install -y keepalived
[root@k8s-master-001 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 1.1.1.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER           # change to BACKUP on the standby server
    interface eth0         # change to your own NIC name
    virtual_router_id 51
    priority 100           # use a lower number on standby servers, e.g. 90, 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.17.113.79          # set your own VIP
    }
}

[root@k8s-master-001 ~]# systemctl enable keepalived && systemctl start keepalived

Verify that keepalived is working correctly

Check whether the VIP is now present on master-001

[root@k8s-master-001 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:30:4e:f3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.113.80/20 brd 172.17.127.255 scope global dynamic eth0
       valid_lft 315239229sec preferred_lft 315239229sec
    inet 172.17.113.79/32 scope global eth0
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:19:34:32:ef brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Stop the keepalived service on master-001 and check whether the VIP has failed over to master-002
[root@k8s-master-001 ~]# systemctl stop keepalived

[root@k8s-master-002 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:10:76:55 brd ff:ff:ff:ff:ff:ff
    inet 172.17.113.81/20 brd 172.17.127.255 scope global dynamic eth0
       valid_lft 315239124sec preferred_lft 315239124sec
    inet 172.17.113.79/32 scope global eth0
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:ea:95:44:08 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Restart the keepalived service on master-001 and check whether the VIP has moved back to master-001
[root@k8s-master-001 ~]# systemctl start keepalived
[root@k8s-master-001 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:30:4e:f3 brd ff:ff:ff:ff:ff:ff
    inet 172.17.113.80/20 brd 172.17.127.255 scope global dynamic eth0
       valid_lft 315239080sec preferred_lft 315239080sec
    inet 172.17.113.79/32 scope global eth0
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:19:34:32:ef brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
       valid_lft forever preferred_lft forever
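
As a final sanity check, the VIP should be reachable again from any node in the cluster:

[root@k8s-node-001 ~]# ping -c 2 172.17.113.79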