
Building a Highly Available Multi-Master Kubernetes Cluster

xiyangw · 2023-05-14 11:15

1 Architecture Planning

1.1 Versions


  • OS: CentOS 7.6, kernel 3.10.0-1062.4.1.el7.x86_64
  • Kubernetes: v1.15.3
  • Docker-ce: 18.06.1
  • Recommended hardware: 2 CPU cores, 4 GB RAM
  • Keepalived keeps the apiserver IP (VIP) highly available
  • HAProxy load-balances traffic across the apiservers

1.2 Server Information
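
The server layout used in this guide, reconstructed from the hosts file in 2.1.3 and the roles assigned in later sections:

 Hostname | IP           | Role
 master1  | 192.168.0.61 | control plane
 master2  | 192.168.0.62 | control plane
 master3  | 192.168.0.63 | control plane
 node1    | 192.168.0.64 | worker, keepalived + haproxy
 node2    | 192.168.0.65 | worker, keepalived + haproxy
 node3    | 192.168.0.66 | worker
 VIP      | 192.168.0.60 | apiserver virtual IP (keepalived)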

2 Pre-Deployment Preparation

Run the following on all machines.

2.1 Initialize the System

2.1.1 Disable SELinux and the firewall

$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ setenforce 0
$ systemctl disable firewalld
$ systemctl stop firewalld

2.1.2 Disable swap

$ swapoff -a # temporary; swap comes back after a reboot
$ sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent; takes effect after a reboot
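
To confirm swap is off, the Swap line reported by free should read all zeros:

$ free -h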

2.1.3 Add host resolution entries on every server

Append (rather than overwrite) so the existing localhost entries are preserved:

$ cat <<EOF >> /etc/hosts
192.168.0.61 master1
192.168.0.62 master2
192.168.0.63 master3
192.168.0.64 node1
192.168.0.65 node2
192.168.0.66 node3
EOF

2.1.4 Create and distribute SSH keys

Create an SSH key pair on master1 and distribute master1's public key to the other servers for passwordless login:

$ ssh-keygen -t rsa
$ for i in {62..66};do ssh-copy-id 192.168.0.$i;done
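
To verify that passwordless login works, a quick loop over the same host list should print each hostname without prompting for a password:

$ for i in {62..66};do ssh 192.168.0.$i hostname;done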

2.1.5 Configure kernel parameters

$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
$ modprobe br_netfilter && modprobe bridge
$ sysctl -p /etc/sysctl.d/k8s.conf
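
Modules loaded with modprobe do not survive a reboot. A minimal way to make br_netfilter load at boot on a systemd host (using the standard modules-load.d mechanism) is:

$ cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF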

2.1.6 Load the IPVS modules

$ cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

2.1.7 Add yum repositories

$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
 http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
$ wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
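
After adding the repositories, rebuilding the yum metadata cache helps catch mirror problems early:

$ yum clean all && yum makecache fast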

3 Deploy Keepalived and HAProxy

Install keepalived and haproxy on node1 and node2.

3.1 Install keepalived and haproxy

$ yum install -y keepalived haproxy

3.2 Edit the configurations

Keepalived configuration:

  • node1: priority 100, state MASTER
  • node2: priority 90, state BACKUP
  • interface ens192: replace with the NIC name of your own environment

$ cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
 router_id LVS_DEVEL
}

vrrp_script check_haproxy {
 script "killall -0 haproxy"
 interval 3
 weight -2
 fall 10
 rise 2
}

vrrp_instance VI_1 {
 state MASTER
 interface ens192
 virtual_router_id 51
 priority 100
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass 35f18af7190d51c9f7f78f37300a0cbd
 }
 virtual_ipaddress {
 192.168.0.60
 }
 track_script {
 check_haproxy
 }
}
EOF

HAProxy configuration:

node1 and node2 use the same haproxy configuration. Here haproxy listens on port 6443 of the VIP 192.168.0.60. If haproxy were deployed on the same servers as the kube-apiserver, it could not use 6443, as the port would conflict.

$ cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
 # to have these messages end up in /var/log/haproxy.log you will
 # need to:
 #
 # 1) configure syslog to accept network log events. This is done
 # by adding the '-r' option to the SYSLOGD_OPTIONS in
 # /etc/sysconfig/syslog
 #
 # 2) configure local2 events to go to the /var/log/haproxy.log
 # file. A line like the following can be added to
 # /etc/sysconfig/syslog
 #
 # local2.* /var/log/haproxy.log
 #
 log 127.0.0.1 local2

 chroot /var/lib/haproxy
 pidfile /var/run/haproxy.pid
 maxconn 4000
 user haproxy
 group haproxy
 daemon

 # turn on stats unix socket
 stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
 mode http
 log global
 option httplog
 option dontlognull
 option http-server-close
 option forwardfor except 127.0.0.0/8
 option redispatch
 retries 3
 timeout http-request 10s
 timeout queue 1m
 timeout connect 10s
 timeout client 1m
 timeout server 1m
 timeout http-keep-alive 10s
 timeout check 10s
 maxconn 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
 mode tcp
 bind *:6443
 option tcplog
 default_backend kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
 mode tcp
 balance roundrobin
 server master1 192.168.0.61:6443 check port 6443 inter 5000 fall 5
 server master2 192.168.0.62:6443 check port 6443 inter 5000 fall 5
 server master3 192.168.0.63:6443 check port 6443 inter 5000 fall 5

#---------------------------------------------------------------------
# collect haproxy statistics
#---------------------------------------------------------------------
listen stats
 bind *:1080
 stats auth admin:awesomePassword
 stats refresh 5s
 stats realm HAProxy\ Statistics
 stats uri /admin?stats
EOF
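
Before starting the service, the file can be validated with haproxy's built-in syntax check:

$ haproxy -c -f /etc/haproxy/haproxy.cfg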

3.3 Start the services

$ systemctl enable keepalived && systemctl start keepalived
$ systemctl enable haproxy && systemctl start haproxy
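
On node1 (the keepalived MASTER) the VIP should now be bound to the NIC; a quick check (adjust the interface name to your environment):

$ ip addr show ens192 | grep 192.168.0.60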

4 Deploy Kubernetes

4.1 Install packages

Run the following on all machines.

$ yum remove docker \
 docker-client \
 docker-client-latest \
 docker-common \
 docker-latest \
 docker-latest-logrotate \
 docker-logrotate \
 docker-selinux \
 docker-engine-selinux \
 docker-engine \
 docker-ce-cli
$ yum install -y yum-utils \
 device-mapper-persistent-data \
 lvm2
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum install docker-ce-18.06.1.ce-3.el7 -y
$ sed -i 's%ExecStart=/usr/bin/dockerd%ExecStart=/usr/bin/dockerd --graph=/opt/docker%g' /usr/lib/systemd/system/docker.service
$ systemctl enable docker && systemctl start docker
$ yum remove -y kubelet kubeadm kubectl
$ yum install -y kubelet-1.15.3 \
 kubeadm-1.15.3 \
 kubectl-1.15.3 \
 ipvsadm \
 ipset
$ systemctl enable kubelet
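
A quick sanity check that the intended versions were installed:

$ docker --version
$ kubeadm version -o short
$ kubelet --version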

4.2 Modify the initialization configuration

Run on master1.

$ cat > /root/kubeadm.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.61 # this machine's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.0.60:6443" # apiserver VIP (keepalived)
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # image registry mirror
kind: ClusterConfiguration
kubernetesVersion: v1.15.3 # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # Pod CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
EOF
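
Before pulling anything, the configuration can be sanity-checked by listing the images kubeadm resolves from it:

$ kubeadm config images list --config /root/kubeadm.yaml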

4.3 Pre-pull the images

Run on master1.

$ kubeadm config images pull --config kubeadm.yaml 
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

4.4 Initialize the cluster

Run on master1.

$ kubeadm init --config kubeadm.yaml 
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
 [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.0.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.0.51 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.51 192.168.0.60]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 60.006233 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

 kubeadm join 192.168.0.60:6443 --token abcdef.0123456789abcdef \
 --discovery-token-ca-cert-hash sha256:b406c35bbd87a502c195d3f5322a66fbbad3584f6181558842aae85bd1757538 \
 --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.60:6443 --token abcdef.0123456789abcdef \
 --discovery-token-ca-cert-hash sha256:b406c35bbd87a502c195d3f5322a66fbbad3584f6181558842aae85bd1757538 

kubeadm init performs the following main steps:

  • [init]: start initialization with the specified version
  • [preflight]: run pre-flight checks and download the required Docker images
  • [kubelet-start]: generate the kubelet config file /var/lib/kubelet/config.yaml; the kubelet cannot start without it, which is why the kubelet actually fails to start before initialization
  • [certs]: generate the certificates Kubernetes uses, stored under /etc/kubernetes/pki
  • [kubeconfig]: generate the kubeconfig files under /etc/kubernetes; the components use them to communicate with each other
  • [control-plane]: install the master components from the YAML manifests in /etc/kubernetes/manifests
  • [etcd]: install etcd from /etc/kubernetes/manifests/etcd.yaml
  • [wait-control-plane]: wait for the master components deployed by control-plane to start
  • [apiclient]: check the health of the master components
  • [upload-config]: store the configuration used in a ConfigMap
  • [kubelet]: configure the kubelets via a ConfigMap
  • [patchnode]: record CNI information on the Node as annotations
  • [mark-control-plane]: give the current node the master role label and the NoSchedule taint, so Pods are not scheduled onto master nodes by default
  • [bootstrap-token]: generate the token; record it, since kubeadm join uses it later when adding nodes to the cluster
  • [addons]: install the CoreDNS and kube-proxy add-ons

Other master nodes join the cluster with:

$ kubeadm join 192.168.0.60:6443 --token abcdef.0123456789abcdef \
 --discovery-token-ca-cert-hash sha256:b406c35bbd87a502c195d3f5322a66fbbad3584f6181558842aae85bd1757538 \
 --control-plane

Worker nodes join the cluster with:

$ kubeadm join 192.168.0.60:6443 --token abcdef.0123456789abcdef \
 --discovery-token-ca-cert-hash sha256:b406c35bbd87a502c195d3f5322a66fbbad3584f6181558842aae85bd1757538
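
Note that the bootstrap token above was created with a 24h TTL (see kubeadm.yaml), so it may have expired by the time a node joins. A fresh worker join command can be printed on master1 with:

$ kubeadm token create --print-join-command

For additional control-plane nodes, append --control-plane to the printed command; the required certificates are copied by hand in 4.5.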

4.5 Prepare the kubeconfig file for kubectl

Run on master1. By default, kubectl looks for a config file in the .kube directory under the invoking user's home directory. Here we copy the admin.conf generated by the [kubeconfig] step of the initialization to .kube/config.

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config

This file records the API Server address, so kubectl commands from now on connect to the API Server directly. Next, distribute the CA certificates, service-account keys, and admin.conf to the other masters:

$ for host in {master2,master3} ;do ssh "root"@$host "mkdir -p /etc/kubernetes/pki/etcd"
scp -r /etc/kubernetes/pki/ca.* "root"@$host:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/sa.* "root"@$host:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/front-proxy-ca.* "root"@$host:/etc/kubernetes/pki/
scp -r /etc/kubernetes/pki/etcd/ca.* "root"@$host:/etc/kubernetes/pki/etcd/
scp -r /etc/kubernetes/admin.conf "root"@$host:/etc/kubernetes/
done

4.6 Deploy the other masters

Run on master2 and master3:

$ kubeadm join 192.168.0.60:6443 --token abcdef.0123456789abcdef \
 --discovery-token-ca-cert-hash sha256:b406c35bbd87a502c195d3f5322a66fbbad3584f6181558842aae85bd1757538 \
 --control-plane
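
Since admin.conf was copied to every master in 4.5, kubectl can be set up on master2 and master3 the same way as on master1:

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config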

4.7 Deploy the worker nodes

Run on the node machines; note there is no --control-plane flag:

$ kubeadm join 192.168.0.60:6443 --token abcdef.0123456789abcdef \
 --discovery-token-ca-cert-hash sha256:b406c35bbd87a502c195d3f5322a66fbbad3584f6181558842aae85bd1757538

4.8 Deploy the Calico network plugin

$ curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml -O

If you are using the Pod CIDR 192.168.0.0/16, skip to the next step. Otherwise, replace 192.168.0.0/16 in the manifest with your own Pod CIDR; here that is 10.244.0.0/16, matching podSubnet in kubeadm.yaml:

$ sed -i -e "s?192.168.0.0/16?10.244.0.0/16?g" calico.yaml
$ kubectl apply -f calico.yaml
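
The calico-node and coredns pods should reach Running within a few minutes; their progress can be watched with:

$ kubectl -n kube-system get pods -w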

4.9 Check the nodes and cluster status

$ kubectl get nodes
$ kubectl get pod -A -o wide
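
As a final smoke test of the HA layer, stopping keepalived on node1 should move the VIP to node2 without interrupting API access (interface name per your environment):

$ systemctl stop keepalived # on node1
$ ip addr show ens192 | grep 192.168.0.60 # on node2: the VIP should now be here
$ kubectl get nodes # on any master: the API is still reachable through the VIP
$ systemctl start keepalived # restore node1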
