
Deploying a Kubernetes Cluster from Binaries

xiyangw 2023-09-28 15:30

Environment Preparation

kube-apiserver:

  • node-local nginx for high availability;
  • insecure port 8080 and anonymous access disabled;
  • serves https on secure port 6443;
  • strict authentication and authorization (x509, token, RBAC);
  • bootstrap token authentication enabled, supporting kubelet TLS bootstrapping;
  • https access to kubelet and etcd, so communication is encrypted;

kube-controller-manager:

  • 3-node high availability;
  • insecure port disabled; serves https on secure port 10257;
  • accesses the apiserver's secure port via kubeconfig;
  • automatically approves kubelet certificate signing requests (CSRs) and rotates certificates on expiry;
  • each controller uses its own ServiceAccount to access the apiserver;

kube-scheduler:

  • 3-node high availability;
  • accesses the apiserver's secure port via kubeconfig;

kubelet:

  • bootstrap tokens are created dynamically with kubeadm instead of being statically configured in the apiserver;
  • client and server certificates are generated automatically via TLS bootstrapping and rotated on expiry;
  • main parameters are configured in a KubeletConfiguration-type file;
  • read-only port disabled; serves https on secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access;
  • accesses the apiserver's secure port via kubeconfig;

kube-proxy:

  • accesses the apiserver's secure port via kubeconfig;
  • main parameters are configured in a KubeProxyConfiguration-type file;
  • uses the ipvs proxy mode;

Component Versions

 Component    Version   Link
 rocky linux  9.2       official site
 kubernetes   v1.27.3   GitHub
 etcd         v3.5.9    GitHub
 containerd   1.7.2     GitHub
 calico       v3.26.1   GitHub
 nginx        1.24.0    official site

The etcd cluster, the master components, and the worker components are co-located on three machines.

 rocky11 192.168.10.11 master node
 rocky12 192.168.10.12 master node
 rocky13 192.168.10.13 master node
 vip 192.168.10.100
 pod 172.30.0.0/16
 svc 10.254.0.0/16

Upgrading the Kernel

#Reference: http://elrepo.org/tiki/HomePage

grubby --info=ALL #list all installed kernels
 #1. Import the GPG key
 rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
 
 #2. Install the elrepo repository
 dnf install https://www.elrepo.org/elrepo-release-9.el9.elrepo.noarch.rpm #EL9
 
 #3. Load the elrepo-kernel metadata
 dnf --disablerepo=\* --enablerepo=elrepo-kernel repolist
 
 #4. List the available kernel packages
 dnf --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
 
 #5. Install the kernel
 dnf --enablerepo=elrepo-kernel install kernel-lt kernel-lt-devel -y
 
 #6. Check the installed packages
 rpm -qa|grep kernel-lt
 
 grubby --default-kernel #show the default kernel
 grubby --info=ALL #list all installed kernels
 
 
 #7. Change the boot order
 grubby --set-default /boot/vmlinuz-6.1.37-1.el9.x86_64
 
 grubby --remove-kernel=<kernel path> #remove kernels you no longer need
 
 #8. Reboot
 reboot 
 
 #9. Check the kernel version
 uname -r

System Settings

 #1. Install dependencies
 dnf install -y epel-release
 dnf install -y gcc gcc-c++ net-tools lrzsz vim telnet make psmisc \
  patch socat conntrack ipset ipvsadm sysstat libseccomp chrony perl curl wget git
  
 #2. Disable the firewall
 systemctl disable --now firewalld
 
 #3. Disable selinux
 getenforce #check selinux status
 setenforce 0 #disable selinux for the current session
 sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config #make it permanent
 
 #4. Disable swap
 swapoff -a
 sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
 
 
 #5. Raise file-handle limits
 cat <<EOF >> /etc/security/limits.conf
 * soft nofile 65536
 * hard nofile 65536
 * soft nproc 65536
 * hard nproc 65536
 * soft memlock unlimited
 * hard memlock unlimited
 EOF
 
 
 #6. Time synchronization
 dnf install -y chrony
 vim /etc/chrony.conf
 server <your NTP server> iburst #point at your own time server
 
 systemctl enable --now chronyd #start and enable at boot
 
 timedatectl status #check sync status
 timedatectl set-timezone Asia/Shanghai #adjust the system TimeZone if needed
 timedatectl set-local-rtc 0 #keep the hardware clock in UTC
 
 #restart services that depend on the system time
 systemctl restart rsyslog 
 systemctl restart crond
 
 #7. Kernel parameters
 cat > /etc/sysctl.d/kubernetes.conf <<EOF
 net.ipv4.ip_forward=1
 net.bridge.bridge-nf-call-iptables=1
 net.bridge.bridge-nf-call-ip6tables=1
 net.ipv4.neigh.default.gc_thresh1=1024
 net.ipv4.neigh.default.gc_thresh2=2048
 net.ipv4.neigh.default.gc_thresh3=4096
 vm.swappiness=0
 vm.panic_on_oom=0
 vm.overcommit_memory=1
 fs.inotify.max_user_instances=8192
 fs.inotify.max_user_watches=1048576
 fs.file-max=52706963
 fs.nr_open=52706963
 net.ipv6.conf.all.disable_ipv6=1
 net.netfilter.nf_conntrack_max=2310720
 EOF
 
 sysctl -p /etc/sysctl.d/kubernetes.conf
 
 #8. Enable ipvs
 #load the kernel modules now and on every boot
 cat <<EOF> /etc/modules-load.d/ipvs.conf 
 ip_vs
 ip_vs_rr
 ip_vs_wrr
 ip_vs_sh
 nf_conntrack
 br_netfilter 
 EOF
 
 #verify after reboot
 lsmod | grep -e ip_vs -e nf_conntrack

Preparing Certificates

Use cfssl to generate certificates for all components: generate them on one machine, then copy them to the hosts where each component is deployed.

Every component needs certificates, and all of them are signed by the CA, so first generate the CA certificate and the signing configuration used for all later issuance. (GitHub)
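The CA-then-sign flow that cfssl performs below can be sketched with plain openssl. This is an illustrative demo only; the file names and subjects are throwaways, not the files used later in this guide:

```shell
# Sketch of the CA workflow: create a CA, sign a component CSR with it,
# then verify the issued certificate against the CA. Demo files only.
workdir=$(mktemp -d)
cd "$workdir"

# 1. Self-signed CA (the role of ca.pem / ca-key.pem from `cfssl gencert -initca`)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -subj "/CN=kubernetes/O=Kubernetes" -days 36500 2>/dev/null

# 2. A component key and CSR (the role played by the *-csr.json files)
openssl req -newkey rsa:2048 -nodes -keyout demo-key.pem -out demo.csr \
  -subj "/CN=etcd/O=Kubernetes" 2>/dev/null

# 3. The CA signs the CSR (the role of `cfssl gencert -ca=... -profile=kubernetes`)
openssl x509 -req -in demo.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out demo.pem -days 36500 2>/dev/null

# 4. Anyone holding ca.pem can verify the issued certificate
openssl verify -CAfile ca.pem demo.pem
```

This is exactly the trust model the cluster relies on: components only need ca.pem to validate each other's certificates.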

 #1. Download
 wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64 -O /usr/local/sbin/cfssl
 wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64 -O /usr/local/sbin/cfssljson
 wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl-certinfo_1.6.4_linux_amd64 -O /usr/local/sbin/cfssl-certinfo
 
 chmod +x /usr/local/sbin/*
 
 #2. CA CSR file
 cd /opt/k8s
 cat <<EOF> ca-csr.json
 {
   "CN": "kubernetes",
   "key": {
     "algo": "rsa",
     "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "ShangHai",
       "L": "ShangHai",
       "O": "Kubernetes",
       "OU": "System"
     }
   ],
   "ca": {
     "expiry": "876000h"
   }
 }
 EOF
 
 CN: the apiserver extracts this field from the certificate as the request's user name; browsers use it to verify a site's legitimacy;
 O: the apiserver extracts this field as the group the requesting user belongs to;
 kube-apiserver uses the extracted User and Group as the identity for RBAC authorization.
 
 
 #3. Signing configuration
 cat <<EOF> ca-config.json 
 {
   "signing": {
     "default": {
       "expiry": "87600h"
     },
     "profiles": {
       "kubernetes": {
         "usages": [
           "signing",
           "key encipherment",
           "server auth",
           "client auth"
         ],
         "expiry": "876000h"
       }
     }
   }
 }
 EOF
 
 signing: the certificate can be used to sign other certificates (the generated ca.pem has CA=TRUE);
 server auth: clients can use this certificate to verify the certificate a server presents;
 client auth: servers can use this certificate to verify the certificate a client presents;
 expiry: 876000h: certificate validity is set to 100 years;
 
 
 #4. Generate the CA certificate; all later component certificates are signed by it
 cfssl gencert -initca ca-csr.json | cfssljson -bare ca
 #files after generation
 ca-config.json
 ca-csr.json
 ca-key.pem
 ca.csr
 ca.pem
 
 #check the validity period: 100 years
 openssl x509 -in ca.pem -noout -text | grep 'Not'
   Not Before: Jul  4 14:08:00 2023 GMT
   Not After : Jun 10 14:08:00 2123 GMT
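As a quick sanity check on the expiry values used above, 876000 hours works out to 100 years:

```shell
# 876000h / 24 hours per day / 365 days per year = 100 years
hours=876000
years=$(( hours / 24 / 365 ))
echo "$years years"   # 100 years
```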

Deploying etcd

etcd is a distributed, reliable key-value store. Beyond storage, it also provides shared configuration and service discovery.

etcd use cases

etcd is most commonly used for service discovery, which solves one of the most common problems in distributed systems: how processes or services in the same cluster find each other and establish connections. Typical etcd use cases include distributed configuration management, service registration and discovery, leader election, application scheduling, distributed queues, and distributed locks.

How etcd guarantees consistency

etcd uses the Raft protocol to keep the state of the cluster's nodes consistent. Each etcd node maintains a state machine, and at any moment there is at most one valid leader. The leader handles all client write operations, and Raft guarantees that the changes a write makes to the state machine are reliably replicated to the other nodes.
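Raft commits a write only after a majority (quorum) of members has acknowledged it, which is why 3- and 5-node clusters are the usual sizes. The arithmetic can be sketched as:

```shell
# quorum = floor(n/2) + 1; the cluster survives n - quorum member failures
quorum()          { echo $(( $1 / 2 + 1 )); }
fault_tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

quorum 3           # 2 -> this guide's 3-node cluster needs 2 healthy members
fault_tolerance 3  # 1 -> and tolerates the loss of exactly 1
quorum 5           # 3
fault_tolerance 5  # 2
```

Note that a 4-node cluster tolerates no more failures than a 3-node one (quorum 3, tolerance 1), so even member counts buy nothing.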

Preparing certificates

 #1. etcd CSR file
 cat <<EOF> etcd-csr.json
 {
   "CN": "etcd",
   "hosts": [
     "127.0.0.1",
     "192.168.10.11",
     "192.168.10.12",
     "192.168.10.13",
     "192.168.10.100"
   ],
   "key": {
     "algo": "rsa",
     "size": 2048
   },
   "names": [
     {
       "C": "CN",
       "ST": "ShangHai",
       "L": "ShangHai",
       "O": "Kubernetes",
       "OU": "System"
     }
   ]
 }
 EOF
 
 #the IPs in the hosts field are the machines authorized to use the etcd certificate
 
 #2. Generate the certificate and private key
 cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
 #files now present:
 ca-config.json
 ca-csr.json
 ca-key.pem
 ca.csr
 ca.pem
 etcd-csr.json
 etcd-key.pem # new
 etcd.csr
 etcd.pem # new
 
 #3. Copy the certificates to every node
 mkdir -p /etc/kubernetes/ssl && cp *.pem /etc/kubernetes/ssl/
 
 ssh -n 192.168.10.12 "mkdir -p /etc/kubernetes/ssl && exit"
 ssh -n 192.168.10.13 "mkdir -p /etc/kubernetes/ssl && exit"
 
 scp -r /etc/kubernetes/ssl/*.pem 192.168.10.12:/etc/kubernetes/ssl/
 scp -r /etc/kubernetes/ssl/*.pem 192.168.10.13:/etc/kubernetes/ssl/

Deploying the service

 #1. Unpack the binaries
#download: https://github.com/etcd-io/etcd/releases
 tar -zxvf etcd-v3.5.9-linux-amd64.tar.gz 
 mv etcd-v3.5.9-linux-amd64/etcd* /usr/local/sbin/
 
 scp -r /usr/local/sbin/etcd* 192.168.10.12:/usr/local/sbin
 scp -r /usr/local/sbin/etcd* 192.168.10.13:/usr/local/sbin
 
 mkdir -p /app/etcd
 ssh -n 192.168.10.12 "mkdir -p /app/etcd && exit"
 ssh -n 192.168.10.13 "mkdir -p /app/etcd && exit"
 
 #2. systemd unit
 cat <<EOF> /etc/systemd/system/etcd.service
 [Unit]
 Description=Etcd Server
 After=network.target
 After=network-online.target
 Wants=network-online.target
 Documentation=https://github.com/coreos
 
 [Service]
 Type=notify
 WorkingDirectory=/app/etcd/
 ExecStart=/usr/local/sbin/etcd \\
  --name=rocky11 \\
  --data-dir=/app/etcd \\
  --cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \\
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-cluster-token=etcd-cluster-0 \\
  --listen-peer-urls=https://192.168.10.11:2380 \\
  --advertise-client-urls=https://192.168.10.11:2379 \\
  --initial-advertise-peer-urls=https://192.168.10.11:2380 \\
  --listen-client-urls=https://192.168.10.11:2379,https://127.0.0.1:2379 \\
  --initial-cluster=rocky11=https://192.168.10.11:2380,rocky12=https://192.168.10.12:2380,rocky13=https://192.168.10.13:2380 \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
 Restart=on-failure
 RestartSec=5
 LimitNOFILE=65536
 
 [Install]
 WantedBy=multi-user.target
 EOF
 
 #start the service
 systemctl enable --now etcd

Verifying etcd

 #1. Check member status
 etcdctl member list \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem
 ################output################
 1af68d968c7e3f22, started, rocky12, https://192.168.10.12:2380, https://192.168.10.12:2379, false
 7508c5fadccb39e2, started, rocky11, https://192.168.10.11:2380, https://192.168.10.11:2379, false
 e8d9a97b17f26476, started, rocky13, https://192.168.10.13:2380, https://192.168.10.13:2379, false
 
 #2. Check endpoint health
 etcdctl endpoint health  --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379"  --cacert=/etc/kubernetes/ssl/ca.pem  --cert=/etc/kubernetes/ssl/etcd.pem  --key=/etc/kubernetes/ssl/etcd-key.pem
 ###########output#############
 https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 22.307663ms
 https://192.168.10.12:2379 is healthy: successfully committed proposal: took = 23.213301ms
 https://192.168.10.13:2379 is healthy: successfully committed proposal: took = 31.741529ms
 
 #3. Find the leader
 etcdctl -w table --cacert=/etc/kubernetes/ssl/ca.pem \
   --cert=/etc/kubernetes/ssl/etcd.pem \
   --key=/etc/kubernetes/ssl/etcd-key.pem \
   --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" endpoint status
 #########output
 +----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
 |          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
 +----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
 | https://192.168.10.11:2379 | 7508c5fadccb39e2 |   3.5.9 |   20 kB |      true |      false |         2 |         21 |                 21 |        |
 | https://192.168.10.12:2379 | 1af68d968c7e3f22 |   3.5.9 |   20 kB |     false |      false |         2 |         21 |                 21 |        |
 | https://192.168.10.13:2379 | e8d9a97b17f26476 |   3.5.9 |   20 kB |     false |      false |         2 |         21 |                 21 |        |
 +----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

Deploying the Load Balancer

kube-apiserver is stateless, so access goes through an nginx proxy, which keeps the service available.

nginx acts as a reverse proxy with all kube-apiserver instances as its backends, providing health checks and load balancing; keepalived provides the VIP through which kube-apiserver is reached.

nginx listens on port 8443, which must differ from kube-apiserver's 6443 to avoid a conflict.

keepalived periodically checks the local nginx process. If nginx is detected as unhealthy, a new master election is triggered and the VIP floats to the newly elected node, keeping the VIP highly available. All components (kubectl, apiserver, controller-manager, scheduler, etc.) access kube-apiserver through the VIP on nginx's port 8443.

Deploying nginx

 #1. Install nginx
 yum install pcre zlib openssl nginx nginx-mod-stream -y
 
 #2.nginx.conf
 stream {
   upstream apiserver {
     hash $remote_addr consistent;
     server 192.168.10.11:6443 max_fails=3 fail_timeout=30s;
     server 192.168.10.12:6443 max_fails=3 fail_timeout=30s;
     server 192.168.10.13:6443 max_fails=3 fail_timeout=30s;
   }
   server {
     listen 8443;
     proxy_connect_timeout 10s;
     proxy_timeout 120s;
     proxy_pass apiserver;
   }
 }
 
 #3. Start
 systemctl enable --now nginx
 
 netstat -lantup|grep nginx
 tcp   0   0 0.0.0.0:8443   0.0.0.0:*      LISTEN      26961/nginx: master 

Deploying keepalived

#1. Install
yum install keepalived -y

#2. Configure
cat <<EOF> /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
  router_id apiserver
}

#custom health-check script
vrrp_script chk_nginx {
  script "/etc/keepalived/nginx_check.sh"
  interval 10
  weight 0
}

vrrp_instance VI_1 {
  state MASTER
  interface ens160
  virtual_router_id 100
  priority 100
  advert_int 1
  mcast_src_ip 192.168.10.11
  authentication {
    auth_type PASS
    auth_pass kube
  }
  virtual_ipaddress {
    192.168.10.100
  }
  track_script {
    chk_nginx
  }
}
EOF

#health-check script
cat <<EOF> /etc/keepalived/nginx_check.sh
#!/bin/bash
pid=\`ps -ef|grep nginx|grep -v -E "grep|check"|wc -l\`
if [ \$pid -eq 0 ];then
  systemctl start nginx
  sleep 2
  if [ \`ps -ef|grep nginx|grep -v -E "grep|check"|wc -l\` -eq 0 ];then
    killall keepalived
  fi
fi
EOF

chmod 755 /etc/keepalived/nginx_check.sh

#3. Start
systemctl --now enable keepalived && systemctl status keepalived

ip add
1: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:31:2f:cf brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.10.13/24 brd 192.168.10.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet 192.168.10.100/32 scope global ens160  ####the VIP
       valid_lft forever preferred_lft forever

Deploying the Masters

The certificate work for the Master nodes only needs to be done once; copy the generated certificates to every Master node for reuse.

A kubeconfig holds the configuration each component (and user) needs to access the apiserver: the apiserver address, the client certificate, the CA certificate, and so on.

The startup flags of the k8s components are documented in the official reference.

Download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md

#1. Unpack the archive
tar -zxvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin

#2. Copy the binaries
cp kube-apiserver kube-aggregator \
 kube-controller-manager kube-scheduler \
 kube-proxy kubeadm kubectl kubelet /usr/local/sbin/

#copy the binaries to all k8s nodes, masters and workers alike: /usr/local/sbin
scp /usr/local/sbin/kube* 192.168.10.12:/usr/local/sbin
scp /usr/local/sbin/kube* 192.168.10.13:/usr/local/sbin

kubectl

kubectl is the command-line management tool for a kubernetes cluster. By default it reads the kube-apiserver address, certificate, user name, and so on from ~/.kube/config; without that configuration, kubectl commands may fail.

kube-apiserver extracts the certificate's CN field as the user name. Here the user is called admin, but that is only a label: is admin a built-in superuser name? No. kube-apiserver does, however, ship a built-in ClusterRole with full privileges, called cluster-admin, plus a built-in ClusterRoleBinding that binds the cluster-admin ClusterRole to the system:masters group. So a certificate we issue gains full privileges simply by putting its user in the system:masters group.

~/.kube/config only needs to be generated once and then copied to all nodes.
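To see exactly which User and Group the apiserver would extract, the subject of any client certificate can be inspected with openssl. The throwaway certificate below only mimics the subject of the admin certificate generated in the next step:

```shell
# Generate a throwaway cert with the same subject the admin CSR below uses,
# then read its subject the way kube-apiserver does (CN -> User, O -> Group).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/demo-key.pem" \
  -out "$tmp/demo.pem" -subj "/CN=admin/O=system:masters" -days 365 2>/dev/null

# the subject line shows CN (user) and O (group)
subject=$(openssl x509 -in "$tmp/demo.pem" -noout -subject)
echo "$subject"
```

The same `openssl x509 -noout -subject` command works on the real admin.pem once it exists.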

#1. Issue a certificate with full privileges
cat <<EOF> admin-csr.json 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

#O: system:masters — when kube-apiserver receives this certificate it sets the request's Group to system:masters;
#the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants access to every API
#this certificate is only used by kubectl as a client certificate, so no hosts field is set

#2. Generate the certificate and private key
cfssl gencert -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 admin-csr.json | cfssljson -bare admin
 
#this warning can be ignored
This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements")


#generated files
admin-key.pem #admin private key
admin.pem #admin certificate

cp admin*.pem /etc/kubernetes/ssl
scp admin*.pem 192.168.10.12:/etc/kubernetes/ssl
scp admin*.pem 192.168.10.13:/etc/kubernetes/ssl

#3. Create the kubeconfig for kubectl
#the apiserver VIP is 192.168.10.100:8443

#set cluster parameters
kubectl config set-cluster kubernetes \
 --certificate-authority=ca.pem \
 --embed-certs=true \
 --server=https://192.168.10.100:8443 \
 --kubeconfig=kubectl.kubeconfig

#set client credentials
kubectl config set-credentials admin \
 --client-certificate=admin.pem \
 --client-key=admin-key.pem \
 --embed-certs=true \
 --kubeconfig=kubectl.kubeconfig

#set the context
kubectl config set-context kubernetes \
 --cluster=kubernetes \
 --user=admin \
 --kubeconfig=kubectl.kubeconfig

#set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

#generated file: kubectl.kubeconfig
#distribute it to every machine that runs kubectl -- copied as ~/.kube/config
mkdir -p /root/.kube/

cp -rp kubectl.kubeconfig /root/.kube/config
scp kubectl.kubeconfig 192.168.10.12:/root/.kube/config
scp kubectl.kubeconfig 192.168.10.13:/root/.kube/config
  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate and --client-key: the admin certificate and private key just generated, used for https communication with kube-apiserver;
  • --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig (otherwise only the file paths are written, and the certificate files would have to be copied separately whenever the kubeconfig is moved to another machine);
  • --server: the kube-apiserver address; here it points at the VIP;

apiserver

kube-apiserver is the access hub of k8s: every component and every user's kubectl operation goes through it, and TLS certificate authentication is normally enabled. The certificate must include every address at which kube-apiserver might be reached, or clients will fail to validate it. Pods inside the cluster usually reach kube-apiserver through its Service name; from outside the cluster it is reached via the Master node IPs, the CLUSTER IP, and the Master load-balancer address.

Preparing certificates

#1. CSR file; the hosts field lists the IPs and domain names authorized to use this certificate
cat <<EOF> apiserver-csr.json 
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.254.0.1",
    "192.168.10.11",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.100",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

#2. Generate the certificate and key
cfssl gencert -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 apiserver-csr.json | cfssljson -bare apiserver

#3. Two important generated files
apiserver-key.pem #certificate private key
apiserver.pem #certificate

#4. Copy the apiserver certificate and key to every master node
cp apiserver*.pem /etc/kubernetes/ssl
scp apiserver*.pem 192.168.10.12:/etc/kubernetes/ssl/
scp apiserver*.pem 192.168.10.13:/etc/kubernetes/ssl/

#5. Encryption-at-rest configuration
cat <<EOF> /etc/kubernetes/encryption-config.yaml 
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF

scp /etc/kubernetes/encryption-config.yaml 192.168.10.12:/etc/kubernetes/
scp /etc/kubernetes/encryption-config.yaml 192.168.10.13:/etc/kubernetes/
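The aescbc provider above requires a 32-byte key. The `head -c 32 /dev/urandom | base64` expression embedded in the heredoc yields a 44-character base64 string that decodes back to exactly 32 bytes, which can be verified before relying on it:

```shell
# generate an aescbc key the same way the heredoc above does, then verify it
secret=$(head -c 32 /dev/urandom | base64)

echo "${#secret}"                          # 44 characters of base64
printf '%s' "$secret" | base64 -d | wc -c  # 32 raw bytes
```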

#6. Certificate used by metrics-server
cat <<EOF> metrics-server-csr.json 
{
  "CN": "aggregator",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

#7. Generate the certificate and key
cfssl gencert -ca=ca.pem \
 -ca-key=ca-key.pem  \
 -config=ca-config.json  \
 -profile=kubernetes \
 metrics-server-csr.json | cfssljson -bare metrics-server
 
cp metrics-server*.pem /etc/kubernetes/ssl
scp metrics-server*.pem 192.168.10.12:/etc/kubernetes/ssl/
scp metrics-server*.pem 192.168.10.13:/etc/kubernetes/ssl/

Deploying the service

#1. systemd unit
cat <<EOF> /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes APIServer
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/sbin/kube-apiserver \\
 --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
 --anonymous-auth=false \\
 --secure-port=6443 \\
 --bind-address=0.0.0.0 \\
 --advertise-address=192.168.10.11 \\
 --authorization-mode=Node,RBAC \\
 --runtime-config=api/all=true \\
 --enable-bootstrap-token-auth \\
 --max-mutating-requests-inflight=2000 \\
 --max-requests-inflight=4000 \\
 --delete-collection-workers=2 \\
 --service-node-port-range=30000-40000 \\
 --service-cluster-ip-range=10.254.0.0/16 \\
 --service-account-issuer=api \\
 --service-account-key-file=/etc/kubernetes/ssl/ca.pem \\
 --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
 --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem \\
 --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem \\
 --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
 --etcd-certfile=/etc/kubernetes/ssl/apiserver.pem \\
 --etcd-keyfile=/etc/kubernetes/ssl/apiserver-key.pem \\
 --etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \\
 --kubelet-timeout=10s \\
 --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \\
 --kubelet-client-key=/etc/kubernetes/ssl/apiserver-key.pem \\
 --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver.pem \\
 --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \\
 --proxy-client-cert-file=/etc/kubernetes/ssl/metrics-server.pem \\
 --proxy-client-key-file=/etc/kubernetes/ssl/metrics-server-key.pem \\
 --requestheader-allowed-names="" \\
 --requestheader-group-headers=X-Remote-Group \\
 --requestheader-username-headers=X-Remote-User \\
 --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
 --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --allow-privileged=true \\
 --apiserver-count=3 \\
 --audit-log-maxage=30 \\
 --audit-log-maxbackup=3 \\
 --audit-log-maxsize=100 \\
 --audit-log-truncate-enabled \\
 --audit-log-path=/var/log/kubernetes/kube-apiserver/apiserver.log \\
 --event-ttl=168h \\
 --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#if kube-proxy is not running on the apiserver machines, also add the --enable-aggregator-routing=true flag

mkdir -p /var/log/kubernetes/kube-apiserver

#start kube-apiserver
systemctl daemon-reload \
&& systemctl start kube-apiserver \
&& systemctl enable kube-apiserver \
&& systemctl status kube-apiserver

#2. Check
netstat -lntup | grep kube-apiserve
####output
tcp6  0  0  :::6443  :::*  LISTEN  850/kube-apiserver

kubectl cluster-info
####output
Kubernetes control plane is running at https://192.168.10.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

###any response means it is working
curl -k https://127.0.0.1:6443/

Authorizing apiserver access to kubelet

In some situations kube-apiserver also calls kubelet, e.g. to fetch metrics, read container logs, or exec into a container. In that direction kubelet is the server and kube-apiserver the client: kubelet serves https, and kube-apiserver authenticates with its certificate, but its calls must also be authorized to succeed. We therefore create an RBAC rule that authorizes kube-apiserver to access kubelet.

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - proxy
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

#check
kubectl get clusterrole system:kube-apiserver-to-kubelet
kubectl get ClusterRoleBinding system:kube-apiserver
#or use describe for details

#alternatively, run the following command
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

controller-manager

The cluster runs 3 nodes; after startup, a competitive leader election selects one leader while the other nodes block. When the leader becomes unavailable, the blocked nodes hold a new election to produce a new leader, keeping the service available.
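The election behavior described above (one active leader, the others waiting for the lock to free up) can be mimicked locally with an exclusive file lock. This is only an analogy; kube-controller-manager actually elects via a Lease object in the apiserver, not a local file:

```shell
# First contender takes the lock and becomes leader; a second contender
# trying while the lock is held ends up in standby.
lockfile=$(mktemp)

# contender 1: acquire the lock on fd 9
exec 9>"$lockfile"
flock -n 9 && state1="leader"

# contender 2: separate fd on the same lock file; acquisition fails
state2=$( ( flock -n 8 && echo "leader" || echo "standby" ) 8>"$lockfile" )

echo "$state1 / $state2"   # leader / standby
```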

Preparing certificates

#1. CSR file
cat <<EOF> kube-controller-manager-csr.json 
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.10.11",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.100"
  ],
  "names": [
    {
    "C": "CN",
    "ST": "ShangHai",
    "L": "ShangHai",
    "O": "system:kube-controller-manager",
    "OU": "System"
    }
  ]
}
EOF

#hosts lists all kube-controller-manager node IPs
#CN and O are both system:kube-controller-manager
#kubernetes' built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

#2. Generate the certificate
cfssl gencert -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

#3. Two important files
kube-controller-manager-key.pem #kube-controller-manager private key
kube-controller-manager.pem #kube-controller-manager certificate
#copy to every master node
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
scp kube-controller-manager*.pem 192.168.10.12:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem 192.168.10.13:/etc/kubernetes/ssl/

#the apiserver has multiple instances behind the VIP and the nginx proxy port: https://192.168.10.100:8443
#4. Create the kubeconfig
kubectl config set-cluster kubernetes \
 --certificate-authority=ca.pem \
 --embed-certs=true \
 --server=https://192.168.10.100:8443 \
 --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
 --client-certificate=kube-controller-manager.pem \
 --client-key=kube-controller-manager-key.pem \
 --embed-certs=true \
 --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
 --cluster=kubernetes \
 --user=system:kube-controller-manager \
 --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

#generated file: kube-controller-manager.kubeconfig -- copy to every master node
cp kube-controller-manager.kubeconfig /etc/kubernetes/
scp kube-controller-manager.kubeconfig 192.168.10.12:/etc/kubernetes/
scp kube-controller-manager.kubeconfig 192.168.10.13:/etc/kubernetes/

Deploying the service

#1. systemd unit
cat <<EOF> /etc/systemd/system/kube-controller-manager.service  
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-controller-manager \\
 --bind-address=127.0.0.1 \\
 --service-cluster-ip-range=10.254.0.0/16 \\
 --master=https://192.168.10.100:8443 \\
 --concurrent-service-syncs=2 \\
 --concurrent-deployment-syncs=10 \\
 --concurrent-gc-syncs=30 \\
 --controllers=*,bootstrapsigner,tokencleaner \\
 --cluster-cidr=172.30.0.0/16 \\
 --cluster-name=kubernetes \\
 --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
 --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
 --cluster-signing-duration=876000h \\
 --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
 --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \\
 --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \\
 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
 --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
 --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
 --requestheader-allowed-names="" \\
 --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
 --requestheader-group-headers=X-Remote-Group \\
 --requestheader-username-headers=X-Remote-User \\
 --use-service-account-credentials=true \\
 --feature-gates=RotateKubeletServerCertificate=true \\
 --horizontal-pod-autoscaler-sync-period=10s \\
 --kube-api-qps=1000 \\
 --kube-api-burst=2000 \\
 --leader-elect=true \\
 --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#2. Start kube-controller-manager
systemctl daemon-reload \
&& systemctl start kube-controller-manager \
&& systemctl enable kube-controller-manager \
&& systemctl status kube-controller-manager

#3. Check
netstat -lantup|grep kube-control
tcp6 0 0 :::10252 :::* LISTEN 9100/kube-controlle 
tcp6 0 0 :::10257 :::* LISTEN 9100/kube-controlle 

scheduler

The cluster runs 3 nodes; after startup, a competitive leader election selects one leader while the other nodes block. When the leader becomes unavailable, the remaining nodes hold a new election to produce a new leader, keeping the service available.

Preparing certificates

#1. CSR file
cat <<EOF> kube-scheduler-csr.json 
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.10.11",
    "192.168.10.12",
    "192.168.10.13",
    "192.168.10.100"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "system:kube-scheduler",
      "OU": "System"
    }
  ]
}
EOF

#hosts: lists all kube-scheduler node IPs;
#CN and O: both system:kube-scheduler;
#kubernetes' built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

#2. Generate the certificate
cfssl gencert -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 kube-scheduler-csr.json | cfssljson -bare kube-scheduler

#3. Two important generated files
kube-scheduler-key.pem #kube-scheduler private key
kube-scheduler.pem #kube-scheduler certificate
##copy to every master node
cp kube-scheduler*.pem /etc/kubernetes/ssl/
scp kube-scheduler*.pem 192.168.10.12:/etc/kubernetes/ssl/
scp kube-scheduler*.pem 192.168.10.13:/etc/kubernetes/ssl/

#apiserver https://192.168.10.100:8443
#4. Create the kubeconfig
kubectl config set-cluster kubernetes \
 --certificate-authority=ca.pem \
 --embed-certs=true \
 --server=https://192.168.10.100:8443 \
 --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
 --client-certificate=kube-scheduler.pem \
 --client-key=kube-scheduler-key.pem \
 --embed-certs=true \
 --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
 --cluster=kubernetes \
 --user=system:kube-scheduler \
 --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

#5. Distribute the generated kube-scheduler.kubeconfig
cp kube-scheduler.kubeconfig /etc/kubernetes/
scp kube-scheduler.kubeconfig 192.168.10.12:/etc/kubernetes/
scp kube-scheduler.kubeconfig 192.168.10.13:/etc/kubernetes/

Deploying the service

#1. systemd unit
cat <<EOF> /etc/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-scheduler \\
 --bind-address=127.0.0.1 \\
 --kube-api-burst=200 \\
 --kube-api-qps=100 \\
 --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
 --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
 --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
 --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --tls-cert-file=/etc/kubernetes/ssl/kube-scheduler.pem \\
 --tls-private-key-file=/etc/kubernetes/ssl/kube-scheduler-key.pem \\
 --requestheader-allowed-names="" \\
 --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \\
 --requestheader-extra-headers-prefix="X-Remote-Extra-" \\
 --requestheader-group-headers=X-Remote-Group \\
 --requestheader-username-headers=X-Remote-User \\
 --leader-elect=true \\
 --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#2. Start the service
systemctl daemon-reload \
&& systemctl enable kube-scheduler \
&& systemctl start kube-scheduler \
&& systemctl status kube-scheduler

#3. Check
netstat -lantup|grep kube-schedule
tcp  0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 9495/kube-scheduler 
tcp6 0 0 :::10259        :::*      LISTEN 9495/kube-scheduler

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"} 

Deploying the Workers

Worker nodes mainly run kubelet, which manages and runs the workloads, and kube-proxy, which implements Service communication and load balancing. (Master nodes can also be deployed as special Worker nodes to host critical services.)

containerd

kubernetes在1.24版本之后就要抛弃docker-shim组件,容器运行时也是从docker转换到了containerd

containerd implements the Kubernetes Container Runtime Interface (CRI) and provides the core container-runtime features such as image and container management; compared with dockerd it is simpler, more robust, and more portable.
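If the cri-tools package is also installed (hypothetical here, since this guide itself only installs nerdctl), its crictl debugging CLI can be pointed at containerd's CRI socket with a minimal /etc/crictl.yaml:

```yaml
# /etc/crictl.yaml: point crictl at containerd's CRI socket
# (only needed if cri-tools/crictl is installed; nerdctl does not use this file)
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
```

With this in place, crictl ps and crictl images query containerd directly, which helps when debugging the kubelet-to-containerd path.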

#1.Download the release tarball, then extract it and copy the binaries to /usr/local/sbin
wget https://github.com/containerd/nerdctl/releases/download/v1.4.0/nerdctl-full-1.4.0-linux-amd64.tar.gz

#2.Configure the systemd unit
cat <<EOF> /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/local/sbin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
Type=notify
TasksMax=infinity

[Install]
WantedBy=multi-user.target
EOF

#3.Generate config.toml
mkdir -p /etc/containerd
##Generate the default configuration, then edit it as shown below
containerd config default > /etc/containerd/config.toml

#4.Edit config.toml
sandbox_image = "registry.k8s.io/pause:3.8"
####If this image cannot be pulled, switch to the Aliyun mirror####
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"

sed -i 's/registry.k8s.io\/pause:3.8/registry.aliyuncs.com\/google_containers\/pause:3.8/' /etc/containerd/config.toml

#5.Start the service
systemctl daemon-reload \
&& systemctl start containerd \
&& systemctl enable containerd \
&& systemctl status containerd

nerdctl

nerdctl provides a Docker-compatible CLI, so local images and containers can be managed just like with the docker command.

Copy the binary to /usr/local/sbin and it is ready to use; it talks to containerd on unix:///run/containerd/containerd.sock.

kubelet

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers the node with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage.

kubeconfig

The bootstrap token is used by kubelet to request certificate signing automatically. It is stored as a Secret, so no static token needs to be configured on the apiserver in advance, which also makes it easier to manage.

After creating the bootstrap token, we use it to build kubelet-bootstrap.kubeconfig for deploying the worker nodes later (kubelet uses kubelet-bootstrap.kubeconfig to obtain its certificates automatically).

export BOOTSTRAP_TOKEN=$(kubeadm token create \
 --description kubelet-bootstrap-token \
 --groups system:bootstrappers:kubelet \
 --kubeconfig ~/.kube/config)

#List the created tokens
kubeadm token list --kubeconfig ~/.kube/config
####
x8cwv4.hqo4ju9kalaecqcj  23h  2023-03-19T13:14:48+08:00  authentication,signing  kubelet-bootstrap-token  system:bootstrappers:kubelet

#View the Secret associated with the token
kubectl get secrets -n kube-system|grep bootstrap
###
bootstrap-token-dv49cd  bootstrap.kubernetes.io/token  7  52s

kubectl config set-cluster bootstrap \
 --certificate-authority=ca.pem \
 --embed-certs=true \
 --server=https://192.168.10.100:8443 \
 --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
 --token=${BOOTSTRAP_TOKEN} \
 --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config set-context bootstrap \
 --cluster=bootstrap \
 --user=kubelet-bootstrap \
 --kubeconfig=kubelet-bootstrap.kubeconfig

kubectl config use-context bootstrap \
 --kubeconfig=kubelet-bootstrap.kubeconfig

#Distribute the generated kubelet-bootstrap.kubeconfig to all nodes
cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
scp kubelet-bootstrap.kubeconfig 192.168.10.12:/etc/kubernetes/
scp kubelet-bootstrap.kubeconfig 192.168.10.13:/etc/kubernetes/

Deploy the service

#1.Startup configuration kubelet-config.yaml
cat <<EOF> /etc/kubernetes/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "192.168.10.11"
staticPodPath: ""
syncFrequency: 1m
fileCheckFrequency: 20s
httpCheckFrequency: 20s
staticPodURL: ""
port: 10250
readOnlyPort: 0
rotateCertificates: true
serverTLSBootstrap: true
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
    cacheTTL: 2m0s
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
registryPullQPS: 0
registryBurst: 20
eventRecordQPS: 0
eventBurst: 20
enableDebuggingHandlers: true
enableContentionProfiling: true
healthzPort: 10248
healthzBindAddress: "192.168.10.11"
clusterDomain: "cluster.local"
clusterDNS:
  - "10.254.0.2"
nodeStatusUpdateFrequency: 10s
nodeStatusReportFrequency: 1m
imageMinimumGCAge: 2m
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
volumeStatsAggPeriod: 1m
kubeletCgroups: ""
systemCgroups: ""
cgroupRoot: ""
cgroupsPerQOS: true
cgroupDriver: cgroupfs
runtimeRequestTimeout: 10m
hairpinMode: promiscuous-bridge
maxPods: 220
podCIDR: "172.30.0.0/16"
podPidsLimit: -1
resolvConf: "/etc/resolv.conf"
maxOpenFiles: 1000000
kubeAPIQPS: 1000
kubeAPIBurst: 2000
serializeImagePulls: false
evictionHard:
  memory.available:  "100Mi"
  nodefs.available:  "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
evictionSoft: {}
enableControllerAttachDetach: true
failSwapOn: true
containerLogMaxSize: 20Mi
containerLogMaxFiles: 10
systemReserved: {}
kubeReserved: {}
systemReservedCgroup: ""
kubeReservedCgroup: ""
enforceNodeAllocatable: ["pods"]
EOF

#2.Configure kubelet.service
cat <<EOF> /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/app/kubelet
ExecStart=/usr/local/sbin/kubelet \\
 --runtime-request-timeout=15m \\
 --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
 --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
 --config=/etc/kubernetes/kubelet-config.yaml \\
 --cert-dir=/etc/kubernetes/ssl \\
 --hostname-override=192.168.10.11 \\
 --register-node=true \\
 --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

#3.Start the service
mkdir -p /app/kubelet

systemctl daemon-reload
systemctl enable --now kubelet.service && systemctl status kubelet.service 

approveCSR

What is a CSR?

CSR stands for Certificate Signing Request. When a digital certificate is requested, the applicant's CSP (cryptographic service provider) generates a private key and, at the same time, a certificate request file. The applicant submits the CSR to a certificate authority, which signs it with its root CA private key to produce the certificate issued to the user.

The kubelet on each node uses the bootstrap token to call the apiserver CSR API and request a certificate. After authenticating via the bootstrap token, kubelet is placed in the system:bootstrappers group; we still need to authorize that group to call the CSR API, which is done by binding it to the predefined system:node-bootstrapper ClusterRole.

On startup, kubelet checks whether the file given by --kubeconfig exists. If it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver. On receiving the CSR, kube-apiserver authenticates the embedded token (the one created earlier with kubeadm); once authenticated, the request's user is set to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.
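The token itself has a fixed shape that explains the Secret name seen earlier: it is <token-id>.<token-secret>, and the backing Secret in kube-system is named bootstrap-token-<token-id>. A small shell sketch (the token value is just the sample from the kubeadm token list output above):

```shell
# A bootstrap token is "<token-id>.<token-secret>"; its backing Secret in
# kube-system is named "bootstrap-token-<token-id>".
TOKEN='x8cwv4.hqo4ju9kalaecqcj'   # sample token from the earlier output
TOKEN_ID=${TOKEN%%.*}             # strip everything from the first dot on
echo "bootstrap-token-${TOKEN_ID}"
```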

#Create a ClusterRoleBinding that binds the group system:bootstrappers to the ClusterRole system:node-bootstrapper:

cat <<EOF | kubectl apply -f -
# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-bootstrap
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF

#Then start kubelet
systemctl daemon-reload \
&& systemctl enable kubelet \
&& systemctl start kubelet \
&& systemctl status kubelet

#Check the service
netstat -lantp|grep kubelet
tcp  0  0 127.0.0.1:10248      0.0.0.0:*            LISTEN      44809/kubelet
tcp  0  0 192.168.10.11:52402  192.168.10.100:8443  ESTABLISHED 44809/kubelet
tcp  0  0 192.168.10.11:52390  192.168.10.100:8443  ESTABLISHED 44809/kubelet
tcp6 0  0 :::10250             :::*                 LISTEN      44809/kubelet
#List the CSRs
kubectl get csr
###Output###
NAME      AGE SIGNERNAME                                  REQUESTOR               CONDITION
csr-54b9r 42s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:y6mj17 Pending
csr-bmvfm 43s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:y6mj17 Pending
csr-szxrd 43s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:y6mj17 Pending

#Auto-approve CSR requests
cat <<EOF | kubectl apply -f -
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

#auto-approve-csrs-for-group: auto-approves a node's first CSR; note that the first CSR is requested with group system:bootstrappers
#node-client-cert-renewal: auto-approves renewal of a node's expiring client certificates; the generated certificates have group system:nodes
#node-server-cert-renewal: auto-approves renewal of a node's expiring server certificates; the generated certificates have group system:nodes



#Gotcha: kubelet server certificates are not approved automatically. For security reasons the CSR approving controllers do not auto-approve kubelet server certificate signing requests; approve them manually with kubectl certificate approve <csr name>
kubectl get csr|grep Pending
csr-6vs4g 2m16s kubernetes.io/kubelet-serving system:node:192.168.10.12 Pending
csr-pzbph 2m23s kubernetes.io/kubelet-serving system:node:192.168.10.13 Pending
csr-zpmwz 2m23s kubernetes.io/kubelet-serving system:node:192.168.10.11 Pending

kubectl certificate approve <csr name> #approve a certificate signing request
kubectl certificate deny <csr name> #deny a certificate signing request
#Approve all pending requests in one batch
kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve

#List the nodes; without a network plugin installed they will show NotReady
kubectl get node
NAME            STATUS     ROLES    AGE     VERSION
192.168.10.11   NotReady   <none>   19s     v1.27.3
192.168.10.12   NotReady   <none>   35s     v1.27.3
192.168.10.13   NotReady   <none>   8s      v1.27.3

kube-proxy

kube-proxy runs on all worker nodes. It watches the apiserver for changes to Services and Endpoints, and creates routing rules to provide service IPs and load balancing.
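Because kube-proxy will run in ipvs mode (configured below), the IPVS kernel modules must be loadable on every node. A hedged sketch that only prints the modprobe commands (a dry run; drop the echo to actually load them, and note that on kernels before 4.19 the conntrack module is named nf_conntrack_ipv4):

```shell
# Kernel modules kube-proxy's ipvs mode depends on; printed as a dry run
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  echo "modprobe ${mod}"
done
```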

Prepare certificates

cat <<EOF> kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShangHai",
      "L": "ShangHai",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF

#CN: sets this certificate's User to system:kube-proxy;
#The predefined binding system:node-proxier binds User system:kube-proxy to the ClusterRole system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs.

#Generate the certificate and private key
cfssl gencert -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 kube-proxy-csr.json | cfssljson -bare kube-proxy
#Two files are generated
kube-proxy-key.pem
kube-proxy.pem   

cp kube-proxy*.pem /etc/kubernetes/ssl/
scp kube-proxy*.pem 192.168.10.12:/etc/kubernetes/ssl/
scp kube-proxy*.pem 192.168.10.13:/etc/kubernetes/ssl/

#Create the kubeconfig file
kubectl config set-cluster kubernetes \
 --certificate-authority=ca.pem \
 --embed-certs=true \
 --server=https://192.168.10.100:8443 \
 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
 --client-certificate=kube-proxy.pem \
 --client-key=kube-proxy-key.pem \
 --embed-certs=true \
 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
 --cluster=kubernetes \
 --user=kube-proxy \
 --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

cp kube-proxy.kubeconfig /etc/kubernetes/
scp kube-proxy.kubeconfig 192.168.10.12:/etc/kubernetes/
scp kube-proxy.kubeconfig 192.168.10.13:/etc/kubernetes/

Deploy the service

#1.Configuration file
cat <<EOF> /etc/kubernetes/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  burst: 200
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 100
clusterCIDR: 172.30.0.0/16
bindAddress: 192.168.10.11
hostnameOverride: 192.168.10.11
healthzBindAddress: 192.168.10.11:10256
metricsBindAddress: 192.168.10.11:10249
enableProfiling: true
mode: "ipvs"
portRange: ""
iptables:
  masqueradeAll: false
ipvs:
  scheduler: rr
  excludeCIDRs: []
EOF

#2.kube-proxy.service
cat <<EOF> /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/app/kube-proxy
ExecStart=/usr/local/sbin/kube-proxy \\
 --config=/etc/kubernetes/kube-proxy-config.yaml \\
 --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

#3.Start the service
mkdir -p /app/kube-proxy

systemctl daemon-reload \
 && systemctl enable kube-proxy \
 && systemctl restart kube-proxy \
 && systemctl status kube-proxy
 
#4.Check
netstat -lantup|grep kube-proxy
tcp  0  0 192.168.10.11:10249  0.0.0.0:*  LISTEN  46560/kube-proxy
tcp  0  0 192.168.10.11:10256  0.0.0.0:*  LISTEN  46560/kube-proxy

ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.10.11:6443  Masq    1      0          0 
  -> 192.168.10.12:6443  Masq    1      0          0
  -> 192.168.10.13:6443  Masq    1      0          0

calico

Calico must be installed on all nodes. Its purpose is to let containers on different hosts communicate with each other, and it forms the network foundation of the Kubernetes cluster.

Calico uses IPIP or BGP (IPIP by default) to build an interconnected Pod network across the nodes. See the official docs.
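As a quick sanity check of Pod addressing, here is a pure-bash sketch (an illustration, not part of the deployment) that tests whether an address falls inside the pool configured below, 172.30.0.0/16, this cluster's Pod CIDR:

```shell
# Pure-bash check that an IP belongs to a CIDR (here, the calico Pod pool)
ip_to_int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
in_cidr() {
  local ip net bits mask
  ip=$(ip_to_int "$1"); net=$(ip_to_int "${2%/*}"); bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}
in_cidr 172.30.5.17 172.30.0.0/16 && echo "inside pool" || echo "outside pool"
```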

#1.Download the manifest
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
  
#2.Set the Pod network
- name: CALICO_IPV4POOL_CIDR
  value: "172.30.0.0/16"
  
#3.Apply
kubectl apply -f calico.yaml

#4.Check
kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS     AGE
kube-system   calico-kube-controllers-85578c44bf-9vxn6   1/1     Running   0            2m
kube-system   calico-node-4qghn                          1/1     Running   0            2m
kube-system   calico-node-6bc44                          1/1     Running   0            2m
kube-system   calico-node-77bf8                          1/1     Running   0            2m

coredns

CoreDNS is a DNS server. Since DNS is a common service-discovery mechanism, many open-source projects and engineers use CoreDNS to provide service discovery; Kubernetes uses CoreDNS inside the cluster for exactly that.
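The name CoreDNS answers for a Service follows the fixed pattern <service>.<namespace>.svc.<clusterDomain>; with the clusterDomain set in the kubelet config above (cluster.local), for example:

```shell
# The DNS name CoreDNS serves for a Service: <name>.<namespace>.svc.<domain>
svc=kubernetes; ns=default; domain=cluster.local
fqdn="${svc}.${ns}.svc.${domain}"
echo "$fqdn"   # kubernetes.default.svc.cluster.local
```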

The Kubernetes release package ships the DNS manifests in the kubernetes-src\cluster\addons\dns directory.

#1.Adjust the configuration
sed -i -e "s/__DNS__DOMAIN__/cluster.local/g" \
-e "s/__DNS__MEMORY__LIMIT__/500Mi/g" \
-e "s/__DNS__SERVER__/10.254.0.2/g" coredns.yaml.base

#Switch the image to the Aliyun mirror
image: registry.aliyuncs.com/google_containers/coredns:v1.10.1

#2.Create the service
mv coredns.yaml.base coredns.yaml
kubectl create -f coredns.yaml -n kube-system

#3.Check the pod
kubectl get pod -n kube-system
kube-system   coredns-5bfcdcfd96-pgttd   1/1     Running   0    11s

Verify the cluster

#1.Check node status
kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.10.11   Ready    <none>   6m     v1.27.3
192.168.10.12   Ready    <none>   5m     v1.27.3
192.168.10.13   Ready    <none>   6m     v1.27.3
#2.Deploy a test service
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF

#3.Verify
Access the Service's ClusterIP and NodePort from a node.
Exec into a pod and access pod IPs on other nodes.
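The per-node check can be scripted; the sketch below only prints the curl probes (a dry run), and the 30080 NodePort is a hypothetical placeholder, so read the real port from kubectl get svc nginx-ds first:

```shell
# Dry run: print a curl probe of the nginx-ds NodePort on every node.
# NODE_PORT=30080 is a placeholder; look it up with: kubectl get svc nginx-ds
NODE_PORT=30080
for node in 192.168.10.11 192.168.10.12 192.168.10.13; do
  echo "curl -s -o /dev/null -w '%{http_code}\n' http://${node}:${NODE_PORT}/"
done
```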
