1 Environment Overview

Although tools such as kubeadm, kops, kubespray, rke, and kubesphere can deploy a K8s cluster quickly, many people still prefer deploying K8s from binaries.

A binary deployment deepens your understanding of each K8s component, and lets you place the components on different machines flexibly to suit your needs. You can also issue self-signed certificates with a very long lifetime, say 99 years, and never suffer a production incident from a forgotten certificate renewal.

This article targets the latest version as of this writing (2021-12-31), K8s 1.23.1. The overall procedure differs little from the 1.20 and 1.22 guides found online; it is mainly based on teacher Han Xianchao's binary deployment tutorial for K8s 1.20.

My environment is an ubuntu 20.04 LTS virtual machine running on an M1 MacBook, so this deployment is built for the arm64 architecture.

1.0 Conventions

  • Command input is prefixed with the ➜ symbol
  • Comments are marked with # or //
  • Command output is separated by a blank line

1.1 Cluster Plan

Role         Hostname              IP           Components
master node  ubuntu-k8s-master-01  10.211.55.4  etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet
worker node  ubuntu-k8s-worker-01  10.211.55.5  kubelet, kube-proxy

1.2 Host Configuration

  • Set the hostnames

    # on host 10.211.55.4
    ➜ sudo hostnamectl set-hostname ubuntu-k8s-master-01
    # on host 10.211.55.5
    ➜ sudo hostnamectl set-hostname ubuntu-k8s-worker-01
  • Time synchronization

    # Set the timezone
    ➜ sudo timedatectl set-timezone Asia/Shanghai

    # Install the time sync service
    ➜ sudo apt-get update
    ➜ sudo apt-get install chrony
    ➜ sudo systemctl enable --now chrony
  • Hostname resolution

    # Note: `sudo cat >> file` does not work, because the redirection is
    # performed by the unprivileged shell; use tee -a instead
    ➜ sudo tee -a /etc/hosts << EOF > /dev/null
    10.211.55.4 ubuntu-k8s-master-01
    10.211.55.5 ubuntu-k8s-worker-01
    EOF
  • Create the kubernetes certificate directory

    ➜ sudo mkdir -p /etc/kubernetes/pki

1.3 Downloading the K8s Binaries

Download the binary package from the official release page.

The Server Binaries tarball is enough; it contains every binary we need. After extracting it, copy kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, kubelet, and kubectl to /usr/local/bin on the master node, and copy kubelet and kube-proxy to /usr/local/bin on the worker node.

➜ ll /usr/local/bin/kube*

-rwxr-xr-x 1 root root 128516096 Dec 29 14:59 /usr/local/bin/kube-apiserver
-rwxr-xr-x 1 root root 118489088 Dec 29 14:59 /usr/local/bin/kube-controller-manager
-rwxr-xr-x 1 root root  46202880 Dec 29 14:59 /usr/local/bin/kubectl
-rwxr-xr-x 1 root root 122352824 Dec 29 14:59 /usr/local/bin/kubelet
-rwxr-xr-x 1 root root  43581440 Dec 29 14:59 /usr/local/bin/kube-proxy
-rwxr-xr-x 1 root root  49020928 Dec 29 14:59 /usr/local/bin/kube-scheduler

2 Installing Docker

Docker must be installed on every node; see the official installation guide for reference.

Docker configuration

# Set docker's cgroup driver to systemd: kubelet defaults to systemd, and a
# mismatch will prevent kubelet from starting
➜ sudo mkdir -p /etc/docker
➜ sudo tee /etc/docker/daemon.json << EOF > /dev/null
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
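
dockerd will refuse to start if daemon.json is not valid JSON, so a quick syntax check with Python's stdlib json tool catches typos early. The snippet below uses a temp copy so it runs anywhere; on a real node, point it at /etc/docker/daemon.json instead:

```shell
# Validate daemon.json syntax before (re)starting docker.
# /tmp/daemon.json is a stand-in for /etc/docker/daemon.json here.
cat > /tmp/daemon.json << 'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json && echo "daemon.json: valid JSON"
```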

Installation

# Remove old versions; skip if docker was never installed
➜ sudo apt-get remove docker docker-engine docker.io containerd runc

# Set up the repository
➜ sudo apt-get update
➜ sudo apt-get install ca-certificates curl gnupg lsb-release

# Add the GPG key
➜ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add the apt source
➜ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install docker
➜ sudo apt-get update
➜ sudo apt-get install docker-ce docker-ce-cli containerd.io

# Enable docker on boot
➜ sudo systemctl enable docker

3 Signing the CA Certificate

3.1 Building cfssl

cfssl is a certificate signing tool; it greatly simplifies the signing process and makes issuing self-signed certificates easy.

cfssl has no official arm64 release, so it must be built from source. If you are deploying on amd64, simply download the official binaries instead.

# Download golang
➜ wget https://dl.google.com/go/go1.17.5.linux-arm64.tar.gz

# Extract to /usr/local/go
➜ sudo tar xf go1.17.5.linux-arm64.tar.gz -C /usr/local/

# Verify
➜ export PATH=$PATH:/usr/local/go/bin
➜ go version

# Build cfssl and cfssljson (Go 1.17 requires a version suffix with go install)
➜ go install github.com/cloudflare/cfssl/cmd/cfssl@latest
➜ go install github.com/cloudflare/cfssl/cmd/cfssljson@latest

# Copy the binaries to /usr/local/bin
➜ sudo cp ~/go/bin/cfssl ~/go/bin/cfssljson /usr/local/bin

3.2 Signing the CA Certificate

# All certificates are signed in ~/ssl, then copied to /etc/kubernetes/pki
➜ mkdir ~/ssl
➜ cd ~/ssl

# Certificate signing configuration
# The expiry field is the certificate lifetime. Here it is set to almost 99
# years, so expiry is effectively a non-issue; at least 10 years is
# recommended, it costs nothing
➜ cat > ca-config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "867240h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry": "867240h"
            }
        }
    }
}
EOF
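
As a sanity check on the expiry value, 867240 hours works out to almost exactly 99 years:

```shell
# Convert the certificate lifetime from hours to days and years
DAYS=$((867240 / 24))
YEARS=$(awk 'BEGIN { printf "%.1f", 867240 / 24 / 365 }')
echo "867240h = $DAYS days = about $YEARS years"
```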

# CA certificate signing request
➜ cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "k8s",
            "OU": "system"
        }
    ],
    "ca": {
        "expiry": "867240h"
    }
}
EOF

# Sign the CA certificate
➜ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Verify: two certificate files are generated
➜ ll ca*pem

-rw------- 1 haxi haxi 1675 Dec 30 11:32 ca-key.pem
-rw-rw-r-- 1 haxi haxi 1314 Dec 30 11:32 ca.pem

# Copy the CA certificate to /etc/kubernetes/pki
➜ sudo cp ca*pem /etc/kubernetes/pki
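After signing, it is worth confirming that a certificate really got the intended lifetime. The same openssl command works on any cert; to keep the snippet self-contained it is demonstrated here on a throwaway self-signed certificate with the same lifetime (36135 days = 867240h). On the master, point it at /etc/kubernetes/pki/ca.pem instead:

```shell
# Generate a throwaway cert with the same lifetime, for demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 36135 \
  -subj "/CN=kubernetes/O=k8s" \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem 2>/dev/null

# Print the expiry date; run this against ca.pem to check the real CA
openssl x509 -in /tmp/demo-ca.pem -noout -enddate
```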

4 Deploying etcd

The etcd version chosen is the latest, 3.5.1; download the binaries from the official release page.

4.1 Issuing the Certificate

# etcd certificate signing request
# The IPs in the hosts field are all etcd cluster node addresses. Plan ahead
# and reserve a few IPs for future scaling. I list 6 IPs here: the future
# etcd cluster members plus some spares
➜ cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "10.211.55.2",
        "10.211.55.3",
        "10.211.55.4",
        "10.211.55.22",
        "10.211.55.23",
        "10.211.55.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "k8s",
            "OU": "system"
        }
    ]
}
EOF

# Sign the etcd certificate
➜ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# Verify: two certificate files are generated
➜ ll etcd*pem

-rw------- 1 haxi haxi 1679 Dec 30 11:32 etcd-key.pem
-rw-rw-r-- 1 haxi haxi 1440 Dec 30 11:32 etcd.pem

# Copy the etcd certificate to /etc/kubernetes/pki
➜ sudo cp etcd*pem /etc/kubernetes/pki

4.2 Deploying etcd

Extract the downloaded etcd tarball and copy the etcd and etcdctl binaries to /usr/local/bin

➜ ll /usr/local/bin/etcd*

-rwxrwxr-x 1 root root 21823488 Dec 29 14:13 /usr/local/bin/etcd
-rwxrwxr-x 1 root root 16711680 Dec 29 14:13 /usr/local/bin/etcdctl

Write the service configuration file

➜ sudo mkdir -p /etc/etcd /var/lib/etcd
➜ sudo tee /etc/etcd/etcd.conf << EOF > /dev/null
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.4:2379"

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.4:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://10.211.55.4:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Explanation of the fields
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster-internal communication
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: list of cluster node addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster communication token
ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one
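The conf file is just plain KEY="value" lines: systemd loads it via EnvironmentFile, and etcd natively reads ETCD_* environment variables, which is why the unit file below needs no flags for these settings. You can sanity-check the syntax by sourcing it in a shell (a two-line temp copy is used here so the snippet is self-contained):

```shell
# etcd.conf is shell-compatible KEY="value" pairs
cat > /tmp/etcd.conf << 'EOF'
ETCD_NAME="etcd1"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.4:2379"
EOF

# Source it and confirm the variables resolve as expected
set -a; . /tmp/etcd.conf; set +a
echo "$ETCD_NAME will listen on $ETCD_LISTEN_CLIENT_URLS"
```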

Write the systemd unit

➜ sudo tee /lib/systemd/system/etcd.service << "EOF" > /dev/null
[Unit]
Description=etcd server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/kubernetes/pki/etcd.pem \
  --key-file=/etc/kubernetes/pki/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the etcd service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now etcd

# Verify
➜ systemctl status etcd
# Check logs
➜ journalctl -u etcd
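
Beyond systemctl status, etcdctl can probe the endpoint directly. Because client cert auth is enforced, the client certificates must be passed explicitly. The check is guarded with `command -v` so the snippet degrades gracefully on machines without etcdctl:

```shell
export ETCDCTL_API=3
if command -v etcdctl > /dev/null; then
  # TLS client auth is enforced, so the etcd client cert must be presented
  etcdctl --endpoints=https://10.211.55.4:2379 \
    --cacert=/etc/kubernetes/pki/ca.pem \
    --cert=/etc/kubernetes/pki/etcd.pem \
    --key=/etc/kubernetes/pki/etcd-key.pem \
    endpoint health
else
  echo "etcdctl not on PATH; run this on the etcd node"
fi
```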

5 Deploying kube-apiserver

5.1 Issuing the Certificate

# apiserver certificate signing request
# The IPs in the hosts field are all apiserver node addresses; plan ahead and
# reserve a few IPs for future scaling. I list 6 IPs here:
# 10.211.55.2 10.211.55.3 10.211.55.4 10.211.55.22 10.211.55.23 10.211.55.24
# 10.96.0.1 is the first IP of the service CIDR
# kubernetes.default.svc.cluster.local is the apiserver's service domain
➜ cat > kube-apiserver-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.211.55.2",
        "10.211.55.3",
        "10.211.55.4",
        "10.211.55.22",
        "10.211.55.23",
        "10.211.55.24",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "k8s",
            "OU": "system"
        }
    ]
}
EOF

# Sign the apiserver certificate
➜ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

# Verify: two certificate files are generated
➜ ll kube-apiserver*pem

-rw------- 1 haxi haxi 1675 Dec 30 11:33 kube-apiserver-key.pem
-rw-rw-r-- 1 haxi haxi 1590 Dec 30 11:33 kube-apiserver.pem

# Copy the apiserver certificate to /etc/kubernetes/pki
➜ sudo cp kube-apiserver*pem /etc/kubernetes/pki

5.2 Deploying kube-apiserver

Write the service configuration file

➜ sudo mkdir -p /var/log/kubernetes
➜ sudo tee /etc/kubernetes/kube-apiserver.conf << "EOF" > /dev/null
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/pki/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \
  --etcd-certfile=/etc/kubernetes/pki/etcd.pem \
  --etcd-keyfile=/etc/kubernetes/pki/etcd-key.pem \
  --etcd-servers=https://10.211.55.4:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=1 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
EOF

Generate the token file

➜ sudo tee /etc/kubernetes/token.csv << EOF > /dev/null
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
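
The heredoc expands into a single CSV line: a random 32-hex-character token, the bootstrap user name, its uid, and its group. The token recipe can be checked standalone:

```shell
# Same token recipe as above, written to a temp file for inspection
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/token.csv
cat /tmp/token.csv

# The first field is what the kubelet presents during TLS bootstrap
awk -F, '{print $1}' /tmp/token.csv
```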

Write the systemd unit

➜ sudo tee /lib/systemd/system/kube-apiserver.service << "EOF" > /dev/null
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the kube-apiserver service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kube-apiserver

# Verify
➜ systemctl status kube-apiserver
# Check logs
➜ journalctl -u kube-apiserver

6 Deploying kubectl

With kube-apiserver deployed we can set up kubectl, which we can then use to verify that the apiserver is working properly.

6.1 Issuing the Certificate

# kubectl certificate signing request
# The O field must be system:masters, a built-in apiserver identity that
# carries cluster management privileges
➜ cat > kubectl-csr.json << EOF
{
    "CN": "clusteradmin",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "system:masters",
            "OU": "system"
        }
    ]
}
EOF

# Sign the kubectl certificate
➜ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubectl-csr.json | cfssljson -bare kubectl

# Verify: two certificate files are generated
➜ ll kubectl*pem

-rw------- 1 haxi haxi 1675 Dec 30 11:34 kubectl-key.pem
-rw-rw-r-- 1 haxi haxi 1415 Dec 30 11:34 kubectl.pem

6.2 Generating the kubeconfig

➜ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube.config

➜ kubectl config set-credentials clusteradmin --client-certificate=kubectl.pem --client-key=kubectl-key.pem --embed-certs=true --kubeconfig=kube.config

➜ kubectl config set-context kubernetes --cluster=kubernetes --user=clusteradmin --kubeconfig=kube.config

➜ kubectl config use-context kubernetes --kubeconfig=kube.config

➜ mkdir -p ~/.kube
➜ cp kube.config ~/.kube/config
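
For reference, the four set-cluster / set-credentials / set-context / use-context commands above produce a kubeconfig shaped like this (base64 certificate data elided):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.pem>
    server: https://10.211.55.4:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: clusteradmin
  name: kubernetes
current-context: kubernetes
preferences: {}
users:
- name: clusteradmin
  user:
    client-certificate-data: <base64 of kubectl.pem>
    client-key-data: <base64 of kubectl-key.pem>
```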

6.3 Checking Cluster Info

➜ kubectl cluster-info

Kubernetes control plane is running at https://10.211.55.4:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

➜ kubectl get all -A

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  23h

➜ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}

7 Deploying kube-controller-manager

7.1 Issuing the Certificate

# controller-manager certificate signing request
# The IPs in the hosts field are all controller-manager node addresses; plan
# ahead and reserve a few IPs for future scaling. I list 6 IPs here
➜ cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
        "127.0.0.1",
        "10.211.55.2",
        "10.211.55.3",
        "10.211.55.4",
        "10.211.55.22",
        "10.211.55.23",
        "10.211.55.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "system:kube-controller-manager",
            "OU": "system"
        }
    ]
}
EOF

# Sign the controller-manager certificate
➜ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# Verify: two certificate files are generated
➜ ll kube-controller-manager*pem

-rw------- 1 haxi haxi 1679 Dec 30 12:13 kube-controller-manager-key.pem
-rw-rw-r-- 1 haxi haxi 1513 Dec 30 12:13 kube-controller-manager.pem

# Copy the controller-manager certificate to /etc/kubernetes/pki
➜ sudo cp kube-controller-manager*pem /etc/kubernetes/pki

7.2 Deploying kube-controller-manager

Write the service configuration file

➜ sudo tee /etc/kubernetes/kube-controller-manager.conf << "EOF" > /dev/null
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10257 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.96.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
  --cluster-signing-duration=867240h \
  --tls-cert-file=/etc/kubernetes/pki/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/pki/kube-controller-manager-key.pem \
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
  --root-ca-file=/etc/kubernetes/pki/ca.pem \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --use-service-account-credentials=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/12 \
  --v=4"
EOF

Generate the kubeconfig

➜ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube-controller-manager.kubeconfig

➜ kubectl config set-credentials kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

➜ kubectl config set-context default --cluster=kubernetes --user=kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

➜ kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

➜ sudo cp kube-controller-manager.kubeconfig /etc/kubernetes/

Write the systemd unit

➜ sudo tee /lib/systemd/system/kube-controller-manager.service << "EOF" > /dev/null
[Unit]
Description=Kubernetes controller manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the kube-controller-manager service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kube-controller-manager

# Verify
➜ systemctl status kube-controller-manager
# Check logs
➜ journalctl -u kube-controller-manager

Check the component status

➜ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}

8 Deploying kube-scheduler

8.1 Issuing the Certificate

# scheduler certificate signing request
# The IPs in the hosts field are all scheduler node addresses; plan ahead and
# reserve a few IPs for future scaling. I list 6 IPs here
➜ cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
        "127.0.0.1",
        "10.211.55.2",
        "10.211.55.3",
        "10.211.55.4",
        "10.211.55.22",
        "10.211.55.23",
        "10.211.55.24"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "system:kube-scheduler",
            "OU": "system"
        }
    ]
}
EOF

# Sign the scheduler certificate
➜ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

# Verify: two certificate files are generated
➜ ll kube-scheduler*pem

-rw------- 1 haxi haxi 1679 Dec 30 13:19 kube-scheduler-key.pem
-rw-rw-r-- 1 haxi haxi 1489 Dec 30 13:19 kube-scheduler.pem

# Copy the scheduler certificate to /etc/kubernetes/pki
➜ sudo cp kube-scheduler*pem /etc/kubernetes/pki

8.2 Deploying kube-scheduler

Write the service configuration file

➜ sudo tee /etc/kubernetes/kube-scheduler.conf << "EOF" > /dev/null
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
EOF

Generate the kubeconfig

➜ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube-scheduler.kubeconfig

➜ kubectl config set-credentials kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

➜ kubectl config set-context default --cluster=kubernetes --user=kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

➜ kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

➜ sudo cp kube-scheduler.kubeconfig /etc/kubernetes/

Write the systemd unit

➜ sudo tee /lib/systemd/system/kube-scheduler.service << "EOF" > /dev/null
[Unit]
Description=Kubernetes scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the kube-scheduler service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kube-scheduler

# Verify
➜ systemctl status kube-scheduler
# Check logs
➜ journalctl -u kube-scheduler

Check the component status

➜ kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}

9 Deploying kubelet

Running kubelet on the master node is optional. Once kubelet is deployed, the master can run Pods too; if you don't want that, taint the master node.

There are benefits to running kubelet on the master: you can inspect the node with commands such as kubectl get node, and you can run monitoring and log collection agents on it.

9.1 Generating the kubeconfig

➜ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kubelet.kubeconfig

➜ kubectl config set-credentials kubelet-bootstrap --token=$(awk -F, '{print $1}' /etc/kubernetes/token.csv) --kubeconfig=kubelet.kubeconfig

➜ kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet.kubeconfig

➜ kubectl config use-context default --kubeconfig=kubelet.kubeconfig

➜ sudo cp kubelet.kubeconfig /etc/kubernetes/

9.2 Deploying kubelet

Write the service configuration file

➜ sudo tee /etc/kubernetes/kubelet.conf << "EOF" > /dev/null
KUBELET_OPTS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --config=/etc/kubernetes/kubelet.yaml \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/pki \
  --network-plugin=cni \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \
  --logtostderr=false \
  --v=4 \
  --log-dir=/var/log/kubernetes \
  --fail-swap-on=false"
EOF

➜ sudo tee /etc/kubernetes/kubelet.yaml << EOF > /dev/null
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 0
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
healthzBindAddress: 127.0.0.1
healthzPort: 10248
rotateCertificates: true
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Generate the kubeconfig

➜ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

➜ kubectl config set-credentials kubelet-bootstrap --token=$(awk -F, '{print $1}' /etc/kubernetes/token.csv) --kubeconfig=kubelet-bootstrap.kubeconfig

➜ kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

➜ kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

➜ sudo cp kubelet-bootstrap.kubeconfig /etc/kubernetes/

Write the systemd unit

➜ sudo tee /lib/systemd/system/kubelet.service << "EOF" > /dev/null
[Unit]
Description=Kubernetes kubelet
After=network.target network-online.target docker.service
Wants=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the kubelet service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kubelet

# Verify
➜ systemctl status kubelet
# Check logs
➜ journalctl -u kubelet
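
kubelet.yaml above binds a plain-HTTP health endpoint to 127.0.0.1:10248 (healthzBindAddress / healthzPort), so a local curl is a quick liveness check. The probe is guarded so the snippet is harmless on machines where curl is missing or kubelet is not running:

```shell
# Probe the kubelet health endpoint configured in kubelet.yaml
HEALTHZ_URL="http://127.0.0.1:10248/healthz"
if command -v curl > /dev/null; then
  curl -s "$HEALTHZ_URL" || echo "kubelet not reachable on $HEALTHZ_URL"
  echo
fi
```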

Approve the node's CSR so it can join the cluster

➜ kubectl get csr

NAME        AGE   SIGNERNAME                                    REQUESTOR           CONDITION
csr-nhjj4   87s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

➜ kubectl certificate approve csr-nhjj4

➜ kubectl get csr

NAME        AGE   SIGNERNAME                                      REQUESTOR           CONDITION
csr-nhjj4   2m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

Check the node

➜ kubectl get node

NAME                   STATUS      ROLES    AGE     VERSION
ubuntu-k8s-master-01   NotReady    <none>   2m40s   v1.23.1

# The node is still NotReady because no network plugin has been installed;
# once a network plugin is properly installed, the status becomes Ready.

10 Deploying kube-proxy

10.1 Issuing the Certificate

# kube-proxy certificate signing request
➜ cat > kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Guangdong",
            "L": "Zhuhai",
            "O": "k8s",
            "OU": "system"
        }
    ]
}
EOF

# Sign the kube-proxy certificate
➜ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Verify: two certificate files are generated
➜ ll kube-proxy*pem

-rw------- 1 haxi haxi 1679 Dec 31 10:26 kube-proxy-key.pem
-rw-rw-r-- 1 haxi haxi 1407 Dec 31 10:26 kube-proxy.pem

# Copy the kube-proxy certificate to /etc/kubernetes/pki
➜ sudo cp kube-proxy*pem /etc/kubernetes/pki

10.2 Deploying kube-proxy

Write the service configuration file

➜ sudo tee /etc/kubernetes/kube-proxy.conf << "EOF" > /dev/null
KUBE_PROXY_OPTS="--config=/etc/kubernetes/kube-proxy.yaml \
  --logtostderr=false \
  --v=4 \
  --log-dir=/var/log/kubernetes"
EOF

➜ sudo tee /etc/kubernetes/kube-proxy.yaml << EOF > /dev/null
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
bindAddress: 0.0.0.0
clusterCIDR: 10.244.0.0/12
healthzBindAddress: 0.0.0.0:10256
metricsBindAddress: 0.0.0.0:10249
mode: ipvs
ipvs:
  scheduler: "rr"
EOF

Generate the kubeconfig

➜ kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://10.211.55.4:6443 --kubeconfig=kube-proxy.kubeconfig

➜ kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

➜ kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

➜ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

➜ sudo cp kube-proxy.kubeconfig /etc/kubernetes/

Write the systemd unit

➜ sudo tee /lib/systemd/system/kube-proxy.service << "EOF" > /dev/null
[Unit]
Description=Kubernetes Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start the kube-proxy service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kube-proxy

# Verify
➜ systemctl status kube-proxy
# Check logs
➜ journalctl -u kube-proxy

11 Deploying calico

Reference: the official calico documentation

➜ curl https://docs.projectcalico.org/manifests/calico.yaml -O

# Change the Pod IP range: find the CALICO_IPV4POOL_CIDR variable, uncomment
# it, and set it as follows
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/12"
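
The uncomment-and-edit step can be scripted with sed. It is demonstrated here on a two-line excerpt so the snippet runs anywhere; the commented-out form and the default 192.168.0.0/16 value match the stock calico.yaml (indentation assumed from the manifest):

```shell
# Minimal excerpt of the stock manifest, for demonstration
cat > /tmp/calico-excerpt.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF

# Uncomment the variable and set the Pod CIDR used in this article
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/12"|' \
  /tmp/calico-excerpt.yaml

cat /tmp/calico-excerpt.yaml
```

Run the same sed against the downloaded calico.yaml before applying it.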

➜ kubectl apply -f calico.yaml

12 Deploying coredns

Reference: the official coredns deployment repository

Download the yaml file and make the following changes:

  • Change CLUSTER_DOMAIN to cluster.local
  • Change REVERSE_CIDRS to in-addr.arpa ip6.arpa
  • Change UPSTREAMNAMESERVER to /etc/resolv.conf; if that errors, use the DNS address of your current network instead
  • Remove STUBDOMAINS
  • Change CLUSTER_DNS_IP to 10.96.0.10 (it must match the clusterDNS configured in /etc/kubernetes/kubelet.yaml)
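
If you start from the coredns.yaml.sed template in the deployment repository, the placeholders listed above are plain strings that can be substituted with sed instead of editing by hand (demonstrated on a one-line excerpt; placeholder names as listed above):

```shell
# Minimal excerpt containing one placeholder from the template
cat > /tmp/coredns-excerpt.yaml << 'EOF'
  clusterIP: CLUSTER_DNS_IP
EOF

# Substitute the cluster DNS IP (must match clusterDNS in kubelet.yaml)
sed -i 's/CLUSTER_DNS_IP/10.96.0.10/' /tmp/coredns-excerpt.yaml
cat /tmp/coredns-excerpt.yaml
```
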
➜ kubectl apply -f coredns.yaml

Verify

➜ kubectl -n kube-system get pod

NAME                                       READY   STATUS    RESTARTS        AGE
calico-kube-controllers-647d84984b-xh55r   1/1     Running   0               41m
calico-node-d9jqp                          1/1     Running   0               40m
coredns-f89fb968f-sxq4m                    1/1     Running   0               30m

➜ kubectl get node

NAME                   STATUS   ROLES    AGE     VERSION
ubuntu-k8s-master-01   Ready    <none>   5d23h   v1.23.1

13 Adding the Worker Node

The worker node needs two components: kubelet and kube-proxy.

Copy the following files from the master node to the worker node

  • /etc/kubernetes/pki/ca.pem
  • /etc/kubernetes/kubelet-bootstrap.kubeconfig
  • /etc/kubernetes/kubelet.yaml
  • /etc/kubernetes/kubelet.conf
  • /lib/systemd/system/kubelet.service
  • /etc/kubernetes/pki/kube-proxy-key.pem
  • /etc/kubernetes/pki/kube-proxy.pem
  • /etc/kubernetes/kube-proxy.conf
  • /etc/kubernetes/kube-proxy.yaml
  • /lib/systemd/system/kube-proxy.service

Copy the kubelet and kube-proxy binaries to /usr/local/bin

Start the kube-proxy service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kube-proxy

# Verify
➜ systemctl status kube-proxy
# Check logs
➜ journalctl -u kube-proxy

Start the kubelet service

➜ sudo systemctl daemon-reload
➜ sudo systemctl enable --now kubelet

# Verify
➜ systemctl status kubelet
# Check logs
➜ journalctl -u kubelet

Approve the node's CSR so it can join the cluster

➜ kubectl get csr

NAME        AGE   SIGNERNAME                                    REQUESTOR           CONDITION
csr-xxxxx   87s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

➜ kubectl certificate approve csr-xxxxx

➜ kubectl get csr

NAME        AGE   SIGNERNAME                                      REQUESTOR           CONDITION
csr-xxxxx   2m10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

Check the nodes

➜ kubectl get node

NAME                   STATUS   ROLES    AGE     VERSION
ubuntu-k8s-master-01   Ready    <none>   2m40s   v1.23.1
ubuntu-k8s-worker-01   Ready    <none>   2m40s   v1.23.1

Afterword

A 1-master, 1-worker binary K8s cluster is now up and running.

You can also label the nodes with roles to make node listings more readable

# Label the master node with the controlplane and etcd roles
➜ kubectl label node ubuntu-k8s-master-01 node-role.kubernetes.io/controlplane=true node-role.kubernetes.io/etcd=true

# Label the worker node with the worker role
➜ kubectl label node ubuntu-k8s-worker-01 node-role.kubernetes.io/worker=true

If you don't want the master node to run Pods, taint it

➜ kubectl taint node ubuntu-k8s-master-01 node-role.kubernetes.io/controlplane=true:NoSchedule

Next, I will add 2 more etcd nodes to form an etcd cluster, and 2 more control-plane nodes, to eliminate single points of failure.
