Manually Deploying a Kubernetes Cluster from Binaries on openEuler 22.03 LTS

This document describes a reference method for deploying a Kubernetes cluster from binaries on the openEuler operating system.

Note: all operations in this document are performed as root.

1. Cluster Status

The cluster used in this document is as follows:

  • Cluster layout: 6 virtual machines running openEuler 22.03 LTS, with 3 master and 3 node machines.
  • Physical host: an openEuler 22.03 LTS x86/ARM server

2. Manually Deploying the Cluster

This chapter describes how to deploy a Kubernetes cluster manually.

Environment

Virtual machine list:

  HostName     Specs        IPv4
  k8smaster0   8C/8G/200G   192.168.122.154/24
  k8smaster1   8C/8G/200G   192.168.122.155/24
  k8smaster2   8C/8G/200G   192.168.122.156/24
  k8snode1     8C/8G/300G   192.168.122.157/24
  k8snode2     8C/8G/300G   192.168.122.158/24
  k8snode3     8C/8G/300G   192.168.122.159/24

3. Installing the Kubernetes Packages

  # dnf install -y docker conntrack-tools socat

Now that the packages are available in EPOL, Kubernetes can be installed directly through dnf:

  # rpm -ivh kubernetes*.rpm

4. Preparing Certificates

Disclaimer: the certificates used in this document are self-signed and must not be used in commercial environments.

Before deploying the cluster, the certificates used for communication between the cluster components must be generated. This document uses the open-source tool CFSSL as the verification and deployment tool, so that users can understand the certificate configuration and how the components' certificates relate to one another. Users may choose another suitable tool instead, such as OpenSSL.

4.1 Building and Installing CFSSL

The reference commands for building and installing CFSSL are as follows (Internet download access is required; configure a proxy first if necessary):

  # wget --no-check-certificate https://github.com/cloudflare/cfssl/archive/v1.5.0.tar.gz
  # tar -zxf v1.5.0.tar.gz
  # cd cfssl-1.5.0/
  # make -j6
  # cp bin/* /usr/local/bin/
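
As a quick sanity check that the binaries are installed and on the PATH (the output shown is illustrative for v1.5.0):

  # cfssl version
  Version: 1.5.0
  Runtime: go1.15.7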

4.2 Generating the Root Certificate

Write a CA configuration file, for example ca-config.json:

  # cat ca-config.json | jq
  {
    "signing": {
      "default": {
        "expiry": "8760h"
      },
      "profiles": {
        "kubernetes": {
          "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
          ],
          "expiry": "8760h"
        }
      }
    }
  }

Write the CA CSR file, for example ca-csr.json:

  # cat ca-csr.json | jq
  {
    "CN": "Kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "openEuler",
        "OU": "WWW",
        "ST": "BinJiang"
      }
    ]
  }

Generate the CA certificate and key:

  # cfssl gencert -initca ca-csr.json | cfssljson -bare ca

This produces the following files:

  ca.csr ca-key.pem ca.pem

4.3 Generating the admin Account Certificate

admin is an account that Kubernetes uses for system administration. Write the CSR configuration for the admin account, for example admin-csr.json:

  # cat admin-csr.json | jq
  {
    "CN": "admin",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "system:masters",
        "OU": "Containerum",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate:

  # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

The result is as follows:

  admin.csr admin-key.pem admin.pem

4.4 Generating the service-account Certificate

Write the CSR configuration file for the service-account account, for example service-account-csr.json:

  # cat service-account-csr.json | jq
  {
    "CN": "service-accounts",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "Kubernetes",
        "OU": "openEuler k8s install",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate:

  # cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes service-account-csr.json | cfssljson -bare service-account

The result is as follows:

  service-account.csr service-account-key.pem service-account.pem

4.5 Generating the kube-controller-manager Certificate

Write the CSR configuration for kube-controller-manager:

  {
    "CN": "system:kube-controller-manager",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "system:kube-controller-manager",
        "OU": "openEuler k8s kcm",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate:

  # cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

The result is as follows:

  kube-controller-manager.csr kube-controller-manager-key.pem kube-controller-manager.pem

4.6 Generating the kube-proxy Certificate

Write the CSR configuration for kube-proxy:

  {
    "CN": "system:kube-proxy",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "system:node-proxier",
        "OU": "openEuler k8s kube proxy",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate:

  # cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

The result is as follows:

  kube-proxy.csr kube-proxy-key.pem kube-proxy.pem

4.7 Generating the kube-scheduler Certificate

Write the CSR configuration for kube-scheduler:

  {
    "CN": "system:kube-scheduler",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "system:kube-scheduler",
        "OU": "openEuler k8s kube scheduler",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate:

  # cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

The result is as follows:

  kube-scheduler.csr kube-scheduler-key.pem kube-scheduler.pem

4.8 Generating the kubelet Certificates

Because these certificates embed the hostname and IP address of the machine each kubelet runs on, the configuration differs from node to node, so a script is used to generate them:

  # cat node_csr_gen.bash
  #!/bin/bash
  nodes=(k8snode1 k8snode2 k8snode3)
  IPs=("192.168.122.157" "192.168.122.158" "192.168.122.159")
  for i in "${!nodes[@]}"; do
  cat > "${nodes[$i]}-csr.json" <<EOF
  {
    "CN": "system:node:${nodes[$i]}",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "system:nodes",
        "OU": "openEuler k8s kubelet",
        "ST": "BinJiang"
      }
    ]
  }
  EOF
  # generate the certificate for this node
  echo "generate: ${nodes[$i]} ${IPs[$i]}"
  cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -hostname=${nodes[$i]},${IPs[$i]} -profile=kubernetes ${nodes[$i]}-csr.json | cfssljson -bare ${nodes[$i]}
  done

Note: if a node has multiple IP addresses or other aliases, additional IPs or hostnames can be appended to -hostname.

The result is as follows:

  k8snode1.csr        k8snode1.pem        k8snode2-key.pem    k8snode3-csr.json
  k8snode1-csr.json   k8snode2.csr        k8snode2.pem        k8snode3-key.pem
  k8snode1-key.pem    k8snode2-csr.json   k8snode3.csr        k8snode3.pem

The CSR configuration, taking k8snode1 as an example, is as follows:

  # cat k8snode1-csr.json
  {
    "CN": "system:node:k8snode1",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "system:nodes",
        "OU": "openEuler k8s kubelet",
        "ST": "BinJiang"
      }
    ]
  }

Note: because every node belongs to the system:nodes group, the CN field of each node's CSR is system:node: followed by the node's hostname.
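
To double-check that a node certificate carries the expected subject and SAN entries, openssl can inspect it; for example for k8snode1 (output abridged and illustrative):

  # openssl x509 -in k8snode1.pem -noout -subject -ext subjectAltName
  subject=C = CN, ST = BinJiang, L = HangZhou, O = system:nodes, OU = openEuler k8s kubelet, CN = system:node:k8snode1
  X509v3 Subject Alternative Name:
      DNS:k8snode1, IP Address:192.168.122.157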

4.9 Generating the kube-apiserver Certificate

Write the CSR configuration file for the kube-apiserver:

  # cat kubernetes-csr.json | jq
  {
    "CN": "kubernetes",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "Kubernetes",
        "OU": "openEuler k8s kube api server",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate and key:

  # cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -hostname=10.32.0.1,192.168.122.154,192.168.122.155,192.168.122.156,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

The result is as follows:

  kubernetes.csr kubernetes-key.pem kubernetes.pem

Note: 10.32.0.1 is an address from the IP range used by internal services; it can be set to another value. The range itself is configured later, when the apiserver service is started.

4.10 Generating the etcd Certificate (Optional)

There are two ways to deploy etcd:

  • Start an etcd service on each machine that runs an api-server
  • Deploy an independent etcd cluster

If etcd is deployed together with the api-server, the kubernetes-key.pem and kubernetes.pem certificates generated above can be used directly.

For an independent etcd cluster, the certificates are created as follows.

Write the CSR configuration for etcd:

  # cat etcd-csr.json | jq
  {
    "CN": "ETCD",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "L": "HangZhou",
        "O": "ETCD",
        "OU": "openEuler k8s etcd",
        "ST": "BinJiang"
      }
    ]
  }

Generate the certificate:

  # cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -hostname=192.168.122.154,192.168.122.155,192.168.122.156,127.0.0.1 -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Note: this assumes the etcd cluster IP addresses are 192.168.122.154, 192.168.122.155, and 192.168.122.156.

The result is as follows:

  etcd.csr etcd-key.pem etcd.pem

5. Installing etcd

5.1 Preparing the Environment

Open the ports used by etcd:

  # firewall-cmd --zone=public --add-port=2379/tcp
  # firewall-cmd --zone=public --add-port=2380/tcp
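
Note that firewall-cmd changes made without --permanent are lost on reboot; to keep the ports open across reboots, the rules can also be added permanently:

  # firewall-cmd --zone=public --permanent --add-port=2379/tcp
  # firewall-cmd --zone=public --permanent --add-port=2380/tcp
  # firewall-cmd --reload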

5.2 Installing the etcd Binary

etcd is currently installed from the rpm package:

  # rpm -ivh etcd*.rpm

Prepare the directories and certificates:

  # mkdir -p /etc/etcd /var/lib/etcd
  # cp ca.pem /etc/etcd/
  # cp kubernetes-key.pem /etc/etcd/
  # cp kubernetes.pem /etc/etcd/
  --- Disable SELinux
  # setenforce 0
  --- Disable the default configuration in /etc/etcd/etcd.conf
  --- by commenting it out, for example: ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"

5.3 Writing the etcd.service File

Taking the k8smaster0 machine as an example:

  # cat /usr/lib/systemd/system/etcd.service
  [Unit]
  Description=Etcd Server
  After=network.target
  After=network-online.target
  Wants=network-online.target
  [Service]
  Type=notify
  WorkingDirectory=/var/lib/etcd/
  EnvironmentFile=-/etc/etcd/etcd.conf
  # set GOMAXPROCS to number of processors
  ExecStart=/bin/bash -c "ETCD_UNSUPPORTED_ARCH=arm64 /usr/bin/etcd --name=k8smaster0 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file=/etc/etcd/kubernetes.pem --peer-key-file=/etc/etcd/kubernetes-key.pem --trusted-ca-file=/etc/etcd/ca.pem --peer-trusted-ca-file=/etc/etcd/ca.pem --peer-client-cert-auth --client-cert-auth --initial-advertise-peer-urls https://192.168.122.154:2380 --listen-peer-urls https://192.168.122.154:2380 --listen-client-urls https://192.168.122.154:2379,https://127.0.0.1:2379 --advertise-client-urls https://192.168.122.154:2379 --initial-cluster-token etcd-cluster-0 --initial-cluster k8smaster0=https://192.168.122.154:2380,k8smaster1=https://192.168.122.155:2380,k8smaster2=https://192.168.122.156:2380 --initial-cluster-state new --data-dir /var/lib/etcd"
  Restart=always
  RestartSec=10s
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

Notes:

  • On arm64, the startup setting ETCD_UNSUPPORTED_ARCH=arm64 must be added;
  • Because this document deploys etcd and the Kubernetes control plane on the same machines, the kubernetes.pem and kubernetes-key.pem certificates are used to start etcd;
  • A single CA certificate is used throughout this deployment. etcd could generate its own CA and sign its other certificates with it, but the client the apiserver uses to access etcd would then need a certificate signed by that CA;
  • --initial-cluster must list every machine where etcd is deployed;
  • To improve etcd's storage efficiency, a directory on an SSD can be used as --data-dir.

Start the service:

  # systemctl enable etcd
  # systemctl start etcd

Then deploy the other machines one by one; a parameterized sketch is given below.
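
Between the three masters only --name and the four URL options that reference the local address differ; --initial-cluster stays identical. A minimal sketch that derives the unit file for another master from the k8smaster0 file above (the helper script and output file name are illustrative):

  # cat gen_etcd_service.bash
  #!/bin/bash
  # Derive etcd.service for another master from the k8smaster0 template,
  # rewriting only the member name and the locally bound/advertised URLs.
  name=k8smaster1
  ip=192.168.122.155
  sed -e "s/--name=k8smaster0/--name=${name}/" \
      -e "s#--initial-advertise-peer-urls https://[0-9.]*:2380#--initial-advertise-peer-urls https://${ip}:2380#" \
      -e "s#--listen-peer-urls https://[0-9.]*:2380#--listen-peer-urls https://${ip}:2380#" \
      -e "s#--listen-client-urls https://[0-9.]*:2379#--listen-client-urls https://${ip}:2379#" \
      -e "s#--advertise-client-urls https://[0-9.]*:2379#--advertise-client-urls https://${ip}:2379#" \
      /usr/lib/systemd/system/etcd.service > etcd.service.${name}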

5.4 Verifying Basic Functionality

  # ETCDCTL_API=3 etcdctl -w table endpoint status --endpoints=https://192.168.122.155:2379,https://192.168.122.156:2379,https://192.168.122.154:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
  +------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  |           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
  +------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
  | https://192.168.122.155:2379 | b50ec873e253ebaa |  3.4.14 |  262 kB |     false |      false |       819 |         21 |                 21 |        |
  | https://192.168.122.156:2379 | e2b0d126774c6d02 |  3.4.14 |  262 kB |      true |      false |       819 |         21 |                 21 |        |
  | https://192.168.122.154:2379 | f93b3808e944c379 |  3.4.14 |  328 kB |     false |      false |       819 |         21 |                 21 |        |
  +------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
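
The health of each member can also be checked directly (timing output elided):

  # ETCDCTL_API=3 etcdctl endpoint health --endpoints=https://192.168.122.154:2379,https://192.168.122.155:2379,https://192.168.122.156:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem
  https://192.168.122.154:2379 is healthy: successfully committed proposal: took = ...
  https://192.168.122.155:2379 is healthy: successfully committed proposal: took = ...
  https://192.168.122.156:2379 is healthy: successfully committed proposal: took = ...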

6. Deploying the Control Plane Components

6.1 Preparing the kubeconfig Files for All Components

6.1.1 kube-proxy

  # kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.122.154:6443 --kubeconfig=kube-proxy.kubeconfig
  # kubectl config set-credentials system:kube-proxy --client-certificate=/etc/kubernetes/pki/kube-proxy.pem --client-key=/etc/kubernetes/pki/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
  # kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-proxy --kubeconfig=kube-proxy.kubeconfig
  # kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

6.1.2 kube-controller-manager

  # kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-controller-manager.kubeconfig
  # kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
  # kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
  # kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

6.1.3 kube-scheduler

  # kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-scheduler.kubeconfig
  # kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
  # kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
  # kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

6.1.4 admin

  # kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig
  # kubectl config set-credentials admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
  # kubectl config set-context default --cluster=openeuler-k8s --user=admin --kubeconfig=admin.kubeconfig
  # kubectl config use-context default --kubeconfig=admin.kubeconfig

6.1.5 Resulting kubeconfig Files

admin.kubeconfig kube-proxy.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig

6.2 Generating the Encryption Provider Configuration

When the api-server starts, it must be given an encryption key configuration via --encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml. This document generates the key from urandom:

  # cat generate.bash
  #!/bin/bash
  ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
  cat > encryption-config.yaml <<EOF
  kind: EncryptionConfig
  apiVersion: v1
  resources:
    - resources:
        - secrets
      providers:
        - aescbc:
            keys:
              - name: key1
                secret: ${ENCRYPTION_KEY}
        - identity: {}
  EOF
  # api-server startup option: --encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml
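
Once the api-server is running (section 6.5), encryption at rest can be verified by creating a secret and reading its raw value straight from etcd; the stored bytes should begin with the k8s:enc:aescbc:v1: prefix rather than plaintext. A sketch, assuming the admin kubeconfig and etcd certificates from this document:

  # kubectl create secret generic test-secret --from-literal=key=value --kubeconfig admin.kubeconfig
  # ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-secret --endpoints=https://192.168.122.154:2379 --cacert=/etc/etcd/ca.pem --cert=/etc/etcd/kubernetes.pem --key=/etc/etcd/kubernetes-key.pem | hexdump -C | head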

6.3 Copying the Certificates

This document places all the certificates, keys, and configurations used by the components in the /etc/kubernetes/pki/ directory.

  --- Prepare the certificate directory
  # mkdir -p /etc/kubernetes/pki/
  # ls /etc/kubernetes/pki/
  admin-key.pem   encryption-config.yaml              kube-proxy-key.pem     kubernetes.pem             service-account-key.pem
  admin.pem       kube-controller-manager-key.pem     kube-proxy.kubeconfig  kube-scheduler-key.pem     service-account.pem
  ca-key.pem      kube-controller-manager.kubeconfig  kube-proxy.pem         kube-scheduler.kubeconfig
  ca.pem          kube-controller-manager.pem         kubernetes-key.pem     kube-scheduler.pem

6.4 Deploying RBAC for the admin Role

Enable the admin role:

  # cat admin_cluster_role.yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
    name: system:kube-apiserver-to-kubelet
  rules:
    - apiGroups:
        - ""
      resources:
        - nodes/proxy
        - nodes/stats
        - nodes/log
        - nodes/spec
        - nodes/metrics
      verbs:
        - "*"
  --- Apply the admin role
  # kubectl apply --kubeconfig admin.kubeconfig -f admin_cluster_role.yaml

Bind the admin role:

  # cat admin_cluster_rolebind.yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: system:kube-apiserver
    namespace: ""
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:kube-apiserver-to-kubelet
  subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: kubernetes
  --- Bind the admin role
  # kubectl apply --kubeconfig admin.kubeconfig -f admin_cluster_rolebind.yaml

6.5 Deploying the api-server Service

6.5.1 Editing the apiserver Configuration File

  # cat /etc/kubernetes/apiserver
  KUBE_ADVERTIS_ADDRESS="--advertise-address=192.168.122.154"
  KUBE_ALLOW_PRIVILEGED="--allow-privileged=true"
  KUBE_AUTHORIZATION_MODE="--authorization-mode=Node,RBAC"
  KUBE_ENABLE_ADMISSION_PLUGINS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
  KUBE_SECURE_PORT="--secure-port=6443"
  KUBE_ENABLE_BOOTSTRAP_TOKEN_AUTH="--enable-bootstrap-token-auth=true"
  KUBE_ETCD_CAFILE="--etcd-cafile=/etc/kubernetes/pki/ca.pem"
  KUBE_ETCD_CERTFILE="--etcd-certfile=/etc/kubernetes/pki/kubernetes.pem"
  KUBE_ETCD_KEYFILE="--etcd-keyfile=/etc/kubernetes/pki/kubernetes-key.pem"
  KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.122.154:2379,https://192.168.122.155:2379,https://192.168.122.156:2379"
  KUBE_CLIENT_CA_FILE="--client-ca-file=/etc/kubernetes/pki/ca.pem"
  KUBE_KUBELET_CERT_AUTH="--kubelet-certificate-authority=/etc/kubernetes/pki/ca.pem"
  KUBE_KUBELET_CLIENT_CERT="--kubelet-client-certificate=/etc/kubernetes/pki/kubernetes.pem"
  KUBE_KUBELET_CLIENT_KEY="--kubelet-client-key=/etc/kubernetes/pki/kubernetes-key.pem"
  KUBE_KUBELET_HTTPS="--kubelet-https=true"
  KUBE_PROXY_CLIENT_CERT_FILE="--proxy-client-cert-file=/etc/kubernetes/pki/kube-proxy.pem"
  KUBE_PROXY_CLIENT_KEY_FILE="--proxy-client-key-file=/etc/kubernetes/pki/kube-proxy-key.pem"
  KUBE_TLS_CERT_FILE="--tls-cert-file=/etc/kubernetes/pki/kubernetes.pem"
  KUBE_TLS_PRIVATE_KEY_FILE="--tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem"
  KUBE_SERVICE_CLUSTER_IP_RANGE="--service-cluster-ip-range=10.32.0.0/16"
  KUBE_SERVICE_ACCOUNT_ISSUER="--service-account-issuer=https://kubernetes.default.svc.cluster.local"
  KUBE_SERVICE_ACCOUNT_KEY_FILE="--service-account-key-file=/etc/kubernetes/pki/service-account.pem"
  KUBE_SERVICE_ACCOUNT_SIGN_KEY_FILE="--service-account-signing-key-file=/etc/kubernetes/pki/service-account-key.pem"
  KUBE_SERVICE_NODE_PORT_RANGE="--service-node-port-range=30000-32767"
  KUB_ENCRYPTION_PROVIDER_CONF="--encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml"
  KUBE_REQUEST_HEADER_ALLOWED_NAME="--requestheader-allowed-names=front-proxy-client"
  KUBE_REQUEST_HEADER_EXTRA_HEADER_PREF="--requestheader-extra-headers-prefix=X-Remote-Extra-"
  KUBE_REQUEST_HEADER_GROUP_HEADER="--requestheader-group-headers=X-Remote-Group"
  KUBE_REQUEST_HEADER_USERNAME_HEADER="--requestheader-username-headers=X-Remote-User"
  KUBE_API_ARGS=""

The apiserver options are defined in the /etc/kubernetes/apiserver and /etc/kubernetes/config environment files and are then referenced directly in the service file below.

Most of the settings are fairly fixed; a few need attention:

  • --service-cluster-ip-range must agree with the clusterDNS address configured later: the clusterDNS IP has to fall inside this range;
6.5.2 Writing the apiserver systemd Configuration

  # cat /usr/lib/systemd/system/kube-apiserver.service
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://kubernetes.io/docs/reference/generated/kube-apiserver/
  After=network.target
  After=etcd.service
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/apiserver
  ExecStart=/usr/bin/kube-apiserver \
              $KUBE_ADVERTIS_ADDRESS \
              $KUBE_ALLOW_PRIVILEGED \
              $KUBE_AUTHORIZATION_MODE \
              $KUBE_ENABLE_ADMISSION_PLUGINS \
              $KUBE_SECURE_PORT \
              $KUBE_ENABLE_BOOTSTRAP_TOKEN_AUTH \
              $KUBE_ETCD_CAFILE \
              $KUBE_ETCD_CERTFILE \
              $KUBE_ETCD_KEYFILE \
              $KUBE_ETCD_SERVERS \
              $KUBE_CLIENT_CA_FILE \
              $KUBE_KUBELET_CERT_AUTH \
              $KUBE_KUBELET_CLIENT_CERT \
              $KUBE_KUBELET_CLIENT_KEY \
              $KUBE_PROXY_CLIENT_CERT_FILE \
              $KUBE_PROXY_CLIENT_KEY_FILE \
              $KUBE_TLS_CERT_FILE \
              $KUBE_TLS_PRIVATE_KEY_FILE \
              $KUBE_SERVICE_CLUSTER_IP_RANGE \
              $KUBE_SERVICE_ACCOUNT_ISSUER \
              $KUBE_SERVICE_ACCOUNT_KEY_FILE \
              $KUBE_SERVICE_ACCOUNT_SIGN_KEY_FILE \
              $KUBE_SERVICE_NODE_PORT_RANGE \
              $KUBE_LOGTOSTDERR \
              $KUBE_LOG_LEVEL \
              $KUBE_API_PORT \
              $KUBELET_PORT \
              $KUBE_ALLOW_PRIV \
              $KUBE_SERVICE_ADDRESSES \
              $KUBE_ADMISSION_CONTROL \
              $KUB_ENCRYPTION_PROVIDER_CONF \
              $KUBE_REQUEST_HEADER_ALLOWED_NAME \
              $KUBE_REQUEST_HEADER_EXTRA_HEADER_PREF \
              $KUBE_REQUEST_HEADER_GROUP_HEADER \
              $KUBE_REQUEST_HEADER_USERNAME_HEADER \
              $KUBE_API_ARGS
  Restart=on-failure
  Type=notify
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

6.6 Deploying the controller-manager Service

6.6.1 Editing the controller-manager Configuration File

  # cat /etc/kubernetes/controller-manager
  KUBE_BIND_ADDRESS="--bind-address=127.0.0.1"
  KUBE_CLUSTER_CIDR="--cluster-cidr=10.200.0.0/16"
  KUBE_CLUSTER_NAME="--cluster-name=kubernetes"
  KUBE_CLUSTER_SIGNING_CERT_FILE="--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem"
  KUBE_CLUSTER_SIGNING_KEY_FILE="--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem"
  KUBE_KUBECONFIG="--kubeconfig=/etc/kubernetes/pki/kube-controller-manager.kubeconfig"
  KUBE_LEADER_ELECT="--leader-elect=true"
  KUBE_ROOT_CA_FILE="--root-ca-file=/etc/kubernetes/pki/ca.pem"
  KUBE_SERVICE_ACCOUNT_PRIVATE_KEY_FILE="--service-account-private-key-file=/etc/kubernetes/pki/service-account-key.pem"
  KUBE_SERVICE_CLUSTER_IP_RANGE="--service-cluster-ip-range=10.32.0.0/24"
  KUBE_USE_SERVICE_ACCOUNT_CRED="--use-service-account-credentials=true"
  KUBE_CONTROLLER_MANAGER_ARGS="--v=2"

6.6.2 Writing the controller-manager systemd Configuration File

  # cat /usr/lib/systemd/system/kube-controller-manager.service
  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://kubernetes.io/docs/reference/generated/kube-controller-manager/
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/controller-manager
  ExecStart=/usr/bin/kube-controller-manager \
              $KUBE_BIND_ADDRESS \
              $KUBE_LOGTOSTDERR \
              $KUBE_LOG_LEVEL \
              $KUBE_CLUSTER_CIDR \
              $KUBE_CLUSTER_NAME \
              $KUBE_CLUSTER_SIGNING_CERT_FILE \
              $KUBE_CLUSTER_SIGNING_KEY_FILE \
              $KUBE_KUBECONFIG \
              $KUBE_LEADER_ELECT \
              $KUBE_ROOT_CA_FILE \
              $KUBE_SERVICE_ACCOUNT_PRIVATE_KEY_FILE \
              $KUBE_SERVICE_CLUSTER_IP_RANGE \
              $KUBE_USE_SERVICE_ACCOUNT_CRED \
              $KUBE_CONTROLLER_MANAGER_ARGS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

6.7 Deploying the scheduler Service

6.7.1 Editing the scheduler Configuration File

  # cat /etc/kubernetes/scheduler
  KUBE_CONFIG="--kubeconfig=/etc/kubernetes/pki/kube-scheduler.kubeconfig"
  KUBE_AUTHENTICATION_KUBE_CONF="--authentication-kubeconfig=/etc/kubernetes/pki/kube-scheduler.kubeconfig"
  KUBE_AUTHORIZATION_KUBE_CONF="--authorization-kubeconfig=/etc/kubernetes/pki/kube-scheduler.kubeconfig"
  KUBE_BIND_ADDR="--bind-address=127.0.0.1"
  KUBE_LEADER_ELECT="--leader-elect=true"
  KUBE_SCHEDULER_ARGS=""

6.7.2 Writing the scheduler systemd Configuration File

  # cat /usr/lib/systemd/system/kube-scheduler.service
  [Unit]
  Description=Kubernetes Scheduler Plugin
  Documentation=https://kubernetes.io/docs/reference/generated/kube-scheduler/
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/scheduler
  ExecStart=/usr/bin/kube-scheduler \
              $KUBE_LOGTOSTDERR \
              $KUBE_LOG_LEVEL \
              $KUBE_CONFIG \
              $KUBE_AUTHENTICATION_KUBE_CONF \
              $KUBE_AUTHORIZATION_KUBE_CONF \
              $KUBE_BIND_ADDR \
              $KUBE_LEADER_ELECT \
              $KUBE_SCHEDULER_ARGS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

6.8 Enabling the Components

  # systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  # systemctl restart kube-apiserver kube-controller-manager kube-scheduler

6.9 Verifying Basic Functionality

  # curl --cacert /etc/kubernetes/pki/ca.pem https://192.168.122.154:6443/version
  {
    "major": "1",
    "minor": "20",
    "gitVersion": "v1.20.2",
    "gitCommit": "faecb196815e248d3ecfb03c680a4507229c2a56",
    "gitTreeState": "archive",
    "buildDate": "2021-03-02T07:26:14Z",
    "goVersion": "go1.15.7",
    "compiler": "gc",
    "platform": "linux/arm64"
  }
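
The other control plane components can be checked through the apiserver as well (componentstatuses is deprecated but still served in v1.20; output illustrative):

  # kubectl get componentstatuses --kubeconfig admin.kubeconfig
  NAME                 STATUS    MESSAGE             ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-0               Healthy   {"health":"true"}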

7. Deploying the Node Components

This chapter uses the k8snode1 node as an example.

7.1 Preparing the Environment

  # A proxy must be configured on internal networks
  # dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
  # swapoff -a
  # mkdir -p /etc/kubernetes/pki/
  # mkdir -p /etc/cni/net.d
  # mkdir -p /opt/cni
  --- Remove the default kubeconfig
  # rm /etc/kubernetes/kubelet.kubeconfig
  --- To use isulad as the runtime ########
  --- Configure iSulad
  # cat /etc/isulad/daemon.json
  {
    "registry-mirrors": [
      "docker.io"
    ],
    "insecure-registries": [
      "k8s.gcr.io",
      "quay.io"
    ],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
    "network-plugin": "cni",
    "cni-bin-dir": "/usr/libexec/cni/",
    "cni-conf-dir": "/etc/cni/net.d"
  }
  --- Notes: pod-sandbox-image sets the pause image; leaving network-plugin empty disables the CNI plugin and makes the two cni-* paths below ineffective; after installing the plugins, simply restart isulad
  --- Add a proxy to the iSulad environment to download images
  # cat /usr/lib/systemd/system/isulad.service
  [Service]
  Type=notify
  Environment="HTTP_PROXY=http://name:password@proxy:8080"
  Environment="HTTPS_PROXY=http://name:password@proxy:8080"
  --- Restart iSulad and enable it at boot
  # systemctl daemon-reload
  # systemctl restart isulad
  --- To use docker as the runtime ########
  # dnf install -y docker
  --- In environments that need a proxy, docker can be given one: create the configuration file http-proxy.conf with the following content, replacing name, password, and proxy-addr with the actual values.
  # cat /etc/systemd/system/docker.service.d/http-proxy.conf
  [Service]
  Environment="HTTP_PROXY=http://name:password@proxy-addr:8080"
  # systemctl daemon-reload
  # systemctl restart docker

7.2 Creating the kubeconfig Configuration Files

Perform the following operations for each node in turn to create its configuration file:

  # kubectl config set-cluster openeuler-k8s \
      --certificate-authority=/etc/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.122.154:6443 \
      --kubeconfig=k8snode1.kubeconfig
  # kubectl config set-credentials system:node:k8snode1 \
      --client-certificate=/etc/kubernetes/pki/k8snode1.pem \
      --client-key=/etc/kubernetes/pki/k8snode1-key.pem \
      --embed-certs=true \
      --kubeconfig=k8snode1.kubeconfig
  # kubectl config set-context default \
      --cluster=openeuler-k8s \
      --user=system:node:k8snode1 \
      --kubeconfig=k8snode1.kubeconfig
  # kubectl config use-context default --kubeconfig=k8snode1.kubeconfig

Note: replace k8snode1 with the name of the node being configured; a loop over all three nodes is sketched below.
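
A minimal sketch that generates the kubeconfig files for all three nodes in one pass, using the node names and certificate paths from this document (the script name is illustrative):

  # cat node_kubeconfig_gen.bash
  #!/bin/bash
  for node in k8snode1 k8snode2 k8snode3; do
      kubectl config set-cluster openeuler-k8s \
          --certificate-authority=/etc/kubernetes/pki/ca.pem \
          --embed-certs=true \
          --server=https://192.168.122.154:6443 \
          --kubeconfig=${node}.kubeconfig
      kubectl config set-credentials system:node:${node} \
          --client-certificate=/etc/kubernetes/pki/${node}.pem \
          --client-key=/etc/kubernetes/pki/${node}-key.pem \
          --embed-certs=true \
          --kubeconfig=${node}.kubeconfig
      kubectl config set-context default \
          --cluster=openeuler-k8s \
          --user=system:node:${node} \
          --kubeconfig=${node}.kubeconfig
      kubectl config use-context default --kubeconfig=${node}.kubeconfig
  done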

7.3 Copying the Certificates

As on the control plane, all certificates, keys, and related configurations are placed in the /etc/kubernetes/pki/ directory.

  # ls /etc/kubernetes/pki/
  ca.pem            k8snode1.kubeconfig   kubelet_config.yaml     kube-proxy-key.pem      kube-proxy.pem
  k8snode1-key.pem  k8snode1.pem          kube_proxy_config.yaml  kube-proxy.kubeconfig

7.4 CNI Network Configuration

containernetworking-plugins is used first as the CNI plugin for kubelet; plugins such as calico or flannel can be introduced later to strengthen the cluster's networking capabilities.

  --- Bridge network configuration
  # cat /etc/cni/net.d/10-bridge.conf
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
      "type": "host-local",
      "subnet": "10.244.0.0/16",
      "gateway": "10.244.0.1"
    },
    "dns": {
      "nameservers": [
        "10.244.0.1"
      ]
    }
  }
  --- Loopback network configuration
  # cat /etc/cni/net.d/99-loopback.conf
  {
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
  }

7.5 Deploying the kubelet Service

7.5.1 Configuration File That kubelet Depends On

  # cat /etc/kubernetes/pki/kubelet_config.yaml
  kind: KubeletConfiguration
  apiVersion: kubelet.config.k8s.io/v1beta1
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.pem
  authorization:
    mode: Webhook
  clusterDNS:
    - 10.32.0.10
  clusterDomain: cluster.local
  runtimeRequestTimeout: "15m"
  tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
  tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"

Note: the clusterDNS address, 10.32.0.10, must lie within the service-cluster-ip-range set earlier.

7.5.2 Writing the systemd Configuration File

  # cat /usr/lib/systemd/system/kubelet.service
  [Unit]
  Description=kubelet: The Kubernetes Node Agent
  Documentation=https://kubernetes.io/docs/
  Wants=network-online.target
  After=network-online.target
  [Service]
  ExecStart=/usr/bin/kubelet \
              --config=/etc/kubernetes/pki/kubelet_config.yaml \
              --network-plugin=cni \
              --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
              --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
              --register-node=true \
              --hostname-override=k8snode1 \
              --cni-bin-dir="/usr/libexec/cni/" \
              --v=2
  Restart=always
  StartLimitInterval=0
  RestartSec=10
  [Install]
  WantedBy=multi-user.target

Note: if isulad is used as the runtime, the following options must be added:

  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/isulad.sock \

7.6 Deploying kube-proxy

7.6.1 Configuration File That kube-proxy Depends On

  # cat /etc/kubernetes/pki/kube_proxy_config.yaml
  kind: KubeProxyConfiguration
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  clientConnection:
    kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
  clusterCIDR: 10.244.0.0/16
  mode: "iptables"

7.6.2 Writing the systemd Configuration File

  # cat /usr/lib/systemd/system/kube-proxy.service
  [Unit]
  Description=Kubernetes Kube-Proxy Server
  Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
  After=network.target
  [Service]
  EnvironmentFile=-/etc/kubernetes/config
  EnvironmentFile=-/etc/kubernetes/proxy
  ExecStart=/usr/bin/kube-proxy \
              $KUBE_LOGTOSTDERR \
              $KUBE_LOG_LEVEL \
              --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
              --hostname-override=k8snode1 \
              $KUBE_PROXY_ARGS
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

7.7 Starting the Component Services

  # systemctl enable kubelet kube-proxy
  # systemctl start kubelet kube-proxy

Deploy the remaining nodes in the same way.

7.8 Verifying the Cluster State

Wait a few minutes, then check the node status with the following command:

  # kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
  NAME       STATUS   ROLES    AGE   VERSION
  k8snode1   Ready    <none>   17h   v1.20.2
  k8snode2   Ready    <none>   19m   v1.20.2
  k8snode3   Ready    <none>   12m   v1.20.2

7.9 Deploying coredns

coredns can be deployed on a node or on a master; here it is deployed on the k8snode1 node.

7.9.1 Writing the coredns Configuration File

  # cat /etc/kubernetes/pki/dns/Corefile
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          endpoint https://192.168.122.154:6443
          tls /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/admin.pem
          kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . /etc/resolv.conf {
          max_concurrent 1000
      }
      cache 30
      loop
      reload
      loadbalance
  }

Notes:

  • Listen on port 53;
  • Configure the kubernetes plugin: certificates and the kube api URL.
7.9.2 Preparing the systemd Service File

  # cat /usr/lib/systemd/system/coredns.service
  [Unit]
  Description=Kubernetes Core DNS server
  Documentation=https://github.com/coredns/coredns
  After=network.target
  [Service]
  ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"
  Restart=on-failure
  LimitNOFILE=65536
  [Install]
  WantedBy=multi-user.target

7.9.3 Starting the Service

  # systemctl enable coredns
  # systemctl start coredns

7.9.4 Creating the coredns Service Object

  # cat coredns_server.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: kube-dns
    namespace: kube-system
    annotations:
      prometheus.io/port: "9153"
      prometheus.io/scrape: "true"
    labels:
      k8s-app: kube-dns
      kubernetes.io/cluster-service: "true"
      kubernetes.io/name: "CoreDNS"
  spec:
    clusterIP: 10.32.0.10
    ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP

7.9.5 Creating the coredns Endpoints Object

  # cat coredns_ep.yaml
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: kube-dns
    namespace: kube-system
  subsets:
    - addresses:
        - ip: 192.168.122.157
      ports:
        - name: dns-tcp
          port: 53
          protocol: TCP
        - name: dns
          port: 53
          protocol: UDP
        - name: metrics
          port: 9153
          protocol: TCP

7.9.6 Confirming the coredns Service

  --- Check the Service object
  # kubectl get service -n kube-system kube-dns
  NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
  kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   51m
  --- Check the Endpoints object
  # kubectl get endpoints -n kube-system kube-dns
  NAME       ENDPOINTS                                                     AGE
  kube-dns   192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153   52m
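
In-cluster name resolution can then be verified with a short-lived test pod (the busybox image is an assumption; any image that provides nslookup works, and the output shown is illustrative):

  # kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default
  Server:    10.32.0.10
  Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
  Name:      kubernetes.default
  Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local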

8. Running a Test Pod

8.1 Configuration File

  # cat nginx.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      app: nginx
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: nginx:1.14.2
            ports:
              - containerPort: 80

8.2 Starting the Pod

Run nginx with the kubectl command:

  # kubectl apply -f nginx.yaml
  deployment.apps/nginx-deployment created
  # kubectl get pods
  NAME                                READY   STATUS    RESTARTS   AGE
  nginx-deployment-66b6c48dd5-6rnwz   1/1     Running   0          33s
  nginx-deployment-66b6c48dd5-9pq49   1/1     Running   0          33s
  nginx-deployment-66b6c48dd5-lvmng   1/1     Running   0          34s
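
As a final check, the deployment can be exposed and reached through a NodePort; the assigned port must be read from the command output:

  # kubectl expose deployment nginx-deployment --type=NodePort --port=80
  service/nginx-deployment exposed
  --- Look up the assigned NodePort (in the 30000-32767 range set earlier)
  # kubectl get service nginx-deployment
  --- Then request the nginx welcome page through any node, for example:
  # curl http://192.168.122.157:<nodeport>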