Question 1

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

Solution:

  • Key points

  • Steps

    • Per the task: Write all those context names into /opt/course/1/contexts

      ➜ kubectl config get-contexts -o name > /opt/course/1/contexts
    • Per the task: Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl

      ➜ vim /opt/course/1/context_default_kubectl.sh
      kubectl config current-context
    • Per the task: Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl

      ➜ vim /opt/course/1/context_default_no_kubectl.sh
      grep "current-context: " ~/.kube/config | awk '{print $2}'

Question 2

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node; do not add new labels to any nodes.

Briefly write the reason why Pods are by default not scheduled on master nodes into /opt/course/2/master_schedule_reason.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
      Switched to context "k8s-c1-H".
    • Quickly generate a Pod YAML template

      ➜ kubectl run pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml > 2.yaml
    • Edit the template per the task

      ➜ vim 2.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: pod1
        name: pod1
      spec:
        containers:
        - image: httpd:2.4.41-alpine
          name: pod1-container                  # changed: the container should be named `pod1-container`
          resources: {}
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        tolerations:                            # added: the Pod should only be scheduled on a master node, so tolerate the master taint
        - effect: NoSchedule                    # added
          key: node-role.kubernetes.io/master   # added
        nodeSelector:                           # added: pin it to the master via nodeSelector; view node labels with kubectl get node --show-labels
          node-role.kubernetes.io/master: ""    # added
      status: {}
    • Create the resource

      kubectl apply -f 2.yaml
    • Per the task: Briefly write the reason why Pods are by default not scheduled on master nodes into /opt/course/2/master_schedule_reason

      ➜ vim /opt/course/2/master_schedule_reason
      master nodes usually have a taint defined
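    • Optional check (not required by the task): confirm the taint on the master node and that pod1 landed there; this assumes the master of this cluster is cluster1-master1

      ➜ kubectl describe node cluster1-master1 | grep Taint
      ➜ kubectl get pod pod1 -o wide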

Question 3

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources. Record the action.

Solution:

  • Key points

    • Scaling Pods
  • Steps

    • Switch context

      kubectl config use-context k8s-c1-H
    • Per the task, note two things

      • Set the replica count to 1
      • Record the action
      ➜ kubectl -n project-c13 scale sts o3db --replicas 1 --record
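    • Optional check: the StatefulSet should now report 1/1 and only one o3db Pod should remain

      ➜ kubectl -n project-c13 get sts o3db
      ➜ kubectl -n project-c13 get pod | grep o3db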

Question 4

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which checks if the url http://service-am-i-ready:80 is reachable; you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Quickly generate a Pod YAML template

      ➜ kubectl run ready-if-service-ready --image=nginx:1.16.1-alpine --dry-run=client -o yaml > 4.yaml
    • Per the task (Create a single Pod named ready-if-service-ready...), edit the template

      ➜ vim 4.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: ready-if-service-ready
        name: ready-if-service-ready
      spec:
        containers:
        - image: nginx:1.16.1-alpine
          name: ready-if-service-ready
          resources: {}
          livenessProbe:                                   # added: the task asks for a probe that simply runs `true`
            exec:                                          # added
              command:                                     # added
              - 'true'                                     # added
          readinessProbe:                                  # added
            exec:                                          # added
              command:                                     # added
              - sh                                         # added
              - -c                                         # added
              - wget -T2 -O- http://service-am-i-ready:80  # added
        dnsPolicy: ClusterFirst
        restartPolicy: Always
      status: {}
    • Per the task: Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready...

      # Quickly generate a Pod YAML template
      ➜ kubectl run am-i-ready --image=nginx:1.16.1-alpine --dry-run=client -o yaml > 4-2.yaml
      # Edit the template
      ➜ vim 4-2.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          id: cross-server-ready       # changed: with label id: cross-server-ready
        name: am-i-ready
      spec:
        containers:
        - image: nginx:1.16.1-alpine
          name: am-i-ready
          resources: {}
        dnsPolicy: ClusterFirst
        restartPolicy: Always
      status: {}
    • Create both resources

      ➜ kubectl apply -f 4.yaml -f 4-2.yaml
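    • Confirm the behaviour the task describes: ready-if-service-ready should first report 0/1 because the ReadinessProbe fails, and once am-i-ready is listed as an endpoint of service-am-i-ready it should turn 1/1

      ➜ kubectl get pod ready-if-service-ready
      ➜ kubectl get ep service-am-i-ready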

Question 5

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).

Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

Solution:

  • Key points

  • Steps

    • List all Pods sorted by creation time

      ➜ vim /opt/course/5/find_pods.sh
      kubectl get pod --all-namespaces --sort-by=metadata.creationTimestamp
    • List all Pods sorted by metadata.uid

      ➜ vim /opt/course/5/find_pods_uid.sh
      kubectl get pod --all-namespaces --sort-by=metadata.uid

Question 6

Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • The PV and PVC YAML has to be written by hand; copy an example from the official docs and adapt it

      ➜ vim 6.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: safari-pv
      spec:
        capacity:
          storage: 2Gi
        accessModes:
        - ReadWriteOnce
        hostPath:
          path: "/Volumes/Data"
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: safari-pvc
        namespace: project-tiger
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        volumeName: safari-pv
      ---
    • Quickly generate a Deployment YAML template and append it to 6.yaml

      ➜ kubectl -n project-tiger create deployment safari --image=httpd:2.4.41-alpine --dry-run=client -o yaml >> 6.yaml
    • Edit the template per the task

      ➜ vim 6.yaml
      ...
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: safari
        namespace: project-tiger
        labels:
          app: safari
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: safari
        template:
          metadata:
            labels:
              app: safari
          spec:
            containers:
            - name: httpd
              image: httpd:2.4.41-alpine
              volumeMounts:                    # added
              - name: data                     # added
                mountPath: /tmp/safari-data    # added
            volumes:                           # added
            - name: data                       # added
              persistentVolumeClaim:           # added
                claimName: safari-pvc          # added
    • Create the resources

      ➜ kubectl apply -f 6.yaml
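    • Optional check: the PVC should show STATUS Bound and the Deployment should become available

      ➜ kubectl get pv safari-pv
      ➜ kubectl -n project-tiger get pvc safari-pvc
      ➜ kubectl -n project-tiger get deploy safari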

Question 7

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

The metrics-server hasn't been installed yet in the cluster, but it's something that should be done soon. Your colleague would already like to know the kubectl commands to:

  1. show node resource usage
  2. show Pods and their containers' resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

Solution:

  • Key points

    • metrics-server
  • Steps

    ➜ vim /opt/course/7/node.sh
    kubectl top node
    
    ➜ vim /opt/course/7/pod.sh
    kubectl top pod --containers=true

Question 8

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it's started/installed on the master node.

Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:

# /opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]

Choices of [TYPE] are: not-installed, process, static-pod, pod

Solution:

  • Key points

    • Control plane components and how they are deployed
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • kubectl -n kube-system get pod shows that kube-apiserver, kube-scheduler, kube-controller-manager and etcd run as static Pods (their names carry the node-name suffix and their manifests live in /etc/kubernetes/manifests). The kubelet itself runs as a systemd process, and DNS is coredns, managed by a Deployment, hence type pod

      ➜ vim /opt/course/8/master-components.txt
      kubelet: process
      kube-apiserver: static-pod
      kube-scheduler: static-pod
      kube-controller-manager: static-pod
      etcd: static-pod
      dns: pod coredns

Question 9

Use context: kubectl config use-context k8s-c2-AC

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, meaning you must be able to start it again afterwards.

Create a single Pod named manual-schedule of image httpd:2.4-alpine and confirm it's started but not scheduled on any node.

Now you're the scheduler and have all its power: manually schedule that Pod on node cluster2-master1. Make sure it's running.

Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and checking that it's running on cluster2-worker1.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c2-AC
    • Stop the kube-scheduler by moving its static Pod manifest out of the manifests directory

      ➜ ssh cluster2-master1
      ➜ root@cluster2-master1:~# cd /etc/kubernetes/manifests/
      ➜ root@cluster2-master1:~# mv kube-scheduler.yaml ..
    • Create the Pod: Create a single Pod named manual-schedule of image httpd:2.4-alpine

      ➜ kubectl run manual-schedule --image=httpd:2.4-alpine
    • Check the status

      ➜ kubectl get pod manual-schedule -o wide
      NAME              READY   STATUS    ...   NODE     NOMINATED NODE
      manual-schedule   0/1     Pending   ...   <none>   <none>   
    • Schedule it manually: spec.nodeName can't be changed on a running Pod, so dump the manifest, add nodeName and force-replace the Pod

      ➜ kubectl get pod manual-schedule -o yaml > 9.yaml
      ➜ vim 9.yaml
      apiVersion: v1
      kind: Pod
      metadata:
      ...
      spec:
        nodeName: cluster2-master1        # added
        containers:
        - image: httpd:2.4-alpine
          imagePullPolicy: IfNotPresent
          name: manual-schedule
      ...
      ➜ kubectl replace --force -f 9.yaml
    • Check the status again

      ➜ kubectl get pod manual-schedule -o wide
      NAME              READY   STATUS    ...   NODE            
      manual-schedule   1/1     Running   ...   cluster2-master1
    • Start the kube-scheduler again by moving its manifest back

      ➜ ssh cluster2-master1
      ➜ root@cluster2-master1:~# cd /etc/kubernetes/
      ➜ root@cluster2-master1:~# mv kube-scheduler.yaml manifests/
    • Create the second Pod:

      ➜ kubectl run manual-schedule2 --image=httpd:2.4-alpine
    • Check the status

      ➜ kubectl get pod -o wide | grep schedule
      manual-schedule    1/1     Running   ...   cluster2-master1
      manual-schedule2   1/1     Running   ...   cluster2-worker1

Question 10

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Create the ServiceAccount

      ➜ kubectl -n project-hamster create sa processor
    • Create the Role

      ➜ kubectl -n project-hamster create role processor --verb=create --resource=secrets,configmaps
    • Create the RoleBinding

      ➜ kubectl -n project-hamster create rolebinding processor --role=processor --serviceaccount=project-hamster:processor
    • Verify

      ➜ kubectl -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
      yes
      
      ➜ kubectl -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
      yes
      
      ➜ kubectl -n project-hamster auth can-i create pod --as system:serviceaccount:project-hamster:processor
      no
      
      ➜ kubectl -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor
      no
      
      ➜ kubectl -n project-hamster auth can-i get configmap --as system:serviceaccount:project-hamster:processor
      no

Question 11

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 megabytes memory. The Pods of that DaemonSet should run on all nodes, master and worker.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Take a DaemonSet example from the official docs and adapt it

      ➜ vim 11.yaml
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: ds-important
        namespace: project-tiger
        labels:
          k8s-app: ds-important
      spec:
        selector:
          matchLabels:
            id: ds-important
            uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
        template:
          metadata:
            labels:
              id: ds-important
              uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
          spec:
            tolerations:
            # this toleration is to have the daemonset runnable on master nodes
            # remove it if your masters can't run pods
            - key: node-role.kubernetes.io/master
              operator: Exists
              effect: NoSchedule
            containers:
            - name: httpd
              image: httpd:2.4-alpine
              resources:
                requests:
                  cpu: 10m
                  memory: 10Mi
    • Create the resources

      ➜ kubectl apply -f 11.yaml
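    • Optional check: one DaemonSet Pod should be running on every node, masters included

      ➜ kubectl -n project-tiger get ds ds-important
      ➜ kubectl -n project-tiger get pod -l id=ds-important -o wide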

Question 12

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.

There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-worker1 and cluster1-worker2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added.

In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Quickly generate a Deployment YAML template

      ➜ kubectl -n project-tiger create deployment --image=nginx:1.17.6-alpine deploy-important --dry-run=client -o yaml > 12.yaml
    • Edit the template per the task

      ➜ vim 12.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deploy-important
        namespace: project-tiger
        labels:
          id: very-important                            # changed
      spec:
        replicas: 3                                     # changed
        selector:
          matchLabels:
            id: very-important                          # changed
        template:
          metadata:
            labels:
              id: very-important                        # changed
          spec:
            containers:
            - name: container1                          # changed
              image: nginx:1.17.6-alpine
            - name: container2                          # added
              image: kubernetes/pause                   # added
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:    # added
                - topologyKey: kubernetes.io/hostname              # added
                  labelSelector:                                   # added
                    #matchExpressions:                             # added
                    #- key: id                                     # added
                    #  operator: In                                # added
                    #  values:                                     # added
                    #  - very-important                            # added
                    matchLabels:                                   # added
                      id: very-important                           # added
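    • Apply the manifest and check the result: with two worker nodes and the podAntiAffinity above, two Pods should be Running (one per worker) and the third should stay Pending

      ➜ kubectl apply -f 12.yaml
      ➜ kubectl -n project-tiger get pod -l id=very-important -o wide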

Question 13

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.

Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.

Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.

Container c3 should be of image busybox:1.31.1 and constantly write the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.

Check the logs of container c3 to confirm correct setup.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Quickly generate a Pod YAML template

      ➜ kubectl run multi-container-playground --image=nginx:1.17.6-alpine --dry-run=client -o yaml > 13.yaml
    • Edit the template per the task

      ➜ vim 13.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: multi-container-playground
        name: multi-container-playground
      spec:
        containers:
        - image: nginx:1.17.6-alpine
          name: c1                       # changed
          volumeMounts:                  # added
          - name: data                   # added
            mountPath: /data             # added
          env:                           # added
          - name: MY_NODE_NAME           # added
            valueFrom:                   # added
              fieldRef:                  # added
                fieldPath: spec.nodeName # added
        - image: busybox:1.31.1          # added
          name: c2                       # added
          command:                       # added
          - sh                           # added
          - -c                           # added
          - while true; do date >> /data/date.log; sleep 1; done  # added
          volumeMounts:                  # added
          - name: data                   # added
            mountPath: /data             # added
        - image: busybox:1.31.1          # added
          name: c3                       # added
          command:                       # added
          - sh                           # added
          - -c                           # added
          - tail -f /data/date.log       # added
          volumeMounts:                  # added
          - name: data                   # added
            mountPath: /data             # added
        volumes:                         # added
        - name: data                     # added
          emptyDir: {}                   # added
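    • Create the Pod and, as the task asks, confirm the setup via the logs of c3 (it should print a growing list of date lines)

      ➜ kubectl apply -f 13.yaml
      ➜ kubectl logs multi-container-playground -c c3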

Question 14

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

You're asked to find out the following information about the cluster k8s-c1-H:

  1. How many master nodes are available?
  2. How many worker nodes are available?
  3. What is the Service CIDR?
  4. Which Networking (or CNI Plugin) is configured and where is its config file?
  5. Which suffix will static pods have that run on cluster1-worker1?

Write your answers into file /opt/course/14/cluster-info, structured like this:

# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]

Solution:

  • Key points

    • Cluster information
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Listing the nodes answers 1 and 2

      ➜ kubectl get node
    • Checking the config files on the master answers 3 and 4

      ➜ ssh cluster1-master1
      ➜ root@cluster1-master1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
          - --service-cluster-ip-range=10.96.0.0/12
      ➜ root@cluster1-master1:~# ls /etc/cni/net.d/
      10-weave.conflist
    • Resulting answers (for 5: static Pods get the node name appended as a suffix, hence -cluster1-worker1)

      1: 1
      2: 2
      3: 10.96.0.0/12
      4: Weave, /etc/cni/net.d/10-weave.conflist
      5: -cluster1-worker1

Question 15

Task weight: 3%

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time. Use kubectl for it.

Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the containerd container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?

Solution:

  • Key points

    • Events
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c2-AC
    • Write a command that lists the latest cluster events sorted by time into the script

      ➜ vim /opt/course/15/cluster_events.sh
      kubectl get events -A --sort-by=.metadata.creationTimestamp
    • Delete the kube-proxy Pod running on cluster2-worker1

      ➜ kubectl -n kube-system get pod -o wide | grep kube-proxy
      ➜ kubectl -n kube-system delete pod kube-proxy-xxxx
    • Collect the events

      ➜ sh /opt/course/15/cluster_events.sh
      # write the latest events into /opt/course/15/pod_kill.log
    • Kill the containerd container of kube-proxy on cluster2-worker1

      ➜ ssh cluster2-worker1
      ➜ root@cluster2-worker1:~# crictl ps
      ➜ root@cluster2-worker1:~# crictl rm CONTAINER_ID
    • Collect the events again

      ➜ sh /opt/course/15/cluster_events.sh
      # write the latest events into /opt/course/15/container_kill.log

Question 16

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Create a new Namespace called cka-master.

Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.

Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.

Solution:

  • Key points

    • kubectl
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Create a new Namespace called cka-master

      ➜ kubectl create ns cka-master
    • Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt

      ➜ kubectl api-resources --namespaced=true -o name > /opt/course/16/resources.txt
    • Find the project-* Namespace with the highest number of Roles defined in it...

      ➜ kubectl get role -A | grep ^project- | awk '{print $1}' | sort | uniq -c
      
      # write the Namespace with the highest count and its number of Roles into /opt/course/16/crowded-namespace.txt

Question 17

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.

Using command crictl:

  1. Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
  2. Write the logs of the container into /opt/course/17/pod-container.log

Solution:

  • Key points

    • containerd, crictl
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Create the Pod

      ➜ kubectl -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels=pod=container,container=pod
    • Find the container ID and its info.runtimeType

      # first find out which node the Pod was scheduled to
      ➜ kubectl -n project-tiger get pod -o wide
      
      ➜ ssh cluster1-worker2 crictl ps | grep tigers-reunite
      b01edbe6f89ed    54b0995a63052    5 seconds ago    Running        tigers-reunite ...
      
      ➜ ssh cluster1-worker2 crictl inspect b01edbe6f89ed | grep runtimeType
          "runtimeType": "io.containerd.runc.v2",
      
      # write the container ID and runtimeType into /opt/course/17/pod-container.txt
      b01edbe6f89ed io.containerd.runc.v2
    • Get the container logs (the task asks to use crictl; kubectl logs would show the same content)

      ➜ ssh cluster1-worker2 'crictl logs b01edbe6f89ed' &> /opt/course/17/pod-container.log

Question 18

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-worker1. Fix it and confirm that cluster3 has node cluster3-worker1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-worker1 afterwards.

Write the reason of the issue into /opt/course/18/reason.txt.

Solution:

  • Key points

    • Troubleshooting
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c3-CCC
    • Check the kubelet on cluster3-worker1

      ➜ ssh cluster3-worker1
      
      ➜ root@cluster3-worker1:~# ps aux | grep kubelet
      root     29294  0.0  0.2  14856  1016 pts/0    S+   11:30   0:00 grep --color=auto kubelet
      
      ➜ root@cluster3-worker1:~# systemctl status kubelet
      ● kubelet.service - kubelet: The Kubernetes Node Agent
         Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/kubelet.service.d
                 └─10-kubeadm.conf
         Active: inactive (dead) since Sun 2019-12-08 11:30:06 UTC; 50min 52s ago
         
      ➜ root@cluster3-worker1:~# systemctl start kubelet
      ➜ root@cluster3-worker1:~# systemctl status kubelet
      ➜ root@cluster3-worker1:~# journalctl -xe
      
      # `systemctl status kubelet` and journalctl -xe reveal the error: /usr/local/bin/kubelet: No such file or directory
      # check the unit config: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      # change /usr/local/bin/kubelet to /usr/bin/kubelet, then run systemctl daemon-reload
      
      ➜ root@cluster3-worker1:~# systemctl start kubelet
    • Reason for the issue

      ➜ vim /opt/course/18/reason.txt
      wrong path to kubelet binary specified in service config
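    • Confirm the node becomes Ready again, as the task requires

      ➜ kubectl get node cluster3-worker1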

Question 19

Task weight: 3%

this task can only be solved if questions 18 or 20 have been successfully implemented and the k8s-c3-CCC cluster has a functioning worker node

Use context: kubectl config use-context k8s-c3-CCC

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time, it should be able to run on master nodes as well.

There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the secret Namespace and mount it readonly into the Pod at /tmp/secret1.

Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.

Confirm everything is working.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c3-CCC
    • Create the Namespace secret

      ➜ kubectl create ns secret
    • Quickly generate a Pod YAML template

      ➜ kubectl -n secret run secret-pod --image=busybox:1.31.1 --dry-run=client -o yaml > 19.yaml
    • Edit the template

      ➜ vim 19.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: secret-pod
        name: secret-pod
        namespace: secret
      spec:
        containers:
        - image: busybox:1.31.1
          name: secret-pod
          resources: {}
          command:          # added
          - sh              # added
          - -c              # added
          - sleep 1d        # added
          volumeMounts:               # added: mount it readonly into the Pod at /tmp/secret1
          - name: secret1             # added
            mountPath: /tmp/secret1   # added
            readOnly: true            # added
          env:                        # added: environment variables APP_USER and APP_PASS
          - name: APP_USER            # added
            valueFrom:                # added
              secretKeyRef:           # added
                name: secret2         # added
                key: user             # added (fixed: APP_USER maps to the user key)
          - name: APP_PASS            # added
            valueFrom:                # added
              secretKeyRef:           # added
                name: secret2         # added
                key: pass             # added
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        tolerations:                              # added: it should be able to run on master nodes as well
        - key: "node-role.kubernetes.io/master"   # added
          operator: "Exists"                      # added
          effect: "NoSchedule"                    # added
        volumes:                            # added
        - name: secret1                     # added
          secret:                           # added
            secretName: secret1             # added
      status: {}
    • Create secret1 in the secret Namespace

      ➜ cp /opt/course/19/secret1.yaml 19_secret1.yaml
      ➜ vim 19_secret1.yaml
      apiVersion: v1
      data:
        halt: IyEgL2Jpbi9zaAo...
      kind: Secret
      metadata:
        creationTimestamp: null
        name: secret1
        namespace: secret           # changed
      
      ➜ kubectl apply -f 19_secret1.yaml
    • Create secret2

      ➜ kubectl -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
    • Create the Pod

      kubectl apply -f 19.yaml
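    • Optional check: confirm the Secret mount and the environment variables inside the container

      ➜ kubectl -n secret exec secret-pod -- ls /tmp/secret1
      ➜ kubectl -n secret exec secret-pod -- env | grep APP_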

Question 20

Task weight: 10%

Use context: kubectl config use-context k8s-c3-CCC

Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update kubectl and kubeadm to the version that's running on cluster3-master1. Then add this node to the cluster, you can use kubeadm for this.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c3-CCC
    • Check the version running in the cluster

      ➜ kubectl get node
      NAME               STATUS     ROLES                  AGE    VERSION
      cluster3-master1   Ready      control-plane,master   116m   v1.22.1
      cluster3-worker1   NotReady   <none>                 112m   v1.22.1
    • Check the kubelet, kubectl and kubeadm versions on cluster3-worker2

      ➜ ssh cluster3-worker2
      
      ➜ root@cluster3-worker2:~# kubeadm version
      kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:44:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
      
      ➜ root@cluster3-worker2:~# kubectl version
      Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
      The connection to the server localhost:8080 was refused - did you specify the right host or port?
      
      ➜ root@cluster3-worker2:~# kubelet --version
      Kubernetes v1.21.4
    • Try kubeadm upgrade first

      ➜ root@cluster3-worker2:~# kubeadm upgrade node
      couldn't create a Kubernetes client from file "/etc/kubernetes/kubelet.conf": failed to load admin kubeconfig: open /etc/kubernetes/kubelet.conf: no such file or directory
      To see the stack trace of this error execute with --v=5 or higher
      
      # this error means the node was never initialised, i.e. it never joined the cluster
      # in that case, upgrade kubectl, kubeadm and kubelet to the target version and then join with kubeadm join
    • Upgrade kubectl, kubeadm and kubelet

      ➜ root@cluster3-worker2:~# apt-get update
      
      ➜ root@cluster3-worker2:~# apt-cache show kubelet | grep 1.22
      Version: 1.22.1-00
      Filename: pool/kubectl_1.22.1-00_amd64_2a00cd912bfa610fe4932bc0a557b2dd7b95b2c8bff9d001dc6b3d34323edf7d.deb
      Version: 1.22.0-00
      Filename: pool/kubectl_1.22.0-00_amd64_052395d9ddf0364665cf7533aa66f96b310ec8a2b796d21c42f386684ad1fc56.deb
      Filename: pool/kubectl_1.17.1-00_amd64_0dc19318c9114db2931552bb8bf650a14227a9603cb73fe0917ac7868ec7fcf0.deb
      SHA256: 0dc19318c9114db2931552bb8bf650a14227a9603cb73fe0917ac7868ec7fcf0
      ...
      
      ➜ root@cluster3-worker2:~# apt install kubectl=1.22.1-00 kubeadm=1.22.1-00 kubelet=1.22.1-00
    • Get the kubeadm join command from the master

      ➜ ssh cluster3-master1
      
      ➜ root@cluster3-master1:~# kubeadm token create --print-join-command
      kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a
    • Join cluster3-worker2 to the cluster

      ➜ ssh cluster3-worker2
      
      ➜ root@cluster3-worker2:~# kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a

Question 21

Task weight: 2%

Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-master1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-master1 internal IP address. You can connect to the internal node IPs from your main terminal.

Solution:

  • Key points

    • Static Pods
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c3-CCC
    • Create the static Pod (place its manifest in /etc/kubernetes/manifests on cluster3-master1)

      ➜ ssh cluster3-master1
      
      ➜ root@cluster3-master1:~# cd /etc/kubernetes/manifests/
      
      ➜ root@cluster3-master1:/etc/kubernetes/manifests# kubectl run my-static-pod --image=nginx:1.16-alpine --requests "cpu=10m,memory=20Mi" --dry-run=client -o yaml > my-static-pod.yaml
    • Create the Service

      ➜ kubectl expose pod my-static-pod-cluster3-master1 --name static-pod-service --type=NodePort --port 80
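    • Optional check from the main terminal: the Service should have an Endpoint, and the static Pod should answer on the node's internal IP via the NodePort. NODE_IP and NODE_PORT below are placeholders read from the two commands above them.

      ➜ kubectl get svc,ep static-pod-service
      ➜ kubectl get node cluster3-master1 -o wide    # read the INTERNAL-IP column
      ➜ curl NODE_IP:NODE_PORT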

Question 22

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-master1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.

Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.

Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c2-AC
    • Check the certificate validity with openssl

      ➜ ssh cluster2-master1
      
      ➜ root@cluster2-master1:~# openssl x509  -noout -text -in /etc/kubernetes/pki/apiserver.crt
      ...
              Validity
                  Not Before: Jan 14 18:18:15 2021 GMT
                  Not After : Jan 14 18:49:40 2022 GMT
      ...
      
      # write the result into /opt/course/22/expiration
      Jan 14 18:49:40 2022 GMT
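    • The task also asks for the kubeadm command that lists certificate expiration dates; its apiserver line should match the openssl output above (on older kubeadm releases this sits under kubeadm alpha certs)

      ➜ root@cluster2-master1:~# kubeadm certs check-expiration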
    • Command that would renew the apiserver certificate

      ➜ vim /opt/course/22/kubeadm-renew-certs.sh
      kubeadm certs renew apiserver

Question 23

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Node cluster2-worker1 has been added to the cluster using kubeadm and TLS bootstrapping.

Find the "Issuer" and "Extended Key Usage" values of the cluster2-worker1:

  1. kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
  2. kubelet server certificate, the one used for incoming connections from the kube-apiserver.

Write the information into file /opt/course/23/certificate-info.txt.

Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c2-AC
    • Inspect the certificates

      ➜ ssh cluster2-worker1
      
      ➜ root@cluster2-worker1:~# openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
              Issuer: CN = kubernetes
              
      ➜ root@cluster2-worker1:~# openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
                  X509v3 Extended Key Usage: 
                      TLS Web Client Authentication
      
      ➜ root@cluster2-worker1:~# openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer
                Issuer: CN = cluster2-worker1-ca@1588186506
      
      ➜ root@cluster2-worker1:~# openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
                  X509v3 Extended Key Usage: 
                      TLS Web Server Authentication
                      
      # write the certificate information into /opt/course/23/certificate-info.txt

Question 24

Task weight: 9%

Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

  • connect to db1-* Pods on port 1111
  • connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Take an example manifest from the official docs and adapt it

      ➜ vim 24.yaml
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: np-backend
        namespace: project-snake
      spec:
        podSelector:
          matchLabels:
            app: backend
        policyTypes:
        - Egress
        egress:
        - to:
          - podSelector:
              matchLabels:
                app: db1
          ports:
          - protocol: TCP
            port: 1111
        - to:
          - podSelector:
              matchLabels:
                app: db2
          ports:
          - protocol: TCP
            port: 2222
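    • Apply the policy and, if you want to test it, exec from a backend Pod (the Pod name and target IPs below are placeholders; look them up first, and this assumes curl is available in the backend image)

      ➜ kubectl apply -f 24.yaml
      ➜ kubectl -n project-snake get pod -o wide
      ➜ kubectl -n project-snake exec backend-xxx -- curl -m 2 DB1_POD_IP:1111    # should succeed
      ➜ kubectl -n project-snake exec backend-xxx -- curl -m 2 VAULT_POD_IP:3333  # should time out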

Question 25

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-master1 and save it on the master node at /tmp/etcd-backup.db.

Then create a Pod of your kind in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c3-CCC
    • Back up etcd

      ➜ ssh cluster3-master1
      
      ➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key
    • Create a Pod

      ➜ kubectl run test --image=nginx
    • Restore etcd

      root@cluster3-master1:~# cd /etc/kubernetes/manifests/
      
      root@cluster3-master1:/etc/kubernetes/manifests# mv * ..
      
      # wait until the control-plane static Pods have stopped
      
      ➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
      --data-dir /var/lib/etcd-backup \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key
      
      ➜ root@cluster3-master1:~# vim /etc/kubernetes/etcd.yaml
      ...
        - hostPath:
            path: /var/lib/etcd-backup                # changed
            type: DirectoryOrCreate
      ...
      
      ➜ root@cluster3-master1:/etc/kubernetes/manifests# mv ../*.yaml .
      
      # wait until the control-plane static Pods are running again
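    • Confirm the cluster is working again and that the Pod created before the restore is gone

      ➜ kubectl -n kube-system get pod
      ➜ kubectl get pod test          # should return NotFound once the restore is complete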

Extra Question 1

Use context: kubectl config use-context k8s-c1-H

Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the Nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Check the QoS class of each Pod

      ➜ kubectl get pods -n project-c13 -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}{end}"
      c13-2x3-api-86784557bd-cgs8g Burstable
      c13-2x3-api-86784557bd-lnxvj Burstable
      c13-2x3-api-86784557bd-mnp77 Burstable
      c13-2x3-web-769c989898-6hbgt Burstable
      c13-2x3-web-769c989898-g57nq Burstable
      c13-2x3-web-769c989898-hfd5v Burstable
      c13-2x3-web-769c989898-jfx64 Burstable
      c13-2x3-web-769c989898-r89mg Burstable
      c13-2x3-web-769c989898-wtgxl Burstable
      c13-3cc-runner-98c8b5469-dzqhr Burstable
      c13-3cc-runner-98c8b5469-hbtdv Burstable
      c13-3cc-runner-98c8b5469-n9lsw Burstable
      c13-3cc-runner-heavy-65588d7d6-djtv9 BestEffort
      c13-3cc-runner-heavy-65588d7d6-v8kf5 BestEffort
      c13-3cc-runner-heavy-65588d7d6-wwpb4 BestEffort
      c13-3cc-web-675456bcd-glpq6 Burstable
      c13-3cc-web-675456bcd-knlpx Burstable
      c13-3cc-web-675456bcd-nfhp9 Burstable
      c13-3cc-web-675456bcd-twn7m Burstable
      o3db-0 BestEffort
      o3db-1 BestEffort
      
      # BestEffort is the lowest QoS class, so these Pods get evicted/killed first
    • Write the answer

      ➜ vim /opt/course/e1/pods-not-stable.txt
      c13-3cc-runner-heavy-65588d7d6-djtv9
      c13-3cc-runner-heavy-65588d7d6-v8kf5
      c13-3cc-runner-heavy-65588d7d6-wwpb4
      o3db-0
      o3db-1

Extra Question 2

Use context: kubectl config use-context k8s-c1-H

There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.

Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Pod manifest using the existing ServiceAccount

      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: tmp-api-contact
        name: tmp-api-contact
        namespace: project-hamster
      spec:
        serviceAccountName: secret-reader
        containers:
        - command:
          - sh
          - -c
          - sleep 1d
          image: curlimages/curl:7.65.3
          name: tmp-api-contact
          resources: {}
        dnsPolicy: ClusterFirst
        restartPolicy: Always
      status: {}
    • Commands for /opt/course/e4/list-secrets.sh

      ➜ vim /opt/course/e4/list-secrets.sh
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
      
      # or alternatively
      ➜ vim /opt/course/e4/list-secrets.sh
      TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      curl --cacert ${CACERT} https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
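    • The Pod manifest above still needs to be created; assuming it was saved as e2.yaml (a file name chosen here), apply it and run the script's commands inside the container to confirm they work

      ➜ kubectl apply -f e2.yaml
      ➜ kubectl -n project-hamster exec tmp-api-contact -- sh -c 'TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"'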

Preview Question 1

Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-master1:

  • Server private key location
  • Server certificate expiration date
  • Is client certificate authentication enabled

Write this information into /opt/course/p1/etcd-info.txt

Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-master1 and display its status.

Solution:

  • Key points

  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c2-AC
    • Certificate information

      # /etc/kubernetes/manifests/etcd.yaml shows the certificate/key paths and whether --client-cert-auth is enabled
      # openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt shows the certificate validity
      
      ➜ vim /opt/course/p1/etcd-info.txt
      Server private key location: /etc/kubernetes/pki/etcd/server.key
      Server certificate expiration date: Sep 13 13:01:31 2022 GMT
      Is client certificate authentication enabled: yes
    • Snapshot etcd and display its status

      ➜ root@cluster2-master1:~# ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key
      
      # display the snapshot status
      ➜ root@cluster2-master1:~# ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db
      4d4e953, 7213, 1291, 2.7 MB

Preview Question 2

Use context: kubectl config use-context k8s-c1-H

You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-master1, cluster1-worker1 and cluster1-worker2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.

Finally delete the Service and confirm that the iptables rules are gone from all nodes.

Solution:

  • Key points

    • iptables
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c1-H
    • Manifest for the Pod and the Service (save it to a file and create it with kubectl apply -f)

      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: p2-pod
        name: p2-pod
        namespace: project-hamster
      spec:
        containers:
        - image: nginx:1.21.3-alpine
          name: c1
          resources: {}
        - image: busybox:1.31
          name: c2
          resources: {}
          command: ["sh", "-c", "sleep 1d"]
        dnsPolicy: ClusterFirst
        restartPolicy: Always
      status: {}
      --- 
      apiVersion: v1
      kind: Service
      metadata:
        creationTimestamp: null
        labels:
          app: p2-service
        name: p2-service
        namespace: project-hamster
      spec:
        ports:
        - name: 3000-80
          port: 3000
          protocol: TCP
          targetPort: 80
        selector:
          run: p2-pod
        type: ClusterIP
      status:
        loadBalancer: {}
    • Collect the iptables rules from all nodes

      ➜ ssh cluster1-master1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
      ➜ ssh cluster1-worker1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
      ➜ ssh cluster1-worker2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
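    • The task also asks to verify with crictl that kube-proxy uses iptables (its startup log normally mentions the proxier mode) and, finally, to delete the Service and confirm the rules are gone; CONTAINER_ID is a placeholder taken from the crictl ps output

      ➜ ssh cluster1-worker1
      ➜ root@cluster1-worker1:~# crictl ps | grep kube-proxy
      ➜ root@cluster1-worker1:~# crictl logs CONTAINER_ID     # look for the iptables proxier message
      
      ➜ kubectl -n project-hamster delete svc p2-service
      ➜ ssh cluster1-master1 iptables-save | grep p2-service  # should now return nothing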

Preview Question 3

Use context: kubectl config use-context k8s-c2-AC

Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.

Change the Service CIDR to 11.96.0.0/12 for the cluster.

Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.

Solution:

  • Key points

    • Service CIDR
  • Steps

    • Switch context

      ➜ kubectl config use-context k8s-c2-AC
    • Manifest for the Pod and the first Service (save it to a file and create it with kubectl apply -f)

      apiVersion: v1
      kind: Pod
      metadata:
        creationTimestamp: null
        labels:
          run: check-ip
        name: check-ip
      spec:
        containers:
        - image: httpd:2.4.41-alpine
          name: check-ip
          resources: {}
        dnsPolicy: ClusterFirst
        restartPolicy: Always
      status: {}
      ---
      apiVersion: v1
      kind: Service
      metadata:
        creationTimestamp: null
        labels:
          app: check-ip-service
        name: check-ip-service
      spec:
        ports:
        - name: 80-80
          port: 80
          protocol: TCP
          targetPort: 80
        selector:
          run: check-ip
        type: ClusterIP
      status:
        loadBalancer: {}
    • Change the Service CIDR: update --service-cluster-ip-range in both the kube-apiserver and the kube-controller-manager manifests, then wait for the static Pods to restart

      ➜ root@cluster2-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
          - --service-cluster-ip-range=11.96.0.0/12         # changed
      
      ➜ root@cluster2-master1:~# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
          - --service-cluster-ip-range=11.96.0.0/12         # changed
    • Create the second Service

      ➜ kubectl expose pod check-ip --name check-ip-service2 --port 80
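    • Optional check: once the kube-apiserver has restarted with the new range, the second Service should receive an 11.96.x.x ClusterIP while the first one keeps its original IP

      ➜ kubectl get svc check-ip-service check-ip-service2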

Tags: CKA
