Kubernetes GC Mechanism Explained: Summary

Following the previous post's GC source-code analysis, this post summarizes, from an event-driven perspective, what happens when a Kubernetes resource is deleted under each of the deletion policies (orphan, foreground, background).

Throughout, deployA, rsA, and podA are used as the running example. (The same reasoning applies to any resources with this kind of owner/dependent relationship.)

1. Orphan mode

Deleting deployA in orphan mode: deployA is deleted, rsA is not, but deployA is removed from rsA's OwnerReferences.

The detailed flow is as follows:

(1) The client runs kubectl delete deploy deployA --cascade=false (in newer kubectl versions this is spelled --cascade=orphan).

(2) The apiserver receives the request and sees that the deletion policy is Orphan. At this point the apiserver does two things:

  • set deployA's deletionTimestamp
  • add the orphan finalizer to deployA

The apiserver then returns immediately; it does not block waiting for the rest of the deletion to happen.
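For reference, the two finalizer names involved here (orphan for this mode, and foregroundDeletion for the foreground mode discussed later) are, as far as I can tell from the upstream source, defined as constants in k8s.io/apimachinery/pkg/apis/meta/v1:

// Finalizers the apiserver sets depending on the requested propagation policy
// (reproduced for reference from k8s.io/apimachinery/pkg/apis/meta/v1).
const (
	FinalizerOrphanDependents = "orphan"             // set for Orphan deletes
	FinalizerDeleteDependents = "foregroundDeletion" // set for Foreground deletes
)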

(3) Because the apiserver updated deployA, the GC receives an update event for deployA and starts processing it:

  • First, maintain the uidToNode graph: delete the deployA node and remove deployA from the rsA node's owners.
  • Second, remove deployA from the rsA object's OwnerReferences;
  • Third, remove the orphan finalizer from the deployA object.

(4) Removing the orphan finalizer from deployA is itself an update. When the apiserver handles this update and sees that all of deployA's finalizers are gone, it performs the real deletion of deployA.
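The same orphan delete can also be issued programmatically. Below is a minimal client-go sketch of the request (the names deployA, the default namespace and the kubeconfig path are assumptions for illustration; it is written against a recent client-go, while older clients such as the v1.17 one used in these experiments take a *metav1.DeleteOptions without a context):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (the path is an assumption for illustration).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Orphan propagation: the apiserver sets deletionTimestamp and adds the
	// "orphan" finalizer; the GC later strips deployA from rsA's ownerReferences,
	// the finalizer is removed, and only then is deployA really deleted.
	policy := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(context.TODO(), "deployA",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("orphan delete issued for deployA")
}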

2. Background mode

Deleting deployA in background mode: deployA is deleted immediately, then rsA is deleted, and finally the Pods.

The detailed flow is as follows:

(1) The client issues a delete for deployA with the propagation policy set to Background, i.e. the request's DeleteOptions contain "propagationPolicy": "Background" (a client-go sketch of this call appears at the end of this subsection).

(2) The apiserver receives the request and sees that the deletion policy is Background. In this case the apiserver deletes deployA right away.

(3) Because the apiserver deleted deployA, the GC receives a delete event for deployA and starts processing it:

  • First, maintain the uidToNode graph: delete the deployA node and put rsA into the attemptToDelete queue.
  • Second, when processing rsA, the GC finds that rsA's owner no longer exists, so it immediately deletes rsA, again with the Background policy.
  • Third, the same steps repeat: rsA is deleted first, and then the Pods.
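A minimal sketch of the client-side call for this mode, assuming the same clientset setup (and imports) as in the orphan example above:

// Variant of the earlier sketch: only the propagation policy changes.
// Assumes cs is a kubernetes.Interface built as in the orphan example,
// plus imports of "context" and metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
func deleteDeployBackground(ctx context.Context, cs kubernetes.Interface) error {
	policy := metav1.DeletePropagationBackground
	// The apiserver removes deployA immediately; the GC then deletes rsA and the Pods.
	return cs.AppsV1().Deployments("default").Delete(ctx, "deployA",
		metav1.DeleteOptions{PropagationPolicy: &policy})
}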

3. Foreground mode

Deleting deployA in foreground mode: podA is deleted first, then rsA, and finally deployA.

The detailed flow is as follows:

(1) The client issues a delete for deployA with the propagation policy set to Foreground, i.e. the request's DeleteOptions contain "propagationPolicy": "Foreground".

(2) The apiserver receives the request and sees that the deletion policy is Foreground. At this point the apiserver does two things:

  • set deployA's deletionTimestamp
  • add the foregroundDeletion finalizer to deployA

The apiserver then returns immediately; it does not block waiting for the dependents to be cleaned up.

(3) Because the apiserver updated deployA, the GC receives an update event for deployA and starts processing it.

Specifically:

First, maintain the uidToNode graph.

The deployA node is first marked as "deleting dependents", and then deployA's dependents (rsA) are added to the attemptToDelete queue.

When processing rsA, the GC sees that rsA's owner is waiting for its dependents to be deleted, and that rsA has dependents of its own, so it deletes rsA through the foreground-deletion path.

Likewise, the foreground deletion of rsA first marks the rsA node as "deleting dependents" and then adds rsA's dependents (podA) to the attemptToDelete queue.

When processing podA, the GC sees that podA's owner is waiting for its dependents to be deleted, but podA has no dependents of its own, so it deletes podA through the background-deletion path.

After the background deletion of podA, the apiserver removes the podA object directly, so the GC receives a delete event. It removes the podA node from the graph and puts rsA back into the deletion queue.

Next, the GC finds that all of rsA's dependents have been deleted, so rsA's finalizer is removed and the apiserver then deletes rsA.

The GC then receives rsA's delete event, and the same process finally deletes deployA.
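A minimal sketch of the client-side call for this mode, under the same assumptions as the earlier sketches. Note that the call still returns once the apiserver has set deletionTimestamp and the foregroundDeletion finalizer; the Deployment object only disappears after its dependents are gone:

// Variant of the earlier sketches: foreground propagation.
// Assumes cs is a kubernetes.Interface built as in the orphan example,
// plus imports of "context" and metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
func deleteDeployForeground(ctx context.Context, cs kubernetes.Interface) error {
	policy := metav1.DeletePropagationForeground
	// deployA is only removed from storage after podA and rsA have been deleted.
	return cs.AppsV1().Deployments("default").Delete(ctx, "deployA",
		metav1.DeleteOptions{PropagationPolicy: &policy})
}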

4. Summary

The GC mechanism is quite elegant, and it works hand in hand with the apiserver. It is also very useful in practice: for example, given two otherwise unrelated objects, setting an OwnerReference between them is enough to get cascading deletion of the two objects (see the sketch below).
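A minimal sketch of that idea, assuming two hypothetical ConfigMaps owner-cm and dependent-cm in the default namespace and the same clientset setup as in the earlier sketches. After this, deleting owner-cm with background or foreground propagation lets the GC clean up dependent-cm as well:

// Make "dependent-cm" a dependent of "owner-cm" by setting an OwnerReference.
// Assumes cs is a kubernetes.Interface as before; owner and dependent must live
// in the same namespace (or the owner must be cluster-scoped).
func linkConfigMaps(ctx context.Context, cs kubernetes.Interface) error {
	owner, err := cs.CoreV1().ConfigMaps("default").Get(ctx, "owner-cm", metav1.GetOptions{})
	if err != nil {
		return err
	}
	dep, err := cs.CoreV1().ConfigMaps("default").Get(ctx, "dependent-cm", metav1.GetOptions{})
	if err != nil {
		return err
	}
	dep.OwnerReferences = []metav1.OwnerReference{{
		APIVersion: "v1",
		Kind:       "ConfigMap",
		Name:       owner.Name,
		UID:        owner.UID, // the UID is what the GC graph is keyed on
	}}
	_, err = cs.CoreV1().ConfigMaps("default").Update(ctx, dep, metav1.UpdateOptions{})
	return err
}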

5. Methodology

The flows above were verified both against the code and in practice.

The code analysis is in the previous post. The hands-on verification consisted of experiments with the following observations:

(1) Watch how deployA's YAML changes

(2) Increase the kcm (kube-controller-manager) log verbosity and inspect the GC logs

(3) Increase the apiserver log verbosity and inspect how the apiserver handles the requests

5.1 Watch how deployA's YAML changes

// -w keeps watching, so we can see the object before and after the deletion
root@k8s-master:~/testyaml/hpa# kubectl get deploy zx-hpa -oyaml -w
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-07-09T07:21:48Z"
  generation: 1
  labels:
    app: zx-hpa-test
  name: zx-hpa
  namespace: default
  resourceVersion: "6975175"
  selfLink: /apis/apps/v1/namespaces/default/deployments/zx-hpa
  uid: 6ccbe990-e4d3-4ba1-b67f-56a9bfbd69a0
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zx-hpa-test
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: zx-hpa-test
      name: zx-hpa-test
    spec:
      containers:
      - command:
        - sleep
        - "3600"
        image: busybox:latest
        imagePullPolicy: IfNotPresent
        name: busybox
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 5
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2021-07-09T07:21:50Z"
    lastUpdateTime: "2021-07-09T07:21:50Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-07-09T07:21:49Z"
    lastUpdateTime: "2021-07-09T07:21:50Z"
    message: ReplicaSet "zx-hpa-7b56cddd95" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-07-09T07:21:48Z"
  generation: 1
  labels:
    app: zx-hpa-test
  name: zx-hpa
  namespace: default
  resourceVersion: "6975316"
  selfLink: /apis/apps/v1/namespaces/default/deployments/zx-hpa
  uid: 6ccbe990-e4d3-4ba1-b67f-56a9bfbd69a0
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zx-hpa-test
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: zx-hpa-test
      name: zx-hpa-test
    spec:
      containers:
      - command:
        - sleep
        - "3600"
        image: busybox:latest
        imagePullPolicy: IfNotPresent
        name: busybox
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 5
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2021-07-09T07:21:50Z"
    lastUpdateTime: "2021-07-09T07:21:50Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-07-09T07:21:49Z"
    lastUpdateTime: "2021-07-09T07:21:50Z"
    message: ReplicaSet "zx-hpa-7b56cddd95" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

5.2 Increase the kcm log verbosity and inspect the GC logs

I0709 15:17:45.089271    3183 resource_quota_monitor.go:354] QuotaMonitor process object: apps/v1, Resource=deployments, namespace kube-system, name kube-hpa, uid 639d5269-d73d-4964-a7de-d6f386c9c7e4, event type delete
I0709 15:17:45.089320    3183 graph_builder.go:543] GraphBuilder process object: apps/v1/Deployment, namespace kube-system, name kube-hpa, uid 639d5269-d73d-4964-a7de-d6f386c9c7e4, event type delete
I0709 15:17:45.089346    3183 garbagecollector.go:404] processing item [apps/v1/ReplicaSet, namespace: kube-system, name: kube-hpa-84c884f994, uid: e66e45c0-5695-4c93-82f1-067b20aa035f]
I0709 15:17:45.089576    3183 deployment_controller.go:193] Deleting deployment kube-hpa
I0709 15:17:45.089591    3183 deployment_controller.go:564] Started syncing deployment "kube-system/kube-hpa" (2021-07-09 15:17:45.089588305 +0800 CST m=+38.708727198)
I0709 15:17:45.089611    3183 deployment_controller.go:575] Deployment kube-system/kube-hpa has been deleted
I0709 15:17:45.089615    3183 deployment_controller.go:566] Finished syncing deployment "kube-system/kube-hpa" (24.606µs)
I0709 15:17:45.093463    3183 garbagecollector.go:329] according to the absentOwnerCache, object e66e45c0-5695-4c93-82f1-067b20aa035f's owner apps/v1/Deployment, kube-hpa does not exist
I0709 15:17:45.093480    3183 garbagecollector.go:455] classify references of [apps/v1/ReplicaSet, namespace: kube-system, name: kube-hpa-84c884f994, uid: e66e45c0-5695-4c93-82f1-067b20aa035f].
solid: []v1.OwnerReference(nil)
dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"Deployment", Name:"kube-hpa", UID:"639d5269-d73d-4964-a7de-d6f386c9c7e4", Controller:(*bool)(0xc000ab3817), BlockOwnerDeletion:(*bool)(0xc000ab3818)}}
waitingForDependentsDeletion: []v1.OwnerReference(nil)
I0709 15:17:45.093517    3183 garbagecollector.go:517] delete object [apps/v1/ReplicaSet, namespace: kube-system, name: kube-hpa-84c884f994, uid: e66e45c0-5695-4c93-82f1-067b20aa035f] with propagation policy Background
I0709 15:17:45.107563    3183 resource_quota_monitor.go:354] QuotaMonitor process object: apps/v1, Resource=replicasets, namespace kube-system, name kube-hpa-84c884f994, uid e66e45c0-5695-4c93-82f1-067b20aa035f, event type delete
I0709 15:17:45.107635    3183 replica_set.go:349] Deleting ReplicaSet "kube-system/kube-hpa-84c884f994"
I0709 15:17:45.107687    3183 replica_set.go:658] ReplicaSet kube-system/kube-hpa-84c884f994 has been deleted
I0709 15:17:45.107692    3183 replica_set.go:649] Finished syncing ReplicaSet "kube-system/kube-hpa-84c884f994" (16.069µs)
I0709 15:17:45.107720    3183 graph_builder.go:543] GraphBuilder process object: apps/v1/ReplicaSet, namespace kube-system, name kube-hpa-84c884f994, uid e66e45c0-5695-4c93-82f1-067b20aa035f, event type delete
I0709 15:17:45.107753    3183 garbagecollector.go:404] processing item [v1/Pod, namespace: kube-system, name: kube-hpa-84c884f994-7gwpz, uid: 9833c399-b139-4432-98f7-cec13158f804]
I0709 15:17:45.111155    3183 garbagecollector.go:329] according to the absentOwnerCache, object 9833c399-b139-4432-98f7-cec13158f804's owner apps/v1/ReplicaSet, kube-hpa-84c884f994 does not exist
I0709 15:17:45.111174    3183 garbagecollector.go:455] classify references of [v1/Pod, namespace: kube-system, name: kube-hpa-84c884f994-7gwpz, uid: 9833c399-b139-4432-98f7-cec13158f804].
solid: []v1.OwnerReference(nil)
dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"kube-hpa-84c884f994", UID:"e66e45c0-5695-4c93-82f1-067b20aa035f", Controller:(*bool)(0xc000bde7bf), BlockOwnerDeletion:(*bool)(0xc000bde800)}}
waitingForDependentsDeletion: []v1.OwnerReference(nil)
I0709 15:17:45.111213    3183 garbagecollector.go:517] delete object [v1/Pod, namespace: kube-system, name: kube-hpa-84c884f994-7gwpz, uid: 9833c399-b139-4432-98f7-cec13158f804] with propagation policy Background
I0709 15:17:45.124112    3183 graph_builder.go:543] GraphBuilder process object: v1/Pod, namespace kube-system, name kube-hpa-84c884f994-7gwpz, uid 9833c399-b139-4432-98f7-cec13158f804, event type update
I0709 15:17:45.124236    3183 endpoints_controller.go:385] About to update endpoints for service "kube-system/kube-hpa"
I0709 15:17:45.124275    3183 endpoints_controller.go:420] Pod is being deleted kube-system/kube-hpa-84c884f994-7gwpz
I0709 15:17:45.124293    3183 endpoints_controller.go:512] Update endpoints for kube-system/kube-hpa, ready: 0 not ready: 0
I0709 15:17:45.124481    3183 disruption.go:394] updatePod called on pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:45.124523    3183 disruption.go:457] No PodDisruptionBudgets found for pod kube-hpa-84c884f994-7gwpz, PodDisruptionBudget controller will avoid syncing.
I0709 15:17:45.124527    3183 disruption.go:397] No matching pdb for pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:45.131011    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-hpa, uid 17a8623b-2bd6-4253-b7cd-88a7af615220, event type update
I0709 15:17:45.132261    3183 endpoints_controller.go:353] Finished syncing service "kube-system/kube-hpa" endpoints. (8.020508ms)
I0709 15:17:45.132951    3183 graph_builder.go:543] GraphBuilder process object: events.k8s.io/v1beta1/Event, namespace kube-system, name kube-hpa-84c884f994-7gwpz.16900e30134087ab, uid 7c55e936-801b-4eb9-a828-085d92983134, event type add
I0709 15:17:45.310041    3183 graph_builder.go:543] GraphBuilder process object: apiregistration.k8s.io/v1/APIService, namespace , name v1beta1.custom.metrics.k8s.io, uid 71617a10-8136-4a2a-af65-d64bcd6c78c3, event type update
I0709 15:17:45.660593    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:45.668379    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:46.143691    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:46.143962    3183 graph_builder.go:543] GraphBuilder process object: v1/Pod, namespace kube-system, name kube-hpa-84c884f994-7gwpz, uid 9833c399-b139-4432-98f7-cec13158f804, event type update
I0709 15:17:46.144055    3183 endpoints_controller.go:385] About to update endpoints for service "kube-system/kube-hpa"
I0709 15:17:46.144095    3183 endpoints_controller.go:420] Pod is being deleted kube-system/kube-hpa-84c884f994-7gwpz
I0709 15:17:46.144126    3183 endpoints_controller.go:512] Update endpoints for kube-system/kube-hpa, ready: 0 not ready: 0
I0709 15:17:46.144329    3183 disruption.go:394] updatePod called on pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:46.144347    3183 disruption.go:457] No PodDisruptionBudgets found for pod kube-hpa-84c884f994-7gwpz, PodDisruptionBudget controller will avoid syncing.
I0709 15:17:46.144350    3183 disruption.go:397] No matching pdb for pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:46.144361    3183 pvc_protection_controller.go:342] Enqueuing PVCs for Pod kube-system/kube-hpa-84c884f994-7gwpz (UID=9833c399-b139-4432-98f7-cec13158f804)
I0709 15:17:46.150410    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-hpa, uid 17a8623b-2bd6-4253-b7cd-88a7af615220, event type update
I0709 15:17:46.150749    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:46.151231    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:46.151321    3183 endpoints_controller.go:353] Finished syncing service "kube-system/kube-hpa" endpoints. (7.269404ms)



I0709 15:17:46.978486    3183 cronjob_controller.go:129] Found 4 jobs
I0709 15:17:46.978503    3183 cronjob_controller.go:135] Found 1 groups
I0709 15:17:46.982118    3183 event.go:281] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"default", Name:"hello", UID:"b9648456-0b0a-44a4-b4c7-4c1db9be4085", APIVersion:"batch/v1beta1", ResourceVersion:"6974347", FieldPath:""}): type: 'Normal' reason: 'SawCompletedJob' Saw completed job: hello-1625815020, status: Complete
I0709 15:17:46.986941    3183 graph_builder.go:543] GraphBuilder process object: batch/v1beta1/CronJob, namespace default, name hello, uid b9648456-0b0a-44a4-b4c7-4c1db9be4085, event type update
I0709 15:17:46.987073    3183 cronjob_controller.go:278] No unmet start times for default/hello
I0709 15:17:46.987091    3183 cronjob_controller.go:203] Cleaning up 1/4 jobs from default/hello
I0709 15:17:46.987096    3183 cronjob_controller.go:207] Removing job hello-1625814840 from default/hello
I0709 15:17:46.987694    3183 graph_builder.go:543] GraphBuilder process object: events.k8s.io/v1beta1/Event, namespace default, name hello.16900e3081ed9288, uid 21dc6f32-9c3b-479a-8a69-c71946be3b7a, event type add
I0709 15:17:46.998396    3183 job_controller.go:452] Job has been deleted: default/hello-1625814840
I0709 15:17:46.998407    3183 job_controller.go:439] Finished syncing job "default/hello-1625814840" (42.057µs)
I0709 15:17:46.998436    3183 graph_builder.go:543] GraphBuilder process object: batch/v1/Job, namespace default, name hello-1625814840, uid ce65b016-b3c4-4a65-b01d-f81381fca20a, event type delete
I0709 15:17:46.998463    3183 garbagecollector.go:404] processing item [v1/Pod, namespace: default, name: hello-1625814840-9tmbk, uid: 7aabf04b-31c5-4602-af5e-87a7e0079d1a]
I0709 15:17:46.998715    3183 resource_quota_monitor.go:354] QuotaMonitor process object: batch/v1, Resource=jobs, namespace default, name hello-1625814840, uid ce65b016-b3c4-4a65-b01d-f81381fca20a, event type delete
I0709 15:17:46.999144    3183 event.go:281] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"default", Name:"hello", UID:"b9648456-0b0a-44a4-b4c7-4c1db9be4085", APIVersion:"batch/v1beta1", ResourceVersion:"6974464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted job hello-1625814840
I0709 15:17:47.002267    3183 garbagecollector.go:329] according to the absentOwnerCache, object 7aabf04b-31c5-4602-af5e-87a7e0079d1a's owner batch/v1/Job, hello-1625814840 does not exist
I0709 15:17:47.002298    3183 garbagecollector.go:455] classify references of [v1/Pod, namespace: default, name: hello-1625814840-9tmbk, uid: 7aabf04b-31c5-4602-af5e-87a7e0079d1a].
solid: []v1.OwnerReference(nil)
dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:"batch/v1", Kind:"Job", Name:"hello-1625814840", UID:"ce65b016-b3c4-4a65-b01d-f81381fca20a", Controller:(*bool)(0xc000bdf480), BlockOwnerDeletion:(*bool)(0xc000bdf481)}}
waitingForDependentsDeletion: []v1.OwnerReference(nil)
I0709 15:17:47.002325    3183 garbagecollector.go:517] delete object [v1/Pod, namespace: default, name: hello-1625814840-9tmbk, uid: 7aabf04b-31c5-4602-af5e-87a7e0079d1a] with propagation policy Background
I0709 15:17:47.005713    3183 graph_builder.go:543] GraphBuilder process object: events.k8s.io/v1beta1/Event, namespace default, name hello.16900e3082f15365, uid 903283d1-63da-4ba7-b200-69d6a30a1d5c, event type add
I0709 15:17:47.011868    3183 graph_builder.go:543] GraphBuilder process object: v1/Pod, namespace default, name hello-1625814840-9tmbk, uid 7aabf04b-31c5-4602-af5e-87a7e0079d1a, event type update
I0709 15:17:47.011938    3183 disruption.go:394] updatePod called on pod "hello-1625814840-9tmbk"
I0709 15:17:47.011960    3183 disruption.go:457] No PodDisruptionBudgets found for pod hello-1625814840-9tmbk, PodDisruptionBudget controller will avoid syncing.
I0709 15:17:47.011964    3183 disruption.go:397] No matching pdb for pod "hello-1625814840-9tmbk"
I0709 15:17:47.011977    3183 pvc_protection_controller.go:342] Enqueuing PVCs for Pod default/hello-1625814840-9tmbk (UID=7aabf04b-31c5-4602-af5e-87a7e0079d1a)
I0709 15:17:47.026287    3183 graph_builder.go:543] GraphBuilder process object: v1/Pod, namespace default, name hello-1625814840-9tmbk, uid 7aabf04b-31c5-4602-af5e-87a7e0079d1a, event type delete
I0709 15:17:47.026312    3183 deployment_controller.go:356] Pod hello-1625814840-9tmbk deleted.
I0709 15:17:47.026350    3183 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"default", Name:"hello-1625814840-9tmbk"}
I0709 15:17:47.026389    3183 disruption.go:423] deletePod called on pod "hello-1625814840-9tmbk"
I0709 15:17:47.026409    3183 disruption.go:457] No PodDisruptionBudgets found for pod hello-1625814840-9tmbk, PodDisruptionBudget controller will avoid syncing.
I0709 15:17:47.026413    3183 disruption.go:426] No matching pdb for pod "hello-1625814840-9tmbk"
I0709 15:17:47.026425    3183 pvc_protection_controller.go:342] Enqueuing PVCs for Pod default/hello-1625814840-9tmbk (UID=7aabf04b-31c5-4602-af5e-87a7e0079d1a)
I0709 15:17:47.026449    3183 resource_quota_monitor.go:354] QuotaMonitor process object: /v1, Resource=pods, namespace default, name hello-1625814840-9tmbk, uid 7aabf04b-31c5-4602-af5e-87a7e0079d1a, event type delete
I0709 15:17:47.164797    3183 graph_builder.go:543] GraphBuilder process object: v1/Pod, namespace kube-system, name kube-hpa-84c884f994-7gwpz, uid 9833c399-b139-4432-98f7-cec13158f804, event type update
I0709 15:17:47.164886    3183 endpoints_controller.go:385] About to update endpoints for service "kube-system/kube-hpa"
I0709 15:17:47.164929    3183 endpoints_controller.go:420] Pod is being deleted kube-system/kube-hpa-84c884f994-7gwpz
I0709 15:17:47.164945    3183 endpoints_controller.go:512] Update endpoints for kube-system/kube-hpa, ready: 0 not ready: 0
I0709 15:17:47.165093    3183 disruption.go:394] updatePod called on pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:47.165108    3183 disruption.go:457] No PodDisruptionBudgets found for pod kube-hpa-84c884f994-7gwpz, PodDisruptionBudget controller will avoid syncing.
I0709 15:17:47.165111    3183 disruption.go:397] No matching pdb for pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:47.165122    3183 pvc_protection_controller.go:342] Enqueuing PVCs for Pod kube-system/kube-hpa-84c884f994-7gwpz (UID=9833c399-b139-4432-98f7-cec13158f804)
I0709 15:17:47.165142    3183 resource_quota_monitor.go:354] QuotaMonitor process object: /v1, Resource=pods, namespace kube-system, name kube-hpa-84c884f994-7gwpz, uid 9833c399-b139-4432-98f7-cec13158f804, event type update
I0709 15:17:47.169973    3183 endpoints_controller.go:353] Finished syncing service "kube-system/kube-hpa" endpoints. (5.082912ms)
I0709 15:17:47.172446    3183 graph_builder.go:543] GraphBuilder process object: v1/Pod, namespace kube-system, name kube-hpa-84c884f994-7gwpz, uid 9833c399-b139-4432-98f7-cec13158f804, event type delete
I0709 15:17:47.172467    3183 deployment_controller.go:356] Pod kube-hpa-84c884f994-7gwpz deleted.
I0709 15:17:47.172474    3183 deployment_controller.go:424] Cannot get replicaset "kube-hpa-84c884f994" for pod "kube-hpa-84c884f994-7gwpz": replicaset.apps "kube-hpa-84c884f994" not found
I0709 15:17:47.172507    3183 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"kube-system", Name:"kube-hpa-84c884f994-7gwpz"}
I0709 15:17:47.172564    3183 endpoints_controller.go:385] About to update endpoints for service "kube-system/kube-hpa"
I0709 15:17:47.172614    3183 endpoints_controller.go:512] Update endpoints for kube-system/kube-hpa, ready: 0 not ready: 0
I0709 15:17:47.172779    3183 disruption.go:423] deletePod called on pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:47.172796    3183 disruption.go:457] No PodDisruptionBudgets found for pod kube-hpa-84c884f994-7gwpz, PodDisruptionBudget controller will avoid syncing.
I0709 15:17:47.172799    3183 disruption.go:426] No matching pdb for pod "kube-hpa-84c884f994-7gwpz"
I0709 15:17:47.172808    3183 pvc_protection_controller.go:342] Enqueuing PVCs for Pod kube-system/kube-hpa-84c884f994-7gwpz (UID=9833c399-b139-4432-98f7-cec13158f804)
I0709 15:17:47.172843    3183 resource_quota_monitor.go:354] QuotaMonitor process object: /v1, Resource=pods, namespace kube-system, name kube-hpa-84c884f994-7gwpz, uid 9833c399-b139-4432-98f7-cec13158f804, event type delete
I0709 15:17:47.173978    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-hpa, uid 17a8623b-2bd6-4253-b7cd-88a7af615220, event type update
I0709 15:17:47.178093    3183 endpoints_controller.go:353] Finished syncing service "kube-system/kube-hpa" endpoints. (5.525822ms)
I0709 15:17:47.178107    3183 endpoints_controller.go:340] Error syncing endpoints for service "kube-system/kube-hpa", retrying. Error: Operation cannot be fulfilled on endpoints "kube-hpa": the object has been modified; please apply your changes to the latest version and try again
I0709 15:17:47.178372    3183 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-hpa", UID:"17a8623b-2bd6-4253-b7cd-88a7af615220", APIVersion:"v1", ResourceVersion:"6974462", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint kube-system/kube-hpa: Operation cannot be fulfilled on endpoints "kube-hpa": the object has been modified; please apply your changes to the latest version and try again
I0709 15:17:47.182381    3183 graph_builder.go:543] GraphBuilder process object: events.k8s.io/v1beta1/Event, namespace kube-system, name kube-hpa.16900e308da0917a, uid d136415c-0a51-40e2-b1ba-f63587af89a6, event type add
I0709 15:17:47.183280    3183 endpoints_controller.go:385] About to update endpoints for service "kube-system/kube-hpa"
I0709 15:17:47.183318    3183 endpoints_controller.go:512] Update endpoints for kube-system/kube-hpa, ready: 0 not ready: 0
I0709 15:17:47.186538    3183 endpoints_controller.go:353] Finished syncing service "kube-system/kube-hpa" endpoints. (3.266428ms)
I0709 15:17:47.679672    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:47.686259    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:48.166708    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:48.175956    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:48.176356    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:49.277193    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-node-lease, name 192.168.0.5, uid 71ce7519-2999-4dbf-9118-227e5cb6d9ef, event type update
I0709 15:17:49.701416    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:49.721102    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:50.189139    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:50.199890    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:50.200028    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:51.046632    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-node-lease, name 192.168.0.4, uid a6c1c902-8d7f-442e-89d2-407f1677247e, event type update
I0709 15:17:51.734474    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:51.742571    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:51.949675    3183 reflector.go:268] k8s.io/client-go/informers/factory.go:135: forcing resync
E0709 15:17:51.960736    3183 horizontal.go:214] failed to query scale subresource for Deployment/default/zx-hpa: deployments/scale.apps "zx-hpa" not found
I0709 15:17:51.961135    3183 event.go:281] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"nginx-hpa-zx-1", UID:"d49c5146-c5ef-4ac8-8039-c9b15f094360", APIVersion:"autoscaling/v2beta2", ResourceVersion:"4763928", FieldPath:""}): type: 'Warning' reason: 'FailedGetScale' deployments/scale.apps "zx-hpa" not found
I0709 15:17:51.965206    3183 graph_builder.go:543] GraphBuilder process object: events.k8s.io/v1beta1/Event, namespace default, name nginx-hpa-zx-1.16900e31aab074d5, uid 3c9d8d3b-d63f-463c-8f8f-b8d2ba3f4fb3, event type add
I0709 15:17:52.215733    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:52.224070    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:52.224234    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:52.461003    3183 pv_controller_base.go:514] resyncing PV controller
I0709 15:17:53.755870    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:53.766095    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:53.886970    3183 discovery.go:214] Invalidating discovery information
I0709 15:17:54.236384    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:54.244313    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:54.244924    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:55.778133    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:55.785242    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:56.264037    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:56.271400    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:56.271774    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:57.011460    3183 cronjob_controller.go:129] Found 3 jobs
I0709 15:17:57.011484    3183 cronjob_controller.go:135] Found 1 groups
I0709 15:17:57.018598    3183 cronjob_controller.go:278] No unmet start times for default/hello
I0709 15:17:57.436623    3183 gc_controller.go:163] GC'ing orphaned
I0709 15:17:57.436642    3183 gc_controller.go:226] GC'ing unscheduled pods which are terminating.
I0709 15:17:57.799012    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:57.807268    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:17:58.282260    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:17:58.288233    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:17:58.288746    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:17:59.286621    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-node-lease, name 192.168.0.5, uid 71ce7519-2999-4dbf-9118-227e5cb6d9ef, event type update
I0709 15:17:59.819587    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-scheduler, uid d1e00c1e-7803-4c0f-ab8a-b3eeb0644879, event type update
I0709 15:17:59.827855    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-scheduler, uid 9aed1771-031a-4fce-826a-11d98ee81740, event type update
I0709 15:18:00.301289    3183 graph_builder.go:543] GraphBuilder process object: v1/Endpoints, namespace kube-system, name kube-controller-manager, uid 5d530096-9b10-45bb-a11e-43f1f8733fa5, event type update
I0709 15:18:00.310096    3183 leaderelection.go:283] successfully renewed lease kube-system/kube-controller-manager
I0709 15:18:00.310445    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-system, name kube-controller-manager, uid 036d9292-1152-4f8c-8a85-0879c5424cfb, event type update
I0709 15:18:01.054003    3183 graph_builder.go:543] GraphBuilder process object: coordination.k8s.io/v1/Lease, namespace kube-node-lease, name 192.168.0.4, uid a6c1c902-8d7f-442e-89d2-407f1677247e, event type update

5.3 Increase the apiserver log verbosity and see how the apiserver handles the requests

The verbosity needs to be at least 5 (e.g. start kube-apiserver with --v=5).

I0709 16:43:48.411395   28901 handler.go:143] kube-apiserver: PUT "/apis/apps/v1/namespaces/default/deployments/zx-hpa/status" satisfied by gorestful with webservice /apis/apps/v1
I0709 16:43:48.413431   28901 httplog.go:90] GET /apis/apps/v1/namespaces/default/deployments/zx-hpa: (2.677854ms) 200 [kube-controller-manager/v1.17.4 (linux/amd64) kubernetes/8d8aa39/generic-garbage-collector 192.168.0.4:48978]
I0709 16:43:48.414076   28901 handler.go:153] kube-aggregator: GET "/apis/apps/v1/namespaces/default/deployments/zx-hpa" satisfied by nonGoRestful
I0709 16:43:48.414089   28901 pathrecorder.go:247] kube-aggregator: "/apis/apps/v1/namespaces/default/deployments/zx-hpa" satisfied by prefix /apis/apps/v1/
I0709 16:43:48.414119   28901 handler.go:143] kube-apiserver: GET "/apis/apps/v1/namespaces/default/deployments/zx-hpa" satisfied by gorestful with webservice /apis/apps/v1
I0709 16:43:48.418663   28901 httplog.go:90] PUT /apis/apps/v1/namespaces/default/deployments/zx-hpa/status: (7.370204ms) 200 [kube-controller-manager/v1.17.4 (linux/amd64) kubernetes/8d8aa39/deployment-controller 192.168.0.4:49000]
I0709 16:43:48.420303   28901 httplog.go:90] GET /apis/apps/v1/namespaces/default/deployments/zx-hpa: (6.309997ms) 200 [kube-controller-manager/v1.17.4 (linux/amd64) kubernetes/8d8aa39/generic-garbage-collector 192.168.0.4:48978]
I0709 16:43:48.420817   28901 handler.go:153] kube-aggregator: PATCH "/apis/apps/v1/namespaces/default/deployments/zx-hpa" satisfied by nonGoRestful
I0709 16:43:48.420828   28901 pathrecorder.go:247] kube-aggregator: "/apis/apps/v1/namespaces/default/deployments/zx-hpa" satisfied by prefix /apis/apps/v1/
I0709 16:43:48.420855   28901 handler.go:143] kube-apiserver: PATCH "/apis/apps/v1/namespaces/default/deployments/zx-hpa" satisfied by gorestful with webservice /apis/apps/v1
I0709 16:43:48.425221   28901 store.go:428] going to delete zx-hpa from registry, triggered by update