kubectl command

common

version

kubectl version
kubectl api-versions

health

# check the kubernetes service's internal IP
kubectl get svc kubernetes -n default

# check control-plane component status (note: componentstatuses is deprecated)
kubectl get componentstatuses

# check crd
kubectl get crd | grep cert-manager

# check all pods
kubectl get pods -A

namespace

kubectl get namespaces

# delete all resources in the specified namespace
kubectl delete all --all -n <namespace>

node

resource

# method 1: Check Node Resource Availability
kubectl describe node <node-name> # check Capacity, Allocatable, Allocated resources
• CPU available = Allocatable CPU - CPU Requests
• Memory available = Allocatable Memory - Memory Requests

# method 2: Get Detailed Resource Requests & Limits for All Pods
kubectl get pods -A -o=custom-columns="NAMESPACE:.metadata.namespace,NAME:.metadata.name,CPU_REQUESTS:.spec.containers[*].resources.requests.cpu,MEM_REQUESTS:.spec.containers[*].resources.requests.memory,CPU_LIMITS:.spec.containers[*].resources.limits.cpu,MEM_LIMITS:.spec.containers[*].resources.limits.memory"

# method 3: output JSON for advanced parsing
kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, allocatable: .status.allocatable}'
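The available-resource arithmetic from method 1 can be sketched in plain shell. The figures below are hypothetical, standing in for values read off a `kubectl describe node` output:

```shell
# Hypothetical values from a `kubectl describe node` output:
# Allocatable CPU is 4000m; the summed CPU requests are 2600m.
allocatable_cpu_m=4000
cpu_requests_m=2600

# CPU available = Allocatable CPU - CPU Requests
available_cpu_m=$((allocatable_cpu_m - cpu_requests_m))
echo "available CPU: ${available_cpu_m}m"
```

The same subtraction applies to memory, after normalizing units (e.g. converting Gi to Mi) so both operands use the same scale.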

label

# show all labels
kubectl get nodes --show-labels

# add label on node
kubectl label node <node-name> <label-key>=<label-value>

# remove label from node
kubectl label node <node-name> <label-key>-
# kubectl label nodes sd-shangdi-ceph17 dingofs-csi-node-

taint

# check whether the node is tainted
kubectl describe node <nodeName> | grep -i taint

# output for a tainted node
Taints: nodepool=fault:NoSchedule

# output for an untainted node
Taints: <none>

# add a taint
kubectl taint node <node-name> <key>=<value>:<effect>
# e.g. kubectl taint node node1 nodepool=fault:NoSchedule

# remove a taint (note the trailing dash)
kubectl taint node <node-name> <key>=<value>:<effect>-
# e.g. kubectl taint node node1 nodepool=fault:NoSchedule-
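The `grep -i taint` filter above simply pulls the Taints line out of the describe output. A quick offline sketch, using simulated (hypothetical) `kubectl describe node` output:

```shell
# Simulated `kubectl describe node` output for a hypothetical tainted node
describe_output='Name:               node1
Taints:             nodepool=fault:NoSchedule
Unschedulable:      false'

# Same filter as `kubectl describe node <nodeName> | grep -i taint`
taint_line=$(printf '%s\n' "$describe_output" | grep -i taint)
echo "$taint_line"
```

An untainted node would print `Taints: <none>` instead, so the grep always returns exactly one line.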

All

# delete all resources in a namespace
kubectl delete all --all -n <namespace>

pod

basic

# list pods in a namespace
kubectl get pod -n {namespace}

# describe
kubectl describe pod {podName}

log

-c: Specify which container to retrieve logs from.

-f: Stream the logs in real time.

--previous: Show logs from the last terminated container.

--since=: Return logs for the last period (e.g., 1h, 30m).

--tail=: Limit the number of log lines returned.

--all-containers=true: Get logs from all containers in the pod.

# pod
kubectl logs -f {podId} -n {namespace}

# pod's container
kubectl logs <pod-name> -c <container-name> -n <namespace>

# check logs from the previously crashed container
kubectl logs <pod-name> --previous -n <namespace>

config

kubectl get pod <pod-name> --kubeconfig=/path/to/configfile -o yaml > env-vq48.yaml
# `kubectl get -o yaml` prints the specified Pod API object in YAML format

# export a pod's config
kubectl get pod <pod-name> -n <namespace> -o yaml > pod-config.yaml

exec

without kubeconfig

# open a shell in a specific container of a pod
kubectl exec -it {pod_id} -n {namespace} -c {container_id} -- sh

# run a specific command in a pod
kubectl exec -it {pod_id} -n {namespace} -c {container_id} -- <shell>
# e.g. kubectl exec -it csi-node-1 -n dingofs -- cat /etc/resolv.conf

copy

kubectl cp <namespace>/<pod-name>:path/to/source_file ./path/to/local_file
# note: the path inside the pod must not start with /; use a relative path
# (relative to the default working directory after entering the pod)

delete

# method 1
kubectl delete pod {pod_name} --grace-period=0 --force -n {namespace}

# method 2
kubectl patch pod <pod-name> -n <target-namespace> -p '{"metadata":{"finalizers":null}}' --type=merge

# method 3
step 1: kubectl edit pod <pod-name> -n <namespace>
step 2: delete the finalizers array under metadata, save the file, and exit

Aspect        kubectl patch                         kubectl edit
Interaction   Non-interactive (scriptable).         Interactive (manual editing).
Precision     Targets only the finalizers field.    Requires manually deleting finalizers.
Use Case      Automation (e.g., CI/CD pipelines).   Debugging or ad-hoc fixes.
Safety        Risk of typos in JSON syntax.         Risk of accidental edits to other fields.

statefulset

# restart
kubectl rollout restart StatefulSet/dingofs-csi-controller -n dingofs

daemonsets

kubectl get daemonsets --all-namespaces

# restart
kubectl rollout restart daemonset/dingofs-csi-node -n dingofs

service

kubectl get svc -o wide -n <namespace>

# delete all
kubectl delete service --all -n <namespace>

storage

CSIDriver

kubectl get csidrivers

storageclass

kubectl get storageclass

# delete
kubectl delete storageclass <sc-name>

pvc

kubectl get pvc

pv

# list pv
kubectl get pv

# delete multiple PVs matching a name
kubectl get pv --no-headers | grep <NAME> | awk '{print $1}' | xargs kubectl delete pv

# delete PVs in Released status
kubectl get pv -o json | jq -r '.items[] | select(.status.phase=="Released") | .metadata.name' | xargs kubectl delete pv
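The Released-only selection can be sanity-checked offline. A sketch using simulated `kubectl get pv --no-headers`-style output (the PV names are hypothetical), filtering on the STATUS column with awk rather than jq:

```shell
# Simulated output: NAME CAPACITY ACCESS-MODES RECLAIM-POLICY STATUS
pv_list='pvc-aaa   10Gi  RWO  Retain  Bound
pvc-bbb   10Gi  RWO  Retain  Released
data-ccc  5Gi   RWO  Delete  Released'

# Print only the names whose STATUS column is Released — these are the
# names the pipeline above would hand to `xargs kubectl delete pv`.
released=$(printf '%s\n' "$pv_list" | awk '$5 == "Released" {print $1}')
echo "$released"
```

Filtering on a fixed column index is fragile if extra columns appear; the jq variant above is safer against output-format changes since it reads the phase field directly.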
  • delete pv

    # step 1: change the reclaim policy
    kubectl patch pv PV_NAME -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

    # step2: delete
    kubectl delete pv PV_NAME
  • make a PV Available again

    Symptom: a PV is bound to a PVC (PV status: Bound). The PVC's parameters are modified and the PVC is deleted (PV status: Released); after the PVC is re-applied, the PV remains stuck in Released.

    # apply `kubectl patch pv pv-1 -p '{"spec":{"claimRef":null}}'` to clear the claimRef and set the PV back to Available; after a short while the PV binds to the PVC again (status becomes Bound)
    kubectl patch pv <pv-name> -p '{"spec":{"claimRef":null}}'

    # e.g.
    # kubectl patch pv pv-1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
    # kubectl patch pv pv-1 -p '{"metadata":{"finalizers":null}}'

RBAC

# list role
kubectl get role -n <namespace> # -o yaml

# list clusterrole
kubectl get clusterrole

# list RoleBinding
kubectl get rolebinding -n <namespace>

# list clusterrolebinding
kubectl get clusterrolebinding

# Check if the User Can List Pods in Namespace
kubectl auth can-i list pods --as=<userName> -n <namespace>

# Check If the ServiceAccount Has Permissions to Get DaemonSets
kubectl auth can-i get daemonsets --as=system:serviceaccount:<namespace>:<serviceAccount> -n <namespace>
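The `--as` subject for a ServiceAccount follows a fixed `system:serviceaccount:<namespace>:<name>` pattern. A minimal sketch building that subject in shell (the namespace and account names are illustrative, taken from the dingofs examples above):

```shell
namespace="dingofs"         # example namespace
service_account="csi-node"  # example ServiceAccount name

# Build the impersonation subject passed to `kubectl auth can-i --as=...`
subject="system:serviceaccount:${namespace}:${service_account}"
echo "$subject"
```

Getting this prefix wrong (e.g. omitting `system:serviceaccount:`) makes `auth can-i` evaluate permissions for a plain user instead of the ServiceAccount, which silently yields the wrong answer.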