Kubeadm: invalid capacity 0 on image filesystem, can't schedule pods on master

Hi, I'm installing a kubeadm cluster. After deploying Calico, the calico-kube-controllers pod stays in Pending state:


[root@kubeadm-master ~]# kubectl get pods -n kube-system
NAME                                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-56cdb7c587-jf5w4                  0/1     Pending   0          26m
calico-node-x7ml2                                         1/1     Running   0          26m
coredns-6d4b75cb6d-85n85                                  1/1     Running   0          27m
coredns-6d4b75cb6d-g76qk                                  1/1     Running   0          27m
etcd-kubeadm-master.octopeek-dns.com                      1/1     Running   0          27m
kube-apiserver-kubeadm-master.octopeek-dns.com            1/1     Running   0          27m
kube-controller-manager-kubeadm-master.octopeek-dns.com   1/1     Running   0          27m
kube-proxy-lmvf2                                          1/1     Running   0          27m
kube-scheduler-kubeadm-master.octopeek-dns.com            1/1     Running   0          27m

When I describe the node, I get this warning:

Warning InvalidDiskCapacity: invalid capacity 0 on image filesystem


Events:
  Type     Reason                   Age   From             Message
  ----     ------                   ----  ----             -------
  Normal   Starting                 20m   kube-proxy
  Normal   NodeAllocatableEnforced  20m   kubelet          Updated Node Allocatable limit across pods
  Warning  InvalidDiskCapacity      20m   kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  20m   kubelet          Node kubeadm-master.ex.com status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    20m   kubelet          Node kubeadm-master.ex.com status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     20m   kubelet          Node kubeadm-master.ex.com status is now: NodeHasSufficientPID
  Normal   Starting                 20m   kubelet          Starting kubelet.
  Normal   RegisteredNode           20m   node-controller  Node kubeadm-master.ex.com event: Registered Node kubeadm-master.octopeek-dns.com in Controller
  Normal   NodeReady                18m   kubelet          Node kubeadm-master.ex.com status is now: NodeReady
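
From what I've read, this InvalidDiskCapacity warning is often logged just once while the kubelet is starting, before the container runtime has reported image-filesystem statistics, so it may be harmless here; I'm not sure. One thing that can be checked is whether the warning keeps repeating in the kubelet logs:

    # does the warning recur, or did it only appear once at kubelet startup?
    journalctl -u kubelet | grep -i "invalid capacity"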

df -h output:

Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G  9.9M  7.8G   1% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  4.2G   46G   9% /
/dev/mapper/centos-home  441G   33M  441G   1% /home
/dev/sda1               1014M  194M  821M  20% /boot
tmpfs                    1.6G     0  1.6G   0% /run/user/0
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/55bda3a19b7993e0b92206c7e90c572b08f5b05822dfc6c6cd99b2f03e52db07/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/55bda3a19b7993e0b92206c7e90c572b08f5b05822dfc6c6cd99b2f03e52db07/rootfs
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/0d3aed51837f9cd2930f8d4a012439ce7c6298e774051de558751cf855b57c20/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/0d3aed51837f9cd2930f8d4a012439ce7c6298e774051de558751cf855b57c20/rootfs
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/2608aa548577f82b61478fc17ef083c6a972367684bb51458a1d324f29e9c616/shm
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/b8f8dd37e987eec8663785d98a4d0ab0d22a7135706546efa6bd362e47c9e87c/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/2608aa548577f82b61478fc17ef083c6a972367684bb51458a1d324f29e9c616/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/b8f8dd37e987eec8663785d98a4d0ab0d22a7135706546efa6bd362e47c9e87c/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/42a79fe5669fb54ce62671a4552d92c4350323a432628338586662389370822d/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/2f784585580f36d9dea44612490292cc0167abb11ce8b697a7a7f3baa6cde01c/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/a8119249c419a8ceac90bd6ed496dd9b8ec79fcbc9a4277239e7a3cc652b80b1/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/fd644f02040cfa6115776f4bb4e2a19b90545f3d32e96de0cc17a4cb1c2b8db9/rootfs
tmpfs                     16G   12K   16G   1% /var/lib/kubelet/pods/53d6227c-d96d-4619-a438-571c4da28c06/volumes/kubernetes.io~projected/kube-api-access-zml44
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/2016af157bd1d06d9160abee6d7353e873cb20bbfb8c4c77127310361aea3f7e/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/2016af157bd1d06d9160abee6d7353e873cb20bbfb8c4c77127310361aea3f7e/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/36c92aca1becd2f0b77778b8b8186fa38984bbe54ee81f2eaa2dfe21cb3efc3d/rootfs
tmpfs                     16G   12K   16G   1% /var/lib/kubelet/pods/818de6b9-2186-4ebc-98db-b8b8c842f5d6/volumes/kubernetes.io~projected/kube-api-access-vpfh6
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/8076d3c0c27fb526cc9c808ae4be7a24abc6204e84bf5f3a9fff3db4b29058e3/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/8076d3c0c27fb526cc9c808ae4be7a24abc6204e84bf5f3a9fff3db4b29058e3/rootfs
tmpfs                    170M   12K  170M   1% /var/lib/kubelet/pods/8088fa26-522a-40c4-bd57-74b25031e4d9/volumes/kubernetes.io~projected/kube-api-access-c8kbv
tmpfs                    170M   12K  170M   1% /var/lib/kubelet/pods/1ee14f9b-66a3-4484-beb3-3ba230c490e3/volumes/kubernetes.io~projected/kube-api-access-6bx7k
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/11e3b5bb808a1dbe564e65fbd745f8cf0e1adcf6430cb290d1268fcb47e03013/rootfs
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/059ac584116e656843e88d3fe237692aa602d864b8e62f40b5d1c41f7b682e8b/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/059ac584116e656843e88d3fe237692aa602d864b8e62f40b5d1c41f7b682e8b/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/d53bc67dd9b24350b6bf47fa15581c49adfb940cbca46982cf1dbaabf15cf555/rootfs
shm                       64M     0   64M   0% /run/containerd/io.containerd.grpc.v1.cri/sandboxes/156af61f79912978b61058b363bfad72cace9374114f0d79e41ca17a1fa4125c/shm
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/156af61f79912978b61058b363bfad72cace9374114f0d79e41ca17a1fa4125c/rootfs
overlay                   50G  4.2G   46G   9% /run/containerd/io.containerd.runtime.v2.task/k8s.io/4cf13793a39cc65628981e4fc243523f21d5f02b1fcc8399f2455ce37b43a248/rootfs
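
I assume the kubelet gets the image-filesystem capacity from the container runtime rather than from df. If it helps, the runtime's own view can be checked with something like this (assuming crictl is installed and containerd is listening on its default socket):

    # ask the CRI runtime what it reports for the image filesystem
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock imagefsinfo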

lsblk output:

NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   500G  0 disk
├─sda1            8:1    0     1G  0 part /boot
└─sda2            8:2    0   499G  0 part
  ├─centos-root 253:0    0    50G  0 lvm  /
  ├─centos-swap 253:1    0   7.9G  0 lvm
  └─centos-home 253:2    0 441.1G  0 lvm  /home
sr0              11:0    1     4M  0 rom
sr1              11:1    1   4.4G  0 rom

When I describe the pending pod, I see:

Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  2m33s (x8 over 37m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
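
If I read that event correctly, the pod is being blocked by the control-plane taint rather than by disk capacity. For reference, this is how I check the taint; the kubeadm documentation also mentions the second command for allowing workloads on the control-plane node of a single-node cluster, though I'm not sure it's the intended fix for calico-kube-controllers:

    # show the taints currently set on the node
    kubectl describe node kubeadm-master.octopeek-dns.com | grep -i taints

    # from the kubeadm docs: remove the control-plane taint so pods can schedule on it
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-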

I think I have plenty of disk space, so I really don't understand why this disk capacity warning is showing.

I hope someone can point me in the right direction. I've been trying to solve this for a week now and nothing has worked.
