Whilst setting up a new "vanilla" Kubernetes (K8s) cluster across two Ubuntu 20.04.5 VMs, I kept hitting a networking issue.
Having created the cluster using kubeadm init as per the following: -
export ip_address=$(ifconfig eth0 | grep inet | awk '{print $2}')
kubeadm init --apiserver-advertise-address=$ip_address --pod-network-cidr=172.20.0.0/16 --cri-socket unix:///run/containerd/containerd.sock
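As an aside, ifconfig is deprecated in favour of ip on newer Ubuntu releases, so here's a rough equivalent using iproute2 - a sketch, assuming the interface is still named eth0: -
# Grab the first IPv4 address on eth0 (assumes the NIC is named eth0)
export ip_address=$(ip -4 addr show eth0 | grep -oP '(?<=inet )[\d.]+' | head -n 1)
echo $ip_address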
and having added Flannel as my Container Network Interface (CNI), as follows: -
curl -sL -o /tmp/kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f /tmp/kube-flannel.yml
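Flannel runs as a DaemonSet, one pod per node, so a quick sanity check (the kube-flannel namespace and DaemonSet name come from the manifest above) is to watch its pods: -
# Watch the Flannel DaemonSet pods start up - one per node
kubectl get pods -n kube-flannel -o wide --watch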
I was having problems with the kube-flannel pods not starting.
Initially, I thought it was because I was specifying the --pod-network-cidr switch which, I'd read, was only required for the Calico CNI.
Therefore, I reset the cluster using kubeadm reset and re-ran the init as follows: -
kubeadm init --apiserver-advertise-address=$ip_address --cri-socket unix:///run/containerd/containerd.sock
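(As an aside, kubeadm reset takes the same --cri-socket switch, and it warns that it does not clean up CNI configuration itself; clearing /etc/cni/net.d by hand, roughly as below, stops stale Flannel config surviving the reset.)
# Tear down the cluster; note that reset does NOT remove CNI config
sudo kubeadm reset --cri-socket unix:///run/containerd/containerd.sock
# Clear out any leftover CNI configuration by hand
sudo rm -f /etc/cni/net.d/*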
but, this time around, the Flannel pods failed with: -
pod cidr not assigned
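That message surfaces in the Flannel container logs, which can be pulled straight from the DaemonSet - assuming the kube-flannel-ds name from the manifest above: -
# Tail the logs of one of the Flannel pods, via its DaemonSet
kubectl logs -n kube-flannel daemonset/kube-flannel-ds --tail=20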
I resorted to Google, and found this: -
pod cidr not assgned #728
in the Flannel repo on GitHub.
One response said: -
The node needs to have a podCidr. Can you check if it does - kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
When I checked: -
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
nothing was returned, which was worrying.
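A slightly friendlier version of the same check lists each node against its podCIDR - <none> meaning nothing has been assigned: -
# Show each node's name alongside its assigned podCIDR
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR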
I then read on ...
Did you see this note in the kubeadm docs
There are pod network implementations where the master also plays a role in allocating a set of network address space for each node. When using flannel as the pod network (described in step 3), specify --pod-network-cidr=10.244.0.0/16. This is not required for any other networks besides Flannel.
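That 10.244.0.0/16 value isn't arbitrary; it needs to match the Network value baked into Flannel's net-conf.json ConfigMap, which can be confirmed against the downloaded manifest - it should show "Network": "10.244.0.0/16": -
# Flannel's expected pod network lives in net-conf.json inside the manifest
grep -A 4 'net-conf.json' /tmp/kube-flannel.yml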
So, one more kubeadm reset later, I re-ran the init with Flannel's expected CIDR: -
kubeadm init --apiserver-advertise-address=$ip_address --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///run/containerd/containerd.sock
and ... IT WORKED!
kubectl get nodes
NAME                  STATUS   ROLES           AGE   VERSION
acarids1.foobar.com   Ready    control-plane   16m   v1.25.4
acarids2.foobar.com   Ready    <none>          14m   v1.25.4
kubectl get pods -A
NAMESPACE      NAME                                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-p9rxb                          1/1     Running   0          13m
kube-flannel   kube-flannel-ds-ztbj7                          1/1     Running   0          13m
kube-system    coredns-565d847f94-r8fdp                       1/1     Running   0          15m
kube-system    coredns-565d847f94-x4qhk                       1/1     Running   0          15m
kube-system    etcd-acarids1.foobar.com.                      1/1     Running   2          16m
kube-system    kube-apiserver-acarids1.foobar.com.            1/1     Running   2          16m
kube-system    kube-controller-manager-acarids1.foobar.com.   1/1     Running   0          16m
kube-system    kube-proxy-2nzbd                               1/1     Running   0          14m
kube-system    kube-proxy-jcwzr                               1/1     Running   0          15m
kube-system    kube-scheduler-acarids1.foobar.com.            1/1     Running   2          16m
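And, closing the loop, the podCIDR check from earlier now returns a value per node; with kubeadm handing each node a /24 out of the /16, a two-node cluster should show something like: -
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
10.244.0.0/24 10.244.1.0/24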