All the high-level steps to switch from Docker to containerd. In this example, our k8s cluster was built with kubeadm on an RPM-based distro, and we use OS-based firewalls. We use Minio and Velero for the backup and restore; the install is temporary.
Warning
You need to know what you are doing. Test this on non-prod first and make sure it works as advertised.
All the steps below need to be done on the master where you are going to run the backup and restore:
mkdir ~/minio
cd ~/minio
# download the Minio server binary and make it executable
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
# start Minio: S3 API on :9000, web console on :9001 (runs in the foreground)
MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=somepassword ./minio server /mnt/data --console-address ":9001"
# credentials file Velero will use to talk to Minio
cat > ~/minio/minio_secrets << EOF
[default]
aws_access_key_id = admin
aws_secret_access_key = somepassword
EOF
# allow the k8s nodes to reach the Minio S3 API; replace with your node IPs/subnet
iptables -A INPUT -p tcp --dport 9000 -s "k8s_nodes, you have to change this" -j ACCEPT
Log on to the Minio console at localhost:9001 and create a bucket named velero.
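Alternatively, the bucket can be created from the CLI with Minio's mc client; a minimal sketch (the alias name myminio is arbitrary):
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
# point mc at the local Minio and create the velero bucket
./mc alias set myminio http://localhost:9000 admin somepassword
./mc mb myminio/velero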
Install the binary from Velero’s site: Velero Latest binary. Basically just download and extract the tarball, then copy the velero binary to /usr/local/bin/.
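For example, a sketch assuming v1.7.0 (substitute whatever the latest release is):
wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz
tar -xzf velero-v1.7.0-linux-amd64.tar.gz
cp velero-v1.7.0-linux-amd64/velero /usr/local/bin/
chmod +x /usr/local/bin/velero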
We are now going to install Velero on our k8s cluster. Install Velero on the same master that you installed Minio on, and make sure the IP in the URLs below is the IP of the master you are working on.
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.3.0 \
--bucket velero \
--secret-file ./minio_secrets \
--use-volume-snapshots=false \
--use-restic \
--backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://192.168.208.118:9000,publicUrl=http://192.168.208.118:9000
Make sure the backup location is available:
velero get backup-location
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
default aws velero Available 2021-10-25 17:35:56 +0200 SAST ReadWrite true
Make backups:
# save the list of non-system namespaces that have deployments (with -A, the first column is the namespace)
k get deploy -A --no-headers | grep -v 'kube\|monitoring' > /root/minio/deploy_state_before_reset.txt
# one Velero backup per namespace, named after the namespace (sort -u avoids duplicate backup names)
awk '{print $1}' /root/minio/deploy_state_before_reset.txt | sort -u | while read NS ; do velero create backup $NS --include-namespaces $NS ; done
# save the NodePort services and the API server endpoint, we need them after the reset
k get svc -A | grep NodePort > /root/minio/NodePort_Config.txt
grep server /etc/kubernetes/kubelet.conf > /root/minio/k8s_api_info.txt
Warning
Make sure all backups finish before you continue: tail the logs of the velero pod (“velero-xxxxxxxxx-xxxx” in the velero namespace), or run “velero get backups” and wait until every backup shows Completed.
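A quick way to spot anything that has not finished (a sketch; column 2 of the CLI output is the status, so this prints only backups that are not Completed):
velero get backups | awk 'NR>1 && $2!="Completed" {print $1, $2}'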
Reset k8s:
Destructive
You are going to reset your cluster! You are deleting your cluster! Make sure the backup completed successfully in the previous step.
The following needs to be done on all the nodes:
# tear down the node and wipe the k8s and Docker state
kubeadm reset
rm -rf ~/.kube/config
systemctl stop docker
systemctl disable docker
rm -rf /etc/kubernetes/*
rm -rf /var/lib/docker/*
Uninstall Docker.
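On an RPM-based distro that is typically something like the below; the exact package names depend on how Docker was installed:
yum remove -y docker-ce docker-ce-cli
# or, if the distro-packaged engine was used:
# yum remove -y docker docker-engine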
Install containerd:
Install containerd: see Containerd Downloads, and also look at the k8s containerd documentation. There is also an RPM repo available; more info in the k8s containerd documentation.
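Whichever route you take, make sure the CRI plugin is enabled and containerd uses the systemd cgroup driver. A minimal sketch using the RPM repo:
# install and generate a default config
yum install -y containerd.io
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# in config.toml: remove "cri" from disabled_plugins (the rpm ships it disabled) and set
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
systemctl enable --now containerd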
Action
Reboot all nodes once containerd is installed.
Bootstrap k8s cluster:
On one master:
cat /root/minio/k8s_api_info.txt
server: some.api.server:6443
# re-initialise the control plane, using the API endpoint saved in k8s_api_info.txt
kubeadm init --control-plane-endpoint "some.api.server:6443" --pod-network-cidr "10.244.0.0/16" --upload-certs
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config
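Note that kubeadm does not install a pod network. The 10.244.0.0/16 CIDR above matches Flannel’s default, so if Flannel is your CNI, re-apply it now (manifest path current at the time of writing; use whatever CNI your cluster ran before):
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml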
On the two other master nodes, run the below command, which you get from a successful “kubeadm init”:
kubeadm join some.api.server:6443 --token 1qkbuf.068zdzlk09uds2wp \
--discovery-token-ca-cert-hash sha256:74b7fe171e1891f8a76eced21f970834f7ff9c0995dc847cece4a621834836cb \
--control-plane --certificate-key c54c009426c4bce91dc5329a7a4509dc12ae33b5b7dd059ac55dce5ee97155b9
mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config
chown $(id -u):$(id -g) /root/.kube/config
On the rest of the worker nodes, run the below command, also from a successful “kubeadm init”:
kubeadm join some.api.server:6443 --token 1qkbuf.068zdzlk09uds2wp \
--discovery-token-ca-cert-hash sha256:74b7fe171e1891f8a76eced21f970834f7ff9c0995dc847cece4a621834836cb
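Once everything has joined, confirm the nodes are Ready and actually running containerd:
kubectl get nodes -o wide
# the CONTAINER-RUNTIME column should show containerd://<version> on every node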
Restore velero backups:
On the master where Minio was installed:
cd ~/minio
# start Minio again with the same credentials as before the reset
MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=somepassword ./minio server /mnt/data --console-address ":9001"
# re-open the firewall for the k8s nodes; replace with your node IPs/subnet
iptables -A INPUT -p tcp --dport 9000 -s "k8s_nodes, you have to change this" -j ACCEPT
# reinstall Velero exactly as before the reset
velero install \
--provider aws \
--plugins velero/velero-plugin-for-aws:v1.3.0 \
--bucket velero \
--secret-file ./minio_secrets \
--use-volume-snapshots=false \
--use-restic \
--backup-location-config region=minio,s3ForcePathStyle=true,s3Url=http://192.168.208.118:9000,publicUrl=http://192.168.208.118:9000
“velero get backups” should now show all the backups you took before the k8s reset.
Now restore using Velero:
awk '{print $1}' /root/minio/deploy_state_before_reset.txt | sort -u | while read NS ; do velero restore create $NS --from-backup $NS ; done
Check progress of the restores with “velero restore get”.
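Restored Services may come back with newly allocated NodePorts; use the saved NodePort_Config.txt to put the old ports back. A hypothetical sketch for one service (namespace myapp, service myapp-svc and port 30080 are examples, read the real values from the file):
kubectl -n myapp patch svc myapp-svc --type=json \
  -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30080}]'
Depending on your Velero version, “velero restore create” also accepts a --preserve-nodeports flag, which avoids this step altogether.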
Delete minio and velero:
Destructive
You are going to remove Velero and delete the Minio data!
# remove Velero from the cluster
k delete namespace velero
kubectl delete crd backups.velero.io
# (Velero installs more velero.io CRDs; delete them the same way for a full cleanup)
# stop the Minio process (Ctrl-C or pkill minio) and delete its data
rm -rf /mnt/data/*