Removing Kubernetes Components from Nodes
This section describes how to disconnect a node from a Rancher-launched Kubernetes cluster and remove all of the Kubernetes components from the node. This process allows you to use the node for other purposes.
When you use Rancher to install Kubernetes on new nodes in an infrastructure provider, resources (containers/virtual network interfaces) and configuration items (certificates/configuration files) are created.
When you remove nodes from your Rancher-launched Kubernetes cluster while they are in an Active state, those resources are automatically cleaned up, and the only action needed afterwards is a restart of the node. When a node has become unreachable, the automatic cleanup process cannot run; this page also describes the steps that need to be executed manually before such a node can be added to a cluster again.
What Gets Removed?
When cleaning nodes provisioned using Rancher, the following components are deleted based on the type of cluster node you're removing.
Removed Component | Nodes Hosted by Infrastructure Provider | Custom Nodes | Hosted Cluster | Registered Nodes |
---|---|---|---|---|
The Rancher deployment namespace (cattle-system by default) | ✓ | ✓ | ✓ | ✓ |
serviceAccount, clusterRoles, and clusterRoleBindings labeled by Rancher | ✓ | ✓ | ✓ | ✓ |
Labels, annotations, and finalizers | ✓ | ✓ | ✓ | ✓ |
Rancher Deployment | ✓ | ✓ | ✓ | |
Machines, clusters, projects, and user custom resource definitions (CRDs) | ✓ | ✓ | ✓ | |
All resources created under the management.cattle.io API group | ✓ | ✓ | ✓ | |
All CRDs created by Rancher v2.x | ✓ | ✓ | ✓ | |
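If you want to check which of these Rancher-created resources currently exist on a cluster, a quick inspection is possible with kubectl. This is only a sketch, assuming you have kubectl access to the cluster; the grep patterns are examples, not an exhaustive filter:
# List CRDs created by Rancher (most live under cattle.io API groups)
kubectl get crds | grep cattle.io
# List Rancher-created namespaces, such as cattle-system
kubectl get namespaces | grep cattle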
Removing a Node from a Cluster by Rancher UI
When the node is in an Active state, removing it from the cluster triggers a process that cleans up the node. Restart the node after the automatic cleanup process is done to make sure any non-persistent data is properly removed.
To restart a node:
# using reboot
$ sudo reboot
# using shutdown
$ sudo shutdown -r now
Removing Rancher Components from a Cluster Manually
When an unreachable node is removed from the cluster, the automatic cleanup process can't be triggered on that node. Follow the steps below to manually remove the Rancher components.
The commands listed below will remove data from the node. Make sure you have created a backup of files you want to keep before executing any of the commands as data will be lost.
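For example, a simple way to back up configuration you want to keep is to archive the relevant directories before cleaning the node. The paths and destination below are only an illustration; adjust them to your environment:
# Archive selected configuration directories to a backup file (tar strips the leading "/" and prints a notice)
sudo tar czf /tmp/node-backup.tar.gz /etc/kubernetes /etc/rancher /etc/cni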
Removing Rancher Components from Registered Clusters
For registered clusters, the process for removing Rancher is a little different. You have the option of simply deleting the cluster in the Rancher UI, or you can run a script that removes the Rancher components from the nodes. Both options make the same deletions.
After the registered cluster is detached from Rancher, the cluster's workloads will be unaffected and you can access the cluster using the same methods that you did before the cluster was registered into Rancher.
- By UI / API
- By Script
This process will remove data from your cluster. Make sure you have created a backup of files you want to keep before executing the command, as data will be lost.
After you initiate the removal of a registered cluster using the Rancher UI (or API), the following events occur.
1. Rancher creates a serviceAccount that it uses to remove the Rancher components from the cluster. This account is assigned the clusterRole and clusterRoleBinding permissions, which are required to remove the Rancher components.
2. Using the serviceAccount, Rancher schedules and runs a job that cleans the Rancher components off of the cluster. This job also references the serviceAccount and its roles as dependencies, so the job deletes them before its completion.
3. Rancher is removed from the cluster. However, the cluster persists, running the native version of Kubernetes.
Result: All components listed for registered clusters in What Gets Removed? are deleted.
Rather than cleaning registered cluster nodes using the Rancher UI, you can run a script instead.
1. Install kubectl.
2. Open a web browser, navigate to GitHub, and download user-cluster.sh.
3. Make the script executable by running the following command from the same directory as user-cluster.sh:
   chmod +x user-cluster.sh
4. Air Gap Environments Only: Open user-cluster.sh and replace yaml_url with the URL in user-cluster.yml. If you don't have an air gap environment, skip this step.
5. From the same directory, run the script and provide the rancher/rancher-agent image version, which should be equal to the version of Rancher used to manage the cluster (<RANCHER_VERSION>). Tip: Add the -dry-run flag to preview the script's outcome without making changes (see the example below).
   ./user-cluster.sh rancher/rancher-agent:<RANCHER_VERSION>
Result: The script runs. All components listed for registered clusters in What Gets Removed? are deleted.
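For example, to preview the cleanup for a cluster managed by a hypothetical Rancher v2.8.5 installation without changing anything, you could combine the image tag with the -dry-run flag mentioned above (the flag placement shown here is an assumption; check the script's usage output if it rejects it):
# Hypothetical version tag; replace with the Rancher version that manages your cluster
./user-cluster.sh -dry-run rancher/rancher-agent:v2.8.5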
Cleaning up Nodes
- RKE1
- RKE2
- K3s
Before you run the following commands, first remove the node through the Rancher UI.
To remove a node:
- Click ☰ and select Cluster Management.
- In the table of clusters, click the name of the cluster the node belongs to.
- In the first tab, click the checkbox next to the node's state.
- Click Delete.
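If you prefer to delete the node through the Rancher API instead of the UI, a request along these lines should work. The server URL, token format, and endpoint here are assumptions based on Rancher's v3 API; look up the node ID and exact URL in your Rancher server's API view:
# Delete a node object via the Rancher v3 API (hypothetical server URL, token, and node ID)
curl -s -X DELETE \
  -H "Authorization: Bearer <API_TOKEN>" \
  "https://rancher.example.com/v3/nodes/<NODE_ID>"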
If you remove the entire cluster instead of an individual node, or skip removing the node through the Rancher UI, follow these steps:
- Remove the Docker containers from the node and unmount any volumes.
- Reboot the node.
- Remove any remaining files.
- Confirm that network interfaces and IP tables were properly cleaned after the reboot. If not, reboot one more time.
Windows Nodes
To clean up a Windows node, run the script in c:\etc\rancher
. This script deletes Kubernetes-generated resources and the execution binary. It also drops the firewall rules and network settings:
pushd c:\etc\rancher
.\cleanup.ps1
popd
After you run this script, the node is reset and can be re-added to a Kubernetes cluster.
Docker Containers, Images, and Volumes
Be careful when cleaning up Docker containers. The following command will remove all Docker containers, images, and volumes on the node, including non-Rancher related containers:
# Remove all containers, including running ones
docker rm -f $(docker ps -qa)
# Remove all images
docker rmi -f $(docker images -q)
# Remove all volumes
docker volume rm $(docker volume ls -q)
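If the node also runs containers unrelated to Rancher or Kubernetes, you may prefer a more selective cleanup. As a rough sketch, containers started by the kubelet are named with a k8s_ prefix and can be targeted with a name filter; note that this does not remove the RKE infrastructure containers (such as etcd, kubelet, or kube-apiserver), which you would still need to remove by name:
# Remove only kubelet-managed containers (names starting with k8s_)
docker rm -f $(docker ps -qa --filter "name=k8s_")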
Mounts
Kubernetes components and secrets leave behind the following mounts:
/var/lib/kubelet
/var/lib/rancher
- Miscellaneous mounts in
/var/lib/kubelet/pods/
To unmount all mounts, run:
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
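To confirm that the unmounts succeeded, check that no Kubernetes-related mounts remain; the following command should produce no output:
# Verify that no kubelet or rancher mounts are left behind
mount | grep -E '/var/lib/(kubelet|rancher)'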
You need to remove the following components from Rancher-provisioned RKE2 nodes:
- The rancher-system-agent, which connects to Rancher and installs and manages RKE2.
- RKE2 itself.
Removing rancher-system-agent
To remove the rancher-system-agent, run the system-agent-uninstall.sh script:
curl https://raw.githubusercontent.com/rancher/system-agent/main/system-agent-uninstall.sh | sudo sh
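If you prefer not to pipe a remote script directly into a shell, you can download it first, review it, and then run it:
# Download the uninstall script, inspect it, then execute it
curl -o system-agent-uninstall.sh https://raw.githubusercontent.com/rancher/system-agent/main/system-agent-uninstall.sh
less system-agent-uninstall.sh
sudo sh system-agent-uninstall.sh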
Removing RKE2
To remove the RKE2 installation, run the rke2-uninstall.sh script that is already present on the node:
sudo rke2-uninstall.sh
You need to remove the following components from Rancher-provisioned K3s nodes:
- The rancher-system-agent, which connects to Rancher and installs and manages K3s.
- K3s itself.
Removing rancher-system-agent
To remove the rancher-system-agent, run the system-agent-uninstall.sh script:
curl https://raw.githubusercontent.com/rancher/system-agent/main/system-agent-uninstall.sh | sudo sh
Removing K3s
To remove the K3s installation, run the k3s-uninstall.sh script that is already present on the node:
sudo k3s-uninstall.sh
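Note that on K3s agent (worker-only) nodes, the uninstall script installed by K3s has a different name:
# On agent-only nodes, K3s installs k3s-agent-uninstall.sh instead
sudo k3s-agent-uninstall.sh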
Directories and Files
The following directories are used when adding a node to a cluster, and should be removed. You can remove a directory using rm -rf /directory_name.
Depending on the role you assigned to the node, some of these directories may or may not be present on the node.
Directories |
---|
/etc/ceph |
/etc/cni |
/etc/kubernetes |
/etc/rancher |
/opt/cni |
/opt/rke |
/run/secrets/kubernetes.io |
/run/calico |
/run/flannel |
/var/lib/calico |
/var/lib/etcd |
/var/lib/cni |
/var/lib/kubelet |
/var/lib/rancher |
/var/log/containers |
/var/log/kube-audit |
/var/log/pods |
/var/run/calico |
To clean the directories:
rm -rf /etc/ceph \
/etc/cni \
/etc/kubernetes \
/etc/rancher \
/opt/cni \
/opt/rke \
/run/secrets/kubernetes.io \
/run/calico \
/run/flannel \
/var/lib/calico \
/var/lib/etcd \
/var/lib/cni \
/var/lib/kubelet \
/var/lib/rancher \
/var/log/containers \
/var/log/kube-audit \
/var/log/pods \
/var/run/calico
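To confirm that the directories are gone, list them again; paths that were removed successfully only produce "No such file or directory" messages:
# Any path that still exists will be printed without an error
ls -d /etc/kubernetes /etc/rancher /var/lib/kubelet /var/lib/rancher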
Network Interfaces and Iptables
The remaining two components that are changed or configured are (virtual) network interfaces and iptables rules. Neither is persistent, meaning both are cleared when the node restarts. To remove these components, a restart of the node is recommended.
To restart a node:
# using reboot
$ sudo reboot
# using shutdown
$ sudo shutdown -r now
If you want to know more about the (virtual) network interfaces or iptables rules involved, see the sections below.
Network Interfaces
Depending on the network provider configured for the cluster that the node was part of, some of these interfaces may or may not be present on the node.
Interfaces |
---|
flannel.1 |
cni0 |
tunl0 |
caliXXXXXXXXXXX (random interface names) |
vethXXXXXXXX (random interface names) |
To list all interfaces:
# Using ip
ip address show
# Using ifconfig
ifconfig -a
To remove an interface:
ip link delete interface_name
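For example, on a node that used the flannel or canal network provider, you might remove the flannel VXLAN interface and the CNI bridge as follows. The interface names come from the table above and depend on your network provider:
# Remove the flannel VXLAN interface and the CNI bridge, if present
ip link delete flannel.1
ip link delete cni0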
Iptables
Depending on the network provider configured for the cluster that the node was part of, some of these chains may or may not be present on the node.
Iptables rules are used to route traffic from and to containers. The created rules are not persistent, so restarting the node will restore iptables to its original state.
Chains |
---|
cali-failsafe-in |
cali-failsafe-out |
cali-fip-dnat |
cali-fip-snat |
cali-from-hep-forward |
cali-from-host-endpoint |
cali-from-wl-dispatch |
cali-fw-caliXXXXXXXXXXX (random chain names) |
cali-nat-outgoing |
cali-pri-kns.NAMESPACE (chain per namespace) |
cali-pro-kns.NAMESPACE (chain per namespace) |
cali-to-hep-forward |
cali-to-host-endpoint |
cali-to-wl-dispatch |
cali-tw-caliXXXXXXXXXXX (random chain names) |
cali-wl-to-host |
KUBE-EXTERNAL-SERVICES |
KUBE-FIREWALL |
KUBE-MARK-DROP |
KUBE-MARK-MASQ |
KUBE-NODEPORTS |
KUBE-SEP-XXXXXXXXXXXXXXXX (random chain names) |
KUBE-SERVICES |
KUBE-SVC-XXXXXXXXXXXXXXXX (random chain names) |
To list all iptables rules:
iptables -L -t nat
iptables -L -t mangle
iptables -L
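If you can't reboot the node, you can flush the rules manually instead. Be aware that the commands below remove every iptables rule and user-defined chain on the node, not just the ones created by Kubernetes or the network provider, so only use them on a node you intend to fully repurpose:
# Flush all rules and delete all user-defined chains in the filter, nat, and mangle tables
iptables -F && iptables -X
iptables -t nat -F && iptables -t nat -X
iptables -t mangle -F && iptables -t mangle -X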