Version: v2.8

RKE Cluster Configuration Reference

When Rancher installs Kubernetes, it uses RKE or RKE2 as the Kubernetes distribution.

This section covers the configuration options that are available in Rancher for a new or existing RKE Kubernetes cluster.


You can configure the Kubernetes options one of two ways:

  • Rancher UI: Use the Rancher UI to select options that are commonly customized when setting up a Kubernetes cluster.
  • Cluster Config File: Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.

The RKE cluster config options are nested under the rancher_kubernetes_engine_config directive. For more information, see the section about the cluster config file.

In clusters launched by RKE, you can edit any of the remaining options that follow.

For an example of RKE config file syntax, see the RKE documentation.

The forms in the Rancher UI don't include all advanced options for configuring RKE. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the RKE documentation.

Editing Clusters with a Form in the Rancher UI

To edit your cluster,

  1. In the upper left corner, click ☰ > Cluster Management.
  2. Go to the cluster you want to configure and click ⋮ > Edit Config.

Editing Clusters with YAML

Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration, by specifying them in YAML.

RKE clusters (also called RKE1 clusters) are edited differently than RKE2 and K3s clusters.

To edit an RKE config file directly from the Rancher UI,

  1. Click ☰ > Cluster Management.
  2. Go to the RKE cluster you want to configure and click ⋮ > Edit Config. This takes you to the RKE configuration form. Note: Because cluster provisioning changed in Rancher 2.6, ⋮ > Edit as YAML can be used to configure RKE2 clusters, but it can't be used to edit RKE1 configuration.
  3. In the configuration form, scroll down and click Edit as YAML.
  4. Edit the RKE options under the rancher_kubernetes_engine_config directive.

Configuration Options in the Rancher UI


Some advanced configuration options are not exposed in the Rancher UI forms, but they can be enabled by editing the RKE cluster configuration file in YAML. For the complete reference of configurable options for RKE Kubernetes clusters in YAML, see the RKE documentation.

Kubernetes Version

The version of Kubernetes installed on your cluster nodes. Rancher packages its own version of Kubernetes based on hyperkube.

For more detail, see Upgrading Kubernetes.

Network Provider

The Network Provider that the cluster uses. For more details on the different networking providers, please view our Networking FAQ.


After you launch the cluster, you can't change your network provider. Choose carefully: Kubernetes doesn't allow switching between network providers, so moving a cluster to a different provider would require you to tear down the entire cluster and all of its applications.

Out of the box, Rancher is compatible with the following network providers: Canal (the default), Flannel, Calico, and Weave.


The Weave CNI plugin for RKE with Kubernetes v1.27 and later is now deprecated. Weave will be removed in RKE with Kubernetes v1.30.

Notes on Weave:

When Weave is selected as the network provider, Rancher automatically enables encryption by generating a random password. If you want to specify the password manually, see how to configure your cluster using a Config File and the Weave Network Plug-in Options.
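As a sketch, a manually specified Weave password in the cluster config file might look like this (the password value is illustrative; choose your own):

```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: weave
    weave_network_provider:
      password: "MySecretWeavePassword"  # illustrative value, replace with your own
```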

Project Network Isolation

If your network provider allows project network isolation, you can choose whether to enable or disable inter-project communication.

Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.
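For example, with a network plugin that supports network policies, project network isolation is toggled by a top-level option in the cluster config file (a minimal sketch):

```yaml
enable_network_policy: true
rancher_kubernetes_engine_config:
  network:
    plugin: canal  # a plugin that supports Kubernetes network policy enforcement
```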

Kubernetes Cloud Providers

You can configure a Kubernetes cloud provider. If you want to use dynamically provisioned volumes and storage in Kubernetes, typically you must select the specific cloud provider in order to use it. For example, if you want to use Amazon EBS, you would need to select the aws cloud provider.


If the cloud provider you want to use is not listed as an option, you will need to use the config file option to configure the cloud provider. Please reference the RKE cloud provider documentation on how to configure the cloud provider.
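For instance, to select the aws cloud provider in the config file, the cloud_provider directive can be set (a minimal sketch; see the RKE cloud provider documentation for provider-specific options):

```yaml
rancher_kubernetes_engine_config:
  cloud_provider:
    name: aws
```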

Private Registries

The cluster-level private registry configuration is only used for provisioning clusters.

There are two main ways to set up private registries in Rancher: by setting the global default registry in the Settings tab of the global view, or by setting a private registry in the advanced options of the cluster-level settings. The global default registry is intended for air-gapped setups and registries that don't require credentials. The cluster-level private registry is intended for all setups in which the private registry requires credentials.

If your private registry requires credentials, you need to pass the credentials to Rancher by editing the cluster options for each cluster that needs to pull images from the registry.

The private registry configuration option tells Rancher where to pull the system images or addon images that will be used in your cluster.

  • System images are components needed to maintain the Kubernetes cluster.
  • Add-ons are used to deploy several cluster components, including network plug-ins, the ingress controller, the DNS provider, or the metrics server.

For more information on setting up a private registry for components applied during the provisioning of the cluster, see the RKE documentation on private registries.
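A cluster-level private registry with credentials might be sketched in the config file as follows (the URL and credentials are placeholders):

```yaml
rancher_kubernetes_engine_config:
  private_registries:
    - url: registry.example.com  # placeholder registry URL
      user: username             # placeholder credentials
      password: password
      is_default: true           # pull system and add-on images from this registry
```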

Rancher v2.6 introduced the ability to configure ECR registries for RKE clusters.

Authorized Cluster Endpoint

Authorized Cluster Endpoint (ACE) can be used to directly access the Kubernetes API server, without requiring communication through Rancher.


ACE is available on RKE, RKE2, and K3s clusters that are provisioned or registered with Rancher. It's not available on clusters in a hosted Kubernetes provider, such as Amazon's EKS.

ACE must be set up manually on RKE2 and K3s clusters. In RKE, ACE is enabled by default in Rancher-launched Kubernetes clusters, using the IP of the node with the controlplane role and the default Kubernetes self-signed certificates.

For more detail on how an authorized cluster endpoint works and why it is used, refer to the architecture section.

We recommend using a load balancer with the authorized cluster endpoint. For details, refer to the recommended architecture section.

Node Pools

For information on using the Rancher UI to set up node pools in an RKE cluster, refer to this page.

NGINX Ingress

If you want to publish your applications in a high-availability configuration, and you're hosting your nodes with a cloud-provider that doesn't have a native load-balancing feature, enable this option to use NGINX Ingress within the cluster.

Metrics Server Monitoring

Option to enable or disable Metrics Server.

Each cloud provider capable of launching a cluster with RKE can collect metrics and monitor your cluster nodes. Enable this option to view node metrics from your cloud provider's portal.

Pod Security Policy Support

Enables pod security policies for the cluster. After enabling this option, choose a policy using the Default Pod Security Policy drop-down.

You must have an existing Pod Security Policy configured before you can use this option.

Docker Version on Nodes

Configures whether nodes are allowed to run versions of Docker that Rancher doesn't officially support.

If you choose to require a supported Docker version, Rancher will stop pods from running on nodes that don't have a supported Docker version installed.

For details on which Docker versions were tested with each Rancher version, refer to the support maintenance terms.

Docker Root Directory

If the nodes you are adding to the cluster have Docker configured with a non-default Docker Root Directory (default is /var/lib/docker), specify the correct Docker Root Directory in this option.

Default Pod Security Policy

If you enable Pod Security Policy Support, use this drop-down to choose the pod security policy that's applied to the cluster.

Node Port Range

Option to change the range of ports that can be used for NodePort services. Default is 30000-32767.
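In the config file, the NodePort range is set on the kube-api service (a minimal sketch using the default range):

```yaml
rancher_kubernetes_engine_config:
  services:
    kube-api:
      service_node_port_range: 30000-32767
```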

Recurring etcd Snapshots

Option to enable or disable recurring etcd snapshots.
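In the config file, recurring snapshots are controlled by the etcd backup_config block (a sketch using the default interval and retention):

```yaml
rancher_kubernetes_engine_config:
  services:
    etcd:
      backup_config:
        enabled: true       # take recurring snapshots
        interval_hours: 12  # snapshot every 12 hours
        retention: 6        # keep the 6 most recent snapshots
```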

Agent Environment Variables

Option to set environment variables for Rancher agents. The environment variables can be set using key-value pairs. If the Rancher agent requires a proxy to communicate with the Rancher server, the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables can be set using agent environment variables.
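As a sketch, proxy-related agent environment variables might be set in the cluster config file like this (the proxy addresses are placeholders):

```yaml
agent_env_vars:
  - name: HTTP_PROXY
    value: http://proxy.example.com:80   # placeholder proxy address
  - name: HTTPS_PROXY
    value: http://proxy.example.com:80
  - name: NO_PROXY
    value: localhost,127.0.0.1,10.0.0.0/8
```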

Updating ingress-nginx

Clusters that were created before Kubernetes 1.16 will have an ingress-nginx updateStrategy of OnDelete. Clusters that were created with Kubernetes 1.16 or newer will have RollingUpdate.

If the updateStrategy of ingress-nginx is OnDelete, you will need to delete the ingress-nginx pods to get the correct version for your deployment.

Cluster Agent Configuration and Fleet Agent Configuration

You can configure the scheduling fields and resource limits for the Cluster Agent and the cluster's Fleet Agent. You can use these fields to customize tolerations, affinity rules, and resource requirements. Additional tolerations are appended to a list of default tolerations and control plane node taints. If you define custom affinity rules, they override the global default affinity setting. Defining resource requirements sets requests or limits where there previously were none.


With this option, it's possible to override or remove rules that are required for the functioning of the cluster. We strongly recommend against removing or overriding these and any other affinity rules, as this may cause unwanted side effects:

  • affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution for cattle-cluster-agent
  • cluster-agent-default-affinity for cattle-cluster-agent
  • fleet-agent-default-affinity for fleet-agent

If you downgrade Rancher to v2.7.4 or below, your changes will be lost and the agents will redeploy without your customizations. The Fleet agent falls back to its built-in default values when it redeploys. If the Fleet version doesn't change during the downgrade, the redeploy won't be immediate.
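As a hedged sketch, appending a toleration and setting resource requirements for the Cluster Agent might look like this in the cluster config file (field names follow Rancher's deployment customization options; treat the values as illustrative):

```yaml
cluster_agent_deployment_customization:
  append_tolerations:            # appended to the default tolerations
    - effect: NoSchedule
      key: node-role.kubernetes.io/controlplane  # illustrative taint key
      value: "true"
  override_resource_requirements:  # illustrative requests and limits
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 150m
      memory: 128Mi
```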

RKE Cluster Config File Reference

Instead of using the Rancher UI to choose Kubernetes options for the cluster, advanced users can create an RKE config file. Using a config file allows you to set any of the options available in an RKE installation, except for system_images configuration. The system_images option is not supported when creating a cluster with the Rancher UI or API.

For the complete reference for configurable options for RKE Kubernetes clusters in YAML, see the RKE documentation.

Config File Structure in Rancher

RKE (Rancher Kubernetes Engine) is the tool that Rancher uses to provision Kubernetes clusters. Rancher's cluster config files used to have the same structure as RKE config files, but the structure changed so that in Rancher, RKE cluster config items are separated from non-RKE config items. Therefore, configuration for your cluster needs to be nested under the rancher_kubernetes_engine_config directive in the cluster config file. Cluster config files created with earlier versions of Rancher will need to be updated for this format. An example cluster config file is included below.

Example Cluster Config File

```yaml
# Cluster Config
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
# Rancher Config
rancher_kubernetes_engine_config: # Your RKE template config goes here.
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
  ingress:
    # Currently only nginx ingress provider is supported.
    # To disable ingress controller, set `provider: none`
    # To enable ingress on specific nodes, use the node_selector, eg:
    #   provider: nginx
    #   node_selector:
    #     app: ingress
    provider: nginx
  kubernetes_version: v1.15.3-rancher3-1
  monitoring:
    provider: metrics-server
  # If you are using calico on AWS
  #   network:
  #     plugin: calico
  #     calico_network_provider:
  #       cloud_provider: aws
  # To specify flannel interface
  #   network:
  #     plugin: flannel
  #     flannel_network_provider:
  #       iface: eth1
  # To specify flannel interface for canal plugin
  #   network:
  #     plugin: canal
  #     canal_network_provider:
  #       iface: eth1
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
  # services:
  #   kube-api:
  #     service_cluster_ip_range:
  #   kube-controller:
  #     cluster_cidr:
  #     service_cluster_ip_range:
  #   kubelet:
  #     cluster_domain: cluster.local
  #     cluster_dns_server:
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 0
      retention: 72h
      snapshot: false
      uid: 0
    kube-api:
      always_pull_images: false
      pod_security_policy: false
      service_node_port_range: 30000-32767
  ssh_agent_auth: false
windows_prefered_cluster: false
```

Default DNS provider

The table below indicates which DNS provider is deployed by default. See the RKE documentation on DNS providers for more information on how to configure a different DNS provider. CoreDNS can only be used on Kubernetes v1.12.0 and higher.

| Rancher version   | Kubernetes version  | Default DNS provider |
| ----------------- | ------------------- | -------------------- |
| v2.2.5 and higher | v1.14.0 and higher  | CoreDNS              |
| v2.2.5 and higher | v1.13.x and lower   | kube-dns             |
| v2.2.4 and lower  | any                 | kube-dns             |

Rancher Specific Parameters in YAML

Besides the RKE config file options, there are also Rancher-specific settings that can be configured in the Config File (YAML):

docker_root_dir

See Docker Root Directory.

enable_cluster_monitoring

Option to enable or disable Cluster Monitoring.

enable_network_policy

Option to enable or disable Project Network Isolation.

Project network isolation is available if you are using any RKE network plugin that supports the enforcement of Kubernetes network policies, such as Canal or the Cisco ACI plugin.

local_cluster_auth_endpoint

See Authorized Cluster Endpoint.

```yaml
local_cluster_auth_endpoint:
  enabled: true
  fqdn: "FQDN"
  ca_certs: |-
```

Custom Network Plug-in

You can add a custom network plug-in by using the user-defined add-on functionality of RKE. You define any add-on that you want deployed after the Kubernetes cluster is deployed.

There are two ways that you can specify an add-on:

  • In-line add-ons, with the YAML manifests nested directly under the addons directive.
  • Add-ons from files or URLs, referenced under the addons_include directive.

For an example of how to configure a custom network plug-in by editing the cluster.yml, refer to the RKE documentation.
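As a sketch, a custom network plug-in can be supplied by disabling the built-in plugins and referencing the plug-in's manifest as a user-defined add-on (the manifest URL is a placeholder):

```yaml
rancher_kubernetes_engine_config:
  network:
    plugin: none  # disable the built-in network plugins
  addons_include:
    - https://example.com/my-cni-plugin.yaml  # placeholder manifest URL
```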