Using Fleet Behind a Proxy
In this section, you'll learn how to enable Fleet in a setup where the Rancher server has a public IP and the Kubernetes cluster has no public IP but is configured to use a proxy.
Rancher does not establish connections with registered downstream clusters. The Rancher agent deployed on the downstream cluster must be able to establish the connection with Rancher.
To set up Fleet to work behind a proxy, you will need to set the Agent Environment Variables for the downstream cluster. These are cluster-level configuration options.
Through the Rancher UI, you can configure these environment variables for any cluster type, including registered and custom clusters. The variables can be added while editing an existing cluster or while provisioning a new cluster.
For public downstream clusters, it is sufficient to set the required environment variables in the Rancher UI.
For private nodes or private clusters, the environment variables need to be set on the nodes themselves. The environment variables are then configured in the Rancher UI, typically when provisioning a custom cluster or when registering the private cluster. For an example of how to set the environment variables on an Ubuntu node in a K3s Kubernetes cluster, see this section.
Required Environment Variables
When adding Fleet agent environment variables for the proxy, replace <PROXY_IP> with your private proxy IP.
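The variables to set are the standard proxy variables. A minimal sketch of the values, assuming a proxy listening on port 8888 (the port and the `NO_PROXY` ranges are assumptions; adjust them for your environment and make sure `NO_PROXY` covers your cluster-internal CIDRs and service domains):

```
HTTP_PROXY=http://<PROXY_IP>:8888
HTTPS_PROXY=http://<PROXY_IP>:8888
NO_PROXY=127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local
```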
Setting Environment Variables in the Rancher UI
To add the environment variables to an existing cluster:
- Click ☰ > Cluster Management.
- Go to the cluster where you want to add environment variables and click ⋮ > Edit Config.
- Click Advanced Options.
- Click Add Environment Variable.
- Enter the required environment variables.
- Click Save.
Result: The Fleet agent works behind a proxy.
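To confirm that the agent picked up the configuration, you can check that its pod is running and inspect its environment. A sketch, assuming the agent runs in the `cattle-fleet-system` namespace (older Rancher versions use `fleet-system`):

```
kubectl get pods -n cattle-fleet-system
kubectl get deployment fleet-agent -n cattle-fleet-system \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```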
Setting Environment Variables on Private Nodes
For private nodes and private clusters, the proxy environment variables need to be set on the nodes themselves, as well as configured from the Rancher UI.
This example shows how the environment variables would be set on an Ubuntu node in a K3s Kubernetes cluster. First, connect to the node with agent forwarding:

```
ssh -o ForwardAgent=yes ubuntu@<public_proxy_ip>
```
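Once on the node, the proxy variables are exported into the shell before installing K3s, so the installation and the resulting service inherit them. A sketch, assuming a hypothetical private proxy IP of `10.0.0.5` and port 8888 (substitute your own values):

```shell
# Hypothetical example values; replace with your private proxy IP and port.
proxy_private_ip=10.0.0.5
export HTTP_PROXY="http://${proxy_private_ip}:8888"
export HTTPS_PROXY="http://${proxy_private_ip}:8888"
# Exclude loopback, private CIDRs, and cluster-internal domains from the proxy.
export NO_PROXY="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.cluster.local"
```

With the variables exported, K3s can then be installed in the usual way (for example with the installer from get.k3s.io), and it will pick up the proxy settings from the environment.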