The Logging operator now powers Rancher's logging solution in place of the former in-house solution.
You can enable logging for a Rancher-managed cluster by going to the Apps page and installing the logging app.
- In the Rancher UI, go to the cluster where you want to install logging and click Cluster Explorer.
- Click Apps.
- Click the rancher-logging app.
- Scroll to the bottom of the Helm chart README and click Install.
Result: The logging app is deployed in the cattle-logging-system namespace.
- From the Cluster Explorer, click Apps & Marketplace.
- Click Installed Apps.
- Go to the cattle-logging-system namespace and check the boxes for rancher-logging and rancher-logging-crd.
- Click Delete.
- Confirm Delete.
Result: rancher-logging is uninstalled.
For more information about how the logging application works, see this section.
Role-based Access Control
Rancher logging has two roles, logging-admin and logging-view. For more information on how and when to use these roles, see this page.
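These roles can be granted like any other Kubernetes ClusterRole. As a sketch (the subject name and namespace below are hypothetical, and the exact role scope may vary by Rancher version), a binding that gives a user read-only access to logging resources might look like:

```yaml
# Illustrative only: binds the logging-view role to a hypothetical user
# "alice" in the cattle-logging-system namespace. Verify the role name
# and intended scope against your Rancher version before applying.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-logging-view
  namespace: cattle-logging-system
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: logging-view
  apiGroup: rbac.authorization.k8s.io
```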
Configuring Logging Custom Resources
To configure Flows, ClusterFlows, Outputs, and ClusterOutputs, go to the Cluster Explorer in the Rancher UI. In the upper left corner, click Cluster Explorer > Logging.
Flows and ClusterFlows
For help with configuring Flows and ClusterFlows, see this page.
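As a minimal sketch of what a Flow looks like under the Logging operator's v1beta1 API (the pod label and output name here are hypothetical placeholders):

```yaml
# Collects logs from pods labeled app=my-app in this namespace and
# routes them to a hypothetical Output named "my-output".
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: my-app-flow
  namespace: default
spec:
  match:
    - select:
        labels:
          app: my-app
  localOutputRefs:
    - my-output
```

A ClusterFlow has the same shape but is cluster-scoped and references ClusterOutputs instead.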
Outputs and ClusterOutputs
For help with configuring Outputs and ClusterOutputs, see this page.
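For orientation, an Output declares where matched logs are sent. A minimal sketch (the Elasticsearch host and port are placeholders, not values from this document):

```yaml
# Forwards logs to a hypothetical Elasticsearch endpoint; a Flow in the
# same namespace can reference this Output by name.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: my-output
  namespace: default
spec:
  elasticsearch:
    host: elasticsearch.example.com
    port: 9200
    scheme: https
```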
Configuring the Logging Helm Chart
For a list of options that can be configured when the logging application is installed or upgraded, see this page.
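Chart options are supplied as Helm values at install or upgrade time. As an illustration only (the key below is an assumption based on common Rancher chart conventions; the authoritative list is in the chart's README):

```yaml
# Hypothetical values.yaml override: pull images from a private registry.
# Confirm the exact key against the rancher-logging chart README.
global:
  cattle:
    systemDefaultRegistry: "registry.example.com"
```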
- Rancher v2.5.8+
- Rancher before v2.5.8
As of Rancher v2.5.8, logging support for Windows clusters has been added, and logs can be collected from Windows nodes.
For details on how to enable or disable Windows node logging, see this section.
Clusters with Windows workers support exporting logs from Linux nodes, but Windows node logs cannot currently be exported; only Linux node logs are collected.
To allow the logging pods to be scheduled on Linux nodes, tolerations must be added to the pods. Refer to the Working with Taints and Tolerations section for details and an example.
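A toleration of this kind can be passed through the chart's Helm values. The sketch below assumes the cattle.io/os=linux taint commonly applied to Linux nodes in Rancher clusters with Windows workers; check your nodes' actual taints before using it:

```yaml
# Hypothetical Helm values fragment: lets logging pods tolerate the
# cattle.io/os=linux taint so they schedule onto Linux nodes.
tolerations:
  - key: "cattle.io/os"
    operator: "Equal"
    value: "linux"
    effect: "NoSchedule"
```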
Working with a Custom Docker Root Directory
For details on using a custom Docker root directory, see this section.
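If your nodes keep Docker data somewhere other than the default /var/lib/docker, the chart can be pointed at that path. The key below is an assumption about the chart's values layout; verify it against the chart README:

```yaml
# Hypothetical values fragment: tell the logging agents where to find
# container logs when Docker uses a non-default root directory.
global:
  dockerRootDirectory: /mnt/docker
```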
Working with Taints and Tolerations
For information on how to use taints and tolerations with the logging application, see this page.
Logging V2 with SELinux
Available as of v2.5.8
For information on enabling the logging application for SELinux-enabled nodes, see this section.
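Enabling SELinux support is typically a single chart toggle. The key below is an assumption, not confirmed by this document; check the chart's options before relying on it:

```yaml
# Hypothetical values fragment for SELinux-enabled nodes.
global:
  seLinux:
    enabled: true
```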
Additional Logging Sources
By default, Rancher collects logs for control plane components and node components for all cluster types. In some cases additional logs can be collected. For details, see this section.
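These extra sources are enabled per cluster type through chart values. As a sketch (the exact keys are assumptions based on the rancher-logging chart's conventions; confirm them in the chart README):

```yaml
# Hypothetical values fragment: collect additional logs on an RKE cluster.
additionalLoggingSources:
  rke:
    enabled: true
```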
The Logging Buffer Overloads Pods
Depending on your configuration, the default buffer size may be too large and cause pod failures. One way to reduce the load is to lower the logger's flush interval so the buffer is emptied more frequently, which prevents logs from overfilling it. You can also add more flush threads to handle bursts when many logs arrive at once.
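Both knobs can be set in the buffer section of an Output. The sketch below uses fluentd buffer parameter names; the host, port, and values chosen are illustrative, not recommendations:

```yaml
# Sketch: flush the buffer every 10s and use 4 flush threads so bursts
# of logs are drained before the buffer overfills.
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: my-output
  namespace: default
spec:
  elasticsearch:
    host: elasticsearch.example.com
    port: 9200
    buffer:
      flush_interval: 10s
      flush_thread_count: 4
```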
cattle-logging Namespace Being Recreated
If your cluster previously deployed logging from the Cluster Manager UI, you may encounter an issue where its cattle-logging namespace is continually being recreated.
The solution is to delete all projectloggings.management.cattle.io custom resources from the cluster-specific namespace in the management cluster. The existence of these custom resources causes Rancher to create the cattle-logging namespace in the downstream cluster if it does not exist.
The cluster namespace matches the cluster ID, so we need to find the cluster ID for each cluster.
- In your web browser, navigate to your cluster(s) in either the Cluster Manager UI or the Cluster Explorer UI.
- Copy the <cluster-id> portion from one of the URLs below. The <cluster-id> portion is the cluster namespace name.

```
# Cluster Management UI
https://<your-url>/c/<cluster-id>/

# Cluster Explorer UI (Dashboard)
https://<your-url>/dashboard/c/<cluster-id>/
```
Now that we have the <cluster-id> namespace, we can delete the CRs that cause cattle-logging to be continually recreated.
Warning: before running these commands, ensure that the legacy logging feature installed from the Cluster Manager UI is not currently in use.
```
kubectl delete clusterloggings.management.cattle.io -n <cluster-id>
kubectl delete projectloggings.management.cattle.io -n <cluster-id>
```