Cleanup Rules
Warning
Cleanup policies are a beta feature. They are not ready for production usage and there may be breaking changes. Normal semantic versioning and compatibility rules will not apply.

Kyverno has the ability to clean up (i.e., delete) existing resources in a cluster in two different ways. The first way is via a declarative policy definition in either a CleanupPolicy or ClusterCleanupPolicy. See the section on cleanup policies below for more details. The second way is via a reserved time-to-live (TTL) label added to a resource. See the cleanup label section for further details.
Cleanup Policy
Similar to other policies which can validate, mutate, generate, or verify images in resources, Kyverno can clean up resources by defining a new policy type called a CleanupPolicy. Cleanup policies come in both cluster-scoped and Namespaced flavors: a ClusterCleanupPolicy is cluster scoped while a CleanupPolicy is Namespaced. A cleanup policy uses the familiar match/exclude block to select and exclude resources which are subjected to the cleanup process. An optional conditions{} block uses common expressions, similar to those found in preconditions and deny rules, to query the contents of the selected resources in order to refine the selection process. Optional context variables can be used to fetch data from other resources to factor into the cleanup process. And, lastly, a schedule field defines, in cron format, when the rule should run.
Note
Since cleanup policies always operate against existing resources in a cluster, policies created with subjects, Roles, or ClusterRoles in the match/exclude block are not allowed since this information is only known at admission time.

An example ClusterCleanupPolicy is shown below. This cleanup policy removes Deployments which have the label canremove: "true" if they have fewer than two replicas, on a schedule of every 5 minutes.
apiVersion: kyverno.io/v2beta1
kind: ClusterCleanupPolicy
metadata:
  name: cleandeploy
spec:
  match:
    any:
    - resources:
        kinds:
          - Deployment
        selector:
          matchLabels:
            canremove: "true"
  conditions:
    any:
    - key: "{{ target.spec.replicas }}"
      operator: LessThan
      value: 2
  schedule: "*/5 * * * *"
Values from resources to be evaluated during a policy may be referenced with target.*, similar to mutate existing rules.
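For comparison, a Namespaced CleanupPolicy uses the same spec shape but applies only to resources in its own Namespace. Below is a minimal, hypothetical sketch which removes completed Jobs labeled cleanup: "true" once per hour; the policy name, Namespace, label, and condition are illustrative assumptions rather than values from this documentation.

apiVersion: kyverno.io/v2beta1
kind: CleanupPolicy
metadata:
  name: cleanjobs             # hypothetical name
  namespace: batch-workloads  # hypothetical Namespace
spec:
  match:
    any:
    - resources:
        kinds:
          - Job
        selector:
          matchLabels:
            cleanup: "true"
  conditions:
    all:
    - key: "{{ target.status.succeeded || `0` }}"
      operator: GreaterThanOrEquals
      value: 1
  schedule: "0 * * * *"

As with the cluster-scoped example, the conditions block references the contents of the candidate resource through target.*.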
Because Kyverno follows the principle of least privilege, it may be necessary to grant additional permissions to the cleanup controller depending on the resources you wish to remove. Kyverno helps inform you when additional permissions are required by validating them at the time a new cleanup policy is installed. See the Customizing Permissions section for more details.
An example ClusterRole which allows Kyverno to cleanup Pods is shown below. This may need to be customized based on the values used to deploy Kyverno.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: cleanup-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
  name: kyverno:cleanup-pods
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
  - delete
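Following the same pattern, the cleandeploy policy shown earlier would need permission to delete Deployments. A sketch of such a ClusterRole is shown below; the role name is an assumption, and the labels may need to be adjusted to match how Kyverno was deployed in your cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: cleanup-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
  name: kyverno:cleanup-deployments   # hypothetical name
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - watch
  - list
  - delete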
Cleanup Label
In addition to policies which can declaratively define what resources to remove and when to remove them, the second option for cleanup involves assigning a reserved label called cleanup.kyverno.io/ttl to the exact resource(s) which should be removed. The value of this label can take one of two supported formats; any unrecognized format will trigger a warning.

- An absolute time specified in ISO 8601 format (ex., 2023-10-04T003000Z or 2023-10-04), as illustrated in the fragment after this list
- A remaining time calculated from when the label was observed (ex., 5m, 4h, or 1d)
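For the absolute format, a label value might look like the fragment below. Kubernetes label values cannot contain colons, which is why the time portion is written without them; the resource this metadata belongs to is left unspecified here.

metadata:
  labels:
    # absolute expiry in ISO 8601 form, written without colons
    cleanup.kyverno.io/ttl: 2023-10-04T003000Z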
This label can be assigned to any resource and so long as Kyverno has the needed permissions to delete the resource (see above section for an example), it will be removed at the designated time.
For example, creation of this Pod will cause Kyverno to clean it up after two minutes, without the need for a cleanup policy.
apiVersion: v1
kind: Pod
metadata:
  labels:
    cleanup.kyverno.io/ttl: 2m
  name: foo
spec:
  containers:
  - args:
    - sleep
    - 1d
    image: busybox:1.35
    name: foo
Although labeled resources are watched by Kyverno, the cleanup interval (the time resolution at which any cleanup can be performed) is controlled by a flag passed to the cleanup controller called ttlReconciliationInterval. This value is set to 1m by default and can be changed if a longer resolution is required.
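For illustration only, this flag is passed as a container argument to the cleanup controller. In a Deployment manifest the relevant portion might look like the fragment below; the container name, the exact argument form, and any other arguments are assumptions and will vary by installation.

# Fragment of the cleanup controller's Deployment spec (illustrative only)
containers:
- name: controller                    # container name is an assumption
  args:
  - --ttlReconciliationInterval=2m    # lengthen the default 1m resolution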
Because this is a label, there is an opportunity to chain other Kyverno functionality around it. For example, it is possible to use a Kyverno mutate rule to assign this label to matching resources, as shown in the sketch below. A validate rule could be written prohibiting, for example, users in the infra-ops group from assigning the label to resources in certain Namespaces. Or, Kyverno could generate a new resource with this label as part of the resource definition.
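As an illustration of the mutate option mentioned above, the following is a minimal, hypothetical ClusterPolicy sketch that labels new Pods in a sandbox Namespace so they are cleaned up after eight hours; the policy name, Namespace, and TTL value are assumptions.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ttl-label            # hypothetical name
spec:
  rules:
  - name: add-cleanup-ttl
    match:
      any:
      - resources:
          kinds:
            - Pod
          namespaces:
            - sandbox            # hypothetical Namespace
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            cleanup.kyverno.io/ttl: 8h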