
Fun With Kustomize nameReference

In a previous post we looked at how to add extensions to the new Keycloak operator. In that article we used a ConfigMap to store those extensions. In the real world a PersistentVolumeClaim (PVC) would be a more realistic choice, especially when the extensions are larger or more numerous.

In this post we will look at how to change out the ConfigMap for a PVC, and how to set up a Job that can fill that PVC with the content we want.

Moving to a PersistentVolumeClaim

Looking at the previous example, we would change the Keycloak CR to look like this:

apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  labels:
    app: sso
  name: example-keycloak
  namespace: keycloak-test
spec:
  hostname:
    hostname: keycloak.apps-crc.testing
  http:
    tlsSecret: my-tls-secret
  ingress:
    annotations:
      route.openshift.io/termination: reencrypt
  instances: 1
  startOptimized: false
  unsupported:
    podTemplate:
      spec:
        containers:
        - volumeMounts:
          - mountPath: /opt/keycloak/providers
            name: providers
        volumes:
        - name: providers
          persistentVolumeClaim:
            claimName: keycloak-providers

This swaps our ConfigMap for a PVC (do not forget to create the PVC with the correct name). The only problem left is how to get our extensions onto the PVC.
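As for creating the PVC itself, a minimal manifest could look something like this sketch; the requested size and access mode are assumptions, so adjust them to your needs:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: keycloak-providers
  namespace: keycloak-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # 1Gi is an arbitrary example size
      storage: 1Gi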

Filling the PVC

One way to fill our PVC would be with a ConfigMap and a Job. If we give the ConfigMap a key, say downloads, and fill it with a list of URLs to download, then the Job can mount the providers PVC and the ConfigMap, loop over the lines in the downloads key, and run curl on each URL.
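Such a ConfigMap could look like this sketch; the two URLs are placeholders for whatever extensions you actually want to download:

apiVersion: v1
kind: ConfigMap
metadata:
  name: download-config
  namespace: keycloak-test
data:
  # one URL per line; the Job below reads this key from /downloadconfig/downloads
  downloads: |
    https://example.com/extensions/first-extension.jar
    https://example.com/extensions/second-extension.jar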

Such a job could look something like this:

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: download-config
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - image: ubi8
        name: download
        command:
        - /bin/bash
        - -c
        - |
          cd /providers
          echo "Cleaning out old versions"
          rm -f *.jar *.JAR
          while IFS="" read -r SOURCE || [ -n "${SOURCE}" ]
          do
            echo "Downloading ${SOURCE}"
            curl -L -O "${SOURCE}"
          done < /downloadconfig/downloads
          ls -l /providers
        resources: {}
        volumeMounts:
        - name: keycloak-providers
          mountPath: /providers
        - name: download-config
          mountPath: /downloadconfig
      restartPolicy: Never
      volumes:
      - name: keycloak-providers
        persistentVolumeClaim:
          claimName: keycloak-providers
      - name: download-config
        configMap:
          name: download-config
status: {}

We want updates

So far this has all worked out splendidly for us. We’ve downloaded our extensions, we’ve configured Keycloak to install those extensions, and we can actually use those extensions in our Keycloak. But unfortunately we haven’t accounted for updates to our extensions, or the possibility of adding more extensions.

In those cases updating the ConfigMap is fairly easy, but we would also have to run our Job again. If you are doing all of this manually that doesn’t seem like a big deal, but if all your configuration and deployment live in git and are deployed with a tool like ArgoCD, things get a tad more interesting. Kubernetes Jobs are both (mostly) immutable and single-shot: once a Job has run successfully, it won’t run again until we delete it and recreate it.

Kustomize to the Rescue

One thing we can do is have Kustomize generate our ConfigMap for us. That way it gets a hash appended to its name based on its contents, so if we update the content, the name changes. Kustomize tracks this name and updates all references to it in objects like Deployments and Jobs as well. While this is a good start, Jobs are still immutable, so we will not be allowed to update our Job to force it to run again. What we want is a separate Job for every new ConfigMap, which also lets us keep the logs of the different versions.
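Generating the ConfigMap is a matter of adding a configMapGenerator entry to our kustomization.yaml. A minimal sketch, assuming the list of URLs lives in a file called urls.txt next to it:

configMapGenerator:
- name: download-config
  files:
  # the key "downloads" is what the Job reads from /downloadconfig/downloads
  - downloads=urls.txt

The generated ConfigMap gets a name like download-config- followed by a content hash, and every change to urls.txt produces a new hash.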

Replacements, a Natural Choice

The first thing you might think of is using Kustomize “replacements”, a powerful transformer that can grab (part of) a field from one object and apply it to (part of) a field in another object.

That way we could add something like this to our kustomization.yaml:

replacements:
- source:
    kind: ConfigMap
    name: download-config
    options:
      delimiter: "-"
      index: 2
  targets:
  - select:
      kind: Job
      name: download-config-template
    fieldPaths:
    - metadata.name
    options:
      delimiter: "-"
      index: 2

This would, theoretically, split the name of our generated ConfigMap on the “-” character, grab the third part (we start counting at zero like civilized people), and apply it to the third part of the name of our Job. Problem solved.

Except…

Kustomize replacements don’t grab the final name of the generated ConfigMap. Even when the generator sits in a parent kustomization all the way up the tree, they grab the name before the hashed suffix is appended. That simply will not work for our use case.

(Ab)using nameReferences

Remember how earlier we said that Kustomize automatically updates references to generated ConfigMaps (and Secrets) with the hashed version? We can abuse that by telling it that fields in other objects hold references to ConfigMaps, so that Kustomize will update those for us as well. What if we tell it that the metadata.name field of a Job is a reference to a ConfigMap? Then every update to our ConfigMap gets a nice shiny new Job all of its own.

First we need to tell Kustomize that we have a new piece of configuration by adding these lines to our kustomization.yaml:

configurations:
- namereference.yaml

Next we add the following content to the (new) namereference.yaml file:

nameReference:
- kind: ConfigMap
  fieldSpecs:
  - kind: Job
    path: metadata/name

This tells Kustomize that the metadata.name field of a Job object is a reference to a ConfigMap. So if we generate a ConfigMap with a base name of, say, download-config, and we have a Job named download-config in our Kustomize tree, the name of our Job will be updated to the name of the generated ConfigMap, including the hash suffix.
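Putting the pieces together, the relevant part of our kustomization.yaml could then look something like this sketch; the file names job.yaml and urls.txt are assumptions, and the example repository may be laid out differently:

resources:
- job.yaml

configMapGenerator:
- name: download-config
  files:
  - downloads=urls.txt

configurations:
- namereference.yaml

With the Job in job.yaml named download-config, every change to urls.txt generates a new ConfigMap, and the nameReference configuration renames the Job to match, giving us a fresh Job for every update.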

Try It Yourself

We’ve got a complete example Kustomization tree set up for you on our GitLab. All you have to do is bring your own OpenShift cluster and create a namespace called keycloak-test. If you want to add it to an instance of ArgoCD or the like, you will have to add some sync-wave annotations to make sure that objects get applied in the correct order to appease the CD gods.
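For Argo CD specifically, such an annotation could look like the sketch below; the wave number is an assumption, so pick values that apply things like the PVC and ConfigMap before the Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: download-config
  annotations:
    # apply this Job in wave 1, after resources in earlier waves (e.g. the PVC)
    argocd.argoproj.io/sync-wave: "1"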

One thing we explicitly did not add is a restart of your Keycloak instance to activate the new extensions. That is left as an exercise for the reader; why should we have all the fun?
