Modern Testing of Java applications in Tekton
One of the most important aspects of speeding up delivery is making sure the quality of what is delivered is up to the highest possible standard. Automated testing helps reduce the number of escaped defects, which in turn helps win the trust of the business to deploy more frequently. From a microservice perspective, automated tests are usually a combination of unit tests and integration tests. In unit tests we mock most of the external components away, while in integration tests we typically test against ‘real’ external components.
Docker, or containerisation in general, makes it very easy to spin up a message queue, database or basically any other service that can run in a container. Projects like testcontainers take this a step further and make running integration tests as easy as running unit tests. For example, when you create a service that uses a Postgres database, testcontainers enables you to easily spin up and integrate with a real Postgres service in your test code, instead of a potentially less compatible alternative. Below is a basic example of using testcontainers to test a Redis integration, taken from the testcontainers website.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
public class RedisBackedCacheIntTest {

    private RedisBackedCache underTest;

    // Start a real Redis instance in a container for the test.
    @Container
    public GenericContainer<?> redis =
            new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
                    .withExposedPorts(6379);

    @BeforeEach
    public void setUp() {
        // Connect to the mapped host and port of the Redis container.
        String address = redis.getHost();
        Integer port = redis.getFirstMappedPort();
        underTest = new RedisBackedCache(address, port);
    }

    @Test
    public void testSimplePutAndGet() {
        underTest.put("test", "example");
        String retrieved = underTest.get("test");
        assertEquals("example", retrieved);
    }
}
A lot is happening in the enterprise Java development space at the moment, and Quarkus in particular is very disruptive. Not only does it bring innovations that improve boot time, memory consumption and live coding, it also raises the bar with the Quarkus dev-services. The dev-services automatically provision unconfigured services in dev and test mode: behind the scenes, usually by means of testcontainers, they set up your environment for you. Nice examples are the Postgres, Keycloak and Kafka dev-services, which spin up containers and configure the context automagically, so there is usually no configuration required.
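As an illustration, the minimal sketch below shows what such a test could look like. It assumes the quarkus-jdbc-postgresql extension and REST Assured are on the test classpath, and a hypothetical /fruits endpoint backed by the database; note that no datasource URL is configured anywhere, the Postgres dev-service provisions the database automatically.

import static io.restassured.RestAssured.given;

import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;

// A minimal sketch, assuming the quarkus-jdbc-postgresql extension is present
// and a hypothetical /fruits endpoint that reads from the database. Because no
// datasource is configured, the dev-service spins up a Postgres container.
@QuarkusTest
public class FruitResourceTest {

    @Test
    public void testListFruits() {
        // The application under test talks to a real Postgres instance
        // provisioned by the dev-service.
        given()
            .when().get("/fruits")
            .then().statusCode(200);
    }
}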
Continuous integration
While all of this makes life easier on the developer's laptop, things become a bit more challenging when these tests are to be run in a continuous integration pipeline. In a traditional Jenkins setup, with a bunch of virtual machines as worker nodes, typically the only requirement to run these tests is a docker daemon running on the node. The test will start all required containers on that node and will clean up after it's done. In containerised setups like OpenShift Pipelines and Jenkins X, both based on Tekton, things get a bit more complicated.
Running a docker daemon inside a container is a bit more challenging. In the past, one of the approaches was to mount the docker socket of the host node. This exposed a security risk, as it would grant access to all containers running on that node. Fortunately this isn't required any more, as a rootless docker-in-docker sidecar has been introduced. However, to be able to run the docker-in-docker sidecar, the container still needs to run in privileged mode. Enabling this security exception is not encouraged, and while some platform administrators will gladly add the exception, this is probably not the case in a large enterprise setup.
A workaround could be to instantiate a virtual machine somewhere and expose the port on which the docker api is listening. In such a setup the pipeline can use that virtual machine for orchestrating the containers. This obviously doesn't scale up, nor down, very well.
If you think this through a bit more, it actually doesn't make sense to run these containers inside an already running container, or on an external virtual machine. If your pipeline is running on a scale-out platform like OpenShift/Kubernetes, it would make more sense to scale out these tests and their containers as well. This should theoretically speed up the tests. The alternative would be scaling nodes vertically, to ensure enough memory and cpu are available to run the test and the containers on which it depends.
Alternative approach
Ideally, containers are not orchestrated on a local node, but scaled out into Kubernetes pods, to take advantage of the capacity of the platform rather than the capacity of the node. If we focus on a Quarkus setup, there are actually three ways to approach a solution where containers are orchestrated as Kubernetes pods.
The first approach would be in the top layer: Quarkus itself. Quarkus has the ability to create your own test resources. Such a resource could either start a service inside the Kubernetes environment, or simply wait for a certain endpoint to become available. This would mean that a dedicated test resource has to be implemented for every containerised dependency. It would also imply you are not taking advantage of the already available dev-services.
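As a sketch of this approach: the hypothetical test resource below starts nothing itself, but waits until an already deployed Redis service becomes reachable and passes the connection details to the test context. The service name, port and timeout are assumptions for illustration.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Map;

import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;

// A minimal sketch of a custom test resource that waits for an already
// deployed service (here a hypothetical "redis" Kubernetes service) to
// accept connections, instead of starting a container itself.
public class RedisTestResource implements QuarkusTestResourceLifecycleManager {

    private static final String HOST = "redis"; // hypothetical service name
    private static final int PORT = 6379;

    @Override
    public Map<String, String> start() {
        long deadline = System.currentTimeMillis() + 60_000;
        while (System.currentTimeMillis() < deadline) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(HOST, PORT), 1_000);
                // Pass the connection details to the application under test.
                return Map.of("quarkus.redis.hosts", "redis://" + HOST + ":" + PORT);
            } catch (IOException retry) {
                // not reachable yet, try again until the deadline
            }
        }
        throw new IllegalStateException("redis not reachable within 60s");
    }

    @Override
    public void stop() {
        // nothing to clean up; the service is managed outside the test
    }
}

Such a resource would then be attached to a test with @QuarkusTestResource(RedisTestResource.class).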
Another approach would be to make sure testcontainers itself supports Kubernetes. From a requirements perspective this seems easy. However, the testcontainers framework is a direct implementation of the docker api, with little to no extra abstraction that would allow an easy adoption of the Kubernetes api. There have been a few attempts, but since the testcontainers framework is cross-language and has a uniform api across these languages, it will be very challenging to get a solution integrated upstream.
The last approach is to implement a docker api that, instead of orchestrating containers locally, somehow translates them to pods running in Kubernetes. This would also support a wider set of applications, not limited to Quarkus or the testcontainers library.
Introducing kubedock
Implementing the docker api to orchestrate towards Kubernetes is the approach we took in kubedock. This implementation focusses on running containers, and leaves the building of containers out of scope. To support volumes, it starts an init container that copies the volume contents to a volume before the actual container is started. Networking is flattened, and for each network alias used in the docker setup a Kubernetes service object is created. Connecting to the containers is supported via both port-forwards and a reverse proxy.
This approach works surprisingly well for most use-cases and is not limited to the java testcontainers implementation. Situations where it doesn't work are usually due to docker being able to access volume contents before a container is started and after it has stopped; the docker api allows copying files to and from a container that is not running, which is not that obvious in Kubernetes. Copying before a container is started can be worked around with init containers, which is how kubedock currently solves it.
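For example, the common testcontainers pattern below copies a file into a container before startup; under kubedock this maps onto the init container mentioned above, whereas copying files out of an already stopped container has no such equivalent. The redis.conf resource is a made-up example.

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

public class CopyBeforeStartExample {
    public static void main(String[] args) {
        // The file is staged before the container starts; kubedock implements
        // this docker api behaviour with an init container filling a volume.
        try (GenericContainer<?> redis =
                new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
                        .withCopyFileToContainer(
                                MountableFile.forClasspathResource("redis.conf"), // hypothetical config
                                "/usr/local/etc/redis/redis.conf")
                        .withExposedPorts(6379)) {
            redis.start();
            // ... exercise the pre-configured Redis here ...
        }
    }
}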
Tekton example
Let’s do this in practice. Running kubedock as a sidecar within a Tekton pipeline is as simple as the task below. It does require a service account that is able to create and remove some resources (configmaps, deployments, jobs and services) within the namespace, but this is usually not that difficult to address.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mvn-test
spec:
  params:
    - name: contextDir
      type: string
  workspaces:
    - name: source
  steps:
    - name: step-mvn-test
      image: gcr.io/cloud-builders/mvn
      workingDir: $(workspaces.source.path)/$(params.contextDir)
      command: [ "/usr/bin/mvn" ]
      args:
        - test
      env:
        # disable testcontainers features that rely on docker specifics
        # kubedock does not implement
        - name: TESTCONTAINERS_RYUK_DISABLED
          value: "true"
        - name: TESTCONTAINERS_CHECKS_DISABLE
          value: "true"
      resources: {}
      volumeMounts:
        # the test step talks to kubedock via this shared docker socket
        - name: kubedock-socket
          mountPath: /var/run/
  sidecars:
    # kubedock implements the docker api and orchestrates the containers
    # as pods in the namespace the task runs in
    - name: kubedock
      image: joyrex2001/kubedock:latest
      args: [ "server", "--reverse-proxy", "--unix-socket", "/var/run/docker.sock" ]
      env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumeMounts:
        - name: $(workspaces.source.volume)
          mountPath: $(workspaces.source.path)
        - name: kubedock-socket
          mountPath: /var/run/
  volumes:
    - name: kubedock-socket
      emptyDir: {}
The example is very similar to the examples you can find for using the docker-in-docker sidecar, but instead of the dind container, it uses kubedock. Like testcontainers itself, kubedock has options to automatically clean up created resources. Another challenge, which doesn't occur as much in a docker environment, is that running multiple tests in the same namespace can cause clashes with regard to service names. This can be solved by automatically creating a temporary namespace for each test in your pipeline, or by locking the namespace during the test with kubedock itself.
Conclusion
There is a lot happening lately with regard to testing and ease of development, especially in the cloud native Java space. Quarkus is raising the bar of what developers can expect while coding, while Tekton and Kubernetes are slowly conquering the CI world as well. In this blog we showed how to solve some of the challenges that we face when both worlds meet. Happy coding!
Note that AtomicJar has recently started offering testcontainers as a cloud solution. If your organisation can use cloud in this context, this is also a very promising solution.