Tanka: 短歌 (lit. short poem) in Japanese. Go here for more details.
Grafana Labs is an ambitious and progressive company. Most DevOps practitioners have crossed paths with at least one of their offerings, be it Grafana, Loki, Tempo, or k6. Each of them challenges the status quo in its market: Grafana and Loki take on the ELK stack, k6 takes on Locust, and so on. Yet have you heard of Tanka? It is their answer to managing Kubernetes environments. I recently had the opportunity to get my hands dirty with it, and my thoughts? Read on.
Why do we need a Kubernetes deployment manager?
In the beginning, most of us learn Kubernetes with static manifests such as deployment.yaml. They are easy to construct and just as easy to apply (kubectl apply -f deployment.yaml). However, imagine you have two environments that differ slightly, say they use different images or expose different ports. To achieve that with static manifests, you would need to maintain two copies. The complexity of maintaining multiple environments quickly gets out of hand as the number of environments and manifests grows. But fear not: there are a number of tools that address this, such as kustomize, helm, skaffold, or terraform. And, of course, our protagonist, Tanka.
We won’t talk about skaffold and terraform in this blog post. They aren’t built exclusively for Kubernetes like the rest of the players, so comparing them would be comparing apples to oranges.
I thought we already had helm and kustomize?
Tanka is built to manage Kubernetes resources. That places it in the same ballpark as kustomize and helm, both of which you have surely at least heard of if you are serious about Kubernetes. kustomize is part of Kubernetes itself (kubectl apply -k), and helm is a graduated CNCF project with an established industry standing; the majority of open-source projects ship their cloud-native solutions with it. Does Tanka have what it takes to stand up against them?
If we strip away the bells and whistles of these three tools, they are the same in essence: each takes resources written in its own dialect, translates them into plain, standard Kubernetes manifests, and pipes them into the cluster. One distinctive feature that sets Tanka apart from the other two contenders is its use of Jsonnet instead of YAML. We can’t talk about Tanka without giving Jsonnet at least a skim.
Jsonnet
Jsonnet is a Google initiative. Its relationship to JSON resembles YAML’s: it is a superset, so any normal JSON is legitimate Jsonnet (and legitimate YAML), but not vice versa. In contrast to YAML, which redoes the syntax completely while remaining a data-interchange format, Jsonnet keeps the style of JSON intact but adds features from object-oriented programming (OOP) and functional programming (FP), almost turning it into a scripting language.
We can’t go into too much detail about Jsonnet without making this blog post overly lengthy, so you are welcome to learn more about it here.
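Still, to give a quick taste of those OOP and FP features, here is a minimal, self-contained sketch (plain Jsonnet, nothing Tanka-specific; all names are made up):

// A function that builds an object, FP-style.
local defaultPort = 8080;
local service(name, port=defaultPort) = {
  name: name,
  port: port,
};

{
  // Object composition: '+' merges objects, with later fields winning (OOP-style).
  api: service('api') + { port: 9090 },
  // List comprehensions, borrowed from functional languages.
  workers: [service('worker-%d' % i) for i in std.range(1, 3)],
}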
After I migrated my work project to it from YAML, I sat down and started refactoring the code to cut down boilerplate and make the project easier to maintain. This is somewhat unthinkable in a helm or kustomize context: for one, you can’t really call helm- or kustomize-flavoured YAML “code”, and there’s not much in it to refactor.
Jsonnet has a steep learning curve. Google has always had a DevOps mentality, consistently applying a developer’s mindset to operational challenges in order to reduce toil. We saw that with Kubernetes, with SRE, and with Protobuf, and Jsonnet is no exception. The common theme is that these tools are certainly more complex than their forebears, but the complexity pays off in the long run.
Objective Comparison: How does Grafana Tanka compare to kustomize and helm?
Let’s start with syntax. Tanka doesn’t alter Jsonnet in any way. This is markedly different from helm, where helm-flavoured Go templating is blended into YAML in a way that would be illegal in plain YAML. There are libraries you can use to help generate standard Kubernetes manifests, such as k8s-libsonnet and istio-libsonnet, but they bear no inherent relationship to Tanka and are by no means mandatory.
Interestingly, Tanka also works with helm and kustomize, although the support is one-way: you can port kustomize or helm resources into Tanka, but not the other way around.
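For the helm direction, Grafana ships a helper library, tanka-util, whose helm.template() renders a locally vendored chart into Jsonnet objects. Roughly like this (a sketch following the shape of the documented API; the chart path, namespace, and values here are made up):

local tanka = import 'github.com/grafana/jsonnet-libs/tanka-util/main.libsonnet';
local helm = tanka.helm.new(std.thisFile);

{
  // Renders the vendored chart into plain objects that Tanka can process further.
  grafana: helm.template('grafana', './charts/grafana', {
    namespace: 'monitoring',
    values: { persistence: { enabled: true } },
  }),
}

The charts themselves are vendored into the project tree (Tanka provides tk tool charts for this), which fits the hermetic design described next.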
Tanka is designed with hermeticism in mind: it is self-contained, relying neither on external resources such as a remote chart repository nor on external configuration such as ~/.kube/config. Any two builds, no matter how much time passes between them, should yield completely identical output if nothing in the project has changed.
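Dependencies follow the same philosophy: they are declared and vendored into the project with jsonnet-bundler (jb). A jsonnetfile.json looks roughly like this (a sketch; the pinned version is illustrative, and JSON is itself valid Jsonnet):

{
  "version": 1,
  "dependencies": [
    {
      "source": {
        "git": {
          "remote": "https://github.com/jsonnet-libs/k8s-libsonnet.git",
          "subdir": "1.21"
        }
      },
      "version": "main"
    }
  ],
  "legacyImports": true
}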
CLI-wise, kustomize’s essential operations ship inside kubectl itself (kubectl apply -k, kubectl kustomize), though a standalone kustomize binary also exists.
helm has its own CLI toolkit (helm). You can install and uninstall charts with it, template renders the plain manifests a chart would produce, and repo keeps remote chart repositories in sync so you always have the latest releases on hand.
Tanka’s CLI toolkit is called tk. It has a handy diff feature that prints the difference between your local manifests and the live versions running on the cluster. The output is standard GNU diff format, so you can integrate it into your PR workflow, which makes it easy to check what a PR actually changes.
Other than diff, tk also offers an interactive apply workflow that can replace kubectl apply. There’s also show, which builds the Jsonnet project into plain YAML Kubernetes manifests and pipes them into less on your terminal, while export writes those manifests to a location of your choice.
Difference in Action
To better demonstrate the differences discussed above, let’s set the stage with a simple pet Deployment resource:
deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: nginx
          name: nginx
Now let’s say we would like to use different images for different environments, namely nonprod, staging, and prod, with the images tagged accordingly.
kustomize
To achieve this in kustomize, we keep deployment.yaml as above, but add a kustomization.yaml per environment containing the following:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: nginx # matches the image name used in deployment.yaml
    newTag: nonprod # or staging / prod, depending on the environment folder
The Deployment is applied with kubectl apply -k {{env folder}}, which picks up that folder’s kustomization.yaml and patches in the correct image.
helm
The idiomatic way to handle this in helm is to replace the changing bits of the static manifest with helm-flavoured Go template:
...
      containers:
        - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          name: nginx
...
These values are then provided via a values.yaml file or with the --set flag of the helm CLI.
Tanka
local k = import 'github.com/jsonnet-libs/k8s-libsonnet/1.21/main.libsonnet';

// The whole file is a function, so `tag` can be injected as a Top Level Argument (TLA).
function(tag) {
  local deploy = k.apps.v1.deployment,  // local variables won't show up in the built resource
  local container = k.core.v1.container,
  nginx: {  // this will show up
    deployment: deploy.new(name='nginx', replicas=1, containers=[
      container.new('nginx', 'nginx:' + tag),
    ]),
  },
}
You can read more about Top Level Arguments (TLAs) here.
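As an aside, one alternative to injecting the tag as a TLA at evaluation time is to bind it per environment with plain imports. A sketch, assuming the function above lives in lib/nginx.libsonnet (a hypothetical path):

// environments/prod/main.jsonnet
// Bind the tag at import time instead of passing a TLA.
(import '../../lib/nginx.libsonnet')(tag='prod')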
Subjective Comparison: Tanka vs Helm vs kustomize
kustomize?
One thing that plagues kustomize at the root is that YAML is a plain, human-readable data language. It’s not even a templating language, let alone a scripting one. 1+1 is always the string "1+1"; there is no way to express 1+1 -> 2. You cannot say “let this field be A under condition A, but B under condition B” in YAML. So in kustomize you can only put “this field is [placeholder]” in a base manifest and keep “this field is A” in a separate file used under condition A, overriding the field in the final manifest. By doing this, you place yourself at the whim of the kustomize developers: since the operation can’t be defined in YAML, you are limited to the features they provide. If you need to modify a resource with an unsupported operation (changes to a CRD, for instance), you are out of luck.
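For contrast, this is what “this field is A under condition A” looks like in Jsonnet. A minimal sketch, with env hard-coded here where a real project would inject it:

local env = 'prod';  // hypothetical; a real project would inject this value

{
  // The condition lives right next to the field, no placeholder-and-patch dance.
  image: if env == 'prod' then 'nginx:1.25' else 'nginx:latest',
  replicas: if env == 'prod' then 3 else 1,
}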
To be fair, you can use a JsonPatch via the patches keyword in kustomize to make, in theory, whatever change you want on a field, but in my opinion it still works poorly. For instance, if a Deployment has multiple containers and you want to modify one of them, you must address it by ordinal (i.e. containers[0]), since JsonPatch has no foreach/select operation. Let’s just pray the order of the containers never changes; and if you are using Istio, istio-proxy is always injected as the first container, so don’t forget to bump the ordinal by 1. To make matters worse, arrays are plentiful in standard Kubernetes manifests.
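Jsonnet, for comparison, can do exactly the foreach/select that JsonPatch lacks. A sketch (the helper name is mine) that patches a container by name rather than by position:

// Patch the image of the named container, wherever it sits in the array.
local setImage(deploy, containerName, image) = deploy {
  spec+: { template+: { spec+: {
    containers: [
      if c.name == containerName then c { image: image } else c
      for c in super.containers
    ],
  } } },
};

// Usage: setImage(someDeployment, 'nginx', 'nginx:1.25')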
Another thing I am not particularly fond of in kustomize is that the repository can end up full of legitimate-looking Kubernetes manifests that are riddled with [placeholder] values or are simply incomplete. Someone unfamiliar with the project structure might accidentally apply them, and because they are syntactically correct (take the container image example above: the placeholder could be an image tag), the API server will accept them. That can leave a container stuck in ImagePullBackOff.
Notwithstanding these limitations, its ease of use is real. Sometimes the manifests in scope are simple enough that they don’t warrant the additional scaffolding required by both helm and Tanka. The toolchain is also much leaner (only kubectl is needed) than helm’s (kubectl plus helm) or Tanka’s (kubectl, jb, and tk), so it’s easier to maintain in runners and pipelines. Its syntax is simpler too, so you may face less resistance rolling it out to devs. It’s not good DevOps if you can’t get your teammates on board.
helm?
The hermetic design philosophy of Tanka makes it less suitable for software distribution. How would you receive an update from the upstream provider if you refuse to hear from them? Moreover, since helm value files are standardised key-value files, it is easier to configure upstream software; distributors usually ship a stock values.yaml that serves both as default configuration and as a how-to for configuring your installation. In contrast, two Tanka projects can have vastly different ways of injecting variables into their resources. If I were shipping software, I would opt for helm.
However, Tanka still compares favourably in other respects. In our project, a few Istio resources (VirtualService, for example) are ready in the nonprod and staging environments, but for a range of reasons we cannot apply them to prod yet, so we needed a way to selectively disable them there. With helm, we would have no option but to jump into wherever the VirtualService is defined and plug in a conditional (say, skipping the block if it’s prod). This violates the spirit of the SOLID principles: a VirtualService shouldn’t have any knowledge of the differences between environments. In Tanka, by contrast, it’s as easy as defining this in prod’s main.jsonnet:
app + {app+: {VirtualService: {}}}
This overrides the VirtualService definition in the underlying resources (toggling it off by setting it to an empty object) without actually touching those resources.
Readability in helm is also an issue in more complex projects. Take this template from Grafana as an example: you will see many blocks guarded by if branches such as .Values.sidecar.notifiers.enabled. This is a classical antipattern in software design that can be refactored away via polymorphism (see here). But there are no classes in helm, so we have to make do. Jsonnet has proper objects and inheritance, which puts us in a much better position to deal with this kind of problem.
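To illustrate the idea (a sketch, not code from the Grafana chart): instead of wrapping blocks in enabled flags, a base object is extended by variants that state only what differs:

local base = {
  name: 'grafana',
  sidecars: [],
};

// A variant extends the base instead of switching on an .enabled flag.
local withNotifiers = base {
  sidecars+: [{ name: 'sc-notifiers', image: 'sidecar:latest' }],  // image is made up
};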
Is Tanka perfect?
Not really. One concern I had when I first picked Tanka up was that the language looks alien. Although JSON is officially supported by Kubernetes, I’d guess the majority of people learnt k8s in YAML, and all the pairs of square and curly brackets gave me culture shock at first. It also takes longer to stand up the first version of your resources in Tanka, since you are writing code, compared with YAML-based solutions such as helm, where you can copy a standard manifest and modify it easily. It’s like writing your first Hello, World! in Java: in a Bash script a simple echo 'Hello, World!' does the job, but in Java you need the correct folder hierarchy, the correct class, a main method, and the right imports before you can finally write System.out.println("Hello, World!"). The effort will almost certainly pay off in the long run, but it can be quite frustrating at the beginning.
Tanka adoption in the industry is also spotty, so a team runs the risk of not being able to find replacements if a member leaves. Searching Google for help with a particular issue also yields far fewer hits than for kustomize or helm.
The toolchain of Tanka needs some work too. I was once hit with this message when I ran tk show:
RUNTIME ERROR: Couldn't manifest function in JSON output.
libmonad.jsonnet:(1:1)-(21:1) object <anonymous>
During manifestation
Most of the time the toolchain can tell me what I did wrong, but occasionally it just throws this generic Couldn't manifest function in JSON output, which isn’t very helpful. (For what it’s worth, that error generally means the evaluated result was still a function, for instance because a TLA wasn’t supplied.)
Summary
We are now in a position to wrap up this review. Let’s finish with a decision matrix:
kustomize: Choose if the problem domain is straightforward and limited. You may also want to choose it if you are uncertain about your team’s willingness and capacity.
helm: Choose if you are releasing software to be reused by other teams. Also choose it if your team isn’t fully comfortable with Kubernetes yet.
Tanka: Choose if your manager doesn’t mind an initial investment in up-skilling and your colleagues don’t mind occasionally writing some code. Your team should also be fully comfortable and confident with Kubernetes. Also, they should like tinkering.
You are strongly encouraged to check out Jsonnet and Tanka yourself. Unfortunately, I am not confident enough to say it will completely replace either kustomize or helm; I can’t even say for sure it will be useful for you. But it’s a delightful refreshment regardless.
Finally, Innablr is a Kubernetes Certified Service Provider and leading consultancy for cloud native, Kubernetes, and serverless technologies. Frequently championing community events, delivering thought leadership and leading practices, Innablr is recognised in the Australian market as one of the most experienced providers of Kubernetes solutions.
Continuing our successful approach of building repeatable and extensible frameworks, Innablr has built a blueprint for Kubernetes deployments on Google Cloud and Amazon Web Services, whether on Google Kubernetes Engine (GKE) or Elastic Kubernetes Service (EKS).
To learn more about how we’ve been helping businesses innovate with Kubernetes, see our Kubernetes Certified Solution Provider page.
Chuning Song, Engineer @ Innablr