Since the term was originally coined back in 2017 by Alexis Richardson (CEO of Weaveworks), the GitOps philosophy has struggled to find a coherent, consensus definition. This fostered some ambiguity, and occasionally some contradiction. For example, push-based deployments were once categorised as a valid approach to GitOps deployments, but now they're not. There was going to be a common GitOps engine for the good of the community, and then there wasn't. Flux was in the vanguard of software tools that exemplified GitOps, and then it needed to be rewritten from the ground up. And so on.
It's encouraging, then, that the last 12 months or so have seen a gradual maturing of the concept, and the emergence of a common understanding of what we mean by the term 'GitOps'. And the community has started working together across competing technologies, all under the umbrella of the Cloud Native Computing Foundation (CNCF). Some of the tooling has even acquired 'graduated' project status with the CNCF. All is good in GitOps land.
As the work to define standards has unfolded, and the development of software tools to meet those standards has ensued, one aspect of GitOps has consistently bugged me. It’s an implementation detail that doesn’t seem to get a lot of attention.
GitOps Agents
First, let's briefly outline what a GitOps agent is in the context of a Kubernetes cluster. A GitOps agent is a software application that (at least in part) implements the Kubernetes controller pattern, and is responsible for reconciling the declared 'desired state' with the actual cluster state. That is, the Kubernetes configuration for an application, stored in a version control system (VCS), is periodically fetched and applied to the cluster, resulting in automated deployments. Change is initiated through the VCS, and explicitly not through direct, imperative change in the cluster. This allows application deployments to be managed using the inherent control features of the hosting VCS (i.e. pull or merge requests).
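To make the reconciliation idea concrete, here's a minimal sketch of the diff-and-converge logic at the heart of the controller pattern. This is not how Flux or ArgoCD actually implement it; the `state` type and `reconcile` function are hypothetical stand-ins for real Kubernetes objects and API calls, used purely to illustrate the comparison of desired state (from Git) against actual state (from the cluster).

```go
package main

import (
	"fmt"
	"sort"
)

// state is a toy stand-in for cluster configuration: a map of
// workload name to version. In reality this would be a set of
// Kubernetes objects fetched from Git and from the API server.
type state map[string]string

// reconcile compares the desired state against the actual state
// and returns the actions an agent would take to converge them.
func reconcile(desired, actual state) []string {
	var actions []string
	for name, spec := range desired {
		current, exists := actual[name]
		switch {
		case !exists:
			actions = append(actions, fmt.Sprintf("create %s", name))
		case current != spec:
			actions = append(actions, fmt.Sprintf("update %s", name))
		}
	}
	// Anything running in the cluster but absent from Git gets pruned.
	for name := range actual {
		if _, wanted := desired[name]; !wanted {
			actions = append(actions, fmt.Sprintf("delete %s", name))
		}
	}
	sort.Strings(actions) // deterministic output for readability
	return actions
}

func main() {
	desired := state{"web": "v2", "api": "v1"} // fetched from the VCS
	actual := state{"web": "v1", "db": "v1"}   // observed in the cluster
	fmt.Println(reconcile(desired, actual))    // [create api delete db update web]
}
```

A real agent runs this loop continuously on a timer (and on events), which is what makes configuration drift self-healing: any imperative change to the cluster is detected as a divergence from Git and reverted on the next cycle.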
Chicken or the Egg
So, GitOps agents allow us to automate application installs and updates in a Kubernetes cluster, according to the GitOps principles. But aren't the agents themselves software applications? They might be controllers, but they're still just software applications, requiring installation, updates, reconfiguration, recovery, and so on. This raises the question: how do GitOps agents get deployed to a cluster? Can they be deployed to a cluster according to GitOps principles, like the apps they manage? Can they be used to manage themselves? Seemingly, this is an example of the classic chicken or the egg paradox!
If we have to manually install or update a GitOps agent, then its deployment is open to all of the issues that the GitOps principles seek to resolve: ambiguous state, configuration drift, unsolicited change, and much more. This effectively makes the state of the whole system as frail as if there were no GitOps agent at all. For this reason, it's crucial that GitOps solutions address this problem head on, and provide a means for bootstrapping their agents, whilst allowing for their ongoing management, in line with the GitOps principles that underpin the whole approach.
This is the first article in a brief series looking into the different approaches taken to the bootstrapping conundrum.
Up Next
The two (arguably) leading projects in the GitOps domain, Flux and ArgoCD, go about bootstrapping in different ways. First, we'll discuss how the Flux project bootstraps its numerous controllers, and then we'll lift the lid on ArgoCD to see how it gets things done.