Following on from an introductory article on the merits of bootstrapping GitOps agents into Kubernetes clusters, this article looks into how this can be achieved with Flux. Flux is a collective name for several discrete Kubernetes controllers (collectively known as the GitOps Toolkit), each of which performs a separate, specific GitOps-oriented function. Flux is a graduated Cloud Native Computing Foundation (CNCF) project.

Installing Flux

Bootstrapping involves installation, and Flux can be installed into a Kubernetes cluster in a variety of ways. A set of YAML manifests is maintained in the project’s GitHub repo:

$ kustomize build | \
    kubectl apply -f -

But, if Helm is your thing, then there’s a packaged Helm chart maintained for installing Flux, too. There’s even a command line tool, unsurprisingly called flux, that can also be used to get Flux up and running in a cluster:

$ flux install

Performing any of these actions would provide us with a working Flux deployment in a cluster, consisting of the set of controllers that govern its GitOps actions.

$ kubectl -n flux-system get po
NAME                                          READY   STATUS    RESTARTS   AGE
helm-controller-7b85c84687-dzdbr              1/1     Running   0          81s
image-automation-controller-f45c4b86b-sqqlf   1/1     Running   0          81s
image-reflector-controller-59c894c647-lzb76   1/1     Running   0          80s
kustomize-controller-d88d76876-dqw2m          1/1     Running   0          80s
notification-controller-55df97fbb9-jbrqb      1/1     Running   0          80s
source-controller-5bb5c7b9bd-dsd6w            1/1     Running   0          80s

None of these approaches, however, provides us with a self-managed setup, based on immutable, declarative configuration stored under version control.

It turns out that the Flux project considers bootstrapping, self-management, and recovery to be primary concerns of GitOps deployments based on Flux. As such, it provides an installation mechanism that addresses each of these concerns, which is described prominently in the documentation. So, how does Flux go about bootstrapping?

Bootstrapping with the Flux CLI

The Flux CLI is like a Swiss army knife, in that it provides a large number of useful features. As well as using it to install Flux’s controllers into a cluster, you can use it to create the manifests for the custom resource objects that the controllers use to perform GitOps actions, for example. And a whole lot more, besides.

Perhaps one of its most important functions, however, is its ability to perform a bootstrap of a GitOps environment, using a single command:

$ flux bootstrap github \
    --repository flux-bootstrap \
    --owner nbrownuk \
    --personal

This is a fairly vanilla example of the use of the flux bootstrap command, but there are a ton of other flags available to nuance the outcome. The command performs a lot of work behind the scenes; it:

  • creates a remote Git repo (GitHub, GitLab, AWS CodeCommit etc.)
  • clones the repo
  • generates the manifests for each of Flux’s controllers
  • commits the manifests and pushes them to the remote repo
  • installs the controllers into the cluster using the generated manifests
  • creates an ssh key pair for trusted communication between Flux’s source controller and the remote repo (if you don’t bring your own)
  • creates a secret embodying the private key, and adds the public key to the remote Git repo as a ‘deploy key’
  • generates manifests (based on Flux’s custom resources) for the purposes of syncing desired state from the remote repo to the cluster
  • commits sync manifests and pushes them to the remote repo
  • applies the sync manifests to the cluster, and waits for reconciliation

That’s a lot of heavy lifting done on our behalf, and results in a repo structure that contains all that’s needed for Flux to manage itself:

└── flux-system
    ├── gotk-components.yaml
    ├── gotk-sync.yaml
    └── kustomization.yaml

The ‘gotk-components.yaml’ manifest contains the custom resource definitions, and the Kubernetes object definitions for each of the controllers. The ‘gotk-sync.yaml’ manifest contains custom resource object definitions that provide Flux with the location of the repo, and the path within the repo containing the configuration to be applied to the cluster. That’s all Flux needs to manage itself. And, should you be unfortunate enough to lose your cluster, the bootstrap command can be run again to provision Flux into a new cluster, using the configuration in the repo.
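For a flavour of what the sync manifests contain, here’s a sketch of a ‘gotk-sync.yaml’. The API versions, intervals, and sync path vary with the Flux release and the bootstrap flags used, so treat the values shown as illustrative:

```yaml
# GitRepository: tells the source controller where to fetch desired state from
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system        # the secret holding the ssh private key
  url: ssh://git@github.com/nbrownuk/flux-bootstrap
---
# Kustomization: tells the kustomize controller which path in the repo to apply
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./                   # depends on the --path flag given at bootstrap
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```

Because these two objects point back at the very repo that contains them (and the controller definitions), applying them is what makes the setup self-managing.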

Updating Flux

As I write, even though Flux has gained graduate status with the CNCF, and has acquired a sizeable community of adopters, the project is still working towards General Availability. This means that Flux gets updated on a regular basis, which raises the question: how does Flux get updated after bootstrap?

Firstly, with each release, a new version of the Flux CLI is provided1. And so, to achieve an update to Flux after an initial bootstrap, all that’s required is a re-run of the bootstrap command with the updated version of the CLI. Execution of the command results in an update to the controller definitions in the ‘gotk-components.yaml’ file in the remote Git repo, which is subsequently fetched and applied to the cluster by Flux. And this results in a rolling update of each controller running in the cluster.

It may be preferable to gate this change in the source Git repo using a pull request. The revised manifest for the new controller versions could be generated in advance using the following command:

$ flux install --export > ./gotk-components.yaml

If you’re using GitHub as the Git host provider, this could also be automated with a GitHub Actions workflow, an approach that could easily be adapted for use in other CI/CD systems.
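As a sketch of that automation, assuming the bootstrap repo layout shown earlier, a workflow might regenerate the component manifests on a schedule and raise a pull request when they change. The schedule, branch name, and the third-party create-pull-request action are illustrative choices here, not prescribed by Flux:

```yaml
name: update-flux
on:
  workflow_dispatch:
  schedule:
    - cron: "0 6 * * *"      # check for a new Flux release daily (arbitrary)
jobs:
  update-flux:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install the Flux CLI
        uses: fluxcd/flux2/action@main
      - name: Regenerate the component manifests
        run: flux install --export > ./flux-system/gotk-components.yaml
      - name: Open a pull request with any changes
        uses: peter-evans/create-pull-request@v4
        with:
          branch: update-flux
          title: Update Flux components
          commit-message: Update Flux components
```

Once the pull request is merged, Flux fetches and applies the updated manifests in the usual way, rolling the controllers.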

Bootstrapping and updating in this manner is all well and good if we’re happy to use a command line interface to achieve it. Sometimes, though, DevOps teams prefer to bundle the installation of core services into the infrastructure layer. How do we manage this for Flux?

Bootstrapping with Terraform

When provisioning an entire environment for running cloud-native applications, an interesting question arises. What delineates infrastructure from applications? This line can get quite blurred, and different people will have different answers as to where to draw the line. But, the question is an important one, because different tools are generally used for managing the different layers. Terraform is pretty ubiquitous when it comes to managing infrastructure, and as we know, Flux is used to manage application deployments to Kubernetes.

Flux sits on the boundary; you can’t manage applications with Flux until it’s provisioned, and you can’t provision Flux until the Kubernetes cluster is provisioned. Many will choose to drop Flux into the infrastructure layer, and have Terraform handle its bootstrapping, after which, Flux is able to manage itself. Anticipating the need, and recognising the importance of catering for this scenario, the Flux project has provided a Flux Terraform provider for provisioning Flux.

Up to at least v0.21.0, the Flux provider consisted of two data sources: one for generating the YAML for the CRDs and deployment configuration, and one for generating the YAML for the sync activity. The code used to achieve this is the same code used by the CLI for bootstrapping Flux. But generating the Kubernetes configuration doesn’t, by itself, get Flux bootstrapped: the user is required to make use of other Terraform providers to store the configuration in a Git source, and then to get it applied to a cluster.
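For illustration, consuming those data sources looked something like the following sketch. The target path, repo URL, and the `content` attribute are assumptions based on the provider’s shape at the time, and may differ between versions:

```hcl
# Generate the install (controllers, CRDs) and sync manifests as YAML strings
data "flux_install" "main" {
  target_path = "clusters/my-cluster"
}

data "flux_sync" "main" {
  target_path = "clusters/my-cluster"
  url         = "ssh://git@github.com/nbrownuk/flux-bootstrap.git"
}

# The rendered multi-document YAML is exposed as attributes
output "install_manifests" {
  value = data.flux_install.main.content
}
```

The rendered YAML then has to be committed and applied using, for example, the GitHub and Kubernetes providers.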

The official Terraform Kubernetes provider, which is needed to apply the generated configuration to the cluster, comes with some drawbacks. Further, the fact that the Flux provider doesn’t itself handle the bootstrap might be considered a sub-optimal user experience. This has prompted the Flux project to overhaul the Flux provider, so that it performs the bootstrap process in its entirety. At the time of writing, a new bootstrap resource is in development for the Flux provider. With this important revision, the initial terraform apply will:

  • generate the necessary configuration,
  • push the configuration to the Git repo,
  • apply the configuration to the cluster, and wait for reconciliation

Subsequently, Terraform is responsible for managing the configuration in the repo, and Flux is responsible for fetching and applying the configuration from the repo. This provides a clear delineation of responsibility, and a much improved user experience.
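Since the resource is still in development, the following is only a sketch of the direction of travel; the resource name (`flux_bootstrap_git`), the provider configuration blocks, and all of the values shown are assumptions that may well change before release:

```hcl
provider "flux" {
  # Where to apply the configuration (assumed schema)
  kubernetes = {
    config_path = "~/.kube/config"
  }
  # Where to store the configuration (assumed schema)
  git = {
    url = "ssh://git@github.com/nbrownuk/flux-bootstrap.git"
    ssh = {
      username    = "git"
      private_key = file("./identity")
    }
  }
}

# A single resource performs the whole bootstrap:
# generate, push, apply, and wait for reconciliation
resource "flux_bootstrap_git" "this" {
  path = "clusters/my-cluster"
}
```

The appeal of this shape is that a single `terraform apply` takes a bare cluster and an empty repo to a fully self-managing Flux installation.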

  1. For the sake of clarity, the version of the CLI reflects the Flux release version, whereas the individual controllers have different, independent release versions. For example, in Flux v0.37.0, the source controller version is v0.32.1, whilst the helm controller is v0.27.0. ↩︎