I’m putting this here in case anyone runs into the same problem that I did (and that, seemingly, others have too).
TLDR
If you want to install the External Secrets Operator (ESO):

- using Kustomize instead of Helm,
- using the external-secrets.yaml file from the assets on the GitHub release page, and
- deploying to a custom namespace (instead of ‘default’),

then some patching is required to get the pods running.
The Problem
The External Secrets Operator is generally deployed using Helm, with a chart provided and maintained by the project community. For those who don’t use Helm and would prefer to work with YAML instead, an external-secrets.yaml file is provided in the assets on the project’s GitHub release page. It contains the CRDs as well as the various manifests for getting ESO deployed. These manifests have been rendered with the assumption that ESO is being deployed to the ‘default’ namespace on Kubernetes.
If you’d prefer to deploy ESO to a different, custom namespace, you might use Kustomize to patch the YAML before using kubectl to apply it to the cluster. A Kustomization could be created to define the namespace, and to have it applied to all object definitions:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- https://github.com/external-secrets/external-secrets/releases/download/v0.9.9/external-secrets.yaml
namespace: eso
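
The namespace.yaml referenced in the resources list is just a bare Namespace object matching the namespace field, along these lines:

apiVersion: v1
kind: Namespace
metadata:
  name: eso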
However, this will result in the failure of the TLS bootstrapping mechanism for the validating webhook, which, in turn, will prevent the creation of any SecretStore or ExternalSecret objects:
$ kubectl -n eso get po
NAME                                               READY   STATUS             RESTARTS     AGE
external-secrets-9f6b87d4b-f44vx                   1/1     Running            0            21m
external-secrets-cert-controller-8d89c6584-ncbz2   0/1     Running            0            21m
external-secrets-webhook-5857bd5fff-rndcf          0/1     CrashLoopBackOff   6 (2m ago)   21m
If you switch back to deploying in the ‘default’ namespace, all is rosy again. Perusing the documentation, it’s not entirely clear what configuration change is required to prevent this problem from occurring, but the logs from the pods suggest that a Kubernetes secret containing the TLS credentials for the webhook is missing.
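
To see the evidence for yourself, the logs can be pulled from the two unhealthy workloads (using the Deployment names from the rendered manifest):

$ kubectl -n eso logs deploy/external-secrets-webhook
$ kubectl -n eso logs deploy/external-secrets-cert-controller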
The Solution
It turns out the fix requires some additional options for the cert-controller and webhook containers that account for the custom namespace. With these added, the secret containing the TLS credentials appears in the custom namespace, and all is well:
$ kubectl -n eso get secrets external-secrets-webhook
NAME                       TYPE     DATA   AGE
external-secrets-webhook   Opaque   4      15s
A patch can be defined in the Kustomization to effect this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
<snip>
patches:
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/args/3
        value: "--service-namespace=eso"
      - op: replace
        path: /spec/template/spec/containers/0/args/5
        value: "--secret-namespace=eso"
    target:
      kind: Deployment
      name: external-secrets-cert-controller
  - patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/args/2
        value: "--dns-name=external-secrets-webhook.eso.svc"
    target:
      kind: Deployment
      name: external-secrets-webhook
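
With the patches defined, building and applying is a single step. One caveat: the numeric args indices in the JSON patches above are tied to the v0.9.9 manifest, so if you’re deploying a different release, check that the positions of the --service-namespace, --secret-namespace and --dns-name arguments haven’t shifted before applying:

$ kubectl apply -k .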
I hope this helps someone out there!