OSO uses Flux daily for all their clients, as well as for internal products and to showcase concepts to clients and the community. I have been working on deploying Apache Kafka within Kubernetes using Flux. The transition from Flux 1 to Flux 2 brought many benefits, most notably making it much easier to manage multiple clusters. A lot was learned along the way, and it is these lessons and best practices that I will share during this session.

If you're curious about how to manage multiple client clusters with Kubernetes and Flux, press play :)

--

--

This will be quick and to the point.

Previously, when we wanted to create an Opaque Secret manually, we had to encode and decode the data like so:

apiVersion: v1
kind: Secret
metadata:
  name: log-level
  namespace: namespace
type: Opaque
data:
  LOG_LEVEL: ZXJyb3IK
# must encode values
# encode
# echo "error" | base64
# ZXJyb3IK
#
# decode
# echo "ZXJyb3IK" | base64 -d
# error

Which wasn't the end of the world, but it added extra steps. In February 2022, the API was updated so that values no longer need to be encoded or decoded. This is thanks to the stringData key, which can be used like so:

apiVersion: v1
kind: Secret
metadata:
  name: log-level
  namespace: example
type: Opaque
stringData:
  LOG_LEVEL: "error"

Then apply it:

kubectl apply -f log-level-secret.yaml
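Under the hood, the API server base64-encodes whatever you put in stringData and stores it under data, so both manifests above end up with the same stored bytes. A small sketch of the encoding step, and a pitfall worth knowing: plain echo appends a newline, which gets encoded too (that is why the value above ends in K), while echo -n encodes only the literal string:

echo "error" | base64
# ZXJyb3IK        <- includes the trailing newline

echo -n "error" | base64
# ZXJyb3I=        <- exactly the string "error"

echo -n "ZXJyb3I=" | base64 -d
# error

To check what ended up in the cluster, you can read the value back, e.g. kubectl get secret log-level -n example -o jsonpath='{.data.LOG_LEVEL}' | base64 -d (the secret name and namespace here assume the manifest above).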

--

--

Joshua Callis

Converted DevOps Engineer, previously a Senior Software Engineer.