OSO uses Flux daily for all of its clients, as well as for internal products and to showcase concepts to clients and the community. I have been working on deploying Apache Kafka on Kubernetes using Flux. The transition from Flux 1 to Flux 2 brought many benefits, but above all it made managing multiple clusters much easier. A lot was learned along the way, and it is these lessons and best practices that I will share during this session.

If you're curious about how to manage multiple client clusters with Kubernetes/Flux, press play :)



This will be quick and to the point.

Previously, when we wanted to create an Opaque secret manually, we had to encode and decode the data like so:

apiVersion: v1
kind: Secret
metadata:
  name: log-level
  namespace: namespace
type: Opaque
data:
  LOG_LEVEL: ZXJyb3IK

# must encode values
# encode
# echo "error" | base64
# ZXJyb3IK
#
# decode
# echo "ZXJyb3IK" | base64 -d
# error

This wasn't the end of the world, but it added extra steps. In February 2022 the API was updated so that values no longer need to be encoded/decoded, thanks to the stringData key, which can be used like so:

apiVersion: v1
kind: Secret
metadata:
  name: log-level
  namespace: example
type: Opaque
stringData:
  LOG_LEVEL: "error"

Then, to apply it:

kubectl apply -f log-level-secret.yaml
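The two manifests hold the same value; you can sanity-check this locally by reproducing the round trip the API previously forced on you (note that echo appends a trailing newline, which is why the encoded form ends in K rather than padding):

```shell
# encode, as you would for the data field
echo "error" | base64
# ZXJyb3IK

# decode it back
echo "ZXJyb3IK" | base64 -d
# error
```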



I recently went through the pain of hitting a bunch of errors, most notably

“Error while reading line from the server.”

when trying to connect to a new ElastiCache cluster in AWS that is multi-AZ and has encryption in transit and encryption at rest enabled.

By default, Laravel doesn't set the cluster configuration and the default scheme is tcp, which obviously can't be used on a cluster that requires TLS :(

To keep it to the point…

Set the cluster configuration, change the scheme to tls, and remove the default 60-second timeout by setting read_write_timeout to -1.

The Redis configuration in config/database.php will look like the following:


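A sketch of that redis block, assuming the predis client and cluster mode; the environment variable names and defaults are placeholders for your own values:

```php
// config/database.php — redis section (sketch; hosts/env names are placeholders)
'redis' => [

    'client' => env('REDIS_CLIENT', 'predis'),

    'clusters' => [
        'default' => [
            [
                'scheme' => 'tls', // required for in-transit encryption
                'host' => env('REDIS_HOST', '127.0.0.1'),
                'password' => env('REDIS_PASSWORD', null),
                'port' => env('REDIS_PORT', 6379),
                'database' => 0,
            ],
        ],
    ],

    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'), // use the server-side cluster protocol
        'parameters' => [
            'scheme' => 'tls',
            'read_write_timeout' => -1, // disable the default 60-second timeout
        ],
    ],

],
```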

Joshua Callis

Converted DevOps Engineer, previously a Senior Software Engineer.