I know someone that will find this interesting.
Thanks!
Nice! It reminds me a lot of Clutch.
I’m using Librewolf (a Firefox fork) and have the same issue.
Just check that the Firefox native messaging folder exists in your home directory:
ls -l ~/.mozilla/native-messaging-hosts
In my case, I needed to create a symlink to make it work with my browser:
ln -s ~/.mozilla/native-messaging-hosts ~/.librewolf/native-messaging-hosts
Maybe you could apply a similar workaround. Hope this helps
If you still want more, you can use Helmfile. Take care of your PMs 😁
I understand your point. Anyway, if your devs are using Helm they can still use Sops with the helm-secrets plugin. Just create a separate values file (it can be named secrets.yaml) containing all sensitive values and encrypt it with Sops.
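As a rough sketch of that workflow (the file names, keys, and release name here are just examples, and the exact Sops flags depend on how your keys are configured):

```yaml
# secrets.yaml -- plaintext sensitive values before encryption (example content)
database:
  password: s3cr3t
apiKey: abc123

# Encrypt the file in place with Sops (assumes a .sops.yaml config,
# or pass your key explicitly with flags like --age or --kms):
#   sops --encrypt --in-place secrets.yaml
#
# Deploy with the helm-secrets plugin, which decrypts on the fly:
#   helm secrets upgrade --install myapp ./chart -f values.yaml -f secrets.yaml
```

Only the encrypted secrets.yaml ever lands in the repo; the plugin handles decryption transparently at deploy time.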
What do you think about storing your encrypted secrets in your repos using Sops?
Thanks for your answer. That’s correct as far as I can see in the EKS docs. But in GKE there is a little disclaimer here:
If you want to use a beta Kubernetes feature in GKE, assume that the feature is enabled. Test the feature on your specific GKE control plane version. In some cases, GKE might disable a beta feature in a specific control plane version.
They basically say: “OK, trust that all the beta features are enabled by default, but we can disable some of them without telling you.” Funny guys.
If an entire region goes down, the Terraform state file stored there will not be useful at all, because it only stores information about the resources you deployed in that particular region, and those resources will also go down.
Replicating the state file to another region will not help either, because it will still only contain information about the resources that are down in your region.
The state file inventories all the resources you have deployed to your cloud provider. Basically, Terraform uses it to know which resources are managed by the current Terraform code and to stay idempotent.
If you want to set up another region for disaster recovery (Active-Passive), you can use the same Terraform code with a different configuration (meaning different tfvars files) to deploy the resources to a different region (not necessarily a different account). Just make sure all your data is replicated into the passive region.
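A minimal sketch of what I mean (the variable name, regions, and file names are just illustrative):

```hcl
# variables.tf
variable "region" {
  type = string
}

# main.tf
provider "aws" {
  region = var.region
}

# active.tfvars:   region = "eu-west-1"
# passive.tfvars:  region = "eu-central-1"
#
# Deploy each region from the same code:
#   terraform apply -var-file=active.tfvars
#   terraform apply -var-file=passive.tfvars
```

Note that you would typically keep a separate state per region (different workspaces or different backend keys) so the two deployments don't collide.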
Does this answer your question?
https://min.io/docs/minio/kubernetes/upstream/operations/data-recovery.html
My apologies if I’m saying something stupid, but I see that this is built on top of Drone, which stopped being open source several years ago. Does this mean that Drone, as part of Gitness, has become open source again?
An unofficial NewPipe fork with SponsorBlock, if you want to skip ads.
This is a very interesting approach that we are starting to fully adopt in our organization for our Kubernetes deployments.
We switched from Helm (using Helmfile) to ArgoCD to deploy applications into our clusters.
The main challenge is designing a good repository structure to organize the ArgoCD applications, because there is no official guidance on which approach is best.
In the end we decided to use ApplicationSets to deploy umbrella charts defined in the repo. The Chart.yaml of each umbrella lists the charts we actually want to deploy (such as Ingress Nginx) as dependencies, along with their chart versions, and the values.yaml contains the values for a particular cluster.
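For illustration (the chart name and versions here are made up, though the ingress-nginx repository URL is the real one), an umbrella Chart.yaml looks roughly like:

```yaml
# Chart.yaml of an umbrella chart: the real workloads are listed as dependencies
apiVersion: v2
name: cluster-addons
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: 4.8.3
    repository: https://kubernetes.github.io/ingress-nginx
```

with the per-cluster overrides living in the sibling values.yaml that ArgoCD points at for each cluster.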
Another interesting issue is how we manage secrets. We were using Sops along with the helm-secrets plugin to automatically decrypt secrets when running helmfile apply. Fortunately, the helm-secrets plugin can be installed as an add-on in ArgoCD via an init script or by building a custom ArgoCD image.
I switched to the KISS launcher after I learned the company had bought the project. Never looked back.