I’m using Librewolf (a Firefox fork) and have the same issue.
Just check that the Firefox native-messaging folder exists in your home directory:
ls -l ~/.mozilla/native-messaging-hosts
In my case, I needed to create a symlink to make it work with my browser
ln -s ~/.mozilla/native-messaging-hosts ~/.librewolf/native-messaging-hosts
Maybe you could apply a similar workaround. Hope this helps
Just joking. Take it easy.
I mean, the person who filed that complaint must not be doing very well if they feel annoyed by an apology note.
Maybe whoever complained about the apology note prefers a good shotgun to solve their neighborhood matters.
If you still want more you can use Helmfile. Take care of your PMs 😁
I understand your point. Anyway, if your devs are using Helm they can still use Sops with the helm-secrets plugin. Just create a separate values file (it can be named secrets.yaml) containing all the sensitive values and encrypt it with Sops.
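As a minimal sketch (the keys and values here are made up for illustration), the plaintext file before encryption could look like:

```yaml
# secrets.yaml — plaintext sensitive values, to be encrypted in place with:
#   sops --encrypt --in-place secrets.yaml
# (assumes Sops is already configured with a key, e.g. via a .sops.yaml)
database:
  password: s3cr3t-value   # hypothetical secret
api:
  token: abc123            # hypothetical secret
```

Then the helm-secrets plugin can decrypt it on the fly at install time, e.g. helm secrets install myapp ./chart -f values.yaml -f secrets.yaml.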
I totally agree with that.
This is the perfect situation in which consumers could just stop buying audiobooks from them and the problem would be solved, but noooo. Most people would rather live with this shit because they cannot stop using Spotify. Great! I love humanity’s awesome ability to consume crap from everywhere and everyone and still be grateful for it.
I prefer blaming most of humanity, which is not capable of having a minimal critical thought.
Great! I can’t wait for some assholes to tell me that this is progress and that if you don’t like it you can go fuck yourself.
What do you think about storing your encrypted secrets in your repos using Sops?
You assumed well. I’m on a Firefish instance, and most of the content I can see from there comes from Mastodon users on other instances.
Don’t misunderstand me. I think that’s pretty cool, because you can interact with both Mastodon and Firefish users from either of the two applications, but at least in the case of my Firefish instance, most of Firefish’s additional features go underutilized.
Thanks for your answer. That’s correct as far as I can see in the EKS docs. But in GKE there is a little disclaimer here:
If you want to use a beta Kubernetes feature in GKE, assume that the feature is enabled. Test the feature on your specific GKE control plane version. In some cases, GKE might disable a beta feature in a specific control plane version.
They basically say: “OK, trust that all the beta features will be enabled by default, but we may disable some of them without telling you”. Funny guys.
If an entire region goes down, the Terraform state file stored there will not be useful at all, because it only stores information about the resources you deployed in that particular region, and those resources will also go down.
Replicating the state file to another region will not be useful either, because it will only contain information about the resources that are down in your region.
The state file inventories all the resources you have deployed to your cloud provider. Basically, Terraform uses it to know which resources are managed by the current Terraform code and to be idempotent.
If you want to set up another region for disaster recovery (active-passive), you can use the same Terraform code with a different configuration (meaning different tfvars files) to deploy the resources to a different region (not necessarily to another account). Just make sure that all your data is replicated into the passive region.
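For example (file names, regions, and variables here are illustrative, not from your setup), the per-region configuration can be as small as:

```hcl
# envs/us-east-1.tfvars — active region
region      = "us-east-1"
environment = "prod"

# envs/eu-west-1.tfvars — passive (DR) region: same code, different values
region      = "eu-west-1"
environment = "dr"
```

You would then run terraform apply -var-file=envs/eu-west-1.tfvars against a separate state backend for that region, so each region keeps its own state.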
Does this answer your question?
https://min.io/docs/minio/kubernetes/upstream/operations/data-recovery.html
That makes much more sense than I thought.
I totally agree with that. But I suppose that with this rebranding they are trying to move away from the original project as much as possible.
I’m not sure this move makes much sense right now, because by the time the project is finally released it will still be a fork of the original Terraform, but they may change that in the near future.
Hi! I’m afraid there is no single solution that covers all the functionality you are looking for. Anyway, these are the AWS services I use for most of the requirements you described. Keep in mind that most of them are paid AWS services and your company will be charged for them.
Default blocking for certain CIDRs.
Exceptions for certain IP/Host and port combos within those CIDRs.
Use Security Groups (free of charge): https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html
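A hedged Terraform sketch of the idea (the SG ID, port, and CIDR are placeholders): security groups deny all inbound traffic by default, so the “blocking” is implicit and you only declare the exceptions:

```hcl
# Allow a single host to reach Postgres; all other inbound traffic to
# instances in this security group stays blocked by default.
resource "aws_security_group_rule" "pg_exception" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 5432
  to_port           = 5432
  cidr_blocks       = ["203.0.113.7/32"]     # placeholder exception host
  security_group_id = "sg-0123456789abcdef0" # placeholder security group
}
```

Note that security groups cannot express explicit deny rules; if you need those for specific CIDRs, network ACLs are the usual companion.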
Authentication and authorisation to use said exceptions (i.e. user tracking).
You can implement user authentication using AWS Cognito: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html.
Additionally, you can delegate user authentication to an Application Load Balancer integrated with Cognito. See: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
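A sketch of that setup in Terraform (all resource names and references are placeholders I made up; the pattern is the documented authenticate-cognito listener action):

```hcl
# ALB listener rule that authenticates users against a Cognito user pool
# before forwarding the request to the application target group.
resource "aws_lb_listener_rule" "auth" {
  listener_arn = aws_lb_listener.https.arn # placeholder listener
  priority     = 10

  action {
    type = "authenticate-cognito"
    authenticate_cognito {
      user_pool_arn       = aws_cognito_user_pool.pool.arn
      user_pool_client_id = aws_cognito_user_pool_client.client.id
      user_pool_domain    = aws_cognito_user_pool_domain.domain.domain
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn # placeholder target group
  }

  condition {
    path_pattern {
      values = ["/*"]
    }
  }
}
```

This gives you per-user tracking for free, since the ALB injects the authenticated user’s claims into headers it forwards to the backend.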
Detailed logging on connections; source, dest, request and response sizes, ports, protocols, whatever we can get out hands on.
All of the above for all (?) kinds of TCP connections (HTTPS, Postgres, Oracle DB, MongoDB, as examples).
For connections through the Load Balancer I suggest enabling access logs (this requires an S3 bucket and will generate additional charges). For the rest of the connections you may want to check this, but I have never tried it.
Hi! After more than 6 years using Ansible I have not found a way to print the standard output of a program running under the command module, so I’m afraid the only way to achieve this is exactly what you suggest: using a debug task, something that has always seemed terribly ugly to me.
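For reference, the pattern looks like this (the program path and task names are illustrative):

```yaml
- name: Run the program
  ansible.builtin.command: /usr/local/bin/myprog # hypothetical binary
  register: myprog_result

- name: Print its standard output
  ansible.builtin.debug:
    var: myprog_result.stdout_lines
```

Ugly as it is, registering the result and echoing stdout_lines in a second task is still the idiomatic way to do it.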
There’s an unofficial NewPipe fork with SponsorBlock, if you want to skip sponsored segments.
I know someone that will find this interesting.
Thanks!