Signal is finally tightening its desktop client's security by changing how it stores the plain-text encryption keys for its data store, after downplaying the issue since 2018.
deleted by creator
It wasn't a serious security flaw, arguably not one at all. So they are perfectly justified in downplaying the hysteria.
the point is they could have fixed it when it was first reported, instead of waiting around until the issue blew up.
A security company should prioritize investments (i.e., development time) based on a threat model and risk management, not on what random people think.
so are you saying that wasn’t a security risk?
I am saying that, given the existing risks, effort should go to the ones most relevant to the threat model you intend to assume.
In fact, the “fix” they are providing doesn’t change much, simply because on single-user machines there is borderline no difference between compromising your user account (via physical access, you unknowingly installing malware, etc.) and compromising the whole box (with root/admin access).
On Windows it’s not going to have any impact at all (due to how this API is implemented); on Linux/Mac it adds a little complexity to the exploit. Once your user account is compromised, your password (which is what protects the keychain) can be captured very easily via internal phishing (e.g., a fake graphical prompt, a fake sudo prompt) or other techniques. Sometimes it might not even be necessary: for example, if you run signal-desktop yourself and your user owns the binary, an attacker with local privileges can simply patch, modify, or replace that binary. Defending against that requires other controls, like signing the binary and configuring which signing keys are accepted (this is possible and somewhat common on macOS), or something else that relies on external trust (the root user, a remote server, etc.).
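To illustrate how cheap the "internal phishing" step above is: a minimal sketch of a fake keychain prompt that malware running as your user could show. The dialog wording and the `read_secret` parameter are hypothetical, purely for demonstration; the point is that nothing distinguishes this from a legitimate OS prompt.

```python
import getpass

def fake_keychain_prompt(read_secret=getpass.getpass):
    # Mimic the wording of a real OS keychain dialog; to the user this
    # looks identical to the legitimate prompt an app triggers when it
    # requests a stored secret.
    print('"Signal" wants to use your confidential information '
          'stored in "Signal Safe Storage" in your keychain.')
    # Capture the password instead of handing the request to the OS.
    return read_secret("Password: ")

# Demo with a stubbed input source (no interactive terminal needed):
stolen = fake_keychain_prompt(read_secret=lambda prompt: "hunter2")
```

Once the attacker holds the keychain password, the keychain-backed key storage adds nothing over the old plain-text file.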
So my point is: if their threat model already assumed that a compromised client device means your data is not protected, it doesn’t make much sense to spend effort shaving 10–20% off the risk of that compromise; the effort is better spent on other work that might be more impactful.
A company that requires using a phone number prides itself in security?
privacy != anonymity != security
But in some way, privacy ≈ security. Very intertwined.
Privacy is not anonymity though. Privacy simply means that private data is not disclosed to, or used by, parties and for purposes that the data owner doesn’t explicitly allow. Often not collecting data is a way to ensure no misuse (and no compromise, hence security), but that’s not always the case.
Right, and often for that to be the case, the transferring and storing of data should be secure.
I’m mostly just pointing out the fact that when you do x ≠ y ≠ z, it can still be the case that x = z, e.g. 4 ≠ 3 ≠ 4.
Just nitpicking, perhaps.
What's the vulnerability with Signal and phone numbers?
It’s better now, but for years and years all they used for contact discovery was simple hashing… The problem is that the dataset is very small, so it was easy to generate a rainbow table of all possible phone-number hashes in a matter of hours. Then anyone with access to the hosts (either hackers, or the US state via AWS collaboration) had access to the entire social graph.
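A quick sketch of why unsalted hashes of phone numbers are weak: the keyspace is tiny by cryptographic standards, so an attacker who obtains the hashes can simply enumerate every possible number. The number format and range below are made up for the demo; a real attack would cover roughly 10^10 numbers per country code, which is feasible in hours on commodity hardware.

```python
import hashlib

def hash_number(number):
    # Unsalted hash of the phone number, as naive contact discovery
    # schemes used it.
    return hashlib.sha256(number.encode()).hexdigest()

# A hash the attacker obtained from the server's contact-discovery data:
leaked = hash_number("+15550001234")

def crack(target, prefix="+1555000", digits=4):
    # Enumerate the (here deliberately tiny) keyspace until the hash
    # matches; scaling this to all numbers is brute force, not cleverness.
    for i in range(10 ** digits):
        candidate = f"{prefix}{i:0{digits}d}"
        if hash_number(candidate) == target:
            return candidate
    return None

print(crack(leaked))  # recovers "+15550001234"
```

Precomputing the table once (a rainbow table) then makes every subsequent hash a simple lookup, which is why hashing alone never hid the social graph.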
Yeah the way I remember it, they put a lot of effort into masking that social graph. That was a while back too, not recent.
What I’m saying, though, is that for the longest time they didn’t, and when they changed the technique they hardly acknowledged that it had been a problem in the past and that essentially every user’s social graph had been exposed for years.
Signal, originally known as TextSecure, worked entirely over text messages when it first came out. It was born of a different era, and securing communication data was the only immediate goal, because at the time basically everything was viewable by anyone with enough admin rights on basically every platform. Signal helped popularize end-to-end encryption (E2EE) and dragged everyone else along with them. Very few services at the time even advertised E2EE, private metadata, or social-graph privacy.
As they’ve improved the platform they continue to make incremental changes to enhance security. This is not a flaw, this is how progress is made.
Bam!
Did it get leaked or something?