“But we never had a problem before! We might as well cut the security and back-up budget for next year.”
IT is one of those things where, when it is running perfectly fine, no one thinks about it; instead they start to question why they are spending so much money on various services. But it is BECAUSE you are spending that money that it is running so smoothly. The temptation to cut corners, hire cheaper, less-trained employees, and scale back security to save money is a big problem.
It’s such a pain in the ass presenting and justifying budget requests when the people who decide only see $$$. It’s always “Why do you need this much? We’ve been doing fine the last couple of years.” The only way to get them to understand is to talk to them in $$$-speak. Like, “This is the money and reputation you risk losing if something happens because we didn’t spend for this.”
You have to talk-the-talk.
You don’t talk about gigabytes and megabits-per-second and megawatts with sales people and executives. You need to turn things around and talk in terms of money and manpower. It’s frustrating that you have to do that, but it is the best way to get through to people.
There is no fixed dollar value associated with a security breach, so there isn’t an exact way to quantify it. Intangible benefits that are difficult to express in dollar values are a hard sell to one-dimensional money managers.
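For what it’s worth, the closest you can get is a back-of-the-envelope annualized-loss-expectancy estimate compared against the proposed spend. A rough sketch of that math (every number below is a made-up placeholder, not a real estimate):

```python
# Back-of-the-envelope risk math for budget meetings.
# All figures are hypothetical placeholders for illustration only.

breach_cost = 2_000_000      # single loss expectancy: cleanup, downtime, lost customers ($)
breach_probability = 0.05    # estimated chance of a serious incident in a given year
security_budget = 60_000     # proposed annual spend on backups/security ($)

# Expected (annualized) loss if we skip the spend.
annualized_loss = breach_cost * breach_probability

print(f"Expected annual loss without controls: ${annualized_loss:,.0f}")
print(f"Proposed annual spend:                 ${security_budget:,.0f}")
print(f"The spend pays for itself if it cuts that expected loss by "
      f"{security_budget / annualized_loss:.0%} or more")
```

It’s crude, but “we expect to lose $100k a year on average and this costs $60k” is the kind of sentence money people actually respond to.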
[ in Moss’ voice ] they never remember us!
It sounds like they had a really bad backup system for this to happen.
No one will ever trust them with their data and email again, so they might as well close their doors straight away. It’s unfortunate but a mistake like this is likely gonna cost you your business.
It’s probably an easily avoided issue, too. If only they had offsite backups they could roll back…
It’s not offsite backups that would have saved them, it’s offline backups.
You can have all the data centers you want, but if they’re all connected, then one ransomware attack can (and did) nuke them all.
If you have just one system that’s unplugged with a copy of all the data, then your data will be fine. It’s just a matter of recovery time at that point, which could still be very, very bad, but the data still exists.
Immutable backups are the “current hotness” in this space.
Or something like AWS S3 vault lock. You pay up front and specify the duration. And at that point you can’t even delete the data if you want to. You can remove your credit card from account billing, and they still keep the data for the specified duration.
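For anyone curious what that looks like in practice, here’s a rough boto3 sketch using S3 Object Lock in compliance mode (the S3-side version of the vault-lock idea). Bucket name, retention period, and file name are all placeholders:

```python
# Minimal sketch: write-once backups with S3 Object Lock in compliance mode.
# Assumes default region/credentials; bucket name and file name are made up.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-immutable-backups"   # hypothetical bucket name

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Compliance mode: no one, not even the root account, can delete or
# overwrite a locked object version until its retention period expires.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)

# Anything uploaded now inherits the 90-day lock by default.
with open("backup-2023-08-21.tar.gz", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key="backup-2023-08-21.tar.gz", Body=f)
```

Ransomware (or a compromised admin account) can still encrypt the live systems, but it can’t touch those locked copies.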
Pretty sure the Amazon vault lock offerings are essentially specific implementations of the broad idea of an immutable backup. Not disagreeing with you here, just saying this might not be an “or” situation.
Off-site can also be offline. Any serious company has a tape library stored in another location in a vault for worst-case scenarios, since fire and natural disasters are also a serious risk.
Sure, but those are two different things. Offsite is good to protect against natural disasters (or attacks, or…), offline is good to protect against digital disasters (ransomware, admin mistakes, etc.)
Tape libraries are a way to store offline backups, but they don’t have to be offline and aren’t the only way to run a backup solution. They are a way to store data in bulk for cheap. They are also a way to help protect against technological changes (if all your backups are on 5 1/4" floppy discs and you can’t find any 5 1/4" floppy drives, then your backups are no good).
Some people like to use a 3-2-1 solution (3 copies, 2 different technologies, 1 offsite), but that doesn’t specifically mention anything about offline, which is critical for these types of situations, and the exact solution has many different correct answers.
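As a toy illustration of that checklist (with the offline piece bolted on), something like this is roughly what I’d want a backup inventory to pass. The inventory format is made up, not a real tool:

```python
# Toy 3-2-1(+offline) check: 3 copies, 2 media types, 1 offsite, and at
# least one copy that's offline so ransomware can't reach it.
# The inventory below is a made-up example for illustration.

backups = [
    {"name": "primary NAS",      "media": "disk", "offsite": False, "offline": False},
    {"name": "object-locked S3", "media": "s3",   "offsite": True,  "offline": False},
    {"name": "tapes in a vault", "media": "tape", "offsite": True,  "offline": True},
]

checks = {
    "3 copies":       len(backups) >= 3,
    "2 media types":  len({b["media"] for b in backups}) >= 2,
    "1 offsite copy": any(b["offsite"] for b in backups),
    "1 offline copy": any(b["offline"] for b in backups),
}

for rule, ok in checks.items():
    print(f"{'OK  ' if ok else 'FAIL'} {rule}")
```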
You are right. That’s what I meant. You should have both in place for different reasons.
Whew that sounds pretty bad.
Tl-dr:
*Danish hosting firms CloudNordic and AzeroCloud have suffered ransomware attacks.*
*During a data center migration, the affected servers were connected to the broader network, allowing the attackers to access critical administrative systems, all data storage silos, and all backup systems.*
*Danish media reports that the attacks have impacted “several hundred Danish companies” who lost everything they stored in the cloud, including websites, email inboxes, documents, etc.*
This is a company ender. How would you even come back from this?
Too many companies have stupid people running their IT department, stupid to the bone, thinking backups (on-site and off-site) are not important. This hosting company is a prime example.
All their customers too, I bet.