For those who haven't been following this, take a minute to read how a bad thing can become a disaster in the blink of an eye. In a nutshell:
- DDoS attack accompanied by a ransom demand
- AWS EC2 control panel compromised
- Code Spaces failed to regain and retain access to the panel
- tons of data deleted by the attacker, including backups
- company realized they had screwed up and is now out of business
More than a breach, this is a great example of a lack of BCP/DR. I find it hilarious that Code Spaces boasted about their backup capabilities by saying "At Code Spaces, we backup your data every time you make a change at multiple off-site locations, giving you peace of mind with each commit." I guess these guys had never heard the term "single point of failure". Likewise, they had no idea what "redundancy" means. They basically said "we can recover from anything", and they found out the hard way that wasn't the case.
Although we use "the cloud" a lot, I still respect and use my trusty LTO-5 gear and keep offline backups (a rough sketch of the idea below). Would've come in handy for these peeps.
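You don't need tape specifically to get the point; the point is that at least one copy of your backups must live outside the blast radius of your production credentials. A minimal sketch of that idea, assuming a hypothetical nightly archive and two independently mounted destinations (all paths are made up for illustration):

```python
#!/usr/bin/env python3
"""Toy illustration of keeping backup copies on independent targets and
verifying them, so one compromised account or console can't wipe everything.
Paths and targets are hypothetical."""

import hashlib
import shutil
from pathlib import Path

# Hypothetical nightly archive produced elsewhere (e.g. a dump + tar job).
ARCHIVE = Path("/backups/nightly/repos-2014-06-17.tar.gz")

# Independent destinations: a separately-credentialed secondary mount and
# a staging dir that later gets written to offline media (tape, etc.).
TARGETS = [
    Path("/mnt/secondary-site/backups"),
    Path("/mnt/tape-staging/backups"),
]


def sha256(path: Path) -> str:
    """Stream the file through SHA-256 so each copy can be verified."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def replicate(archive: Path, targets: list[Path]) -> None:
    source_digest = sha256(archive)
    for target in targets:
        target.mkdir(parents=True, exist_ok=True)
        dest = target / archive.name
        shutil.copy2(archive, dest)
        # A backup you haven't verified is a hope, not a backup.
        if sha256(dest) != source_digest:
            raise RuntimeError(f"checksum mismatch on {dest}")
        print(f"verified copy at {dest}")


if __name__ == "__main__":
    replicate(ARCHIVE, TARGETS)
```

The script itself is beside the point; what matters is that neither target is reachable with the same credentials that run production, which is exactly what Code Spaces got wrong.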
Takeaway: when you don't plan properly and put all your eggs in one basket, horrible things will happen sooner or later.
Source headline: Code Spaces goes titsup FOREVER after attacker NUKES its Amazon-hosted data
Subhead: AWS console breach leads to demise of service with "proven" backup plan