The death of Code Spaces
cyberguypr
Mod Posts: 6,928
in Off-Topic
For those who haven't been following this, take a minute to read how a bad thing can become a disaster in the blink of an eye. In a nutshell:
- DDoS attack with a ransom demand
- EC2 admin panel hacked
- Code Spaces failed to regain and retain access
- attacker deleted tons of data, including the backups
- the company realized it had screwed up and is now out of business
More than a breach, this is a great example of a lack of BCP/DR. I find it hilarious that Code Spaces boasted about its backup capabilities, saying: "At Code Spaces, we backup your data every time you make a change at multiple off-site locations, giving you peace of mind with each commit." I guess these guys have never heard the term "single point of failure." Likewise, they had no clue what "redundancy" means. They basically said "we can recover from anything" and found out the hard way that wasn't the case.
Although we use "the cloud" a lot, I still respect and use my trusty LTO-5 gear and leverage offline backups. Would've come in handy for these peeps.
Takeaway: when you don't plan properly and put all your eggs in one basket, horrible things will happen sooner or later.
Code Spaces goes titsup FOREVER after attacker NUKES its Amazon-hosted data
AWS console breach leads to demise of service with “proven” backup plan
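For the curious, here's roughly what keeping your eggs in more than one basket looks like on AWS: push snapshot copies into a completely separate account, so someone who owns your console still can't touch the backups. A minimal sketch assuming boto3; the account ID, snapshot ID, and CLI profile below are made up:

```python
import boto3

# Placeholders for illustration only.
BACKUP_ACCOUNT_ID = "222222222222"      # separate AWS account that owns the backups
SNAPSHOT_ID = "snap-0123456789abcdef0"  # EBS snapshot in the production account
REGION = "us-east-1"

# Step 1 (production account): share the snapshot with the backup account.
prod_ec2 = boto3.client("ec2", region_name=REGION)
prod_ec2.modify_snapshot_attribute(
    SnapshotId=SNAPSHOT_ID,
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=[BACKUP_ACCOUNT_ID],
)

# Step 2 (backup account, separate credentials): copy the shared snapshot.
# The copy is owned by the backup account, so an attacker deleting the
# original snapshot (or the whole production account) can't reach it.
backup_session = boto3.Session(profile_name="backup-account")  # hypothetical profile
backup_ec2 = backup_session.client("ec2", region_name=REGION)
copy = backup_ec2.copy_snapshot(
    SourceRegion=REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Off-account backup copy",
)
print("Backup copy:", copy["SnapshotId"])
```

Same principle as my tapes: the backup lives somewhere the production credentials simply can't reach.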
Comments
jibbajabba Member Posts: 4,317 ■■■■■■■■□□
Ouch... We did have a customer once call in to shut down their "cloud" because it got compromised. I don't know anything about AWS and whether this is possible at all, but they write it all happened over a 12-hour period? In that time they weren't able to get Amazon involved?
I am confused...
My own knowledge base made public: http://open902.com
the_Grinch Member Posts: 4,165 ■■■■■■■■■■
From the second article it seems the attacker made a series of backup admin accounts, and once he detected their attempts to recover the account he went on a deleting spree. As cyberguypr pointed out, they put all their eggs in one basket and paid the ultimate price. Not the first nor the last time you will read things like this. Being on the auditing side always makes for a good perspective on these things. I see first hand how companies respond to their incidents, and you learn (as these guys did) that the first couple of hours are the most important.
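Even a dumb periodic sweep of IAM users would have flagged those rogue admin accounts early. A quick sketch with boto3; the "known users" allowlist is just a placeholder:

```python
import boto3

# Placeholder allowlist of IAM users you actually created.
KNOWN_USERS = {"alice", "bob", "deploy-bot"}
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

iam = boto3.client("iam")

# Walk every IAM user in the account and flag surprises.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        if name not in KNOWN_USERS:
            print(f"UNEXPECTED USER: {name} created {user['CreateDate']}")
        # Also flag anyone holding the managed admin policy.
        attached = iam.list_attached_user_policies(UserName=name)
        for policy in attached["AttachedPolicies"]:
            if policy["PolicyArn"] == ADMIN_POLICY_ARN:
                print(f"ADMIN RIGHTS: {name} via {policy['PolicyName']}")
```

Run it from a cron job and alert on any output. It won't stop an attacker, but it buys you those first couple of hours.
WIP: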
PHP
Kotlin
Intro to Discrete Math
Programming Languages
Work stuff