Hate technology ... or labs ... or both ...
jibbajabba
Member Posts: 4,317 ■■■■■■■■□□
I thought it was time to clean up and rebuild my lab, making sure I run the whole vRealize Suite, since it's a big part of my job.
So over the last two weeks I have:
1. Reinstalled my physical host
2. Created a virtual vSphere infrastructure (AD, SQL and whatnot)
3. Installed two vCloud Director environments (5.6.4 & ...)
4. Installed a vCAC 6.0 environment
5. Installed a vRA 6.2.2 environment
6. Installed Hyperic
7. Installed Log Insight
8. Installed Infrastructure Navigator
9. Installed vROps 6.0
10. Installed a second virtual vSphere infrastructure
11. Installed vROps 6.1
12. Installed Veeam, backed up the lot (sanity-check sketch below)
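Side note: with a stack this size, a scripted "is everything actually up" check is worth having. A minimal sketch using pyVmomi (VMware's Python SDK) - the vCenter hostname and credentials below are placeholders, not real values:
[code]
# Minimal sketch: confirm every VM in the nested lab is powered on.
# Assumes pyVmomi is installed; hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only - self-signed certs
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)  # poweredOn / poweredOff / suspended
view.Destroy()
Disconnect(si)
[/code]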
Then there was tonight ... Thursday, 3pm
1. Call from my manager - I'm getting a tech screen within the hour for a couple of projects
2. Panic settling in
3. Wasn't sure which project the tech screen was for (vRA / vCAC / vCD / vROps - who knows)
4. Tried to connect to the lab ... nothing ... no connection
5. Checked the DRAC of the server - pink screen on the ESXi host - reset
6. No USB drive found - bust - reinstalled ESXi on a new USB stick
7. No LUNs visible (see the quick check after this list)
8. Logged in to the first NAS (Synology) - 2 (!!!) out of 4x 480GB SSDs failed - whole infra bust
9. Replaced the SSDs, re-created the LUN, rebuilt the physical host
Heh - love Veeam ... so what could possibly go wrong
10. Reinstalled Veeam onto a VM
11. Realized my local storage (6x 2TB SATA) - which hosts the first set of backups - is missing
12. Checked the RAID card - error saying the battery is degraded
13. Went into the RAID BIOS - RAID 6 failed - corrupt metadata - cannot be recovered
14. Checked the second Synology - 4x 2TB SATA - the backup of the backups
15. Can't check the Synology - it's turned off
16. And it can't be turned on
17. Synology bust
18. Moved the disks from one Synology to the other - filesystem damaged
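About step 7 - "no LUNs visible" can also be confirmed from the vCenter side instead of the host console. Same pyVmomi idea as above (hostname and credentials again placeholders):
[code]
# Minimal sketch: list the datastores each ESXi host can see and whether
# they are accessible. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only - self-signed certs
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(host.name)
    for ds in host.datastore:             # an empty list means no datastores/LUNs
        print("  ", ds.name, "accessible:", ds.summary.accessible)
view.Destroy()
Disconnect(si)
[/code]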
So ... two weeks of labbing to create a perfect infrastructure - down the toilet
Hardware cost to replace the broken parts: £400 (without the Synology) + delivery
Time needed to reinstall the lab: priceless
So what else could go wrong ... oh yeah - a. the tech screen ...
b. Checked the time: 6:30pm ... missed my GP appointment, and there's no other slot for a week
c. Checked Amazon: parts out of stock - no (affordable) new lab hardware for now
d. Needed a whisky ... went to the kitchen, dropped the bottle - the kitchen floor smells nice but needs cleaning
Moral of this story: sometimes you just need to stay in bed ..... or win the lottery ... or have a stupid job which doesn't require thinking, tinkering or ... well, anything really
[/rant]
My own knowledge base made public: http://open902.com
Comments
iBrokeIT Member Posts: 1,318 ■■■■■■■■■□
Don't you love it when managing your home lab turns into a full-time job by itself?
2019: GPEN | GCFE | GXPN | GICSP | CySA+
2020: GCIP | GCIA
2021: GRID | GDSA | Pentest+
2022: GMON | GDAT
2023: GREM | GSE | GCFA
WGU BS IT-NA | SANS Grad Cert: PT&EH | SANS Grad Cert: ICS Security | SANS Grad Cert: Cyber Defense Ops | SANS Grad Cert: Incident Response