
Got a good scare today. lol

Blackout Member Posts: 512 ■■■■□□□□□□
So I was working on our VoIP switch, making some minor port configuration changes, when I looked over and realized my phone wasn't working...... so I walked around the office looking at people's desks..... all of the phones were down!!! At this point I realized the whole switch was out, and I started to panic a bit. I looked through all of the configs; I hadn't saved anything yet, so I rebooted the switch..... same issue. At that point I called my senior engineer to tell him I had messed something up but didn't know what yet. I explained step by step what happened; he walked into the server room, unplugged the Ethernet cable from the UC500, plugged it back in, smiled at me, and walked away. I walked out to my desk and boom, my phone was working. I sat there feeling a bit sheepish. lol. Apparently I had bumped the Ethernet cable and unseated it while I was chasing wires in the server room. Thank god I work at a small company and nobody really noticed their phones weren't working.
Current Certification Path: CCNA, CCNP Security, CCDA, CCIE Security

"Practice doesn't make perfect. Perfect practice makes perfect"

Vincent Thomas "Vince" Lombardi

Comments

  • Concerned Water Member Posts: 338 ■■■■□□□□□□
    lol, reminds me of myself.
    Reading: CCNP Route FLG, Routing TCP/IP Vol. 1
    SWITCH [x] ROUTE [ ] TSHOOT [ ] VCP6-NV [ ]
  • mapletune Member Posts: 316
    That's a great story with a happy ending! =D

    It reminds us of what we always hear people say: "Always start troubleshooting on Layer 1." XD
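
    For anyone new to this, a quick first check from the switch side might look something like the following. This is only a rough sketch assuming Cisco IOS; the interface name is just an example:

        switch# show interfaces status | include notconnect
        switch# show interfaces FastEthernet0/1
        ! a "down, line protocol is down (notconnect)" state generally points
        ! at Layer 1: an unplugged or unseated cable, a bad patch lead, a dead port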
    Studying: vmware, CompTIA Linux+, Storage+ or EMCISA
    Future: CCNP, CCIE
  • nhpr Member Posts: 165
    If you haven't crashed at least one major system, you're not in IT.
  • universalfrost Member Posts: 247
    Congrats on the crash! Next time go bigger... since you only had a small PBX 1/2, you were just using training wheels... lol. I took down an entire SMEO (an old Raytheon ERSU with DEC Alpha processors, running VMS) which serviced the entire southern half of EUCOM with all the long locals hanging off it... thankfully it was 1 AM, and nobody noticed in the morning when I handed them the call detail sheets (I made the mistake of switching to proc B before I had proc B online)...

    Glad everything worked out in the end... now remember the KISS principle... Layer 1 first, then work your way up...
    "Quando Omni Flunkus Moritati" (when all else fails play dead) -Red Green
  • sratakhin Member Posts: 818
    One time I was messing around with an old switch. Suddenly, a guy next to me said he couldn't send an e-mail. I knew it was because of me. Within a minute, I learned that the whole building was down. It turned out that STP was off on all the switches...
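
    If anyone wants to check their own gear, spanning tree is easy to verify (and re-enable) on a Cisco switch. Again just a rough sketch assuming Cisco IOS, with example VLAN numbers:

        switch# show spanning-tree vlan 10
        ! a "no spanning tree instance exists" style reply means STP is off for that VLAN
        switch# configure terminal
        switch(config)# spanning-tree mode rapid-pvst
        switch(config)# spanning-tree vlan 1-4094
        switch(config)# end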
  • mapletune Member Posts: 316
    VTP server VLAN propagation? =D Ouch~
    Studying: vmware, CompTIA Linux+, Storage+ or EMCISA
    Future: CCNP, CCIE
  • gorebrush Member Posts: 2,743 ■■■■■■■□□□
    My best one has to be this:

    I was the systems administrator for a company (this was a few years ago) and had to upgrade Exchange 2000 to 2003. We had it running on our main "Exchange Server", and a backup copy was running on a SQL Server (so we had a failover).

    I upgraded Exchange by doing a full in-place upgrade of Windows + Exchange (apparently the most difficult way of doing it, but it worked a dream). Basically you upgrade to Exchange 2003 first, then upgrade Windows, because Exchange 2000 would not run on Windows 2003.

    I basically put the Windows CD in and booted off it - worked a treat. I was up and running with Exchange 2003 in a few hours.

    The next day I did the SQL Server, using the same approach. I ended up hosing the SQL Server because the two HP DL360s we were using weren't identical: the newer server let you boot off the Windows CD without issue, but the SQL box demanded the HP SmartStart CD.

    Now, as our ERP ran on SQL...

    I had to rebuild the SQL Server from scratch once I'd figured out what had gone wrong (with help from a buddy who happened to be online). Luckily, all we lost was people's logins to the system (all the data was backed up), so it cost us half a day on the Monday restoring all the user accounts for the app.

    Now, that was a butt clencher!
  • 4_lom Member Posts: 485
    I've had a similar experience, so don't feel too bad ;)
    Goals for 2018: MCSA: Cloud Platform, AWS Solutions Architect, MCSA : Server 2016, MCSE: Messaging

  • J_86 Member Posts: 262 ■■□□□□□□□□
    I caused a broadcast storm once... that was a fun day :o

    If you don't break something major at least once, you should play the lottery!
  • QHalo Member Posts: 1,488
    I once ran a force cleanup and put in the name of the cluster instead of the cluster node name. Hello, complete rebuild of the entire file server cluster.
  • Blackout Member Posts: 512 ■■■■□□□□□□
    Thank you everyone for the funny stories, and the support! You guys are the best.
    Current Certification Path: CCNA, CCNP Security, CCDA, CCIE Security

    "Practice doesn't make perfect. Perfect practice makes perfect"

    Vincent Thomas "Vince" Lombardi
  • RobertKaucher Member Posts: 4,299 ■■■■■■■■■■
    Always check layer one.
  • Danielh22185 Member Posts: 1,195 ■■■■□□□□□□
    I haven't had a major break on the network or a server yet (still pretty new to IT), but I did have a scare when I was administering the CMS Avaya server for the help desk call center I used to work at. I was testing a new skill addition to see how calls routed to it. That all worked great on the test ID, so the next step was to add the new skill to each of the 70 help desk agents' call skills so they could be routed these new types of calls. Instead, I accidentally copied the test ID, which only had that one new skill, over all 70 help desk agents during peak call volume. Needless to say, my boss wasn't very happy that we had dozens of calls holding and nobody able to answer them. Luckily, I was able to make a new template ID quickly and recopy that to everybody. We probably only missed a few hundred calls that day.
    Currently Studying: IE Stuff...kinda...for now...
    My ultimate career goal: To climb to the top of the computer network industry food chain.
    "Winning means you're willing to go longer, work harder, and give more than anyone else." - Vince Lombardi
  • powerfool Member Posts: 1,666 ■■■■■■■■□□
    Ugh. That reminds me of my first day at my last job. I was getting my bearings in the server room and went to look behind the communications racks. They didn't leave much clearance, and my shirt got caught on the power cable of the switch running many of the highly important servers... yeah, that wasn't good. I felt like an idiot. About ten minutes later, all of the switches had zip ties holding the power cords to the exhaust fan vent, so any unplugging of a switch would have to be done on purpose...
    2024 Renew: [ ] AZ-204 [ ] AZ-305 [ ] AZ-400 [ ] AZ-500 [ ] Vault Assoc.
    2024 New: [X] AWS SAP [ ] CKA [ ] Terraform Auth/Ops Pro
  • Lucas21 Member Posts: 46 ■■□□□□□□□□
    Is that how you got your name, powerfool? :)
  • Essendon Member Posts: 4,546 ■■■■■■■■■■
    I've got a similar story to some of you. I was 5 days into a short contract gig at a medium-sized grocery chain, and the manager asked me to remove some old computers from the server room. When I got there, I found an innocuous-looking pile of computers - the only thing not right was that the pile was too close to a rack. Some 3rd party had installed a router for grocery devices (from the stores) to communicate with the stuff in the server room. Now, the prick that installed the router had put the thing behind the pile of computers (don't ask me how, I don't know). I didn't see the router, began removing the computers, and somehow knocked the power cable out of the router. None of the stores could communicate with the servers. Not good!

    Needless to say, within an hour two parties got the sack: me and the 3rd-party installer.
    NSX, NSX, more NSX..

    Blog >> http://virtual10.com
  • Devilry Member Posts: 668
    I had the exact same thing happen to me, within the past few months actually. The only difference was that I took down the entire eastern division of the company. Talk about stress threshold testing :)
  • 4_lom Member Posts: 485
    I once tried racking a $14,000 server by myself, only to drop it 4 feet onto the ground.... Talk about being scared... I was 3 hours away from home doing an install for a new bank and thought I had one rail on all the way, but I didn't. You can guess what happened from there. Luckily, the only damage done was to the front bezel, which I had spares of. That was probably the scariest moment of my life so far. But I'm only 21, so we'll see what happens next :D
    Goals for 2018: MCSA: Cloud Platform, AWS Solutions Architect, MCSA : Server 2016, MCSE: Messaging

  • Iristheangel Mod Posts: 4,133
    My mess-up was a couple of weeks ago. I received an alert from our monitoring software for a server in the middle of the night. I'll be the first to admit that I had been up for 20+ hours, so I wasn't paying as close attention to detail as I should have. I recognized that the issue was one that could be easily corrected by restarting a service. So I remoted into the server, restarted the service, and didn't really think anything else of it since I was at my other job.
    The next morning when I walked into work, everyone was frantic because this client's employees couldn't log in to Citrix. It turns out that in my tired stupor I had restarted the service on the wrong server: the monitoring server, not the one that had the problem. I fixed it easily and all was well, but that was a dumb rookie move on my part.
    BS, MS, and CCIE #50931
    Blog: www.network-node.com
  • undomiel Member Posts: 2,818
    That reminds me of the panicked midnight call I received the other week when a number of virtual machines were down, including the all-important Exchange server. Apparently one of the techs had a bunch of nested remote desktop sessions open and went to shut down the server they were logged into. Well, they got confused and ended up shutting down the host server. Thankfully DRACs exist, so it was an easy fix for me; if my coworker had bothered to check the documentation, I would have been left sleeping peacefully.
    Jumping on the IT blogging band wagon -- http://www.jefferyland.com/
  • tprice5 Member Posts: 770
    All of the printers on base go through a running MAC bypass list on the NPS. I hand-jammed one in one time and missed a character, causing all of the printers to stop working.
    Whoops!
    Certification To-Do: CEH [ ], CHFI [ ], NCSA [ ], E10-001 [ ], 70-413 [ ], 70-414 [ ]
    WGU MSISA
    Start Date: 10/01/2014 | Complete Date: ASAP
    All Courses: LOT2, LYT2 , UVC2, ORA1, VUT2, VLT2 , FNV2 , TFT2 , JIT2 , FMV2, FXT2 , LQT2