Critical issue I ran into with ESXi

Young Grasshopper Member Posts: 51
So we have an ESXi server hosting 4 VMs (terminal servers) on a Dell PowerEdge 2950 with a 600GB RAID 10 config. Everything is on this RAID set (the VMware OS plus the VMs). The server recently died, so we tried removing the drives, placing them in another 2950, and importing the RAID config. The import succeeds and we can access the ESXi console, but we cannot load the VMs! Does anyone know what we can do to correct this? The Infrastructure Client just tells us it's unable to access any VMs, and it doesn't see the datastore.


thanks

Comments

  • astorrs Member Posts: 3,139
    Since it's ESXi, do you have the Remote CLI available?
  • Young Grasshopper Member Posts: 51
    Thanks for your reply. No, I don't have it installed. If I download and install it, what should I do with the Remote CLI?


    thanks
  • astorrs Member Posts: 3,139
    Well, you should be able to do a "vicfg-rescan" and then a "vicfg-vmhbadevs" to at least see which VMFS volumes are visible to the host. Once you've verified the volume is present, you can also look at "vmkfstools.pl --queryfs" to see the details of any volumes (capacity, space available, etc.). A rough sketch of the commands is at the end of this post.

    We need to figure out whether the RAID import damaged the file system in any way; if not, we can reimport the guests back into ESX.

    Also, is ESXi installed locally, running off an external flash drive, or on internal flash? And what version of ESXi is it? Update level?
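
    For example, a rough sketch of what that might look like from the Remote CLI (the host name, credentials, adapter, and datastore label below are placeholders, not details from this thread):

        # Rescan the storage adapter so the host picks up the imported volume
        vicfg-rescan --server <esxi-host> --username root vmhba1

        # List the VMFS volumes the host can see, along with their UUIDs
        vicfg-vmhbadevs --server <esxi-host> --username root -m

        # Query a volume's details (capacity, free space, UUID)
        vmkfstools.pl --server <esxi-host> --username root --queryfs /vmfs/volumes/<datastore-label>

    The vicfg commands should prompt for the password if --password is left off.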
  • blargoe Member Posts: 4,174
    This happened to me on my lab server one time. The problem was that once I loaded the drives into the new server and told it to save the config, the server saw them, but the GUID that gets assigned to the VMFS volume didn't match up anymore; it was as if being behind a different physical card meant it wasn't the same volume. It wasn't important enough to me to save the config, so I just blew it away and started over.
  • astorrs Member Posts: 3,139
    Exactly, that's what I'm thinking happened; hence the above commands, one of which will tell us the UUID of the volumes and the mount points (if they are mounted). Once we have that we should be able to fix it. The catch is I've never done this on ESXi with the RCLI before (only by using the service console in ESX)...
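
    If the volume does show up but under a new UUID, the remaining step would be re-registering the guests from it. A rough, untested sketch via the RCLI (host, credentials, and paths below are placeholders; vmware-cmd uses its own -H/-U/-P connection flags):

        # Confirm the volume's UUID and mount point
        vicfg-vmhbadevs --server <esxi-host> --username root -m

        # See what is currently registered on the host
        vmware-cmd -H <esxi-host> -U root -P <password> -l

        # Register each guest's .vmx from the re-mounted datastore
        vmware-cmd -H <esxi-host> -U root -P <password> -s register "/vmfs/volumes/<uuid>/<vm-name>/<vm-name>.vmx"

    If the host refuses to mount the volume at all because it treats it as a snapshot LUN, the LVM resignature/snapshot advanced settings would have to be changed first; check the exact option names against your build before touching them.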
  • Young GrasshopperYoung Grasshopper Member Posts: 51 ■■□□□□□□□□
    Thanks for the replies, everyone. Unfortunately we didn't have time to work on correcting this, so we just blew away the RAID sets and started from scratch. Management has tasked me with recreating this situation in case it comes up again next week.