bermovick wrote: » So you DO have to manually list each address on the ESXi side then? That'll teach me to listen to the server admin who said it wasn't required!
bermovick wrote: » Curiously, after correcting a mismatch in trunking settings between the NetApps, Brocades, and ESXi servers, I find the ESXi side does automatically detect the NetApps! Now to figure out why jumbo frames break connectivity. This project is a large box of headaches.
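A quick way to check whether jumbo frames actually work end to end is a don't-fragment vmkping from the ESXi shell, sized for a 9000-byte MTU (the storage IP below is just a placeholder):

    # -d sets don't-fragment; 8972 = 9000 minus 28 bytes of IP/ICMP headers
    vmkping -d -s 8972 10.0.0.50

If a plain vmkping to the same address works but this one fails, some hop in the path is still dropping or fragmenting jumbo frames.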
jibbajabba wrote: » Unless you've got HBAs preconfigured with an IP range, or DHCP with the whole subnet allowed. No offence, but I hope this is not production yet. It sounds thrown together, so make sure your storage is set up per NetApp and VMware best practices.
bermovick wrote: » Well, I take it back. Today's scans detect nothing, despite several test LUNs being created on the NetApps. I really don't get it. Yesterday it was seeing LUNs on netapp00 even though netapp00's address wasn't in dynamic discovery. Today, nothing. I'm not going to keep testing to figure it out; I'll just add all the addresses and call it done. If the person who actually has their VCP complains about it not being done correctly, I'll point out that perhaps the person with the VCP certification should have done the work.
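If you're adding them all anyway, this can be scripted from the ESXi shell instead of clicking through the dynamic discovery tab. A rough sketch, assuming a software iSCSI adapter named vmhba33 and placeholder NetApp portal addresses (substitute your own):

    # Add each NetApp iSCSI portal to dynamic (SendTargets) discovery
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.0.50:3260
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.0.51:3260
    # Rescan so any new LUNs show up
    esxcli storage core adapter rescan --adapter vmhba33
    # Verify what's configured
    esxcli iscsi adapter discovery sendtarget list --adapter vmhba33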
bermovick wrote: » Yeah, I missed two things regarding jumbo frames: 1) enabling jumbo frames on the switches connecting the devices together, and 2) apparently you have to configure the jumbo MTU size on the vSwitch itself, not only on the vmkernel ports. Once those were done, pings started working and all the latency problems we were seeing went away. I'm grabbing screenshots of the ESXi servers automatically seeing the NetApp LUNs when scanning without the addresses added under the dynamic discovery tab. I dunno what to tell you though - perhaps it remembers them from having them added before? If it helps, they're all in the same layer 2 broadcast domain.
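For anyone else chasing the same thing, both MTU changes can be made with esxcli. A sketch assuming a standard vSwitch named vSwitch1 and a vmkernel port vmk1 (the names are examples):

    # 1) Jumbo MTU on the vSwitch itself
    esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
    # 2) Jumbo MTU on each vmkernel port used for storage traffic
    esxcli network ip interface set --interface-name vmk1 --mtu 9000
    # Confirm both took effect
    esxcli network vswitch standard list
    esxcli network ip interface list

Setting only the vmkernel port doesn't help if the vSwitch is still at 1500, which matches the symptom described above.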