SQL and dependent Application servers: separate vSwitch and physical VLAN
Deathmage
Banned Posts: 2,496
Hey guys,
I'm curious whether any of you have ever separated a SQL server and the application server that depends on it onto their own vSwitch and/or their own physical VLAN in search of faster performance. I know this is a huge topic, because it touches SQL optimization, script optimization, and the Layer 7 communication between the SQL-driven application and the SQL server, but I digress.
So far I've done the following:
1. Applied registry changes that reduced the memory overhead of Windows Server 2008 R2 by roughly 40% on average; the link is on my blog here: Group Policy: Performance tweaks of 45% less memory usage - I.T.HINK ...So you don't have too...
2. Configured a separate eager-zeroed 60 GB .vmdk for the Windows page file, sized at 1.5x the VM's memory: a 36 GB minimum up to a 56 GB maximum.
3. Configured the SQL server with 8 vCPUs and a 24 GB memory allocation, with high priority on CPU shares (see the first sketch after this list). I'd set the storage to high shares as well, but I'm told by Dell EqualLogic that the setting doesn't do squat on their arrays.
4. Set up a SQL re-indexing job every Friday that brings index fragmentation down to about 20%; I've found that pushing it much lower can actually hurt performance. Unless told otherwise, 20% seems to do the trick (see the second sketch after this list).
5. Recently performed a pack on the databases; in the six years the system has been in use they were never packed, and this has helped quite a bit. The result was fairly predictable, though, since fragmentation was at 99%; now it's much lower.
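On point 3, one thing worth pairing with the 24 GB VM allocation is capping SQL Server's own memory below it so the guest OS keeps some headroom. A minimal T-SQL sketch; the 20 GB cap is an assumed value for illustration, not a figure from my setup:

    -- Hypothetical sketch: cap SQL Server's memory below the VM's 24 GB
    -- allocation so Windows keeps headroom. 20480 MB is an assumed value.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 20480;
    RECONFIGURE;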
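On points 4 and 5, here is roughly what that kind of Friday re-indexing pass looks like in T-SQL, as a minimal sketch assuming a SQL Server back end; 'AppDB' and the table/index names are placeholders, not anything from the actual system:

    -- Find indexes above the ~20% fragmentation threshold mentioned above.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID('AppDB'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 20
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- For a moderately fragmented index (placeholder names):
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;
    -- For heavy fragmentation, like the 99% case above, a full rebuild:
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;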
Thoughts on changes to come:
1. Purchase an additional quad-port NIC, set up one or two more uplinks to the iSCSI switch, and enable MPIO on the vStorage iSCSI switch. Our EqualLogic currently has only one NIC on the active switch; the other NIC is on the passive HA switch.
2. Configure a separate VLAN for just the SQL and application servers, with routing to the VLANs that need connectivity to the application server. The VLANs that connect to the application server today hold about 300 devices that require connections. My thinking is that if those two VMs sit on their own VLAN, separated from those 300 devices, eliminating the broadcast traffic alone would improve communication between the two servers, and only the traffic the end devices actually need would be passed to their respective VLANs.
With that said, I'm curious whether you think this will get me closer to the desired result of optimizing SQL performance even further, or whether you do things differently in your networks when it comes to SQL servers and their dependent application servers.
Comments
Lexluethar Member Posts: 516
Are you seeing performance issues with your SQL boxes? We segment the storage, but all the traffic goes through the same vSwitch and VLAN at the switch level. We do try to keep cross-VLAN traffic to a minimum by keeping related SQL boxes / applications on the same VLAN, but that's about as much segregation as we use.
On the storage end we put SQL on its own volume(s) to make performance easier to troubleshoot and to give us the ability to tier the volumes up as needed if performance becomes a problem.
A lot of the performance best practices I've seen and read start from the assumption that the SQL server is not virtualized, which isn't the reality nowadays. We also did some research on the effect of segregating traffic across more vSwitches, but in the end it all goes to the same 9K switch, so it really didn't matter unless you were doing traffic shaping / I/O control at the switch or VM level.
Not sure if that's much help; thought I'd give you my two cents and what we are doing.
Deathmage Banned Posts: 2,496
Thanks bro.
The SQL server isn't performing dreadfully; we'd just like to keep making improvements to squeeze out as much performance as possible. It usually hovers around 200 IOPS, and under load around ~350 IOPS.
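For anyone who wants to sanity-check numbers like that from inside SQL Server, here is a minimal sketch using the standard I/O DMVs (assuming SQL Server 2008 or later). The counters are cumulative since the instance last started, so to approximate IOPS you'd sample twice and divide the delta by the interval in seconds:

    -- Cumulative reads/writes and I/O stalls per database file.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
    ORDER BY vfs.num_of_reads + vfs.num_of_writes DESC;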