
ESX 5.5 - NFS - iSCSI Netapp performance issue?


    Hi All.

    I visited a site today, installed by someone else, to troubleshoot performance issues. The issues seem to relate to high disk queue depths and latency on the datastores.

    On first inspection the hardware seems great.

    80-user site:

    2x HP DL360p, 180 GB RAM, 10 Gbit SFP to the switch
    1x NetApp with 24x 500 GB SAS disks, connected to the 10 Gbit SFP switch
    8 guest servers: Exchange, SQL, RDS, nothing special.

    The NetApp SAN appears to be carved up into two separate units (SAN1/SAN2), each with 12 disks, with HA enabled between the two. One side is used for NFS shares of user data files, mapped as network drives on each PC.

    The other side also has NFS shares, but for the virtual machines, and this is where it gets interesting. The VMware hosts are connected to the SAN, but the guest VMDK files reside on an NFS share, not on block iSCSI as I would have expected.
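    For reference, this is roughly how I confirmed which datastores are NFS rather than VMFS-on-iSCSI (a quick pyvmomi sketch; the vCenter hostname and credentials below are placeholders, not the site's real details):

        # List each datastore with its type (NFS vs VMFS), capacity and free space.
        # Connection details are placeholders for this example.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()  # lab shortcut: skip certificate validation
        si = SmartConnect(host="vcenter.example.local", user="administrator",
                          pwd="password", sslContext=ctx)
        content = si.RetrieveContent()

        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            print(f"{s.name}: type={s.type}, "
                  f"capacity={s.capacity / 2**30:.0f} GiB, free={s.freeSpace / 2**30:.0f} GiB")
        Disconnect(si)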

    The guests have additional virtual network cards, connected to a second vSwitch, that talk back to the SAN using the Microsoft iSCSI initiator to mount extra disks, rather than the disks being added in the guest's VM settings from an iSCSI LUN on the SAN. All the cards are E1000s, so they are limited to 1 Gbit.
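    This is roughly how I listed which VMs are still on E1000 adapters (another pyvmomi sketch with placeholder connection details), since swapping them to VMXNET3 is one of the changes I'm considering:

        # List every virtual NIC that is an E1000, as candidates for a VMXNET3 swap.
        # Connection details are placeholders for this example.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        ctx = ssl._create_unverified_context()
        si = SmartConnect(host="vcenter.example.local", user="administrator",
                          pwd="password", sslContext=ctx)
        content = si.RetrieveContent()

        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.config is None:  # skip templates / inaccessible VMs
                continue
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualE1000):
                    print(f"{vm.name}: {dev.deviceInfo.label} is an E1000")
        Disconnect(si)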

    In one instance I have an Exchange server with a single disk in VMware running on an NFS datastore; then, inside the guest Windows OS, a Microsoft iSCSI initiator connects back to the SAN over the virtual NICs to add an extra disk, on which the Exchange database resides. Presumably this was done because Microsoft don't support Exchange in an NFS environment, and SQL isn't great on it either.

    I know NFS should perform about the same as block-level iSCSI, but I'm not sure whether I should change it all, perhaps mainly because the SAN is only giving 12 spindles to 8 servers, which I suspect is what's causing my performance issues. The back-of-the-envelope numbers below are what worry me.
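    A rough estimate of what those 12 spindles can actually deliver (every per-disk and overhead figure here is an assumption, not a measured value from the site):

        # Back-of-the-envelope IOPS estimate for the aggregate serving the VMs.
        # Every figure below is an assumption, not a measured value.
        SPINDLES = 12            # disks in the VM-side aggregate
        PARITY_AND_SPARE = 3     # assume RAID-DP (2 parity disks) plus 1 hot spare
        IOPS_PER_SAS_DISK = 175  # rough figure for a 10k SAS spindle
        GUEST_SERVERS = 8

        data_spindles = SPINDLES - PARITY_AND_SPARE
        aggregate_iops = data_spindles * IOPS_PER_SAS_DISK
        print(f"Usable data spindles  : {data_spindles}")
        print(f"Estimated read IOPS   : {aggregate_iops}")
        print(f"Per guest (even split): {aggregate_iops / GUEST_SERVERS:.0f} IOPS")

    Even if those guesses are generous, that doesn't leave much per guest for Exchange and SQL sharing the same spindles, before any write penalty is taken into account.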