vSphere Physical to iSCSI SAN Migration Ideas

    Hello everyone – thanks as always for your advice. I would like to brainstorm a bit here, so forgive the long post.

    I am considering a project to migrate many of my organization’s physical servers to a virtualized, iSCSI SAN environment, and I would like to hear your opinions on and experiences with this technology. As for the VM host, I think I am okay in that area – basically, as many cores in one CPU as possible for licensing, and as much RAM as possible.
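
    To put a number on the licensing point, here is a toy sketch (the per-socket price is hypothetical – vSphere in the 4.x era was licensed per CPU socket, so packing cores into fewer sockets means fewer licenses):

        # Toy illustration of per-socket licensing. vSphere 4.x was licensed
        # per CPU socket, so the same core count in fewer sockets means
        # fewer licenses. The price below is hypothetical, for comparison only.
        LICENSE_PER_SOCKET_USD = 3000  # hypothetical list price

        def license_cost(sockets: int) -> int:
            return sockets * LICENSE_PER_SOCKET_USD

        # Eight cores as one 8-core socket vs. two 4-core sockets:
        print(license_cost(1))  # 3000 -- one license
        print(license_cost(2))  # 6000 -- same cores, double the license cost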

    I understand that there is no “cookie-cutter” method to recommend specific SAN hardware for x number of VMs, so let’s not get too buried in details unless absolutely necessary. Also, I am not a storage engineer (but trying to learn), so my apologies in advance if some parts of this post seem ridiculous.

    10 Gb iSCSI seems to be just as expensive as Fibre Channel, so I would like to stick with 1 Gb iSCSI. I would also like to use vMotion and DRS, so dual switches and dual controllers will be necessary. For now, the most VMs I would run is 20: general-purpose server VMs with 1 vCPU, 1 GB RAM, and 20 GB of disk each. As with any business, cost will be an important factor.
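
    Rough numbers for that workload (a back-of-the-envelope sketch; the ~118 MB/s usable figure for a 1 Gb link and the 25% capacity headroom are my own assumptions, not measurements):

        # Back-of-the-envelope sizing for 20 general-purpose VMs
        # (1 vCPU, 1 GB RAM, 20 GB disk each). Assumptions, not measurements:
        # ~118 MB/s usable per 1 Gb iSCSI link, 25% capacity headroom.
        NUM_VMS = 20
        DISK_PER_VM_GB = 20
        RAM_PER_VM_GB = 1
        USABLE_LINK_MB_S = 118   # ~1 Gb/s minus Ethernet/TCP/iSCSI overhead
        HEADROOM = 1.25          # snapshots, .vswp files, growth

        raw_gb = NUM_VMS * DISK_PER_VM_GB             # 400 GB
        planned_gb = raw_gb * HEADROOM                # ~500 GB
        total_ram_gb = NUM_VMS * RAM_PER_VM_GB        # 20 GB
        worst_case_mb_s = USABLE_LINK_MB_S / NUM_VMS  # all VMs on one path

        print(f"Capacity: {raw_gb} GB raw, ~{planned_gb:.0f} GB with headroom")
        print(f"Host RAM for VMs alone: {total_ram_gb} GB")
        print(f"Worst case per VM on one 1 Gb path: ~{worst_case_mb_s:.1f} MB/s")

    So even one active 1 Gb path would carry roughly 6 MB/s per VM with all 20 busy at once, which is why the dual-switch, multipath design matters.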

    According to the VMware HCL for 4i, the HP LeftHand P4000 series and HP MSA2300i series are listed as fully supported – and quite expensive.

    Have you used, or would you use, an HP ProCurve switch for this type of deployment? If not the ProCurve, what would you use – a Cisco model such as the Catalyst 2960G-24TC-L? The switches will be on their own VLAN.

    Generally speaking, in comparing SATA and SAS disks for the HP SANs listed above, if I used RAID 1+0 for the LUNs, would there be a significant difference in I/O and IOPS performance? I will not be running any web servers, large databases, or Exchange servers. I recall a podcast where a guy used RAID 5+0 (12 terabytes total) with 1 TB SATA disks, ran over 40 VMs, and was getting about 130 MB/s of I/O for each VM. If there is not going to be a significant decrease in I/O, I would like to fill the SAN with as many SATA disks as possible; however, if that is definitely not the case, I will have to reconsider. I know that I cannot have my cake and eat it too, so a specific and technical analysis will be necessary for my current hardware and server apps.
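
    For a rough sense of the SATA-vs-SAS gap, here is a sketch using the standard write-penalty arithmetic (the per-disk IOPS figures and the 70/30 read/write mix are common planning estimates, not measurements of the P4000 or MSA2300i):

        # Rough effective-IOPS comparison of 7.2k SATA vs 15k SAS under RAID 1+0.
        # Per-disk IOPS figures are common planning estimates, not vendor data.
        def effective_iops(disks, iops_per_disk, write_penalty, read_fraction):
            """Host-visible IOPS for an array at a given read/write mix.

            RAID 1+0 write penalty = 2 (each write lands on two disks);
            RAID 5 would be 4 (read data + read parity + two writes).
            """
            raw = disks * iops_per_disk
            return raw / (read_fraction + (1 - read_fraction) * write_penalty)

        DISKS = 12
        READ_MIX = 0.7  # assume a 70/30 read/write mix for general-purpose VMs

        sata = effective_iops(DISKS, 80, write_penalty=2, read_fraction=READ_MIX)
        sas = effective_iops(DISKS, 180, write_penalty=2, read_fraction=READ_MIX)

        print(f"12 x 7.2k SATA, RAID 1+0: ~{sata:.0f} IOPS (~{sata/20:.0f} per VM)")
        print(f"12 x 15k SAS,  RAID 1+0: ~{sas:.0f} IOPS (~{sas/20:.0f} per VM)")

    On those assumed numbers, 15k SAS gives a bit over twice the IOPS of 7.2k SATA for the same spindle count; whether the SATA figure is enough depends entirely on the apps.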

    We have VM hosts (DL380 G4, 6 GB of RAM, 8 CPU cores) running VMware Server for Windows with 6 or 7 VMs each just fine, so technically I could just get a host that will hold enough SAS disks, is full of CPU cores, and has lots of memory, and run 4i without vMotion and DRS, which would be a lot cheaper and still an improvement. However, I really think that decision (the easy way out) would not be a good long-term solution. We currently have lots of DL320s (G3) with a sprinkling of DL360s (G4) that do not have much memory or hard disk space, so it seems silly to upgrade them.

    We use mostly HP, but I would be interested in other vendors' products; however, I am in Asia, so prices and product availability are quite different from the US.

    As a disclaimer, I have been searching night and day through HP's and VMware's websites as well as other sites like Experts Exchange. I see mostly hype, so I am having a difficult time making a decision without actually testing different products myself. Unfortunately, we do not have the luxury of shelling out 40K for a system to "test" with.

    If you have time, I would like to hear your comments and/or suggestions.