New ESX Infrastructure Setup


  • New ESX Infrastructure Setup

    Currently, the infrastructure consists of a DC and an ADC for a single Windows 2000 domain, running Active Directory-integrated DNS and holding all the FSMO roles; both servers are Global Catalogs.

    Microsoft Exchange Server 5.5 handles both internal and external mail. It will be upgraded to Exchange 2003 and then to Exchange 2007 in cluster mode. This project will be very expensive because of the additional hardware and the HP MSA 1000 storage that have to be bought.

    My ideas are:
    1. Buy a two-slot HP blade setup consisting of two 64-bit blade servers, each with 2 x 72 GB hard drives run as RAID 1+0 ("mirror").
    2. These blades will run ESX; one server will host all the VMs and the other will be for VMotion.
    3. Connect the MSA 1000 to the HP blade servers and store all the VMs there.
    4. All servers will have redundant power, redundant networking, and redundant fibre cables to the MSA through redundant SAN switches.
    5. The DCs will be migrated and put on the ESX hosts.
    6. Exchange will be migrated to Exchange 2003, then to Exchange 2007, and put on the ESX hosts. "Since I'll be using VMotion and DRS, do I have to set up clustering for the Exchange server?"
    7. The other servers will be migrated and put on the ESX hosts too.

    On the other hand, there is the DR site:

    Another HP blade will be in the DR site; if the primary site goes down, ESX will fail over to it. The VMs will be replicated by ESXReplicator.

    The setup:
    1. 2 ESX servers running on BL460c blades with Intel Xeon 1.86 GHz processors
    2. 16 GB RAM each
    3. Each host will run on 2 x 72 GB hard drives.
    4. Each server will have 6-8 NICs:
    4.1 2 teamed for the Service Console
    4.2 2 teamed for the VMotion network
    4.3 2 teamed for the production network
    4.4 2 teamed for the DMZ network
    For each network there will be a virtual switch for connecting the VMs and a separate physical switch to connect to the network, or I will use two Cisco 2950 switches with a VLAN for each network and matching VLANs inside ESX (see the NIC-layout sketch after this list). "Your recommendation, please!"

    5. Each server will have 2 HBAs.
    6. 2 SAN FC switches; the servers will be connected to the switches and then to the HP MSA 1000.
    7. 1 HP MSA 1000 SAN for the VM LUNs.
    8. An HP DL320 G4 will run the VirtualCenter client and at the same time act as the backup server, connecting directly to the VM LUN and doing the backups using HP Data Protector 6.0.
    9. How many GB of LUN should be given to the VMs? I'm planning to run the VMs with only 10 GB for the C:\ system drive; the rest will be separate LUNs mapped directly to the VMs, which I'll call data LUNs (a rough sizing sketch follows below).
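
    As a quick illustration of the NIC layout from item 4, here is a minimal Python sketch of the 8-NIC plan written out as data; the vmnic and vSwitch names are placeholders I'm assuming, not taken from the actual hosts:

    # Rough sketch of the proposed 8-NIC layout; names are illustrative only.
    nic_layout = {
        "vSwitch0": {"purpose": "Service Console", "uplinks": ["vmnic0", "vmnic1"]},
        "vSwitch1": {"purpose": "VMotion",         "uplinks": ["vmnic2", "vmnic3"]},
        "vSwitch2": {"purpose": "Production VMs",  "uplinks": ["vmnic4", "vmnic5"]},
        "vSwitch3": {"purpose": "DMZ VMs",         "uplinks": ["vmnic6", "vmnic7"]},
    }

    for vswitch, cfg in nic_layout.items():
        print(f"{vswitch}: {cfg['purpose']} teamed on {' + '.join(cfg['uplinks'])}")

    Whether each teamed pair goes to separate physical switches or to VLANs on the two Cisco 2950s, the vSwitch-to-team mapping above stays the same.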
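
    For the LUN question in item 9, here is a rough back-of-the-envelope calculation; the VM count, per-VM overhead, and free-space headroom below are assumptions for illustration only:

    # Sizing sketch for the shared system-drive VMFS LUN (all figures assumed).
    vm_count         = 15     # assumed number of VMs across the hosts
    system_vmdk_gb   = 10     # planned C:\ system drive per VM
    swap_overhead_gb = 2      # rough allowance per VM for swap/logs/snapshots
    headroom         = 0.20   # keep ~20% of the VMFS volume free

    raw_need_gb = vm_count * (system_vmdk_gb + swap_overhead_gb)
    lun_size_gb = raw_need_gb / (1 - headroom)
    print(f"System-drive VMFS LUN: ~{lun_size_gb:.0f} GB for {vm_count} VMs")

    The data LUNs mapped directly to the VMs would then be sized per application on top of this.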

    Any additional Recommendations?

    BR,
    Habibalby
    ================================
    HND: Higher National Diploma in
    Computer Science(IT)


    Passed:
    MCSA+Security 2003, VCP3, VCP4
    Done:VMware DSA
    ================================

  • #2
    Re: New ESX Infrastructure Setup

    It's a little overkill with the NICs.

    I have over 500 users connecting to a web server VM, and the network usage never goes over 20%. You may want to reconsider this decision. It will also be hell when scripting your VMotion, when defining interfaces for failover.

    1. Buy 2 "Slot" HP Blade Server that consist of two 64-bit Hardware with 2x72 GB Hard drive to be run as a Raid 1+0 "Mirror".
    i also dont mirror anything on the blades with ESX installed on them. Its too easy for me to restore an ESX blade, and want performance over redundancy. essentially, since the VMs are stored on the SAN and not on the internal storage of the blade, there is really no need, imo. you may feel otherwise...

    Since I'll be using VMotion and DRS, do I have to set up clustering for the Exchange server?
    There are several caveats in the choice between clustering and DRS. You will want to investigate the abilities of your backup solution before implementing a cluster; some products do not support backups of a VM cluster, so this can be tricky. You will need to decide based on your needs. It also depends on how fluent you are with VMware and MS clustering...

    2. These blades will run ESX; one server will host all the VMs and the other will be for VMotion.
    On the same site? I was kind of up in the air on this too. Ultimately, I decided that if the servers went down on this side, there was no point in having VMotion move them to another server, since it would be down too. I decided to use all my blades as ESX hosts and let DRS and HA determine when and where to move VMs in the event hardware is failing... but I have 24 blades, and you only have 2. This will be critical in determining how to allocate the blades. I think you're probably correct, given that you only have 2 blades. Maybe someone else has some input on that...

    Otherwise, it looks like a nice setup.
    It's easier to beg forgiveness than ask permission.
    Give karma where karma is due...



    • #3
      Re: New ESX Infrastructure Setup

      It's a little overkill with the NICs.

      I have over 500 users connecting to a web server VM, and the network usage never goes over 20%. You may want to reconsider this decision. It will also be hell when scripting your VMotion, when defining interfaces for failover.
      I didn't understand what you mean by "it's a little overkill with the NICs". In my case there will be 6-8 NICs. If there are 6 cards, two will be teamed for the VMs on the production network, which comes from a VLAN switch, and two NICs will be teamed for the DMZ, connecting to the DMZ VMs such as firewall or NAT servers. The rest will be 1 for the Service Console and 1 for VMotion. But if there are 8 NICs in each server, each network will get a team for HA and redundancy.

      I also don't mirror anything on the blades with ESX installed on them. It's too easy for me to restore an ESX blade, and I want performance over redundancy. Essentially, since the VMs are stored on the SAN and not on the internal storage of the blade, there is really no need, IMO. You may feel otherwise...
      That makes sense for an ESX implementation; sometimes we need performance over redundancy. I agree with you on this point.

      There are several caveats in the choice between clustering and DRS. You will want to investigate the abilities of your backup solution before implementing a cluster; some products do not support backups of a VM cluster, so this can be tricky. You will need to decide based on your needs. It also depends on how fluent you are with VMware and MS clustering...
      Hmm, lots of considerations needed. Our current backup uses HP Data Protector 6.0, but for ESX there will be another server acting as a backup proxy, connected with FC cables and with the VM LUN presented to it. Initially these VMs will be backed up to a tape drive connected to the proxy server. Once everything is settled down, a tape library will be attached to the proxy server; one copy of the VM LUN will be taken to the MSA 1000 backup storage, and then from the MSA 1000 LUN onto tape.


      Regarding the clustering, I don't think clustering 2 VMs on different ESX hosts will do much, because the ESX servers themselves are running in cluster mode and all the resources are controlled by HA and DRS.

      This needs to be considered once again; do you agree with me?

      On the same site? I was kind of up in the air on this too. Ultimately, I decided that if the servers went down on this side, there was no point in having VMotion move them to another server, since it would be down too. I decided to use all my blades as ESX hosts and let DRS and HA determine when and where to move VMs in the event hardware is failing... but I have 24 blades, and you only have 2. This will be critical in determining how to allocate the blades. I think you're probably correct, given that you only have 2 blades. Maybe someone else has some input on that...
      In this case, do you recommend configuring both servers in HA and DRS mode rather than using VMotion? Because in case of a hardware failure, how will the VMs move from one server to another?

      But what about the DR site? If I have another blade server running there in cluster mode and want to move working machines from the primary site to the DR site, are there any special recommendations that need to be taken into account, other than the networking part?


      I really thank you for your great points and help; I really appreciate it.

      BR,
      Habibalby
      ================================
      HND: Higher National Diploma in
      Computer Science(IT)


      Passed:
      MCSA+Security 2003, VCP3, VCP4
      Done:VMware DSA
      ================================



      • #4
        Re: New ESX Infrastructure Setup

        I didn't understand what you mean by "it's a little overkill with the NICs".
        I just meant that it is a lot of NICs to manage. When you script out the VMotion, you will have to specify which interface is doing what, then set that up for failover. 2 interfaces are sufficient for most needs, 4 at the most... but if you need 8, then by all means.

        I'll try to give you a run-through of my setup...
        On site:
        1 blade chassis with 12 blades; blade management is VLAN 242, servers are VLAN 5.
        So VM1 on blade1 is 192.168.5.100, and blade1 itself is 192.168.242.101.
        At the DR site:
        1 blade chassis with 12 blades; blade management is VLAN 232, servers are VLAN 6.
        So VM1 on blade1 is 192.168.6.100, and blade1 itself is 192.168.232.101.

        5 blades are clustered with HA and DRS. There are about 10 virtuals on this cluster.

        If something fails in the HA/DRS cluster on site, ESX will move the virtuals to another blade in the DRS cluster and put the non-operational blade into maintenance mode...

        If the blade chassis fails, then VMotion (which had to be scripted, because we couldn't keep the subnet intact between the DR site and here) will kick in and change the IPs and host names for everything, so that requests destined for 192.168.5.100 during a failover are sent to 192.168.6.100.
        Each VM on site has a failover virtual ready in the event VMotion is initiated. This ensures that people trying to reach our public web page can get to it in the event of an internal failure.

        The DR site is a mirror of here, except the IPs are one off... a blade in slot 1 here is 192.168.242.101, and at the DR site it's 192.168.232.101; a virtual of a server here is 192.168.5.100, and its backup is 192.168.6.100... so forth, and so on. This lets me keep track of what's happening on paper (a small sketch of the mapping follows).
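
        To make the one-off scheme concrete, here is a tiny sketch of the mapping; the helper function is my own illustration (an assumption), while the example addresses are the ones above:

        # Map a primary-site IP to its DR counterpart: server VLAN 5 -> 6,
        # blade management VLAN 242 -> 232; the host part stays the same.
        def dr_address(primary_ip):
            vlan_map = {"5": "6", "242": "232"}
            a, b, third, host = primary_ip.split(".")
            return f"{a}.{b}.{vlan_map[third]}.{host}"

        print(dr_address("192.168.5.100"))    # VM1 here         -> 192.168.6.100
        print(dr_address("192.168.242.101"))  # blade1 mgmt here -> 192.168.232.101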

        Regarding the clustering, I don't think clustering 2 VMs on different ESX hosts will do much, because the ESX servers themselves are running in cluster mode and all the resources are controlled by HA and DRS.

        This needs to be considered once again; do you agree with me?
        Let's be specific about what type of clustering we're talking about, because I'm a little confused. If you mean, in the above quote, that there is no need to create an MS cluster from the virtuals because there is an ESX HA/DRS cluster in place, then I would say yes. I'm also not sure you can MS-cluster things when they are in an HA/DRS cluster... I mean, it may let you, but there were some complications that basically prevented it from working correctly on both the ESX and MS side. The resolution was to use one or the other.

        In your scenario, if you wanted to set up VMotion and HA/DRS, you would need exactly the same thing set up on the other side... you would have 2 blades here and 2 blades there. The two blades here would be in an HA/DRS cluster. VMotion would then be set up to move things from the two blades here to the blades over there...
        But you only have 2 blades, correct? Then you can't really set up an HA/DRS cluster with one blade...
        So you would have one blade running here with all the VMs, and the DR site would house a blade for failover via VMotion.

        Sorry for the confusion. This is kind of hard to visualize in text...

        Am I helping at all, or confusing you more?
        It's easier to beg forgiveness than ask permission.
        Give karma where karma is due...



        • #5
          Re: New ESX Infrastructure Setup

          Hi,
          Well, initially it was confusing, but things are getting clearer. As I said earlier, 8 NICs for 4 networks is just for redundancy. If I put in 4 NICs, what kind of redundancy will I have? Basically, the networking part of the virtualization infrastructure will include DMZ and production. If I have 4 NICs, two will be for the DMZ VLAN and 2 for the production VLAN! What about VMotion and the Service Console on each server? Or will HA/DRS take over from VMotion? I think the networking part has to be taken care of (a possible 4-NIC layout is sketched below). Could you send me your network configuration for the primary site and the DR site? If you have it with the actual diagram, that would be even better.
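
          To make my question concrete, here is one possible 4-NIC layout I am assuming (the port-group pairing and VLAN numbers are placeholders, not a final design): each teamed pair carries two port groups on separate VLANs, so every network keeps NIC redundancy but none gets a dedicated pair:

          # Hypothetical 4-NIC layout: two teamed pairs, VLAN-tagged port groups.
          four_nic_layout = {
              "vSwitch0": {
                  "uplinks": ["vmnic0", "vmnic1"],
                  "port_groups": {"Production VMs": "VLAN 5", "Service Console": "VLAN 10"},
              },
              "vSwitch1": {
                  "uplinks": ["vmnic2", "vmnic3"],
                  "port_groups": {"DMZ VMs": "VLAN 100", "VMotion": "VLAN 20"},
              },
          }

          for vswitch, cfg in four_nic_layout.items():
              groups = ", ".join(f"{pg} ({vlan})" for pg, vlan in cfg["port_groups"].items())
              print(f"{vswitch} on {' + '.join(cfg['uplinks'])}: {groups}")

          Would something like this be sensible, or would you keep the Service Console and VMotion on their own uplinks even with only 4 NICs?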

          In your scenario, if you wanted to set up VMotion and HA/DRS, you would need exactly the same thing set up on the other side... you would have 2 blades here and 2 blades there. The two blades here would be in an HA/DRS cluster. VMotion would then be set up to move things from the two blades here to the blades over there...
          But you only have 2 blades, correct? Then you can't really set up an HA/DRS cluster with one blade...
          So you would have one blade running here with all the VMs, and the DR site would house a blade for failover via VMotion.
          No, there will be two blades initially; once everything is settled down properly, more blades will be added and they will become part of the HA/DRS resource pool.

          Also, the networking configuration will be identical to the primary site, especially the vSwitches and VLANs.

          Any recommendation?
          ================================
          HND: Higher National Diploma in
          Computer Science(IT)


          Passed:
          MCSA+Security 2003, VCP3, VCP4
          Done:VMware DSA
          ================================



          • #6
            Re: New ESX Infrastructure Setup

            Hi,

            I have created a proposal for this infrastructure; can you please have a look and advise? This paper is incomplete.

            Your comments, advice, or suggestions will be much appreciated.
            http://communities.vmware.com/servle...e%20design.doc
            ================================
            HND: Higher National Diploma in
            Computer Science(IT)


            Passed:
            MCSA+Security 2003, VCP3, VCP4
            Done:VMware DSA
            ================================
