Thursday, February 11, 2010

VMware PEX: Lab Manager Design and Implementation

VMware PEX: Lab Manager Design and Implementation (TECHMGT0922).  This is written as I sit in the session so it could be messy.

Architecture
  • Lab Manager Server (Windows based)
  • vCenter Server
  • One or more ESX 3.5/4.0 or ESXi 4 hosts
  • TCP ports 902/903 for virtual machine console access
  • Browser client to the Lab Manager (LM) server uses TCP 443
  • TCP 5212 from the LM server to the ESX servers, and TCP 443 from LM to vCenter (see the connectivity-check sketch after this list)
  • Read the manual for all the requirements (the installer checks for them and spits out errors)
  • Create a service account on the LM server - pages 14-15 of the user guide detail the permissions needed
  • Pre-create resource pools if you want to use them so that Lab Manager will pick them up at install time
  • Pre-create all virtual switches, distributed switches, etc.
  • Don't join the LM server to the domain, so nothing from AD gets pushed to it as a member server
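Before the install, it can be worth probing the required ports from the machines that will actually originate each connection (the browser client, the LM server, etc.). Below is a minimal sketch using only Python's standard library; the hostnames are placeholders to substitute with your own LM, vCenter, and ESX servers.

    import socket

    # Placeholder hostnames - substitute your own Lab Manager, vCenter, and ESX hosts.
    # Run each check from the machine that originates that connection.
    CHECKS = [
        ("labmanager.example.local", 443),   # browser client -> LM server
        ("vcenter.example.local",    443),   # LM server -> vCenter
        ("esx01.example.local",      902),   # virtual machine console access
        ("esx01.example.local",      903),   # virtual machine console access
        ("esx01.example.local",      5212),  # LM server -> ESX servers
    ]

    def port_open(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in CHECKS:
        print(f"{host}:{port} -> {'open' if port_open(host, port) else 'blocked/unreachable'}")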
Storage Design
  • LM uses Linked Clones to save disk space
  • Make sure the I/O can support your environment (just because you have the space doesn't mean you have enough I/O!)
  • LUN locking and a limit of 8 hosts per datastore if using VMFS (NFS doesn't have this limit)
  • Understand the concept of disk chains and how they work (this isn't well documented - a conceptual sketch follows this list)
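To make the disk chain idea concrete, here is a purely conceptual sketch (this is not the actual VMDK/linked-clone implementation): each clone stores only a delta disk that points back at its parent, which is where the space savings come from, but a read may have to walk the whole chain until it finds the block, which is why long chains cost I/O.

    # Conceptual model of a linked-clone disk chain - not the real VMDK format.
    class Disk:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent       # None for a base/full disk
            self.blocks = {}           # block number -> data written at this level

        def write(self, block, data):
            self.blocks[block] = data  # writes always land in the top-most delta

        def read(self, block):
            node = self                # walk down the chain until a level has the block
            while node is not None:
                if block in node.blocks:
                    return node.blocks[block]
                node = node.parent
            return None

        def chain_length(self):
            return 1 + (self.parent.chain_length() if self.parent else 0)

    base = Disk("golden-template")
    base.write(0, "OS image")
    clone_a = Disk("config-A-delta", parent=base)          # linked clone: stores only deltas
    clone_a2 = Disk("config-A-snapshot", parent=clone_a)   # cloning a clone deepens the chain

    clone_a2.write(1, "app data")
    print(clone_a2.read(0))           # "OS image" - served all the way from the base disk
    print(clone_a2.chain_length())    # 3 - every extra level adds read overhead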
Network Design
  • When setting up the LM server, create the default physical network
  • Design considerations for networking: physical network vs. virtual, fenced vs. non-fenced - will IPs come from an IP pool, DHCP, or static assignment?
  • Physical network is nothing more than a connection out to a physical network
  • Virtual network - a network that may or may not be connected to a physical network (could be on a different IP range, VLAN, etc.). A virtual network can be connected to a physical network at deployment time if needed.
  • Fencing - the ability to create a fence around a configuration (group of VMs) so they are isolated from the rest of the network. If the fenced configuration needs to get out, it does so through a NAT router (a small Linux VM). In this case a machine has an internal IP inside the fence and an external IP outside the fence provided by the NAT router. This is great for deploying multiple copies of machines and applications that all have the same IPs - there will not be IP conflicts on the network.
  • Host spanning - fencing isolation was limited to one host in version 3; the ability for a fenced configuration to span hosts (and be vMotioned) is called host spanning. Host spanning needs the Distributed Switch (and an Enterprise Plus license to get the Distributed Switch).
  • IP pools - these consume a lot of IP addresses; if using fencing, remember that the NAT router needs IPs as well (see the sizing sketch after this list)
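As a rough sizing sketch (the per-VM and per-router address counts below are assumptions to adjust for your own environment, not figures from the Lab Manager documentation - e.g. assuming one NAT'd external address per fenced VM plus one for each configuration's virtual router):

    # Back-of-the-envelope IP pool sizing for fenced configurations.
    # The per-VM and per-router counts are assumptions, not documented LM values.
    def pool_size(configs, vms_per_config,
                  external_ips_per_vm=1,     # NAT'd external address per fenced VM (assumption)
                  router_ips_per_config=1):  # external address for each virtual router (assumption)
        return configs * (vms_per_config * external_ips_per_vm + router_ips_per_config)

    # Example: 10 fenced copies of a 5-VM configuration deployed at the same time
    print(pool_size(configs=10, vms_per_config=5))   # 60 addresses needed from the pool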
Fencing Considerations
  • Fencing can't use DHCP (DHCP can't cross the fence to provide addresses)
  • A static IP pool must be used
  • At least one virtual machine needs to connect to the physical network (otherwise it's a virtual network with no outside connection)
  • Fence policy is traffic In and Out, All Blocked, or Out Only.  There is no In Only policy. 
  • Be careful with outside communications if using fencing (many machines with the same name but different IPs all hitting an outside source!)
    • Domain Controller - member servers can be in a configuration with the same name as others as long as the machine password with AD doesn't expire (30 days by default). Otherwise, put a clone of the DC in the configuration and run it private to the configuration group.
    • Domain Controller clone - be careful: a cloned DC will come up with a 169.254.x.x (APIPA) address because it detects another machine with the same IP address already out there. The best way to do this is to clone the DC and completely isolate it from the production network.
    • SQL Server - if it's outside the fence, what happens when multiple configurations hit it? Maybe run different instances of the same database on the same server - adds a bit of manual intervention to the process.
    • You can create a workstation that is included in the configuration for the user to use "in the fence"

Good Article: VMware KB1000023 - How to Backup the VLM Database
