1. Configure VMware ESXi
– If dedicated storage NIC(s)/card(s) are available, configure an EtherChannel or LACP bond on the attached switch(es). Note that ESXi does not support PAgP, and LACP requires a vSphere Distributed Switch; a static EtherChannel works with a standard vSwitch. Run the team active-active rather than active/standby, because a standby link may fail to activate automatically or may have silently detached without being noticed.
– Configure the network switches to segregate these VLANs: VM (dedicated to guest virtual machines), Management (can be on the same subnet as the VM network), iSCSI (or an SMB/NFS storage subnet, with an MTU of 9000 recommended for better performance), vMotion (for guest migration), and heartbeat (used by vSphere HA, VMware's active-passive high-availability architecture). A PowerCLI sketch of the host-side networking follows this list.
– Install 10GbE/25GbE PCIe SFP+/SFP28 NIC(s) (or 32G Fibre Channel HBAs if using FC storage) into the server. Best practice is to bind multiple redundant NICs into a vSwitch.
– Update system BIOS
– Download and install ESXi 8.0 onto the hardware and activate the license
– Install updates to bring the kernel's security to a compliant level
– Enable SSH, and assign a hostname and static IPv4 address on the management interface (see the second sketch below)
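The host-side networking above can be scripted with PowerCLI. This is a minimal sketch, assuming the VMware.PowerCLI module and a direct connection to the host; the host name, NIC names, VLAN IDs, and IP addresses are placeholders to substitute for your environment:

    # Connect straight to the host (prompts for the password)
    Connect-VIServer -Server esxi01.example.com -User root
    $vmhost = Get-VMHost esxi01.example.com

    # Redundant uplinks bound into one standard vSwitch, jumbo frames enabled
    $vss = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic2,vmnic3 -Mtu 9000

    # Active-active teaming; IP-hash load balancing is what a static
    # EtherChannel on the physical switch expects (LACP itself needs a vDS)
    Get-NicTeamingPolicy -VirtualSwitch $vss |
        Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP

    # Port group for guest VM traffic on its own VLAN
    New-VirtualPortGroup -VirtualSwitch $vss -Name 'VM' -VLanId 10

    # VMkernel adapters: jumbo frames for iSCSI, vMotion tagged for migration
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup 'iSCSI' `
        -IP 10.0.20.11 -SubnetMask 255.255.255.0 -Mtu 9000
    New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vss -PortGroup 'vMotion' `
        -IP 10.0.30.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true

    # Tag the VMkernel port groups with their VLAN IDs
    Get-VirtualPortGroup -VirtualSwitch $vss -Name 'iSCSI'   | Set-VirtualPortGroup -VLanId 20
    Get-VirtualPortGroup -VirtualSwitch $vss -Name 'vMotion' | Set-VirtualPortGroup -VLanId 30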
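Enabling SSH and pinning the management identity can likewise be scripted. A sketch with placeholder names and addresses; note that changing vmk0's IP will drop your current session if you are connected through it:

    $vmhost = Get-VMHost esxi01.example.com

    # Start SSH and keep it enabled across reboots
    $ssh = Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq 'TSM-SSH' }
    Start-VMHostService -HostService $ssh
    Set-VMHostService -HostService $ssh -Policy 'On'

    # Hostname and DNS domain
    Get-VMHostNetwork -VMHost $vmhost |
        Set-VMHostNetwork -HostName 'esxi01' -DomainName 'example.com'

    # Static IPv4 on the default management VMkernel port (vmk0)
    Get-VMHostNetworkAdapter -VMHost $vmhost -Name vmk0 |
        Set-VMHostNetworkAdapter -IP 10.0.0.11 -SubnetMask 255.255.255.0 -Confirm:$false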
2. Integrate ESXi into Active Directory
– Create a new group in Active Directory named 'ESXi Admins' (or something semantically similar), and add users into that group
– Join VMware host(s) to Active Directory (see the sketch below)
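A sketch of the AD join in PowerCLI; the domain, join account, and group names are placeholders. By default ESXi grants admin rights to an AD group literally named 'ESX Admins', which the advanced setting below overrides:

    $vmhost = Get-VMHost esxi01.example.com

    # Join the host to the domain
    Get-VMHostAuthentication -VMHost $vmhost |
        Set-VMHostAuthentication -JoinDomain -Domain 'corp.example.com' `
            -User 'CORP\joinacct' -Password 'S3cret!' -Confirm:$false

    # Point ESXi at the AD group that should receive admin rights
    Get-AdvancedSetting -Entity $vmhost -Name 'Config.HostAgent.plugins.hostsvc.esxAdminsGroup' |
        Set-AdvancedSetting -Value 'ESXi Admins' -Confirm:$false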
3. Install vCenter & vSphere Client
– Download and install the vCenter ISO from the vendor and add licenses
– Patch vCenter
– Add host(s) into vCenter datacenter(s)
– Create 2 new groups in Active Directory, such as 'vCenter Admins' and 'vCenter Read-Only Users', and populate them appropriately
– Join vCenter to Active Directory
– Create 2 new groups in vCenter that include the 2 Active Directory groups created previously, and grant them the matching roles (see the sketch after this list)
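A sketch of the inventory and permission steps in PowerCLI, assuming vCenter is already joined to the domain; the server, datacenter, account, and group names are placeholders:

    Connect-VIServer -Server vcsa.example.com

    # Create a datacenter at the inventory root and add the host to it
    $root = Get-Folder -NoRecursion
    $dc   = New-Datacenter -Location $root -Name 'DC01'
    Add-VMHost -Name esxi01.example.com -Location $dc -User root -Password 'S3cret!' -Force

    # Map the two AD groups onto built-in vCenter roles, propagated downward
    New-VIPermission -Entity $root -Principal 'CORP\vCenter Admins' `
        -Role (Get-VIRole -Name 'Admin') -Propagate:$true
    New-VIPermission -Entity $root -Principal 'CORP\vCenter Read-Only Users' `
        -Role (Get-VIRole -Name 'ReadOnly') -Propagate:$true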
4. VM Storage
– Internet Small Computer System Interface (iSCSI) is the common method of presenting Logical Unit Numbers (LUNs) and 'targets' to hosts and guest VMs, thanks to its plug-and-play nature. The target side can run on the storage appliance (HPE Nimble, NetApp, Windows Server, etc.). Multiple LUNs can be combined into a single target for clients to attach.
– Best practice is to dedicate at least 2 physical NICs to the iSCSI network (vSwitch). A useful pattern is to bind each VMkernel NIC 1:1 to a physical NIC, so that several dedicated vNICs can be added to the iSCSI vSwitch. A dual-fabric design is the standard.
– It is recommended to present large LUNs (16 TB+) as targets connected from the ESXi hosts and managed by the vCenter storage cluster; this simplifies management compared to running initiators inside individual guest VMs (see the first sketch after this list).
– Affinity and anti-affinity rules should also be set up for security and I/O performance. A single LUN should be localized to one controller rather than spread across several, since multi-controller ownership adds block-processing overhead and lowers I/O throughput (a DRS rule example is the second sketch after this list).
– Certain deployments also include NFS, SMB, or vSAN; those are straightforward enough that they do not merit detailed scripts in this overview.
– Benchmark using vRealize Operations Manager or vSphere's storage performance monitoring to determine when to rebalance LUNs onto underutilized controllers
– Label LUNs based on I/O 'class' or speed, redundancy, and backup schedule. This naming convention helps in deciding which datastore to use for database, general-compute, or low-tier machines.
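A minimal PowerCLI sketch of the host-side iSCSI attach described above; the discovery address and the datastore name (following the tiered naming convention) are placeholders:

    $vmhost = Get-VMHost esxi01.example.com

    # Enable the software iSCSI initiator
    Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled:$true

    # Point dynamic (send-targets) discovery at the array and rescan
    $hba = Get-VMHostHba -VMHost $vmhost -Type IScsi
    New-IScsiHbaTarget -IScsiHba $hba -Address 10.0.20.10 -Type Send
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null

    # Format the newly visible LUN as a VMFS datastore, named by tier
    # (in practice, filter on the LUN's CanonicalName instead of taking the first disk)
    $lun = Get-ScsiLun -VMHost $vmhost -LunType disk | Select-Object -First 1
    New-Datastore -VMHost $vmhost -Name 'Tier1-DB-DS01' -Path $lun.CanonicalName -Vmfs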
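LUN-to-controller locality is configured on the array itself, but the vSphere side of affinity/anti-affinity is a DRS rule. A sketch that keeps two database VMs on separate hosts; the cluster and VM names are placeholders:

    # Anti-affinity: never place db01 and db02 on the same host
    New-DrsRule -Cluster (Get-Cluster -Name 'Prod') -Name 'separate-db-nodes' `
        -KeepTogether:$false -VM (Get-VM -Name 'db01','db02')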
5. vSphere Site Recovery Manager
– Group guest VMs into folders
– Configure replication for the guest VMs
– Set up Protection Groups with replication type "vSphere Replication (VR)"
– Create a Recovery Plan and associate the Protection Group with it. Then configure each guest VM with its destination IPs, storage, and priority level (a brief sketch follows).
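Protection groups and recovery plans are usually built end-to-end in the SRM UI, but the folder grouping and a quick inventory of what SRM already protects can be scripted. A sketch assuming the VMware.VimAutomation.Srm module, an existing vCenter connection, and a single datacenter; folder and VM names are placeholders:

    # Group the protected VMs into a folder
    $folder = New-Folder -Name 'Tier1-Protected' -Location (Get-Folder -Name vm)
    Get-VM -Name 'app01','app02' | Move-VM -InventoryLocation $folder

    # Connect to the SRM server paired with the current vCenter session
    $srm    = Connect-SrmServer
    $srmApi = $srm.ExtensionData

    # List existing protection groups and recovery plans by name
    $srmApi.Protection.ListProtectionGroups() | ForEach-Object { $_.GetInfo().Name }
    $srmApi.Recovery.ListPlans()              | ForEach-Object { $_.GetInfo().Name }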