Table of Contents
About the Author
About the Contributors
Chapter 1: Introducing VMware vSphere 5
Chapter 2: Planning and Installing VMware ESXi
Chapter 3: Installing and Configuring vCenter Server
Chapter 4: Installing and Configuring vSphere Update Manager
Chapter 5: Creating and Configuring Virtual Networks
Chapter 6: Creating and Configuring Storage Devices
Chapter 7: Ensuring High Availability and Business Continuity
Chapter 8: Securing VMware vSphere
Chapter 9: Creating and Managing Virtual Machines
Chapter 10: Using Templates and vApps
Chapter 11: Managing Resource Allocation
Chapter 12: Balancing Resource Utilization
Chapter 13: Monitoring VMware vSphere Performance
Chapter 14: Automating VMware vSphere
Appendix A: The Bottom Line
Removing a dvSwitch is possible only if no VMs have been assigned to a dvPort group on the dvSwitch. Otherwise, the removal of the dvSwitch is blocked with an error message similar to the one displayed previously in Figure 5.50. Again, you’ll need to reconfigure the VM(s) to use a different vSwitch or dvSwitch before the operation can proceed. Refer to Chapter 9, “Creating and Managing Virtual Machines,” for more information on modifying a VM’s network settings.
Perform the following steps to remove the dvSwitch if no VMs are using the dvSwitch or any of the dvPort groups on that dvSwitch:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. You can also select the View menu and then choose Inventory → Networking, or you can press the keyboard hotkey (Ctrl+Shift+N).
3. Select an existing vSphere Distributed Switch in the inventory pane on the left.
4. Right-click the dvSwitch and select Remove, or choose Remove from the Edit menu. Select Yes in the confirmation dialog box that appears.
5. The dvSwitch and all associated dvPort groups are removed from the inventory and from any connected hosts.
The bulk of the configuration for a dvSwitch isn’t performed for the dvSwitch itself but rather for the dvPort groups on that dvSwitch.
Creating and Configuring dvPort Groups
With vSphere Standard Switches, port groups are the key to connectivity for the VMkernel and for VMs. Without ports and port groups on a vSwitch, nothing can be connected to that vSwitch. The same is true for vSphere Distributed Switches. Without a dvPort group, nothing can be connected to a dvSwitch, and the dvSwitch is, therefore, unusable. In this section, you’ll take a closer look at creating and configuring dvPort groups.
Perform the following steps to create a new dvPort group:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. Alternately, from the View menu, select Inventory → Networking.
3. Select an existing vSphere Distributed Switch in the inventory pane on the left, click the Summary tab in the details pane on the right, and select New Port Group in the Commands section. This launches the Create Distributed Port Group Wizard, as illustrated in Figure 5.51. The name of the dvPort group and the number of ports are self-explanatory, but the options under VLAN Type need a bit more explanation:
- With VLAN Type set to None, the dvPort group will receive only untagged traffic. The uplinks must connect to physical switch ports configured as access ports; if they connect to trunk ports instead, the dvPort group will receive only the untagged/native VLAN traffic.
- With VLAN Type set to VLAN, you’ll then need to specify a VLAN ID. The dvPort group will receive traffic tagged with that VLAN ID. The uplinks must connect to physical switch ports configured as VLAN trunks.
- With VLAN Type set to VLAN Trunking, you’ll then need to specify the range of allowed VLANs. The dvPort group will pass the VLAN tags up to the guest OSes on any connected VMs.
- With VLAN Type set to Private VLAN, you’ll then need to specify a Private VLAN entry. Private VLANs are described in detail later in this section.

Specify a descriptive name for the dvPort group, select the appropriate number of ports, select the correct VLAN type, and then click Next.
4. On the summary screen, review the settings, and click Finish if everything is correct.
Figure 5.51 The Create Distributed Virtual Port Group Wizard allows the user to specify the name of the dvPort group, the number of ports, and the VLAN type.
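The VLAN Type options above can be summarized as a simple predicate: does a frame with a given tag reach the VMs on the dvPort group? The following sketch models that behavior in Python. This is a simplified illustration, not VMware code, and the function name and return conventions are my own.

```python
# Simplified model of how a dvPort group's VLAN Type setting treats
# an arriving frame. Illustrative only -- not VMware's implementation.

def frame_delivered(vlan_type, setting, frame_vlan):
    """Return True if a frame tagged with frame_vlan (None = untagged)
    would be delivered to VMs on the dvPort group."""
    if vlan_type == "None":
        return frame_vlan is None            # only untagged traffic
    if vlan_type == "VLAN":
        return frame_vlan == setting         # tag must match the VLAN ID
    if vlan_type == "VLAN Trunking":
        lo, hi = setting                     # allowed range; tag is passed up to the guest OS
        return frame_vlan is not None and lo <= frame_vlan <= hi
    raise ValueError("Private VLAN is handled separately")

print(frame_delivered("None", None, None))             # True
print(frame_delivered("VLAN", 100, 100))               # True
print(frame_delivered("VLAN Trunking", (10, 20), 15))  # True
print(frame_delivered("VLAN Trunking", (10, 20), 30))  # False
```

Note that in the VLAN Trunking case the tag is preserved and delivered to the guest OS, whereas in the VLAN case the dvSwitch strips the tag before delivery; the model above only decides delivery, not tag handling.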
After a dvPort group has been created, you can select that dvPort group in the VM configuration as a possible network connection, as shown in Figure 5.52.
Figure 5.52 A dvPort group is selected as a network connection for VMs, just like port groups on a Standard vSwitch.
After creating a dvPort group, selecting the dvPort group in the inventory on the left side of the vSphere Client provides you with the option to get more information about the dvPort group and its current state:
- The Summary tab provides exactly that — summary information such as the total number of ports in the dvPort group, the number of available ports, any configured IP pools, and the option to edit the settings for the dvPort group.
- The Ports tab lists the dvPorts in the dvPort group, their current status, attached VMs, and port statistics, as illustrated in Figure 5.53.
To update the port status or statistics, click the link in the upper-right corner labeled Start Monitoring Port State. That link then changes to Stop Monitoring Port State, which you can use to disable port monitoring.
- The Virtual Machines tab lists any VMs currently attached to that dvPort group. The full range of VM operations — such as editing VM settings, shutting down the VM, and migrating the VM — is available from the context menu of a VM listed in this area.
- The Hosts tab lists all ESXi hosts currently participating in the dvSwitch that hosts this dvPort group. As with VMs, right-clicking a host here provides a context menu with the full range of options, such as creating a new VM, entering maintenance mode, checking host profile compliance, or rebooting the host.
- The Tasks & Events tab lists all tasks or events associated with this dvPort group.
- The Alarms tab shows any alarms that have been defined or triggered for this dvPort group.
- The Permissions tab shows permissions that have been applied to (or inherited by) this dvPort group.
Figure 5.53 The Ports tab shows all the dvPorts in the dvPort group along with port status and port statistics.
To delete a dvPort group, right-click the dvPort group and select Delete. If any VMs are still attached to that dvPort group, the vSphere Client prevents the deletion of the dvPort group and logs an error message into the Tasks pane of the vSphere Client. This error is also visible on the Tasks And Events tab of the dvPort group.
To delete the dvPort group, you first have to reconfigure the VM to use a different dvPort group or a different vSwitch or dvSwitch. You can either edit the settings of the VM, or just use drag and drop in the Networking inventory view to reconfigure the VM’s network settings.
To edit the configuration of a dvPort group, use the Edit Settings link in the Commands section on the dvPort group’s Summary tab. This produces the dialog box shown in Figure 5.54. The various options along the left side of the dvPort group Settings dialog box allow you to modify different aspects of the dvPort group.
Different Options Are Available Depending on the dvSwitch Version
Recall that you can create different versions of dvSwitches in the vSphere Client. Certain configuration options, such as Resource Allocation and Monitoring, are available only with a version 5.0.0 vSphere Distributed Switch.
Figure 5.54 The Edit Settings command for a dvPort group allows you to modify the configuration of the dvPort group.
Let’s focus now on modifying VLAN settings, traffic shaping, and NIC teaming for the dvPort group. Policy settings for security and monitoring follow later in this chapter.
Perform the following steps to modify the VLAN settings for a dvPort group:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. Alternately, from the View menu, select Inventory → Networking.
3. Select an existing dvPort group in the inventory pane on the left, select the Summary tab in the details pane on the right, and click the Edit Settings option in the Commands section.
4. In the dvPort Group Settings dialog box, select the VLAN option under Policies from the list of options on the left.
5. Modify the VLAN settings by changing the VLAN ID or by changing the VLAN Type setting to VLAN Trunking or Private VLAN. Refer to Figure 5.51 for the different VLAN configuration options.
6. Click OK when you have finished making changes.
Perform the following steps to modify the traffic-shaping policy for a dvPort group:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. Alternately, from the View menu, select Inventory → Networking.
3. Select an existing dvPort group in the inventory pane on the left, select the Summary tab in the details pane on the right, and click the Edit Settings option in the Commands section.
4. Select the Traffic Shaping option from the list of options on the left of the dvPort group settings dialog box, as illustrated in Figure 5.55. Traffic shaping was described in detail in the section “Using and Configuring Traffic Shaping.” The big difference here is that with a dvSwitch, you can apply traffic-shaping policies to both ingress and egress traffic. With vSphere Standard Switches, you could apply traffic-shaping policies only to egress (outbound) traffic. Otherwise, the settings here for a dvPort group function as described earlier.
5. Click OK when you have finished making changes.
Figure 5.55 You can apply both ingress and egress traffic-shaping policies to a dvPort group on a dvSwitch.
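The three shaping parameters shown in Figure 5.55 (Average Bandwidth, Peak Bandwidth, and Burst Size) describe a token-bucket scheme: traffic may exceed the average rate up to the peak rate only while burst credit remains. The sketch below is a rough, hedged model of that idea; the accounting details are my simplification, not VMware's implementation.

```python
# Token-bucket sketch of the vSphere traffic-shaping parameters.
# Simplified illustration; the bookkeeping is an assumption, not
# VMware's actual algorithm.

class Shaper:
    def __init__(self, avg_kbps, peak_kbps, burst_kb):
        self.avg = avg_kbps          # long-term allowed rate (Kbps)
        self.peak = peak_kbps        # ceiling while burst credit lasts (Kbps)
        self.bucket = burst_kb * 8   # burst credit, converted KB -> kilobits

    def allowed_rate(self, demand_kbps):
        """Rate granted for one second of offered demand."""
        if demand_kbps <= self.avg:
            return demand_kbps
        extra = min(demand_kbps, self.peak) - self.avg
        granted_extra = min(extra, self.bucket)   # spend burst credit
        self.bucket -= granted_extra
        return self.avg + granted_extra

s = Shaper(avg_kbps=1000, peak_kbps=2000, burst_kb=500)
print(s.allowed_rate(3000))  # runs at the peak rate while credit lasts
print(s.allowed_rate(3000))  # credit is draining each second
```

On a dvPort group, a policy like this can be applied independently in the ingress and egress directions.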
Perform the following steps to modify the NIC teaming and failover policies for a dvPort group:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. Alternately, from the View menu, select Inventory → Networking.
3. Select an existing dvPort group in the inventory pane on the left, select the Summary tab in the details pane on the right, and click the Edit Settings option in the Commands section.
4. Select the Teaming And Failover option from the list of options on the left of the dvPort group Settings dialog box, as illustrated in Figure 5.56. These settings were described in detail in the section “Configuring NIC Teaming,” with one notable exception: version 4.1 and version 5.0 dvSwitches support a new load-balancing type, Route Based On Physical NIC Load. When this load-balancing policy is selected, ESXi checks the utilization of the uplinks every 30 seconds for congestion. In this case, congestion is defined as either transmit or receive traffic greater than 75 percent mean utilization over a 30-second period. If congestion is detected on an uplink, ESXi will dynamically reassign the VM to a different uplink.

Requirements for Load-Based Teaming
Load-Based Teaming (LBT) requires that all upstream physical switches be part of the same Layer 2 (broadcast) domain. In addition, VMware recommends that you enable the PortFast or PortFast Trunk option on all physical switch ports connected to a dvSwitch that is using Load-Based Teaming.

5. Click OK when you have finished making changes.
Figure 5.56 The Teaming And Failover item in the dvPort group Settings dialog box provides options for modifying how a dvPort group uses dvUplinks.
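The Load-Based Teaming congestion test described above (mean transmit or receive utilization above 75 percent over a 30-second window) can be expressed as a small predicate. The sketch below is illustrative only; the uplink-selection policy after congestion is detected is a hypothetical stand-in, since VMware does not document the exact reassignment algorithm here.

```python
# Sketch of the LBT congestion check: an uplink is congested when
# mean transmit OR receive utilization exceeds 75 percent over a
# 30-second window. Simplified model for illustration.

CONGESTION_THRESHOLD = 0.75  # 75 percent mean utilization

def is_congested(tx_samples, rx_samples):
    """Samples are per-second utilization fractions over the window."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(tx_samples) > CONGESTION_THRESHOLD or
            mean(rx_samples) > CONGESTION_THRESHOLD)

def rebalance(vm_uplink, uplinks):
    """If the VM's current uplink is congested, move the VM to the
    least-loaded uplink (a hypothetical selection policy)."""
    tx, rx = uplinks[vm_uplink]
    if not is_congested(tx, rx):
        return vm_uplink
    return min(uplinks, key=lambda u: sum(uplinks[u][0]) + sum(uplinks[u][1]))

uplinks = {
    "vmnic0": ([0.9] * 30, [0.2] * 30),  # transmit side congested
    "vmnic1": ([0.1] * 30, [0.1] * 30),
}
print(rebalance("vmnic0", uplinks))  # -> vmnic1
```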
If you browse through the available settings, you might notice a Blocked policy option. This is the equivalent of disabling a group of ports in the dvPort group. Figure 5.57 shows that the Block All Ports setting is set to either Yes or No. If you set the Block policy to Yes, then all traffic to and from that dvPort group is dropped. Don’t set the Block policy to Yes unless you are prepared for network downtime for all VMs attached to that dvPort group!
Figure 5.57 The Block policy is set to either Yes or No. Setting the Block policy to Yes disables all the ports in that dvPort group.
With a dvSwitch, managing adapters — both virtual and physical — is handled quite differently than with a standard vSwitch. Virtual adapters are VMkernel interfaces, so by managing virtual adapters, I’m really talking about managing VMkernel traffic — management, vMotion, IP-based storage, and Fault Tolerance logging — on a dvSwitch. Physical adapters are, of course, the physical network adapters that serve as uplinks for the dvSwitch. Managing physical adapters means adding or removing physical adapters connected to ports in the dvUplinks dvPort group on the dvSwitch.
Perform the following steps to add a virtual adapter to a dvSwitch:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Hosts And Clusters option under Inventory. Alternately, from the View menu, select Inventory → Hosts And Clusters. The Ctrl+Shift+H hotkey also takes you to the correct view.
3. Select an ESXi host in the inventory pane on the left, click the Configuration tab in the details pane on the right, and select Networking from the Hardware list.
4. Click to change the view from vSphere Standard Switch to vSphere Distributed Switch, as illustrated in Figure 5.58.
5. Click the Manage Virtual Adapters link. This opens the Manage Virtual Adapters dialog box, as shown in Figure 5.59.
6. Click the Add hyperlink. The Add Virtual Adapter Wizard appears, offering you the option to either create a new virtual adapter or migrate existing virtual adapters. Creating a new virtual adapter involves providing information about the VMkernel port and then attaching the new virtual adapter to an existing dvPort group. The wizard also prompts for IP address information because that is required when creating a VMkernel interface. Refer to the earlier sections about configuring ESXi management and VMkernel networking for more information. In the section “Creating a vSphere Distributed Switch,” I mentioned that I would show you how to migrate virtual adapters. This is where you would migrate a virtual adapter. In the Add Virtual Adapter dialog box, select Migrate Existing Virtual Adapters and click Next.
7. For each current virtual adapter, select the new destination port group on the dvSwitch. Deselect the box next to the current virtual adapters that you don’t want to migrate right now. This is illustrated in Figure 5.60. Click Next to continue.
8. Review the changes to the dvSwitch — which are helpfully highlighted for easy identification — and click Finish to commit the changes.
Figure 5.58 To manage virtual adapters, switch the Networking view to vSphere Distributed Switch in the vSphere Client.
Figure 5.59 The Manage Virtual Adapters dialog box allows users to create VMkernel interfaces, referred to here as virtual adapters.
Figure 5.60 For each virtual adapter migrating to the dvSwitch, you must assign the virtual adapter to an existing dvPort group.
After creating or migrating a virtual adapter, the same dialog box allows for changes to the virtual port, such as modifying the IP address, changing the dvPort group to which the adapter is assigned, or enabling features such as vMotion or Fault Tolerance logging. You would remove virtual adapters using this dialog box as well.
The Manage Physical Adapters link allows you to add or remove physical adapters connected to ports in the dvUplinks port group on the dvSwitch. Although you can specify physical adapters during the process of adding a host to a dvSwitch, as shown earlier, it might be necessary at times to connect a physical NIC to a port in the dvUplinks port group on the dvSwitch after the host is already participating in the dvSwitch.
Perform the following steps to add a physical network adapter in an ESXi host to the dvUplinks port group on the dvSwitch:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Hosts And Clusters option under Inventory. Alternately, from the View menu, select Inventory → Hosts And Clusters. The Ctrl+Shift+H hotkey will also take you to the correct view.
3. Select an ESXi host in the inventory list on the left, click the Configuration tab in the details pane on the right, and select Networking from the Hardware list.
4. Click to change the view from vSphere Standard Switch to vSphere Distributed Switch.
5. Click the Manage Physical Adapters link. This opens the Manage Physical Adapters dialog box, as shown in Figure 5.61.
6. To add a physical network adapter to the dvUplinks port group, click the Click To Add NIC link.
7. In the Add Physical Adapter dialog box, select the physical adapter to be added to the dvUplinks port group, and click OK.
8. Click OK again to return to the vSphere Client.
Figure 5.61 The Manage Physical Adapters dialog box provides information on physical NICs connected to the dvUplinks port group and allows you to add or remove uplinks.
In addition to being able to migrate virtual adapters, you can use vCenter Server to assist in migrating VM networking between vSphere Standard Switches and vSphere Distributed Switches, as shown in Figure 5.62.
Figure 5.62 The Migrate Virtual Machine Networking tool automates the process of migrating VMs from vSwitches to dvSwitches and back again.
This tool, accessed using the Migrate Virtual Machine Networking link on the Summary tab of a dvSwitch, reconfigures all selected VMs to use the selected destination network. This is a lot easier than individually reconfiguring a bunch of VMs! In addition, this tool allows you to easily migrate VMs both to and from a dvSwitch. Let’s walk through the process so that you can see how it works.
Perform the following steps to migrate VMs from a vSphere Standard Switch to a vSphere Distributed Switch:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. Navigate to the Networking inventory view.
3. Select a dvSwitch from the inventory tree on the left, select the Summary tab, and click the Migrate Virtual Machine Networking link in the Commands section. This launches the Migrate Virtual Machine Networking wizard.
4. Select the source network that contains the VMs you’d like to migrate. If you prefer to work by dvSwitch and dvPort group, click the Filter By VDS link.
5. Select the destination network to which you’d like the VMs to be migrated. Again, use the Filter By VDS link if you’d rather select the destination by dvSwitch and dvPort group.
6. Click Next when you’ve finished selecting the source and destination networks.
7. A list of matching VMs is generated, and each VM is analyzed to determine whether the destination network is Accessible or Inaccessible to the VM. Figure 5.63 shows a list with both Accessible and Inaccessible destination networks. A destination network might show up as Inaccessible if the ESXi host on which that VM is running isn’t part of the dvSwitch (as is the case in this instance). Select the VMs you want to migrate; then click Next.
8. Click Finish to start the migration of the selected VMs from the specified source network to the selected destination network. You’ll see a Reconfigure Virtual Machine task spawn in the Tasks pane for each VM that needs to be migrated.
Figure 5.63 You cannot migrate VMs matching your source network selection if the destination network is listed as Inaccessible.
Keep in mind that this tool can migrate VMs from a vSwitch to a dvSwitch or from a dvSwitch to a vSwitch — you only need to specify the source and destination networks accordingly.
Now that I’ve covered the basics of dvSwitches, I’d like to delve into a few advanced topics. First up is network monitoring using NetFlow.
Using NetFlow on vSphere Distributed Switches
NetFlow is a mechanism for efficiently reporting IP-based traffic information as a series of traffic flows. A traffic flow is defined by the combination of source and destination IP addresses, source and destination TCP or UDP ports, the IP protocol, and the IP Type of Service (ToS). Network devices that support NetFlow track and report information on these traffic flows, typically sending the information to a NetFlow collector. Using the data collected, network administrators gain detailed insight into the types and amounts of traffic flowing across the network.
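The flow definition above means that packets sharing the same key tuple are aggregated into a single flow record, and the device exports per-flow counters to the collector. The following sketch shows that aggregation step in Python; the packet representation and field names are my own, chosen for illustration.

```python
# Illustration of NetFlow flow aggregation: packets sharing the same
# key (src/dst IP, src/dst port, IP protocol, ToS) belong to one flow,
# and a NetFlow-capable device keeps per-flow packet and byte counters
# that it later exports to a collector. Simplified sketch.

from collections import namedtuple, defaultdict

FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto tos")

def aggregate(packets):
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = FlowKey(pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
                      pkt["dst_port"], pkt["proto"], pkt["tos"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["len"]
    return flows

pkts = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "src_port": 49152,
     "dst_port": 443, "proto": "TCP", "tos": 0, "len": 1500},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "src_port": 49152,
     "dst_port": 443, "proto": "TCP", "tos": 0, "len": 400},
]
flows = aggregate(pkts)
print(len(flows))  # 1 -- both packets collapse into a single flow
```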
In vSphere 5.0, VMware introduced support for NetFlow with vSphere Distributed Switches (only on version 5.0.0 dvSwitches). This allows ESXi hosts to gather detailed per-flow information and report that information to a NetFlow collector.
Configuring NetFlow is a two-step process:
1. Configure the NetFlow properties on the dvSwitch.
2. Enable or disable NetFlow (the default is disabled) on a per–dvPort group basis.
Let’s take a closer look at these steps.
To configure the NetFlow properties for a dvSwitch, perform these steps:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. Select View → Inventory → Networking to navigate to the Networking inventory view, where you’ll see your configured dvSwitches and dvPort groups listed.
3. Select the dvSwitch for which you’d like to configure the NetFlow properties, and click the Edit Settings link in the Commands section of the Summary tab. This opens the dvSwitch Settings dialog box.
4. Click the NetFlow tab.
5. As shown in Figure 5.64, specify the IP address of the NetFlow collector, the port on the NetFlow collector, and an IP address to identify the dvSwitch.
6. You can modify the Advanced Settings if advised to do so by your networking team.
7. If you want the dvSwitch to only process internal traffic flows — that is, traffic flows from VM to VM on that host — select Process Internal Flows Only.
8. Click OK to commit the changes and return to the vSphere Client.
Figure 5.64 You’ll need the IP address and port number for the NetFlow collector in order to send flow information from a dvSwitch.
After you configure the NetFlow properties for the dvSwitch, you then enable NetFlow on a per-dvPort group basis. The default setting is Disabled.
Perform these steps to enable NetFlow on a specific dvPort group:
1. In the vSphere Client, switch to the Networking inventory view.
2. Select the dvPort group for which NetFlow should be enabled.
3. Click the Summary tab, and then click Edit Settings in the Commands area. You can also right-click the dvPort group and select Edit Settings from the context menu.
4. The dvPort group Settings dialog box appears. Click Monitoring in the list of options on the left. This displays the NetFlow setting, as shown in Figure 5.65.
5. From the NetFlow Status drop-down list, select Enabled.
6. Click OK to save the changes to the dvPort group.
Figure 5.65 NetFlow is disabled by default. You enable NetFlow on a per–dvPort group basis.
This dvPort group will start capturing NetFlow statistics and reporting that information to the specified NetFlow collector.
Another feature that was present in previous versions of vSphere but has been expanded in vSphere 5.0 is support for switch discovery protocols, as discussed in the next section.
Enabling Switch Discovery Protocols
Previous versions of vSphere supported Cisco Discovery Protocol (CDP), a protocol for exchanging information between network devices. However, it required using the command line to enable and configure CDP.
In vSphere 5.0, VMware added support for Link Layer Discovery Protocol (LLDP), a vendor-neutral, industry-standard counterpart to CDP, and provided a location within the vSphere Client where CDP/LLDP support can be configured.
Perform the following steps to configure switch discovery support:
1. In the vSphere Client, switch to the Networking inventory view.
2. Select the dvSwitch for which you’d like to configure CDP or LLDP support, and click Edit Settings. You can also right-click the dvSwitch and select Edit Settings from the context menu.
3. Click Advanced.
4. Configure the dvSwitch for CDP or LLDP support, as shown in Figure 5.66. The figure shows the dvSwitch configured for LLDP support, both listening (receiving LLDP information from other connected devices) and advertising (sending LLDP information to other connected devices).
5. Click OK to save your changes.
Figure 5.66 LLDP support enables dvSwitches to exchange discovery information with other LLDP-enabled devices over the network.
Once the ESXi hosts participating in this dvSwitch start exchanging discovery information, you can view that information from the physical switch(es). For example, on most Cisco switches the show cdp neighbor command displays information about CDP-enabled network devices, including ESXi hosts. Entries for ESXi hosts include the physical NIC in use and the vSwitch involved.
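If you need to pick the ESXi entries out of that neighbor listing programmatically, a few lines of text processing suffice. The sketch below assumes a simplified, comma-separated layout and hostnames beginning with "esxi"; real `show cdp neighbor` output varies by switch model and software version, so treat both assumptions as illustrative.

```python
# Sketch: extract ESXi host entries from a (simplified) `show cdp
# neighbor` listing. The fixed comma-separated layout and the "esxi"
# hostname prefix are assumptions for illustration only.

def esxi_neighbors(output):
    entries = []
    for line in output.strip().splitlines()[1:]:   # skip the header row
        device, local_if, port_id = [f.strip() for f in line.split(",")]
        if device.startswith("esxi"):              # assumed naming convention
            entries.append({"host": device,
                            "switch_port": local_if,  # physical switch interface
                            "vmnic": port_id})        # host NIC serving as the uplink
    return entries

sample = """Device ID, Local Intrfce, Port ID
esxi-01.lab.local, Gig 1/0/1, vmnic0
core-sw2, Gig 1/0/24, Gig 0/1"""
print(esxi_neighbors(sample))
```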
The final advanced networking topic I’ll review is private VLANs. Private VLANs were first added in vSphere 4.0, and support for private VLANs continues in vSphere 5.
Setting Up Private VLANs
Private VLANs (PVLANs) are an advanced networking feature of vSphere that build on the functionality of vSphere Distributed Switches. Private VLANs are possible only when using dvSwitches and are not available to use with vSphere Standard Switches.
I’ll provide a quick overview of private VLANs. PVLANs are a way to further isolate ports within a VLAN. For example, consider the scenario of hosts within a demilitarized zone (DMZ). Hosts within a DMZ rarely need to communicate with each other, but using a VLAN for each host quickly becomes unwieldy for a number of reasons. By using PVLANs, you can isolate hosts from each other while keeping them on the same IP subnet. Figure 5.67 provides a graphical overview of how PVLANs work.
Figure 5.67 Private VLANs can help isolate ports on the same IP subnet.
PVLANs are configured in pairs: the primary VLAN and any secondary VLANs. The primary VLAN is considered the downstream VLAN; that is, traffic to the host travels along the primary VLAN. The secondary VLAN is considered the upstream VLAN; that is, traffic from the host travels along the secondary VLAN.
To use PVLANs, first configure the PVLANs on the physical switches connecting to the ESXi hosts, and then add the PVLAN entries to the dvSwitch in vCenter Server.
Perform the following steps to define PVLAN entries on a dvSwitch:
1. Launch the vSphere Client, and connect to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. Alternately, from the View menu, select Inventory → Networking or press the Ctrl+Shift+N hotkey.
3. Select an existing dvSwitch in the inventory pane on the left, select the Summary tab in the details pane on the right, and click the Edit Settings option in the Commands section.
4. Select the Private VLAN tab.
5. Add a primary VLAN ID to the list on the left.
6. For each primary VLAN ID in the list on the left, add one or more secondary VLANs to the list on the right, as shown in Figure 5.68. Secondary VLANs are classified as one of the two following types:
- Isolated: Ports placed in secondary PVLANs configured as isolated are allowed to communicate only with promiscuous ports in the same secondary VLAN. I’ll explain promiscuous ports shortly.
- Community: Ports in a secondary PVLAN are allowed to communicate with other ports in the same secondary PVLAN as well as with promiscuous ports.

Only one isolated secondary VLAN is permitted for each primary VLAN. Multiple secondary VLANs configured as community VLANs are allowed.
7. When you finish adding all the PVLAN pairs, click OK to save the changes and return to the vSphere Client.
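The "only one isolated secondary per primary" rule lends itself to a quick validation check when you plan PVLAN pairs. The following sketch validates a planned PVLAN map before you enter it into the dvSwitch; the data structure is my own, chosen for illustration.

```python
# Validator for the PVLAN rule just described: each primary VLAN may
# have at most one isolated secondary VLAN, but any number of
# community secondaries. Illustrative sketch.

def validate_pvlan_map(pvlan_map):
    """pvlan_map: {primary_id: [(secondary_id, 'isolated'|'community'), ...]}"""
    for primary, secondaries in pvlan_map.items():
        isolated = [sid for sid, kind in secondaries if kind == "isolated"]
        if len(isolated) > 1:
            raise ValueError(f"Primary VLAN {primary} has {len(isolated)} "
                             "isolated secondaries; only one is permitted")
    return True

# One isolated plus several community secondaries is a valid pairing.
print(validate_pvlan_map({100: [(101, "isolated"),
                                (102, "community"),
                                (103, "community")]}))  # True
```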
Figure 5.68 Private VLAN entries consist of a primary VLAN and one or more secondary VLAN entries.
After the PVLAN IDs have been entered for a dvSwitch, you must create a dvPort group that takes advantage of the PVLAN configuration. The process for creating a dvPort group was described previously. Figure 5.69 shows the Create Distributed Port Group wizard for a dvPort group that uses PVLANs.
Figure 5.69 When creating a dvPort group with PVLANs, the dvPort group is associated with both the primary VLAN ID and a secondary VLAN ID.
In Figure 5.69 you can see the term promiscuous again. In PVLAN parlance, a promiscuous port is allowed to send and receive Layer 2 frames to any other port in the VLAN. This type of port is typically reserved for the default gateway for an IP subnet — for example, a Layer 3 router.
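Putting the three port types together, the PVLAN reachability rules can be summarized as a single predicate: promiscuous ports reach everything, community ports reach their own community plus promiscuous ports, and isolated ports reach only promiscuous ports. The sketch below models those rules within one primary VLAN; it is a simplified illustration, not an exhaustive treatment of PVLAN behavior.

```python
# The PVLAN reachability rules described above, as a predicate.
# Simplified model covering ports within one primary VLAN.

def can_communicate(a, b):
    """a, b: (kind, secondary_vlan) where kind is
    'promiscuous', 'community', or 'isolated'."""
    kind_a, vlan_a = a
    kind_b, vlan_b = b
    if "promiscuous" in (kind_a, kind_b):
        return True                   # e.g., the default gateway / L3 router
    if kind_a == kind_b == "community":
        return vlan_a == vlan_b       # same community secondary only
    return False                      # isolated ports never reach each other

router = ("promiscuous", None)
web1, web2 = ("community", 102), ("community", 102)
dmz_host = ("isolated", 101)
print(can_communicate(web1, web2))        # True
print(can_communicate(dmz_host, web1))    # False
print(can_communicate(dmz_host, router))  # True
```

This matches the DMZ scenario described earlier: isolated hosts share an IP subnet and reach their gateway, yet cannot reach one another.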
PVLANs are a powerful configuration tool but also a complex configuration topic and one that can be difficult to understand. For additional information on PVLANs, I recommend visiting Cisco’s website at www.cisco.com and searching for private VLANs.
As with vSphere Standard Switches, vSphere Distributed Switches provide a tremendous amount of flexibility in designing and configuring a virtual network. But, as with all things, there are limits to the flexibility. Table 5.2 lists some of the configuration maximums for vSphere Distributed Switches.
Table 5.2 Configuration maximums for ESXi networking components (vSphere Distributed Switches)
| Configuration item | Maximum |
| --- | --- |
| Switches per vCenter Server | 32 |
| Maximum ports per host (vSS/vDS) | 4,096 |
| vDS ports per vCenter instance | 30,000 |
| ESXi hosts per vDS | 350 |
| Static port groups per vCenter instance | 5,000 |
| Ephemeral port groups per vCenter instance | 256 |
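When sizing a design, it is worth checking planned counts against these maximums up front. The sketch below encodes the Table 5.2 values and flags any planned figure that exceeds them; the checker itself and its key names are illustrative.

```python
# Sanity-check a planned design against the vSphere 5 Distributed
# Switch maximums from Table 5.2. The limit values are copied from
# the table; the checker is an illustrative sketch.

VDS_MAXIMUMS = {
    "switches_per_vcenter": 32,
    "ports_per_host": 4096,
    "vds_ports_per_vcenter": 30000,
    "hosts_per_vds": 350,
    "static_port_groups_per_vcenter": 5000,
    "ephemeral_port_groups_per_vcenter": 256,
}

def check_design(planned):
    """Return a list of (item, planned, maximum) violations."""
    return [(k, v, VDS_MAXIMUMS[k])
            for k, v in planned.items() if v > VDS_MAXIMUMS[k]]

print(check_design({"hosts_per_vds": 400, "switches_per_vcenter": 4}))
# -> [('hosts_per_vds', 400, 350)]
```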
As if adding vSphere Distributed Switches to vSphere and ESXi 4.0 wasn’t a big enough change from earlier versions of VMware Infrastructure, there’s something even bigger in store for you: the very first third-party vSphere Distributed Switch, the Cisco Nexus 1000V.
The Cisco Nexus 1000V is a third-party vSphere Distributed Switch, the first of its kind. Built as part of a joint engineering effort between Cisco and VMware and released with vSphere 4.0, the Nexus 1000V completely changes the dynamics in how the networking and server teams interact in environments using vSphere 4 and later.
Prior to the arrival of the Cisco Nexus 1000V, the reach of the networking team ended at the uplinks from the ESXi host to the physical switches. The networking team had no visibility into and no control over the networking inside the ESXi hosts. The server team, which used the vSphere Client to create and manage vSwitches and port groups, handled that functionality. The Cisco Nexus 1000V changes all that. Now the networking group will create the port groups that will be applied to VMs, and the server group will simply attach VMs to the appropriate port group — modeling the same behavior in the virtual environment as exists in the physical environment. In addition, organizations gain per-VM network statistics and much greater insight into the type of traffic that’s found on the ESXi hosts.
The Cisco Nexus 1000V has the following two major components:
- The Virtual Ethernet Module (VEM), which executes inside the ESXi hypervisor and replaces the standard vSwitch functionality. The VEM leverages the vSphere Distributed Switch APIs to bring features like quality of service (QoS), private VLANs, access control lists, NetFlow, and SPAN to VM networking.
- The Virtual Supervisor Module (VSM), which is a Cisco NX-OS instance running as a VM (note that Cisco also sells a hardware appliance, called the Nexus 1010, that can provide a Nexus 1000V VSM). The VSM controls multiple VEMs as one logical modular switch. All configuration is performed through the VSM and propagated to the VEMs automatically. The Nexus 1000V supports redundant VSMs, a configuration in which there is both a primary VSM and a secondary VSM.
The Cisco Nexus 1000V marks a new era in virtual networking. Let’s take a closer look at installing and configuring the Nexus 1000V, starting with the installation process.
Installing the Cisco Nexus 1000V
Installing the Nexus 1000V is a two-step process:
- You must first install at least one VSM. If you are going to set up redundant VSMs, you’ll need to wait to create the secondary VSM until after you’ve gotten the primary VSM up, running, and attached to vCenter Server.
- After a VSM is up and running, you use the VSM to push out the VEMs to the various ESXi hosts that use the Nexus 1000V as their dvSwitch.
Fortunately, users familiar with setting up a VM have an advantage in setting up the VSM because it operates as a VM. However, before attempting to set up the VSM as a VM, there are some dependencies that must be addressed. Specifically, you should be sure that you — or the appropriate networking individuals — have performed the following tasks before starting installation of the Nexus 1000V:
- You must identify three VLANs to be used by the Nexus 1000V VSM and VEMs: one VLAN for management traffic, one VLAN for control traffic, and one VLAN for packet traffic. These VLANs are not the same as the VLANs that you will configure on the Nexus 1000V to carry VM traffic or ESXi host traffic; these VLANs are used by the Nexus 1000V for VSM-VEM connectivity. The management VLAN can be the same VLAN that you use for management of the ESXi hosts themselves, if desired.
- You must configure the physical upstream switches to carry traffic from the relevant VLANs to the ESXi host(s) that will support the VSMs and VEMs. This generally means configuring the upstream switch ports as 802.1Q VLAN trunks and allowing all relevant VLANs across the VLAN trunk. The commands to do this vary from manufacturer to manufacturer; on most Cisco switches, you would use the switchport mode trunk and switchport trunk allowed vlan commands.
- On upstream physical switches, you should ensure that you are filtering Bridge Protocol Data Units (BPDUs). Either globally enable BPDU Filter and BPDU Guard, or use the spanning-tree bpdufilter enable and spanning-tree bpduguard enable interface commands on the specific interfaces where the Nexus 1000V dvSwitch uplinks will connect.
- Ports on upstream physical switches also require the use of the portfast trunk, portfast edge trunk, or spanning-tree port type edge trunk commands (the command varies based on the switch model). These are Cisco-specific commands, so for other vendors other commands would be necessary.
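Taken together, the upstream-switch requirements above might look like the following sketch on a Cisco IOS switch. The interface name and VLAN IDs are placeholders for illustration only, and the exact commands vary by platform and software version; consult your switch documentation before applying anything similar.

```
interface GigabitEthernet1/0/10
 description Uplink to ESXi host (Nexus 1000V VEM)
 switchport mode trunk
 switchport trunk allowed vlan 18,19,100
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable
 spanning-tree bpduguard enable
```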
- The ESXi host that will support the VSM must already have the appropriate VLANs configured and supported, including the control and packet VLANs.
For more complete and detailed information on these dependencies, I encourage you to refer to the official Cisco Nexus 1000V documentation. Once you’ve satisfied these requirements, you’re ready to start installing the Nexus 1000V VSM.
OVF Template for the Nexus 1000V VSM
Earlier versions of the Nexus 1000V that supported vSphere 4.x provided an Open Virtualization Format (OVF) template that simplified the deployment of the VSM (OVF templates are something that I discuss in greater detail in Chapter 10). At the time of this writing, an OVF template was not available for the vSphere 5.0-compatible version of the Nexus 1000V, and so I’ve modified the instructions accordingly.
Setting Up the Nexus 1000V VSM
After you’ve fulfilled all the necessary dependencies and requirements, the first step is to set up the first Nexus 1000V VSM.
Perform the following steps to install a Nexus 1000V VSM:
1. Use the vSphere Client to establish a connection to a vCenter Server or an ESXi host. Although the Nexus 1000V requires vCenter Server, the initial creation of the VSM could be done directly on an ESXi host if necessary.
2. Create a new VM with the following specifications:
- Guest OS: Other Linux (64-bit)
- Memory: 2 GB
- CPUs: One vCPU
- Network adapters: Three e1000 network adapters
- Virtual disk: 3 GB with LSI Logic Parallel adapter (thin provisioned virtual disks are not supported)
For more information on creating VMs and specifying these values, refer to Chapter 9, “Creating and Managing Virtual Machines.”
3. After the VM has been created, edit the VM to reserve 1500 MHz of CPU capacity and 2 GB of RAM. More information on reservations is found in Chapter 11.
4. Configure the network adapters so that the first e1000 adapter connects to the VLAN created for control traffic, the second e1000 adapter connects to the management VLAN, and the third e1000 adapter connects to the VLAN created for packet traffic. These are the three VLANs that you identified earlier in this chapter. It is very important that the adapters are configured in exactly this order.
5. Attach the Nexus 1000V VSM ISO image to the VM’s CD-ROM drive, and configure the CD-ROM to be connected at startup, as shown in Figure 5.70.
6. Power on the VM.
7. From the boot menu, select Install Nexus 1000V And Bring Up The New Image.
8. After the installation is complete, walk through the initial setup dialog. A series of questions prompt you for information such as the password for the admin user account; the VLAN IDs for the management, control, and packet VLANs; the IP address to be assigned to the VSM; and the default gateway for the VSM. When prompted for the HA role, enter standalone; if you are going to set up redundant VSMs, you’ll perform that task later.
Figure 5.70 The ISO image for the Nexus 1000V VSM should be attached to the VM’s CD-ROM drive for installation.
Once the VSM is up and running, the next step will be to connect it to vCenter Server. To help ensure a smooth process, I’d recommend using the ping command to double-check connectivity to both the VSM and the vCenter Server. This might help identify network connectivity issues, such as connecting the virtual NICs in the VSM in the wrong order, that would prevent successful completion of the next step.
If network connectivity to the VSM and to vCenter Server is working, then you’re ready to proceed with connecting the Nexus 1000V VSM to vCenter Server.
Connecting the Nexus 1000V VSM to vCenter Server
In early versions of the Nexus 1000V, multiple separate steps were needed to connect the VSM to vCenter Server for proper communications. Not too long after the initial release of the Nexus 1000V, Cisco released a web-based tool to help with the installation. These instructions will assume the use of the web-based tool to connect the VSM with vCenter Server.
To connect the Nexus 1000V VSM to vCenter Server using the web-based configuration tool, perform these steps:
1. Open a web browser and navigate to the IP address you assigned to the VSM during the initial setup dialog. For example, if you provided the IP address 10.1.9.110 during the initial setup dialog, you would navigate to http://10.1.9.110 in the web browser.
2. Click the Launch Installer Application hyperlink.
3. If prompted to run the application, click Run.
4. After a few moments, the Nexus 1000V Installation Management Center will launch. At the first screen, enter the admin password for the VSM (you provided this password during the initial setup dialog). Click Next to proceed to the next step.
5. Supply the information needed to connect to your vCenter Server instance. This includes the vCenter Server IP address, port number (the default is 443), username, and password, as illustrated in Figure 5.71. Make sure that Use A Configuration File is set to No, and then click Next to continue.
6. Select the cluster or ESXi host where the VSM VM is currently running. Click Next.
7. At the Configure Networking screen, select the VSM VM from the drop-down list of VMs.
8. Under Please Choose A Configuration Option, you have three options:
- If you select Default L2: Choose The Management VLAN For All Port Groups, then the management, control, and packet interfaces on the Nexus 1000V VSM will all use the management VLAN. Select this option only if you are sharing a single VLAN for all three VSM-VEM traffic types.
- If you select Advanced L2: Configure Each Port Group Individually, then you’ll have the option of selecting the appropriate port group, or creating a new port group, for each of the three interfaces on the VSM. Select this option if the management, control, and packet traffic types use different VLANs.
- If you select Advanced L3: Configure Port Groups For L3, you’ll have the ability to specify Layer 3 (routed) connectivity for the VSM interfaces. Select this option only if the VSM and the VEM will be separated by a router.
In most instances, I recommend either the Default L2 or the Advanced L2 option, depending on how your VLANs are configured. Once you’ve selected the right option, click Next to continue.
9. At the Configure VSM screen, supply the information requested by the installation application. This includes information like the VSM switch name, admin password, IP address, default gateway, HA role, domain ID, SVS datacenter name (the name of the datacenter object in vCenter Server), and the native (or untagged) VLAN. Click Next once you’re finished filling in the fields.
10. If you want to save the configuration out to a file for future reference or use, click Save Configuration To File and then select a destination file. Otherwise, click Next to proceed.
11. The Nexus 1000V Installation Management Center will proceed through a series of steps. As each step is completed, a green check mark will appear next to it. At certain points during this process, you might also notice tasks appearing in the Tasks pane of the vSphere Client as the application performs the necessary steps to integrate the VSM with vCenter Server.
12. When the checklist is complete, the application will automatically proceed to the next step. When prompted whether you want to migrate this host and its networks, select No. Click Next.
13. At the Summary screen, review the configuration and then click Close. Integration of the VSM with vCenter Server is now complete.
Figure 5.71 You must supply the necessary information to connect the VSM to vCenter Server.
Integration of the VSM with vCenter Server results in the creation of a vSphere Distributed Switch. If you navigate to the Networking inventory view, you can see the new dvSwitch that was created by the Nexus 1000V Installation Management Center.
At this point, you have the VSM connected to and communicating with vCenter Server. The next step is to configure a system uplink port profile; this is the equivalent of the dvUplinks dvPort group used by native dvSwitches and will contain the physical network adapters that will connect the Nexus 1000V to the rest of the network. While this port profile isn’t required for the VSM, it is required by the VEM, and it’s necessary to have this port profile in place before adding ESXi hosts to the Nexus 1000V and deploying the VEM onto those hosts.
1. Using PuTTY.exe (Windows) or a terminal window (Linux or Mac OS X), establish an SSH session to the VSM, and log in as the admin user.
2. Enter the following command to activate configuration mode:
config t
3. Enter the following commands to create the system uplink port profile:
port-profile type ethernet system-uplink
switchport mode trunk
switchport trunk allowed vlan 18, 19
no shutdown
system vlan 18, 19
state enabled
The order of the commands is important; some commands, like the system vlan command, won’t be accepted by the VSM until the allowed VLANs are defined and the port is in an active state (accomplished with the no shutdown command). Additionally, any VLANs that you want to define as system VLANs must already be created on the Nexus 1000V using the vlan command. Replace the VLAN IDs on the system vlan statement with the VLAN IDs of the control and packet VLANs. Likewise, specify the control and packet VLANs, along with any other VLANs that should be permitted across these uplinks, on the switchport trunk allowed vlan command. If you would like to specify a different name for the dvPort group in vCenter Server than the name given in the port-profile statement, add that name with the vmware port-group command, like this:
vmware port-group dv-SystemUplinks
4. Use the exit command to exit out of configuration mode to privileged EXEC mode. Depending on where you are in the configuration, you might need to use this command more than once. You can also use Ctrl+Z to exit directly to privileged EXEC mode.
5. Copy the running configuration to the startup configuration so that it is persistent across reboots:
copy run start
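As noted above, the system VLANs must already exist on the VSM before they can be referenced in a port profile. A minimal sketch of creating them follows; the VLAN IDs (18 and 19) and the names are examples only:

```
config t
vlan 18
  name control-vlan
vlan 19
  name packet-vlan
exit
```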
The purpose of this port profile is to provide the configuration for the physical NICs in the servers that will participate in the Nexus 1000V distributed virtual switch. Without this port profile in place before adding your first ESXi host, the Nexus 1000V wouldn’t know how to configure the uplinks, and the host would become unreachable across the network. You’d then be forced to use the Direct Console User Interface (DCUI) to restore the default virtual network configuration on this host.
With the port profile for the uplinks in place, the one step remaining in the installation of the Nexus 1000V is adding ESXi hosts. The next section covers this procedure.
Adding ESXi Hosts to the Nexus 1000V
Like installing the Nexus 1000V VSM, the process of adding ESXi hosts to the Nexus 1000V is a two-step process:
- First, you must ensure the VEM is deployed to all ESXi hosts that you plan to add to the dvSwitch.
- Once the VEM is deployed to the ESXi hosts, you can add them to the Nexus 1000v distributed virtual switch.
I’ll discuss each of these steps in the sections below.
Deploying the VEM to an ESXi Host
The process for deploying the VEM to an ESXi host will depend on whether you have vSphere Update Manager (VUM) present in your environment. If VUM is present and configured with the Nexus 1000V software, then VUM will automatically push the VEM onto an ESXi host when you add the ESXi host to the distributed virtual switch. No additional effort is required once VUM has been configured; the process is automatic. You only need to configure VUM to point to a software repository that contains the Nexus 1000V software. This process is described in Chapter 4, “Installing and Configuring vSphere Update Manager.”
If, on the other hand, you are not using VUM or you have not configured VUM with the Nexus 1000V software, then you’ll have to install the VEM manually before adding an ESXi host to the distributed virtual switch.
Perform these steps to manually install the VEM onto an ESXi host:
1. Using the Datastore Browser in the vSphere Client, upload the VIB file for the Nexus 1000V VEM into a datastore accessible from the ESXi host on which you want to install the VEM. This process is described in Chapter 9 in the section “Working With Installation Media.”
2. Run this command, either from a system with the vSphere CLI (vCLI) installed or from the vSphere Management Assistant, replacing the placeholders with the ESXi host’s IP address and the actual datastore path to the VIB file:
esxcli --server=<ESXi host IP address> software vib install -v /vmfs/volumes/<VMFS datastore name>/<path to VIB file>
3. If prompted for a username and/or password, supply the appropriate credentials to authenticate to the ESXi host.
4. After a couple of minutes, the command will complete and, if successful, will return a message that the operation completed successfully. Repeat these steps for each ESXi host you are going to add to the Nexus 1000V distributed virtual switch.
With the VSM configured and connected to vCenter Server and the VEM installed on the ESXi hosts, you’re ready to add hosts to the Nexus 1000V distributed virtual switch.
Adding an ESXi Host to the Nexus 1000V
Adding an ESXi host to the Nexus 1000V is, for the most part, very much like adding an ESXi host to a VMware dvSwitch.
Perform these steps to add an ESXi host to the Nexus 1000V distributed virtual switch:
1. If it’s not already running, launch the vSphere Client and connect to the vCenter Server instance to which the VSM is connected. Although you could connect directly to an ESXi host to create the VSM VM and install the VEM, you must connect to vCenter Server to add a host to the Nexus 1000V dvSwitch. In addition, because there can be multiple instances of vCenter Server, it must be the instance of vCenter Server to which the VSM has been connected.
2. Navigate to the Networking inventory view.
3. Right-click the dvSwitch object that represents the Nexus 1000V and select Add Host. This launches the Add Host To vSphere Distributed Switch wizard.
4. Place a check mark next to each ESXi host you want to add to the Nexus 1000V dvSwitch.
5. For each ESXi host, place a check mark next to the physical NICs you want to use as uplinks for the Nexus 1000V. I generally recommend migrating only a single physical NIC over to the Nexus 1000V until you’ve verified that the dvSwitch is working as expected. Once you’ve confirmed that the Nexus 1000V configuration is correct and works, you can migrate the remaining physical NICs.
6. For each selected physical NIC on each ESXi host, select the desired uplink port group on the Nexus 1000V. Unless you’ve created additional uplink port groups, there will be only the single uplink port group you created earlier in the section “Connecting the Nexus 1000V VSM to vCenter Server.”
Multiple Uplink Groups
One key change between a native dvSwitch and the Cisco Nexus 1000V is that the Nexus 1000V supports multiple uplink groups. When adding a host to the Nexus dvSwitch, be sure to place the physical network adapters for that host into the appropriate uplink group(s).
7. When you’re finished selecting ESXi hosts, physical NICs, and uplink port groups, click Next.
8. If you are prompted to migrate one or more VMkernel ports, choose not to migrate them. You can migrate them manually after you’ve verified the operation of the Nexus 1000V. I described the process for migrating both physical and virtual adapters in the “Managing Adapters” section of this chapter. Click Next.
9. If you are prompted to migrate VM networking, choose not to migrate it. You can migrate VM networking configurations manually after you’ve verified the operation of the Nexus 1000V. Instructions for migrating VM networking configurations are also provided in the “Managing Adapters” section of this chapter. Click Next to continue.
10. Click Finish to complete adding the ESXi host to the Nexus 1000V.
If you didn’t install the VEM manually but are using VUM instead, VUM will automatically push the VEM to the ESXi host as part of adding the host to the Nexus 1000V distributed virtual switch. If you installed the VEM manually, then the host is added to the dvSwitch.
You can verify that the host was added to the Nexus 1000V and that the VEM is working properly by logging in to the VSM and using the show module command. For each ESXi host added and working properly, a Virtual Ethernet Module will be listed in the output of the command.
Removing a host from a Nexus 1000V distributed virtual switch is the same as for a native dvSwitch, so refer to those procedures in the section “Removing an ESXi Host from a Distributed vSwitch” for more information.
So you’ve installed the Nexus 1000V, but what’s next? In the next section, I’ll take a closer look at some common configuration tasks for the Nexus 1000V.
Configuring the Cisco Nexus 1000V
All configuration of the Nexus 1000V is handled by the VSM, typically at the CLI via SSH or Telnet. Like other members of the Cisco Nexus family, the Nexus 1000V VSM runs NX-OS, which is similar to Cisco’s Internetwork Operating System (IOS). Thanks to the increasing popularity of Cisco’s Nexus switches and the similarity between NX-OS and IOS, I expect that many IT professionals will be able to transition into NX-OS without too much difficulty.
The bulk of the configuration of the Nexus 1000V VSM is performed during installation. After installing the VSM and the VEMs and adding ESXi hosts to the dvSwitch, most remaining configuration tasks involve creating, removing, or modifying port profiles. Port profiles are the Nexus 1000V counterpart to VMware distributed virtual port groups (dvPort groups); every dvPort group on a Nexus 1000V corresponds to a port profile.
Earlier in this section I described how the Nexus 1000V brings the same creation-consumption model to the virtualized environment that currently exists in the physical environment. I’d like to expand a bit more on that concept to help further clarify the relationship between port profiles and vSphere port groups. In the physical data center environment, the networking team creates the appropriate configuration on the physical switches, and the server team consumes that configuration by connecting to the necessary ports. With the Nexus 1000V, the networking team creates the appropriate configuration on the VSM with port profiles. Those port profiles are automatically pushed into vCenter Server as dvPort groups. The server team then consumes that configuration by connecting VMs to the necessary dvPort group. Port profiles are the creation side of the model; port groups are the consumption side.
Now that you have a better understanding of the importance and necessity of port profiles in a Nexus 1000V environment, let’s walk through the process for creating a port profile.
Perform the following steps to create a new port profile:
1. Using PuTTY.exe (Windows) or a terminal window (Linux or Mac OS X), establish an SSH session to the VSM, and log in as the admin user.
2. If you are not already in privileged EXEC mode, indicated by a hash sign after the prompt, enter privileged EXEC mode with the enable command, and supply the password.
3. Enter the following command to enter configuration mode:
config t
4. Enter the following commands to create a new port profile:
port-profile type vethernet port-profile-name
switchport mode access
switchport access vlan 17
no shutdown
vmware port-group VMware-dvPort-Group-Name
state enabled
These commands create a port profile and a matching dvPort group in vCenter Server. In this example, ports in this dvPort group will be assigned to VLAN 17. Obviously, you can change the VLAN ID on the switchport access vlan statement, and you can change the name of the dvPort group using the vmware port-group statement. Note that the no shutdown command is important; without it, virtual Ethernet ports created from this port profile will be administratively down and will not send or receive traffic.
5. Use the end command to exit configuration mode and return to privileged EXEC mode.
6. Copy the running configuration to the startup configuration so that it is persistent across reboots:
copy run start
Upon completion of these steps, a dvPort group, either with the name specified on the vmware port-group statement or with the name of the port profile, will be listed in the vSphere Client under Inventory → Networking.
Perform the following steps to delete an existing port profile and the corresponding dvPort group:
1. Using PuTTY.exe (Windows) or a terminal window (Linux or Mac OS X), establish an SSH session to the VSM, and log in as the admin user.
2. If you are not already in privileged EXEC mode, indicated by a hash sign after the prompt, enter privileged EXEC mode with the enable command, and supply the password.
3. Enter the following command to enter configuration mode:
config t
4. Enter the following command to remove the port profile:
no port-profile type vethernet port-profile-name
If there are any VMs assigned to the dvPort group, the VSM CLI will respond with an error message indicating that the port profile is currently in use. You must reconfigure the VM(s) in question to use a different dvPort group before this port profile can be removed.
5. The port profile and the matching dvPort group are removed. You will be able to see the dvPort group being removed in the Tasks list at the bottom of the vSphere Client.
6. Use the end command to exit configuration mode and return to privileged EXEC mode.
7. Copy the running configuration to the startup configuration so that it is persistent across reboots:
copy run start
Perform the following steps to modify an existing port profile and the corresponding dvPort group:
1. Using PuTTY.exe (Windows) or a terminal window (Linux or Mac OS X), establish an SSH session to the VSM, and log in as the admin user.
2. If you are not already in privileged EXEC mode, indicated by a hash sign after the prompt, enter privileged EXEC mode with the enable command, and supply the password.
3. Enter the following command to enter configuration mode:
config t
4. Enter the following command to select a specific port profile for configuration:
port-profile type vethernet port-profile-name
5. Change the name of the associated dvPort group with this command:
vmware port-group New-VMware-dvPort-Group-Name
If there are any VMs assigned to the dvPort group, the VSM CLI will respond with an error message indicating that the port profile was updated locally but not updated in vCenter Server. You must reconfigure the VM(s) in question to use a different dvPort group and repeat this command in order for the change to take effect.
6. Change the access VLAN of the associated dvPort group with this command (replace 19 with an appropriate VLAN ID from your environment):
switchport access vlan 19
7. Remove the associated dvPort group, but leave the port profile intact, with this command:
no state enabled
8. Shut down the ports in the dvPort group with this command:
shutdown
Because the VSM runs NX-OS, a wealth of options is available for configuring ports and port profiles. For more complete and detailed information on the Cisco Nexus 1000V, refer to the official Nexus 1000V documentation and the Cisco website at www.cisco.com.
Even though vSwitches and dvSwitches are considered to be “dumb switches” — with the exception of the Nexus 1000V — you can configure them with security policies to enhance or ensure Layer 2 security. For vSphere Standard Switches, you can apply security policies at the vSwitch or at the port group level. For vSphere Distributed Switches, you apply security policies only at the dvPort group level. The security settings include the following three options:
- Promiscuous Mode
- MAC Address Changes
- Forged Transmits
Applying a security policy to a vSwitch is effective, by default, for all connection types within the switch. However, if a port group on that vSwitch is configured with a competing security policy, the port group’s policy will override the policy set at the vSwitch. For example, if a vSwitch is configured with a security policy that rejects MAC address changes but a port group on that vSwitch is configured to accept them, then any VM connected to that port group will be allowed to communicate even though it is using a MAC address that differs from the one configured in its VMX file.
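The override behavior described above can be modeled with a short sketch. This is purely illustrative (the function name and dictionary keys are invented for this example), not VMware’s implementation:

```python
# Model of vSwitch vs. port-group security policy precedence: any setting
# defined at the port-group level overrides the same setting on the vSwitch.

def effective_policy(vswitch_policy, portgroup_policy=None):
    """Return the security settings in effect for VMs in a port group."""
    effective = dict(vswitch_policy)          # start from the vSwitch policy
    if portgroup_policy:
        effective.update(portgroup_policy)    # port-group settings win
    return effective

# Default vSwitch profile: reject Promiscuous mode, accept the other two.
vswitch = {
    "promiscuous_mode": "Reject",
    "mac_address_changes": "Accept",
    "forged_transmits": "Accept",
}

# A hardened port group that explicitly rejects MAC address changes.
portgroup = {"mac_address_changes": "Reject"}

policy = effective_policy(vswitch, portgroup)
print(policy["mac_address_changes"])  # Reject -- the port-group setting wins
print(policy["promiscuous_mode"])     # Reject -- inherited from the vSwitch
```

Note that only the settings a port group explicitly defines are overridden; everything else is inherited from the vSwitch.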
The default security profile for a vSwitch, shown in Figure 5.72, is set to reject Promiscuous mode and to accept MAC address changes and forged transmits. Similarly, Figure 5.73 shows the default security profile for a dvPort group on a dvSwitch.
Figure 5.72 The default security profile for a vSwitch prevents Promiscuous mode but allows MAC address changes and forged transmits.
Figure 5.73 The default security profile for a dvPort group on a dvSwitch matches that for a standard vSwitch.
Each of these security options is explored in more detail in the following sections.
Understanding and Using Promiscuous Mode
The Promiscuous Mode option is set to Reject by default to prevent virtual network adapters from observing any of the traffic submitted through the vSwitch. For enhanced security, allowing Promiscuous mode is not recommended because it is an insecure mode of operation that allows a virtual adapter to access traffic other than its own. Despite the security concerns, there are valid reasons for permitting a switch to operate in Promiscuous mode. An intrusion-detection system (IDS) requires the ability to identify all traffic to scan for anomalies and malicious patterns of traffic.
Previously in this chapter, I talked about how port groups and VLANs do not have a one-to-one relationship and that there might be occasions when you have multiple port groups on a vSwitch configured with the same VLAN ID. This is exactly one of those situations: you need a particular system, the IDS, to see traffic intended for other virtual network adapters. Rather than granting that ability to all the systems on a port group, you can create a dedicated port group for just the IDS. It will have the same VLAN ID and other settings but will accept Promiscuous mode instead of rejecting it. This allows you, the administrator, to carefully control which systems are allowed to use this powerful and potentially security-threatening feature.
As shown in Figure 5.74, the virtual switch security policy will remain at the default setting of Reject for the Promiscuous Mode option, while the VM port group for the IDS will be set to Accept. This setting will override the virtual switch, allowing the IDS to monitor all traffic for that VLAN.
Figure 5.74 Promiscuous mode, though a reduction in security, is required when using an intrusion-detection system.
Allowing MAC Address Changes and Forged Transmits
When a VM is created with one or more virtual network adapters, a MAC address is generated for each virtual adapter. Just as Intel, Broadcom, and others manufacture network adapters that include unique MAC address strings, VMware has its own MAC prefix to ensure uniqueness. Of course, VMware doesn’t actually manufacture anything, because each adapter exists only as a virtual NIC in a VM. You can see the 6-byte, randomly generated MAC addresses for a VM in the configuration file (.vmx) of the VM, as shown in Figure 5.75. A VMware-assigned MAC address begins with the prefix 00:50:56 or 00:0C:29, giving addresses of the form 00:50:56:XX:YY:ZZ. In previous versions of ESXi, the value of the fourth octet (XX) would not exceed 3F, to prevent conflicts with other VMware products, but this appears to have changed in vSphere 5. The fifth and sixth octets (YY:ZZ) are generated randomly based on the Universally Unique Identifier (UUID) of the VM, which is tied to the location of the VM. For this reason, when a VM’s location is changed, a prompt appears prior to a successful boot. The prompt asks whether to keep the UUID or generate a new UUID, which helps prevent MAC address conflicts.
Manually Setting the MAC Address
Manually configuring a MAC address in the configuration file of a VM does not work unless the first three bytes are a VMware-provided prefix and the last three bytes are unique. If a non-VMware MAC prefix is entered in the configuration file, the VM will not power on.
Figure 5.75 A VM’s initial MAC address is automatically generated and listed in the configuration file for the VM.
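As a rough illustration of the scheme described above, the following sketch generates a MAC address with VMware’s 00:50:56 prefix. The random generation here is a simplification (real ESXi derives the trailing octets from the VM’s UUID), and the function name is invented for this example:

```python
# Illustrative generator for a VMware-style MAC address (00:50:56:XX:YY:ZZ).
# Real ESXi derives YY:ZZ from the VM's UUID; random octets stand in here.
import random

VMWARE_OUI = (0x00, 0x50, 0x56)  # one of VMware's registered prefixes

def generate_vmware_style_mac(max_fourth_octet=0x3F):
    """Return a MAC string using the VMware OUI. The fourth octet is capped
    at 3F, mirroring the limit older ESXi versions enforced."""
    tail = (
        random.randint(0x00, max_fourth_octet),  # XX
        random.randint(0x00, 0xFF),              # YY
        random.randint(0x00, 0xFF),              # ZZ
    )
    return ":".join(f"{octet:02X}" for octet in VMWARE_OUI + tail)

mac = generate_vmware_style_mac()
print(mac)  # e.g. 00:50:56:1A:2B:3C
```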
All VMs have two MAC addresses: the initial MAC and the effective MAC. The initial MAC address is the MAC address discussed in the previous paragraph that is generated automatically and that resides in the configuration file. The guest OS has no control over the initial MAC address. The effective MAC address is the MAC address configured by the guest OS that is used during communication with other systems. The effective MAC address is included in network communication as the source MAC of the VM. By default, these two addresses are identical. To force a non-VMware-assigned MAC address to a guest operating system, change the effective MAC address from within the guest OS, as shown in Figure 5.76.
Figure 5.76 A VM’s source MAC address is the effective MAC address, which by default matches the initial MAC address configured in the VMX file. The guest OS, however, may change the effective MAC address.
The ability to alter the effective MAC address cannot be removed from the guest OS. However, the ability to let the system function with this altered MAC address is easily addressable through the security policy of a vSwitch. The remaining two settings of a virtual switch security policy are MAC Address Changes and Forged Transmits. Both of these security policies are concerned with allowing or denying differences between the initial MAC address in the configuration file and the effective MAC address in the guest OS. As noted earlier, the default security policy is to accept the differences and process traffic as needed.
The difference between the MAC Address Changes and Forged Transmits security settings involves the direction of the traffic. MAC Address Changes is concerned with the integrity of incoming traffic, while Forged Transmits oversees the integrity of outgoing traffic. If the MAC Address Changes option is set to Reject, traffic will not be passed through the vSwitch to the VM (incoming) if the initial and the effective MAC addresses do not match. If the Forged Transmits option is set to Reject, traffic will not be passed from the VM to the vSwitch (outgoing) if the initial and the effective MAC addresses do not match. Figure 5.77 highlights the security restrictions implemented when MAC Address Changes and Forged Transmits are set to Reject.
Figure 5.77 The MAC Address Changes and Forged Transmits security options deal with incoming and outgoing traffic, respectively.
For the highest level of security, VMware recommends setting MAC Address Changes, Forged Transmits, and Promiscuous Mode on each vSwitch to Reject. When warranted or necessary, use port groups to loosen the security for a subset of VMs to connect to the port group.
Virtual Switch Policies for Microsoft Network Load Balancing

As with anything, there are, of course, exceptions. For VMs that will be configured as part of a Microsoft Network Load Balancing (NLB) cluster set in Unicast mode, the VM port group must allow MAC address changes and forged transmits. Systems that are part of an NLB cluster will share a common IP address and virtual MAC address.

The shared virtual MAC address is generated by using an algorithm that includes a static component based on the NLB cluster's configuration of Unicast or Multicast mode plus a hexadecimal representation of the four octets that make up the IP address. This shared MAC address will certainly differ from the MAC address defined in the VMX file of the VM. If the VM port group does not allow for differences between the MAC addresses in the VMX file and the guest OS, NLB will not function as expected. VMware recommends running NLB clusters in Multicast mode because of these issues with NLB clusters in Unicast mode.
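The derivation the sidebar alludes to can be sketched as follows. The 02-BF (Unicast) and 03-BF (Multicast) prefixes are the commonly documented NLB values; treat the exact constants as an assumption rather than something taken from this book:

```python
def nlb_cluster_mac(cluster_ip: str, unicast: bool = True) -> str:
    """Derive an NLB cluster's shared virtual MAC address: a mode-dependent
    static prefix (02:bf for Unicast mode, 03:bf for Multicast mode)
    followed by the four IP octets rendered in hexadecimal."""
    prefix = "02:bf" if unicast else "03:bf"
    octets = [int(o) for o in cluster_ip.split(".")]
    return prefix + "".join(f":{o:02x}" for o in octets)

# A Unicast-mode cluster on 192.168.1.10 shares MAC 02:bf:c0:a8:01:0a,
# which will never match the VMware-assigned MAC in any member's VMX
# file -- hence the need to allow MAC address changes and forged transmits.
assert nlb_cluster_mac("192.168.1.10") == "02:bf:c0:a8:01:0a"
assert nlb_cluster_mac("192.168.1.10", unicast=False) == "03:bf:c0:a8:01:0a"
```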
Perform the following steps to edit the security profile of a vSwitch:
1. Use the vSphere Client to establish a connection to a vCenter Server or an ESXi host.
2. Click the hostname in the inventory pane on the left, select the Configuration tab in the details pane on the right, and then select Networking from the Hardware menu.
3. Click the Properties link for the virtual switch.
4. Click the name of the virtual switch under the Configuration list, and then click the Edit button.
5. Click the Security tab, and make the necessary adjustments.
6. Click OK, and then click Close.
Perform the following steps to edit the security profile of a port group on a vSwitch:
1. Use the vSphere Client to establish a connection to a vCenter Server or an ESXi host.
2. Click the hostname in the inventory pane on the left, select the Configuration tab in the details pane on the right, and then select Networking from the Hardware menu.
3. Click the Properties link for the virtual switch.
4. Click the name of the port group under the Configuration list, and then click the Edit button.
5. Click the Security tab, and make the necessary adjustments.
6. Click OK, and then click Close.
Perform the following steps to edit the security profile of a dvPort group on a dvSwitch:
1. Use the vSphere Client to establish a connection to a vCenter Server instance.
2. On the vSphere Client home screen, select the Networking option under Inventory. Alternatively, from the View menu, select Inventory → Networking.
3. Right-click the dvPort group in the inventory pane on the left, and select Edit Settings.
4. Select Security from the list of policy options on the left side of the dialog box.
5. Make the necessary adjustments to the security policy.
6. Click OK to save the changes and return to the vSphere Client.
Managing the security of a virtual network architecture is much the same as managing security for any other portion of your information systems. Security policy should dictate that settings be configured as securely as possible, to err on the side of caution. Security should be reduced only with proper authorization, documentation, and change-management processes. In addition, any reduction in security should be scoped as narrowly as possible, affecting only the systems that require the adjustment.
Identify the components of virtual networking.
Virtual networking is a blend of virtual switches, physical switches, VLANs, physical network adapters, virtual adapters, uplinks, NIC teaming, VMs, and port groups.
What factors contribute to the design of a virtual network and the components involved?
Create virtual switches (vSwitches) and distributed virtual switches (dvSwitches).
vSphere introduces a new type of virtual switch, the vSphere Distributed Virtual Switch, while continuing to support the host-based vSwitch (now referred to as the vSphere Standard Switch) from previous versions. vSphere Distributed Switches bring new functionality to the vSphere networking environment, including private VLANs and a centralized point of management for ESXi clusters.
You’ve asked a fellow vSphere administrator to create a vSphere Distributed Virtual Switch for you, but the administrator is having problems completing the task because they can’t find the right command-line switches for vicfg-vswitch. What should you tell this administrator?
Install and perform basic configuration of the Cisco Nexus 1000V.
The Cisco Nexus 1000V is the first third-party Distributed Virtual Switch for vSphere. Running Cisco’s NX-OS, the Nexus 1000V uses a distributed architecture that supports redundant supervisor modules and provides a single point of management. Advanced networking functionality like quality of service (QoS), access control lists (ACLs), and SPAN ports is made possible via the Nexus 1000V.
A vSphere administrator is trying to use the vSphere Client to make some changes to the VLAN configuration of a dvPort group configured on a Nexus 1000V, but the option to edit the settings for the dvPort group isn’t showing up. Why?
Create and manage NIC teaming, VLANs, and private VLANs.
NIC teaming allows for virtual switches to have redundant network connections to the rest of the network. Virtual switches also provide support for VLANs, which provide logical segmentation of the network, and private VLANs, which provide added security to existing VLANs while allowing systems to share the same IP subnet.
You’d like to use NIC teaming to bond multiple physical uplinks together for greater redundancy and improved throughput. When selecting the NIC teaming policy, you select Route Based On IP Hash, but then the vSwitch seems to lose connectivity. What could be wrong?
Configure virtual switch security policies.
Virtual switches support security policies for allowing or rejecting Promiscuous Mode, allowing or rejecting MAC address changes, and allowing or rejecting forged transmits. All of the security options can help increase Layer 2 security.
You have a networking application that needs to see traffic on the virtual network that is intended for other production systems on the same VLAN. The networking application accomplishes this by using Promiscuous mode. How can you accommodate the needs of this networking application without sacrificing the security of the entire virtual switch?