Introduction
VMware vSphere 8 represents a significant evolution in virtualization, and for network engineers, it introduces capabilities that blur the line between physical and virtual networking. Whether you are a traditional network engineer stepping into the virtualization world or a seasoned VMware admin looking to deepen your networking knowledge, understanding how vSphere 8 handles networking is essential for modern data center operations.
This guide covers the key networking features in vSphere 8 from a network engineer’s perspective, focusing on the concepts, architecture, and practical configurations that matter most when you are managing or troubleshooting virtual infrastructure alongside your physical network.
vSphere 8 Networking Architecture Overview
Before diving into the new features, it helps to understand the fundamental networking building blocks in vSphere. If you come from a Cisco or Juniper background, think of vSphere networking as a software-defined Layer 2 switching fabric running inside each ESXi host.
Standard Virtual Switch (vSwitch)
The Standard vSwitch is a per-host Layer 2 switch. Each ESXi host has its own independent vSwitch configuration. This is analogous to having a standalone access switch at each host — no centralized management, no cross-host consistency guarantees. Configuration must be replicated manually or via scripts on every host.
Distributed Virtual Switch (VDS)
The VDS is managed centrally through vCenter and spans multiple ESXi hosts. Think of it as a single logical switch fabric stretched across your entire cluster. Configuration changes made at the VDS level automatically propagate to all member hosts. This is the recommended switch type for production environments and is required for many advanced networking features.
Key Terminology for Network Engineers
| Physical Network Term | vSphere Equivalent | Description |
|---|---|---|
| Physical NIC | vmnic / Uplink | Physical network adapter on the ESXi host connecting to your physical switch |
| Access Port | VM Port Group | A logical port group assigned a single VLAN ID where VMs connect |
| Trunk Port | Uplink with VLAN Trunking | Physical NIC configured to carry multiple VLANs (802.1Q) to the physical switch |
| SVI / VLAN Interface | VMkernel Port (vmk) | IP-addressable interface for ESXi management, vMotion, storage, etc. |
| Port Channel / LACP | Link Aggregation Group (LAG) | Bonded uplinks using LACP (VDS only) |
| QoS / Traffic Shaping | Network I/O Control (NIOC) | Traffic prioritization and bandwidth allocation |
| ACL / Firewall | Distributed Firewall (NSX) | Per-VM stateful firewall rules (requires NSX) |
What Changed in vSphere 8 for Networking
1. Enhanced Distributed Virtual Switch (VDS 8.0)
vSphere 8 ships with VDS version 8.0, which includes several improvements relevant to network engineers:
- Uniform Passthrough (UPTv2): Dramatically improves network performance for latency-sensitive workloads by letting a VM's virtual NIC bypass most of the virtual switch datapath and talk almost directly to the physical NIC. It is similar in spirit to SR-IOV passthrough, but unlike classic SR-IOV it retains vMotion compatibility and VDS policy enforcement
- Enhanced traffic analytics: Better visibility into per-flow statistics on distributed port groups, making it easier to troubleshoot network performance issues without deploying a separate packet capture appliance
- Improved LACP support: Enhanced Link Aggregation Control Protocol with better convergence times and more flexible hashing algorithms for load distribution
2. NSX Integration Improvements
vSphere 8 tightens the integration with VMware NSX (now NSX 4.x), making software-defined networking more seamless:
- Simplified NSX deployment: NSX can now be installed and managed more directly from the vSphere Client, reducing the separate management overhead
- Enhanced DPU/SmartNIC support: NSX networking and security functions can be offloaded to Data Processing Units (DPUs) like the NVIDIA BlueField, freeing host CPU cycles for application workloads
- Improved microsegmentation: The distributed firewall gains better performance and more granular policy controls, allowing network engineers to implement zero-trust security at the virtual NIC level
3. Network I/O Control v3 Enhancements
Network I/O Control (NIOC) is vSphere’s QoS mechanism. In vSphere 8, NIOC v3 provides more granular bandwidth management:
- Per-VM bandwidth reservations: Guarantee minimum bandwidth for critical VMs regardless of contention
- Traffic type classification: Separate bandwidth pools for management, vMotion, vSAN, NFS, and VM traffic
- Shares and limits: Similar to QoS priority queuing, you can assign relative shares and hard bandwidth limits to each traffic type
4. vSphere Lifecycle Manager Network Awareness
vSphere 8’s Lifecycle Manager (vLCM) now has better awareness of network driver and firmware versions, helping ensure your physical NICs are running compatible and tested driver versions. This is critical for network stability — many ESXi networking issues trace back to driver/firmware mismatches.
Configuring a Distributed Virtual Switch
For network engineers, the VDS is where you will spend most of your time. Here is a walkthrough of setting up a VDS for a typical production environment.
Create the VDS in vCenter
- In the vSphere Client, navigate to your Datacenter object
- Right-click and select Distributed Switch > New Distributed Switch
- Name it descriptively (e.g., `DSwitch-Prod-DC1`)
- Select version 8.0.0 for full vSphere 8 feature support
- Set the number of uplinks (typically 2 or 4 to match your physical NIC count)
- Enable Network I/O Control for traffic management
- Create a default port group or skip and create specific ones later
Create Distributed Port Groups
Port groups are the equivalent of VLAN assignments on a physical switch. Create one port group per VLAN/subnet that your VMs need access to.
Example Port Group Configuration:
| Port Group Name | VLAN ID | Purpose |
|---|---|---|
| DPG-Mgmt-VLAN10 | 10 | ESXi Management |
| DPG-vMotion-VLAN20 | 20 | vMotion Traffic |
| DPG-Production-VLAN100 | 100 | Production VMs |
| DPG-Dev-VLAN200 | 200 | Development VMs |
| DPG-Storage-VLAN30 | 30 | NFS/iSCSI Storage |
| DPG-Backup-VLAN40 | 40 | Backup Network |
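As a sanity check before clicking through the vSphere Client, a VLAN plan like the one above can be validated with a short script. This is an illustrative sketch — the dictionary simply mirrors the table, and the naming convention it enforces is the one used in this guide:

```python
# Sketch: validate a planned port-group-to-VLAN mapping before building it
# in vCenter. The mapping and naming convention are illustrative.

port_groups = {
    "DPG-Mgmt-VLAN10": 10,
    "DPG-vMotion-VLAN20": 20,
    "DPG-Production-VLAN100": 100,
    "DPG-Dev-VLAN200": 200,
    "DPG-Storage-VLAN30": 30,
    "DPG-Backup-VLAN40": 40,
}

def validate(groups: dict) -> list:
    """Return a list of problems found in the port group plan."""
    problems = []
    seen = {}
    for name, vlan in groups.items():
        if not 1 <= vlan <= 4094:          # valid 802.1Q VLAN ID range
            problems.append(f"{name}: VLAN {vlan} outside 1-4094")
        if vlan in seen:
            problems.append(f"{name}: VLAN {vlan} already used by {seen[vlan]}")
        seen.setdefault(vlan, name)
        if f"VLAN{vlan}" not in name:      # enforce the naming convention above
            problems.append(f"{name}: name does not match VLAN {vlan}")
    return problems

print(validate(port_groups))  # → [] when the plan is consistent
```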
For each port group, configure these settings that mirror physical switch configurations:
- VLAN Type: Set to “VLAN” and enter the VLAN ID (this is like setting `switchport access vlan 100` on a Cisco switch)
- Teaming and failover: Choose your NIC teaming policy (Route based on originating virtual port, IP hash, or explicit failover order)
- Security policies: Promiscuous mode, MAC address changes, and Forged transmits (default all to Reject for security)
- Traffic shaping: Set ingress and egress bandwidth limits if needed
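The default teaming policy, Route based on originating virtual port, pins each virtual port to a single uplink. ESXi's actual selection logic is internal, but its key property can be sketched as a simple modulo:

```python
# Illustrative model of "Route based on originating virtual port" teaming.
# ESXi's real implementation is internal; the property shown here is that
# each virtual port sticks to one uplink (no per-packet load balancing).

def uplink_for_port(port_id: int, uplinks: list) -> str:
    """Pin a virtual switch port to one uplink, modulo-style."""
    return uplinks[port_id % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
for port in (100, 101, 102, 103):
    print(port, "->", uplink_for_port(port, uplinks))
```

The takeaway for capacity planning: with this policy a single VM never exceeds one physical NIC's bandwidth; use IP hash with a proper port channel when one VM needs more.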
Add Hosts to the VDS
- Right-click the VDS and select Add and Manage Hosts
- Select the ESXi hosts to add
- Assign physical NICs (vmnics) to VDS uplinks — this is the equivalent of cabling a server to your physical switch
- Migrate existing VMkernel adapters (management, vMotion) from the standard vSwitch to the VDS
- Migrate VM networking to the new distributed port groups
Critical tip: Always keep at least one management path active during migration. If you are migrating the management VMkernel from a standard switch to the VDS, do it last or ensure you have out-of-band access (iLO/iDRAC/CIMC) in case connectivity is lost.
Physical Switch Configuration for ESXi
One of the most common sources of issues in vSphere networking is a mismatch between the virtual switch configuration and the physical switch port configuration. Here is what your physical switch ports need to look like.
Cisco IOS/IOS-XE Trunk Port for ESXi
! Configure the port connected to ESXi as an 802.1Q trunk
interface GigabitEthernet1/0/1
description ** ESXi Host 01 - vmnic0 **
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 10,20,30,40,100,200
switchport trunk native vlan 10
spanning-tree portfast trunk
no shutdown
! If using LACP (for VDS LAG):
interface Port-channel1
description ** ESXi Host 01 - LAG **
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 10,20,30,40,100,200
switchport trunk native vlan 10
spanning-tree portfast trunk
interface range GigabitEthernet1/0/1-2
description ** ESXi Host 01 - LACP member **
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 10,20,30,40,100,200
switchport trunk native vlan 10
channel-group 1 mode active
spanning-tree portfast trunk
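Because allowed-VLAN drift between the physical and virtual sides is such a common failure mode, some teams generate the trunk stanza from the same VLAN list that defines the port groups. A minimal sketch — the interface name and native VLAN are just the example values used above:

```python
# Sketch: render a Cisco IOS trunk stanza from the port group VLAN list,
# so the physical allowed-VLAN list never drifts from the virtual side.
# Interface name and native VLAN are the example values from this guide.

def trunk_config(interface: str, vlans: list, native: int) -> str:
    allowed = ",".join(str(v) for v in sorted(set(vlans)))
    return "\n".join([
        f"interface {interface}",
        " switchport trunk encapsulation dot1q",
        " switchport mode trunk",
        f" switchport trunk allowed vlan {allowed}",
        f" switchport trunk native vlan {native}",
        " spanning-tree portfast trunk",
        " no shutdown",
    ])

print(trunk_config("GigabitEthernet1/0/1", [10, 20, 30, 40, 100, 200], 10))
```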
Cisco Nexus Trunk Port for ESXi
interface Ethernet1/1
description ** ESXi Host 01 - vmnic0 **
switchport mode trunk
switchport trunk allowed vlan 10,20,30,40,100,200
switchport trunk native vlan 10
spanning-tree port type edge trunk
no shutdown
! For vPC (Virtual Port Channel) with LACP:
interface port-channel10
description ** ESXi Host 01 - vPC LAG **
switchport mode trunk
switchport trunk allowed vlan 10,20,30,40,100,200
switchport trunk native vlan 10
spanning-tree port type edge trunk
vpc 10
interface Ethernet1/1-2
description ** ESXi Host 01 - vPC member **
switchport mode trunk
switchport trunk allowed vlan 10,20,30,40,100,200
switchport trunk native vlan 10
channel-group 10 mode active
Common Physical Switch Mistakes
- Forgetting spanning-tree portfast: Without portfast, the port goes through STP listening/learning states (30 seconds) before forwarding. This causes ESXi network timeouts during boot or NIC failover
- Not pruning VLANs: Allowing all VLANs on the trunk when ESXi only needs specific ones floods the host with unnecessary broadcast and unknown-unicast traffic and enlarges the failure domain if a broadcast storm occurs
- Native VLAN mismatch: If the native VLAN on the physical switch does not match the VLAN configuration in vSphere, untagged traffic will end up on the wrong network
- MTU mismatch: If you enable jumbo frames (MTU 9000) on ESXi, every hop in the path (physical switch, upstream routers, storage array) must also support jumbo frames. A single hop with default MTU 1500 will cause packet fragmentation or drops
LACP Configuration on VDS
Link Aggregation Control Protocol (LACP) is supported only on the VDS (not on standard vSwitches). Here is how to configure a LAG on a VDS, which is the vSphere equivalent of a port channel on a physical switch.
- Navigate to the VDS settings in the vSphere Client
- Go to Configure > LACP
- Click + New to create a new LAG
- Configure the LAG:
  - Name: e.g., `LAG-Uplink`
  - Number of ports: Match your physical NIC count (typically 2)
  - Mode: Active (both sides initiate LACP negotiation)
  - Load balancing: Source and Destination IP Address (matches `src-dst-ip` on Cisco)
- On each host, assign the physical vmnics to the LAG ports
- Update distributed port group teaming to use the LAG as the active uplink
Important: The LACP hashing algorithm on the VDS and the physical switch must be compatible. If the VDS uses “Source and Destination IP,” configure the physical switch port channel with `port-channel load-balance src-dst-ip` (or the closest equivalent).
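To see why the two sides must agree, here is an illustrative src-dst-ip hash. The real hash functions are vendor-specific, but both ends key on the same header fields, so each flow maps to one consistent member link:

```python
# Illustrative src-dst-ip hash, as used conceptually by both the VDS LAG
# and a Cisco port channel with "port-channel load-balance src-dst-ip".
# Real hash functions differ per vendor; what matters is that both ends
# key on the same fields, so a given flow stays on one member link.

import ipaddress

def pick_link(src: str, dst: str, num_links: int) -> int:
    """XOR the two addresses and fold into a link index (illustrative)."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % num_links

# The same src/dst pair always lands on the same member link:
assert pick_link("10.1.1.5", "10.1.2.9", 2) == pick_link("10.1.1.5", "10.1.2.9", 2)
print(pick_link("10.1.1.5", "10.1.2.9", 2), pick_link("10.1.1.6", "10.1.2.9", 2))
```

Note that a single flow can never exceed one member link's bandwidth — the same caveat as any port channel.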
Network I/O Control (NIOC) Deep Dive
NIOC is vSphere’s built-in QoS engine. For network engineers who manage traffic prioritization on physical networks, NIOC provides similar capabilities at the hypervisor level.
Enable and Configure NIOC
- Select the VDS in the vSphere Client
- Go to Configure > Resource Allocation
- Verify NIOC is enabled (it should be if you enabled it during VDS creation)
- Configure shares and reservations for each traffic type:
| Traffic Type | Shares | Reservation | Limit |
|---|---|---|---|
| Management | Normal (50) | None | Unlimited |
| vMotion | Normal (50) | None | Unlimited |
| vSAN | High (100) | 2 Gbps | Unlimited |
| NFS Storage | High (100) | 1 Gbps | Unlimited |
| Virtual Machine | High (100) | None | Unlimited |
| Fault Tolerance | Normal (50) | None | Unlimited |
NIOC works by enforcing these allocations only during contention. When bandwidth is available, all traffic types can use the full pipe. When multiple traffic types compete for the same physical uplinks, NIOC ensures each type gets its fair share based on the configured shares and reservations.
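The contention behavior can be sketched as a two-step split: reservations are satisfied first, then the remaining capacity is divided in proportion to shares. This is a simplified model, not VMware's exact scheduler (limits, which cap a traffic type even without contention, are omitted):

```python
# Simplified model of NIOC under contention: each traffic type first gets
# its reservation, then the remaining capacity is split by relative shares.
# This mirrors the table above; it is not VMware's exact algorithm, and
# hard limits are not modeled.

def nioc_allocate(capacity_gbps: float, types: dict) -> dict:
    alloc = {name: t.get("reservation", 0.0) for name, t in types.items()}
    remaining = capacity_gbps - sum(alloc.values())
    total_shares = sum(t["shares"] for t in types.values())
    for name, t in types.items():
        alloc[name] += remaining * t["shares"] / total_shares
    return alloc

# 10 Gbps uplink, traffic types from the table above:
traffic = {
    "management": {"shares": 50},
    "vmotion":    {"shares": 50},
    "vsan":       {"shares": 100, "reservation": 2.0},
    "vm":         {"shares": 100},
}
for name, gbps in nioc_allocate(10.0, traffic).items():
    print(f"{name}: {gbps:.2f} Gbps")
```

With these numbers, vSAN ends up with its 2 Gbps reservation plus a shares-proportional slice of the remaining 8 Gbps, and the four allocations always sum to the full pipe.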
Troubleshooting vSphere Networking with esxcli
For network engineers who prefer the CLI, ESXi provides powerful networking commands via esxcli. SSH into the ESXi host and use these commands for troubleshooting.
View Network Configuration
# List all physical NICs and their status
esxcli network nic list
# Show detailed info for a specific NIC (speed, duplex, driver)
esxcli network nic get -n vmnic0
# List all virtual switches
esxcli network vswitch standard list
esxcli network vswitch dvs vmware list
# List all port groups
esxcli network vswitch standard portgroup list
# Show VMkernel interfaces
esxcli network ip interface list
esxcli network ip interface ipv4 get
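When auditing many hosts, this tabular output is easy to post-process. Here is a sketch that pulls the name, link state, and speed out of `esxcli network nic list` output — the sample text and column positions are assumed from a typical host, so verify them against your own output first:

```python
# Sketch: extract NIC name, link status, and speed from the tabular output
# of "esxcli network nic list". The sample below and its column layout are
# assumed from a typical host; real output has more columns (MAC, MTU, etc.).

SAMPLE = """\
Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex
------  ------------  ------  ------------  -----------  -----  ------
vmnic0  0000:18:00.0  ixgben  Up            Up           10000  Full
vmnic1  0000:18:00.1  ixgben  Up            Down             0  Half
"""

def parse_nic_list(text: str) -> list:
    nics = []
    for line in text.splitlines()[2:]:      # skip header and separator rows
        fields = line.split()
        if len(fields) >= 6:
            nics.append({
                "name": fields[0],
                "link": fields[4],
                "speed_mbps": int(fields[5]),
            })
    return nics

for nic in parse_nic_list(SAMPLE):
    print(nic)
```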
Check Connectivity and Routing
# Ping from a specific VMkernel interface
vmkping -I vmk0 10.1.1.1
# Ping with jumbo frames to test MTU path
vmkping -I vmk0 -d -s 8972 10.1.1.1
# Show routing table
esxcli network ip route ipv4 list
# Show ARP table
esxcli network ip neighbor list
# Check DNS resolution
nslookup vcenter.yourcompany.local
Capture Network Traffic
# Packet capture on a vmnic (physical NIC)
pktcap-uw --uplink vmnic0 -o /tmp/capture_vmnic0.pcap
# Capture on a specific port group
pktcap-uw --switchport 67108867 -o /tmp/capture_pg.pcap
# Capture only VLAN 100 traffic
pktcap-uw --uplink vmnic0 --vlan 100 -o /tmp/vlan100.pcap
# Capture with filters (e.g., only TCP port 443)
pktcap-uw --uplink vmnic0 --proto 0x06 --dstport 443 -o /tmp/https.pcap
# View capture in real-time (text output)
pktcap-uw --uplink vmnic0 --proto 0x01 -o - | tcpdump-uw -enr -
The pktcap-uw utility is ESXi’s built-in packet capture tool. It generates standard pcap files that you can download and analyze in Wireshark.
Jumbo Frames Configuration
Jumbo frames (MTU 9000) improve performance for storage and vMotion traffic by reducing per-packet overhead. However, the entire network path must support jumbo frames for them to work correctly.
Enable Jumbo Frames End-to-End
# Set MTU on the VDS (via vSphere Client):
# VDS > Configure > Settings > Properties > MTU: 9000
# Set MTU on a VMkernel adapter
esxcli network ip interface set -i vmk1 -m 9000
# Verify the MTU setting
esxcli network ip interface list
# Test jumbo frame path (the -d flag prevents fragmentation)
vmkping -I vmk1 -d -s 8972 10.1.1.1
# If this fails, something in the path does not support jumbo frames
# On your Cisco switch:
interface GigabitEthernet1/0/1
mtu 9216
# Note: Cisco uses 9216 to account for Layer 2 overhead
Tip: Start with storage and vMotion interfaces only. Leave management VMkernel and VM port groups at MTU 1500 unless you have a specific reason to change them. This limits the blast radius if there is an MTU path issue.
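The single vmkping test generalizes into a search for the largest payload that survives the path. The logic can be sketched as a binary search; the probe below is a stand-in — on a real host you would shell out to `vmkping -d -s <size>` and check its exit status:

```python
# Sketch: binary-search the largest ICMP payload that survives the path,
# i.e. the logic behind sweeping "vmkping -d -s <size>". The probe is a
# stand-in; on a host you would run vmkping and check its exit status.

ICMP_OVERHEAD = 20 + 8          # IPv4 header + ICMP header = 28 bytes

def path_mtu_payload(probe, lo: int = 0, hi: int = 8972) -> int:
    """Largest payload size for which probe(size) succeeds."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Stand-in for a path where one hop is still at the default MTU 1500:
fake_probe = lambda size: size + ICMP_OVERHEAD <= 1500

payload = path_mtu_payload(fake_probe)
print("max payload:", payload, "=> path MTU:", payload + ICMP_OVERHEAD)
# → max payload: 1472 => path MTU: 1500
```

This also shows where the magic number 8972 comes from: 9000-byte MTU minus the 28 bytes of IP and ICMP headers.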
Security Best Practices for Network Engineers
- Disable Promiscuous Mode: Set to Reject on all port groups unless specifically needed (e.g., for network monitoring appliances or IDS). Promiscuous mode allows a VM to see all traffic on the port group, which is a security risk
- Reject MAC Address Changes: Prevents VMs from changing their MAC address, which could be used for MAC spoofing attacks
- Reject Forged Transmits: Prevents VMs from sending traffic with a source MAC different from what is configured in their virtual NIC
- Use Private VLANs: VDS supports PVLANs (Promiscuous, Isolated, Community) for microsegmentation without NSX
- Separate traffic types physically: Use dedicated NICs for management, vMotion, and storage when possible, rather than relying solely on VLANs and NIOC
- Lock down the ESXi firewall: Use `esxcli network firewall` to restrict which services accept inbound connections
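The first three settings are easy to audit in bulk. Here is a sketch against a plain data model — in practice you would read the same three fields from each distributed port group via the API (e.g., pyVmomi) rather than from a hand-built dictionary:

```python
# Sketch: flag port groups whose security policy is not locked down.
# The data model is illustrative; in practice these three settings come
# from each distributed port group's security policy via the API.

REQUIRED = {
    "promiscuous": "Reject",
    "mac_changes": "Reject",
    "forged_transmits": "Reject",
}

def audit(port_groups: dict) -> list:
    findings = []
    for name, policy in port_groups.items():
        for setting, wanted in REQUIRED.items():
            if policy.get(setting, "Accept") != wanted:
                findings.append(f"{name}: {setting} should be {wanted}")
    return findings

print(audit({
    "DPG-Production-VLAN100": {"promiscuous": "Reject", "mac_changes": "Reject",
                               "forged_transmits": "Reject"},
    "DPG-Monitoring": {"promiscuous": "Accept", "mac_changes": "Reject",
                       "forged_transmits": "Reject"},
}))
# The monitoring port group is flagged; that may be an intentional
# exception for an IDS or packet capture appliance.
```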
Quick Reference: vSphere 8 Network Limits
| Parameter | Maximum |
|---|---|
| VDS per vCenter | 128 |
| Hosts per VDS | 2,000 |
| Port groups per VDS | 10,000 |
| Ports per VDS | 60,000 |
| Uplinks per VDS | 32 |
| LAGs per VDS | 64 |
| NICs per ESXi host | Varies (typically 2-8 for converged, more with expansion cards) |
Related Resources on UnifiedGuru
- vSphere Client Configuration Parameters
- VLAN ID Service Console in ESX
- All VMware Configuration Templates
Conclusion
vSphere 8 continues VMware’s trend of making virtual networking more powerful and more aligned with physical network concepts. For network engineers, the key takeaway is that the Distributed Virtual Switch is your primary tool — it provides centralized management, LACP support, traffic shaping, and the visibility you need to manage virtual networking as confidently as physical infrastructure. Combined with proper physical switch configuration and the esxcli troubleshooting commands covered in this guide, you have everything you need to manage vSphere 8 networking effectively.
Have questions about vSphere networking or want to share your own configurations? Join the discussion in our ESXi/vSphere forum.