Creating VLAN-Backed Port Groups in Oracle Cloud VMware Solution


Because Oracle Cloud VMware Solution (OCVS) is a fully customer-managed implementation of the VMware Cloud Foundation stack, hosted on Oracle’s global Cloud Infrastructure platform, it’s arguably the Hyperscaler VMware offering that is most like your existing on-premises deployment. However, unless your on-prem deployment is on an “OCI Dedicated Region Cloud @ Customer” it’s unlikely that your network underlay is quite the same as the one underpinning OCVS. In this post, we’ll see one of the ways this shows up, and how you can work around it.

The background

OCVS is built on top of Oracle Cloud Infrastructure (OCI) and inherits many of its capabilities, as well as a few new ones, most notably "VLANs." VLANs in OCI are a bit like "Subnets." In OCI, Subnets get Layer 3 attributes like IP addresses, gateways, subnet masks, etc., but they also get a Route Table and Security List(s).

You can think of it as if every subnet is terminated on a little router protected by a security guard with a list. You know… Sorry, you’re not on the list, you’re not coming in!

VLANs get those same Layer 3 attributes, but they also get some Layer 2 ones. Running on top of the OCI Software Defined Network underlay (which is usually invisible to end users), VLANs are software-defined constructs. They have a VLAN ID as you’d expect, but that ID is local to the VCN that they’re in, and doesn’t really exist in the traditional sense. You might see the same ID in other tenants or VCNs, but it’s not the same VLAN.

OCI VLANs support enough Layer 2 services to make vSphere and vSAN work. They're not fully featured in the way that, say, a VLAN on a traditional network switch would be. When we stand up an OCVS Software-Defined Data Center running vSphere 6.5 or 6.7, there are eight VLANs created in the parent VCN's network layer; for vSphere 7.0* that goes up to ten. Not that it matters here, but those VLANs are:

1. NSX Edge Uplink 1: Used for the T0 uplinks to OCI.
2. NSX Edge Uplink 2: (Not currently used.)
3. NSX Edge VTEP: Underlay for Edge Node VMs on Host Transport Nodes.
4. NSX VTEP: Underlay for Host Transport Nodes.
5. vMotion: Err… vMotion.
6. vSAN: vSAN replication etc.
7. vSphere: vCenter, NSX Mgr, HCX Mgr, etc. (not the hosts, they are on the Provisioning Subnet).
8. HCX: "Uplink" VLAN for the HCX Interconnect Appliance and Layer 2 Extension Appliances.
9. Replication*: Used for vSphere replication.
10. Provisioning*: Used for provisioning activities.

(* Added for vSphere 7.0 SDDCs.)

If you're looking for ESXi host management, that's on a separate L3 Subnet used for Provisioning, iSCSI boot, and host management.

Each SDDC's VLANs are assigned VLAN IDs (not the 1-10 I used in the list above) from the usual VLAN range. It's worth noting that these IDs are local to each VCN, so although you may see the same IDs duplicated in different VCNs, they aren't the same VLANs. This is where one of our challenges starts. In vCenter, even the one in OCVS, you can create new Port Groups which are backed by VLANs in the physical underlay. In a traditional datacenter or most on-premises deployments, this underlay will be the network switches sitting in racks (or occasionally under desks, propped up on boxes or plastic crates).

Adding a VLAN ID to a new Port Group within vCenter will usually work fine for VMs on the same host as each other. However, depending on the type of network hardware or organizational security policy, adding a VLAN tag to an ESXi host and having the network actually do anything with that tag might not be the same thing.

In the image above, we're creating a new Distributed Port Group and tagging traffic leaving the host with VLAN ID "113". If we have two VMs in this Port Group on the same host, their traffic won't leave the host, so they should be able to communicate with each other. If we move both VMs to another host, then as long as they're both on that host, their traffic won't leave the new host either, and they will still be able to talk to each other.

If we (or the vSphere Distributed Resource Scheduler (DRS)) were to move one of the VMs to another host, then things would change. Now, the ESXi host with the source VM will tag the traffic with our VLAN ID of 113 and send it out to the physical network in the hope that it will be delivered to the "other" host. In most datacenters, the switch ports connected to the NICs on the ESXi hosts are configured as "Trunk" ports in order to pass VLAN-tagged traffic to and from the ESXi host. Tagged traffic received by the ESXi host will be directed to the VMs connected to the matching Port Group, and tagged traffic received by the switch from the host will be sent to other switch ports connected to that VLAN, or to Trunk ports which carry either "All VLANs" or a subset of them that includes the tag in question.

The back story

All good, but what does this have to do with Oracle Cloud VMware Solution? Well, we recently had a customer try to duplicate their on-premises application on OCVS. It should have worked as-is without any changes, and mostly it did. But sometimes it didn't, and the customer asked for help. The problem was that their automation script created new Port Groups on the existing distributed switch, and then deployed VMs to that Port Group. That all worked, but with a catch. In their on-prem environment, the hosts' ports were VLAN trunks, and the upstream switch ports were configured for "All VLANs". That meant any VLAN tag coming from the Port Group membership was honored and sent to all other trunk ports which were also configured for "All VLANs". But that wasn't the case within OCVS, and it meant that any time the VMs in one Port Group landed on different hosts, they were unable to talk to each other.

The OCI underlay behaved like a traditional switch would if its trunk ports were configured to only carry a list of permitted VLAN tags. In the OCI case, it was because there was no VLAN matching the newly created Port Group's VLAN ID, so OCI had no way to know which other Compute Instances (in our case, OCVS ESXi hosts) were interested in the traffic.

Fixing the problem

Our problem can be solved by ensuring the OCI underlay knows about our new VLAN. This post isn't a VLAN creation tutorial, but we'll cover the main steps. The first is using "Create VLAN" in the OCI Console to, err… create a new VLAN, one with the new Port Group's VLAN ID. The second is ensuring all the ESXi hosts are connected to that VLAN.

You can see the “Create VLAN” step here:

[Image: creating a VLAN in the OCI Console]

In the screenshot above, we're creating a VLAN called vlan-sandbox-01 with a VLAN ID of 60, assigning it the subnet 10.60.60.0/24, attaching it to a Route Table, and, although we haven't got that far in this picture, attaching a Network Security Group. Remember, a VLAN is like a Layer 3 subnet but with Layer 2 capabilities, so it does need all that L3 stuff too.
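If you'd rather script this step than click through the Console, the same create can be driven through the OCI APIs. Here's a minimal sketch using the OCI Python SDK; the compartment, VCN, route table and NSG OCIDs, and the availability domain name, are placeholders you'd swap for your own values.

```python
# A sketch of the "Create VLAN" step using the OCI Python SDK (pip install oci).
# The OCIDs and availability domain below are placeholders; the other values mirror
# the screenshot (vlan-sandbox-01, VLAN ID 60, 10.60.60.0/24, a route table, an NSG).
import oci

config = oci.config.from_file()                      # reads ~/.oci/config by default
network = oci.core.VirtualNetworkClient(config)

details = oci.core.models.CreateVlanDetails(
    display_name="vlan-sandbox-01",
    vlan_tag=60,                                     # must match the Port Group's VLAN ID
    cidr_block="10.60.60.0/24",
    compartment_id="ocid1.compartment.oc1..example",
    vcn_id="ocid1.vcn.oc1..example",
    availability_domain="Uocm:EU-FRANKFURT-1-AD-1",  # placeholder; must match the SDDC's AD
    route_table_id="ocid1.routetable.oc1..example",
    nsg_ids=["ocid1.networksecuritygroup.oc1..example"],
)

vlan = network.create_vlan(details).data
print(vlan.id, vlan.lifecycle_state)
```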

When you create a VLAN, remember that it is (currently) local to a single Availability Domain. So, make sure you create the VLAN in the same AD as your SDDC. It will still create successfully if you pick the wrong AD, but you won't find your VLAN in the list when you go to attach it to your SDDC hosts in the next steps.
I'm not saying that's what I did when collecting these screenshots, just err… offering it as a useful tip…

Adding the new VLAN to our hosts is a two-stage process. At the vSphere end, things are pretty much the same as they would be in a regular on-prem deployment, so we'll come back to that in a moment. First though, now that we have the VLAN in OCI, we need to tell the Oracle Cloud underlay which hosts we want attached to it. We do this in a way you may find counterintuitive if you're used to traditional network switches. That's because Oracle Cloud uses "off-box" virtualization to create the network topology independently of the workloads connected to it, which allows bare-metal hosts to sit next to native OCI VMs and other network citizens without each needing a virtualization client.

To tell OCI to present the VLAN to a host, we "create a VNIC" on the host and attach that VNIC to the VLAN, as you can see here:

[Image: creating and attaching a VNIC]

We chose the "Advanced Setup: VLAN" option and picked the vlan-sandbox-01 VLAN name from the list. Annoyingly, hidden behind the dropdown is another selection item that we can't see. It's here that we choose which of the host's two physical NICs we want to present this new virtual NIC to. Here's what that looks like:

[Image: allocate new virtual NIC]

On a vSphere 6.x host we would attach the VLAN to NIC 0 only. That's because there, NIC 1 is dedicated to the GENEVE traffic within NSX-T. For vSphere 7.x we need to go through this step twice, adding one vNIC, with our VLAN, to each of the host's NICs, 0 and 1. That's because, as of vSphere 7, OCVS uses the Converged VDS, which allows both host NICs to be used for VLAN and/or GENEVE backed Port Groups. We need to repeat this process for each of the hosts in our cluster. Here's the end result viewed from our v6.7 vCenter.

[Image: DSwitch in the v6.7 vCenter]

Here you can see our Sandbox-01 Port Group in the list and in the Topology image. As this is a three-node cluster running vSphere 6.7, we’re only attached to one physical NIC per host and we see “(3 NIC Adapters)” in the uplink set for “DSwitch”.
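Attaching the VLAN to every host by hand gets old quickly, so here's a hedged sketch of that same attach step using the OCI Python SDK. The instance OCIDs are placeholders, and, per the note above, you'd use nic_index 0 only on a vSphere 6.x SDDC and both 0 and 1 on 7.x.

```python
# A sketch of the "create a VNIC" step with the OCI Python SDK: present the new VLAN
# to each ESXi host in the cluster. The instance and VLAN OCIDs are placeholders.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

vlan_ocid = "ocid1.vlan.oc1..example"                # the VLAN created earlier
esxi_host_ocids = [
    "ocid1.instance.oc1..host1",
    "ocid1.instance.oc1..host2",
    "ocid1.instance.oc1..host3",
]
nic_indexes = [0, 1]                                 # use [0] on a vSphere 6.x SDDC

for host_ocid in esxi_host_ocids:
    for nic_index in nic_indexes:
        attachment = compute.attach_vnic(
            oci.core.models.AttachVnicDetails(
                instance_id=host_ocid,
                nic_index=nic_index,                 # which physical NIC on the BM host
                create_vnic_details=oci.core.models.CreateVnicDetails(
                    display_name=f"vlan-sandbox-01-nic{nic_index}",
                    vlan_id=vlan_ocid,               # attach to the VLAN, not a subnet
                ),
            )
        ).data
        print(host_ocid, nic_index, attachment.lifecycle_state)
```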

If you want to learn more about VLANs and Layer 2 networking in OCVS, you can check out the Oracle Docs here.

Now that’s magic…

Did you see the sleight of hand there? No? No, me neither, first time around. A couple of pictures up, we were creating a new virtual NIC on our hosts. Thinking in traditional networking terms, perhaps if you've worked on compute blades that virtualize physical network hardware into virtual server NICs, you might expect to see those NICs, without their VLAN tag, as pNICs on the hosts. That would have meant creating new dvSwitches with untagged Port Groups and attaching the new virtual NICs as the switch uplinks. But we didn't do that!

Cleverly, creating those vNICs on the hosts in OCI didn't create NICs we could see from vSphere. Instead, what it did was present our VLAN 60 as an 802.1Q trunk member to the host. This meant we could simply tag our new Port Group on the existing vSwitch with VLAN 60, and we're all set. Now, as Paul Daniels used to say… "that's magic…"
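If, like our customer, you drive the vSphere side from a script, that tagging step is easy to automate too. Here's a minimal pyVmomi sketch (not the customer's actual code) that creates a "Sandbox-01" distributed Port Group backed by VLAN 60; the vCenter address, credentials, and the "DSwitch" name are placeholders for your own environment.

```python
# Minimal pyVmomi sketch: create a distributed Port Group tagged with VLAN 60.
# The vCenter address, credentials and "DSwitch" name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_dvs(content, name):
    """Return the distributed switch with the given name, or None."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    try:
        return next((dvs for dvs in view.view if dvs.name == name), None)
    finally:
        view.Destroy()

ssl_ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl_ctx)
try:
    dvs = find_dvs(si.RetrieveContent(), "DSwitch")
    if dvs is None:
        raise RuntimeError("Distributed switch not found")

    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg_spec.name = "Sandbox-01"
    pg_spec.type = "earlyBinding"
    pg_spec.numPorts = 32
    pg_spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    pg_spec.defaultPortConfig.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=60, inherited=False)  # the VLAN ID we created in OCI

    WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
finally:
    Disconnect(si)
```

Note that this only tags a Port Group on the existing DSwitch; we never create new uplinks, because the vNICs OCI created don't appear in vSphere at all.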

Wrapping things up

Our customer was building their environments on top of OCVS using the same automation tools they had used for their on-premises version. Because of that, it wasn't a big job to add an additional step which, once the script had selected the new VLAN ID, called the OCI platform APIs to create the new VLAN and add it to the ESXi hosts. Simple… at least it was once the customer knew what the problem was, and now, you do too.

So, that’s it? Problem solved? Well, kinda…

There are a couple of things to consider here, starting with maintaining the same process the customer used on-premises. Clearly, that's an easier route than being forced to make changes as you adopt a public cloud service. As you can see though, in OCVS, although the SDDC is just the same as the customer's on-prem one, the infrastructure layers aren't. We need to think about this when migrating workloads to the Cloud. This isn't always a bad thing though. Sometimes we can take advantage of the capabilities of the Cloud environment. Here, for example, the customer could have changed their automation to use NSX-T logical segments, which would have greatly simplified the infrastructure deployment part of their build.

VLANs were a great way to maintain the on-prem model, but, as the benefits of NSX have been with us for many years now, ideally we'd have been moving an on-prem NSX-V (or maybe NSX-T) model rather than the more limited VLAN one. This customer used a pool of VLAN IDs which the automation would pick from. In OCI, as we've seen, VLANs are presented to ESXi compute hosts as vNICs, and there's currently a limit of 52 per host. With the Converged VDS in vSphere 7.x, every VLAN consumes a vNIC on each of the two host NICs, so that's a maximum of 26 VLANs (2x 26 vNICs). In practice, we're already using 2x 10 of them for the SDDC's own VLANs, plus a pair for the host management Subnet, which leaves 30 vNICs, or roughly 15 more VLANs. So while there are some left, there aren't a lot to play with. We're also working on some cool high-performance use cases which might steal a couple more, so we don't really want to run out.

As we hinted at earlier, VLANs make use of Network Security Groups (NSGs), which you can read about on the Oracle Docs page here. As with any security control, if your environment is constantly changing, switching from a traditional security model to one which more closely matches your "application" layout rather than your subnet/network layout can be a big help. Using NSGs within the OCI layer, or dynamic inclusion policies in Groups within the NSX-T firewall layer, can greatly simplify your security policies and ensure they are applied consistently.
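To make that concrete, here's a hedged, illustrative OCI Python SDK sketch (not something the customer deployed) of an NSG ingress rule whose source is another NSG rather than a subnet CIDR; both NSG OCIDs are placeholders. Rules like this are what let the policy follow the application layout rather than the network layout.

```python
# Illustrative sketch: allow TCP/443 into an "app tier" NSG from members of a
# "web tier" NSG, rather than from a subnet CIDR. OCIDs are placeholders.
import oci

config = oci.config.from_file()
network = oci.core.VirtualNetworkClient(config)

network.add_network_security_group_security_rules(
    network_security_group_id="ocid1.networksecuritygroup.oc1..app-tier",
    add_network_security_group_security_rules_details=oci.core.models.AddNetworkSecurityGroupSecurityRulesDetails(
        security_rules=[
            oci.core.models.AddSecurityRuleDetails(
                direction="INGRESS",
                protocol="6",                                    # TCP
                source_type="NETWORK_SECURITY_GROUP",
                source="ocid1.networksecuritygroup.oc1..web-tier",
                tcp_options=oci.core.models.TcpOptions(
                    destination_port_range=oci.core.models.PortRange(min=443, max=443)
                ),
                description="Allow web tier to reach app tier on 443",
            )
        ]
    ),
)
```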

So, all in all, OCVS is a great place to migrate on-premises workloads to without needing to modify them (at least in the vSphere layers), but sometimes a new platform can be an ideal time to look at ways to improve what we already have.

Closing credits

I'd like to thank Richard Garsthagen and Adeel Amin over at Oracle Cloud for their help with this post. The OCI/OCVS insights I gained through my many questions and numerous "can I just check something?" moments are theirs; the mistakes and typos, as always, are mine…

If you have any questions about the content of this post, or about Oracle Cloud VMware Solution, please drop them in the comments below.




