Intro to Google Cloud VMware Engine – Common Networking Scenarios, Part 1
This post is the sixth in a series on Google Cloud VMware Engine and Google Cloud Platform. This post covers common networking scenarios, like creating network segments, assigning a public IP to a VM, and load balancing.
Your Google Cloud VMware Engine private cloud initially comes with networking pre-configured, and you don’t need to worry about configuring and trunking VLANs. Instead, any new networking configuration will be done in NSX-T. If you are new to NSX-T, the Google Cloud VMware Engine documentation covers creating new workload segments, which should be your first step before creating or migrating any VMs to your private cloud.
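For reference, creating a workload segment with the NSX-T Terraform provider (which comes up again later in this post) looks roughly like the sketch below. The Tier-1 gateway and transport zone display names are placeholders for whatever your private cloud exposes, so treat this as a starting point rather than a drop-in config.

```hcl
# Minimal sketch of a workload segment using the NSX-T Terraform provider.
# The Tier-1 gateway and transport zone names below are placeholders;
# look up the real names in your private cloud's NSX-T Manager.
data "nsxt_policy_tier1_gateway" "t1" {
  display_name = "Tier1"            # assumed name
}

data "nsxt_policy_transport_zone" "overlay" {
  display_name = "TZ-OVERLAY"       # assumed name
}

resource "nsxt_policy_segment" "workload" {
  display_name        = "workload-83"
  connectivity_path   = data.nsxt_policy_tier1_gateway.t1.path
  transport_zone_path = data.nsxt_policy_transport_zone.overlay.path

  subnet {
    # Gateway address for the 192.168.83.0/24 segment used in this post
    cidr = "192.168.83.1/24"
  }
}
```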
This diagram represents the initial setup of my Google Cloud VMware Engine environment, and I will be building on this example over the following sections. If you’ve been following along with this blog series, this should look familiar. You can see a “Customer Data Center” on the left, which in my case is a lab, but it could be any environment connected to GCP via Cloud VPN or Cloud Interconnect. There is also a VPC peered with my private cloud, which is where my bastion host is running.
I’ve created a workload segment, 192.168.83.0/24, and connected three Ubuntu Linux VMs to it. A few essential steps must be completed outside of NSX-T when new segments are created while using VPC peering or dynamic routing over Cloud VPN or Cloud Interconnect.
First, you must have Import/export custom routes enabled in private service access for the VPC peered with your private cloud. Custom routes are covered in my previous post, Connecting a VPC to Private Cloud. Notice that my newly created segment shows up under Imported Routes.
Second, any workload segments must be added as custom IP ranges on any Cloud Router participating in BGP peering, so that the routes are advertised back to your environment. This applies to both Cloud Interconnect and Cloud VPN, where BGP provides dynamic routing. More information can be found here.
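Here is a sketch of what that custom advertisement looks like on a Cloud Router managed with the Google Terraform provider. The router name, VPC, region, and ASN are placeholders; the relevant part is advertise_mode = "CUSTOM" plus the advertised_ip_ranges entry for the workload segment.

```hcl
# Sketch: advertise the NSX-T workload segment from a Cloud Router used for
# Cloud VPN / Cloud Interconnect BGP sessions. Names and ASN are placeholders.
resource "google_compute_router" "onprem" {
  name    = "onprem-router"          # assumed name
  network = "my-vpc"                 # assumed VPC
  region  = "us-west2"               # assumed region

  bgp {
    asn            = 64514           # assumed private ASN
    advertise_mode = "CUSTOM"

    # Keep advertising the VPC subnets...
    advertised_groups = ["ALL_SUBNETS"]

    # ...and add the GCVE workload segment as a custom range.
    advertised_ip_ranges {
      range       = "192.168.83.0/24"
      description = "GCVE workload segment"
    }
  }
}
```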
NSX-T has an excellent Terraform provider, and I have already covered several GCP Terraform examples in previous posts. My recommendation is to add new NSX-T segments and the corresponding custom subnet advertisements on your Cloud Routers via Terraform in the same workflow. That way, you will never forget to update your Cloud Router advertisements after adding a new segment.
Let’s add an application into the mix. I have a test webserver running on VM1 that I want to expose to the internet.
In Google Cloud VMware Engine, public IPs are not assigned directly to a VM. Instead, public IPs are allocated through the portal and assigned to the private IP of the relevant VM. This creates a simple destination NAT from the allocated public IP to the internal private IP.
Browse to Network > Public IPs and click Allocate to allocate a public IP. You will be prompted to supply a name and the region for the public IP. Click Submit, and you will be taken back to the Public IPs page. This page will now show the public IP that has been allocated. The internal address it is assigned to is listed under the Attached Address column.
You can find more information on public IPs in the Google Cloud VMware Engine documentation.
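The portal workflow above is what this post uses, but for completeness: newer versions of the Google Terraform provider expose a google_vmwareengine_external_address resource that models the same allocation. The field names below reflect my understanding of that resource and the private cloud reference is a placeholder, so double-check the provider documentation before relying on it.

```hcl
# Hedged sketch: allocate a VMware Engine public IP and attach it to VM1's
# private address. The parent value must be the full resource name of your
# private cloud; the one below is a placeholder.
resource "google_vmwareengine_external_address" "web" {
  name        = "web-public-ip"
  parent      = "projects/my-project/locations/us-west2-a/privateClouds/my-private-cloud"
  internal_ip = "192.168.83.11"      # VM1's private IP (assumed)
  description = "Public IP for the test webserver on VM1"
}
```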
Creating Firewall Rules
Google Cloud VMware Engine also includes a firewall beyond the NSX-T boundary, so it will need to be configured to allow access to the public IP that was just allocated. To do this, browse to Network > Firewall tables and click Create new firewall table. Provide a name for the firewall table and click Add Rule.
Configure the rule to allow the desired traffic, choosing Public IP as the destination. Choose the newly allocated public IP from the dropdown, and click Done.
The new firewall table will be displayed. Click Attached Subnets, then Attach to a Subnet. This will attach the firewall table to a network.
Choose your SDDC along with System management from the Select a Subnet dropdown, and click Save. System management is the correct subnet to use when applying the firewall table to traffic behind NSX-T, per the Google Cloud VMware Engine documentation.
I am now able to access my test webserver via the allocated public IP. Huzzah! More information on firewall tables can be found in the Google Cloud VMware Engine documentation.
Now that the test webserver is working as expected, it’s time to implement a load balancer in NSX-T. Keep in mind that GCP also has a native load balancing service, but that is beyond the scope of this post.
Public IPs can be assigned to any private IP, not just IPs assigned to VMs. For this example, I’ll configure the NSX-T load balancer and move the previously allocated public IP to the load balancer VIP. There are several steps needed to create a load balancer, so let’s dive in.
The first step is to create a new load balancer via the Load Balancing screen in NSX-T Manager. Provide a name, choose a size, and select the tier-1 router that will host the load balancer. Click Save.
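If you are mirroring these steps with the NSX-T Terraform provider, the load balancer service itself looks roughly like the sketch below, reusing the Tier-1 gateway data source from the segment sketch earlier. This is a sketch, not a verified configuration.

```hcl
# Sketch: the load balancer service, attached to a Tier-1 gateway.
# "SMALL" matches the smallest size offered in the UI.
resource "nsxt_policy_lb_service" "lb" {
  display_name      = "web-lb"
  connectivity_path = data.nsxt_policy_tier1_gateway.t1.path  # defined in the segment sketch above
  size              = "SMALL"
}
```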
Now, expand the Virtual Servers section and click Set Virtual Servers.
This is where the virtual server IP (VIP) will be configured, along with a backing server pool. Provide a name and an internal IP for the VIP. I used an IP that lives in the same segment as my servers, but you could create a dedicated segment for your VIP. Click the dropdown under Server Pool and click Create New.
Next, provide a name for your server pool and choose a load balancing algorithm. Click Select Members to add VMs to the pool.
Click Add Member to add a new VM to the pool, and provide its internal IP and port. Rinse and repeat until you’ve added all of the relevant VMs to your server pool, then click Apply.
You’ll be taken back to the server pool screen, where you can add a monitor to check the health of the VMs in your pool. Click Set Monitors to choose a monitor.
My pool members are running a simple webserver on port 80, so I’m using the default-http-lb-monitor. After choosing the appropriate monitor, click Apply.
Review the settings for the VIP and click Close.
Finally, click Save to apply the new settings to your load balancer.
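The virtual server is what ties the VIP, the pool, and the load balancer service together. A sketch, assuming the service and pool resources from the earlier snippets and the provider’s built-in HTTP application profile lookup (the profile name is an assumption):

```hcl
# Sketch: the HTTP virtual server (VIP) on the load balancer.
data "nsxt_policy_lb_app_profile" "http" {
  type         = "HTTP"
  display_name = "default-http-lb-app-profile"   # assumed name of the built-in profile
}

resource "nsxt_policy_lb_virtual_server" "web" {
  display_name             = "web-vip"
  ip_address               = "192.168.83.100"    # VIP on the workload segment (assumed)
  ports                    = ["80"]
  service_path             = nsxt_policy_lb_service.lb.path
  pool_path                = nsxt_policy_lb_pool.web.path
  application_profile_path = data.nsxt_policy_lb_app_profile.http.path
}
```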
The last step is to browse to Network > Public IPs in the portal and edit the existing public IP allocation. Update the name as appropriate, and change the attached local address to the load balancer VIP. No firewall rules need to be changed, since the traffic is coming in over the same port (tcp/80).
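If the public IP were managed with the hedged google_vmwareengine_external_address sketch shown earlier, this step would simply be changing internal_ip from VM1’s address to the VIP:

```hcl
# Sketch: re-point the existing public IP allocation at the NSX-T VIP.
resource "google_vmwareengine_external_address" "web" {
  name        = "web-public-ip"
  parent      = "projects/my-project/locations/us-west2-a/privateClouds/my-private-cloud"
  internal_ip = "192.168.83.100"   # was VM1's IP, now the load balancer VIP (assumed)
  description = "Public IP for the load-balanced web pool"
}
```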
Browsing to the allocated public IP and pressing refresh a few times shows that our load balancer is working as expected!
That’s it for this post! Part 2 will cover accessing cloud-native services, viewing routing information, VPN connectivity, DNS notes, and more.