A Practical Guide for Your Proof of Concept for VMware Cloud Foundation and VMware Tanzu
April 22, 2021
Authors: Shrikant S, Vimal P, Aman B, Dinesha S & Brian L
- Introduction
- Minimum Hardware Requirements
- Sizing Compute and Storage Resources for vSphere 7 with Tanzu
- Software Requirements
- Networking Requirements
- External Service Requirements
- Solution Design Overview
- Installation Workflow
- Enabling the vSphere 7 with Tanzu Solution on VMware Cloud Foundation
VMware Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX), and cloud management and monitoring (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud. VMware Cloud Foundation 4.0, which ships with vSphere 7, supports building modern applications on this infrastructure with native support for Kubernetes.
vSphere was primarily an infrastructure-centric platform, and workloads were simply VMs. Modern applications, however, are a combination of containers, services, VMs, and more. The Kubernetes service on vSphere lives under VMware Tanzu and is the “Run” component of Build, Run, and Manage. It allows vSphere to become an application-centric platform by delivering Kubernetes natively in vSphere. This means enterprises can now accelerate the development and operation of modern apps on VMware vSphere while continuing to take advantage of existing investments in technology, tools, and skill sets.
The purpose of this document is to provide partners with a summary of requirements for deploying VMware vSphere 7 with Tanzu on VMware Cloud Foundation, so that they can plan, coordinate, and prepare infrastructure resources for a functional evaluation of the platform and its supported integrations.

Minimum Hardware Requirements for the Management Cluster
| Components | Requirements |
| --- | --- |
| Servers | 7 vSAN ReadyNodes. For information on compatible vSAN ReadyNodes, see the VMware Compatibility Guide. |
| CPU Cores Per Host | Aligns with minimum requirements for vSAN ReadyNodes. For more information, refer to the VMware vSAN Documentation. |
| Memory Per Host | 256 GB |
| Shared Datastore | Aligns with minimum requirements for vSAN ReadyNodes. For more information, refer to the VMware vSAN Documentation. |
| NICs Per Host | Two 10 GbE (or faster) NICs |
Sizing Compute and Storage Resources for vSphere 7 with Tanzu
Compute and storage requirements for each component are key considerations when sizing the solution.
The table below summarizes the compute and storage requirements for the vSphere 7 with Tanzu management workloads, Tanzu Kubernetes Grid cluster management workloads, VMware NSX-T Edge appliances, and tenant workloads deployed within either the Supervisor Cluster or a Tanzu Kubernetes Grid cluster.
| Virtual Machine | vCPU | Memory | Storage | No. per Deployment |
| --- | --- | --- | --- | --- |
| Supervisor Cluster control plane (small nodes – up to 2,000 pods per Supervisor Cluster) | 4 vCPU | 16 GB | 200 GB (total) | 3 |
| TKG cluster control plane (small nodes) | 2 vCPU | 4 GB | 16 GB | 3 per TKG cluster |
| TKG cluster worker nodes (small nodes) | 2 vCPU | 4 GB | 16 GB | 3 per TKG cluster |
| VMware NSX-T Edge appliance | 8 vCPU | 32 GB | 200 GB | Minimum 2 |
| vSphere registry service | 7 vCPU | 7 GB | 200 GB | 1 |
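As a quick sanity check, the table values can be rolled up into an aggregate footprint for a minimal deployment. The sketch below is illustrative only; it assumes two NSX-T Edge appliances, one registry service, and a single three-control-plane/three-worker TKG cluster, all using the small sizes above.

```python
# Rough footprint estimate for a minimal vSphere 7 with Tanzu deployment,
# using the per-VM sizes from the table above (small nodes, minimum counts).
# Adjust the counts to match your own PoC design.

components = [
    # (name, count, vCPU, memory_GB, storage_GB)
    ("Supervisor control plane (small)", 3, 4, 16, 0),   # storage is 200 GB total, added below
    ("TKG control plane (small)",        3, 2, 4, 16),
    ("TKG worker (small)",               3, 2, 4, 16),
    ("NSX-T Edge appliance",             2, 8, 32, 200),
    ("vSphere registry service",         1, 7, 7, 200),
]

total_vcpu = sum(count * vcpu for _, count, vcpu, _, _ in components)
total_mem = sum(count * mem for _, count, _, mem, _ in components)
total_storage = sum(count * disk for _, count, _, _, disk in components)
total_storage += 200  # Supervisor control plane storage is 200 GB in total

print(f"Total vCPU:    {total_vcpu}")
print(f"Total memory:  {total_mem} GB")
print(f"Total storage: {total_storage} GB")
```

For the assumptions above this works out to roughly 47 vCPUs, 143 GB of memory, and just under 900 GB of storage, before any tenant workloads are added.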
VMware Software and License Requirements for the Management Cluster
| Product | Supported Version | Required/Optional | Download Location |
| --- | --- | --- | --- |
| Cloud Builder VM | Please refer to the latest release notes | Required | VMware Product Downloads |

| VMware Software Licenses | Supported Version | Required/Optional |
| --- | --- | --- |
| SDDC Manager | Please refer to the latest release notes | Required |
| VMware vSphere 7 Enterprise Plus with Add-on for Kubernetes | Please refer to the latest release notes | Required |
| VMware vCenter Server | Please refer to the latest release notes | Required |
| VMware vSAN | Please refer to the latest release notes | Required |
| VMware NSX-T | Please refer to the latest release notes | Required |
Networking Requirements
The physical switch infrastructure must honor an end-to-end MTU of at least 1600 bytes on any potential path between the ESXi host NICs connected to the tunnel (overlay) network, including cross-fabric connections. This is a hard requirement, mandatory for system installation. As a best practice, configure an MTU of 9000 (jumbo frames).
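One quick way to validate this before bring-up is to run don't-fragment pings between host tunnel endpoints. The sketch below only prints the vmkping commands to run on each ESXi host; the VMkernel interface and peer TEP address are placeholders, and it assumes the TEP interfaces use the vxlan netstack.

```python
# Print don't-fragment vmkping commands to validate end-to-end MTU between
# host tunnel endpoints. Payload size = MTU - 28 bytes (20 IP + 8 ICMP).
# The interface name and peer TEP IP below are placeholders for your setup.

MTUS = [1600, 9000]          # required minimum and recommended jumbo MTU
PEER_TEP = "192.168.130.12"  # hypothetical TEP address of another ESXi host
VMK_IF = "vmk10"             # hypothetical TEP VMkernel interface

for mtu in MTUS:
    payload = mtu - 28  # subtract IP and ICMP header overhead
    print(
        f"vmkping ++netstack=vxlan -d -s {payload} -I {VMK_IF} {PEER_TEP}"
        f"   # verifies a {mtu}-byte path without fragmentation"
    )
```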
VLAN Requirements
| Function | VLAN Required | MTU |
| --- | --- | --- |
| Management Network | Yes | |
| vMotion Network | Yes | |
| vSAN Network | Yes | |
| NSX-T Host Overlay (DHCP) | Yes | |
| NSX-T Edge Uplink 1 | Yes | |
| NSX-T Edge Uplink 2 | Yes | |
| NSX-T Edge Overlay | Yes | |
| Function | Routable | Quantity | Notes |
| --- | --- | --- | --- |
| Management Network Subnet | Yes | As per the deployment (see notes) | One IP address per host VMkernel adapter. One IP address for the vCenter Server Appliance. Four IP addresses for NSX Manager (three nodes plus one virtual IP when clustering). Five IP addresses for the Kubernetes control plane. |
| Static IPs for Kubernetes control plane VMs | Yes | Block of 5 | The Supervisor control plane VMs require a total of 5 VLAN-based IPs (3 for the Supervisor VMs, 1 for the Supervisor VIP, and 1 for lifecycle management). |
| Ingress Pool | Yes (overlay) | /27 of static IP addresses | Ingress CIDRs are overlay-based and routable. The minimum is a /27 CIDR. |
| Egress Pool | Yes (overlay) | /27 of static IP addresses | Egress CIDRs are overlay-based and routable. The minimum is a /27 CIDR. |
| vSphere Pod CIDR range (Pod Pool) | No | /24 of private IP addresses | A private CIDR range that provides IP addresses for vSphere Pods. The Pod CIDRs do not need to be routable. |
| Kubernetes Services CIDR range (Service Pool) | No | /16 of private IP addresses | A private CIDR range used to assign IP addresses to Kubernetes Services. The Service CIDRs do not need to be routable. |
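For planning purposes, Python's ipaddress module makes it easy to confirm how many addresses each pool actually yields. The prefixes below mirror the minimums in the table; the example networks themselves are placeholders.

```python
import ipaddress

# Example (placeholder) networks sized to the minimums in the table above.
pools = {
    "Ingress (routable, overlay)":   ipaddress.ip_network("10.20.30.0/27"),
    "Egress (routable, overlay)":    ipaddress.ip_network("10.20.31.0/27"),
    "vSphere Pod CIDR (private)":    ipaddress.ip_network("10.244.0.0/24"),
    "Kubernetes Services (private)": ipaddress.ip_network("10.96.0.0/16"),
}

for name, net in pools.items():
    print(f"{name:32} {net}  ->  {net.num_addresses} addresses")

# A /27 yields 32 addresses for ingress/egress VIPs and SNAT IPs,
# a /24 yields 256 pod addresses, and a /16 yields 65,536 service addresses.
```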
External Services
A variety of external services are required for the initial deployment of Cloud Foundation.
| Service | Purpose |
| --- | --- |
| Domain Name Services (DNS) | Provides forward and reverse name resolution for the various components in the solution. |
| Network Time Protocol (NTP) | Synchronizes time between the various components. |
| Dynamic Host Configuration Protocol (DHCP) | Provides automated IP address allocation for the NSX-T host overlay tunnel endpoints (TEPs). |
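Forward and reverse DNS problems are a common cause of failed bring-ups, so it is worth checking records before starting Cloud Builder. The hostnames below are placeholders; substitute the FQDNs from your own deployment parameter workbook.

```python
import socket

# Placeholder FQDNs - replace with the names from your deployment workbook.
FQDNS = [
    "sfo-vcf01.rainpole.io",     # SDDC Manager (example name)
    "sfo-m01-vc01.rainpole.io",  # Management vCenter Server (example name)
    "sfo-m01-nsx01.rainpole.io", # NSX Manager VIP (example name)
]

for fqdn in FQDNS:
    try:
        ip = socket.gethostbyname(fqdn)        # forward lookup
        reverse = socket.gethostbyaddr(ip)[0]  # reverse lookup
        ok = "OK" if reverse.lower().startswith(fqdn.split(".")[0]) else "MISMATCH"
        print(f"{fqdn:30} -> {ip:15} -> {reverse} [{ok}]")
    except (socket.gaierror, socket.herror) as err:
        print(f"{fqdn:30} -> lookup failed: {err}")
```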
Solution Overview

Installation Workflow Overview

Steps for Enabling vSphere 7 with Tanzu on VCF
- Deploy Management Domain
- Deploy the VMware Cloud Builder appliance on a suitable platform. This can be on a laptop under VMware Workstation or VMware Fusion, or on an ESXi host. The VMware Cloud Builder appliance must have network access to all hosts on the management network. Follow the procedure here to deploy the VMware Cloud Builder appliance on an ESXi host.
- Initiate the Cloud Foundation bring-up process; detailed instructions can be found here

- Using SDDC Manager, deploy a VI workload domain; all the detailed steps can be found here
- Deploy a VMware NSX Edge™ cluster in the management domain
- Add FQDNs for the edge transport nodes
- Deploy the NSX Edge cluster using the SDDC Manager
Log in to the SDDC Manager. From the dashboard, navigate to Workload Domains, select mgmt-domain, and under ACTIONS, select Add Edge Cluster. The wizard requires the following inputs:
- FQDN for the edge transport nodes (ensure that DNS records have been added)
- IP addresses for the edge transport node management network
- VLAN ID and IP addresses for the edge transport node tunnel endpoint IPs (TEPs)
- VLAN ID and IP addresses for the two uplink networks
- (When using BGP) The BGP ASN and peering information
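Before launching the Add Edge Cluster wizard it can help to pre-validate these inputs. The sketch below is a simple checklist, not an SDDC Manager API call: all values shown are placeholders, and it only confirms that DNS records exist for the edge node FQDNs and that the addresses and VLAN IDs are well formed.

```python
import ipaddress
import socket

# Placeholder wizard inputs - replace with values for your environment.
edge_nodes = {
    "sfo-m01-en01.rainpole.io": "172.16.11.61",  # planned management IP (example)
    "sfo-m01-en02.rainpole.io": "172.16.11.62",
}
vlans = {"edge TEP": 1254, "uplink 1": 2711, "uplink 2": 2712}
bgp_asn = 65003  # only needed when using BGP (example private ASN)

for fqdn, mgmt_ip in edge_nodes.items():
    ipaddress.ip_address(mgmt_ip)  # raises ValueError if the address is malformed
    try:
        resolved = socket.gethostbyname(fqdn)
        status = "matches" if resolved == mgmt_ip else f"resolves to {resolved}"
    except socket.gaierror:
        status = "NO DNS RECORD"
    print(f"{fqdn}: planned {mgmt_ip}, DNS {status}")

for name, vlan in vlans.items():
    assert 1 <= vlan <= 4094, f"{name} VLAN {vlan} out of range"
print(f"VLANs OK; BGP ASN {bgp_asn} recorded for peering")
```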

- Enable vSphere 7 with Tanzu on the VI workload domain
- Validate the inputs in SDDC Manager for the workload management solution, and then click the Complete in vSphere button to navigate to vSphere Workload Management and proceed with the deployment.

- Log in to the vSphere Client and navigate to Menu > Workload Management
- Select the mgmt-cluster and set the Control Plane size

- Enter the Management Network details

- Enter the networking details for the Kubernetes control plane.

- Select the vSAN Storage Policy to use for the Control Plane Nodes, Ephemeral Disks, and the Image Cache

- Click Finish to enable vSphere 7 with Tanzu on the management domain

- Once the configuration completes, agents are installed on the ESXi hosts and three Supervisor Cluster control plane VMs are deployed
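At this point you can optionally verify the Supervisor Cluster from the command line. The sketch below assumes the vSphere plugin for kubectl (kubectl-vsphere) is installed, and the control plane FQDN and SSO user shown are placeholders to replace with your own values.

```python
import subprocess

SUPERVISOR = "wcp.vsphere.local"          # placeholder Supervisor control plane FQDN/VIP
SSO_USER = "administrator@vsphere.local"  # placeholder SSO user

# Log in to the Supervisor Cluster (prompts for the SSO password).
subprocess.run(
    ["kubectl", "vsphere", "login",
     "--server", SUPERVISOR,
     "--vsphere-username", SSO_USER,
     "--insecure-skip-tls-verify"],
    check=True,
)

# The three SupervisorControlPlaneVM nodes should report Ready.
subprocess.run(["kubectl", "get", "nodes", "-o", "wide"], check=True)
```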

- Create a content library on the management domain VMware vCenter Server® instance using the subscription URL https://wp-content.vmware.com/v2/latest/lib.json. For detailed instructions, please follow the steps here.
- Enable the Harbor image registry on the management domain cluster as specified in the documentation here.
- Create a namespace and configure access to vSphere 7 with Tanzu; follow the link for detailed instructions.
- To run applications that require upstream Kubernetes compliance, you must provision a Tanzu Kubernetes cluster; follow the steps mentioned here to deploy a fully upstream-compliant Kubernetes cluster on top of the Supervisor Cluster.
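As a starting point, the sketch below generates a minimal TanzuKubernetesCluster manifest matching the small sizing from the table earlier. The cluster name, namespace, VM class, storage class, and Kubernetes version are placeholders; use the classes and content library versions available in your environment, then apply the manifest with kubectl while logged in to the Supervisor namespace.

```python
import yaml  # pip install pyyaml

# Minimal TanzuKubernetesCluster spec (run.tanzu.vmware.com/v1alpha1).
# Names, classes, and the distribution version below are placeholders.
tkc = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "tkc-poc-01", "namespace": "poc-namespace"},
    "spec": {
        "distribution": {"version": "v1.18"},  # pick a version from your content library
        "topology": {
            "controlPlane": {
                "count": 3,
                "class": "best-effort-small",
                "storageClass": "vsan-default-storage-policy",
            },
            "workers": {
                "count": 3,
                "class": "best-effort-small",
                "storageClass": "vsan-default-storage-policy",
            },
        },
    },
}

with open("tkc-poc-01.yaml", "w") as f:
    yaml.safe_dump(tkc, f, sort_keys=False)

print("Wrote tkc-poc-01.yaml; apply with: kubectl apply -f tkc-poc-01.yaml")
```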