Prerequisites

  • PhotonOS Kubernetes template within vSphere
  • HAProxy template within vSphere
  • Working Management Cluster (follow the steps in Cluster API Setup Steps (vSphere))
  • Access to the Management Cluster with kubectl
  • Access to a machine with clusterctl

Create a clusterctl.yaml file to store configuration values. Replace the values as required, taking particular note of anything between < and >.

mkdir -p $HOME/.cluster-api
tee $HOME/.cluster-api/clusterctl.yaml >/dev/null <<EOF
## -- Controller settings -- ##
VSPHERE_USERNAME: "administrator@vsphere.local"            # The username used to access the remote vSphere endpoint
VSPHERE_PASSWORD: "vmware1"                                # The password used to access the remote vSphere endpoint

## -- Required workload cluster default settings -- ##
VSPHERE_SERVER: "vcenter1.domain.com"                                 # The vCenter server IP or FQDN
VSPHERE_DATACENTER: "Datacenter"                                      # The vSphere datacenter to deploy the workload cluster on
VSPHERE_DATASTORE: "Datastore"                                        # The vSphere datastore to deploy the workload cluster on
VSPHERE_NETWORK: "VM Network"                                         # The VM network to deploy the workload cluster on
VSPHERE_RESOURCE_POOL: "<ClusterName>/Resources/<ResourcePoolName>"   # The vSphere resource pool for your VMs
VSPHERE_FOLDER: "vm/<FolderName>/<ChildFolderName>"                   # The VM folder for your VMs. Set to "" to use the root vSphere folder
VSPHERE_TEMPLATE: "photon-3-kube-v1.18.2"                             # The VM template to use for your workload cluster.
VSPHERE_HAPROXY_TEMPLATE: "capv-haproxy-v0.6.4"                       # The VM template to use for the HAProxy load balancer
VSPHERE_SSH_AUTHORIZED_KEY: "<ssh key>"                               # The public ssh authorized key on all machines in this cluster. Set to "" if you don't want to enable SSH, or are using another solution.
EOF

Change the value of WORKLOAD_CLUSTER_NAME to the name of your new workload cluster.

export WORKLOAD_CLUSTER_NAME="wlc01"
mkdir -p $HOME/$WORKLOAD_CLUSTER_NAME
clusterctl config cluster $WORKLOAD_CLUSTER_NAME --infrastructure vsphere --kubernetes-version v1.18.2 --control-plane-machine-count 1 --worker-machine-count 3 > $HOME/$WORKLOAD_CLUSTER_NAME/cluster.yaml

Review the generated cluster config file and make any required changes. Values worth reviewing include CPU, memory, storage, and the pod CIDR range.
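
Before editing, you can grep the manifest to see the current values (a quick check; memoryMiB and diskGiB match the fields used in the sed examples below, while numCPUs and cidrBlocks are the standard CAPV and Cluster API field names):

grep -E "numCPUs|memoryMiB|diskGiB|cidrBlocks" $HOME/$WORKLOAD_CLUSTER_NAME/cluster.yaml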

You can use sed to find and replace values within the file. The examples below change the storage and memory of every virtual machine in this deployment, as well as the pod CIDR:

  • Pod CIDR from 192.168.0.0/16 to 192.168.200.0/24
  • Memory from 8192 MiB to 4096 MiB
  • Storage from 25 GiB to 20 GiB

sed -i "s/192.168.0.0\/16/192.168.200.0\/24/g" $HOME/$WORKLOAD_CLUSTER_NAME/cluster.yaml
sed -i "s/memoryMiB: 8192/memoryMiB: 4096/g" $HOME/$WORKLOAD_CLUSTER_NAME/cluster.yaml
sed -i "s/diskGiB: 25/diskGiB: 20/g" $HOME/$WORKLOAD_CLUSTER_NAME/cluster.yaml

Deploy the cluster components. Monitor the deployment of the virtual machines within vSphere, or watch the corresponding objects from the management cluster as shown below.

kubectl apply -f $HOME/$WORKLOAD_CLUSTER_NAME/cluster.yaml
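
If you prefer to watch from kubectl instead of the vSphere client, the CAPV objects reflect VM provisioning as it happens (the -w flag streams updates):

kubectl get vspherevms,machines -w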

Use kubectl to see cluster progress. The control plane won't report Ready until we install a CNI in a later step.

kubectl get cluster --all-namespaces
kubectl get kubeadmcontrolplane --all-namespaces

Retrieve the kubeconfig file for the workload cluster and set it as the KUBECONFIG environment variable.

kubectl get secret $WORKLOAD_CLUSTER_NAME-kubeconfig -o=jsonpath='{.data.value}' | { base64 -d 2>/dev/null || base64 -D; } > $HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig
export KUBECONFIG="$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig"
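
As a quick check that the kubeconfig works, list the nodes of the new cluster. They will report NotReady until the CNI is deployed in the next step:

kubectl get nodes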

Deploy the Calico CNI (or any other CNI) to $WORKLOAD_CLUSTER_NAME.

Because we set KUBECONFIG in the step above, our kubectl commands will now run against the new workload cluster rather than the management cluster. If you want to be certain, add --kubeconfig="$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig" to the commands.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
#Example with kubeconfig specified
#kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml --kubeconfig="$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig"
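
After a minute or two the Calico pods in kube-system should be Running and the nodes should move to Ready (exact pod names depend on the Calico version):

kubectl get pods -n kube-system
kubectl get nodes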

Reset your KUBECONFIG back to the management cluster.

unset KUBECONFIG
export KUBECONFIG=$HOME/local-control-plane/kubeconfig

You can now continue performing cluster management operations against the workload cluster from the management cluster. Some examples are shown below.

#Retrieve related vSphere VMs
kubectl get vspheremachine
#Retrieve related machines
kubectl get machine
#Retrieve additional machine details
kubectl describe machine <machine-name>
#Retrieve additional cluster information
kubectl describe cluster local-control-plane
#Get machinedeployment (replicaset equivalent for machines)
kubectl get machinedeployment
#Scale worker nodes machinedeployment
kubectl scale machinedeployment $WORKLOAD_CLUSTER_NAME-md-0 --replicas=4
#Get various machine resources at once
kubectl get cluster,machine,machinesets,machinedeployment,vspheremachine
#Get most/all Cluster API resources
kubectl get clusters,machinedeployments,machinehealthchecks,machines,machinesets,providers,kubeadmcontrolplanes,machinepools,haproxyloadbalancers,vsphereclusters,vspheremachines,vspheremachinetemplates,vspherevms
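
When you eventually want to remove the workload cluster, deleting its Cluster object from the management cluster cascades to all of the machines and VMs it owns (a standard Cluster API operation; double-check the cluster name first):

#Delete the workload cluster and everything it owns
kubectl delete cluster $WORKLOAD_CLUSTER_NAME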

The next article will cover deploying an application to your new workload cluster.

Previous Article in this Series: Cluster API Setup Steps (vSphere)

Written by

Sam Perrin (@samperrin)

Automation Consultant, currently working at Xtravirt. Interested in all things automation/devops related.
