Deploying vSphere IaaS Control Plane with Avi Load Balancer and VDS – part 2

This is the second part of the vSphere IaaS Control Plane deployment with Avi Load Balancer and VDS. In this post, we will enable Workload Management, create a vSphere Namespace and a workload cluster. I’m also going to deploy a three-tier app inside the Kubernetes workload cluster.

Requirements:

Read through and complete all stages of the first part of the implementation – PART 1

1. Workload Management deployment

1. Log in to the vCenter and go to the Workload Management section.

2. Let’s Get Started.

3. The networking stack should be automatically selected as vSphere Distributed Switch (VDS).

4. Type any Supervisor name and choose a compatible vSphere cluster.

5. Select the Control Plane Storage Policy created in the previous post – part 1.

6. Load Balancer configuration:
Name – type any name;
Load Balancer Type – select NSX Advanced Load Balancer (aka Avi Load Balancer);
NSX Advanced Load Balancer Controller Endpoint – <IP address of the Avi Load Balancer>:443;
Username – admin;
Password – the admin password set on the Avi Controller;
Server Certificate – paste the SSL/TLS certificate data created in the previous post – part 1;
Cloud Name – leave empty.
To get the SSL/TLS certificate data, log in to the Avi Controller and copy it:

7. Management Network:
Network Mode – Static;
Network – select the Management network/portgroup;
Starting IP Address – the first of the 5 consecutive IP addresses required from the Management network/portgroup;
– Type the Subnet Mask, Gateway, DNS Server(s), DNS Search Domain(s) and NTP Server(s).

8. Workload Network:
Network Mode – Static;
Internal Network for Kubernetes Services – leave the default values;
Port Group – select the Workload network/portgroup;
IP Address Ranges – a usable range for the SupervisorControlPlaneVMs and the future workload/guest Kubernetes clusters;
– Type the Subnet Mask, Gateway, DNS Server(s), DNS Search Domain(s) and NTP Server(s).

9. For homelab purposes, the Small Supervisor Control Plane size is more than enough. If you want bigger SupervisorControlPlaneVMs, choose a different type from the drop-down list.
You can leave API Server DNS Name(s) empty.

Review the summary and, if everything is fine, tick Export configuration (to save this Workload Management config) and click the blue Finish button just below the export option.

10. Deployment of the Workload Management is in progress…

11. After some time, the installation completed successfully! 😉
Don’t worry about the exclamation mark – it appears because of the initial 60-day evaluation license. After applying a different license, the warning goes away.

12. By default, a new Supervisor deployment creates its own Content Library, which downloads only metadata, not full images. Let’s change it to the Content Library prepared in the previous post – part 1.
In the Supervisor section, go to the Configure tab, then General, expand Namespace Service and click Edit.

13. Tick the custom Content Library and click OK to save the changes.

2. Prepare vSphere namespace

1. Go to the Workload Management section and select the Namespaces tab. Click New Namespace to create a new one.
By default, the Workload Management deployment creates 2 empty vSphere Namespaces. You can leave them as they are.

2. Be sure to select the right Supervisor, type any name and select the proper Workload Network. Click Create.

3. This is the default view of the new vSphere Namespace. Let’s make some changes.

4. Permissions – click Add Permissions and assign a user the Can edit role to grant them access to the vSphere Namespace.
There are 3 roles to choose from: Can edit, Can view, and Owner.

5. Storage – click Add Storage and choose the Storage Policy (created in the previous post – part 1) from the list. The selected Storage Policy will appear as a Kubernetes object – a StorageClass.
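
Once you are logged in to the Supervisor (section 3 below), you can verify that the policy shows up as a StorageClass:

kubectl get storageclass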

6. Capacity and Usage – click Edit Limits to set limits on CPU, RAM and attached storage. These limits apply to the whole vSphere Namespace (ns01), which means you won’t be able to deploy more resources than the configured maximums.
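
Under the hood, these limits become a Kubernetes ResourceQuota object in the Supervisor namespace; once logged in (section 3 below), you can inspect it, assuming the namespace is ns01:

kubectl get resourcequota -n ns01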

7. Choose a VM Class. A VM Class is a definition (CPU, RAM, CPU/RAM reservation) of the Kubernetes nodes that can be deployed inside this vSphere Namespace.
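
From the Supervisor context (section 3 below), you can also list the available VM Classes:

kubectl get virtualmachineclass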

3. Log in to the vSphere namespace and create a cluster

1. Using a browser, open the Control Plane Node Address. You can find that IP address in the Workload Management section, under the Supervisors tab (check step 11 in the previous section).
Depending on your operating system, download the proper zip package.
The archive includes two files: kubectl and kubectl-vsphere.
Make them executable and copy them to a directory on your PATH:

chmod +x kubectl*
cp kubectl* /usr/local/bin/
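
To verify the setup, you can check the kubectl client version; once both binaries are on your PATH, kubectl automatically discovers kubectl-vsphere as the kubectl vsphere plugin:

kubectl version --client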

2. Using a terminal/shell, log in to the vSphere Namespace with the command below.
--server=10.0.42.10 -> Control Plane Node Address;
--vsphere-username -> the user added to the vSphere Namespace;
--tanzu-kubernetes-cluster-namespace -> the vSphere Namespace;

kubectl vsphere login --server=10.0.42.10 --vsphere-username mateusz@vsphere.local --tanzu-kubernetes-cluster-namespace ns01 --insecure-skip-tls-verify

After logging in, you need to switch (set) the context to the target vSphere Namespace.

kubectl config use-context ns01
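
If you’re not sure which contexts are available, you can list them first:

kubectl config get-contexts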

3. Prepare a simple YAML file that describes the new cluster configuration, like mine below, and apply it (the apply command follows the manifest).
You can use the default template for a v1beta1 Cluster configuration from THERE

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: vmattroman-cl01
  namespace: ns01 
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["10.96.0.0/23"]
    pods:
      cidrBlocks: ["172.20.0.0/16"]
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster 
    version: v1.30.1---vmware.1-fips-tkg.5 
    controlPlane:
      replicas: 1 
    workers:
      machineDeployments:
        - class: node-pool
          name: worker
          replicas: 2
    variables:
      - name: vmClass
        value: best-effort-small 
      - name: storageClass
        value: tanzu-sp 
      - name: defaultStorageClass
        value: tanzu-sp 
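
To apply the manifest – a minimal sketch, assuming you saved it as cluster.yaml:

kubectl apply -f cluster.yaml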

To see which TKRs (Tanzu Kubernetes Releases, i.e., the available Kubernetes versions for the cluster) you can use, type the command below. If the READY and COMPATIBLE flags are set to True, you can use that version/image.

kubectl get tkr

4. You can monitor the status from the terminal using the commands below, or look into the vCenter.

kubectl get cluster
kubectl get tanzukubernetescluster,cluster,virtualmachinesetresourcepolicy,virtualmachineservice,kubeadmcontrolplane,machinedeployment,machine,virtualmachine
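
You can also watch the cluster object continuously with the -w flag (using the cluster name and namespace from the manifest above):

kubectl get cluster vmattroman-cl01 -n ns01 -w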

5. After some time, the cluster is up and ready! 😉

6. To log in to the new Kubernetes cluster, use the command below.
After that, switch to the cluster context and list all available nodes to check their status.

kubectl vsphere login --server=10.0.42.10 --vsphere-username mateusz@vsphere.local --tanzu-kubernetes-cluster-namespace ns01 --tanzu-kubernetes-cluster-name vmattroman-cl01 --insecure-skip-tls-verify

######

kubectl config use-context vmattroman-cl01

######

kubectl get nodes

4. Deploying the application

1. When we try to run any pod inside this cluster in the default namespace, we will receive an error.
It’s because TKG (Tanzu Kubernetes Grid) releases v1.25 and later enable the Pod Security Admission (PSA) controller by default. Starting with TKG release v1.26, PSA is enforced. With PSA you can uniformly enforce pod security using namespace labels.
By default, TKG clusters provisioned with TKG release v1.25 have the PSA modes warn and audit set to restricted for non-system namespaces.
By default, TKG clusters provisioned with TKG releases v1.26 and later have the PSA mode enforce set to restricted for non-system namespaces.

More info about PSA in TKRs (Tanzu Kubernetes Releases) can be found HERE

2. To work around this, let’s change the Pod Security Admission setting in the default namespace from restricted to privileged.
Now, the nginx pod runs properly. If you want to apply this label to another namespace, replace default with a different name.

kubectl label --overwrite ns default pod-security.kubernetes.io/enforce=privileged
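
You can confirm the label took effect:

kubectl get ns default --show-labels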

Remember, this solution is good for lab/dev/testing/homelab purposes! In a production environment, you need to implement a Security Context for individual pods (see the sketch below) or configure PSA cluster-wide according to best practices and the documentation.
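
For reference, a minimal sketch of a pod that satisfies the restricted PSA profile without relaxing the namespace label; the nginxinc/nginx-unprivileged image is an assumption (any image that runs as non-root works):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-restricted
spec:
  containers:
  - name: nginx
    image: nginxinc/nginx-unprivileged   # assumed non-root image, listens on 8080
    securityContext:
      runAsNonRoot: true                 # required by the restricted profile
      allowPrivilegeEscalation: false
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop: ["ALL"]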

3. Let’s deploy a more complex, three-tier application – Yelb.
Download or copy yelb-k8s-loadbalancer.yaml (saved here as yelbapp.yaml) from https://github.com/mreferre/yelb/blob/master/deployments/platformdeployment/Kubernetes/yaml/yelb-k8s-loadbalancer.yaml
– Create a new namespace yelbapp;
– Change the Pod Security Admission label in the yelbapp namespace from restricted to privileged;
– Apply the application in the yelbapp namespace;
– List all objects created in the yelbapp namespace.

kubectl create ns yelbapp

######

kubectl label --overwrite ns yelbapp pod-security.kubernetes.io/enforce=privileged

######

kubectl apply -f yelbapp.yaml -n yelbapp

######

kubectl get all -n yelbapp

4. In a web browser, use the External IP of the service/yelb-ui to confirm that the deployment was successful.
Remember, if you want to expose services from other applications, their Service type needs to be set to LoadBalancer – see the sketch below.
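
For example, a minimal sketch of such a Service; the my-app-ui name and app: my-app selector are hypothetical placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app-ui             # hypothetical name
spec:
  type: LoadBalancer          # Avi allocates a VIP from the configured IP pool
  selector:
    app: my-app               # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080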

5. You can also find this service in the Avi Controller.

Summary

The vSphere IaaS Control Plane was deployed successfully and can now host a variety of Kubernetes applications. I hope these 2 articles gave you clear instructions to run it correctly in your environment.

If you have any questions, feel free to contact me! ;)
