<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Warroyo's Blog]]></title><description><![CDATA[Welcome to Warroyo's Blog. This is where I write things about mostly tech, some food and any other ramblings I have.]]></description><link>https://blog.warroyo.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 17:56:35 GMT</lastBuildDate><atom:link href="https://blog.warroyo.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Integrating ArgoCD  authentication with VCF Automation]]></title><description><![CDATA[With support for ArgoCD as a service added to VCF I have been using it a lot more for automating my K8s environments. I also heavily use VCF Automation(VCFA) as my main cloud console for my VCF privat]]></description><link>https://blog.warroyo.com/vcfa-argocd-oidc</link><guid isPermaLink="true">https://blog.warroyo.com/vcfa-argocd-oidc</guid><category><![CDATA[Vcf]]></category><category><![CDATA[VCF9]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[vmware]]></category><category><![CDATA[OIDC]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Mon, 16 Mar 2026 23:33:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/d719440b-501c-44e9-910a-3d93ffe3abf8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>With support for <a href="https://blogs.vmware.com/cloud-foundation/2025/07/11/gitops-for-vcf-broadcom-argo-cd-operator-now-available/">ArgoCD as a service</a> added to VCF I have been using it a lot more for automating my K8s environments. I also heavily use VCF Automation(VCFA) as my main cloud console for my VCF private cloud. 
I recently came across a really useful feature in VCFA: the ability to create "relying parties", which makes it possible to use VCFA and its backing OIDC provider as an OIDC provider for other applications.</p>
<p>With these two features it immediately became clear that they pair well: I can point ArgoCD's OIDC configuration at VCFA so that users can seamlessly log in to ArgoCD, and as an administrator I can reuse common roles and groups for access. The rest of this post walks through setting up the integration.</p>
<img src="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/e34c18dc-5fef-48c8-9a43-e14f03404cb6.gif" alt="" style="display:block;margin:0 auto" />

<h2>Architecture</h2>
<img src="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/e4772d8f-b77e-44b1-ac39-f3224b5d56ee.png" alt="" style="display:block;margin:0 auto" />

<h2>Implementation</h2>
<h3>Setup OIDC for your Org</h3>
<p>The first step in this process is making sure you have OIDC set up for your tenant Org. VCFA supports per-tenant (Org) identity provider configuration using OIDC, SAML, or AD. Once this is configured, logging in to the tenant goes through your provider of choice, and we have access to all of the claims, groups, etc. that you set up in your upstream IDP. Ultimately we will use these groups in ArgoCD as well. I am not going to go into detail on how to set up the tenant OIDC in this post, but below is a screenshot of my configuration for Okta integration, and here are the <a href="https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vcf-9-0-and-later/9-0/organization-management/managing-identity-providers-in-vcfa/configure-your-vcf-automation-organization-to-use-an-openid-identity-provider.html">official docs</a> for configuring it.</p>
<img src="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/961adc59-8770-4aa3-8675-2b6641129cbb.png" alt="" style="display:block;margin:0 auto" />

<h3>Setup the OIDC Service</h3>
<p>This is the "relying party" I mentioned in the intro. It is essentially a way to create OIDC clients that use VCFA as the provider. In this case, because Okta is the backing IDP for VCFA, the relying party I create will also use Okta for auth, but with far less configuration. Since everything goes through VCFA it is also treated as single sign-on: once I log in to VCFA I can seamlessly log in to ArgoCD.</p>
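<p>Under the hood this behaves like any standard OIDC provider: the relying party gets a client ID and secret, and the endpoints are discovered from the issuer. A quick way to sanity-check the issuer URL before wiring anything up is to fetch its discovery document. Below is a minimal Python sketch; the issuer URL is from my lab, so substitute your own VCFA instance:</p>

```python
def discovery_url(issuer: str) -> str:
    # Per the OIDC Discovery spec, the provider's metadata lives at a
    # well-known path relative to the issuer URL.
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

if __name__ == "__main__":
    # Issuer from this lab; replace with your own VCFA instance.
    url = discovery_url("https://vcf-a.vcf.lab/oidc")
    print(url)
    # To fetch and inspect the endpoints ArgoCD will use (requires network
    # access to the VCFA instance):
    #   import json, urllib.request
    #   doc = json.load(urllib.request.urlopen(url))
    #   print(doc["authorization_endpoint"], doc["token_endpoint"])
```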
<ol>
<li>Log in to the VCFA provider portal as an administrator and go to <code>OIDC services-&gt; Relying Parties</code></li>
</ol>
<img src="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/5253e401-db3b-48ad-ab68-8aceaecf7210.png" alt="" style="display:block;margin:0 auto" />

<ol start="2">
<li><p>Create a new relying party using the DNS name or IP address of the ArgoCD instance that will be deployed. Once you hit save, a client secret is generated; be sure to save it.</p>
<img src="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/a59ea86b-9c4f-4b65-b8ad-7aff512238ba.png" alt="" style="display:block;margin:0 auto" /></li>
</ol>
<h3>Deploy ArgoCD</h3>
<p>In this step, ArgoCD is deployed as a service and integrated with VCFA authentication using the new OIDC client.</p>
<ol>
<li>The YAML below can be used to deploy ArgoCD and integrate OIDC with VCFA. Update the fields marked with <code>##UPDATE THIS</code> using the details from the previous steps.</li>
</ol>
<pre><code class="language-yaml">apiVersion: argocd-service.vsphere.vmware.com/v1alpha1
kind: ArgoCD
metadata:
  name: argocd-dev
  namespace: infra-ty3qk ##UPDATE THIS
spec:
  applicationSet:
    enabled: true
  enableLoadBalancer: true
  oidc:
    clientID: 4060b628-c297-49d3-ae0f-31cdcfb9ce86 ##UPDATE THIS
    clientSecret: mmSKfx8UZN6oYa31hGu95N+t+y1rbEih ##UPDATE THIS
    enabled: true
    insecure: true
    issuer: https://vcf-a.vcf.lab/oidc ##UPDATE THIS
    name: vcfa
    requestedIDTokenClaims:
      groups:
        essential: true
      preferred_username:
        essential: true
  rbac:
    policy: |
      g, "Organization Administrator", role:admin 
    policyMatchMode: glob
    scopes: '[groups,roles]'
  serverSideDiff: true
  url: https://argocd-dev.vcf.lab ##UPDATE THIS
  version: 3.0.19+vmware.1-vks.1
</code></pre>
<ol start="2">
<li><p>Apply the YAML to a supervisor namespace. You should see the ArgoCD pods come up and become healthy.</p>
</li>
<li><p>Add DNS. In this example I used <code>argocd-dev.vcf.lab</code> as the DNS auth callback in the relying party config, so ArgoCD needs to be reachable at that address. You can get the IP address of the ArgoCD server by running <code>kubectl get svc -n &lt;supervisor-ns&gt;</code> and reading the external IP of the server service.</p>
</li>
</ol>
<p>There are a few things to note in the above YAML:</p>
<ul>
<li><p><code>issuer: https://vcf-a.vcf.lab/oidc</code> - this is your VCFA instance</p>
</li>
<li><p><code>g, "Organization Administrator", role:admin</code> - this maps the role from VCFA to the ArgoCD admin role. You can add as many policy rules as you like and also use groups from your upstream IDP. For example, I also have a group called <code>argocd-admin</code> in my Okta, so I could add the policy <code>g, "argocd-admin", role:admin</code>.</p>
</li>
<li><p><code>scopes: '[groups,roles]'</code> - this tells ArgoCD to get both the groups and the roles from the token and use them for policy mapping.</p>
</li>
<li><p><code>url: https://argocd-dev.vcf.lab</code> - this is a required setting; make sure it matches the callback URL without the <code>/auth/callback</code> path.</p>
</li>
</ul>
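<p>Putting these notes together, a slightly richer <code>rbac</code> block could map the VCFA role to admin and an upstream IDP group to ArgoCD's built-in read-only role. This is a sketch; the <code>argocd-viewers</code> group name is a hypothetical example, not something created in this walkthrough:</p>

```yaml
rbac:
  policy: |
    # VCFA org admins get full ArgoCD admin rights
    g, "Organization Administrator", role:admin
    # Hypothetical upstream IDP group mapped to ArgoCD's built-in read-only role
    g, "argocd-viewers", role:readonly
  policyMatchMode: glob
  scopes: '[groups,roles]'
```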
<h3>Validation</h3>
<p>Now that the integration is complete we can test the login and make sure that groups and roles are propagating correctly.</p>
<ol>
<li><p>Go to your ArgoCD server, in my case <a href="https://argocd-dev.vcf.lab">https://argocd-dev.vcf.lab</a>. You will now see a "Login with OIDC" button. If you are already logged in to VCFA it will log you in as that user, so be sure to log out first if you want to use a different user.</p>
</li>
<li><p>When it redirects to VCFA, choose the org where you set up OIDC.</p>
</li>
<li><p>Click "Login with OIDC" and it will take you through the normal process and redirect you back to ArgoCD.</p>
</li>
<li><p>Validate your groups and username in ArgoCD. Go to User Info on the left side panel. This is what I see when logging in:</p>
<img src="https://cdn.hashnode.com/uploads/covers/61d376d098628373c7a0522f/7bc82f62-b86a-412a-8de3-fdabbeee7792.png" alt="" style="display:block;margin:0 auto" /></li>
</ol>
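<p>If the groups or username do not look right on the User Info page, it can help to inspect the ID token claims directly. An ID token is a JWT, so its claims are just base64url-encoded JSON in the middle segment. Below is a minimal Python sketch; it does not verify the signature, and it assumes you have copied a token from your browser's dev tools:</p>

```python
import base64
import json

def decode_claims(jwt: str) -> dict:
    """Return the claims from a JWT's payload segment (no signature check)."""
    payload = jwt.split(".")[1]
    # base64url decoding requires padding to a multiple of 4 characters.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Usage with a real token copied from your browser's dev tools:
#   claims = decode_claims(token)
#   print(claims.get("groups"), claims.get("preferred_username"))
```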
]]></content:encoded></item><item><title><![CDATA[Deploying Windows Clusters on vSphere Kubernetes Service with VKS Image Builder]]></title><description><![CDATA[There have been numerous how-to guides over the past few years on building and deploying Windows clusters on the different Kubernetes distributions supported by VMware. This post aims to solidify the ]]></description><link>https://blog.warroyo.com/windows-vks</link><guid isPermaLink="true">https://blog.warroyo.com/windows-vks</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Windows]]></category><category><![CDATA[vks]]></category><category><![CDATA[vmware]]></category><category><![CDATA[broadcom]]></category><category><![CDATA[vsphere]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Tue, 27 Jan 2026 16:35:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769531540712/7998fa68-64dc-4207-a808-d3e7e67e8823.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There have been numerous how-to guides over the past few years on building and deploying Windows clusters on the different Kubernetes distributions supported by VMware. This post aims to solidify the understanding of the current most up to date process of doing this on vSphere Kubernetes Service which is the definitive Kubernetes service for VMware. At the time of writing the latest <a href="https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vsphere-supervisor-services-and-standalone-components/latest/release-notes/vmware-vkr-release-notes.html">vSphere Kubernetes Release(VKR)</a> is <code>v1.34.1---vmware.1-vkr.4</code> so that is what this will be based on.</p>
<div>
<div>💡</div>
<div><strong>Note: this is based on VKS 3.5 using VKR 1.34.1. Be sure to check your versions and select the right image builder release based on your environment.</strong></div>
</div>

<h2>Building the Image</h2>
<p>Before deploying a cluster, we need to build a Windows image. The <a href="https://github.com/vmware/vks-image-builder">VKS image builder</a> provides a process for this, and it is what we will use in the next few steps to build the image.</p>
<h3>Pre-requisites</h3>
<ol>
<li><p>Download the Windows Server 2022 ISO. You can download the <a href="https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2022">evaluation</a> version if needed.</p>
</li>
<li><p>Download the Windows <a href="https://support.broadcom.com/group/ecx/productdownloads?subfamily=VMware%20Tools&amp;freeDownloads=true">VMware Tools</a> ISO</p>
</li>
<li><p>Upload the ISOs to a directory on a datastore in the vSphere environment you will run this build against. The script below uses <a href="https://github.com/vmware/govmomi/tree/main/govc">GOVC</a> to upload the ISOs.</p>
</li>
</ol>
<pre><code class="language-bash">#!/bin/bash


set -e # Exit immediately if a command exits with a non-zero status


print_usage() {
    echo "Usage: $0 &lt;LOCAL_ISO_PATH&gt; &lt;DATASTORE_NAME&gt; &lt;REMOTE_FOLDER&gt;"
    echo ""
    echo "Arguments:"
    echo "  LOCAL_ISO_PATH   Path to the ISO file on your local machine."
    echo "  DATASTORE_NAME   Name of the vSphere Datastore (e.g., vsanDatastore)."
    echo "  REMOTE_FOLDER    Folder path inside the datastore (e.g., ISOs/Linux)."
    echo ""
    echo "Example:"
    echo "  $0 ./ubuntu.iso vsanDatastore ISOs/Ubuntu"
}

check_govc() {
    if ! command -v govc &amp;&gt; /dev/null; then
        echo "Error: 'govc' is not installed or not in your PATH."
        echo "Please install it from: https://github.com/vmware/govmomi/tree/master/govc"
        exit 1
    fi
}


if [ "$#" -ne 3 ]; then
    print_usage
    exit 1
fi

LOCAL_ISO="$1"
DATASTORE="$2"
REMOTE_FOLDER="$3"
FILENAME=$(basename "$LOCAL_ISO")
REMOTE_PATH="$REMOTE_FOLDER/$FILENAME"

check_govc

if [ ! -f "$LOCAL_ISO" ]; then
    echo "Error: Local file '$LOCAL_ISO' not found."
    exit 1
fi

if [ -z "$GOVC_URL" ]; then
    echo "Error: GOVC_URL environment variable is not set."
    echo "Please export GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD."
    exit 1
fi

echo "--- Starting Upload Process ---"
echo "File:      $FILENAME"
echo "Datastore: $DATASTORE"
echo "Folder:    $REMOTE_FOLDER"

if govc datastore.ls -ds="$DATASTORE" "$REMOTE_FOLDER" &amp;&gt; /dev/null; then
    echo "[OK] Remote folder '$REMOTE_FOLDER' exists."
else
    echo "[INFO] Remote folder '$REMOTE_FOLDER' does not exist. Creating it..."
    if govc datastore.mkdir -ds="$DATASTORE" "$REMOTE_FOLDER"; then
        echo "[OK] Folder created successfully."
    else
        echo "Error: Failed to create folder '$REMOTE_FOLDER' on datastore '$DATASTORE'."
        exit 1
    fi
fi

if govc datastore.ls -ds="$DATASTORE" "$REMOTE_PATH" &amp;&gt; /dev/null; then
    echo "Warning: File '$REMOTE_PATH' already exists on datastore '$DATASTORE'."
    read -p "Do you want to overwrite it? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo "[INFO] Upload skipped by user."
        exit 0
    fi
    echo "[INFO] Overwriting existing file..."
fi

echo "[INFO] Uploading '$LOCAL_ISO'... (This may take a while)"
if govc datastore.upload -ds="$DATASTORE" "$LOCAL_ISO" "$REMOTE_PATH"; then
    echo ""
    echo "Success: Upload complete!"
    echo "Location: [$DATASTORE] $REMOTE_PATH"
else
    echo ""
    echo "Error: Upload failed."
    exit 1
fi
</code></pre>
<p>Example usage:</p>
<pre><code class="language-bash">./uploadiso.sh ./VMware-tools-windows-12.5.0-23800621.iso cls-wld9-vsan01 isos
</code></pre>
<h3>Setting up the repo</h3>
<ol>
<li>Clone the vks-image-builder repo</li>
</ol>
<pre><code class="language-bash">git clone https://github.com/vmware/vks-image-builder.git
cd vks-image-builder
</code></pre>
<ol start="2">
<li>Create the Windows answer file. The upstream file can be found <a href="https://raw.githubusercontent.com/kubernetes-sigs/image-builder/refs/heads/main/images/capi/packer/ova/windows/windows-2022-efi/autounattend.xml">here</a>; I have added one below with a few updates, marked with comments to highlight them. You will also need to update the areas marked with an "Update this" comment, which is really just setting the Windows password. If you are using the eval version you need to remove the product key from the file.</li>
</ol>
<div>
<div>💡</div>
<div><strong>Note: In the below file there is an addition I made that adds rules to the Windows firewall. I ran into an issue where the Windows firewall classified my network as public. This is likely due to my specific network setup, so you can leave the rule in or test without it. It is marked with a comment below.</strong></div>
</div>

<pre><code class="language-xml">&lt;unattend xmlns="urn:schemas-microsoft-com:unattend" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State"&gt;
    &lt;settings pass="windowsPE"&gt;
        &lt;component name="Microsoft-Windows-PnpCustomizationsWinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;DriverPaths&gt;
                &lt;PathAndCredentials wcm:action="add" wcm:keyValue="A"&gt;
                    &lt;Path&gt;a:\&lt;/Path&gt;
                &lt;/PathAndCredentials&gt;
            &lt;/DriverPaths&gt;
        &lt;/component&gt;
        &lt;component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;DiskConfiguration&gt;
                &lt;Disk wcm:action="add"&gt;
                    &lt;CreatePartitions&gt;
                        &lt;CreatePartition wcm:action="add"&gt;
                            &lt;Order&gt;1&lt;/Order&gt;
                            &lt;Type&gt;EFI&lt;/Type&gt;
                            &lt;Size&gt;100&lt;/Size&gt;
                        &lt;/CreatePartition&gt;
                        &lt;CreatePartition wcm:action="add"&gt;
                            &lt;Order&gt;2&lt;/Order&gt;
                            &lt;Type&gt;MSR&lt;/Type&gt;
                            &lt;Size&gt;16&lt;/Size&gt;
                        &lt;/CreatePartition&gt;
                        &lt;CreatePartition wcm:action="add"&gt;
                            &lt;Order&gt;3&lt;/Order&gt;
                            &lt;Type&gt;Primary&lt;/Type&gt;
                            &lt;Extend&gt;true&lt;/Extend&gt;
                        &lt;/CreatePartition&gt;
                    &lt;/CreatePartitions&gt;
                    &lt;ModifyPartitions&gt;
                        &lt;ModifyPartition wcm:action="add"&gt;
                            &lt;Order&gt;1&lt;/Order&gt;
                            &lt;Format&gt;FAT32&lt;/Format&gt;
                            &lt;Label&gt;System&lt;/Label&gt;
                            &lt;PartitionID&gt;1&lt;/PartitionID&gt;
                        &lt;/ModifyPartition&gt;
                        &lt;ModifyPartition wcm:action="add"&gt;
                            &lt;Order&gt;2&lt;/Order&gt;
                            &lt;PartitionID&gt;2&lt;/PartitionID&gt;
                        &lt;/ModifyPartition&gt;
                        &lt;ModifyPartition wcm:action="add"&gt;
                            &lt;Order&gt;3&lt;/Order&gt;
                            &lt;Format&gt;NTFS&lt;/Format&gt;
                            &lt;Label&gt;Windows&lt;/Label&gt;
                            &lt;Letter&gt;C&lt;/Letter&gt;
                            &lt;PartitionID&gt;3&lt;/PartitionID&gt;
                        &lt;/ModifyPartition&gt;
                    &lt;/ModifyPartitions&gt;
                    &lt;WillWipeDisk&gt;true&lt;/WillWipeDisk&gt;
                    &lt;DiskID&gt;0&lt;/DiskID&gt;
                &lt;/Disk&gt;
            &lt;/DiskConfiguration&gt;
            &lt;ImageInstall&gt;
                &lt;OSImage&gt;
                    &lt;InstallTo&gt;
                        &lt;DiskID&gt;0&lt;/DiskID&gt;
                        &lt;PartitionID&gt;3&lt;/PartitionID&gt;
                    &lt;/InstallTo&gt;
                    &lt;InstallFrom&gt;
                        &lt;MetaData wcm:action="add"&gt;
                            &lt;Key&gt;/IMAGE/NAME&lt;/Key&gt;
                            &lt;Value&gt;Windows Server 2022 SERVERSTANDARDCORE&lt;/Value&gt;
                        &lt;/MetaData&gt;
                    &lt;/InstallFrom&gt;
                &lt;/OSImage&gt;
            &lt;/ImageInstall&gt;
            &lt;UserData&gt;
                &lt;AcceptEula&gt;true&lt;/AcceptEula&gt;
                &lt;FullName&gt;Administrator&lt;/FullName&gt;
                &lt;Organization&gt;Organization&lt;/Organization&gt;
                &lt;ProductKey&gt;
                    &lt;Key&gt;VDYBN-27WPP-V4HQT-9VMD4-VMK7H&lt;/Key&gt;
                    &lt;WillShowUI&gt;OnError&lt;/WillShowUI&gt;
                &lt;/ProductKey&gt;
            &lt;/UserData&gt;
            &lt;EnableFirewall&gt;true&lt;/EnableFirewall&gt;
        &lt;/component&gt;
        &lt;component name="Microsoft-Windows-International-Core-WinPE" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;SetupUILanguage&gt;
                &lt;UILanguage&gt;en-US&lt;/UILanguage&gt;
            &lt;/SetupUILanguage&gt;
            &lt;InputLocale&gt;0409:00000409&lt;/InputLocale&gt;
            &lt;SystemLocale&gt;en-US&lt;/SystemLocale&gt;
            &lt;UILanguage&gt;en-US&lt;/UILanguage&gt;
            &lt;UILanguageFallback&gt;en-US&lt;/UILanguageFallback&gt;
            &lt;UserLocale&gt;en-US&lt;/UserLocale&gt;
        &lt;/component&gt;
    &lt;/settings&gt;
    &lt;settings pass="offlineServicing"&gt;
        &lt;component name="Microsoft-Windows-LUA-Settings" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;EnableLUA&gt;false&lt;/EnableLUA&gt;
        &lt;/component&gt;
    &lt;/settings&gt;
    &lt;settings pass="generalize"&gt;
        &lt;component name="Microsoft-Windows-Security-SPP" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;SkipRearm&gt;1&lt;/SkipRearm&gt;
        &lt;/component&gt;
    &lt;/settings&gt;
    &lt;settings pass="specialize"&gt;
        &lt;component name="Microsoft-Windows-Deployment" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;RunSynchronous&gt;
                &lt;RunSynchronousCommand wcm:action="add"&gt;
                    &lt;WillReboot&gt;Always&lt;/WillReboot&gt;
                    &lt;Path&gt;%SystemRoot%\System32\reg.exe ADD "HKLM\System\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /d 1 /t REG_DWORD /f&lt;/Path&gt;
                    &lt;Order&gt;1&lt;/Order&gt;
                &lt;/RunSynchronousCommand&gt;
                &lt;RunSynchronousCommand wcm:action="add"&gt;
                    &lt;WillReboot&gt;Always&lt;/WillReboot&gt;
                    &lt;Path&gt;e:\setup.exe /s /v "/qb REBOOT=R ADDLOCAL=ALL"&lt;/Path&gt;
                    &lt;Order&gt;2&lt;/Order&gt;
                &lt;/RunSynchronousCommand&gt;
            &lt;/RunSynchronous&gt;
        &lt;/component&gt;
        &lt;component name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;InputLocale&gt;0409:00000409&lt;/InputLocale&gt;
            &lt;SystemLocale&gt;en-US&lt;/SystemLocale&gt;
            &lt;UILanguage&gt;en-US&lt;/UILanguage&gt;
            &lt;UILanguageFallback&gt;en-US&lt;/UILanguageFallback&gt;
            &lt;UserLocale&gt;en-US&lt;/UserLocale&gt;
        &lt;/component&gt;
        &lt;component name="Microsoft-Windows-Security-SPP-UX" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;SkipAutoActivation&gt;true&lt;/SkipAutoActivation&gt;
        &lt;/component&gt;
        &lt;component name="Microsoft-Windows-SQMApi" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;CEIPEnabled&gt;0&lt;/CEIPEnabled&gt;
        &lt;/component&gt;
        &lt;component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;ComputerName /&gt;
            &lt;ProductKey&gt;VDYBN-27WPP-V4HQT-9VMD4-VMK7H&lt;/ProductKey&gt;
        &lt;/component&gt;
    &lt;/settings&gt;
    &lt;settings pass="oobeSystem"&gt;
        &lt;component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS"&gt;
            &lt;AutoLogon&gt;
                &lt;Password&gt;
                    &lt;Value&gt;VMware123!&lt;/Value&gt; &lt;!-- Update this --&gt;
                    &lt;PlainText&gt;true&lt;/PlainText&gt;
                &lt;/Password&gt;
                &lt;Enabled&gt;true&lt;/Enabled&gt;
                &lt;Username&gt;Administrator&lt;/Username&gt;
            &lt;/AutoLogon&gt;
            &lt;FirstLogonCommands&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;Order&gt;1&lt;/Order&gt;
                    &lt;Description&gt;Set Execution Policy 64 Bit&lt;/Description&gt;
                    &lt;CommandLine&gt;cmd.exe /c powershell -Command "Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"&lt;/CommandLine&gt;
                    &lt;RequiresUserInput&gt;true&lt;/RequiresUserInput&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;Order&gt;2&lt;/Order&gt;
                    &lt;Description&gt;Set Execution Policy 32 Bit&lt;/Description&gt;
                    &lt;CommandLine&gt;%SystemDrive%\Windows\SysWOW64\cmd.exe /c powershell -Command "Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"&lt;/CommandLine&gt;
                    &lt;RequiresUserInput&gt;true&lt;/RequiresUserInput&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;%SystemRoot%\System32\reg.exe ADD HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\ /v HideFileExt /t REG_DWORD /d 0 /f&lt;/CommandLine&gt;
                    &lt;Order&gt;3&lt;/Order&gt;
                    &lt;Description&gt;Show file extensions in Explorer&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;%SystemRoot%\System32\reg.exe ADD HKCU\Console /v QuickEdit /t REG_DWORD /d 1 /f&lt;/CommandLine&gt;
                    &lt;Order&gt;4&lt;/Order&gt;
                    &lt;Description&gt;Enable QuickEdit mode&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;%SystemRoot%\System32\reg.exe ADD HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\ /v Start_ShowRun /t REG_DWORD /d 1 /f&lt;/CommandLine&gt;
                    &lt;Order&gt;5&lt;/Order&gt;
                    &lt;Description&gt;Show Run command in Start Menu&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;%SystemRoot%\System32\reg.exe ADD HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\ /v StartMenuAdminTools /t REG_DWORD /d 1 /f&lt;/CommandLine&gt;
                    &lt;Order&gt;6&lt;/Order&gt;
                    &lt;Description&gt;Show Administrative Tools in Start Menu&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;%SystemRoot%\System32\reg.exe ADD HKLM\SYSTEM\CurrentControlSet\Control\Power\ /v HibernateFileSizePercent /t REG_DWORD /d 0 /f&lt;/CommandLine&gt;
                    &lt;Order&gt;7&lt;/Order&gt;
                    &lt;Description&gt;Zero Hibernation File&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;%SystemRoot%\System32\reg.exe ADD HKLM\SYSTEM\CurrentControlSet\Control\Power\ /v HibernateEnabled /t REG_DWORD /d 0 /f&lt;/CommandLine&gt;
                    &lt;Order&gt;8&lt;/Order&gt;
                    &lt;Description&gt;Disable Hibernation Mode&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;cmd.exe /c wmic useraccount where "name='Administrator'" set PasswordExpires=FALSE&lt;/CommandLine&gt;
                    &lt;Order&gt;9&lt;/Order&gt;
                    &lt;Description&gt;Disable password expiration for Administrator user&lt;/Description&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;cmd.exe /c %SystemDrive%\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File a:\enable-winrm.ps1&lt;/CommandLine&gt;
                    &lt;Description&gt;Enable WinRM&lt;/Description&gt;
                    &lt;Order&gt;10&lt;/Order&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt;
                    &lt;CommandLine&gt;cmd.exe /c a:\disable-network-discovery.cmd&lt;/CommandLine&gt;
                    &lt;Description&gt;Disable Network Discovery&lt;/Description&gt;
                    &lt;Order&gt;11&lt;/Order&gt;
                &lt;/SynchronousCommand&gt;
                &lt;SynchronousCommand wcm:action="add"&gt; &lt;!-- Customization for public network issue --&gt;
                    &lt;Order&gt;12&lt;/Order&gt;
                    &lt;Description&gt;Configure additional policy for public network&lt;/Description&gt;
                    &lt;CommandLine&gt;cmd.exe /c powershell -NoProfile -ExecutionPolicy Bypass -Command "Set-NetFirewallRule -Name 'WINRM-HTTP-In-TCP-PUBLIC' -RemoteAddress Any; Set-Item -Path 'WSMan:\localhost\Service\Auth\Basic' -Value $true; Set-Item -Path 'WSMan:\localhost\Service\AllowUnencrypted' -Value $true"&lt;/CommandLine&gt;
                    &lt;/SynchronousCommand&gt;
            &lt;/FirstLogonCommands&gt;
            &lt;OOBE&gt;
                &lt;HideEULAPage&gt;true&lt;/HideEULAPage&gt;
                &lt;HideLocalAccountScreen&gt;true&lt;/HideLocalAccountScreen&gt;
                &lt;HideOEMRegistrationScreen&gt;true&lt;/HideOEMRegistrationScreen&gt;
                &lt;HideOnlineAccountScreens&gt;true&lt;/HideOnlineAccountScreens&gt;
                &lt;HideWirelessSetupInOOBE&gt;true&lt;/HideWirelessSetupInOOBE&gt;
                &lt;NetworkLocation&gt;Work&lt;/NetworkLocation&gt;
                &lt;ProtectYourPC&gt;1&lt;/ProtectYourPC&gt;
                &lt;SkipMachineOOBE&gt;true&lt;/SkipMachineOOBE&gt;
                &lt;SkipUserOOBE&gt;true&lt;/SkipUserOOBE&gt;
            &lt;/OOBE&gt;
            &lt;RegisteredOrganization&gt;Organization&lt;/RegisteredOrganization&gt;
            &lt;RegisteredOwner&gt;Owner&lt;/RegisteredOwner&gt;
            &lt;DisableAutoDaylightTimeSet&gt;false&lt;/DisableAutoDaylightTimeSet&gt;
            &lt;TimeZone&gt;Pacific Standard Time&lt;/TimeZone&gt;
            &lt;UserAccounts&gt;
                &lt;AdministratorPassword&gt;
                    &lt;Value&gt;VMware123!&lt;/Value&gt; &lt;!-- Update this --&gt;
                    &lt;PlainText&gt;true&lt;/PlainText&gt;
                &lt;/AdministratorPassword&gt;
           &lt;LocalAccounts&gt;
            &lt;LocalAccount wcm:action="add"&gt;
                &lt;Description&gt;Administrator&lt;/Description&gt;
                &lt;DisplayName&gt;Administrator&lt;/DisplayName&gt;
                &lt;Group&gt;Administrators&lt;/Group&gt;
                &lt;Name&gt;Administrator&lt;/Name&gt;
            &lt;/LocalAccount&gt;
            &lt;LocalAccount wcm:action="add"&gt;
                &lt;Password&gt;
                    &lt;Value&gt;VMware123!&lt;/Value&gt; &lt;!-- Update this --&gt;
                    &lt;PlainText&gt;true&lt;/PlainText&gt;
                &lt;/Password&gt;
                &lt;Description&gt;For log collection&lt;/Description&gt;
                &lt;DisplayName&gt;Admin Account&lt;/DisplayName&gt;
                &lt;Name&gt;WindowsAdmin&lt;/Name&gt;
                &lt;Group&gt;Administrators&lt;/Group&gt;
            &lt;/LocalAccount&gt;
        &lt;/LocalAccounts&gt;
            &lt;/UserAccounts&gt;
        &lt;/component&gt;
    &lt;/settings&gt;
&lt;/unattend&gt;
</code></pre>
<ol start="3">
<li>Update the vSphere packer variables in <code>packer-variables/vsphere.j2</code>. Here is an example file; the fields that must be updated are marked with the comment <code>{# Update this #}</code>.</li>
</ol>
<pre><code class="language-json">{
    {# vCenter server IP or FQDN #}
    "vcenter_server":"vcsa9-wld.vcf.lab",    {# Update this #}
    {# vCenter username #}
    "username":"administrator@vcf9-wld.local", {# Update this #}
    {# vCenter user password #}
    "password":"VMware123!VMware123!", {# Update this #}
    {# Datacenter name where packer creates the VM for customization #}
    "datacenter":"wld9-DC", {# Update this #}
    {# Datastore name for the VM #}
    "datastore":"cls-wld9-vsan01", {# Update this #}
    {# [Optional] Folder name #}
    "folder":"",
    {# Cluster name where packer creates the VM for customization #}
    "cluster": "wls-wld9", {# Update this #}
    {# Packer VM network #}
    "network": "/wld9-DC/network/Virtual Private Clouds/image-builder-3/vm-public/vm-public", {# Update this #}
    {# To use insecure connection with vCenter  #}
    "insecure_connection": "true",
    {# To create a linked clone of the Packer VM after customization #}
    "linked_clone": "true",
    {# To create a snapshot of the Packer VM after customization #}
    "create_snapshot": "true",
    {# To destroy Packer VM after Image Build is completed #}
    "destroy": "true"
}
</code></pre>
<ol>
<li>Update the Windows-specific Packer vars. There are two files, <code>packer-variables/windows/default-args-windows.j2</code> and <code>packer-variables/windows/vsphere-windows.j2</code>. Below are sample files with comments on where to update.</li>
</ol>
<p><strong>vsphere-windows.j2 -</strong> update the paths with the output from the ISO upload script</p>
<pre><code class="language-json">{
    {# [Optional] Windows only: Windows OS Image #}
    "os_iso_path": "[cls-wld9-vsan01] isos/en-us_windows_server_2022_x64_dvd_620d7eac.iso", {# Update this #}
    {# [Optional] Windows only: VMware Tools Image #}
    "vmtools_iso_path": "[cls-wld9-vsan01] isos/vmtools-windows.iso" {# Update this #}
}
</code></pre>
<p><strong>default-args-windows.j2 -</strong> all that needs to be added here is the <code>windows_admin_password</code></p>
<pre><code class="language-json">{
  "additional_executables_destination_path": "C:\\ProgramData\\Temp",
  "additional_executables_list": "http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/bin/windows/amd64/registry.exe,http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/bin/windows/amd64/goss.exe",
  "additional_executables": "true",
  "additional_url_images": "false",
  "additional_url_images_list": "",
  "additional_prepull_images": "",
  "build_version": "{{ os_type }}-kube-{{ kubernetes_series }}-{{ ova_ts_suffix }}",
  "cloudbase_init_url": "http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/bin/windows/amd64/CloudbaseInitSetup_x64.msi",
  "cloudbase_real_time_clock_utc": "true",
  "containerd_url": "http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/bin/windows/amd64/cri-containerd.tar",
  "containerd_sha256_windows": "{{ containerd_sha256_windows_amd64 }}",
  "containerd_version": "{{ containerd }}",
  "convert_to_template": "true",
  "create_snapshot": "false",
  "disable_hypervisor": "false",
  "disk_size": "40960",
  "kubernetes_base_url": "http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/bin/windows/amd64",
  "kubernetes_series": "{{ kubernetes_series }}",
  "kubernetes_semver": "{{ kubernetes_version }}",
  "kubernetes_typed_version": "{{ image_version }}",
  "load_additional_components": "true",
  "netbios_host_name_compatibility": "false",
  "nssm_url": "http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/bin/windows/amd64/nssm.exe",
  "prepull": "false",
  "pause_image": "localhost:5000/vmware.io/pause:{{ pause }}",
  "runtime": "containerd",
  "template": "",
  "unattend_timezone": "Pacific Standard Time",
  "windows_updates_categories": "",
  "windows_updates_kbs": "",
  "wins_url": "",
  "custom_role": "true",
  "custom_role_names": "/image-builder/images/capi/image/ansible-windows",
  "ansible_user_vars": "ansible_winrm_read_timeout_sec=600 ansible_winrm_operation_timeout_sec=590 artifacts_container_url=http://{{ host_ip }}:{{ artifacts_container_port }} imageVersion={{ image_version|replace('-', '.') }} registry_store_archive_url=http://{{ host_ip }}:{{ artifacts_container_port }}/artifacts/{{ kubernetes_version }}/registries/{{ registry_store_path }}",
  "vmx_version": "21",
  "debug_tools": "false",
  "enable_auto_kubelet_service_restart": "false",
  "windows_admin_password": "VMware123!" {# Update this #}
}
</code></pre>
<h2>Running the Build</h2>
<p>Now that all of the specific settings are in place, we can run the build. This will output an OVA that can then be uploaded to a content library and used in a VKS cluster.</p>
<ol>
<li>Start the artifacts container and run the build. The <code>TKR_SUFFIX</code> should match the suffix of your current Linux VKR versions; this is used when resolving the correct OVA from the content library. The <code>HOST_IP</code> should be the IP of the workstation that this command is running from. The <code>IMAGE_ARTIFACTS_PATH</code> is where you want the OVA to be created. The <code>AUTO_UNATTEND_ANSWER_FILE_PATH</code> is the path to your answers file.</li>
</ol>
<pre><code class="language-shell"> ##start the artifacts container

make run-artifacts-container ARTIFACTS_CONTAINER_PORT=8081
</code></pre>
<pre><code class="language-shell">##run this from the vks-image-builder directory
make build-node-image OS_TARGET=windows-2022-efi TKR_SUFFIX=vkr.4 HOST_IP=10.0.0.180 IMAGE_ARTIFACTS_PATH=/home/will/windows-build-2/image ARTIFACTS_CONTAINER_PORT=8081 PACKER_HTTP_PORT=8082 AUTO_UNATTEND_ANSWER_FILE_PATH=/home/will/windows-build-2/vks-image-builder/windows_autounattend.xml
</code></pre>
<ol>
<li>Create a content library and upload the OVA.</li>
</ol>
<pre><code class="language-shell">## update the datastore with your datastore as well as the path to the ova
govc library.create -ds "cls-wld9-vsan01" "windows-vkrs"
govc library.import windows-vkrs  ./image/ovas/windows-2022-amd64-v1.34.1---vmware.1-vkr.4.ova
</code></pre>
<h2>Setting up the content library</h2>
<p>Before deploying a cluster, we need to associate our new content library with the supervisor. We can do this by going to the supervisor settings and, under General, adding another content library.</p>
<div>
<div>💡</div>
<div><strong>Note: when you do this there is a warning about multiple content libraries and needing to disambiguate between them. Depending on how you build clusters, you may need to add the correct annotations to specify the content library. This </strong><a target="_self" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://techdocs.broadcom.com/us/en/vmware-cis/vcf/vsphere-supervisor-services-and-standalone-components/latest/managing-vsphere-kubernetes-service/administering-kubernetes-releases-for-tkg-service-clusters/understanding-tkr-resolution/resolve-os-image-conflicts.html" style="pointer-events:none"><strong>doc </strong></a><strong>has more details.</strong></div>
</div>

<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769442508513/54b8836f-34b4-424c-b070-9628bf4f8e39.png" alt="" style="display:block;margin:0 auto" />

<p>Verify that the image is showing up in the supervisor. Run this from the supervisor cluster context.</p>
<pre><code class="language-bash">k get osimages -A | grep windows
vmi-4aacfc6e370e28a1e   v1.34.1+vmware.1           windows   2022         amd64   cvmi                13s
</code></pre>
<pre><code class="language-bash">k get cclitem -A | grep windows
clitem-4aacfc6e370e28a1e   windows-2022-amd64-v1.34.1---vmware.1-vkr.4                                 cl-c80585866b93825a5       OVF    True    true     20615149892   true                100s
</code></pre>
<p>Take note of the content library ID for future use in cluster builds.</p>
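<p>If you would rather script that step, the library ID is the third column of the <code>cclitem</code> output above. A minimal sketch of extracting it with awk; the column position is an assumption based on the output shown in this post:</p>

```shell
# Pull the content-library ID (3rd column) from a `kubectl get cclitem` line.
# Column position assumed from the cclitem output shown above.
library_id_of() { awk '{print $3}' <<<"$1"; }

line='clitem-4aacfc6e370e28a1e   windows-2022-amd64-v1.34.1---vmware.1-vkr.4   cl-c80585866b93825a5   OVF   True    true     20615149892   true   100s'
library_id_of "$line"   # prints cl-c80585866b93825a5
```

<p>In practice you would feed it live output, e.g. piping <code>k get cclitem -A --no-headers | grep windows</code> into the function.</p>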
<h2>Deploy a cluster</h2>
<p>Now that we have verified the image is available, we can deploy a cluster. There are a number of ways to do this, but for the purpose of this post I am just going to share the yaml for the cluster and you can deploy it however you would like. The main thing to note below is the Windows node pool along with the annotation that tells it which content library to use.</p>
<pre><code class="language-yaml">apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: windows
  namespace: dev-ts6hw
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.156.0/20
    services:
      cidrBlocks:
        - 10.96.0.0/12
    serviceDomain: cluster.local
  topology:
    class: builtin-generic-v3.5.0
    classNamespace: vmware-system-vks-public
    version: v1.34.1---vmware.1-vkr.4
    variables:
      - name: kubernetes
        value:
          certificateRotation:
            enabled: true
            renewalDaysBeforeExpiry: 90
      - name: vmClass
        value: best-effort-small
      - name: storageClass
        value: vsan-default-storage-policy
    controlPlane:
      replicas: 1
      metadata:
        annotations:
          run.tanzu.vmware.com/resolve-os-image: os-name=photon, content-library=cl-a3b5e06b12f2737ca
    workers:
      machineDeployments:
        - class: node-pool
          name: windows-np-j72b
          replicas: 1
          variables:
            overrides:
              - name: vmClass
                value: best-effort-xsmall
        - class: node-pool
          name: windows-nodepool-l7rs
          replicas: 1
          metadata:
            annotations:
              run.tanzu.vmware.com/resolve-os-image: os-type=windows, content-library=cl-c80585866b93825a5
          variables:
            overrides:
              - name: vmClass
                value: best-effort-large
</code></pre>
<h2>Customizing images with Ansible</h2>
<p>Sometimes it may be necessary to modify the image with custom scripts or binaries. This should be used with caution and only for things that cannot be done through a native K8s operator at runtime. It also should not be used for anything that would require embedding passwords into the OVA. With those caveats out of the way, let’s add some Ansible to modify the image build.</p>
<p>In this example we will add a simple Ansible task that runs a PowerShell script. The script does not perform any meaningful actions; it simply shows how to set this up so that you can use a similar process for anything you need to install.</p>
<ol>
<li>Add a new file to the <code>vks-image-builder/ansible-windows/tasks</code> folder. This <code>ansible-windows</code> folder is the Ansible role that executes by default during the build process.</li>
</ol>
<pre><code class="language-bash">touch vks-image-builder/ansible-windows/tasks/exec-pwsh.yml
</code></pre>
<ol>
<li>Add the contents of the Ansible task to the new file.</li>
</ol>
<pre><code class="language-yaml">- name: Execute custom BYOI script
  ansible.builtin.script: scripts/helloworld.ps1
</code></pre>
<ol>
<li>Add the PowerShell script to the <code>files</code> directory of the Ansible role.</li>
</ol>
<pre><code class="language-bash">touch vks-image-builder/ansible-windows/files/scripts/helloworld.ps1
</code></pre>
<ol>
<li>Add the contents of the script to the file.</li>
</ol>
<pre><code class="language-powershell">Write-Output "Hello, World!"
</code></pre>
<ol>
<li>Update the <code>main.yml</code> to include your new task file.</li>
</ol>
<pre><code class="language-yaml">
- import_tasks: exec-pwsh.yml
</code></pre>
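<p>For context, after this change the tail of <code>vks-image-builder/ansible-windows/tasks/main.yml</code> would look something like the snippet below. The neighboring task file name is a placeholder for whatever imports already exist in the role, not its actual contents:</p>

```yaml
# tasks/main.yml (tail of the file)
- import_tasks: existing-task.yml   # placeholder for tasks already in the role
- import_tasks: exec-pwsh.yml       # our new custom task
```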
<h2>Updating cluster nodes for patching</h2>
<p>Since these are images that you maintain, you may need to update an image with a one-off patch, sometimes without a new K8s version to go along with it. Patching with a new K8s version is pretty simple: you build a new image with the new K8s version and then update the K8s version in your cluster definition. But what about the case where there are no changes upstream from Broadcom? That’s what we will cover here. In this case we want to patch or change something on the image and roll that out to our clusters without changing anything else.</p>
<p>To do this, we need to understand a bit about how an OVA in a content library is resolved by VKS.</p>
<ol>
<li><p>The VKR version selected in the cluster definition. This looks something like <code>v1.34.1---vmware.1-vkr.4</code>: version 1.34.1 of K8s with a patch release of <code>vmware.1-vkr.4</code>. This suffix is used during the image resolution process.</p>
</li>
<li><p>The OS. This is set on the node pools: usually Photon, Ubuntu, or Windows.</p>
</li>
<li><p>The OS version. For Ubuntu we might see 22.04; for Windows we would see 2022.</p>
</li>
<li><p>The content library setting.</p>
</li>
</ol>
<p>When these are combined, VKS looks at the available OS images that match the right VKR version and OS settings, then pulls the right OVA from the content library to create the node.</p>
<p>Now the challenge we run into is that if we just bump our patch suffix to something like <code>vkr.5</code>, the Windows nodes would resolve when we build a cluster, but if our Linux nodes don’t also have a <code>vkr.5</code> patch release then VKS won’t find the right images for them. We also need to make sure our content library item follows the standard naming convention so that it is picked up by VKS; the name generated by the image builder is already in the proper format (<code>windows-2022-amd64-v1.34.1---vmware.1-vkr.4</code>). Since there is already an OVA in the content library with that name, we can’t upload the new one unless we want to overwrite it. Overwriting the image is a perfectly acceptable way to roll out a new patch, but I would prefer to have more control over the process, so we need another way to manage this. This is where the content library setting comes in: we can create a new content library for our patch, upload the new OVA, and then selectively roll out the new node image.</p>
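<p>The naming convention itself can be sketched as a simple string assembly. The <code>---</code> standing in for <code>+</code> is inferred from the generated names in this post, so treat this as illustrative rather than authoritative:</p>

```shell
# Assemble the content library item name that VKS resolves against.
# The "+" in the K8s semver appears as "---" in the item/OVA name.
os="windows"; os_version="2022"; arch="amd64"
k8s_version="v1.34.1+vmware.1"   # base VKR version
suffix="vkr.4"                   # the TKR_SUFFIX used during the build

item_name="${os}-${os_version}-${arch}-${k8s_version//+/---}-${suffix}"
echo "$item_name"   # windows-2022-amd64-v1.34.1---vmware.1-vkr.4
```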
<ol>
<li>Create a new content library for the patch and upload the new image.</li>
</ol>
<pre><code class="language-bash">govc library.create -ds "cls-wld9-vsan01" "windows-patch-12626"
govc library.import windows-patch-12626  ./image/ovas/windows-2022-amd64-v1.34.1---vmware.1-vkr.4.ova
</code></pre>
<ol>
<li>Update the supervisor configuration to add the new content library.</li>
</ol>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769477511048/e323333c-6fa9-4596-8c1d-8fb05ff51e1e.png" alt="" style="display:block;margin:0 auto" />

<ol>
<li>Update your cluster yaml to use the new content library ID.</li>
</ol>
<pre><code class="language-bash">k get cclitem -A | grep -i windows
</code></pre>
<pre><code class="language-yaml"> run.tanzu.vmware.com/resolve-os-image: os-type=windows, content-library=cl-1c5b6ba4a0e41aa16
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Auditing CNS Volumes in VCF]]></title><description><![CDATA[When deploying VKS clusters, VMs, or Pods that use extra volumes in VCF with the Supervisor, the volumes are managed through PVCs and in turn PVs. These PVs create CNS(Cloud Native Storage) volumes which integrate vSphere and Kubernetes to enable the...]]></description><link>https://blog.warroyo.com/auditing-cns-volumes-in-vcf</link><guid isPermaLink="true">https://blog.warroyo.com/auditing-cns-volumes-in-vcf</guid><category><![CDATA[cloud native]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[vsphere]]></category><category><![CDATA[Vcf]]></category><category><![CDATA[cns]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Tue, 06 Jan 2026 18:11:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767722380333/4dca93e5-4a80-4da1-8c81-caf0b1b62166.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When deploying VKS clusters, VMs, or Pods that use extra volumes in VCF with the Supervisor, the volumes are managed through PVCs and in turn PVs. These PVs create <a target="_blank" href="https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/container-storage-plugin/3-0/getting-started-with-vmware-vsphere-container-storage-plug-in-3-0/vsphere-container-storage-plug-in-concepts.html">CNS(Cloud Native Storage)</a> volumes which integrate vSphere and Kubernetes to enable the creation and management of container volumes in a vSphere environment. This is an extremely common thing to do when deploying a VKS cluster for example , you usually add additional volumes to the nodes for additional storage of images( <code>var/lib/containerd</code>). This additional volume is managed through a PVC and ultimately a PV.</p>
<p>One challenge with this from an administrator point of view is getting insight into all of the volumes and what they are being used for and where. vCenter does have a native UI to explore these and filter them and <a target="_blank" href="https://github.com/vmware/govmomi/blob/main/govc/USAGE.md#volumels">govc</a> can also be used to query them with the CLI, however many times I want to see these volumes from a namespace and PVC based view. It can also be challenging to associate the volumes with the PVC when looking through the raw details from the govc output. That’s why I created <a target="_blank" href="https://github.com/warroyo/cns-tooling/tree/main/pvc-audit">this simple python script</a> to help with getting the details for volumes based on a Supervisor namespace.</p>
<p>The script uses both kubectl output and govc output and correlates the PVCs to the underlying CNS volumes. It then returns relevant details about the volumes and sorts them between node volumes and volumes that are being used in the clusters for workload PVCs.</p>
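<p>The correlation the script performs can be sketched in miniature with plain shell tools: join the PV volume handle (which kubectl reports) to the CNS volume record (which govc reports) on the shared handle. The data below is hard-coded sample data for illustration, not live kubectl or govc output:</p>

```shell
# Miniature PVC -> CNS correlation on sample data.
# Real input would come from kubectl (PV volume handles) and govc (CNS volumes).
pvcs="pvc-6b57785d order-pvc
pvc-9e7d6921 node-vol-1"    # format: <volume-handle> <pvc-name>
cns="pvc-6b57785d 2b2deeab
pvc-9e7d6921 560fd41e"      # format: <volume-handle> <cns-volume-id>

# join(1) requires sorted input; field 1 is the shared volume handle.
join <(sort <<<"$pvcs") <(sort <<<"$cns")
```

<p>The real script does this at scale and adds the referred-entity details, but the join on the volume handle is the core idea.</p>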
<h2 id="heading-example-usage">Example Usage</h2>
<p>Let’s say I have a supervisor namespace called <code>gitops-932tv</code>, and in that namespace I have a VKS cluster deployed. In that cluster I have an app deployed that uses a number of persistent disks. As an administrator I may need to determine where these volumes are located on my datastores and also get some details like their volume IDs in case I need to relocate them. To get this info I can simply run the following.</p>
<ol>
<li><p>Set up the kube context and govc details</p>
<pre><code class="lang-bash"> <span class="hljs-comment"># sets the kube context</span>
 vcf context use &lt;supervisor-context&gt;
 <span class="hljs-built_in">export</span> GOVC_INSECURE=<span class="hljs-literal">true</span>
 <span class="hljs-built_in">export</span> GOVC_PASSWORD=<span class="hljs-string">'password'</span>
 <span class="hljs-built_in">export</span> GOVC_USERNAME=administrator@vsphere.local
 <span class="hljs-built_in">export</span> GOVC_URL=https://vcsa.myorg.com
</code></pre>
</li>
<li><p>Clone the repo and run the script</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/warroyo/cns-tooling
 <span class="hljs-built_in">cd</span> cns-tooling/pvc-audit
 python3 vks_disk_audit.py gitops-932tv
</code></pre>
</li>
</ol>
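<p>Before running the script, it can save a failed run to check that the govc variables from step 1 are actually exported. A small guard like the following (variable names taken from the step above; <code>GOVC_INSECURE</code> is optional so it is not checked) fails fast with a message when one is missing:</p>

```shell
# check_govc_env: return non-zero if any required govc variable is empty.
check_govc_env() {
  local var
  for var in GOVC_URL GOVC_USERNAME GOVC_PASSWORD; do
    if [ -z "${!var}" ]; then
      echo "missing required env var: $var" >&2
      return 1
    fi
  done
}

# With the exports from step 1 in place this prints "ok".
GOVC_URL=https://vcsa.myorg.com GOVC_USERNAME=administrator@vsphere.local \
  GOVC_PASSWORD='password' check_govc_env && echo ok
```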
<p>With those two steps complete, the output will look similar to this. You will notice that we get two separate sections: one for node volumes and one for in-cluster volumes. The node volumes section shows which node each volume is attached to, as well as details about the size, cluster, datastore, etc. The in-cluster volumes are typically PVCs used by workloads in the cluster; these show similar details but also include the referred-entity metadata about the PVC name in the cluster and which pod is using it.</p>
<pre><code class="lang-bash">--- Starting Audit <span class="hljs-keyword">for</span> Namespace: gitops-932tv ---
Mapping VSphereMachines to Clusters...
Mapped 2 nodes across clusters.
Querying all PVCs <span class="hljs-keyword">in</span> namespace...
Found 7 PVC(s). Resolving PV handles...
Querying vSphere CNS <span class="hljs-keyword">for</span> 7 volumes...


=======================================================================================
                             NODE VOLUMES (Attached)
=======================================================================================
PVC Name                       Node                           Cluster              Volume Name                         Volume ID                                Datastore            Capacity   Referred Entity
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
tfd-1-8q26s-ql69w-vol-b9xl     tfd-1-8q26s-ql69w              tfd-1                pvc-9e7d6921-d573-4203-ad8a-9e420f490446 560fd41e-f243-4c96-997e-8bf7b7996e95     vsan:8740804e6737443c-b388c53757aaaf93/ 20.00 GB   -
tfd-1-tfd-1-jrxdt-pmhmn-cv4hj-vol-b9xl tfd-1-tfd-1-jrxdt-pmhmn-cv4hj  tfd-1                pvc-2417b407-2bed-4cdb-88fb-7b1ebf6653ad ecfffa9d-483c-4fe1-a391-e4fe1986e52d     vsan:8740804e6737443c-b388c53757aaaf93/ 20.00 GB   -


=======================================================================================
                             IN-CLUSTER PVCs
=======================================================================================
PVC Name                       Cluster              Volume Name                         Volume ID                                Datastore            Capacity   Referred Entity
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
f478b761-3832-4c56-8cc7-10b44e5a968b-1fb60ee1-66e8-4ce6-97a6-c7cb12070bdf tfd-1                pvc-6b57785d-828b-4f1d-ab32-7c617e60a908 2b2deeab-53cf-4551-bba4-e787dc1567d8     vsan:8740804e6737443c-b388c53757aaaf93/ 1.00 GB    PVC:music-store/order-pvc, Pod:order-service-69987cbbfd-mlj2c
f478b761-3832-4c56-8cc7-10b44e5a968b-232680b1-d8d7-4eaa-9e50-66a12bb0fb0a tfd-1                pvc-788a9fa5-6db8-4151-ad5b-3f32c5dcda16 b2c6ac20-b002-4f0a-948b-4fccef7d8c3a     vsan:8740804e6737443c-b388c53757aaaf93/ 1.00 GB    PVC:music-store/cart-pvc, Pod:cart-service-7cc8794c86-x2m6v
f478b761-3832-4c56-8cc7-10b44e5a968b-3519cba5-07ba-45a5-b329-e647a3500c34 tfd-1                pvc-b7ec835b-ef13-4b56-9fae-4f6b2c6395ce 5fd5d74c-2b77-4ecb-ae95-c4a22ae99111     vsan:8740804e6737443c-b388c53757aaaf93/ 1.00 GB    Pod:postgres-bcf8997c4-89pkj, PVC:music-store/postgres-pvc
f478b761-3832-4c56-8cc7-10b44e5a968b-7f7b7933-d67a-479c-8197-47c61708f1c6 tfd-1                pvc-92f87c6b-cfdc-4377-8a79-11d6dd6b3b1a 6fe37d12-9c30-4ab5-aaf7-7f9276e3ba49     vsan:8740804e6737443c-b388c53757aaaf93/ 1.00 GB    PVC:music-store/music-store-1-pvc
f478b761-3832-4c56-8cc7-10b44e5a968b-d1d19249-7cb0-4c7d-914e-ec866e998aaa tfd-1                pvc-06cbb085-57c3-4b27-856a-713b45e0c53b 0f28021c-c85c-4f36-b9de-faa26fedb232     vsan:8740804e6737443c-b388c53757aaaf93/ 1.00 GB    PVC:music-store/users-pvc, Pod:users-service-855678b958-9zcdb

--- Audit Complete ---
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Templating cluster creation with  Tanzu Mission Control]]></title><description><![CDATA[Overview
I have had a question come up a few times with customers and coworkers about how to reduce duplication when creating clusters with Tanzu Mission Control(TMC). The question or issue that is usually brought up is that the platform engineering ...]]></description><link>https://blog.warroyo.com/templating-cluster-creation-with-tanzu-mission-control</link><guid isPermaLink="true">https://blog.warroyo.com/templating-cluster-creation-with-tanzu-mission-control</guid><category><![CDATA[TANZU]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[vmware]]></category><category><![CDATA[tmc]]></category><category><![CDATA[tkg]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Thu, 02 Nov 2023 15:28:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1698938855211/ef5c3194-bcf0-43c8-886d-a32165ca3474.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview</h1>
<p>I have had a question come up a few times with customers and coworkers about how to reduce duplication when creating clusters with Tanzu Mission Control(TMC). The issue that is usually brought up is that the platform engineering team wants to be able to create clusters quickly, and many of the settings are exactly the same from one cluster to the next, resulting in a lot of duplication. Looking at the TMC UI, there is no way today to set custom defaults that would remove the need to fill in every field each time you create a cluster. However, using the UI is probably not the approach a platform team wants to take to scale anyway; it is much more efficient to codify the clusters and automate the creation. In this post, we will walk through creating cluster templates and using the Tanzu CLI to create clusters with minimal inputs. We will focus mostly on TKG clusters, but I will also provide some commands that work with AKS and EKS clusters as well.</p>
<h1 id="heading-brief-note-on-the-tanzu-cli">Brief note on the Tanzu CLI</h1>
<p>The Tanzu CLI can now be installed through standard package managers or directly via the binary from GitHub, see the install <a target="_blank" href="https://github.com/vmware-tanzu/tanzu-cli/blob/main/docs/quickstart/install.md#from-the-binary-releases-in-github-project">instructions here</a>. The TMC standalone CLI is in the process of being deprecated so you will want to use the new plugins for the Tanzu CLI. They will be installed when you <a target="_blank" href="https://docs.vmware.com/en/VMware-Tanzu-Mission-Control/services/tanzu-cli-ref-tmc/install-cli.html">connect the Tanzu CLI to your TMC endpoint</a>.</p>
<h1 id="heading-templating-clusters">Templating clusters</h1>
<h2 id="heading-getting-the-base-template">Getting the base template</h2>
<p>The CLI provides a way to get an existing cluster's spec; I always recommend going with this approach over trying to create one from scratch. Before we start templating the cluster, create a cluster through the UI. Then use the command below to pull the cluster's yaml spec and save it in a file.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># for TKG </span>
tanzu tmc cluster get &lt;cluster-name&gt; -m &lt;mgmt-cluster&gt; -p &lt;provisioner&gt; &gt; template.yml
<span class="hljs-comment">## for EKS</span>
tanzu tmc ekscluster get &lt;cluster-name&gt; -c &lt;credential&gt; -r &lt;region&gt; &gt; template.yml
<span class="hljs-comment">## for AKS</span>
tanzu tmc akscluster get &lt;cluster-name&gt; -c &lt;credential&gt; -r &lt;resource-group&gt; -s &lt;subscription&gt; &gt; template.yml
</code></pre>
<p>The contents of the above command will look slightly different depending on the k8s provider you are using, but all of them will be a yaml file that describes all of the settings for the cluster.</p>
<h2 id="heading-remove-any-extra-fields">Remove any extra fields</h2>
<p>Just like when pulling back a resource from a k8s cluster, there will be fields that we want to omit from our template, for example the <code>status</code> section. This will be different between providers, but because we are focused on TKG in this example I have listed the fields that I omitted below. Technically many of these fields do not need to be removed since the API will just ignore them, but since we are creating a template and don't want a bunch of extra stuff that might be confusing, we will remove anything that is not needed.</p>
<ul>
<li><p><code>fullname.orgId</code></p>
</li>
<li><p><code>meta</code></p>
<ul>
<li><p><code>labels</code></p>
<ul>
<li><code>tmc.cloud.vmware.com/creator</code></li>
</ul>
</li>
<li><p><code>annotations</code></p>
</li>
<li><p><code>creationTime</code></p>
</li>
<li><p><code>generation</code></p>
</li>
<li><p><code>resourceVersion</code></p>
</li>
<li><p><code>uid</code></p>
</li>
<li><p><code>updateTime</code></p>
</li>
</ul>
</li>
<li><p><code>spec.topology.variables</code></p>
<ul>
<li><p><code>extensionCert</code></p>
</li>
<li><p><code>user</code> - if you would like to provide your ssh public key, keep this one.</p>
</li>
<li><p><code>clusterEncryptionConfigYaml</code></p>
</li>
<li><p><code>TKR_DATA</code></p>
</li>
</ul>
</li>
<li><p><code>status</code> - entire section</p>
</li>
</ul>
<p>All of the fields above are generated by TMC when the cluster is created. Your YAML file should now look similar to the one below.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">fullName:</span>
  <span class="hljs-attr">managementClusterName:</span> <span class="hljs-string">h2o-4-19340</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">tmc-base-template</span>
  <span class="hljs-attr">provisionerName:</span> <span class="hljs-string">lab</span>
<span class="hljs-attr">meta:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">example-label:</span> <span class="hljs-string">example</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">clusterGroupName:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">tmcManaged:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">topology:</span>
    <span class="hljs-attr">clusterClass:</span> <span class="hljs-string">tanzukubernetescluster</span>
    <span class="hljs-attr">controlPlane:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">annotations:</span>
          <span class="hljs-attr">example-cp-annotation:</span> <span class="hljs-string">example</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">example-cp-label:</span> <span class="hljs-string">example</span>
      <span class="hljs-attr">osImage:</span>
        <span class="hljs-attr">arch:</span> <span class="hljs-string">amd64</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">ubuntu</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">"20.04"</span>
      <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
    <span class="hljs-attr">network:</span>
      <span class="hljs-attr">pods:</span>
        <span class="hljs-attr">cidrBlocks:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-number">172.20</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
      <span class="hljs-attr">serviceDomain:</span> <span class="hljs-string">cluster.local</span>
      <span class="hljs-attr">services:</span>
        <span class="hljs-attr">cidrBlocks:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
    <span class="hljs-attr">nodePools:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">info:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">md-0</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">class:</span> <span class="hljs-string">node-pool</span>
        <span class="hljs-attr">metadata:</span>
          <span class="hljs-attr">labels:</span>
            <span class="hljs-attr">exmaple-np-label:</span> <span class="hljs-string">example</span>
        <span class="hljs-attr">osImage:</span>
          <span class="hljs-attr">arch:</span> <span class="hljs-string">amd64</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">ubuntu</span>
          <span class="hljs-attr">version:</span> <span class="hljs-string">"20.04"</span>
        <span class="hljs-attr">overrides:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">vmClass</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">best-effort-large</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">storageClass</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">vc01cl01-t0compute</span>
        <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">variables:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">defaultStorageClass</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">vc01cl01-t0compute</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">storageClass</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">vc01cl01-t0compute</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">storageClasses</span>
      <span class="hljs-attr">value:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">vc01cl01-t0compute</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">vmClass</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">best-effort-large</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ntp</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">time1.oc.vmware.com</span>
    <span class="hljs-attr">version:</span> <span class="hljs-string">v1.23.8+vmware.2-tkg.2-zshippable</span>
<span class="hljs-attr">type:</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">TanzuKubernetesCluster</span>
  <span class="hljs-attr">package:</span> <span class="hljs-string">vmware.tanzu.manage.v1alpha1.managementcluster.provisioner.tanzukubernetescluster</span>
  <span class="hljs-attr">version:</span> <span class="hljs-string">v1alpha1</span>
</code></pre>
<h2 id="heading-templating-with-ytt">Templating with YTT</h2>
<p>There are many options for templating files, but since this is a YAML file we will use <a target="_blank" href="https://carvel.dev/ytt/">Carvel YTT</a> for this example. I would highly recommend reading up on YTT and trying it out for different use cases; it's a very powerful YAML templating tool.</p>
<h3 id="heading-determine-the-variable-fields">Determine the variable fields</h3>
<p>First, we need to determine which fields should be variable. This could be any field, but we also want to reuse as much as possible. These fields are entirely up to you and your needs.</p>
<h3 id="heading-template-the-fields-with-ytt">Template the fields with YTT</h3>
<p>The <a target="_blank" href="https://carvel.dev/ytt/#example:example-load-data-values">YTT docs</a> explain how to use data values in a YAML file; this is what we will use to template the file. Starting from the same file as above, the template below is what I came up with.</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#@ load("@ytt:data", "data")</span>
<span class="hljs-attr">fullName:</span>
  <span class="hljs-attr">managementClusterName:</span> <span class="hljs-comment">#@ data.values.mgmt_cluster_name</span>
  <span class="hljs-attr">name:</span> <span class="hljs-comment">#@ data.values.cluster_name</span>
  <span class="hljs-attr">provisionerName:</span> <span class="hljs-comment">#@ data.values.provisioner</span>
<span class="hljs-attr">meta:</span>
  <span class="hljs-comment">#@ if/end hasattr( data.values, "cluster_labels"):</span>
  <span class="hljs-attr">labels:</span> <span class="hljs-comment">#@ data.values.cluster_labels</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">clusterGroupName:</span> <span class="hljs-comment">#@ data.values.cluster_group</span>
  <span class="hljs-attr">tmcManaged:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">topology:</span>
    <span class="hljs-attr">clusterClass:</span> <span class="hljs-string">tanzukubernetescluster</span>
    <span class="hljs-attr">controlPlane:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-comment">#@ if/end hasattr( data.values, "cp_annotations"):</span>
        <span class="hljs-attr">annotations:</span> <span class="hljs-comment">#@ data.values.cp_annotations</span>
        <span class="hljs-comment">#@ if/end hasattr( data.values, "cp_labels"):</span>
        <span class="hljs-attr">labels:</span> <span class="hljs-comment">#@ data.values.cp_labels</span>
      <span class="hljs-attr">osImage:</span>
        <span class="hljs-attr">arch:</span> <span class="hljs-string">amd64</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">ubuntu</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">"20.04"</span>
      <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
    <span class="hljs-attr">network:</span>
      <span class="hljs-attr">pods:</span>
        <span class="hljs-attr">cidrBlocks:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-number">172.20</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
      <span class="hljs-attr">serviceDomain:</span> <span class="hljs-string">cluster.local</span>
      <span class="hljs-attr">services:</span>
        <span class="hljs-attr">cidrBlocks:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
    <span class="hljs-attr">nodePools:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">info:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">md-0</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">class:</span> <span class="hljs-string">node-pool</span>
        <span class="hljs-attr">metadata:</span>
          <span class="hljs-comment">#@ if/end hasattr( data.values, "node_labels"):</span>
          <span class="hljs-attr">labels:</span> <span class="hljs-comment">#@ data.values.node_labels</span>
        <span class="hljs-attr">osImage:</span>
          <span class="hljs-attr">arch:</span> <span class="hljs-string">amd64</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">ubuntu</span>
          <span class="hljs-attr">version:</span> <span class="hljs-string">"20.04"</span>
        <span class="hljs-attr">overrides:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">vmClass</span>
          <span class="hljs-attr">value:</span> <span class="hljs-comment">#@ data.values.cp_vm_size</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">storageClass</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">vc01cl01-t0compute</span>
        <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">variables:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">defaultStorageClass</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">vc01cl01-t0compute</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">storageClass</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">vc01cl01-t0compute</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">storageClasses</span>
      <span class="hljs-attr">value:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">vc01cl01-t0compute</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">vmClass</span>
      <span class="hljs-attr">value:</span> <span class="hljs-comment">#@ data.values.worker_vm_size</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ntp</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">time1.oc.vmware.com</span>
    <span class="hljs-attr">version:</span> <span class="hljs-string">v1.23.8+vmware.2-tkg.2-zshippable</span>
<span class="hljs-attr">type:</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">TanzuKubernetesCluster</span>
  <span class="hljs-attr">package:</span> <span class="hljs-string">vmware.tanzu.manage.v1alpha1.managementcluster.provisioner.tanzukubernetescluster</span>
  <span class="hljs-attr">version:</span> <span class="hljs-string">v1alpha1</span>
</code></pre>
<p>You can see that a number of fields now have YTT logic in them. Here is a quick breakdown of what we are doing.</p>
<ul>
<li><p><code>#@ load("@ytt:data", "data")</code> - tell ytt to load data values into the data object</p>
</li>
<li><p><code>#@ data.values.mgmt_cluster_name</code> - I won't go through every variable, but this syntax is used to pull a value out of the values file that we will create in the next section.</p>
</li>
<li><p><code>#@ if/end hasattr( data.values, "cp_annotations"):</code> - this syntax is also used a few times; it checks whether our values file has a given field and, if it does, adds the field below it. This is used because certain fields are optional.</p>
</li>
</ul>
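<p>To see the conditional pattern from the breakdown above in isolation, here is a minimal standalone sketch (the <code>optional_labels</code> field name is just an illustration, not a field from the template above):</p>
<pre><code class="lang-yaml">#@ load("@ytt:data", "data")
metadata:
  name: example
  #@ if/end hasattr(data.values, "optional_labels"):
  labels: #@ data.values.optional_labels
</code></pre>
<p>Rendering this with a values file that omits <code>optional_labels</code> simply drops the <code>labels</code> key from the output, which is exactly how the optional fields in the full template behave.</p>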
<p>There is a lot more that can be done when templating with YTT; this is a fairly basic example. The YTT docs have many examples that can be referenced.</p>
<h3 id="heading-create-a-values-file">Create a values file</h3>
<p>The values file is what we will use to specify values for all of the fields we have templated. It is really just a YAML file with a single line of YTT at the top that lets the YTT engine know the fields are to be used as data values. Since this is plain YAML, you can also have nested fields, etc. Below is the values file created to work with the above template.</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#@data/values</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">mgmt_cluster_name:</span> <span class="hljs-string">h2o-4-19340</span>
<span class="hljs-attr">cluster_name:</span> <span class="hljs-string">cluster-from-template</span>
<span class="hljs-attr">provisioner:</span> <span class="hljs-string">lab</span>
<span class="hljs-attr">cp_vm_size:</span> <span class="hljs-string">best-effort-large</span>
<span class="hljs-attr">worker_vm_size:</span> <span class="hljs-string">best-effort-large</span>
<span class="hljs-attr">cluster_group:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">cluster_labels:</span>
  <span class="hljs-attr">test:</span> <span class="hljs-string">test</span>
</code></pre>
<h1 id="heading-create-a-new-cluster">Create a new cluster</h1>
<p>Finally, we can combine these two files into a command that will generate our cluster configuration and then apply it to TMC.</p>
<p>If you want to test the output of the templating before sending it to TMC, you can simply run the command below, which will generate the resulting YAML and print it to <code>stdout</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-string">ytt</span> <span class="hljs-string">-f</span> <span class="hljs-string">values.yml</span> <span class="hljs-string">-f</span> <span class="hljs-string">template.yml</span>
</code></pre>
<p>The next command will generate the resulting YAML and, instead of sending it to <code>stdout</code>, pass it directly to the Tanzu CLI to start creating the cluster.</p>
<pre><code class="lang-yaml"><span class="hljs-string">tanzu</span> <span class="hljs-string">tmc</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f</span>  <span class="hljs-string">&lt;(ytt</span> <span class="hljs-string">-f</span> <span class="hljs-string">values.yml</span> <span class="hljs-string">-f</span> <span class="hljs-string">template.yml)</span>
</code></pre>
<p>If you are using EKS or AKS the command is slightly different, since those cluster types are not yet supported by the <code>apply</code> command. Hopefully that support will be added soon. You can still do this with the <code>create</code> and <code>update</code> commands; see the examples below.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># redirecting output does not work currently for the create commands</span>

<span class="hljs-comment">#EKS</span>
<span class="hljs-string">ytt</span> <span class="hljs-string">-f</span> <span class="hljs-string">values.yml</span> <span class="hljs-string">-f</span> <span class="hljs-string">template.yml</span> <span class="hljs-string">&gt;</span> <span class="hljs-string">eks.yml</span>
<span class="hljs-string">tanzu</span> <span class="hljs-string">tmc</span> <span class="hljs-string">ekscluster</span> <span class="hljs-string">create</span> <span class="hljs-string">-f</span> <span class="hljs-string">eks.yml</span>

<span class="hljs-comment">#AKS</span>
<span class="hljs-string">ytt</span> <span class="hljs-string">-f</span> <span class="hljs-string">values.yml</span> <span class="hljs-string">-f</span> <span class="hljs-string">template.yml</span> <span class="hljs-string">&gt;</span> <span class="hljs-string">aks.yml</span>
<span class="hljs-string">tanzu</span> <span class="hljs-string">tmc</span> <span class="hljs-string">akscluster</span> <span class="hljs-string">create</span> <span class="hljs-string">-f</span> <span class="hljs-string">aks.yml</span>
</code></pre>
<h1 id="heading-summary">Summary</h1>
<p>In summary, this article should give you a good idea of how to make reusable templates for TMC-created clusters. This could even be used to create "plans" for clusters, making self-service easier for teams. An example of using this for self-service would be a pipeline that executes the apply commands, allowing developers, operators, etc. to manage their values files in a Git repo. This would provide a nice GitOps-driven way to create clusters on demand, with the ability to apply policy and restrict which fields can be changed.</p>
]]></content:encoded></item><item><title><![CDATA[Setting static IPs for workloads in TKG]]></title><description><![CDATA[A question comes up often of how can a static IP be set for workloads running in TKG. The answer is generally "It depends" and then followed by a series of questions about why it's needed and if there are alternatives that could be done etc. In many ...]]></description><link>https://blog.warroyo.com/setting-static-ips-for-workloads-in-tkg</link><guid isPermaLink="true">https://blog.warroyo.com/setting-static-ips-for-workloads-in-tkg</guid><category><![CDATA[TANZU]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[vmware]]></category><category><![CDATA[NSX]]></category><category><![CDATA[antrea]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Thu, 21 Sep 2023 17:50:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695318535218/a2d899fb-abf6-46ff-aa70-f64895e48132.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A question comes up often of how can a static IP be set for workloads running in TKG. The answer is generally "It depends" and then followed by a series of questions about why it's needed and if there are alternatives that could be done etc. In many scenarios, this is needed so that workloads running in a container on TKG can be identified by an external firewall and be allowed to talk to some external service. For example, maybe a workload needs to get access to a particular database and it has a strict access policy based on IP.</p>
<p>The overall solution depends on the full networking stack you are using with TKG, but a common challenge is doing this when you are using TKG with supervisor (TKGs) backed by NSX-T. The issue is that with this architecture, all traffic that leaves a supervisor namespace is routed through a namespace-specific T1 and has a SNAT rule that maps it to a single IP address. This means that all workloads in a supervisor namespace appear to the external network as the same IP.</p>
<p>In this article, we will walk through a solution to the problem mentioned above so that we can associate different external IPs with specific workloads running in TKG clusters.</p>
<h2 id="heading-the-solution">The solution</h2>
<p>To solve this problem we will use a combination of Antrea and NSX-T. Ultimately we need to be able to make sure that when a container in a pod makes a request, eventually it is associated with a specific IP on the physical network. We also want to potentially have pods in the same cluster that have different external-facing IPs on the network. To do this we can use Antrea egress policies and custom NSX-T SNAT rules.</p>
<p><strong>Antrea egress policy</strong>: This allows us to specify a static IP or pool of IPs that traffic will be SNAT'd to when leaving the node if it matches a specific set of labels. You can read all the details <a target="_blank" href="https://antrea.io/docs/v1.13.1/docs/egress/">here</a>.</p>
<p><strong>NSX-T SNAT rules:</strong> This allows us to specify a rule that matches an IP or range of IPs and translates it to an IP on the physical network as it exits the T0.</p>
<p>We can now combine the two features above to solve the problem. The way this works is that the Antrea egress policy will handle making sure that the traffic leaving the node for a specific pod(s) will be a specific IP address, rather than the node's IP address. We can then use the NSX-T SNAT rule to ensure that the IP we specified in the Antrea egress rule is translated to the right external facing IP address. This means that we now have a way to label a pod and have it result in a specific IP address on the physical network.</p>
<p>This diagram shows a simplified flow of the IP translations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695312712785/6403bf90-1a48-4093-be7f-10ee2ad5c11c.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-implementation">Implementation</h2>
<p>This will assume that the following prereqs are met before implementing the solution.</p>
<ul>
<li><p>TKG w/supervisor deployed</p>
</li>
<li><p>TKG w/supervisor backed by NSX-T</p>
</li>
<li><p>A TKG workload cluster deployed</p>
</li>
<li><p>Antrea as a CNI</p>
</li>
</ul>
<h3 id="heading-create-the-egress-policy">Create the egress policy</h3>
<p>The first step is to create the Antrea egress policy and IP pool. This will be where the label matching is set so that we can target specific pods or namespaces in the cluster.</p>
<p>Create the following YAML file and apply it to your workload cluster.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">crd.antrea.io/v1alpha2</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Egress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">egress-external-db</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">appliedTo:</span>
    <span class="hljs-attr">podSelector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">external-db-access:</span> <span class="hljs-string">'true'</span>
  <span class="hljs-attr">externalIPPool:</span> <span class="hljs-string">external-ip-pool</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">crd.antrea.io/v1alpha2</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ExternalIPPool</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">external-ip-pool</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">ipRanges:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">start:</span> <span class="hljs-number">10.244</span><span class="hljs-number">.0</span><span class="hljs-number">.70</span>  <span class="hljs-comment"># IP from segment that nodes are on</span>
    <span class="hljs-attr">end:</span> <span class="hljs-number">10.244</span><span class="hljs-number">.0</span><span class="hljs-number">.70</span>
  <span class="hljs-attr">nodeSelector:</span> {}     <span class="hljs-comment"># All Nodes can be Egress Nodes</span>
</code></pre>
<p>After applying the file you should see a status like this:</p>
<pre><code class="lang-bash"> k get egresses.crd.antrea.io egress-external-db
NAME                 EGRESSIP      AGE   NODE
egress-external-db   10.244.0.70   18s   tes-snat-1-md-0-bzfm4-7678dc8766-w7rz2
</code></pre>
<p>In the above file, you can see that we are creating two resources. The <code>Egress</code> resource defines the match criteria and the IP pool to use. The <code>ExternalIPPool</code> defines the IP range we want to use; in this case, it's just one IP. This IP is pulled from the cluster's segment CIDR in NSX-T, so it is an IP that would typically be used for a node. It could come from another range, but for simplicity we are using the segment's CIDR.</p>
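<p>As a side note, Antrea also lets you pin a fixed IP directly on the <code>Egress</code> resource via the <code>egressIP</code> field instead of referencing a pool. A minimal sketch under that assumption (the IP shown is just an example and must still be reachable on the node network):</p>
<pre><code class="lang-yaml">apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-external-db
spec:
  appliedTo:
    podSelector:
      matchLabels:
        external-db-access: 'true'
  egressIP: 10.244.0.70
</code></pre>
<p>When <code>egressIP</code> is set without a pool, you are responsible for making sure the IP is assigned to or routable to a node, so the pool-based approach used in this post is generally easier to operate.</p>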
<h3 id="heading-create-a-workload-with-matching-labels">Create a workload with matching labels</h3>
<p>We now need to create a workload that matches our egress policy's label selectors. This will start a pod that matches the labels; in this example, we are using <a target="_blank" href="https://github.com/nicolaka/netshoot">netshoot</a>, an open-source tool for network troubleshooting.</p>
<p>Apply the following YAML into the workload cluster.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">netshoot</span>
    <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">netshoot</span>
<span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">selector:</span>
        <span class="hljs-attr">matchLabels:</span>
            <span class="hljs-attr">app:</span> <span class="hljs-string">netshoot</span>
    <span class="hljs-attr">template:</span>
        <span class="hljs-attr">metadata:</span>
          <span class="hljs-attr">labels:</span>
            <span class="hljs-attr">app:</span> <span class="hljs-string">netshoot</span>
            <span class="hljs-attr">external-db-access:</span> <span class="hljs-string">'true'</span>
        <span class="hljs-attr">spec:</span>
            <span class="hljs-attr">containers:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">netshoot</span>
              <span class="hljs-attr">image:</span> <span class="hljs-string">nicolaka/netshoot</span>
              <span class="hljs-attr">command:</span> [<span class="hljs-string">"/bin/bash"</span>]
              <span class="hljs-attr">args:</span> [<span class="hljs-string">"-c"</span>, <span class="hljs-string">"while true; do ping localhost; sleep 60;done"</span>]
</code></pre>
<p>At this point, the traffic leaving this pod will be translated to our Antrea egress IP <code>10.244.0.70</code>. However, we are not finished just yet: the IP will still be translated to the default NSX-T egress IP when leaving the T0, because the default SNAT rule created by TKG captures all traffic on that segment's subnet.</p>
<h3 id="heading-add-the-custom-nsx-t-snat-rule">Add the custom NSX-T SNAT rule</h3>
<p>This is where we create a rule in NSX-T to translate the Antrea egress IP to the final outbound IP address. This can be done through the UI or automated with the API; for simplicity, this shows how to create the rule in the UI.</p>
<ol>
<li><p>Go into the NSX-T console and navigate to the Networking-&gt;NAT section.</p>
</li>
<li><p>Find the T1 associated with your supervisor namespace in the dropdown; the T1 name will contain the namespace name.</p>
</li>
<li><p>Create a new SNAT rule; see the image below for details. This sets the Antrea egress IP as the source, translated to the outbound IP we want from the NSX-T egress range.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695313418510/43871db1-139f-4935-8eac-aa9bc626e255.png" alt class="image--center mx-auto" /></p>
<p> We can now test to see if the IP is being translated properly.</p>
</li>
</ol>
<h3 id="heading-validate-it">Validate it</h3>
<p>To validate this, we will exec into the container and ping a Linux box on a different network while watching the traffic with <code>tcpdump</code>. This makes the traffic flow the full course of the network, and we should see the translated outbound IP.</p>
<p>Run the ping command to the Linux box on <code>10.220.13.144</code></p>
<pre><code class="lang-bash">k <span class="hljs-built_in">exec</span> -it netshoot-75dd7f67c9-zck8f -- bash
netshoot-75dd7f67c9-zck8f: ping
netshoot-75dd7f67c9-zck8f: ping 10.220.13.144 
PING 10.220.13.144 (10.220.13.144) 56(84) bytes of data.
</code></pre>
<p>Check the tcpdump on the Linux box. Notice the IP is <code>10.214.185.200</code> which is our custom SNAT rule IP.</p>
<pre><code class="lang-bash">tcpdump -i eth0 icmp
tcpdump: verbose output suppressed, use -v[v]... <span class="hljs-keyword">for</span> full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
17:16:23.009148 IP 10.214.185.200 &gt; photon-machine: ICMP <span class="hljs-built_in">echo</span> request, id 2823, seq 1, length 64
</code></pre>
<h2 id="heading-summary">Summary</h2>
<p>In summary, this approach allows you to conform to existing firewall rules and policies that rely on specifying an IP address for specific workloads. It could be used for many different use cases that require this type of granularity. Additionally, it lets us use a label- and code-based mechanism to assign these IPs to workloads, which can be very dynamic if needed.</p>
]]></content:encoded></item><item><title><![CDATA[How to create custom workload types with TAP]]></title><description><![CDATA[I was working with a customer the other day and some questions came up about how to support workloads that may vary from the ones that TAP supports out-of-the-box. Looking at the docs we can see that there are 3 types available to use today. This wil...]]></description><link>https://blog.warroyo.com/how-to-create-custom-workload-types-with-tap</link><guid isPermaLink="true">https://blog.warroyo.com/how-to-create-custom-workload-types-with-tap</guid><category><![CDATA[k8s]]></category><category><![CDATA[TANZU]]></category><category><![CDATA[vmware]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Thu, 19 Jan 2023 16:43:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1674146505209/ff88adf0-9015-4dc7-90fd-115db737df49.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was working with a customer the other day and some questions came up about how to support workloads that may vary from the ones that TAP supports out-of-the-box. Looking at <a target="_blank" href="https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.4/tap/workloads-workload-types.html">the docs</a> we can see that there are 3 types available to use today. This will support the majority of workloads that are commonly used, but what if another use case comes up and we need to quickly add support for deploying that workload on our existing TAP environment? This post will walk through the basic steps of adding a new workload type into TAP.</p>
<h1 id="heading-how-it-works">How it works</h1>
<p>First, let's break down what is meant by a "custom workload type". In TAP there is a resource called a "workload" which defines some minimal details about an application and abstracts away many of the underlying complexities of the infrastructure and the path that the app will take to get to a running state on that infrastructure i.e. the path to prod. The "workload" definition makes it easy for a developer to supply some basic app configuration and source code and get the app running on K8s with little to no knowledge of the underlying K8s environment. In the workload definition, there is a label that is used to set the type of workload that will be created. Based on that label's value, an instance of the supply chain is created for the workload and will generate a set of K8s resources that are determined by the type. For example the <code>server</code> type will generate a K8s <code>Deployment</code> and <code>Service</code> , while if the <code>web</code> type is specified it will generate a Knative Service. By creating a custom workload type we can define which K8s resources get generated when that type is specified.</p>
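<p>To make that label concrete, a workload's type is selected with the <code>apps.tanzu.vmware.com/workload-type</code> label on the workload. A minimal sketch of a workload definition (the name and Git URL here are placeholders):</p>
<pre><code class="lang-yaml">apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: my-app
  labels:
    apps.tanzu.vmware.com/workload-type: server
spec:
  source:
    git:
      url: https://github.com/example/my-app
      ref:
        branch: main
</code></pre>
<p>Swapping the label value to a custom type is all a developer needs to do to opt into the resources that type generates.</p>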
<p>TAP provides a way to include new workload types via a setting in the <code>values.yaml</code> called <code>ootb_supply_chain_basic.supported_workloads</code>; more on this in the implementation section below. What's great about this is that it makes it easy to add a custom workload type without modifying any supply chains or doing any real "customization" that strays from the OOTB offerings. In addition to the TAP values file modification, a new <code>ClusterConfigTemplate</code> needs to be added; this is where the K8s resources that will be generated for the type are defined.</p>
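<p>As a sketch of what that values entry looks like, the snippet below registers a new type. Here <code>volumes-template</code> matches the template created later in this post, while the type name <code>server-with-volumes</code> is just an illustrative choice; check the TAP docs for the exact schema of your version.</p>
<pre><code class="lang-yaml">ootb_supply_chain_basic:
  supported_workloads:
  - type: server-with-volumes
    cluster_config_template_name: volumes-template
</code></pre>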
<p>Here is a high-level outline of the end-to-end process:</p>
<ol>
<li><p>A workload is defined with the custom workload type as the value for the label.</p>
</li>
<li><p>The workload is applied to the build cluster which instantiates an instance of the supply chain.</p>
</li>
<li><p>When the supply chain instance is created, a selection happens on the <code>app-config</code> step that looks at the workload type label and chooses the correct <code>ClusterConfigTemplate</code>, which in this case would be the custom one.</p>
</li>
<li><p>The supply chain eventually reaches the <code>app-config</code> step and the custom template stamps out a <code>ConfigMap</code> with the custom-defined K8s resources that will be eventually deployed on the TAP run clusters.</p>
</li>
</ol>
<p><strong>Diagram of this flow:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674083027011/983ba2b7-1853-4d16-8f17-acfd3c242ab4.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-the-use-case">The use case</h1>
<p>As mentioned in the introduction we will be creating a new workload type for a use case that isn't currently covered by the out-of-the-box types. For this use case, we have an app that needs to mount a volume. Specifically, this app needs to mount a K8s persistent volume to store some data.</p>
<h1 id="heading-implementation">Implementation</h1>
<h2 id="heading-create-a-new-cluster-configuration-template">Create a new Cluster Configuration Template</h2>
<p>The <code>ClusterConfigTemplate</code> resource is used to define the resulting delivery YAML. This YAML consists of the Kubernetes manifests responsible for running the application.</p>
<p>Apply the below YAML into the TAP build cluster.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">carto.run/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterConfigTemplate</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">doc:</span> <span class="hljs-string">|
      This template consumes an input named config, which contains a
      PodTemplateSpec, and returns a ConfigMap containing a
      "delivery.yml" with manifests for a Kubernetes Deployment that
      runs the templated pod and a Kubernetes Service to expose the
      pods on the network. It also supports a workload param that
      allows for adding volumes.
</span>  <span class="hljs-attr">name:</span> <span class="hljs-string">volumes-template</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">configPath:</span> <span class="hljs-string">.data</span>
  <span class="hljs-attr">healthRule:</span>
    <span class="hljs-attr">alwaysHealthy:</span> {}
  <span class="hljs-attr">params:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">default:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8080</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">8080</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">ports</span>
  <span class="hljs-attr">ytt:</span> <span class="hljs-string">|
    #@ load("@ytt:data", "data")
    #@ load("@ytt:yaml", "yaml")
    #@ load("@ytt:struct","struct")
    #@ load("@ytt:assert", "assert")
    #@ load("@ytt:overlay", "overlay")
</span>
    <span class="hljs-comment">#@ def merge_labels(fixed_values):</span>
    <span class="hljs-comment">#@   labels = {}</span>
    <span class="hljs-comment">#@   if hasattr(data.values.workload.metadata, "labels"):</span>
    <span class="hljs-comment">#@    labels.update(data.values.workload.metadata.labels)</span>
    <span class="hljs-comment">#@   end</span>
    <span class="hljs-comment">#@   labels.update(fixed_values)</span>
    <span class="hljs-comment">#@   return labels</span>
    <span class="hljs-comment">#@ end</span>

    <span class="hljs-comment">#@ def intOrString(v):</span>
    <span class="hljs-comment">#@   return v if type(v) == "int" else int(v.strip()) if v.strip().isdigit() else v</span>
    <span class="hljs-comment">#@ end</span>

    <span class="hljs-comment">#@ def merge_ports(ports_spec,containers):</span>
    <span class="hljs-comment">#@   ports = {}</span>
    <span class="hljs-comment">#@   for c in containers:</span>
    <span class="hljs-comment">#@     for p in getattr(c,"ports", []):</span>
    <span class="hljs-comment">#@       ports[p.containerPort] = {"targetPort": p.containerPort,"port": p.containerPort, "name": getattr(p, "name", str(p.containerPort))}</span>
    <span class="hljs-comment">#@     end</span>
    <span class="hljs-comment">#@   end</span>
    <span class="hljs-comment">#@   for p in ports_spec:</span>
    <span class="hljs-comment">#@     targetPort = getattr(p,"containerPort", p.port)</span>
    <span class="hljs-comment">#@     type(targetPort) in ("string", "int") or fail("containerPort must be a string or int")</span>
    <span class="hljs-comment">#@     targetPort = intOrString(targetPort)</span>
    <span class="hljs-comment">#@    </span>
    <span class="hljs-comment">#@     port = p.port</span>
    <span class="hljs-comment">#@     type(port) in ("string", "int") or fail("port must be a string or int")</span>
    <span class="hljs-comment">#@     port = int(port)</span>
    <span class="hljs-comment">#@     ports[p.port] = {"targetPort": targetPort, "port": port, "name": getattr(p, "name", str(p.port))}</span>
    <span class="hljs-comment">#@   end</span>
    <span class="hljs-comment">#@   return ports.values()</span>
    <span class="hljs-comment">#@ end</span>

    <span class="hljs-comment">#@ def addVolumes():</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-comment">#@overlay/match by="name"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">workload</span>
        <span class="hljs-comment">#@overlay/match missing_ok=True</span>
        <span class="hljs-attr">volumeMounts:</span> <span class="hljs-comment">#@ data.values.params.volumes.volumeMounts</span>
      <span class="hljs-comment">#@overlay/match missing_ok=True</span>
      <span class="hljs-attr">volumes:</span> <span class="hljs-comment">#@ data.values.params.volumes.volumes</span>

    <span class="hljs-comment">#@ end</span>

    <span class="hljs-comment">#@ def delivery():</span>
    <span class="hljs-string">---</span>
    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-comment">#@ data.values.workload.metadata.name</span>
      <span class="hljs-attr">annotations:</span>
        <span class="hljs-attr">kapp.k14s.io/update-strategy:</span> <span class="hljs-string">"fallback-on-replace"</span>
        <span class="hljs-attr">ootb.apps.tanzu.vmware.com/servicebinding-workload:</span> <span class="hljs-string">"true"</span>
      <span class="hljs-attr">labels:</span> <span class="hljs-comment">#@ merge_labels({ "app.kubernetes.io/component": "run","carto.run/workload-name": data.values.workload.metadata.name })</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">selector:</span>
        <span class="hljs-attr">matchLabels:</span> <span class="hljs-comment">#@ data.values.config.metadata.labels</span>
      <span class="hljs-attr">template:</span> <span class="hljs-comment">#@ overlay.apply(data.values.config,addVolumes())</span>
    <span class="hljs-string">---</span>
    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-comment">#@ data.values.workload.metadata.name</span>
      <span class="hljs-attr">labels:</span> <span class="hljs-comment">#@ merge_labels({ "app.kubernetes.io/component": "run", "carto.run/workload-name": data.values.workload.metadata.name })</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">selector:</span> <span class="hljs-comment">#@ data.values.config.metadata.labels</span>
      <span class="hljs-attr">ports:</span>
      <span class="hljs-comment">#@ hasattr(data.values.params, "ports") and len(data.values.params.ports) or assert.fail("one or more ports param must be provided.")</span>
      <span class="hljs-comment">#@ declared_ports = {}</span>
      <span class="hljs-comment">#@ if "ports" in data.values.params:</span>
      <span class="hljs-comment">#@   declared_ports = data.values.params.ports</span>
      <span class="hljs-comment">#@ else:</span>
      <span class="hljs-comment">#@   declared_ports = struct.encode([{ "containerPort": 8080, "port": 8080, "name": "http"}])</span>
      <span class="hljs-comment">#@ end</span>
      <span class="hljs-comment">#@ for p in merge_ports(declared_ports, data.values.config.spec.containers):</span>
        <span class="hljs-bullet">-</span> <span class="hljs-comment">#@ p</span>
      <span class="hljs-comment">#@ end</span>
    <span class="hljs-comment">#@ end</span>

    <span class="hljs-string">---</span>
    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-comment">#@ data.values.workload.metadata.name + "-server"</span>
      <span class="hljs-attr">labels:</span> <span class="hljs-comment">#@ merge_labels({ "app.kubernetes.io/component": "config"})</span>
    <span class="hljs-attr">data:</span>
      <span class="hljs-attr">delivery.yml:</span> <span class="hljs-comment">#@ yaml.encode(delivery())</span>
</code></pre>
<p>Let's break this template down a bit.</p>
<ol>
<li><p>The resource takes as input a <code>PodTemplateSpec</code> from the previous step in the supply chain. This is accessible via <code>data.values.config</code>.</p>
</li>
<li><p>The resource defines a ytt template that generates a <code>ConfigMap</code> holding the <code>delivery.yml</code>. The <code>delivery.yml</code> contains the Kubernetes manifests to run the app.</p>
</li>
<li><p>The <code>delivery()</code> function - defines a templated K8s Deployment and Service using input from the workload params and the previous step's <code>PodTemplateSpec</code>.</p>
</li>
<li><p>The <code>addVolumes()</code> function - used within <code>delivery()</code> to modify the incoming <code>PodTemplateSpec</code>: it takes the volume params provided in the workload spec and adds them to the <code>PodTemplateSpec</code>. <strong>This is the part that is doing most of the custom work to enable volumes.</strong></p>
</li>
<li><p>The <code>merge_ports()</code> function - takes any custom ports defined in the workload spec's params and adds them to the Service.</p>
</li>
<li><p>The <code>merge_labels()</code> function - does the same for labels, merging any labels passed in with a set of fixed values.</p>
</li>
</ol>
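<p>To make the port-merging behavior concrete, here is a rough plain-Python equivalent of the <code>intOrString()</code> and <code>merge_ports()</code> ytt functions above. This is a sketch for illustration only; the real logic runs as ytt Starlark inside the template.</p>

```python
# Plain-Python sketch of the ytt port-merging logic (illustration only).
def int_or_string(v):
    # Coerce numeric strings to ints; leave named ports (e.g. "http") as-is.
    if isinstance(v, int):
        return v
    return int(v.strip()) if v.strip().isdigit() else v

def merge_ports(ports_spec, containers):
    # Collect containerPorts from the PodTemplateSpec's containers first...
    ports = {}
    for c in containers:
        for p in c.get("ports", []):
            cp = p["containerPort"]
            ports[cp] = {"targetPort": cp, "port": cp, "name": p.get("name", str(cp))}
    # ...then let any ports declared via the workload's "ports" param
    # override or extend them.
    for p in ports_spec:
        target = int_or_string(p.get("containerPort", p["port"]))
        ports[p["port"]] = {
            "targetPort": target,
            "port": int(p["port"]),
            "name": p.get("name", str(p["port"])),
        }
    return list(ports.values())

# The template's default "ports" param plus a container that exposes 9090:
svc_ports = merge_ports(
    [{"containerPort": 8080, "port": 8080, "name": "http"}],
    [{"ports": [{"containerPort": 9090}]}],
)
```

The end result is what lands in the Service's <code>ports</code> list: one entry per declared or detected port, with param-declared ports taking precedence.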
<h2 id="heading-update-the-tap-values-to-include-the-new-workload-type">Update the TAP values to include the new workload type</h2>
<p>To register this new workload type, a new section needs to be added to the supply chain's configuration.</p>
<p>Edit the TAP values and add the below YAML. Notice this goes under the <code>ootb_supply_chain_basic</code> section, so append it alongside any other settings you have there. After updating these settings, update the TAP install using the Tanzu CLI.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">ootb_supply_chain_basic:</span>
  <span class="hljs-attr">supported_workloads:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">web</span>
    <span class="hljs-attr">cluster_config_template_name:</span> <span class="hljs-string">config-template</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">server</span>
    <span class="hljs-attr">cluster_config_template_name:</span> <span class="hljs-string">server-template</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">worker</span>
    <span class="hljs-attr">cluster_config_template_name:</span> <span class="hljs-string">worker-template</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">server-with-volumes</span>
    <span class="hljs-attr">cluster_config_template_name:</span> <span class="hljs-string">volumes-template</span>
</code></pre>
<p>The settings above define four workload types, three of which are provided OOTB. The existing three need to be specified, otherwise they will be removed. The fourth is the custom workload type, named <code>server-with-volumes</code>, which references the new <code>ClusterConfigTemplate</code> named <code>volumes-template</code>. These settings associate the workload type used in the workload spec with the new template defined in the previous step.</p>
<h2 id="heading-create-a-workload-using-the-new-type">Create a workload using the new type</h2>
<p>Keeping with the thread of our new use case, for this workload we need to mount a persistent volume. Creating the volume is out of the scope of this post, but any K8s PVC will work. In this case, the PVC is just using the AWS EBS CSI driver to create a disk.</p>
<p>Apply the below YAML to the TAP build cluster.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">carto.run/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Workload</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">go-sample-pvc</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">server-with-volumes</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">go-sample-pvc</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">params:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">volumes</span>
      <span class="hljs-attr">value:</span>
        <span class="hljs-attr">volumes:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">data-mount</span>
          <span class="hljs-attr">persistentVolumeClaim:</span>
            <span class="hljs-attr">claimName:</span> <span class="hljs-string">go-sample-data</span>
        <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">data-mount</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/sample-data</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">git:</span>
      <span class="hljs-attr">ref:</span>
        <span class="hljs-attr">branch:</span> <span class="hljs-string">main</span>
      <span class="hljs-attr">url:</span> <span class="hljs-string">https://github.com/warroyo/tap-go-sample</span>
</code></pre>
<p>The main things to point out in the above YAML are the following:</p>
<ol>
<li><p><code>apps.tanzu.vmware.com/workload-type: server-with-volumes</code> - this label is used to select the workload type. Notice that the <code>server-with-volumes</code> name matches the one we added to the TAP values.</p>
</li>
<li><p>The <code>volumes</code> param - This is where the volumes and volume mounts are defined. To keep it simple and flexible, these parameters use the same format as the K8s pod spec's volume definitions. This means anything that can be done through <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/volumes/">these docs</a> can be added here.</p>
</li>
</ol>
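<p>Conceptually, the <code>addVolumes()</code> overlay merges this param into the pod template like the following plain-Python sketch. The names and structure are simplified for illustration; the real merge is a ytt overlay.</p>

```python
import copy

def add_volumes(pod_template, volumes_param):
    # Merge the workload's "volumes" param into a PodTemplateSpec-like dict:
    # volumeMounts are attached to the container named "workload",
    # volumes are attached at the pod spec level.
    merged = copy.deepcopy(pod_template)
    for c in merged["spec"]["containers"]:
        if c["name"] == "workload":
            c.setdefault("volumeMounts", []).extend(volumes_param["volumeMounts"])
    merged["spec"].setdefault("volumes", []).extend(volumes_param["volumes"])
    return merged

# The same param shape used in the workload YAML above:
param = {
    "volumes": [{"name": "data-mount",
                 "persistentVolumeClaim": {"claimName": "go-sample-data"}}],
    "volumeMounts": [{"name": "data-mount", "mountPath": "/sample-data"}],
}
pod = {"spec": {"containers": [{"name": "workload"}]}}
merged = add_volumes(pod, param)
```

Because the param is passed through untouched, any volume type the pod spec supports works without further template changes.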
<p>After deploying this workload, the resulting K8s manifests will look like this. Notice that the volume and volume mounts have been added to the pod spec.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">go-sample-pvc</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">kapp.k14s.io/update-strategy:</span> <span class="hljs-string">fallback-on-replace</span>
    <span class="hljs-attr">ootb.apps.tanzu.vmware.com/servicebinding-workload:</span> <span class="hljs-string">"true"</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">go-sample-pvc</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/has-tests:</span> <span class="hljs-string">"true"</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">server-with-volumes</span>
    <span class="hljs-attr">app.kubernetes.io/component:</span> <span class="hljs-string">run</span>
    <span class="hljs-attr">carto.run/workload-name:</span> <span class="hljs-string">go-sample-pvc</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app.kubernetes.io/component:</span> <span class="hljs-string">run</span>
      <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">go-sample-pvc</span>
      <span class="hljs-attr">apps.tanzu.vmware.com/has-tests:</span> <span class="hljs-string">"true"</span>
      <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">server-with-volumes</span>
      <span class="hljs-attr">carto.run/workload-name:</span> <span class="hljs-string">go-sample-pvc</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">annotations:</span>
        <span class="hljs-attr">conventions.carto.run/applied-conventions:</span> <span class="hljs-string">|-
          spring-boot-convention/auto-configure-actuators-check
          spring-boot-convention/app-live-view-appflavour-check
          appliveview-sample/app-live-view-appflavour-check
</span>        <span class="hljs-attr">developer.conventions/target-containers:</span> <span class="hljs-string">workload</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app.kubernetes.io/component:</span> <span class="hljs-string">run</span>
        <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">go-sample-pvc</span>
        <span class="hljs-attr">apps.tanzu.vmware.com/has-tests:</span> <span class="hljs-string">"true"</span>
        <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">server-with-volumes</span>
        <span class="hljs-attr">carto.run/workload-name:</span> <span class="hljs-string">go-sample-pvc</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">dev.registry.pivotal.io/warroyo/go-sample-pvc-default@sha256:94eec8cb112d4e860ab6cc9095f40db0779889f7ee0a953f0876d4b4b29ef0ce</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">workload</span>
        <span class="hljs-attr">resources:</span> {}
        <span class="hljs-attr">securityContext:</span>
          <span class="hljs-attr">runAsUser:</span> <span class="hljs-number">1000</span>
        <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/sample-data</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">data-mount</span>
      <span class="hljs-attr">serviceAccountName:</span> <span class="hljs-string">default</span>
      <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">data-mount</span>
        <span class="hljs-attr">persistentVolumeClaim:</span>
          <span class="hljs-attr">claimName:</span> <span class="hljs-string">go-sample-data</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">go-sample-pvc</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">go-sample-pvc</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/has-tests:</span> <span class="hljs-string">"true"</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">server-with-volumes</span>
    <span class="hljs-attr">app.kubernetes.io/component:</span> <span class="hljs-string">run</span>
    <span class="hljs-attr">carto.run/workload-name:</span> <span class="hljs-string">go-sample-pvc</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app.kubernetes.io/component:</span> <span class="hljs-string">run</span>
    <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">go-sample-pvc</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/has-tests:</span> <span class="hljs-string">"true"</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">server-with-volumes</span>
    <span class="hljs-attr">carto.run/workload-name:</span> <span class="hljs-string">go-sample-pvc</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">targetPort:</span> <span class="hljs-number">8080</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">8080</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
</code></pre>
<h1 id="heading-summary">Summary</h1>
<p>In summary, the steps in this post walked through adding a new configuration template that takes parameters from the workload spec and uses those to inject volumes into the resulting pods. We then associated that new configuration template with a new workload type that can be used by a developer when creating a workload. This approach could be used for many other use cases and is not limited to adding volumes. Hopefully, this shows how TAP can be extended to support all kinds of different workloads and application needs while also providing a good user experience for the developer.</p>
]]></content:encoded></item><item><title><![CDATA[Integrating  TAP with Azure DevOps Pipelines]]></title><description><![CDATA[TAP has an OOTB source code testing capability that makes use of Tekton pipelines to execute tests based on your workload types. However, many organizations have already implemented their testing processes in another tool like Jenkins or Azure DevOps...]]></description><link>https://blog.warroyo.com/integrating-tap-with-ado-pipelines</link><guid isPermaLink="true">https://blog.warroyo.com/integrating-tap-with-ado-pipelines</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[vmware]]></category><category><![CDATA[TANZU]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Wed, 28 Dec 2022 21:26:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672259097481/4db891ea-1a4d-4c6a-abe5-4c2ff48663d4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>TAP has an OOTB source code testing capability that makes use of <a target="_blank" href="https://tekton.dev/docs/pipelines/pipelines/">Tekton pipelines</a> to execute tests based on your workload types. However, many organizations have already implemented their testing processes in another tool like Jenkins or Azure DevOps (ADO). As of TAP 1.3, you can <a target="_blank" href="https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.3/tap/GUID-scc-ootb-supply-chain-testing-with-jenkins.html?hWord=N4IghgNiBcIFYFMB2BrAlkgziAvkA">natively use Jenkins</a> for your source testing in the TAP supply chain. In talking with several customers and co-workers it was apparent that integrating with ADO would be very useful. In this post, we will walk through the steps to get the TAP source code testing capability working with Azure DevOps pipelines.</p>
<h2 id="heading-how-it-works">How it works</h2>
<p>Since TAP already has an integration with Jenkins, we will follow a similar pattern for implementing the ADO integration. Below is a breakdown of how source testing works in TAP with Jenkins. The diagram shows a <code>testing_pipeline_matching_labels</code> parameter in the workload definition; this is what selects, via label selectors, the pipeline that should be used for testing. From there a <code>Runnable</code> is created and uses the label selectors to associate the Jenkins Tekton pipeline. There is also a <code>ClusterTask</code> that the pipeline references; this is where the code for communicating with Jenkins lives. When the supply chain executes, it creates a <code>PipelineRun</code> with the parameters from the workload, which spawns a <code>TaskRun</code>. The <code>TaskRun</code> executes the Jenkins job and returns the results. There are a few pieces left out of the diagram to make it easier to follow, but overall this covers most of what happens.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671833957329/e0cf9782-53e4-4d7e-a277-db9f7e1a4d09.jpeg" alt="Diagram of the TAP source testing flow with Jenkins" class="image--center mx-auto" /></p>
<p>Based on the above flow, two things are needed to implement an ADO equivalent: a custom <code>ClusterTask</code> and a <code>Pipeline</code>. With those defined, we will be able to use native TAP functionality to selectively run tests in ADO. These two resources are core components of Tekton; you can find the docs on them <a target="_blank" href="https://tekton.dev/docs/pipelines/pipelines/">here</a>.</p>
<h2 id="heading-implementation">Implementation</h2>
<h3 id="heading-create-a-simple-pipeline-in-ado">Create a simple pipeline in ADO</h3>
<p>Log in to ADO and create a "new project" or use an existing project that you may have. If you are using the Azure CLI, run the below command.</p>
<pre><code class="lang-bash">az devops project create --name tap-ado-blog --org https://dev.azure.com/&lt;your-organization&gt;
</code></pre>
<p>After creating the new project there will also be a default repo created by the same name, e.g. <code>tap-ado-blog</code>. This is the repo that will be used for the pipeline in the next step. You can also create a new repo or use an existing one, just be sure to change the names in the next steps accordingly.</p>
<p>The below YAML will be used for the newly created pipeline. This sets up three parameters, two of which are required since the TAP supply chain passes the <code>source_url</code> and <code>source_revision</code> by default. The third is an optional parameter that shows how to pass additional parameters to the ADO pipeline in the workload YAML. The pipeline steps can be customized to handle any testing scenario needed.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">none</span>

<span class="hljs-attr">pr:</span> <span class="hljs-string">none</span> 

<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">vmImage:</span> <span class="hljs-string">ubuntu-latest</span>

<span class="hljs-attr">parameters:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source_url</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">source</span> <span class="hljs-string">url</span> <span class="hljs-string">to</span> <span class="hljs-string">clone</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source_revision</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">revision</span> <span class="hljs-string">to</span> <span class="hljs-string">clone</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">example_param</span>
    <span class="hljs-attr">displayName:</span> <span class="hljs-string">example</span>
    <span class="hljs-attr">default:</span> <span class="hljs-string">""</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>

<span class="hljs-attr">steps:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">echo</span> <span class="hljs-string">${{parameters.source_url}}</span> <span class="hljs-string">"successfully triggered this build from TAP"</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">echo</span> <span class="hljs-string">${{parameters.source_revision}}</span> <span class="hljs-string">"successfully triggered this build from TAP"</span>
</code></pre>
<p>Next, create a new pipeline...</p>
<p><strong>From the UI:</strong> make the following selections <code>Pipelines-&gt;Create pipeline-&gt;Azure Repos Git-&gt;tap-ado-blog-&gt;Starter Pipeline</code>. Add the above YAML after selecting the "Starter Pipeline" and save it.</p>
<p><strong>Using the CLI:</strong> run the below commands to commit the above YAML to the newly created repo as <code>azure-pipelines.yml</code> and create the pipeline.</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://&lt;your-org&gt;@dev.azure.com/&lt;your-org&gt;/tap-ado-blog/_git/tap-ado-blog 
<span class="hljs-built_in">cd</span> tap-ado-blog
touch azure-pipelines.yml
<span class="hljs-comment">#paste the contents from the above yaml into the new file</span>
git add .
git commit -am <span class="hljs-string">"adding pipelines"</span>
git push

az pipelines create --name <span class="hljs-string">'tap-ado-blog'</span> --description <span class="hljs-string">'Pipeline for TAP'</span> --repository tap-ado-blog  --branch main --repository-type tfsgit --org https://dev.azure.com/&lt;your-organization&gt; --project tap-ado-blog --yaml-path azure-pipelines.yml
</code></pre>
<h3 id="heading-create-a-pat-in-azure-devops">Create a PAT in Azure DevOps</h3>
<p>To execute the pipeline from TAP, a PAT needs to be created to authenticate against the API. If you are running TAP in Azure you could use something like role-based access control for auth instead, but that is for another blog post.</p>
<p><strong>From the UI:</strong> select <code>Upper right settings-&gt;Personal Access Tokens-&gt;New Token-&gt;Full Access</code> , see <a target="_blank" href="https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;tabs=Windows">here</a> for the full docs.</p>
<p><strong>Using the CLI:</strong> as of 12/28/2022, there is no way to create a PAT from the CLI except by going directly to the REST API.</p>
<p>Save the generated PAT for later use.</p>
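<p>If you do need to automate PAT creation, the PAT Lifecycle Management REST API can do it (note it requires an Azure AD access token rather than basic auth). The sketch below only builds the request; the endpoint path, <code>api-version</code>, and payload fields are assumptions based on that API and should be verified against the current docs before use:</p>

```python
import json

def build_pat_request(organization, display_name, scope="app_token"):
    # Hypothetical request builder for the ADO PAT lifecycle API.
    # Endpoint and payload shape are assumptions -- verify before use.
    url = ("https://vssps.dev.azure.com/%s/_apis/tokens/pats"
           "?api-version=7.1-preview.1" % organization)
    body = json.dumps({"displayName": display_name,
                       "scope": scope,
                       "allOrgs": False})
    return url, body

url, body = build_pat_request("my-org", "tap-ado-blog")
# Send with any HTTP client, e.g.:
#   requests.post(url, data=body, headers={
#       "Authorization": "Bearer <aad-token>",
#       "Content-Type": "application/json"})
```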
<h3 id="heading-relocate-the-required-image-to-your-registry">Relocate the required image to your registry</h3>
<p>The TAP docs typically have you relocate images to your own registry during installation. The new <code>ClusterTask</code> created for this ADO integration requires an additional image because its script depends on Python. After exporting the required variables, run the following command to relocate the image to your registry.</p>
<pre><code class="lang-bash">imgpkg copy -i python:3.7-slim --to-repo <span class="hljs-variable">${INSTALL_REGISTRY_HOSTNAME}</span>/<span class="hljs-variable">${INSTALL_REPO}</span>/tap-packages
</code></pre>
<p>After running the command you will see output similar to this:</p>
<pre><code class="lang-bash">will <span class="hljs-built_in">export</span> index.docker.io/library/python@sha256:aa949f5f10e9b28e1f9561fff73d1a359fa8517d4e543451a714d1a4ecc61c56
</code></pre>
<p>To get the full path to the copied image, copy everything starting with the <code>@</code> symbol in your output and append it to the new repo path. The resulting full image path will look something like</p>
<pre><code class="lang-bash"><span class="hljs-variable">${INSTALL_REGISTRY_HOSTNAME}</span>/<span class="hljs-variable">${INSTALL_REPO}</span>/tap-packages@sha256:aa949f5f10e9b28e1f9561fff73d1a359fa8517d4e543451a714d1a4ecc61c56

<span class="hljs-comment"># for example: https://dev.registry.pivotal.io/warroyo/tap-packages@sha256:aa949f5f10e9b28e1f9561fff73d1a359fa8517d4e543451a714d1a4ecc61c56</span>
</code></pre>
<p>Depending on when you run this the SHA may be different, but the same process applies to get the full path to the image.</p>
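<p>That copy-and-append step is easy to script. The sketch below pulls the digest out of an <code>imgpkg</code> "will export" line and joins it to the target repo; the registry hostname and repo values are placeholders standing in for <code>${INSTALL_REGISTRY_HOSTNAME}</code> and <code>${INSTALL_REPO}</code>:</p>

```python
# Example imgpkg output line (digest shown is the one from this post; yours may differ).
output_line = "will export index.docker.io/library/python@sha256:aa949f5f10e9b28e1f9561fff73d1a359fa8517d4e543451a714d1a4ecc61c56"

# Placeholder values for the target registry and repo.
registry_hostname = "dev.registry.pivotal.io"
repo = "warroyo"

# Everything from the '@' onward is the immutable digest reference.
digest = output_line[output_line.index("@"):]
full_image = f"{registry_hostname}/{repo}/tap-packages{digest}"
```

<p>Pinning by digest rather than tag means the image reference stays stable even if the upstream tag moves.</p>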
<h3 id="heading-create-the-tap-resources">Create the TAP resources</h3>
<p><strong>NOTE:</strong> The next few steps will walk you through creating the required TAP resources. This should be done in the "build" cluster, or if you are using a "full profile", it will be in the single cluster since the build components are colocated. Ensure you are in the correct context when running these commands.</p>
<p>Create a Kubernetes secret to store the PAT from the previous section. This should be done in the "developer namespace".</p>
<pre><code class="lang-bash">kubectl -n &lt;developer-ns&gt; create secret generic ado-token --from-literal=pat=&lt;your-pat-here&gt;
</code></pre>
<p>Create a new <code>ClusterTask</code>; this defines the code that will be executed to trigger the pipeline in Azure. In the YAML below, replace <code>&lt;your-relocated-image-here&gt;</code> with the image path from the previous step. After modifying, apply it to the cluster with <code>kubectl</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">tekton.dev/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterTask</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">ado-task</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">params:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source-url</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source-revision</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">secret-name</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pipeline-id</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">project-name</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">org-name</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">default:</span> <span class="hljs-string">""</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">pipeline-params</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-attr">results:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ado-pipeline-run-url</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">string</span>
  <span class="hljs-attr">steps:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">install-depends</span>
    <span class="hljs-attr">image:</span>  <span class="hljs-string">&lt;your-relocated-image-here&gt;</span>
    <span class="hljs-attr">script:</span> <span class="hljs-string">|
      pip install requests
</span>  <span class="hljs-bullet">-</span> <span class="hljs-attr">env:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ADO_API_TOKEN</span>
      <span class="hljs-attr">valueFrom:</span>
        <span class="hljs-attr">secretKeyRef:</span>
          <span class="hljs-attr">key:</span> <span class="hljs-string">pat</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">$(params.secret-name)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SOURCE_URL</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.source-url)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">PIPELINE_PARAMS</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.pipeline-params)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SOURCE_REVISION</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.source-revision)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">PIPELINE_ID</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.pipeline-id)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ORG_NAME</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.org-name)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">PROJECT_NAME</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.project-name)</span>
    <span class="hljs-attr">image:</span>  <span class="hljs-string">&lt;your-relocated-image-here&gt;</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">trigger-ado-build</span>
    <span class="hljs-attr">script:</span> <span class="hljs-string">|
      #!/usr/bin/env bash
      set -o errexit
      set -o pipefail
      pip install requests
</span>
      python3 &lt;&lt; END
      import os
      import subprocess
      import logging
      import sys
      import time
      import json
      import requests

      logging.basicConfig(level=logging.DEBUG)

      org = os.getenv('ORG_NAME')
      project = os.getenv('PROJECT_NAME')
      pipeline = os.getenv('PIPELINE_ID')
      token = os.getenv('ADO_API_TOKEN')
      source_url = os.getenv('SOURCE_URL')
      source_revision = os.getenv('SOURCE_REVISION')
      pipeline_params = os.getenv('PIPELINE_PARAMS')

      url = f'https://dev.azure.com/{org}/{project}/_apis/pipelines/{pipeline}/runs?api-version=7.0'
      existing_params = {
          "source_url": f'{source_url}',
          "source_revision": f'{source_revision}'
      }

      input_params = {}
      if pipeline_params != "":
        input_params = json.loads(pipeline_params)

      existing_params.update(input_params)
      payload = json.dumps({
          "templateParameters": existing_params
      })

      headers = {
          'Content-Type': 'application/json'
      }

      pipelineResponse = requests.request("POST", url, headers=headers, data=payload, auth=('', token))
      logging.info(pipelineResponse.text)
      # throw error if not 200
      pipelineResponse.raise_for_status()

      # check status of pipeline run and validate it succeeds
      jsonResponse = pipelineResponse.json()

      currentRun = jsonResponse['_links']['self']['href']
      results_url = jsonResponse['_links']['web']['href']
      f = open("$(results.ado-pipeline-run-url.path)", "w")
      f.write(results_url)
      f.close()

      running = True
      while running:
        response = requests.get(currentRun, headers=headers, auth=('', token), timeout=300)
        response.raise_for_status()
        result = response.json()
        if result['state'] != 'completed':
          logging.info(f"pipeline state is {result['state']}, entering sleep for 5 seconds")
          time.sleep(5)
        elif result['result'] == 'succeeded':
          logging.info("pipeline was successful, exiting")
          sys.exit(os.EX_OK)
        else:
          logging.info(f"pipeline result is {result['result']}, check ADO")
          sys.exit(os.EX_SOFTWARE)
      END
</code></pre>
<p>As you will notice, the main portion of the above file is a script. I chose Python for this since it is much easier to read than the Bash equivalent. Taking a deeper look at what happens in the script, we see the following:</p>
<ul>
<li><p>parameters are being passed in from the supply chain</p>
</li>
<li><p>a REST call is made to ADO to execute the pipeline</p>
</li>
<li><p>a while loop is used to continuously execute a REST call to ADO until the status is <code>succeeded</code> or <code>failed</code></p>
</li>
</ul>
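<p>The trigger-and-poll logic boils down to a small loop. This sketch isolates it with the HTTP call stubbed out as a callable, so the control flow can be exercised without hitting ADO; the response shapes mirror the run API fields the script reads:</p>

```python
import time

def wait_for_run(fetch, interval=0):
    """Poll until the run reports state 'completed', then return its result.

    fetch is any callable returning a dict shaped like an ADO run response,
    e.g. {"state": "inProgress"} or {"state": "completed", "result": "succeeded"}.
    """
    while True:
        run = fetch()
        if run["state"] != "completed":
            time.sleep(interval)  # back off between polls
        else:
            return run["result"]

# Simulate a run that completes successfully on the third poll.
responses = iter([
    {"state": "inProgress"},
    {"state": "inProgress"},
    {"state": "completed", "result": "succeeded"},
])
outcome = wait_for_run(lambda: next(responses))
```

<p>Injecting the fetch function is just for testability here; the real script inlines the <code>requests.get</code> call.</p>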
<p>Next, create a new <code>Pipeline</code> that references the <code>ClusterTask</code> and carries the required labels so that the supply chain can select it. This should also be created in the "developer namespace".</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">tekton.dev/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pipeline</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">developer-defined-ado-pipeline</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">&lt;developer-ns&gt;</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-comment">#! This label should be provided to the Workload so that</span>
    <span class="hljs-comment">#! the supply chain can find this pipeline</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/pipeline:</span> <span class="hljs-string">ado-pipeline</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">results:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ado-pipeline-run-url</span>   <span class="hljs-comment">#! To show the job URL on the</span>
    <span class="hljs-comment">#! Tanzu Application Platform GUI</span>
    <span class="hljs-attr">value:</span> <span class="hljs-string">$(tasks.ado-task.results.ado-pipeline-run-url)</span>
  <span class="hljs-attr">params:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source-url</span>        <span class="hljs-comment">#! Required</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source-revision</span>   <span class="hljs-comment">#! Required</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">secret-name</span>       <span class="hljs-comment">#! Required</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">project-name</span>       <span class="hljs-comment">#! Required</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pipeline-id</span>       <span class="hljs-comment">#! Required</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">org-name</span>           <span class="hljs-comment">#! Required</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pipeline-params</span>
    <span class="hljs-attr">default:</span> <span class="hljs-string">""</span>
  <span class="hljs-attr">tasks:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ado-task</span>
    <span class="hljs-attr">taskRef:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">ado-task</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterTask</span>
    <span class="hljs-attr">params:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source-url</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.source-url)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">source-revision</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.source-revision)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">secret-name</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.secret-name)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pipeline-id</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.pipeline-id)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">org-name</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.org-name)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">project-name</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.project-name)</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pipeline-params</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">$(params.pipeline-params)</span>
</code></pre>
<p>Finally, create or modify a workload to use the new labels and parameters that trigger the ADO testing pipeline. The workload below can be used as-is since the GitHub repo is public and can be cloned without credentials. Just replace the <code>org-name</code> and <code>pipeline-id</code> with the ones from your account. The <code>pipeline-id</code> can be found in the URL when viewing the pipeline in the browser.</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">carto.run/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Workload</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app.kubernetes.io/part-of:</span> <span class="hljs-string">company-api-ado</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/has-tests:</span> <span class="hljs-string">"true"</span>
    <span class="hljs-attr">apps.tanzu.vmware.com/workload-type:</span> <span class="hljs-string">web</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">company-api-ado</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">env:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">BP_KEEP_FILES</span>
        <span class="hljs-attr">value:</span> <span class="hljs-string">"docs/*"</span>
  <span class="hljs-attr">source:</span>
    <span class="hljs-attr">git:</span>
      <span class="hljs-attr">ref:</span>
        <span class="hljs-attr">branch:</span> <span class="hljs-string">main</span>
      <span class="hljs-attr">url:</span> <span class="hljs-string">https://github.com/warroyo/tap-go-sample</span>
  <span class="hljs-attr">params:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">testing_pipeline_matching_labels</span>
      <span class="hljs-attr">value:</span>
        <span class="hljs-attr">apps.tanzu.vmware.com/pipeline:</span> <span class="hljs-string">ado-pipeline</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">testing_pipeline_params</span>
      <span class="hljs-attr">value:</span>
        <span class="hljs-attr">project-name:</span> <span class="hljs-string">tap-ado-blog</span>
        <span class="hljs-attr">secret-name:</span> <span class="hljs-string">ado-token</span>
        <span class="hljs-attr">org-name:</span> <span class="hljs-string">&lt;your-org&gt;</span>
        <span class="hljs-attr">pipeline-id:</span> <span class="hljs-string">&lt;pipeline-id&gt;</span> 
        <span class="hljs-attr">pipeline-params:</span> <span class="hljs-string">"{\"newparam\": \"testing\"}"</span>
</code></pre>
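<p>Note that <code>pipeline-params</code> is passed as a JSON string; the <code>ClusterTask</code> script parses it and merges it over the source parameters before building the <code>templateParameters</code> payload. A small sketch of that merge (the source values here are made up for illustration):</p>

```python
import json

# Source values the supply chain would normally supply (made up here).
existing_params = {
    "source_url": "https://example.com/source.tar.gz",
    "source_revision": "abc123",
}

# The workload's pipeline-params value, delivered as a JSON string.
pipeline_params = '{"newparam": "testing"}'

# Mirror the script's merge: extra params overlay the source params.
if pipeline_params != "":
    existing_params.update(json.loads(pipeline_params))

payload = json.dumps({"templateParameters": existing_params})
```

<p>Because the extra params are applied last, a key in <code>pipeline-params</code> would override a source parameter of the same name.</p>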
<p>Once this workload is created, the <code>source-tester</code> step should execute the pipeline in ADO.</p>
<h3 id="heading-final-result">Final Result</h3>
<p>The final result is illustrated in the diagram below. The workload creates a supply chain, and as part of that the <code>source-tester</code> step is executed. The <code>source-tester</code> triggers the ADO pipeline defined in the workload and will either fail and stop the supply chain or succeed and allow it to continue.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672264816043/4e26333c-b75f-42b1-a6a6-c8ec2f947c7e.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In summary, the steps in this blog added an option to our TAP supply chain to use ADO pipelines for testing code. One really nice thing about this is that it didn't require any modifications to the supply chains or any core TAP components. Through the native label selection that TAP provides, we were able to add another testing option alongside the OOTB options. This approach is also not limited to testing code: in a future post, I will cover how it could be used to run arbitrary steps in an ADO pipeline from the supply chain. This should give an idea of the possibilities that exist for integrating TAP with other toolsets.</p>
]]></content:encoded></item><item><title><![CDATA[About Me]]></title><description><![CDATA[Hey! I am Will Arroyo.
I am a Solution Engineer at VMware living in Denver and working on all things Cloud Native with a heavy focus on Kubernetes.
In the past, I have helped build large-scale infrastructure automation systems as well as helped moder...]]></description><link>https://blog.warroyo.com/about-me</link><guid isPermaLink="true">https://blog.warroyo.com/about-me</guid><category><![CDATA[introduction]]></category><dc:creator><![CDATA[Will Arroyo]]></dc:creator><pubDate>Mon, 03 Jan 2022 22:58:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/632552254393a6975900e9ad426906e0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey! I am Will Arroyo.</p>
<p>I am a Solution Engineer at VMware living in Denver and working on all things Cloud Native with a heavy focus on Kubernetes.</p>
<p>In the past, I have helped build large-scale infrastructure automation systems as well as helped modernize software deployments across enterprises.</p>
<p>I am also passionate about cooking and food in general so you may see some posts about food mixed into this blog.</p>
]]></content:encoded></item></channel></rss>