Install custom OS packages in TKGi nodes

Article ID: 394333


Updated On: 04-16-2025

Products

VMware Tanzu Kubernetes Grid Integrated Edition

Issue/Introduction

This article describes two approaches to configure custom OS packages in TKGi VMs.

Generally speaking, if a package is installed manually on a node and the node is later recreated, for example due to an upgrade, the recreated VM will not include the extra packages.

To circumvent this situation and make the installation of packages persistent, follow one of the approaches outlined below.

Note: Any custom configuration made to the nodes, including the installation of non-default packages, is outside the scope of general Broadcom Support.

Resolution

Approach #1 - Bosh OS Config

The recommended way to install OS packages persistently in TKGi nodes is through Bosh OS Configs.

These configs allow you to make persistent OS-level modifications to Bosh-managed deployments. A complete list of available jobs can be found in the os-conf release documentation.

To install OS packages in TKGi nodes, a suitable job to configure would be post-deploy-script.

An example of this job is given in the os-conf documentation and looks as follows:

name: post-deploy-script

templates:
  post-deploy.sh: bin/post-deploy

packages: []

properties:
  script:
    description: Script that is run during post-deploy to allow additional setup of environment, run as root user.
    example: |-
        #!/bin/bash
        apt-get update && apt-get install wget git tmux -y

The job runs automatically after the nodes/VMs have been provisioned and installs the packages listed in the "apt-get install" command above.

If any further custom configuration is needed, for example adding custom repositories from which packages will be pulled, it can also be included in the post-deploy script, as in the sketch below.
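
For illustration, a hedged sketch of such a script is shown here; the repository URL, key location, and package name are placeholders rather than values from this article, and the sketch assumes curl and gpg are available on the stemcell:

    #!/bin/bash
    # Hypothetical internal apt repository; replace the URL, keyring and package with your own.
    curl -fsSL https://repo.example.com/key.gpg | gpg --dearmor -o /usr/share/keyrings/internal.gpg
    echo "deb [signed-by=/usr/share/keyrings/internal.gpg] https://repo.example.com/apt stable main" > /etc/apt/sources.list.d/internal.list
    apt-get update && apt-get install -y <custom-package>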

To configure the above:

  1. Upload the os-conf release to Bosh Director. A list of all the versions with their upload-release commands can be found in os-conf Release.

    For example, for os-conf v23.0.0:
    # bosh upload-release --sha1 d20772d8ce6e781ceb13cac7df5950bfa4330ba1 "https://bosh.io/d/github.com/cloudfoundry/os-conf-release?v=23.0.0"
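
    Optionally, you can confirm the upload by listing the releases known to the Director:
    # bosh releases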

  2. Configure the post-deploy-script os-conf job as a Director Runtime Config, following the example below and the Bosh Runtime Config reference documentation.

    Note: Runtime Configs are applied to all VMs managed by the Bosh Director. If you need to install the OS packages on only a subset of clusters and VMs/nodes, it's important to scope the Runtime Config appropriately using the corresponding include and exclude rules. An incorrectly scoped Runtime Config can result in undesirable updates to clusters and VMs/nodes.

    Example of Runtime Config setup:

    1. Create a runtime.yml file:
      # vim runtime.yml

      releases:
      - name: "os-conf"
        version: "23.0.0"
      addons:
      - name: os-configuration
        jobs:
        - name: post-deploy-script
          release: os-conf
          properties:
            script: |-
                #!/bin/bash
                apt-get update && apt-get install wget git tmux -y
        include:
          deployments: [<service-instance_XXXXXXXXXX>]                                        # Optional, you can define which deployments (TKGi clusters) this runtime config will be applied to.
          instance_groups: [<master and/or worker, as defined in the deployment manifest>]    # Optional, you can define which instance_groups (cluster nodes, i.e. masters/workers) this runtime config will be applied to.
        exclude:
          deployments: [<service-instance_XXXXXXXXXX>]                                        # Optional, you can define which deployments (TKGi clusters) this runtime config will not be applied to.
          instance_groups: [<master and/or worker, as defined in the deployment manifest>]    # Optional, you can define which instance_groups (cluster nodes, i.e. masters/workers) this runtime config will not be applied to.

    2. Update the default runtime config:
      # bosh update-runtime-config runtime.yml
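
      Note: "bosh update-runtime-config" without a name replaces the default runtime config. If a default runtime config already exists in your environment and should remain untouched, you can store this one as a named config instead ("os-packages" below is an arbitrary example name):
      # bosh update-runtime-config --name=os-packages runtime.yml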

    3. Verify the runtime config:
      # bosh runtime-config
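
      You can also list all configs stored in the Director, including named ones:
      # bosh configs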

    4. Upgrade the related clusters:
      # tkgi upgrade-cluster <cluster-name>
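
      After the upgrade completes, you can optionally confirm the packages are present on a node through bosh ssh; the deployment and instance names below are examples and will vary per environment:
      # bosh -d service-instance_XXXXXXXXXX ssh worker/0 -c "dpkg -l wget git tmux"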

Approach #2 - DaemonSet

If Approach #1 is not suitable, this approach consists of deploying a DaemonSet that installs the packages on every node in the cluster.

Please note that Approach #1 is the preferred one, as it uses Bosh-native capabilities; this approach relies on custom pods running in the clusters, which consume extra resources and are more prone to causing unexpected issues.

  1. Create the DaemonSet definition file:
    # vim daemonset.yaml

    For example:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-init
      namespace: node-init
    spec:
      selector:
        matchLabels:
          name: node-init
      template:
        metadata:
          labels:
            name: node-init
        spec:
          hostPID: true                 ## required so the init container can enter the node's namespaces
          nodeSelector:                 ## this field is optional, if you want to install packages just in certain nodes
            <node-key>: "<node-value>"  ## this field is optional, if you want to install packages just in certain nodes
          initContainers:
            - name: node-init
              image: <registry>/busybox:latest
              securityContext:
                privileged: true
              ## nsenter into the node's namespaces (PID 1) so apt-get runs on the node itself; busybox has no apt-get of its own
              command: ["nsenter"]
              args: ["-t", "1", "-m", "-u", "-i", "-n", "--", "sh", "-c", "apt-get update && apt-get install wget git tmux -y"]
          containers:
            - name: sleep
              image: <registry>/pause:3.9

    Note: the "pause" image can be pulled from "projects.registry.vmware.com/tkg/pause:3.9" and copied to your registry, as shown below.
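
    For example, with docker (substitute <registry> with your own registry):
    # docker pull projects.registry.vmware.com/tkg/pause:3.9
    # docker tag projects.registry.vmware.com/tkg/pause:3.9 <registry>/pause:3.9
    # docker push <registry>/pause:3.9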

  2. Deploy the DaemonSet in the cluster. From the correct Kubernetes context:
    # kubectl apply -f daemonset.yaml
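
    Note: the node-init namespace referenced in the manifest must exist before the DaemonSet can be created; if needed, create it first:
    # kubectl create namespace node-init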

  3. Verify all pods have been created successfully:
    # kubectl get po -n node-init
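
    Optionally, to confirm a package is present on a given node, you can open a node debug session; kubectl debug mounts the node's root filesystem at /host, and the node name below is a placeholder:
    # kubectl debug node/<node-name> -it --image=<registry>/busybox:latest -- chroot /host dpkg -l wget git tmux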