Sample script to deploy VPAIF-N DL workloads on a Deep Learning VM

Article ID: 388813

Products

VMware Private AI Foundation

Issue/Introduction

The attached zip bundle contains a collection of cloud-init scripts for deploying DL workloads on a Deep Learning VM through the vSphere Client UI and the VM Service.
The bundle includes a variety of sample scripts tailored for different DL frameworks and use cases.

Environment

VMware Private AI Foundation 9.0

Resolution

Each sample is organized in its own directory, which contains two files: a cloud-init script and a config.json file.

DL Workload                Folder
NVIDIA RAG                 ./nvidia-rag/
CUDA Sample                ./cuda-sample/
DCGM Exporter              ./dcgm-exporter/
PyTorch                    ./pytorch/
TensorFlow                 ./tensorflow/
Triton Inference Server    ./triton-inference-server/
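
For example, after extracting the bundle, the PyTorch sample directory contains the two files described above (the listing below is illustrative; the other workload directories follow the same layout):

    $ ls ./pytorch/
    cloud-config.yaml  config.json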
  • For each specific use case, the associated cloud-init script is provided as a cloud-config.yaml file located in its corresponding directory.
  • To deploy a workload, encode the cloud-config.yaml cloud-init script in base64 and assign the resulting value to the user-data OVF parameter of the Deep Learning VM image (see the encoding example after this list).
  • Workload-specific configurable parameters are defined in the respective config.json files. After applying any required modifications, encode the file content in base64 and assign it to the config-json OVF parameter.
  • For comprehensive guidance on deploying and configuring the Deep Learning VM and DL workloads, refer to the VMware Private AI Foundation 9.0 documentation.
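
The following is a minimal sketch of the encoding steps on a Linux host, using the PyTorch sample as an example. The output file names are illustrative, and the -w0 flag (which disables line wrapping) assumes GNU coreutils base64:

    # Encode the cloud-init script; paste the result into the user-data OVF parameter.
    base64 -w0 ./pytorch/cloud-config.yaml > user-data.b64

    # Encode the (optionally edited) workload parameters; paste the result into the config-json OVF parameter.
    base64 -w0 ./pytorch/config.json > config-json.b64

When deploying through the vSphere Client UI, paste the contents of the resulting files into the corresponding OVF parameters of the Deep Learning VM image. For deployment through the VM Service, refer to the VMware Private AI Foundation 9.0 documentation for how to pass these values.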

Attachments

vpaifn-dl-workloads-cloud-init-sample-script.zip