Prerequisites
The following Azure environment information should be prepared in advance: the Tenant ID, Subscription ID, resource group, location, and virtual network name of your environment, plus the Client ID and Client Secret of the Azure AD service principal used for the cluster (these values fill the placeholders in step 2).
You also need a Linux box with internet access (at least to https://raw.githubusercontent.com) and the helm CLI tool installed.
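A quick way to confirm the box is ready (a sketch; it assumes the chart repo URL used in step 4 below):

helm version
curl -sSfo /dev/null https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/charts/index.yaml && echo "chart repo reachable"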
Steps
1) Get credentials and kubeconfig with the tkgi CLI tool and set the current Kubernetes context to the cluster on which the CSI driver will be installed.
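For example, a minimal sketch (the API endpoint, user name, and cluster name are placeholders for your own environment; -k skips TLS validation and should only be used in labs):

# Log in to the TKGI API and fetch a kubeconfig for the target cluster
tkgi login -a api.tkgi.example.com -u admin -k
tkgi get-credentials my-cluster
# tkgi get-credentials names the context after the cluster; make it current
kubectl config use-context my-cluster
kubectl config current-context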
2) Create a secret on the cluster with credentials for accessing your Azure environment. Replace the placeholders in the following text with the real values for your Azure environment (for example, replace ${tenant_id} with your Tenant ID), then copy, paste, and execute the text in a terminal with the right Kubernetes context set.
cloud_config=$( cat <<CONF | base64 | awk '{printf $0}'; echo
{
    "cloud": "AzurePublicCloud",
    "tenantId": "${tenant_id}",
    "subscriptionId": "${subscription_id}",
    "resourceGroup": "${resource_group}",
    "location": "${location}",
    "aadClientId": "${client_id}",
    "aadClientSecret": "${client_secret}",
    "useManagedIdentityExtension": false,
    "useInstanceMetadata": true,
    "vmType": "standard",
    "vnetName": "${virtual_network}",
    "vnetResourceGroup": "${resource_group}",
    "cloudProviderBackoff": true
}
CONF
)
cat <<SEC | kubectl apply -f -
apiVersion: v1
data:
  cloud-config: "${cloud_config}"
kind: Secret
metadata:
  name: azure-cloud-provider
  namespace: kube-system
type: Opaque
SEC
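To verify that the secret was created correctly, you can decode it back and compare it with the JSON above (a quick check, assuming the secret name and namespace were kept as-is):

kubectl -n kube-system get secret azure-cloud-provider -o jsonpath='{.data.cloud-config}' | base64 -d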
3) Create the cluster role and role binding. If an azuredisk storage class is defined on the cluster, replace "azurefile" with "azuredisk" in the YAML text below and apply it again.
cat <<PSP | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
  name: psp:privileged
rules:
- apiGroups:
  - extensions
  resourceNames:
  - pks-privileged
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-role-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:privileged
subjects:
- kind: ServiceAccount
  name: csi-azurefile-controller-sa
  namespace: kube-system
- kind: ServiceAccount
  name: csi-azurefile-node-sa
  namespace: kube-system
PSP
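A quick sanity check that both objects were created (names as in the YAML above):

kubectl get clusterrole psp:privileged
kubectl -n kube-system get rolebinding psp-privileged-role-binding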
4) Install the Azure file CSI driver. If an azuredisk storage class is defined on the cluster, also install the Azure disk CSI driver. As of this writing, the latest "${azure_file_csi_driver_version}" is v1.28.0 and the latest "${azure_disk_csi_driver_version}" is v1.28.1.
echo "Install Azure file csi driver............" helm repo add azurefile-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/charts helm repo update azurefile-csi-driver helm upgrade --install azurefile-csi-driver --version=${azure_file_csi_driver_version} \ --namespace kube-system \ --set linux.kubelet=/var/vcap/data/kubelet \ --set windows.enabled=false \ azurefile-csi-driver/azurefile-csi-driver echo "Install Azure disk csi driver............" helm repo add azuredisk-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts helm repo update azuredisk-csi-driver helm upgrade --install azuredisk-csi-driver --version=${azure_disk_csi_driver_version} \ --namespace kube-system \ --set linux.kubelet=/var/vcap/data/kubelet \ --set windows.enabled=false \ azuredisk-csi-driver/azuredisk-csi-driver
5) Check the state of the Kubernetes resources (e.g. deployments/pods) for the Azure file/disk CSI drivers in the kube-system namespace and do the necessary troubleshooting if there is any problem; a few starting-point commands are sketched below.
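For example (a sketch; the pod name is a placeholder to fill in from the first command's output):

# List the CSI driver workloads and their pods
kubectl -n kube-system get deploy,ds,pods | grep csi
# If a pod is not Running, inspect its events and container logs
kubectl -n kube-system describe pod <pod-name>
kubectl -n kube-system logs <pod-name> --all-containers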