Greenplum Streaming Server on Kubernetes helm chart image not working

Article ID: 408285


Updated On:

Products

VMware Tanzu Greenplum

Issue/Introduction

When trying to install Tanzu Greenplum Streaming Server on Kubernetes, the following error is returned with the .yaml file's default security setting of 'runAsNonRoot: true':

ERROR: container has runAsNonRoot and image will run as root

 

When the .yaml file's default configuration is changed, another error is encountered:

ERROR: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/manager": stat /manager: no such file or directory: unknown

 

Resolution

Pod Security Admission (PSA) has been enabled on the TKG 2.5.2 cluster with the Restricted Pod Security Standard; see https://kubernetes.io/docs/concepts/security/pod-security-standards/

This can be confirmed by viewing the /etc/kubernetes/admission-control-config.yaml file on any of the Control Plane nodes.
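For example, the file can be inspected directly (assuming shell access to a control plane node):

cat /etc/kubernetes/admission-control-config.yaml

The output should look similar to the following: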

kind: AdmissionConfiguration
apiVersion: apiserver.config.k8s.io/v1
plugins:
    - name: PodSecurity
      configuration:
        kind: PodSecurityConfiguration
        apiVersion: pod-security.admission.config.k8s.io/v1beta1
        defaults:
            enforce-version: v1.24
            audit: restricted
            audit-version: v1.24
            warn: restricted
            warn-version: v1.24
        exemptions:
            namespaces:
                - kube-system
                - tkg-system

 

One option is to modify the default policy to be less restrictive. For a quick test, you can edit this file and restart kube-apiserver.
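For illustration only, a less restrictive defaults section could set all three modes explicitly to the privileged level (this is a sketch; choose levels that match your security requirements):

defaults:
    enforce: privileged
    enforce-version: v1.24
    audit: privileged
    audit-version: v1.24
    warn: privileged
    warn-version: v1.24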

For a permanent change, you can modify the kcp (KubeadmControlPlane) object if it is a legacy cluster; for a classy (class-based) cluster, you would need to modify the cluster namespace instead. See:

https://techdocs.broadcom.com/us/en/vmware-tanzu/standalone-components/tanzu-kubernetes-grid/2-5/tkg/workload-security-psa.html
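For example, for a legacy cluster the control plane object could be edited from the management cluster context (a sketch only; the object and namespace names are placeholders):

kubectl edit kubeadmcontrolplane <CLUSTER-NAME>-control-plane -n <CLUSTER-NAMESPACE>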

If you want to modify the default policy and need assistance, please open a case with the TKGm team.

It is recommended to keep the default policy in the cluster and modify the workloads instead, which only involves adding a label to the namespace where the workload is installed. In this case, it is the gpss-operator namespace, and the privileged policy can be set as below:

kubectl label ns gpss-operator pod-security.kubernetes.io/enforce=privileged --overwrite
kubectl label ns gpss-operator pod-security.kubernetes.io/warn=privileged --overwrite

 

You may also need to add these labels:

kubectl label ns gpss-operator pod-security.kubernetes.io/enforce-version=latest --overwrite
kubectl label ns gpss-operator pod-security.kubernetes.io/warn-version=latest --overwrite
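
Once the labels are applied, they can be verified with a quick check before retrying the GPSS helm chart installation:

kubectl get ns gpss-operator --show-labels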