TKGm - Kapp-Controller app not Reconciling in particular namespace

Article ID: 384713

Products

Tanzu Kubernetes Runtime

Issue/Introduction

From the management cluster, the kapp-controller app for the namespace shows a Reconcile failed status:

<namespace>      <namespace>-kapp-controller             Reconcile failed: Deploying: Error (see .status.usefulErrorMessage for details)   3m9s           504d
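The detailed error can be read from the App resource itself, for example with a jsonpath query (a sketch; substitute the real namespace):

kubectl get app <namespace>-kapp-controller -n <namespace> -o jsonpath='{.status.usefulErrorMessage}'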

The status of the problem kapp-controller app shows the error below.

    Updated At:          2024-xx-xxTxx:xx:xxZ
  Friendly Description:  Reconcile failed: Deploying: Error (see .status.usefulErrorMessage for details)
  Observed Generation:   4
  Template:
    Exit Code:  0
    Stderr:     resolve | final: ghcr.io/carvel-dev/kapp-controller@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -> xxx.xxx.xx:5000/tkg/packages/core/kapp-controller@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    Updated At:          2024-xx-xxTxx:xx:xxZ
  Useful Error Message:  kapp: Error: Ownership errors:
- Resource 'namespace/tanzu-package-repo-global (v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'serviceaccount/kapp-controller-sa (v1) namespace: tkg-system' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'rolebinding/pkgserver-auth-reader (rbac.authorization.k8s.io/v1) namespace: kube-system' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'configmap/kapp-controller-config (v1) namespace: tkg-system' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'clusterrole/kapp-controller-user-role (rbac.authorization.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'service/packaging-api (v1) namespace: tkg-system' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'deployment/kapp-controller (apps/v1) namespace: tkg-system' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'clusterrole/kapp-controller-cluster-role (rbac.authorization.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'namespace/tkg-system (v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'customresourcedefinition/internalpackagemetadatas.internal.packaging.carvel.dev (apiextensions.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'clusterrolebinding/pkg-apiserver:system:auth-delegator (rbac.authorization.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'clusterrolebinding/kapp-controller-cluster-role-binding (rbac.authorization.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'customresourcedefinition/packageinstalls.packaging.carvel.dev (apiextensions.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'apiservice/v1alpha1.data.packaging.carvel.dev (apiregistration.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'customresourcedefinition/packagerepositories.packaging.carvel.dev (apiextensions.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'customresourcedefinition/apps.kappctrl.k14s.io (apiextensions.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
- Resource 'customresourcedefinition/internalpackages.internal.packaging.carvel.dev (apiextensions.k8s.io/v1) cluster' is already associated with a different label 'kapp.k14s.io/app=xxxxxxxxxxxxxxxxxxx'
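In these ownership errors, xxxxxxxxxxxxxxxxxxx is the labelValue that kapp expects the resources to carry. A rough way to extract it from the app status is a sketch like the following (the pattern may need adjusting for your label values):

kubectl get app <namespace>-kapp-controller -n <namespace> -o jsonpath='{.status.usefulErrorMessage}' | grep -oE 'kapp.k14s.io/app=[A-Za-z0-9-]+' | sort -u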

Environment

TKGm (Tanzu Kubernetes Grid)

Cause

This issue occurs because the kapp-controller app's ConfigMap in the problem context/namespace was previously deleted.
After the deletion, the management cluster recreated the ConfigMap with a labelValue different from the one used by the prior installation. The resources on the cluster still carry the old kapp.k14s.io/app label while the new ConfigMap records a different value, which leaves the kapp app stuck in the Reconcile failed state.
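One way to confirm the mismatch (a sketch, run from the problem cluster's context) is to read the kapp.k14s.io/app label from one of the resources listed in the error, for example the kapp-controller Deployment, and compare it with the labelValue stored in the ConfigMap shown in the Resolution below:

kubectl get deployment kapp-controller -n tkg-system -o jsonpath='{.metadata.labels.kapp\.k14s\.io/app}'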

Resolution

Switch to the context of the problem namespace's cluster and check the kapp-controller ConfigMap in that namespace using the command below:

kubectl get configmap -n <namespace> -o yaml | less

The output will look similar to the following:

apiVersion: v1
data:
  spec: '{"labelKey":"kapp.k14s.io/app","labelValue":"yyyyyyyyyyyyyyyyyyy","lastChange":{"startedAt":"0001-01-01T00:00:00Z","finishedAt":"0001-01-01T00:00:00Z"},"usedGKs":[]}'
kind: ConfigMap
metadata:
  annotations:
    kapp.k14s.io/app-changes-use-app-label: ""
  creationTimestamp: "2024-xx-xxTxx:xx:xxZ"
  labels:
    kapp.k14s.io/is-app: ""
  name: <namespace>-kapp-controller.app
  namespace: default
  resourceVersion: "aaaaaaaaa"
  uid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx


Note that the labelValue is currently set to yyyyyyyyyyyyyyyyyyy, which differs from the value xxxxxxxxxxxxxxxxxxx reported in the error.
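Alternatively, to extract just the stored spec string instead of paging through the full YAML, a jsonpath query along these lines may help (assuming the ConfigMap name matches the output above):

kubectl get configmap <namespace>-kapp-controller.app -n <namespace> -o jsonpath='{.data.spec}'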


Next, edit the ConfigMap with the command below and replace the labelValue yyyyyyyyyyyyyyyyyyy with the labelValue from the original error, i.e. xxxxxxxxxxxxxxxxxxx:

kubectl edit configmap <namespace>-kapp-controller.app -n <namespace>


Change the labelValue in the spec as shown below:

apiVersion: v1
data:
  spec: '{"labelKey":"kapp.k14s.io/app","labelValue":"xxxxxxxxxxxxxxxxxxx","lastChange":{"startedAt":"0001-01-01T00:00:00Z","finishedAt":"0001-01-01T00:00:00Z"},"usedGKs":[]}'
kind: ConfigMap
metadata:
  annotations:
    kapp.k14s.io/app-changes-use-app-label: ""
  creationTimestamp: "2024-xx-xxTxx:xx:xxZ"
  labels:
    kapp.k14s.io/is-app: ""
  name: <namespace>-kapp-controller.app
  namespace: default
  resourceVersion: "aaaaaaaaa"
  uid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx


Note that the labelValue is now set to xxxxxxxxxxxxxxxxxxx, matching the label reported in the original error.

Save and quit the editor (type :wq! in vi) to write the change to the ConfigMap.
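If an interactive edit is not convenient, the same change can be made non-interactively with kubectl patch. This is a sketch: the whole spec string must be supplied, with xxxxxxxxxxxxxxxxxxx replaced by the labelValue from your error output:

kubectl patch configmap <namespace>-kapp-controller.app -n <namespace> --type merge -p '{"data":{"spec":"{\"labelKey\":\"kapp.k14s.io/app\",\"labelValue\":\"xxxxxxxxxxxxxxxxxxx\",\"lastChange\":{\"startedAt\":\"0001-01-01T00:00:00Z\",\"finishedAt\":\"0001-01-01T00:00:00Z\"},\"usedGKs\":[]}"}}'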

Now switch the context back to the management cluster and check the kapp app with the command below:

kubectl get app -A | grep -Ei kapp | grep -Ei <namespace>


You should then see the kapp app for the problem namespace with the status Reconcile succeeded.
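If the status does not update immediately, kapp-controller retries on its next sync interval; you can watch the app until it reconciles, for example:

kubectl get app <namespace>-kapp-controller -n <namespace> -w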