If Image Scanner results are not displayed in TAP-GUI, follow the steps below to troubleshoot the issue. Scan data flows through the following components:
ImageVulnerabilityScan (completes)
↓
AMR Observer (watches & generates CloudEvents)
↓
AMR CloudEvent Handler (receives events)
↓
Metadata Store (stores scan data)
↓
TAP-GUI (queries & displays results)
# 1. Is AMR Observer installed?
tanzu package installed list -A | grep amr
# 2. Are pods running?
kubectl get pods -n amr-observer-system
kubectl get pods -n metadata-store
# 3. Is AMR Observer processing scans? (MOST IMPORTANT CHECK)
kubectl logs -n amr-observer-system deployment/amr-observer-controller-manager | grep "Generated cloudevents successfully"
# 4. Is CloudEvent Handler receiving events?
kubectl logs -n metadata-store deployment/amr-cloudevent-handler | grep "Processing Authorization"
# 5. Is Metadata Store ingesting data?
kubectl logs -n metadata-store deployment/metadata-store-app --container=metadata-store-app | grep "CreateArtifactGroup"
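The pod checks above can be automated with a small helper that flags any pod whose containers are not all ready. This is a sketch, not part of TAP: the `not_ready_pods` name is hypothetical, and the jq expression assumes the standard `kubectl get pods -o json` schema.

```shell
# Hypothetical helper (not part of TAP): read `kubectl get pods -o json`
# output on stdin and print the name of every pod whose containers are
# not all ready.
not_ready_pods() {
  jq -r '.items[]
         | select(([.status.containerStatuses[]?.ready] | all) | not)
         | .metadata.name'
}

# Usage against the cluster (requires access; uncomment to run):
# kubectl get pods -n amr-observer-system -o json | not_ready_pods
# kubectl get pods -n metadata-store -o json | not_ready_pods
```

An empty result means all pods in the namespace report ready; any printed name is a pod to investigate with `kubectl describe`.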
| Check | Expected Output | What It Means |
|---|---|---|
| AMR Observer pods | `1/1 Running` | AMR Observer is operational |
| "Generated cloudevents" | At least 1 entry per scan | AMR Observer is watching and processing scans |
| "Processing Authorization" | Multiple entries | CloudEvent Handler is receiving events |
| "CreateArtifactGroup" | Entries with `201 Created` | Metadata Store is ingesting scan data |
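The log checks in the table can be wrapped in a small PASS/FAIL sketch. The `check_marker` helper below is hypothetical (not part of TAP); it simply greps its stdin for a marker string:

```shell
# Hypothetical helper (not part of TAP): read log text on stdin and
# report PASS or FAIL for a named marker string.
check_marker() {
  local name="$1" pattern="$2"
  if grep -q "$pattern"; then
    echo "PASS: $name"
  else
    echo "FAIL: $name"
  fi
}

# Usage against live logs (requires cluster access; uncomment to run):
# kubectl logs -n amr-observer-system deployment/amr-observer-controller-manager \
#   | check_marker "observer generated events" "Generated cloudevents successfully"
# kubectl logs -n metadata-store deployment/amr-cloudevent-handler \
#   | check_marker "handler received events" "Processing Authorization"
```

A FAIL on the first marker points at AMR Observer; a PASS there with a FAIL on the second points at delivery to the CloudEvent Handler.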
Symptom: the AMR packages are not installed.

$ tanzu package installed list -A | grep amr
# (empty result)
Symptom: AMR Observer is running, but its logs show no generated CloudEvents.

$ kubectl logs -n amr-observer-system deployment/amr-observer-controller-manager | grep "Generated cloudevents"
# (empty result)

Possible causes:
- AMR Observer started (or restarted) after the scan completed, so it never observed the completion.
- AMR Observer may not be watching the namespace where the ImageVulnerabilityScan runs.
Symptom: AMR Observer generates events but cannot deliver them.

$ kubectl logs -n amr-observer-system deployment/amr-observer-controller-manager --tail=100 | grep "503"
{"error":"request failed with status 503"}

Possible causes:
- The AMR CloudEvent Handler endpoint is returning 503 Service Unavailable, typically because its pods are down, not ready, or unreachable through their Service.
- The Metadata Store behind the handler is unhealthy, so requests are being rejected.
# Restart AMR Observer
kubectl rollout restart deployment/amr-observer-controller-manager -n amr-observer-system
# Watch logs in real-time
kubectl logs -n amr-observer-system deployment/amr-observer-controller-manager -f
# In another terminal, trigger a new scan
kubectl delete imagevulnerabilityscan -n argo test-workload-2-trivy-scan-XXXXX
# Wait for supply chain to recreate it
# Restart CloudEvent Handler
kubectl rollout restart deployment/amr-cloudevent-handler -n metadata-store
# Check if service exists
kubectl get svc -n metadata-store amr-cloudevent-handler
Run these in order to trace the data flow:
# Step 1: Verify scan completed
kubectl get imagevulnerabilityscan -n argo -l carto.run/workload-name=test-workload-2
# Step 2: Check AMR Observer saw it
kubectl logs -n amr-observer-system deployment/amr-observer-controller-manager --tail=500 | grep -A2 "test-workload-2"
# Step 3: Check CloudEvent Handler received it
kubectl logs -n metadata-store deployment/amr-cloudevent-handler --tail=100
# Step 4: Check Metadata Store ingested it
kubectl logs -n metadata-store deployment/metadata-store-app --container=metadata-store-app | grep -E "CreateImage|CreateArtifactGroup"
# Step 5: Query Metadata Store directly. Replace sha256:######## with the sha from your image-provider step.
kubectl port-forward -n metadata-store svc/metadata-store-app 8443:8443 &
curl -k -X POST https://localhost:8443/api/v1/artifact-groups/_search \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $(kubectl get secrets -n metadata-store metadata-store-read-write-client -o json | jq -r '.data.token' | base64 -d)" \
-d '{"all":true,"digests":["sha256:########"],"shas":[]}' | jq .
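The Bearer token in the curl call above is the `token` field of the Metadata Store client secret, base64-decoded. That extraction can be pulled into a small helper for reuse; the `token_from_secret_json` name is hypothetical, and the sketch assumes the secret JSON shape produced by `kubectl get secret ... -o json`:

```shell
# Hypothetical helper (not part of TAP): extract and decode the access
# token from a Kubernetes secret rendered as JSON.
token_from_secret_json() {
  jq -r '.data.token' | base64 -d
}

# Usage against the cluster (requires access; uncomment to run):
# TOKEN=$(kubectl get secrets -n metadata-store metadata-store-read-write-client -o json \
#   | token_from_secret_json)
# Then pass: -H "Authorization: Bearer $TOKEN" to the curl call above.
```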
If these steps do not resolve the issue, reach out to VMware Tanzu Support and include the outputs collected above.