K8s job does not fail when using an incorrect image name

Article ID: 418034


Updated On:

Products

CA Automic Workload Automation - Automation Engine

Issue/Introduction

When a K8s job is started with an incorrect image name, the job remains active on the Automic side even though it fails in K8s.

Resolution

This is expected behavior: the K8s Integration manages the job, not its pods. A job can start multiple pods, some of which may fail while others run correctly; these individual pod statuses are not tracked.
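To see why individual pod statuses can diverge from the job status, the pods belonging to a job can be listed directly. This is a hedged sketch that assumes a job named test-job1 (as in the example below) and a working kubectl context; Kubernetes labels each pod created by a Job with job-name=<job name>:

```shell
# List the pods created by the Job "test-job1" (illustrative name).
# Individual pods may show different statuses, e.g. ErrImagePull,
# ImagePullBackOff, or Completed, while the Job object itself is
# what the K8s Integration tracks.
kubectl get pods --selector=job-name=test-job1
```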

If the job launches only a single pod, a possible workaround is to set the parameter activeDeadlineSeconds in the job specification.

For example:

apiVersion: batch/v1
kind: Job
metadata:
  name: test-job1
spec:
  activeDeadlineSeconds: 10   # mark the Job as failed if it has not completed within 10 seconds
  template:
    spec:
      containers:
      - name: busybox-job
        image: busybox1234    # intentionally invalid image name; the pod cannot start
        command: ["sh", "-c", "echo Hello, Kubernetes Job! && sleep 30"]
      restartPolicy: Never

In this definition, the pod fails because the image name does not exist. Once the 10-second deadline expires, Kubernetes marks the Job itself as failed, so the job does not remain active on the Automic side.
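The resulting job state can be verified from the command line. This is a hedged sketch assuming the example job test-job1 above and a working kubectl context; when a Job exceeds activeDeadlineSeconds, Kubernetes records a Failed condition with reason DeadlineExceeded:

```shell
# Inspect the Job after the deadline expires; the Conditions section
# should show type Failed with reason DeadlineExceeded.
kubectl describe job test-job1

# Or read the failure reason directly via jsonpath.
kubectl get job test-job1 \
  -o jsonpath='{.status.conditions[?(@.type=="Failed")].reason}'
```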