How to customize health check settings for the API Developer Portal container



Article ID: 237006


Products

CA API Developer Portal

Issue/Introduction

Here are the default health check settings for the portal_tenant-provisioner container:

====
docker inspect $(docker ps | grep portal_tenant-provisioner | awk '{ print $1 }') | grep -A 9 '"Healthcheck"'

            "Healthcheck": {
                "Test": [
                    "CMD-SHELL",
                    "\"/opt/app/healthcheck.sh\""
                ],
                "Interval": 30000000000, // 30s 
                "Timeout": 3000000000 // 3s 
            },
====

 

Planned Config Changes: 

portal_tenant-provisioner
- Start Period (0s to 300s)
- Retries (3 to 10)
- Interval (30s to 60s)
- Timeout (3s to 10s)

 

Environment

Release : 5.0.2

Component : API PORTAL

Resolution

This requires customization. The idea is:

-- Use a Dockerfile with custom HEALTHCHECK options to build a new image.

-- Run the command "./portal.sh keep" to restart the Portal and dump the Portal yml files; one of them is docker-compose.yml.

-- Customize docker-compose.yml to use the new image.

-- Customize ./portal.sh to use the new docker-compose.yml.

 

For example (tests were done on Portal 4.5):

1. Create the Dockerfile

# vi tenantdockfile

FROM apim-portal.packages.ca.com/apim-portal/tenant-provisioning-service:4.5

HEALTHCHECK --interval=60s --timeout=10s --retries=10 --start-period=300s \
      CMD "/opt/app/healthcheck.sh"
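As a rough sanity check on these values (assuming Docker's usual semantics: failures during the start period do not count toward retries, and it then takes the configured number of consecutive failed probes, one per interval, to flag the container), an always-failing container would be marked unhealthy after roughly:

```shell
# Rough upper bound in seconds: start period + retries * interval
# (probe timeouts add a little more; this is a ballpark, not an exact figure).
echo $(( 300 + 10 * 60 ))   # 900 seconds, i.e. 15 minutes
```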

 

2. Build the custom image

# docker build -t apim-portal.packages.ca.com/apim-portal/tenant-provisioning-service:docfhealthcheck -f tenantdockfile .

(do not miss the dot at the end of the command line; it specifies the build context)

3. Compare the healthcheck settings of the old and new images

# docker images | grep tenant

to find the old and new image IDs.

 

# docker image inspect <old image ID> | more

Scroll down to find the healthcheck settings:

"Healthcheck": {
    "Test": [
        "CMD-SHELL",
        "\"/opt/app/healthcheck.sh\""
    ],
    "Interval": 30000000000,
    "Timeout": 3000000000
},    

 

# docker image inspect <new image ID> | more

Scroll down to find the healthcheck settings:

"Healthcheck": {
    "Test": [
        "CMD-SHELL",
        "\"/opt/app/healthcheck.sh\""
    ],
    "Interval": 60000000000,
    "Timeout": 10000000000,
    "StartPeriod": 300000000000,
    "Retries": 10
},
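For reference, docker reports these durations in nanoseconds. A small shell helper (illustrative only) converts them back to human-readable seconds:

```shell
# Docker stores healthcheck durations as nanoseconds in the image config;
# divide by 1,000,000,000 to get seconds.
ns_to_s() { printf '%ss\n' "$(( $1 / 1000000000 ))"; }

ns_to_s 60000000000     # Interval     -> 60s
ns_to_s 10000000000     # Timeout      -> 10s
ns_to_s 300000000000    # StartPeriod  -> 300s
```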

 

*************

After dumping the docker-compose.yml file (with the command "./portal.sh keep"), find these two lines:

  tenant-provisioner:
    image: apim-portal.packages.ca.com/apim-portal/tenant-provisioning-service:4.5

 

Replace them with the following lines to use the new image:

  tenant-provisioner:
    image: apim-portal.packages.ca.com/apim-portal/tenant-provisioning-service:docfhealthcheck
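If you prefer to script this substitution, a sed expression can swap the tag. The snippet below runs it against the single image line for illustration; with sed -i and the same expression it could be applied to docker-compose.yml in place (adjust both tag names to your release):

```shell
# Swap the image tag in the compose line (shown on one echoed line here).
echo '    image: apim-portal.packages.ca.com/apim-portal/tenant-provisioning-service:4.5' \
  | sed 's|tenant-provisioning-service:4.5|tenant-provisioning-service:docfhealthcheck|'
```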

 

**********

In the ./portal.sh file, find the following line:

YML="$(single docker-compose.yml $1)"

Replace it with the following line to use the custom yml file:

YML=`cat ./docker-compose.yml`
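This edit can be scripted the same way (demonstrated on the echoed line itself; with sed -i the expression could be applied to ./portal.sh, assuming the YML= assignment appears only once):

```shell
# Replace the whole YML assignment line with the custom-file version.
echo 'YML="$(single docker-compose.yml $1)"' \
  | sed 's|^YML=.*|YML=`cat ./docker-compose.yml`|'
```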

 

*****************

Run ./portal.sh. After it starts up, run "docker service ls"; it should show that the new image is applied:

k5ndpninhg66   portal_tenant-provisioner   replicated   1/1        apim-portal.packages.ca.com/apim-portal/tenant-provisioning-service:docfhealthcheck 

*****************

Double-check the healthcheck settings of the container:

# docker inspect $(docker ps | grep portal_tenant-provisioner | awk '{ print $1 }') | grep -A 9 '"Healthcheck"'
            "Healthcheck": {
                "Test": [
                    "CMD-SHELL",
                    "\"/opt/app/healthcheck.sh\""
                ],
                "Interval": 60000000000,
                "Timeout": 10000000000,
                "StartPeriod": 300000000000,
                "Retries": 10
            },

 

**********************

 

 

NOTE: All changes were tested under Portal 4.5, so the image tag refers to 4.5; replace it with the correct image name/tag for your release.

NOTE 2: Please understand that such customization is not in the scope of support. This is just an example; Support will not be responsible for any issues caused by such changes.