
Install API Portal 4.4 on OpenShift


Article ID: 145744


Products

CA API Developer Portal

Issue/Introduction

I need help understanding why, after deploying Portal 4.4 on an OpenShift cluster, the packages are not downloaded from the repository.

This is my OpenShift cluster:

NAME         STATUS    ROLES     AGE       VERSION
ca-infra0    Ready     infra     1d        v1.11.0+d4cacc0
ca-master0   Ready     master    1d        v1.11.0+d4cacc0
ca-node0     Ready     compute   1d        v1.11.0+d4cacc0
ca-node1     Ready     compute   1d        v1.11.0+d4cacc0

These are my pods; they remain in "ImagePullBackOff":

NAME                                 READY     STATUS                  RESTARTS   AGE
analytics-server-6dd64f6795-lrxzx    0/1       Init:ImagePullBackOff   0          1d
apim-85f499fdcf-fzjn5                0/1       Init:ImagePullBackOff   0          1d
authenticator-7956b5966d-gnmr7       0/1       Init:ImagePullBackOff   0          1d
broker-0                             0/1       Pending                 0          1d
coordinator-0                        0/1       Pending                 0          1d
db-upgrade-2qm72                     0/1       ImagePullBackOff        0          1d
dispatcher-768cdf678b-86q54          0/1       ImagePullBackOff        0          1d
ingestion-server-ccd7ddb54-k7xxc     0/1       ImagePullBackOff        0          1d
middlemanager-0                      0/1       ImagePullBackOff        0          1d
portal-data-6d87848587-54bpc         0/1       Init:ImagePullBackOff   0          1d
portal-enterprise-d8dbc75cd-zt5wn    0/1       Init:ImagePullBackOff   0          1d
portaldb-7575d9db7d-xsbqw            0/1       ErrImagePull            0          1d
pssg-c8c759b49-bxb6z                 0/1       Pending                 0          1d
rabbitmq-65579867bb-gskcf            0/1       ImagePullBackOff        0          1d
rbac-upgrade-tqhkk                   0/1       ImagePullBackOff        0          1d
solr-7c6965bdbc-lcw2g                0/1       ImagePullBackOff        0          1d
tenant-provisioner-87b96647f-tzf8x   0/1       Init:ImagePullBackOff   0          1d

This is my file "docker-secret.yaml". The data: field is not populated; it seems that a base64-encoded JSON is needed for access to the repository:
apiVersion: v1
kind: Secret
metadata:
  name: bintray
type: kubernetes.io/dockerconfigjson
data:
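
The data: field needs a .dockerconfigjson entry whose value is the base64 encoding of a Docker config JSON. A minimal sketch of producing that value (the registry URL and credentials below are hypothetical placeholders, not real Bintray values):

```shell
# Hypothetical registry and credentials -- substitute your real repository values.
REGISTRY="registry.example.com"
# The Docker config format stores "user:password" base64-encoded under "auth".
AUTH=$(printf '%s' 'myuser:mypassword' | base64)
CONFIG_JSON=$(printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH")
# The value placed under data: .dockerconfigjson must itself be base64-encoded.
DOCKERCONFIGJSON=$(printf '%s' "$CONFIG_JSON" | base64 | tr -d '\n')
echo "$DOCKERCONFIGJSON"
```

The resulting string goes under data: as ".dockerconfigjson: <value>". Alternatively, "oc create secret docker-registry" can generate an equivalent secret directly.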

 

Cause

docker-secret.yml is empty:

First of all, I can see in values.yml that pullSecrets is commented out; I also don't see any data in the docker-secret.yml file:

image:
  pullPolicy: Always
  #pullSecrets: bintray
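
For the chart to use the secret when pulling images, the pullSecrets line must be uncommented and must match the secret's metadata name (bintray in this case); a corrected fragment would look like:

```yaml
image:
  pullPolicy: Always
  pullSecrets: bintray
```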

% helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

Environment

Release : 4.4

Component : API PORTAL

Resolution

The customer had commented out the pullSecrets setting to work around the error raised during deployment.

After populating the data: value in the docker-secret.yaml file per the instructions given (".dockerconfigjson: etc ..."), the deployment succeeds.

However, some pods are still in a failed state:

NAME                                  READY     STATUS      RESTARTS   AGE
analytics-server-7d96bbb4b6-cwc8g     1/1       Running     0          21m
apim-85b7b5f75d-9gp5x                 0/1       Running     3          21m
authenticator-6dd4dc9b7f-4rfdt        1/1       Running     0          21m
broker-0                              0/1       Running     5          21m
coordinator-0                         0/1       Running     6          21m
db-upgrade-vcz6k                      0/1       Completed   0          21m
dispatcher-7744567b89-jldt8           1/1       Running     0          21m
ingestion-server-5ff84b9b9-86jc6      1/1       Running     0          21m
middlemanager-0                       0/1       Running     5          21m
portal-data-bc45b9f94-njt8c           1/1       Running     0          21m
portal-enterprise-55bdb67fb9-kwbqw    1/1       Running     0          21m
portaldb-75c7959845-mh2tc             1/1       Running     0          21m
pssg-785877cb7b-44rsq                 0/1       Running     5          21m
rabbitmq-8ffc8f64f-sqhhg              0/1       Running     1          21m
rbac-upgrade-bspcz                    0/1       Completed   0          21m
solr-78dd8c85db-vvfxh                 1/1       Running     0          21m
tenant-provisioner-77ccfb69fd-jmkfn   1/1       Running     0          21m


This is the debug output from the OpenShift console:

create Pod kafka-0 in StatefulSet kafka failed error: pods "kafka-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{1010}: 1010 is not an allowed group]

create Pod historical-0 in StatefulSet historical failed error: pods "historical-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{1010}: 1010 is not an allowed group]

create Pod zookeeper-0 in StatefulSet zookeeper failed error: pods "zookeeper-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{1010}: 1010 is not an allowed group]

create Pod minio-0 in StatefulSet minio failed error: pods "minio-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{1010}: 1010 is not an allowed group]

The error mentioned above means that the default security context constraint (SCC), which is normally 'restricted' (you can check with the OpenShift command 'oc get scc'), does not allow the use of hostPath volumes or the SYS_ADMIN/SYS_RESOURCE capabilities, and it restricts which fsGroup values a pod may use (hence the "1010 is not an allowed group" messages).
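
One way to resolve this is to grant the deployment's service account an SCC that permits the required fsGroup, hostPath volumes, and capabilities. A sketch of a custom SCC; the name and the fsGroup range are assumptions for this Portal deployment, not values from the product documentation:

```yaml
# Hypothetical custom SCC; adjust name, ranges, and capabilities to your deployment.
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: portal-scc
allowHostDirVolumePlugin: true
allowedCapabilities:
- SYS_ADMIN
- SYS_RESOURCE
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1010
    max: 1010
supplementalGroups:
  type: RunAsAny
```

Apply it with "oc create -f portal-scc.yaml" and bind it to the service account with "oc adm policy add-scc-to-user portal-scc -z default -n <project>".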

https://docs.openshift.com/container-platform/3.6/admin_guide/manage_scc.html

The above link provides the relevant context.