Pull an Image from a Private Registry fails - ImagePullBackOff - kubernetes

On our K8s worker node, I created a secret with the command below to pull images from our private (Nexus) registry:
kubectl create secret docker-registry regcred --docker-server=https://nexus-server/nexus/ --docker-username=admin --docker-password=password --docker-email=user@company.com
Then I created my-private-reg-pod.yaml on the worker node with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: private-reg-container
    image: nexus-server:4546/ubuntu-16:version-1
  imagePullSecrets:
  - name: regcred
I created the pod with the command below:
kubectl create -f my-private-reg-pod.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 0/1 ImagePullBackOff 0 27m
kubectl describe pod test-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-pod to k8s-worker01
Warning Failed 26m (x6 over 28m) kubelet, k8s-worker01 Error: ImagePullBackOff
Normal Pulling 26m (x4 over 28m) kubelet, k8s-worker01 Pulling image "sonatype:4546/ubuntu-16:version-1"
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Failed to pull image "nexus-server:4546/ubuntu-16:version-1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus-server.domain.com/nexus/v2/ubuntu-16/manifests/ver-1: no basic auth credentials
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Error: ErrImagePull
Normal BackOff 3m9s (x111 over 28m) kubelet, k8s-worker01 Back-off pulling image "nexus-server:4546/ubuntu-16:version-1"
On the terminal, docker login to Nexus works:
docker login nexus-server:4546
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Am I missing something here?

Since my docker login to Nexus succeeded on the terminal, I deleted my secret and recreated it directly from the Docker config file, and it worked:
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson
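A sketch of why this fixed it (my understanding): the kubelet selects credentials by matching the image's registry host:port against the keys under "auths" in the secret's .dockerconfigjson. The first secret was keyed by "https://nexus-server/nexus/", while the image "nexus-server:4546/ubuntu-16:version-1" needs the key "nexus-server:4546", which is exactly what docker login had written into /root/.docker/config.json. The file and credentials below are illustrative, not the real ones:

```shell
# A config.json roughly like what `docker login nexus-server:4546` writes
# (credentials here are made up for illustration):
cat > config.json <<'EOF'
{"auths":{"nexus-server:4546":{"auth":"YWRtaW46cGFzc3dvcmQ="}}}
EOF

# The "auths" key must be the exact host:port used in the pod's image reference:
grep -o 'nexus-server:4546' config.json | head -n 1

# The "auth" field is just base64("username:password"):
echo 'YWRtaW46cGFzc3dvcmQ=' | base64 -d
echo
```

So a docker-registry secret whose --docker-server does not match the image's host:port is silently ignored, and the pull falls back to anonymous ("no basic auth credentials").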


Pod cannot mount Persistent Volume created by ozone CSI provisioner

I am using Kubernetes to deploy Ozone (a substitute for HDFS), and basically followed the instructions from here and here (just a few steps).
First I created a few PVs with hostPath pointing to a local directory, then I slightly edited the YAMLs from ozone/kubernetes/example/ozone, changing the NFS claim to a hostPath claim:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    storageClassName: manual
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 5Gi
    selector:
      matchLabels:
        type: local
I also commented out the nodeAffinity settings in datanode-stateful.yaml, since my Kubernetes cluster only had a master node.
The deployment was successful.
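For reference, a hostPath PV that would satisfy the claim template above could look like the sketch below; the name and path are hypothetical, but the storageClassName and the type: local label must match the claim's storageClassName and selector:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ozone-pv-0          # hypothetical name
  labels:
    type: local             # matched by the claim's selector
spec:
  storageClassName: manual  # must match the claim template
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/ozone/pv-0  # hypothetical local directory
```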
Then I applied the CSI and pv-test YAMLs as the CSI instructions said. The PV (a bucket in s3v) was automatically created and the PVC bound to it, but the test pod got stuck at ContainerCreating.
Attaching the pv-test pod description:
Name: ozone-csi-test-webserver-778c8c87b7-rngfk
Namespace: default
Priority: 0
Node: k8s-master/192.168.100.202
Start Time: Fri, 18 Jun 2021 14:23:54 +0800
Labels: app=ozone-csi-test-webserver
pod-template-hash=778c8c87b7
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/ozone-csi-test-webserver-778c8c87b7
Containers:
web:
Container ID:
Image: python:3.7.3-alpine3.8
Image ID:
Port: <none>
Host Port: <none>
Args:
python
-m
http.server
--directory
/www
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gqknv (ro)
/www from webroot (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
webroot:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ozone-csi-test-webserver
ReadOnly: false
default-token-gqknv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gqknv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 7m7s (x58 over 122m) kubelet, k8s-master MountVolume.SetUp failed for volume "pvc-1913bd70-09fd-4eba-a459-73fe3bd397b8" : rpc error: code = Unknown desc =
Warning FailedMount 31s (x54 over 120m) kubelet, k8s-master Unable to mount volumes for pod "ozone-csi-test-webserver-778c8c87b7-rngfk_default(b1a59143-00b9-47f6-94fe-1845c29aee93)": timeout expired waiting for volumes to attach or mount for pod "default"/"ozone-csi-test-webserver-778c8c87b7-rngfk". list of unmounted volumes=[webroot]. list of unattached volumes=[webroot default-token-gqknv]
Attaching the events for the whole process:
7m51s Normal SuccessfulCreate statefulset/s3g create Claim data-s3g-0 Pod s3g-0 in StatefulSet s3g success
7m51s Warning FailedScheduling pod/s3g-0 pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
7m51s Normal SuccessfulCreate statefulset/scm create Pod scm-0 in StatefulSet scm successful
7m51s Normal SuccessfulCreate statefulset/om create Pod om-0 in StatefulSet om successful
7m51s Warning FailedScheduling pod/om-0 pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
7m51s Warning FailedScheduling pod/datanode-0 pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
7m51s Normal SuccessfulCreate statefulset/datanode create Pod datanode-0 in StatefulSet datanode successful
7m51s Normal SuccessfulCreate statefulset/datanode create Claim data-datanode-0 Pod datanode-0 in StatefulSet datanode success
7m51s Normal SuccessfulCreate statefulset/scm create Claim data-scm-0 Pod scm-0 in StatefulSet scm success
7m51s Normal SuccessfulCreate statefulset/s3g create Pod s3g-0 in StatefulSet s3g successful
7m51s Normal SuccessfulCreate statefulset/om create Claim data-om-0 Pod om-0 in StatefulSet om success
7m51s Warning FailedScheduling pod/scm-0 pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
7m50s Normal Scheduled pod/s3g-0 Successfully assigned default/s3g-0 to hadoop104
7m50s Normal Scheduled pod/datanode-0 Successfully assigned default/datanode-0 to hadoop103
7m50s Normal Scheduled pod/scm-0 Successfully assigned default/scm-0 to hadoop104
7m50s Normal Scheduled pod/om-0 Successfully assigned default/om-0 to hadoop103
7m49s Normal Created pod/datanode-0 Created container datanode
7m49s Normal Started pod/datanode-0 Started container datanode
7m49s Normal Pulled pod/datanode-0 Container image "apache/ozone:1.1.0" already present on machine
7m48s Normal SuccessfulCreate statefulset/datanode create Claim data-datanode-1 Pod datanode-1 in StatefulSet datanode success
7m48s Warning FailedScheduling pod/datanode-1 pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
7m48s Normal Pulled pod/scm-0 Container image "apache/ozone:1.1.0" already present on machine
7m48s Normal Created pod/scm-0 Created container init
7m48s Normal Started pod/scm-0 Started container init
7m48s Normal Pulled pod/s3g-0 Container image "apache/ozone:1.1.0" already present on machine
7m48s Normal Created pod/s3g-0 Created container s3g
7m48s Normal Started pod/s3g-0 Started container s3g
7m48s Normal SuccessfulCreate statefulset/datanode create Pod datanode-1 in StatefulSet datanode successful
7m46s Normal Scheduled pod/datanode-1 Successfully assigned default/datanode-1 to hadoop104
7m45s Normal Created pod/datanode-1 Created container datanode
7m45s Normal Pulled pod/datanode-1 Container image "apache/ozone:1.1.0" already present on machine
7m44s Normal Created pod/scm-0 Created container scm
7m44s Normal Started pod/scm-0 Started container scm
7m44s Normal Started pod/datanode-1 Started container datanode
7m44s Normal Pulled pod/scm-0 Container image "apache/ozone:1.1.0" already present on machine
7m43s Warning FailedScheduling pod/datanode-2 pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
7m43s Normal SuccessfulCreate statefulset/datanode create Pod datanode-2 in StatefulSet datanode successful
7m43s Normal SuccessfulCreate statefulset/datanode create Claim data-datanode-2 Pod datanode-2 in StatefulSet datanode success
7m42s Normal Scheduled pod/datanode-2 Successfully assigned default/datanode-2 to hadoop103
7m38s Normal Pulled pod/datanode-2 Container image "apache/ozone:1.1.0" already present on machine
7m38s Normal Created pod/datanode-2 Created container datanode
7m38s Normal Started pod/datanode-2 Started container datanode
7m23s Normal ScalingReplicaSet deployment/csi-provisioner Scaled up replica set csi-provisioner-5649bc9474 to 1
7m23s Warning FailedCreate daemonset/csi-node Error creating: pods "csi-node-" is forbidden: error looking up service account default/csi-ozone: serviceaccount "csi-ozone" not found
7m22s Normal Scheduled pod/csi-node-nbfnw Successfully assigned default/csi-node-nbfnw to hadoop104
7m22s Normal Scheduled pod/csi-provisioner-5649bc9474-n5jf2 Successfully assigned default/csi-provisioner-5649bc9474-n5jf2 to hadoop103
7m22s Normal SuccessfulCreate replicaset/csi-provisioner-5649bc9474 Created pod: csi-provisioner-5649bc9474-n5jf2
7m22s Normal Scheduled pod/csi-node-c97fz Successfully assigned default/csi-node-c97fz to hadoop103
7m22s Normal SuccessfulCreate daemonset/csi-node Created pod: csi-node-c97fz
7m22s Normal SuccessfulCreate daemonset/csi-node Created pod: csi-node-nbfnw
7m14s Normal Pulling pod/csi-node-c97fz Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2"
7m14s Normal Pulling pod/csi-provisioner-5649bc9474-n5jf2 Pulling image "quay.io/k8scsi/csi-provisioner:v1.0.1"
7m13s Normal Pulling pod/csi-node-nbfnw Pulling image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2"
6m56s Warning Unhealthy pod/om-0 Liveness probe failed: dial tcp 10.244.1.7:9862: connect: connection refused
6m56s Normal Killing pod/om-0 Container om failed liveness probe, will be restarted
6m55s Normal Created pod/om-0 Created container om
6m55s Normal Started pod/om-0 Started container om
6m55s Normal Pulled pod/om-0 Container image "apache/ozone:1.1.0" already present on machine
6m48s Normal Pulled pod/csi-provisioner-5649bc9474-n5jf2 Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.0.1"
6m48s Normal Started pod/csi-provisioner-5649bc9474-n5jf2 Started container ozone-csi
6m48s Normal Created pod/csi-provisioner-5649bc9474-n5jf2 Created container ozone-csi
6m48s Normal Pulled pod/csi-provisioner-5649bc9474-n5jf2 Container image "apache/ozone:1.1.0" already present on machine
6m48s Normal Started pod/csi-provisioner-5649bc9474-n5jf2 Started container csi-provisioner
6m48s Normal Created pod/csi-provisioner-5649bc9474-n5jf2 Created container csi-provisioner
6m45s Normal Pulled pod/csi-node-nbfnw Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2"
6m44s Normal Started pod/csi-node-nbfnw Started container driver-registrar
6m44s Normal Started pod/csi-node-nbfnw Started container csi-node
6m44s Normal Created pod/csi-node-nbfnw Created container csi-node
6m44s Normal Created pod/csi-node-nbfnw Created container driver-registrar
6m44s Normal Pulled pod/csi-node-nbfnw Container image "apache/ozone:1.1.0" already present on machine
6m25s Normal Pulled pod/csi-node-c97fz Successfully pulled image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2"
6m25s Normal Pulled pod/csi-node-c97fz Container image "apache/ozone:1.1.0" already present on machine
6m25s Normal Started pod/csi-node-c97fz Started container csi-node
6m25s Normal Created pod/csi-node-c97fz Created container csi-node
6m17s Normal Created pod/csi-node-c97fz Created container driver-registrar
6m17s Normal Pulled pod/csi-node-c97fz Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.0.2" already present on machine
6m17s Normal Started pod/csi-node-c97fz Started container driver-registrar
6m3s Normal Provisioning persistentvolumeclaim/ozone-csi-test-webserver External provisioner is provisioning volume for claim "default/ozone-csi-test-webserver"
6m3s Normal ScalingReplicaSet deployment/ozone-csi-test-webserver Scaled up replica set ozone-csi-test-webserver-7cbdc5d65c to 1
6m3s Normal SuccessfulCreate replicaset/ozone-csi-test-webserver-7cbdc5d65c Created pod: ozone-csi-test-webserver-7cbdc5d65c-dpzhc
6m3s Normal ExternalProvisioning persistentvolumeclaim/ozone-csi-test-webserver waiting for a volume to be created, either by external provisioner "org.apache.hadoop.ozone" or manually created by system administrator
6m2s Warning FailedScheduling pod/ozone-csi-test-webserver-7cbdc5d65c-dpzhc pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
6m1s Normal ProvisioningSucceeded persistentvolumeclaim/ozone-csi-test-webserver Successfully provisioned volume pvc-cd01c58d-793f-41ce-9e12-057ade02e07c
5m59s Normal Scheduled pod/ozone-csi-test-webserver-7cbdc5d65c-dpzhc Successfully assigned default/ozone-csi-test-webserver-7cbdc5d65c-dpzhc to hadoop104
97s Warning FailedMount pod/ozone-csi-test-webserver-7cbdc5d65c-dpzhc Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot default-token-l9lng]: timed out waiting for the condition
94s Warning FailedMount pod/ozone-csi-test-webserver-7cbdc5d65c-dpzhc MountVolume.SetUp failed for volume "pvc-cd01c58d-793f-41ce-9e12-057ade02e07c" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Unknown desc =

Could the status be "Running" if I run "kubectl get pod freebox"?

Apply the following YAML file into a Kubernetes cluster:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
Could the status be "Running" if I run kubectl get pod freebox? Why?
Formatting errors aside: no, the pod won't reach Running status:
controlplane $ kubectl get pods freebox
NAME READY STATUS RESTARTS AGE
freebox 0/1 CrashLoopBackOff 3 81s
Because if you look at the Dockerfile of busybox, the CMD is "sh", which completes immediately, so the pod gets restarted (the default restart policy is Always):
https://hub.docker.com/layers/busybox/library/busybox/latest/images/sha256-bc02457f8f5a4a3cd931028ec76c7468cfa8b44d7d89c4a91df1fd82285da681?context=explore
ADD file ... in /   708.51 KB
CMD ["sh"]
See the pod's describe output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/freebox to node01
Normal Pulled 7s (x2 over 8s) kubelet, node01 Container image "busybox:latest" already present on machine
Normal Created 6s (x2 over 7s) kubelet, node01 Created container busybox
Normal Started 6s (x2 over 7s) kubelet, node01 Started container busybox
Warning BackOff 5s (x2 over 6s) kubelet, node01 Back-off restarting failed container
The busybox image needs a long-running command to stay running.
Add a command to the busybox container in the .spec.containers section:
apiVersion: v1
kind: Pod
metadata:
  name: freebox
spec:
  containers:
  - name: busybox
    command:
    - sleep
    - "4800"
    image: busybox:latest
    imagePullPolicy: IfNotPresent
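The effect can be illustrated on a plain Linux box; this is only an analogy, not the actual kubelet behaviour:

```shell
# With no TTY attached, sh's stdin is at EOF, so the shell exits
# immediately; in a pod, the Always restart policy then restarts the
# container over and over, producing CrashLoopBackOff.
sh </dev/null
echo "sh with closed stdin exited immediately, status $?"

# A long-running command such as the sleep above keeps the container's
# main process alive; here timeout has to kill it because it does not
# exit on its own:
timeout 1 sleep 4800 || echo "sleep was still running after 1 second"
```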

Installing kubernetes-dashboard with helm fails

I've just created a new Kubernetes cluster. The only things I have done beyond setting up the cluster are installing Tiller using helm init and installing the Kubernetes dashboard through helm install stable/kubernetes-dashboard.
The helm install command seems to be successful and helm ls outputs:
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
exhaling-ladybug 1 Thu Oct 24 16:56:49 2019 DEPLOYED kubernetes-dashboard-1.10.0 1.10.1 default
However after waiting a few minutes the deployment is still not ready.
Running kubectl get pods shows that the pod's status as CrashLoopBackOff.
NAME READY STATUS RESTARTS AGE
exhaling-ladybug-kubernetes-dashboard 0/1 CrashLoopBackOff 10 31m
The description for the pod shows the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31m default-scheduler Successfully assigned default/exhaling-ladybug-kubernetes-dashboard to nodes-1
Normal Pulling 31m kubelet, nodes-1 Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Pulled 31m kubelet, nodes-1 Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
Normal Started 30m (x4 over 31m) kubelet, nodes-1 Started container kubernetes-dashboard
Normal Pulled 30m (x4 over 31m) kubelet, nodes-1 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 30m (x5 over 31m) kubelet, nodes-1 Created container kubernetes-dashboard
Warning BackOff 107s (x141 over 31m) kubelet, nodes-1 Back-off restarting failed container
And the logs show the following panic message
panic: secrets is forbidden: User "system:serviceaccount:default:exhaling-ladybug-kubernetes-dashboard" cannot create resource "secrets" in API group "" in the namespace "kube-system"
Am I doing something wrong? Why is it trying to create a secret somewhere it cannot?
Is it possible to setup without giving the dashboard account cluster-admin permissions?
Check this out:
https://akomljen.com/installing-kubernetes-dashboard-per-namespace/
You can create your own roles if you want to.
Below I have assumed the default namespace; if yours is different, replace it with your own:
kubectl create serviceaccount exhaling-ladybug-kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=default:exhaling-ladybug-kubernetes-dashboard
Based on the error you have posted, what is happening is:
1. Helm is trying to install the dashboard, but it is being installed into the namespace you provided (default), while the dashboard expects to run in kube-system.
To solve this, either:
1. create roles scoped to the namespace you are installing into (by default: default), or
2. install the Helm chart into the namespace the chart expects; in your case:
helm install stable/kubernetes-dashboard --name=kubernetes-dashboard --namespace=kube-system
Try creating a ClusterRoleBinding:
kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
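To address the original question of avoiding cluster-admin: the panic only complains about creating secrets in kube-system, so a namespaced Role plus RoleBinding should be enough. A sketch (resource names are assumptions based on the release above; the dashboard may also need configmap access, included here):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-minimal        # hypothetical name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-minimal
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: exhaling-ladybug-kubernetes-dashboard
  namespace: default
roleRef:
  kind: Role
  name: dashboard-minimal
  apiGroup: rbac.authorization.k8s.io
```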

Error response from daemon: Get https://armdocker.rnd.se/v1/_ping: Not Found

I am using Kubernetes version 1.10.
I am trying to pull an image from a local docker repo. I already have the correct secret created.
[root@node1 ~]# kubectl get secret
NAME TYPE DATA AGE
arm-docker kubernetes.io/dockerconfigjson 1 10m
Checking the secret in detail gives me the correct auth token
[root@node1 ~]# kubectl get secret arm-docker --output="jsonpath={.data.\.dockerconfigjson}" | base64 -d
{"auths":{"armdocker.rnd.se":{"username":"<MY-USERNAME>","password":"<MY-PASSWORD>","email":"<MY-EMAIL>","auth":"<CORRECT_AUTH_TOKEN>"}}}
But when I create a Pod, I'm getting the following error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13s default-scheduler Successfully assigned authorization-backend-deployment-8fd5fc8d4-msxvd to node6
Normal SuccessfulMountVolume 13s kubelet, node6 MountVolume.SetUp succeeded for volume "default-token-w7vlf"
Normal BackOff 4s (x4 over 10s) kubelet, node6 Back-off pulling image "armdocker.rnd.se/proj/authorization_backend:3.6.15"
Warning Failed 4s (x4 over 10s) kubelet, node6 Error: ImagePullBackOff
Normal Pulling 1s (x2 over 12s) kubelet, node6 pulling image "armdocker.rnd.se/proj/authorization_backend:3.6.15"
Warning Failed 1s (x2 over 12s) kubelet, node6 Failed to pull image "armdocker.rnd.se/proj/authorization_backend:3.6.15": rpc error: code = Unknown desc = Error response from daemon: Get https://armdocker.rnd.se/v1/_ping: Not Found
Warning Failed 1s (x2 over 12s) kubelet, node6 Error: ErrImagePull
Why is it looking for /v1/_ping? Can I disable this somehow?
I'm unable to understand what the problem is here.
Once you've defined your secret, you need to reference it in your pod spec (you didn't mention whether you did):
kind: Pod
...
spec:
  imagePullSecrets:
  - name: arm-docker

Configure Kubernetes on Azure ACS to pull images from Artifactory

I'm trying to configure Kubernetes to pull images from our private Artifactory Docker repo.
First I configured a secret with kubectl:
kubectl create secret docker-registry artifactorysecret --docker-server=ourcompany.jfrog.io/path/list/docker-repo/ --docker-username=artifactory-user --docker-password=artipwd --docker-email=myemail
After creating a pod using kubectl with
apiVersion: v1
kind: Pod
metadata:
  name: base-infra
spec:
  containers:
  - name: api-gateway
    image: api-gateway
  imagePullSecrets:
  - name: artifactorysecret
I get a "ImagePullBackOff" error in Kubernetes:
3m 3m 1 default-scheduler Normal
Scheduled Successfully assigned consort-base-infra to k8s-agent-ab2f29b2-2
3m 0s 5 kubelet, k8s-agent-ab2f29b2-2 spec.containers{api-gateway} Normal
Pulling pulling image "api-gateway"
2m <invalid> 5 kubelet, k8s-agent-ab2f29b2-2 spec.containers{api-gateway} Warning
Failed Failed to pull image "api-gateway": rpc error: code = 2 desc = Error: image library/api-gateway:latest not found
2m <invalid> 5 kubelet, k8s-agent-ab2f29b2-2 Warning
FailedSync Error syncing pod, skipping: failed to "StartContainer" for "api-gateway" with ErrImagePull: "rpc error: code = 2 desc = Error: image library/api-gateway:latest not found"
2m <invalid> 17 kubelet, k8s-agent-ab2f29b2-2 spec.containers{api-gateway} Normal BackOff
Back-off pulling image "api-gateway"
2m <invalid> 17 kubelet, k8s-agent-ab2f29b2-2 Warning FailedSync
Error syncing pod, skipping: failed to "StartContainer" for "api-gateway" with ImagePullBackOff: "Back-off pulling image \"api-gateway\""
There is of course a latest version in the repo. I don't know what I'm missing here. It seems Kubernetes is able to log in to the repo...
OK, I found out how to connect to Artifactory, thanks to Pull image Azure Container Registry - Kubernetes.
There are two things to pay attention to:
1) In the secret definition, don't forget the https:// in the --docker-server attribute:
kubectl create secret docker-registry regsecret --docker-server=https://our-repo.jfrog.io --docker-username=myuser --docker-password=<your-pword> --docker-email=<your-email>
2) In the deployment descriptor, use the full image path and specify the secret (or append it to the default ServiceAccount):
apiVersion: v1
kind: Pod
metadata:
  name: consort-base-infra-art
spec:
  containers:
  - name: api-gateway
    image: our-repo.jfrog.io/api-gateway
  imagePullSecrets:
  - name: regsecret
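Regarding appending the secret to the default ServiceAccount mentioned in 2): instead of listing imagePullSecrets in every pod spec, the secret can be attached once to the account, and pods using that account inherit it. A sketch, assuming the default ServiceAccount in the default namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: regsecret
```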