Skaffold and GitLab image repositories - Kubernetes

This is my Skaffold config.
apiVersion: skaffold/v2beta19
kind: Config
metadata:
  name: my-api
build:
  local:
    useDockerCLI: true
  artifacts:
    - image: registry.gitlab.com/lodiee/my-api
      docker:
        buildArgs:
          DATABASE_HOST: '{{.DATABASE_HOST}}'
deploy:
  helm:
    releases:
      - name: my-api
        artifactOverrides:
          image: registry.gitlab.com/lodiee/my-api
        imageStrategy:
          helm: {}
        remoteChart: bitnami/node
        setValueTemplates:
          image.tag: '{{.DIGEST_HEX}}'
        setValues:
          image.repository: registry.gitlab.com/lodiee/my-api
When I run skaffold dev, everything works fine until I see an ImagePullBackOff error in the pod events. Here is a partial output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 102s default-scheduler Successfully assigned default/my-api-node-55cf68d989-7vltw to kind-control-plane
Normal BackOff 21s (x5 over 99s) kubelet Back-off pulling image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c"
Warning Failed 21s (x5 over 99s) kubelet Error: ImagePullBackOff
Normal Pulling 6s (x4 over 101s) kubelet Pulling image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c"
Warning Failed 4s (x4 over 100s) kubelet Failed to pull image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c": failed to resolve reference "docker.io/registry.gitlab.com/lodiee/my-api:60ef7d4a463e269de9e176afbdd2362fd50870e494ff7a8abf25ece16c0d100c": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
As you can see from the output, Kubernetes adds a docker.io prefix in front of my image repository address, so it can't find the image there.
Is this normal behavior? Also, this is a local development setup that I launch with skaffold dev; why does Kubernetes try to pull from that address?
Thank you.
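For what it's worth, the docker.io prefix most likely comes from the chart rather than from Kubernetes itself: the bitnami/node chart assembles the image reference as {registry}/{repository}:{tag}, and image.registry defaults to docker.io. A likely (untested) fix is to split the registry host out of image.repository and override image.registry as well, e.g. in the deploy section:

```yaml
deploy:
  helm:
    releases:
      - name: my-api
        remoteChart: bitnami/node
        setValues:
          image.registry: registry.gitlab.com  # chart default is docker.io
          image.repository: lodiee/my-api      # repository path without the registry host
        setValueTemplates:
          image.tag: '{{.DIGEST_HEX}}'
```

With that split, the chart should render the image as registry.gitlab.com/lodiee/my-api:&lt;tag&gt; instead of prefixing docker.io.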

Related

kubernetes cannot pull a public image

Kubernetes cannot pull a public image. Standard images like nginx download successfully, but my pet project's image does not. I'm using minikube to launch the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-deploumnet
  labels:
    app: api-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway
          image: creatorsprodhouse/api-gateway:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
When I try to create the deployment, I get an error that Kubernetes cannot download my public image.
$ kubectl get pods
result:
NAME READY STATUS RESTARTS AGE
api-gateway-deploumnet-599c784984-j9mf2 0/1 ImagePullBackOff 0 13m
api-gateway-deploumnet-599c784984-qzklt 0/1 ImagePullBackOff 0 13m
api-gateway-deploumnet-599c784984-csxln 0/1 ImagePullBackOff 0 13m
$ kubectl logs api-gateway-deploumnet-599c784984-csxln
result
Error from server (BadRequest): container "api-gateway" in pod "api-gateway-deploumnet-86f6cc5b65-xdx85" is waiting to start: trying and failing to pull image
What could be the problem? The standard images are downloading but my public one is not. Any help would be appreciated.
EDIT 1
$ kubectl describe pod api-gateway-deploumnet-599c784984-csxln
result:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m22s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-mq4td to minikube
Warning Failed 3m8s kubelet Failed to pull image "creatorsprodhouse/api-gateway:latest": rpc error: code = Unknown desc = context deadline exceeded
Warning Failed 3m8s kubelet Error: ErrImagePull
Normal BackOff 3m7s kubelet Back-off pulling image "creatorsprodhouse/api-gateway:latest"
Warning Failed 3m7s kubelet Error: ImagePullBackOff
Normal Pulling 2m53s (x2 over 8m21s) kubelet Pulling image "creatorsprodhouse/api-gateway:latest"
EDIT 2
If I pull the Docker image directly on the host, it works fine:
$ docker pull creatorsprodhouse/api-gateway:latest
result:
Digest: sha256:e664a9dd9025f80a3dd60d157ce1464d4df7d0f8a00538e6a137d44f9f9f12aa
Status: Downloaded newer image for creatorsprodhouse/api-gateway:latest
docker.io/creatorsprodhouse/api-gateway:latest
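Since the image pulls fine on the host but not inside minikube, one hedged workaround (assuming a reasonably recent minikube that has the `image load` subcommand) is to copy the locally pulled image into the cluster instead of pulling it from inside:

```shell
# Copy the host's local image into the minikube node's container runtime
minikube image load creatorsprodhouse/api-gateway:latest
```

The deployment would then need imagePullPolicy set to IfNotPresent (or Never) so the kubelet uses the loaded image rather than attempting a remote pull again.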
EDIT 3
After advice to restart minikube
$ minikube stop
$ minikube delete --purge
$ minikube start --cni=calico
I started the pods.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m28s default-scheduler Successfully assigned default/api-gateway-deploumnet-849899786d-bkr28 to minikube
Warning FailedCreatePodSandBox 4m27s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" network for pod "api-gateway-deploumnet-849899786d-bkr28": networkPlugin cni failed to set up pod "api-gateway-deploumnet-849899786d-bkr28_default" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" network for pod "api-gateway-deploumnet-849899786d-bkr28": networkPlugin cni failed to teardown pod "api-gateway-deploumnet-849899786d-bkr28_default" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.34 -j CNI-57e7da7379b524635074e6d0 -m comment --comment name: "crio" id: "7e112c92e24199f268ec9c6f3a6db69c2572c0751db9fd57a852d1b9b412e0a1" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-57e7da7379b524635074e6d0':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
I could not solve the problem with the suggested approaches. However, it worked when I ran minikube with a different driver:
$ minikube start --driver=none
--driver=none means the cluster runs directly on your host, instead of inside Docker as with the default --driver=docker.
It is generally better to run minikube with --driver=docker, as it is safer and easier, but that didn't work for me because my images could not be downloaded. For me personally --driver=none is acceptable, although it is a bit more dangerous.
In general, if anyone knows what the actual problem is, please answer my question. In the meantime, you can try running the minikube cluster on your host with the command above.
In any case, thank you very much for your attention!

kubectl ImagePullBackOff due to secret

I'm installing KubeVirt in minikube. Initially kubevirt-operator.yaml fails with ImagePullBackOff. After I added a secret in the yaml
imagePullSecrets:
  - name: regcred
containers:
all my virt-operator* pods started to run, but the virt-api* pods still show ImagePullBackOff.
The error comes out as
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned kubevirt/virt-api-787487d9cd-t68qf to minikube
Normal Pulling 25m (x4 over 27m) kubelet Pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Warning Failed 25m (x4 over 27m) kubelet Failed to pull image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for us-ashburn-1.ocir.io/xxx/virt-api, repository does not exist or may require 'docker login': denied: Anonymous users are only allowed read access on public repos
Warning Failed 25m (x4 over 27m) kubelet Error: ErrImagePull
Warning Failed 25m (x6 over 27m) kubelet Error: ImagePullBackOff
Normal BackOff 2m26s (x106 over 27m) kubelet Back-off pulling image "us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0"
Manually, I can pull the same image after docker login. Any help would be much appreciated. Thanks.
This Docker image appears to be in a private registry (Oracle's OCIR), and I assume the regcred secret is not correct. Can you log in there with docker login? If so, you can create the regcred secret like this:
$ kubectl create secret docker-registry regcred --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<oci-auth-token>' --docker-email='<email-address>'
Also check this Oracle tutorial: https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-and-registry/index.html
There you can find the steps for adding the secret values to the cluster.
If you are using a private registry, check that your secret exists and is correct. The secret must also be in the same namespace as the pod.
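For illustration, a minimal pod referencing such a secret might look like this (the pod name is a placeholder; note the secret has to live in the same namespace as the pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  namespace: kubevirt      # regcred must exist in this namespace too
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: app
      image: us-ashburn-1.ocir.io/xxx/virt-api:v0.54.0
```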
Your minikube is a VM, not your localhost. Try this:
Open a terminal and run:
eval $(minikube docker-env)
docker build -t <your-image> .
kubectl create -f deployment.yaml
This is only valid for that terminal session; if you close it, open a new terminal and run eval $(minikube docker-env) again.
eval $(minikube docker-env) points your Docker CLI at minikube's Docker daemon, so the image is built inside minikube.
Also, try running docker login on all nodes.
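One caveat worth adding to the approach above (an assumption based on how the kubelet resolves images): when the image exists only inside minikube's Docker daemon, the pod spec must not force a remote pull, e.g.:

```yaml
containers:
  - name: api-gateway
    image: creatorsprodhouse/api-gateway:latest
    imagePullPolicy: IfNotPresent  # or Never; Always would try the remote registry again
```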
There is also a lengthy blog post describing how to debug image pull back-off in depth here
Hey, if you look here, I think you can find some helpful documentation.
What they do is upload the Docker config file, which holds the login credentials, as a secret, and then refer to that secret in the deployment.
You could try to follow those steps and do something similar. Let me know if it works.
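As a hedged sketch of that approach: the Docker config file can be stored as a kubernetes.io/dockerconfigjson secret (the data value below is a placeholder for the base64-encoded file):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded contents of ~/.docker/config.json>
```

The pod or deployment then references it via imagePullSecrets, as in the other answers here.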

Migrate helm chart from public repository to private

I'm trying to deploy some services using helm chart on AWS EKS.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install postgresql --version 8.6.4 bitnami/postgresql
After I run the helm command, the pod is not created because the connection to docker.io is blocked in the EKS cluster.
$ kubectl describe po
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned airflow/postgresql-postgresql-0 to xxxxxx.compute.internal
Normal SuccessfulAttachVolume 12m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-e3e438ef-50a4-4c19-a788-3d3755b89bae"
Normal Pulling 9m50s (x4 over 12m) kubelet Pulling image "docker.io/bitnami/postgresql:11.7.0-debian-10-r26"
Warning Failed 9m35s (x4 over 11m) kubelet Failed to pull image "docker.io/bitnami/postgresql:11.7.0-debian-10-r26": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 9m35s (x4 over 11m) kubelet Error: ErrImagePull
Warning Failed 9m23s (x6 over 11m) kubelet Error: ImagePullBackOff
Normal BackOff 113s (x36 over 11m) kubelet Back-off pulling image "docker.io/bitnami/postgresql:11.7.0-debian-10-r26"
How can I change the repository domain from docker.io to my ECR repository?
I want to push this chart's images to my ECR repository and make the chart pull the image from ECR instead of docker.io.
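To get the image into ECR in the first place, one possible approach (sketched here with hypothetical placeholders for <aws-account-id> and <region>) is to pull the image on a machine that can reach docker.io, retag it, and push it to ECR:

```shell
# Pull the upstream image (on a host that can reach docker.io)
docker pull docker.io/bitnami/postgresql:11.7.0-debian-10-r26

# Create the ECR repository if it does not exist yet
aws ecr create-repository --repository-name bitnami/postgresql --region <region>

# Retag for ECR and push
docker tag docker.io/bitnami/postgresql:11.7.0-debian-10-r26 \
  <aws-account-id>.dkr.ecr.<region>.amazonaws.com/bitnami/postgresql:11.7.0-debian-10-r26
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/bitnami/postgresql:11.7.0-debian-10-r26
```

After that, the chart's image.registry value can be pointed at the ECR domain, as the answer below describes.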
Try overriding the repository domain values in values.yaml of bitnami/postgresql using --set. Refer here for more details.
image:
  registry: docker.io
  repository: bitnami/postgresql
  tag: 11.10.0-debian-10-r24
You may override the image.registry value above as follows:
$ helm install postgresql \
    --set image.registry=your-registry \
    --set image.repository=bitnami/postgresql \
    --set image.tag=11.10.0-debian-10-r24 \
    --version 8.6.4 bitnami/postgresql

Pull an Image from a Private Registry fails - ImagePullBackOff

On our K8s worker node, I created a secret with the command below to pull images from our private (Nexus) registry:
kubectl create secret docker-registry regcred --docker-server=https://nexus-server/nexus/ --docker-username=admin --docker-password=password --docker-email=user@company.com
Then I created my-private-reg-pod.yaml on the worker node with the following contents:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: private-reg-container
      image: nexus-server:4546/ubuntu-16:version-1
  imagePullSecrets:
    - name: regcred
I created the pod with the command below:
kubectl create -f my-private-reg-pod.yaml
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 0/1 ImagePullBackOff 0 27m
kubectl describe pod test-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/test-pod to k8s-worker01
Warning Failed 26m (x6 over 28m) kubelet, k8s-worker01 Error: ImagePullBackOff
Normal Pulling 26m (x4 over 28m) kubelet, k8s-worker01 Pulling image "sonatype:4546/ubuntu-16:version-1"
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Failed to pull image "nexus-server:4546/ubuntu-16:version-1": rpc error: code = Unknown desc = Error response from daemon: Get https://nexus-server.domain.com/nexus/v2/ubuntu-16/manifests/ver-1: no basic auth credentials
Warning Failed 26m (x4 over 28m) kubelet, k8s-worker01 Error: ErrImagePull
Normal BackOff 3m9s (x111 over 28m) kubelet, k8s-worker01 Back-off pulling image "nexus-server:4546/ubuntu-16:version-1"
On the terminal, the Nexus login works:
docker login nexus-server:4546
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Is something missing in this setup?
Since my docker login to Nexus succeeded on the terminal, I deleted my secret and recreated it with:
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson
That worked.

Configure Kubernetes on Azure ACS to pull images from Artifactory

I am trying to configure Kubernetes to pull images from our private Artifactory Docker repo.
First I configured a secret with kubectl:
kubectl create secret docker-registry artifactorysecret --docker-server=ourcompany.jfrog.io/path/list/docker-repo/ --docker-username=artifactory-user --docker-password=artipwd --docker-email=myemail
After creating a pod using kubectl with
apiVersion: v1
kind: Pod
metadata:
  name: base-infra
spec:
  containers:
    - name: api-gateway
      image: api-gateway
  imagePullSecrets:
    - name: artifactorysecret
I get a "ImagePullBackOff" error in Kubernetes:
3m 3m 1 default-scheduler Normal Scheduled Successfully assigned consort-base-infra to k8s-agent-ab2f29b2-2
3m 0s 5 kubelet, k8s-agent-ab2f29b2-2 spec.containers{api-gateway} Normal Pulling pulling image "api-gateway"
2m <invalid> 5 kubelet, k8s-agent-ab2f29b2-2 spec.containers{api-gateway} Warning Failed Failed to pull image "api-gateway": rpc error: code = 2 desc = Error: image library/api-gateway:latest not found
2m <invalid> 5 kubelet, k8s-agent-ab2f29b2-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "api-gateway" with ErrImagePull: "rpc error: code = 2 desc = Error: image library/api-gateway:latest not found"
2m <invalid> 17 kubelet, k8s-agent-ab2f29b2-2 spec.containers{api-gateway} Normal BackOff Back-off pulling image "api-gateway"
2m <invalid> 17 kubelet, k8s-agent-ab2f29b2-2 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "api-gateway" with ImagePullBackOff: "Back-off pulling image \"api-gateway\""
There is, of course, a latest version in the repo. I don't know what I'm missing here. It seems Kubernetes is not even reaching our repo, since it is trying library/api-gateway:latest on Docker Hub...
OK - I found out how to connect to Artifactory thanks to Pull image Azure Container Registry - Kubernetes.
There are two things to pay attention to:
1) In the secret definition, don't forget https:// in the --docker-server attribute:
kubectl create secret docker-registry regsecret --docker-server=https://our-repo.jfrog.io --docker-username=myuser --docker-password=<your-pword> --docker-email=<your-email>
2) In the deployment descriptor, use the full image path and specify the secret (or append it to the default ServiceAccount):
apiVersion: v1
kind: Pod
metadata:
  name: consort-base-infra-art
spec:
  containers:
    - name: api-gateway
      image: our-repo.jfrog.io/api-gateway
  imagePullSecrets:
    - name: regsecret
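The "append it to the default ServiceAccount" option mentioned above can be sketched like this (untested; uses the same secret name as the answer), which saves you from adding imagePullSecrets to every pod in the namespace:

```shell
# Attach the pull secret to the namespace's default ServiceAccount
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regsecret"}]}'
```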