Passing a private repository image through a Kubernetes YAML file - kubernetes

We have tried to set up a HiveMQ manifest file. We have the HiveMQ Docker image in our private repository.
Step 1: I logged into the private repository:
docker login "private repo name"
It was successful.
After that, I tried to create a manifest file for it, like below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hivemq
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: hivemq1
    spec:
      containers:
      - env:
          # xxxxx some environment values I have passed
        name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:
        ports:
        - containerPort: 1883
It creates successfully, but I am getting the issues below. Could anyone please help me solve this?
hivemq-4236597916-mkxr4 0/1 ImagePullBackOff 0 1h
Logs:
Error from server (BadRequest): container "hivemq16" in pod "hivemq16-1341290525-qtkhb" is waiting to start: InvalidImageName
Sometimes I get this kind of issue instead:
Error from server (BadRequest): container "hivemq" in pod "hivemq-4236597916-mkxr4" is waiting to start: trying and failing to pull image

In order to use a private Docker registry with Kubernetes, it's not enough to run docker login.
You need to add a Kubernetes docker-registry Secret with your credentials, as described here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/. That article also covers the imagePullSecrets setting you have to add to your deployment YAML file to reference that Secret.
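As a rough sketch (the secret name regcred and the credential placeholders below are illustrative, not taken from your setup):
kubectl create secret docker-registry regcred \
  --docker-server=privatereponame:portnumber \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
Then reference it in the pod spec of the deployment and give the image a proper tag (an image reference ending in a bare colon, as in your manifest, is likely what produces the InvalidImageName error):
    spec:
      containers:
      - name: hivemq
        image: privatereponame:portnumber/directoryname/hivemq:<tag>
        ports:
        - containerPort: 1883
      imagePullSecrets:
      - name: regcred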

I just fixed this on my machine: kubectl v1.9.0 failed to create the secret properly. Upgrading to v1.9.1, deleting the secret, and recreating it resolved the issue for me. https://github.com/kubernetes/kubernetes/issues/57427
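For reference, that cycle is just the following (assuming the secret is named regcred):
kubectl delete secret regcred
followed by recreating it with kubectl create secret docker-registry as sketched above.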

Related

How to configure microk8s Kubernetes to use private containers on https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The secure registry portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this repo from a console doing a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something that makes Docker Hub work, since public containers work. Within this file, I found:
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
      imagePullSecrets:
      - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
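If the pull still fails after that, it can be worth confirming that the secret exists in the same namespace as the deployment and is of type kubernetes.io/dockerconfigjson, e.g.:
microk8s kubectl get secret regcred --output=yaml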

Error while trying to do a kubernetes deployment from a private registry

I am trying to deploy a Docker image from a private repository using Kubernetes, and I am seeing the error below:
Waiting: CrashLoopBackoff
You need to pass an image pull secret to Kubernetes:
1. Get the docker login JSON.
2. Create a k8s secret with this JSON (a sketch of these two steps follows the manifest below).
3. Reference the secret from a pod:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: k8s-secret-name
Docs: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
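Steps 1 and 2 can be done roughly like this (a sketch; k8s-secret-name matches the manifest above, and the registry address is a placeholder):
docker login <your-registry>    # step 1: writes credentials to ~/.docker/config.json
kubectl create secret generic k8s-secret-name \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson    # step 2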
Usually, the bad state caused by a failed image pull is ImagePullBackOff, so I suggest running kubectl get events to check the root cause.
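For example (the pod name is just illustrative):
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe pod private-reg    # the Events section shows the pull failure reason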
The issue got resolved.
I deleted the registry, re-created it, and tried deploying a different Docker image. I could deploy it successfully and also test the deployed application.
Regards,
Ravikiran.M

ErrImagePull on kubernetes

I get this error when I deploy a pod.
The image lies in Google Container Registry, within the same project as the cluster.
I can pull the image from the registry on my local computer.
I cannot pull the image if I SSH into the instance.
The docs state that this should work out of the box. I checked, and storage read access is indeed there.
Here's the config:
apiVersion: v1
kind: ReplicationController
metadata:
  name: luigi
spec:
  replicas: 1
  selector:
    app: luigi
  template:
    metadata:
      name: luigi
      labels:
        app: luigi
    spec:
      containers:
      - name: scheduler
        image: eu.gcr.io/bi/luigi/scheduler:latest
        command: ['/usr/src/app/run_scheduler.sh']
      - name: worker
        image: eu.gcr.io/bi/luigi/scheduler:latest
        command: ['/usr/src/app/run_worker.sh']
Describe gives me:
Failed to pull image "eu.gcr.io/bi/luigi/scheduler:latest": rpc error: code = Unknown desc = Error response from daemon: repository eu.gcr.io/bi/luigi/scheduler not found: does not exist or no pull access
From the error message, it seems to be caused by the absence of credentials to download the image from the Docker registry. Please note that these access credentials are "client specific". In this case, Kubernetes (the kubelet, to be specific) is the client, and it needs the imagePullSecret to present the necessary credentials.
Please add the imagePullSecret with the required credentials and it should work.
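A minimal sketch of creating such a secret for GCR with a service account key (the key file name key.json and the secret name gcr-pull are assumptions, not from the question):
kubectl create secret docker-registry gcr-pull \
  --docker-server=eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=<any-valid-email>
Then add imagePullSecrets with name gcr-pull to the pod template spec of the ReplicationController.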

ImagePullSecrets GCR

I am having an issue configuring GCR with imagePullSecrets in my deployment.yaml file. It cannot download the container due to a permission error:
Failed to pull image "us.gcr.io/optimal-jigsaw-185903/syncope-deb": rpc error: code = Unknown desc = Error response from daemon: denied: Permission denied for "latest" from request "/v2/optimal-jigsaw-185903/syncope-deb/manifests/latest".
I am sure that I am doing something wrong, but I followed this tutorial (and others like it) with still no luck:
https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/
The pod logs are equally useless:
"syncope-deb" in pod "syncope-deployment-64479cdcf5-cng57" is waiting to start: trying and failing to pull image
My deployment looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: syncope-deployment
  namespace: default
spec:
  # 3 Pods should exist at all times.
  replicas: 1
  # Keep record of 2 revisions for rollback
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: syncope-deb
    spec:
      imagePullSecrets:
      - name: mykey
      containers:
      - name: syncope-deb
        # Run this image
        image: us.gcr.io/optimal-jigsaw-185903/syncope-deb
        ports:
        - containerPort: 9080
And I have a key in my default namespace called "mykey" that looks like this (secure data edited out):
{"https://gcr.io":{"username":"_json_key","password":"{\n \"type\": \"service_account\",\n \"project_id\": \"optimal-jigsaw-185903\",\n \"private_key_id\": \"EDITED_TO_PROTECT_THE_INNOCENT\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\EDITED_TO_PROTECT_THE_INNOCENT\\n-----END PRIVATE KEY-----\\n\",\n \"client_email\": \"bobs-service#optimal-jigsaw-185903.iam.gserviceaccount.com\",\n \"client_id\": \"109145305665697734423\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/bobs-service%40optimal-jigsaw-185903.iam.gserviceaccount.com\"\n}","email":"redfalconinc#gmail.com","auth":"EDITED_TO_PROTECT_THE_INNOCENT"}}
I even loaded that user up with the permissions of:
Editor
Cloud Container Builder
Cloud Container Builder Editor
Service Account Actor
Service Account Admin
Storage Admin
Storage Object Admin
Storage Object Creator
Storage Object Viewer
Any help would be appreciated as I am spending a lot of time on seemingly a very simple problem.
The issue is most likely caused by using a secret of type dockerconfigjson while having valid dockercfg data in it. The kubectl command changed at some point in a way that causes this.
Can you check whether it is marked as dockercfg or dockerconfigjson, and then check whether it is valid dockerconfigjson?
The JSON you have provided is dockercfg (not the new format).
See https://github.com/kubernetes/kubernetes/issues/12626#issue-100691532 for info about the formats
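One way to check (the secret name mykey comes from the deployment above):
kubectl get secret mykey -o jsonpath='{.type}'    # should print kubernetes.io/dockerconfigjson
If the data inside is the old dockercfg format, deleting the secret and recreating it with kubectl create secret docker-registry (using _json_key as the username and the service account JSON as the password) produces a valid dockerconfigjson secret.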

minikube can't access remote Harbor registry

Even after starting minikube with minikube start --insecure-registry "<HARBOR_HOST_IP>", when I tried to run a deployment YAML file that includes an image path like <HARBOR_HOST_IP>/app/server, I got this error:
Failed to pull image "[HARBOR_IP]/app/server": rpc error: code = 2 desc = Error response from daemon: {"message":"Get https://[HARBOR_IP]/v1/_ping: dial tcp [HARBOR_IP]:443: getsockopt: connection refused"}
Error syncing pod
How do I set insecure-registry correctly in minikube?
Edit
Tag the current Docker image with port 80:
docker tag server <HARBOR_HOST_IP>:80/app/server
Push it to the Harbor registry server:
docker push <HARBOR_HOST_IP>:80/app/server
Unfortunately, the remote Harbor host denied it:
The push refers to a repository [<HARBOR_HOST_IP>:80/app/server]
00491a929c2e: Preparing
ec4cc3fab4be: Preparing
e7d3ac95d998: Preparing
8bb050c3d78d: Preparing
4aa9e88e4148: Preparing
978b58726b5e: Waiting
2b0fb280b60d: Waiting
denied: requested access to the resource is denied
Even after adding <HARBOR_HOST_IP>:80 to the local insecure-registries list.
It works if you always specify port 80 when you communicate with your Docker registry that runs on port 80.
Build an image:
docker build -t <REGISTRY_IP>:80/<name> <path>
Push it to registry:
docker push <REGISTRY_IP>:80/<name>
Start minikube with this insecure registry:
minikube start --insecure-registry <REGISTRY_IP>:80
Create deployment:
kubectl create -f test.yaml
where test.yaml is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: 192.168.1.11:80/<name>
        name: test
        imagePullPolicy: Always
Have you added a secret in Kubernetes?
Please refer to this link to add a secret: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
Then use the secret in your Kubernetes YAML:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred