Kubernetes containerd - failed to pull image from private registry

I set up Kubernetes v1.20.1 with containerd instead of Docker. Now I am failing to pull images from my private registry (Harbor).
I already changed /etc/containerd/config.toml like this:
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.foo.com"]
      endpoint = ["https://registry.foo.com"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.foo.com"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.foo.com".auth]
        username = "admin"
        password = "Harbor12345"
But this did not work. The pull failed with the message:
Failed to pull image "registry.foo.com/library/myimage:latest": rpc error: code = Unknown
desc = failed to pull and unpack image "registry.foo.com/library/myimage:latest": failed to
resolve reference "registry.foo.com/library/myimage:latest": unexpected status code
[manifests latest]: 401 Unauthorized
My Harbor registry is available via HTTPS with a Let's Encrypt certificate, so HTTPS should not be the problem here.
Creating a docker-registry secret did not work either:
kubectl create secret docker-registry registry.foo.com --docker-server=https://registry.foo.com --docker-username=admin --docker-password=Harbor12345 --docker-email=info@foo.com
Can anybody give me an example of how to configure a private registry in Kubernetes with containerd?

Set imagePullSecrets in the pod/deployment specification:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: registry.foo.com
More info: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
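The secret named registry.foo.com has to exist in the pod's namespace before the pod is created. As a minimal sketch, reusing the values from the question, it could be created with:
kubectl create secret docker-registry registry.foo.com \
  --docker-server=registry.foo.com \
  --docker-username=admin \
  --docker-password=Harbor12345 \
  --namespace=default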

Create a file, put username:password in it, and get its base64 encoding:
touch pass.txt
nano pass.txt
# write it as username:password
base64 pass.txt
# output is the base64 string: cmxxxxxxxxyyyyyyCg==
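Note that pass.txt created this way ends with a newline, which then becomes part of the encoded string; an equivalent one-liner that avoids the trailing newline is:
echo -n 'username:password' | base64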
Edit /etc/containerd/config.toml (use auth = "..." instead of separate username/password):
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.foo.com"]
      endpoint = ["https://registry.foo.com"]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.foo.com".tls]
      insecure_skip_verify = true
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.foo.com".auth]
      auth = "cmxxxxxxxxyyyyyyCg=="
Restart containerd service:
sudo systemctl restart containerd.service
Pull image directly from node:
sudo crictl pull registry.foo.com/imageName:Tag
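crictl can also take the credentials directly, which is a quick way to confirm that the username/password work independently of config.toml (image name taken from the question):
sudo crictl pull --creds admin:Harbor12345 registry.foo.com/library/myimage:latest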

Related

Docker pull rate request limit from Kubernetes environment

I logged in using the docker login command on my machine.
Then I tried to run the kubectl command to apply a YAML file:
kubectl apply -f manifests/1_helloworld_deploy.yaml
but this failed with the error:
Warning Failed 20s (x2 over 35s) kubelet Failed to pull image "nginx:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:89ea560b277f54022cf0b2e718d83a9377095333f8890e31835f615922071ddc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
I already logged in with my Docker username and password, yet I'm still getting a pull rate error.
What should I do to make this work for the YAML file as well?
When you log in to Docker on a machine, the credentials are saved in a file named ~/.docker/config.json.
We need to explicitly instruct Kubernetes to use these credentials while pulling images.
For that, we create a secret from the contents of ~/.docker/config.json and reference it as imagePullSecrets in the YAML file.
The following is a sample procedure:
kubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=/root/.docker/config.json
In the pod spec, reference it as an image pull secret as follows:
apiVersion: v1
kind: Pod
metadata:
  name: boo
spec:
  containers:
  - name: boo
    image: busybox
  imagePullSecrets:
  - name: my-secret
Detailed Documentation
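If the pull still fails, it can help to confirm the secret really contains the expected credentials; one way to inspect it (secret name as created above):
kubectl get secret my-secret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode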

How to configure microk8s Kubernetes to use private containers on https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The "Secure registry" portion says Kubernetes does it one way (without indicating whether Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on Docker Hub:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this repo from a console by running a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something that makes Docker Hub work, since public containers work. Within this file, I found:
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
You have created the secret, but you then have to set up your deployment/pod to use that secret in order to pull the image. This can be achieved with imagePullSecrets, as described in the microk8s document you mentioned.
Since you already created your secret, you just have to reference it in your deployment:
...
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
      imagePullSecrets:
      - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
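Since microk8s runs containerd, you can also test the credentials from the node itself, independent of any manifest; a sketch, assuming your Docker Hub username and password:
microk8s ctr images pull --user <dockerhub-user>:<password> docker.io/johngrabner/py_blaw_service:v0.3.10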

Pod cannot pull image from private docker registry

I am having real trouble getting my pods to pull images from a private Docker registry that I have set up and can authenticate to. I can run docker login https://my.website.com/ and get Login Succeeded without having to enter my username and password, and I can run docker pull my.website.com:5000/human/forum and see all the layers being downloaded.
I use https://github.com/bazelbuild/rules_k8s#aliasing-eg-k8s_deploy where I specify the namespace to be "default".
I made sure to put "https://my.website.com:5000/v2/" (in lowercase) in the auths section of the Docker config file before I generated the regcred secret.
Notice that I specify the imagePullSecrets below:
# deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: angular-bazel-example-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: angular-bazel-example-prod
    spec:
      containers:
      - name: angular-bazel-example
        image: human/forum:dev
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred # Notice
I made sure to update my certificate authority certificates:
cp /etc/docker/certs.d/my.website.com\:5000/ca.crt /usr/local/share/ca-certificates/my.website.registry.com/
sudo update-ca-certificates
I can query the registry successfully with curl:
sudo curl --user testuser:testpassword --cacert /usr/local/share/ca-certificates/my.website.registry.com/ca.crt -X GET https://mywebsite.com:5000/v2/_catalog
> {"repositories":["human/forum"]}
sudo curl --user testuser:testpassword --cacert /usr/local/share/ca-certificates/mywebsite.registry.com/ca.crt -X GET https://mywebsite.com:5000/v2/human/forum/tags/list
> {"name":"a/repository","tags":["dev"]}
There must be a way to troubleshoot this but I don't know how.
One thing I am curious about is
kubectl describe pod my-first-pod...
...
Volumes:
  default-token-mtz9g:
    Type:  Secret
Where can I find this volume? I can't kubectl exec into a container because none is running, since the pod can't pull the image.
Do you have any ideas on how I could troubleshoot this?
Thank you!
Create a Kubernetes secret to access the custom registry. One way is to provide the server, user, and password manually. Documentation link.
kubectl create secret docker-registry $YOUR_REGISTRY_NAME --docker-server=https://$YOUR_SERVER_DNS/v2/ --docker-username=$YOUR_USER --docker-password=$YOUR_PASSWORD --docker-email=whatever@gmail.com --namespace=default
Then use it in your YAML:
...
imagePullSecrets:
- name: $YOUR_REGISTRY_NAME
Regarding the default-token that you see mounted: it belongs to the service account that your pod uses to talk to the Kubernetes API. You can find the service account by running kubectl get sa and kubectl describe sa $DEFAULT_SERVICE_ACCOUNT_ID. Find the token by running kubectl get secrets and kubectl describe secret $SECRET_ID. To clarify, this service account and token have nothing to do with the Docker registry unless you specify it. To include the registry in the service account, follow this guide.
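As a sketch of that last step, the secret can be attached to the namespace's default service account so that every pod in the namespace picks it up automatically (secret name regcred assumed, as in the question):
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'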

Kubernetes image pull from Nexus3 Docker host fails

We have created a Nexus3 private Docker registry on a CentOS machine, and the same IP details were updated in daemon.json under the Docker folder.
Docker pull and push work fine.
The same image fails in the image pull state when deploying to Kubernetes.
$ kubectl run deployname --image=nexus3privaterepo:port/image
We already created secret entries via $ kubectl create secret with the same user ID and password used for docker login -u userid -p passwd.
My problem is that the image pull from the Nexus3 Docker host is failing.
Please suggest how to verify the login via a Kubernetes command and how to resolve this image pull issue.
Thanks in advance.
When pulling from private repositories you need to specify an imagePullSecret, like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  # Specify the secret with your user's credentials
  imagePullSecrets:
  - name: regcred
You would then use kubectl apply -f. I am not actually sure you can use this with the imperative CLI version of running a deployment, but all the documentation on this can be found here.
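For completeness, the regcred secret referenced above could be created against the Nexus registry with the same credentials used for docker login; a sketch using the placeholders from the question:
kubectl create secret docker-registry regcred \
  --docker-server=nexus3privaterepo:port \
  --docker-username=userid \
  --docker-password=passwd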

Pulling Images from GCR into GKE

Today is my first day playing with GCR and GKE. So apologies if my question sounds childish.
So I have created a new registry in GCR. It is private. Using this documentation, I got hold of my Access Token using the command
gcloud auth print-access-token
#<MY-ACCESS_TOKEN>
I know that my username is oauth2accesstoken
On my local laptop when I try
docker login https://eu.gcr.io/v2
Username: oauth2accesstoken
Password: <MY-ACCESS_TOKEN>
I get:
Login Successful
So now it's time to create a docker-registry secret in Kubernetes.
I ran the below command:
kubectl create secret docker-registry eu-gcr-io-registry --docker-server='https://eu.gcr.io/v2' --docker-username='oauth2accesstoken' --docker-password='<MY-ACCESS_TOKEN>' --docker-email='<MY_EMAIL>'
And then my Pod definition looks like:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest
    ports:
    - containerPort: 8090
  imagePullSecrets:
  - name: eu-gcr-io-registry
But when I spin up the pod, I get the ERROR:
Warning Failed 4m (x4 over 6m) kubelet, node-3 Failed to pull image "eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I verified my secret by checking the YAML file and doing a base64 --decode on the .dockerconfigjson, and it is correct.
So what have I missed here?
If your GKE cluster and GCR registry are in the same project: you don't need to configure authentication. GKE clusters are authorized to pull from private GCR registries in the same project without any configuration. (Very likely this is your case!)
If your GKE cluster & GCR registry are in different GCP projects: Follow these instructions to give "service account" of your GKE cluster access to read private images in your GCR cluster: https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry
In a nutshell, this can be done by:
gsutil iam ch serviceAccount:[PROJECT_NUMBER]-compute@developer.gserviceaccount.com:objectViewer gs://[BUCKET_NAME]
where [BUCKET_NAME] is the GCS bucket storing your GCR images (like artifacts.[PROJECT-ID].appspot.com) and [PROJECT_NUMBER] is the numeric GCP project ID hosting your GKE cluster.
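To check whether the binding took effect, you can inspect the bucket's IAM policy and look for the objectViewer role on the cluster's service account (bucket name as above):
gsutil iam get gs://artifacts.[PROJECT-ID].appspot.com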