How to pull ACR image from k3s pods - kubernetes

I have customized the coredns image and pushed it to my Azure Container Registry (ACR). In the default coredns pod that comes with a k3s installation, I want to use the my_azure_acr_repo/proj/customize-coredns:latest image instead of rancher/coredns-coredns:1.8.3. So I edited the coredns deployment with kubectl edit deploy coredns -n kube-system and replaced the rancher image with my ACR one. But now the coredns pod is not able to pull my ACR image, and the pod description shows this error:
Failed to pull image "my_azure_acr_repo/proj/customize-coredns:latest": rpc error:
code = Unknown desc = failed to pull and unpack image "my_azure_acr_repo/proj/customize-coredns:latest":
failed to resolve reference "my_azure_acr_repo/proj/customize-coredns:latest": failed to
authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
How can I authenticate to ACR so that the pod can pull the image?

That's because your cluster is not authorized to pull images from your private ACR.
First you have to create a secret with your ACR credentials, then reference that secret in your deployment using imagePullSecrets.
You can create the secret with this command; make sure to replace the credential variables:
kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
For ACR it will be something like this:
kubectl create secret docker-registry regkey --docker-server=https://myregistry.azurecr.io --docker-username=ACR_USERNAME --docker-password=ACR_PASSWORD --docker-email=ANY_EMAIL_ADDRESS
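If you don't already have the registry credentials handy, one way to look them up is the Azure CLI (a sketch, assuming the ACR admin user is enabled on a registry named myregistry):

az acr credential show --name myregistry --query "username" -o tsv
az acr credential show --name myregistry --query "passwords[0].value" -o tsv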
Your deployment spec:
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: regkey
More info related to this.
https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
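Since the coredns deployment in question lives in the kube-system namespace, the secret has to be created in kube-system as well (imagePullSecrets only resolve secrets in the pod's own namespace). A minimal sketch, with acr-secret and the registry values as placeholders:

kubectl create secret docker-registry acr-secret -n kube-system \
  --docker-server=myregistry.azurecr.io \
  --docker-username=ACR_USERNAME \
  --docker-password=ACR_PASSWORD
kubectl patch deployment coredns -n kube-system \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"acr-secret"}]}}}}'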

Related

Docker pull rate request limit from Kubernetes environment

I logged in using the docker login command on my machine.
Then I tried to run the kubectl command to apply a yaml file:
kubectl apply -f manifests/1_helloworld_deploy.yaml
but this failed with error :
Warning Failed 20s (x2 over 35s) kubelet Failed to pull image "nginx:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:89ea560b277f54022cf0b2e718d83a9377095333f8890e31835f615922071ddc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Now, I have already logged in with my Docker username and account, yet I'm still getting a pull rate error.
What should I do to make this work for the .yaml file as well?
When you log in to Docker on a machine, its credentials are saved in a file named ~/.docker/config.json.
We need to explicitly instruct Kubernetes to use these credentials while pulling images.
For that we create a secret from the contents of ~/.docker/config.json and reference it as imagePullSecrets in the yaml file.
Following is a sample procedure:
kubectl create secret docker-registry my-secret --from-file=.dockerconfigjson=/root/.docker/config.json
In the pod spec, reference it as an image pull secret as follows:
apiVersion: v1
kind: Pod
metadata:
  name: boo
spec:
  containers:
  - name: boo
    image: busybox
  imagePullSecrets:
  - name: my-secret
Detailed Documentation
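If you would rather not add imagePullSecrets to every manifest, Kubernetes also lets you attach the secret to a service account, so every pod that uses that account picks it up automatically. A minimal sketch, assuming the secret created above is named my-secret and you want it on the default service account of the current namespace:

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "my-secret"}]}'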

How to configure microk8s kubernetes to use private container's in https://hub.docker.com/?

The microk8s document "Working with a private registry" leaves me unsure what to do. The Secure registry portion says Kubernetes does it one way (without indicating whether or not Kubernetes' way applies to microk8s), and microk8s uses containerd inside its implementation.
My YAML file contains a reference to a private container on dockerhub.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blaw
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blaw
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: blaw
    spec:
      containers:
      - image: johngrabner/py_blaw_service:v0.3.10
        name: py-transcribe-service
When I microk8s kubectl apply this file and do a microk8s kubectl describe, I get:
Warning Failed 16m (x4 over 18m) kubelet Failed to pull image "johngrabner/py_blaw_service:v0.3.10": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/johngrabner/py_blaw_service:v0.3.10": failed to resolve reference "docker.io/johngrabner/py_blaw_service:v0.3.10": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I have verified that I can download this repo from a console doing a docker pull command.
Pods using public containers work fine in microk8s.
The file /var/snap/microk8s/current/args/containerd-template.toml already contains something to make dockerhub work since public containers work. Within this file, I found
# 'plugins."io.containerd.grpc.v1.cri".registry' contains config related to the registry
[plugins."io.containerd.grpc.v1.cri".registry]
# 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
endpoint = ["https://registry-1.docker.io", ]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
endpoint = ["http://localhost:32000"]
The above does not appear related to authentication.
On the internet, I found instructions to create a secret to store credentials, but this does not work either.
microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson
While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with imagePullSecrets as described in the microk8s document you mentioned.
Since you have already created your secret, you just have to reference it in your deployment:
...
spec:
  containers:
  - image: johngrabner/py_blaw_service:v0.3.10
    name: py-transcribe-service
  imagePullSecrets:
  - name: regcred
...
For more reading check how to Pull an Image from a Private Registry.
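Alternatively, since microk8s runs containerd, you can hand the credentials to containerd itself instead of using per-pod secrets. The containerd CRI plugin supports a per-registry auth section; a sketch, assuming you add it to /var/snap/microk8s/current/args/containerd-template.toml (the username/password values are placeholders) and then restart microk8s:

[plugins."io.containerd.grpc.v1.cri".registry.configs."registry-1.docker.io".auth]
  username = "DOCKERHUB_USERNAME"
  password = "DOCKERHUB_PASSWORD"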

Kubernetes image pull from nexus3 docker host is failed

We have created a Nexus 3 private Docker registry on a CentOS machine, and the same IP details are updated in daemon.json under the Docker folder.
Docker pull and push are working fine.
But deploying the same image to Kubernetes fails at the image pull stage.
$ kubectl run deployname --image=nexus3privaterepo:port/image
We had already created secret entries beforehand via $ kubectl create secret, using the same user ID and password as for docker login -u userid -p passwd.
My problem is that the image pull from the Nexus 3 Docker host is failing.
Please suggest how to verify the login via a kubernetes command and how to resolve this image pull issue.
Looking forward to your suggestions, thanks in advance.
When pulling from private repos you need to specify an imagePullSecret, like so:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  # Specify the secret with your users credentials
  imagePullSecrets:
  - name: regcred
You would then use the kubectl apply -f functionality. I am not actually sure you can do this with the imperative CLI version of running a deployment, but all the documentation on this can be found here
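To create and then verify such a secret against the Nexus registry, something like this should work (a sketch; the server, user, and secret names are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=nexus3privaterepo:port \
  --docker-username=NEXUS_USER \
  --docker-password=NEXUS_PASSWORD
# Inspect what actually ended up in the secret
kubectl get secret regcred -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d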

kubectl pod fails to pull down an AWS ECR image

step 1 sudo $(aws ecr get-login --no-include-email --region xx-xxxx-x)
step 2 curl -LSs https://github.com/fermayo/ecr-k8s-secret/raw/master/gen-secret.sh | bash -
step 3 kubectl describe secret aws-ecr-credentials
Name: aws-ecr-credentials
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
.dockerconfigjson: 32 bytes
step 4 kubectl describe pod x
Warning Failed 5s kubelet, ip-10-46-250-151 Failed to pull image "my-account.dkr.ecr.us-east-1.amazonaws.com/my-image:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://my-account.dkr.ecr.us-east-1.amazonaws.com/my-image/latest: no basic auth credentials
Why can't the pod pull down the image?
I created a script that pulls the token from AWS ECR:
ACCOUNT=xxxxxxxxxxxx
REGION=xx-xxxx-x
SECRET_NAME=${REGION}-ecr-registry
EMAIL=email@email.com
#
#
TOKEN=`aws ecr --region=$REGION get-authorization-token --output text \
--query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`
#
# Create or replace registry secret
#
kubectl delete secret --ignore-not-found $SECRET_NAME
kubectl create secret docker-registry $SECRET_NAME \
--docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}" \
--docker-email="${EMAIL}"
and created a Linux cronjob to run this every 10 hours
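For reference, the crontab entry could look something like this (the script path is hypothetical); refreshing every 10 hours keeps the secret valid within the roughly 12-hour lifetime of an ECR authorization token:

0 */10 * * * /usr/local/bin/refresh-ecr-secret.sh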
Your Deployment manifest will need to specify that the container registry credentials are in a secret. This is as simple as adding imagePullSecrets:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
      - image: your-registry/image/name:tag
        name: app-container
      imagePullSecrets:
      - name: secret-name
I too was banging my head on this and realized it was a region mismatch. I was getting my token from us-east-2 when the image is located in us-west-2.
Snippet from https://docs.aws.amazon.com/AmazonECR/latest/userguide/common-errors-docker.html#error-403
There are times when you may receive an HTTP 403 (Forbidden) error, or the error message no basic auth credentials from the docker push command, even if you have successfully authenticated to Docker using the aws ecr get-login command. The following are some known causes of this issue:
You have authenticated to a different region: Authentication requests are tied to specific regions, and cannot be used across regions. For example, if you obtain an authorization token from US West (Oregon), you cannot use it to authenticate against your repositories in US East (N. Virginia). To resolve the issue, ensure that you are using the same region for both authentication and docker push command calls.
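In other words, the region in the registry hostname must match the region you authenticate against. With the newer AWS CLI the check looks roughly like this (account ID and region are placeholders):

aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin my-account.dkr.ecr.us-west-2.amazonaws.com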

Pulling Images from GCR into GKE

Today is my first day playing with GCR and GKE. So apologies if my question sounds childish.
So I have created a new registry in GCR. It is private. Using this documentation, I got hold of my Access Token using the command
gcloud auth print-access-token
#<MY-ACCESS_TOKEN>
I know that my username is oauth2accesstoken
On my local laptop when I try
docker login https://eu.gcr.io/v2
Username: oauth2accesstoken
Password: <MY-ACCESS_TOKEN>
I get:
Login Successful
So now it's time to create a docker-registry secret in Kubernetes.
I ran the below command:
kubectl create secret docker-registry eu-gcr-io-registry --docker-server='https://eu.gcr.io/v2' --docker-username='oauth2accesstoken' --docker-password='<MY-ACCESS_TOKEN>' --docker-email='<MY_EMAIL>'
And then my Pod definition looks like:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest
    ports:
    - containerPort: 8090
  imagePullSecrets:
  - name: eu-gcr-io-registry
But when I spin up the pod, I get the ERROR:
Warning Failed 4m (x4 over 6m) kubelet, node-3 Failed to pull image "eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I verified my secrets checking the YAML file and doing a base64 --decode on the .dockerconfigjson and it is correct.
So what have I missed here?
If your GKE cluster & GCR registry are in the same GCP project: you don't need to configure authentication. GKE clusters are authorized to pull from private GCR registries in the same project with no config. (Very likely this is your case!)
If your GKE cluster & GCR registry are in different GCP projects: follow these instructions to give the service account of your GKE cluster access to read private images in your GCR registry: https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry
In a nutshell, this can be done by:
gsutil iam ch serviceAccount:[PROJECT_NUMBER]-compute@developer.gserviceaccount.com:objectViewer gs://[BUCKET_NAME]
where [BUCKET_NAME] is the GCS bucket storing your GCR images (like artifacts.[PROJECT-ID].appspot.com) and [PROJECT_NUMBER] is the numeric GCP project ID hosting your GKE cluster.
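Note also that a token from gcloud auth print-access-token is short-lived (roughly an hour), so it makes a poor long-lived imagePullSecret. If you do need an explicit secret (for example, to pull GCR images from outside GCP), the documented long-lived option is a service account JSON key used with the _json_key username; a sketch, with the service account and project names as placeholders:

gcloud iam service-accounts keys create gcr-pull-key.json \
  --iam-account=image-puller@<my-gcp-project>.iam.gserviceaccount.com
kubectl create secret docker-registry eu-gcr-io-registry \
  --docker-server=https://eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-pull-key.json)" \
  --docker-email='<MY_EMAIL>'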