Pulling an image from a private container registry (Harbor) in Kubernetes

I am using Harbor (https://goharbor.io/) as a private container registry. I run Harbor using Docker Compose and it is working fine: I can push and pull images to this private registry from a VM, and I have already used the 'docker login' command to log in to this Harbor registry.
For Kubernetes, I am using k3s.
Now, I want to create a pod in Kubernetes using an image from this Harbor private repository. I referred to the Harbor and Kubernetes documentation (https://goharbor.io/docs/1.10/working-with-projects/working-with-images/pulling-pushing-images/ and https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) to pull the image.
As mentioned in Harbor documentation:
Kubernetes users can easily deploy pods with images stored in Harbor.
The settings are similar to those of any other private registry. There
are two issues to be aware of:
When your Harbor instance is hosting HTTP and the certificate is
self-signed, you must modify daemon.json on each worker node of your
cluster. For information, see
https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry.
If your pod references an image under a private project, you must
create a secret with the credentials of a user who has permission to
pull images from the project. For information, see
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
I created the daemon.json file in /etc/docker:
{
  "insecure-registries" : ["my-harbor-server:443"]
}
As mentioned in Kubernetes documentation, I created the Secret using this command:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Then I used a file called pod.yml to create the pod (using kubectl apply -f pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: my-harbor-server/my-project/mayapp:v1.0
  imagePullSecrets:
  - name: regcred
However, when I checked the pod status, it showed 'ImagePullBackOff'. The pod logs show:
Error from server (BadRequest): container "myapp" in pod "myapp" is waiting to start: trying and failing to pull image
Is there any other step that I have to perform to pull this image from the Harbor private registry into Kubernetes? What is the reason that I cannot pull this image?

The /etc/docker/daemon.json file configures the Docker engine. If your CRI is not the Docker shim, then this file does not apply to Kubernetes. For k3s, registries are configured using /etc/rancher/k3s/registries.yaml. See https://rancher.com/docs/k3s/latest/en/installation/private-registry/ for details on configuring this file. This needs to be done on each host.
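For reference, here is a minimal sketch of what /etc/rancher/k3s/registries.yaml could look like for a Harbor instance; the server name, credentials, and the insecure_skip_verify setting are placeholders you would adapt to your own setup (only skip TLS verification if the certificate is self-signed):
mirrors:
  "my-harbor-server:443":
    endpoint:
      - "https://my-harbor-server:443"
configs:
  "my-harbor-server:443":
    auth:
      username: <harbor-username>   # a user with pull permission on the project
      password: <harbor-password>
    tls:
      insecure_skip_verify: true    # only if the registry certificate is self-signed
After editing the file, restart k3s on each node (e.g. systemctl restart k3s, or k3s-agent on worker nodes) so the change takes effect.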

Related

How do I grant permission to my Kubernetes cluster to pull images from gcr.io?

In my container registry, permissions are set to Private.
When I create a pod on my cluster, the pod status ends in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now if I enable public access on my container repository, things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.
If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should grant the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial covering all the different cases is available in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
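For instance, a hedged sketch of granting that role with gcloud (the project ID and service account email are placeholders):
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:<node-sa-name>@<project-id>.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"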
To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in your k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/pod1:latest
  imagePullSecrets:
  - name: myregistrykey
Check out this tutorial from container-solutions and the official k8s docs.
GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure that the OAuth scope for your cluster includes https://www.googleapis.com/auth/devstorage.read_only.
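If you want to verify the scopes, something like the following should show them for an existing node pool (the cluster, node pool, and zone names are placeholders):
gcloud container node-pools describe <node-pool> \
  --cluster=<cluster-name> --zone=<zone> \
  --format="value(config.oauthScopes)"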

GKE pulling images from a private repository inside GCP

I've set up a private container registry that is integrated with Bitbucket successfully. However, I am not able to pull the images from my GKE cluster.
I created a service account with the role "Project Viewer" and a JSON key for this account. Then I created the secret in the cluster/namespace by running:
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/code/bitbucket/miappsrl/miappnodeapi/secrets/registry/miapp-staging-e94050365be1.json)" \
--docker-email=agusmiappgcp@gmail.com
And in the deployment file I added
...
imagePullSecrets:
- name: gcr-json-key
...
But when I apply the deployment I get
ImagePullBackOff
And when I do a kubectl describe pod <pod_name> I see
Failed to pull image "gcr.io/miapp-staging/miappnodeapi": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 169.254.169.254:53: no such host
I can't figure out what I am missing. I understand it cannot resolve the DNS name inside the cluster, but I'm not sure what I should add.
If a GKE cluster is set up as private, you need to set up DNS to reach Container Registry. From the documentation:
To support GKE private clusters that use Container Registry or Artifact Registry inside a service perimeter, you first need to configure your DNS server so requests to registry addresses resolve to restricted.googleapis.com, the restricted VIP. You can do so using Cloud DNS private DNS zones.
Verify whether you have set up your cluster as private.
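As a rough sketch of that DNS setup with Cloud DNS (the zone name and VPC network are placeholders; 199.36.153.4/30 is the documented restricted.googleapis.com VIP range):
gcloud dns managed-zones create gcr-io \
  --description="route gcr.io to the restricted VIP" \
  --dns-name=gcr.io. \
  --visibility=private \
  --networks=<your-vpc-network>
gcloud dns record-sets transaction start --zone=gcr-io
gcloud dns record-sets transaction add --zone=gcr-io \
  --name=gcr.io. --type=A --ttl=300 \
  199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7
gcloud dns record-sets transaction add --zone=gcr-io \
  --name="*.gcr.io." --type=CNAME --ttl=300 gcr.io.
gcloud dns record-sets transaction execute --zone=gcr-io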

GCP credentials with skaffold for local development

What is the standard approach to getting GCP credentials into k8s pods when using skaffold for local development?
When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?
You didn't mention what kind of Kubernetes cluster you have deployed locally, but if you use Minikube it can actually be achieved in a very similar way.
Suppose you have already initialized your Cloud SDK locally by running:
gcloud auth login
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project name>
gcloud config set project <project name>
so you can run your gcloud commands on the local machine on which Minikube is installed. You can easily delegate this access to Pods created either by Skaffold or manually on Minikube.
You just need to start your Minikube as follows:
minikube start --mount=true --mount-string="$HOME/.config/gcloud/:/home/docker/.config/gcloud/"
To keep things simple, I'm mounting the local Cloud SDK config directory into the Minikube host, using /home/docker/.config/gcloud/ as the mount point.
Once it is available on Minikube host VM, it can be easily mounted into any Pod. We can use one of the Cloud SDK docker images available here or any other image that comes with Cloud SDK preinstalled.
Sample Pod to test this out may look like the one below:
apiVersion: v1
kind: Pod
metadata:
  name: cloud-sdk-pod
spec:
  containers:
  - image: google/cloud-sdk:alpine
    command: ['sh', '-c', 'sleep 3600']
    name: cloud-sdk-container
    volumeMounts:
    - mountPath: /root/.config/gcloud
      name: gcloud-volume
  volumes:
  - name: gcloud-volume
    hostPath:
      # directory location on host
      path: /home/docker/.config/gcloud
      # this field is optional
      type: Directory
After connecting to the Pod by running:
kubectl exec -ti cloud-sdk-pod -- /bin/bash
we'll be able to execute any gcloud commands as we are able to execute them on our local machine.
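For example (assuming the mounted config contains valid credentials):
gcloud auth list
gcloud config list project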
If you're using Skaffold with a Minikube cluster then you can use their gcp-auth addon which does exactly this, described here in detail.
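A minimal sketch of enabling it (the addon is meant to mount your local gcloud credentials into your Pods automatically):
minikube addons enable gcp-auth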

minikube and azure container registry

I am trying to pull images into minikube from the Azure Container Registry. This keeps failing because it says unauthorized:
unauthorized: authentication required
I used kubectl create secret to add the credentials for the registry but it keeps failing.
What I tried so far:
I added the URL with and without https
I added the admin user and made a new service principal
I tried to add the secret to the default service account in hope something was wrong with the yaml
used minikube ssh to see if I could use docker login and docker pull (that worked).
Getting a bit desperate. What can I try next? How can I troubleshoot this better?
The docker login command produces a ~/.docker/config.json (or legacy ~/.dockercfg) file, which is used to authenticate subsequent docker push and docker pull requests; the kubectl create secret command only creates a Secret object inside the cluster.
I suspect you may have created your secret in the wrong namespace, given that your docker login and docker pull commands worked.
Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.
https://kubernetes.io/docs/concepts/containers/images/#using-azure-container-registry-acr
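For example, a hedged sketch of creating the pull secret explicitly in the namespace where your pod runs (the server, credentials, and namespace are placeholders):
kubectl create secret docker-registry acr-secret \
  --docker-server=<your-registry>.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  --namespace=<namespace-of-your-pod>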
You can create a secret and save it in your kubernetes context (in your case the minikube context).
Firstly be sure that you are pointing to the right context by running kubectl config get-contexts and kubectl config use-context minikube to change to minikube context.
Create a secret in minikube context by running kubectl create secret docker-registry acr-secret --docker-server=<your acr server>.azurecr.io --docker-username=<your acr username> --docker-password=<your acr password>. This command generates acr-secret.
Import the acr-secret into your deployment yml files by using imagePullSecrets inside each template.spec section, as shown in the fragment and the Deployment sketch below:
imagePullSecrets:
- name: acr-secret
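As a rough sketch of where that lands inside a Deployment (names and the image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your acr server>.azurecr.io/my-app:1.0
      imagePullSecrets:
      - name: acr-secret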
I think there is a much simpler answer. First you need to install the Azure CLI and log in:
az login
After that, you can get the credentials for your Azure Container Registry
az acr login -n yoursupercoolregistry.azurecr.io --expose-token
This will give you something like that:
{
  "accessToken": "averylongtoken",
  "loginServer": "yoursupercoolregistry.azurecr.io"
}
With this information, you can enable registry-creds and configure a Docker Registry (and not an Azure Container Registry):
minikube addons enable registry-creds
minikube addons configure registry-creds
You have to enter your server URL and 00000000-0000-0000-0000-000000000000 as the client ID.
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: yoursupercoolregistry.azurecr.io
-- Enter docker registry username: 00000000-0000-0000-0000-000000000000
And at the end paste the token in the command line, hit enter and you should get this
✅ registry-creds was successfully configured
And then add imagePullSecrets to your file
spec:
  containers:
  - name: supercoolcontainer
    image: yoursupercoolregistry.azurecr.io/supercoolimage:1.0.2
  imagePullSecrets:
  - name: dpr-secret

How to specify docker credentials for Kubernetes static pods with CRI enabled?

When starting up a Kubernetes cluster, I load etcd plus the core kubernetes processes - kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler - as static pods from a private registry. This has worked in the past by ensuring that the $HOME environment variable is set to "/root" for kubelet, and then having /root/.docker/config.json defined with the credentials for the private docker registry.
When attempting to run Kubernetes 1.6, with CRI enabled, I get errors in the kubelet log saying it cannot pull the pause:3.0 container from my private docker registry due to no authentication.
Setting --enable-cri=false on the kubelet command line works, but when CRI is enabled, it doesn't seem to use the /root/.docker/config file for authentication.
Is there some new way to provide the docker credentials needed to load static pods when running with CRI enabled?
In 1.6, I managed to make it work with the following recipe in https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
$ kubectl create secret docker-registry myregistrykey \
    --docker-server=DOCKER_REGISTRY_SERVER \
    --docker-username=DOCKER_USER \
    --docker-password=DOCKER_PASSWORD \
    --docker-email=DOCKER_EMAIL
You need to specify the newly created myregistrykey as the credential under the imagePullSecrets field in the pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
It turns out that there is a deficiency in the CRI capabilities in Kubernetes 1.6. With CRI, the "pause" container (now called the "Pod Sandbox Image") is treated as a special case, because it is an "implementation detail" of the container runtime whether or not you even need it. In the 1.6 release, the credentials applied for other containers (from /root/.docker/config.json, for example) are not used when trying to pull the Pod Sandbox Image.
Thus, if you are trying to pull this image from a private registry, the CRI logic doesn't associate the credentials with the pull request. There is now a Kubernetes issue (#45738) to address this, targeted for 1.7.
In the meantime, an easy workaround is to pre-pull the "Pause" container into the node's local docker image cache before starting up the kubelet process.
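A hedged sketch of that workaround (the registry path is a placeholder; the --pod-infra-container-image kubelet flag existed in 1.6 for pointing the kubelet at a specific sandbox image):
# on each node, before starting the kubelet
docker pull my-private-registry.example.com/library/pause:3.0
# optionally make the kubelet use exactly that image
kubelet ... --pod-infra-container-image=my-private-registry.example.com/library/pause:3.0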