minikube and azure container registry

I am trying to pull images into minikube from the Azure Container Registry. This keeps failing because it says unauthorized:
unauthorized: authentication required
I used kubectl create secret to add the credentials for the registry, but it keeps failing.
What I tried so far:
I added the URL with and without https
I added the admin user and made a new service principal
I tried to add the secret to the default service account, in case something was wrong with my yaml
I used minikube ssh to check whether docker login and docker pull work (they do)
Getting a bit desperate: what can I try next? How can I troubleshoot this better?

The docker login you ran inside minikube ssh writes credentials to ~/.docker/config.json (or the legacy ~/.dockercfg), which is what authenticates the subsequent docker push and docker pull requests; kubectl create secret docker-registry stores the same kind of credentials in a Kubernetes Secret for the kubelet to use.
I suspect you may have created your secret in the wrong namespace, given that your docker login and docker pull commands worked.
Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.
https://kubernetes.io/docs/concepts/containers/images/#using-azure-container-registry-acr
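Since docker pull works from inside minikube ssh, the credentials themselves are fine, so a namespace mismatch is the likely culprit. A quick check along these lines may help (acr-secret and the namespace are placeholder names):

kubectl get pods --all-namespaces        # find the namespace your pod actually runs in
kubectl create secret docker-registry acr-secret \
  --docker-server=<your-registry>.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  --namespace=<pod-namespace>
kubectl get secret acr-secret --namespace=<pod-namespace>   # verify it exists where the pod is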

You can create a secret and save it in your kubernetes context (in your case the minikube context).
First, make sure you are pointing at the right context by running kubectl config get-contexts, and kubectl config use-context minikube to switch to the minikube context.
Create a secret in the minikube context by running kubectl create secret docker-registry acr-secret --docker-server=<your acr server>.azurecr.io --docker-username=<your acr username> --docker-password=<your acr password>. This command generates acr-secret.
Import acr-secret into your deployment yml files by using imagePullSecrets inside each template.spec tag:
imagePullSecrets:
- name: acr-secret
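For reference, a minimal Deployment sketch showing where that snippet sits inside template.spec; the name, labels, and image tag are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your acr server>.azurecr.io/my-app:latest
      # the pull secret created above, referenced inside template.spec
      imagePullSecrets:
      - name: acr-secret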

I think there is a much simpler answer. First you need to install the Azure CLI and log in:
az login
After that, you can get the credentials for your Azure Container Registry
az acr login -n yoursupercoolregistry.azurecr.io --expose-token
This will give you something like this:
{
  "accessToken": "averylongtoken",
  "loginServer": "yoursupercoolregistry.azurecr.io"
}
With this information, you can enable registry-creds and configure a Docker Registry (not an Azure Container Registry):
minikube addons enable registry-creds
minikube addons configure registry-creds
You have to enter your server URL, and 00000000-0000-0000-0000-000000000000 (the token client ID) as the username.
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: yoursupercoolregistry.azurecr.io
-- Enter docker registry username: 00000000-0000-0000-0000-000000000000
And at the end, paste the token on the command line, hit Enter, and you should get this:
✅ registry-creds was successfully configured
Then add imagePullSecrets to your manifest:
spec:
  containers:
  - name: supercoolcontainer
    image: yoursupercoolregistry.azurecr.io/supercoolimage:1.0.2
  imagePullSecrets:
  - name: dpr-secret
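To confirm the addon wired everything up, you can check for the generated secret and watch the pull; dpr-secret is the name the addon generates (it may differ between addon versions), and the pod name is a placeholder:

kubectl get secret dpr-secret
kubectl describe pod <your-pod>   # events should show the image being pulled successfully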

Related

How do I grant permission to my Kubernetes cluster to pull images from gcr.io?

In my Google Container Registry I have the permission set to Private.
When I create a pod on my cluster I get the pod status ending in ImagePullBackOff, and when I describe the pod I see:
Failed to pull image "gcr.io/REDACTED": rpc error: code = Unknown desc = Error response from daemon: pull access denied for gcr.io/REDACTED, repository does not exist or may require 'docker login': denied: Permission denied for "v11" from request "/v2/REDACTED/manifests/v11".
I am certainly logged in.
docker login
Authenticating with existing credentials...
Login Succeeded
Now if I enable public access on my Container Repository, things work fine and the pod deploys correctly. But I don't want my repository to be public. What is the correct way to keep my container repository private and still be able to deploy? I'm pretty sure this used to work a couple of weeks ago, unless I messed up something with my service account, although I don't know how to find out which service account is being used for these permissions.
If your GKE version is > 1.15, the Container Registry is in the same project, and GKE uses the default Compute Engine service account (SA), it should work out of the box.
If you are running the registry in another project, or using a different service account, you should grant the SA the right permissions (e.g., roles/artifactregistry.reader).
A step-by-step tutorial covering the different cases is available in the official documentation: https://cloud.google.com/artifact-registry/docs/access-control#gcp
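For illustration, a sketch of granting that role; the project ID and service-account email are placeholders:

gcloud projects add-iam-policy-binding REGISTRY_PROJECT_ID \
  --member="serviceAccount:NODE_SA_EMAIL" \
  --role="roles/artifactregistry.reader"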
To use gcr.io or any other private artifact registry, you'll need to create a Secret of type docker-registry in the k8s cluster. The secret will contain the credential details of your registry:
kubectl create secret docker-registry <secret-name> \
--docker-server=<server-name> \
--docker-username=<user-name> \
--docker-password=<user-password> \
--docker-email=<user-email-id>
After this, you will need to specify the above secret in the imagePullSecrets property of your manifest so that k8s is able to authenticate and pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: default
spec:
  containers:
  - name: pod1
    image: gcr.io/pod1:latest
  imagePullSecrets:
  - name: myregistrykey
Check out this tutorial from container-solutions and the official k8s docs.
GKE uses the service account attached to the node pools to grant access to the registry; however, you must also make sure the OAuth scope for your cluster includes https://www.googleapis.com/auth/devstorage.read_only.
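For illustration, a sketch of creating a node pool with that scope; the cluster and pool names are placeholders:

gcloud container node-pools create my-pool \
  --cluster=my-cluster \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only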

Pulling image from private container registry (Harbor) in Kubernetes

I am using Harbor (https://goharbor.io/) as a private container registry. I run Harbor using Docker Compose, and it is working fine. I can push/pull images to this private registry from a VM. I have already used 'docker login' to log in to this Harbor repository.
For Kubernetes, I am using k3s.
Now, I want to create a pod in Kubernetes using the image in this Harbor private repository. I referred to Harbor & Kubernetes documentations (https://goharbor.io/docs/1.10/working-with-projects/working-with-images/pulling-pushing-images/) & (https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) to pull the image.
As mentioned in Harbor documentation:
Kubernetes users can easily deploy pods with images stored in Harbor. The settings are similar to those of any other private registry. There are two issues to be aware of:
When your Harbor instance is hosting HTTP and the certificate is self-signed, you must modify daemon.json on each work node of your cluster. For information, see https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry.
If your pod references an image under a private project, you must create a secret with the credentials of a user who has permission to pull images from the project. For information, see https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
I created the daemon.json file in /etc/docker:
{
  "insecure-registries" : ["my-harbor-server:443"]
}
As mentioned in Kubernetes documentation, I created the Secret using this command:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
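To sanity-check what was stored, the secret can be inspected (this follows the pattern from the k8s docs; regcred is the name created above):

kubectl get secret regcred --output=yaml
# .dockerconfigjson is base64-encoded; decode it to confirm the registry
# URL and credentials are what you expect:
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode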
Then I used a file called pod.yml to create pod (using kubectl apply -f pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: my-harbor-server/my-project/mayapp:v1.0
  imagePullSecrets:
  - name: regcred
However, when I checked the pod status, it was showing 'ImagePullBackOff'. The pod logs show:
Error from server (BadRequest): container "myapp" in pod "myapp" is waiting to start: trying and failing to pull image
Is there any other step that I have to do to pull this image from the Harbor private repository into Kubernetes? What is the reason that I cannot pull it?
The /etc/docker/daemon.json file configures the docker engine. If your CRI is not the docker shim, then this file will not apply to Kubernetes. For k3s, registries are configured in /etc/rancher/k3s/registries.yaml. See https://rancher.com/docs/k3s/latest/en/installation/private-registry/ for details on configuring this file. It needs to be done on each host.
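For reference, a registries.yaml sketch for the Harbor host above; the credentials are placeholders, and insecure_skip_verify is an assumption to match the self-signed certificate described in the question:

# /etc/rancher/k3s/registries.yaml (create on every node)
mirrors:
  my-harbor-server:
    endpoint:
      - "https://my-harbor-server:443"
configs:
  "my-harbor-server":
    auth:
      username: <harbor-user-with-pull-permission>
      password: <harbor-password>
    tls:
      insecure_skip_verify: true   # only because the certificate is self-signed

After writing the file, restart k3s (e.g. systemctl restart k3s) so it takes effect.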

KOPS reload ssh access key to cluster

I want to rotate the SSH access key for my Kubernetes cluster, using the commands from this page:
https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
namely:
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes
And when I run the last command, kops update cluster --yes, I get this error:
completed cluster failed validation: spec.spec.kubeProxy.enabled: Forbidden: kube-router requires kubeProxy to be disabled
Does anybody have an idea how I can change the secret key without having to disable kubeProxy?
This problem comes from having set
spec:
  networking:
    kuberouter: {}
but not
spec:
  kubeProxy:
    enabled: false
in the cluster spec.
Export the config using kops get -o yaml > myspec.yaml and edit it according to the error above. Then apply the spec using kops replace -f myspec.yaml.
It is considered a best practice to check the above yaml into version control to track any changes done to the cluster configuration.
Once the cluster spec has been amended, the new ssh key should work as well.
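Putting the steps together, a sketch (add --name <clustername> if your cluster is not picked up from the environment):

kops get -o yaml > myspec.yaml
# edit myspec.yaml and add, under spec:
#   kubeProxy:
#     enabled: false
kops replace -f myspec.yaml
kops update cluster --yes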
What version of Kubernetes are you running? If you are running the latest one, 1.18.xx, the user is not admin but ubuntu.
One other thing you could do is to first edit the cluster and set kubeProxy to enabled. Run kops update cluster and a rolling update, and then do the secret deletion and creation.

GCP credentials with skaffold for local development

What is the standard approach to getting GCP credentials into k8s pods when using skaffold for local development?
When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?
You didn't mention what kind of kubernetes cluster you have deployed locally, but if you use Minikube it can actually be achieved in a very similar way.
Suppose you have already initialized your Cloud SDK locally by running:
gcloud auth login
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project name>
gcloud config set project <project name>
so you can run your gcloud commands on the local machine on which Minikube is installed. You can easily delegate this access to Pods created either by Skaffold or manually on Minikube.
You just need to start your Minikube as follows:
minikube start --mount=true --mount-string="$HOME/.config/gcloud/:/home/docker/.config/gcloud/"
To keep things simple, I'm mounting the local Cloud SDK config directory into the Minikube host, using /home/docker/.config/gcloud/ as the mount point.
Once it is available on the Minikube host VM, it can easily be mounted into any Pod. We can use one of the Cloud SDK docker images available here, or any other image that comes with the Cloud SDK preinstalled.
A sample Pod to test this out may look like the one below:
apiVersion: v1
kind: Pod
metadata:
  name: cloud-sdk-pod
spec:
  containers:
  - image: google/cloud-sdk:alpine
    command: ['sh', '-c', 'sleep 3600']
    name: cloud-sdk-container
    volumeMounts:
    - mountPath: /root/.config/gcloud
      name: gcloud-volume
  volumes:
  - name: gcloud-volume
    hostPath:
      # directory location on host
      path: /home/docker/.config/gcloud
      # this field is optional
      type: Directory
After connecting to the Pod by running:
kubectl exec -ti cloud-sdk-pod -- /bin/bash
we'll be able to execute any gcloud commands, just as we execute them on our local machine.
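As a quick sanity check inside the Pod, the mounted config should already carry your identity:

gcloud auth list
gcloud config list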
If you're using Skaffold with a Minikube cluster then you can use their gcp-auth addon which does exactly this, described here in detail.
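Enabling the addon is a one-liner; it injects your local gcloud credentials into pods automatically:

minikube addons enable gcp-auth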

Using GKE service account credentials with kubectl

I am trying to invoke kubectl from within my CI system. I wish to use a google cloud service account for authentication. I have a secret management system in place that injects secrets into my CI system.
However, my CI system does not have gcloud installed, and I do not wish to install that. It only contains kubectl. Is there any way that I can use a credentials.json file containing a gcloud service account (not a kubernetes service account) directly with kubectl?
The easiest way to skip the gcloud CLI is probably to use the --token option. You can get a token with RBAC by creating a service account and tying it to a ClusterRole or Role with either a ClusterRoleBinding or RoleBinding, as sketched below.
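A sketch of minting such a token; the names are placeholders, and kubectl create token requires Kubernetes 1.24+ (on older clusters, read the token from the service account's generated secret instead):

kubectl create serviceaccount ci-bot
kubectl create clusterrolebinding ci-bot-view \
  --clusterrole=view --serviceaccount=default:ci-bot
kubectl create token ci-bot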
Then from the command line:
$ kubectl --token <token-from-your-service-account> get pods
You still will need a context in your ~/.kube/config:
- context:
    cluster: kubernetes
  name: kubernetes-token
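One way to set that context up without hand-editing the file (the server address is a placeholder):

kubectl config set-cluster kubernetes --server=https://<address-of-your-kube-api-server>:6443
kubectl config set-context kubernetes-token --cluster=kubernetes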
Otherwise, you will have to use:
$ kubectl --insecure-skip-tls-verify --token <token-from-your-service-account> -s https://<address-of-your-kube-api-server>:6443 get pods
Note that if you don't want the token to show up in the logs, you can do something like this:
$ kubectl --token $(cat /path/to/a/file/where/the/token/is/stored) get pods
Also note that this doesn't prevent someone from running ps -Af on your box and grabbing the token from there, for the lifetime of the kubectl process (it's a good idea to rotate the tokens).
Edit:
To avoid passing the token through $(cat /path/to/a/file/where/the/token/is/stored) on every call, you can store it in your kubeconfig with kubectl config set-credentials, as sketched below.
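A sketch of that flow, with ci-user as a placeholder credential name:

kubectl config set-credentials ci-user --token="$(cat /path/to/a/file/where/the/token/is/stored)"
kubectl config set-context kubernetes-token --cluster=kubernetes --user=ci-user
kubectl --context=kubernetes-token get pods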