What is the standard approach to getting GCP credentials into k8s pods when using skaffold for local development?
When I previously used docker compose and aws it was easy to volume mount the ~/.aws folder to the container and everything just worked. Is there an equivalent solution for skaffold and gcp?
You didn't mention what kind of Kubernetes cluster you have deployed locally, but if you use Minikube it can actually be achieved in a very similar way.
Suppose you have already initialized your Cloud SDK locally by running:
gcloud auth login
gcloud container clusters get-credentials <cluster name> --zone <zone> --project <project name>
gcloud config set project <project name>
so that you can run gcloud commands on the local machine on which Minikube is installed. You can easily delegate this access to the Pods created on Minikube, whether by Skaffold or manually.
You just need to start your Minikube as follows:
minikube start --mount=true --mount-string="$HOME/.config/gcloud/:/home/docker/.config/gcloud/"
To keep things simple, I'm mounting the local Cloud SDK config directory into the Minikube host VM, using /home/docker/.config/gcloud/ as the mount point.
Once it is available on the Minikube host VM, it can easily be mounted into any Pod. We can use one of the Cloud SDK Docker images available here, or any other image that comes with the Cloud SDK preinstalled.
A sample Pod to test this out may look like the one below:
apiVersion: v1
kind: Pod
metadata:
  name: cloud-sdk-pod
spec:
  containers:
  - image: google/cloud-sdk:alpine
    command: ['sh', '-c', 'sleep 3600']
    name: cloud-sdk-container
    volumeMounts:
    - mountPath: /root/.config/gcloud
      name: gcloud-volume
  volumes:
  - name: gcloud-volume
    hostPath:
      # directory location on host
      path: /home/docker/.config/gcloud
      # this field is optional
      type: Directory
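Assuming the manifest above is saved as cloud-sdk-pod.yaml (the file name is just an example), the Pod can be created with:
kubectl apply -f cloud-sdk-pod.yaml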
After connecting to the Pod by running:
kubectl exec -ti cloud-sdk-pod -- /bin/bash
we'll be able to execute gcloud commands just as we can on our local machine.
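For instance, a quick sanity check inside the Pod could be:
gcloud auth list
gcloud config list
which should show the same account and project that are configured on the host.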
If you're using Skaffold with a Minikube cluster, you can use Minikube's gcp-auth addon, which does exactly this and is described here in detail.
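The addon is enabled with a single command, after which it injects the credentials into your Pods automatically:
minikube addons enable gcp-auth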
I want to run Kubernetes locally so I can try it out on a small Java program I have written.
I installed WSL2 on my Windows 11 laptop, Docker Desktop, and enabled Kubernetes in Docker Settings.
There are a number of SO questions with the same error but I do not see any of them regarding Windows 11 and Docker Desktop.
I open a terminal and type wsl to open a Linux terminal. Then I issue the command:
$ kubectl get pods
but I see:
The connection to the server 127.0.0.1:49994 was refused - did you specify the right host or port?
Using Docker Desktop and Kubernetes on Ubuntu Linux, I got the same error. In my case Docker Desktop was also unable to start normally because I already had a Docker installation on my machine, which left the docker context set to the default Docker environment instead of the required Docker Desktop context.
Confirm the following first:
Make sure kubectl is correctly installed and that the ~/.kube/config file exists and is correctly configured on your machine, because it holds the cluster info and the port to connect to, both of which are used by kubectl.
Check the context with
kubectl config view
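A quicker check, printing only the name of the active context, is:
kubectl config current-context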
If current-context is not set to docker-desktop (a correctly configured file looks like the following example),
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
then set the context to docker-desktop on your machine:
kubectl config use-context docker-desktop
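Afterwards you can verify that kubectl reaches the Docker Desktop cluster with, for example:
kubectl cluster-info
kubectl get nodes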
If that doesn't solve your issue, you may have to check for Windows 11-specific Docker Desktop Kubernetes configuration/features.
Check also:
Docker Desktop Windows FAQs
Local K8s - 1.24.2
Rancher - 2.6.6
Rancher is installed on a virtual machine on the same subnet as the K8s cluster.
I install Rancher as a single Docker container with a persistent volume:
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher --privileged rancher/rancher:latest
After that I try to add the existing cluster. Since I only have a self-signed certificate, I use the --insecure variant of the import command:
% curl --insecure -sfL https://<My-IP>:8446/v3/import/dtdsrjczmb2bg79r82x9qd8g9gjc8d5fl86drc8m9zhpst2d9h6pfn.yaml | kubectl apply -f -
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-3035218 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.extensions/cattle-cluster-agent created
daemonset.extensions/cattle-node-agent created
All the required resources are created.
Then K8s tries to start the pod, but the pod fails with the error: "starting" time="2022-07-11T13:19:14Z" level=fatal msg="looking up cattle-system/cattle ca/token: no secret exists for service account cattle-system/cattle"
I checked, and the cattle service account doesn't have a secret. It looks like the relevant function expects ServiceAccountRootCAKey and ServiceAccountTokenKey. How can I get this info from my Rancher server? Can I add this secret to the service account myself, and will that solve the issue?
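For reference, Kubernetes 1.24 no longer auto-generates token Secrets for service accounts, so such a secret would have to be created manually. A rough sketch (the secret name cattle-token is hypothetical, and whether the Rancher agent picks it up depends on the agent version):
apiVersion: v1
kind: Secret
metadata:
  name: cattle-token
  namespace: cattle-system
  annotations:
    kubernetes.io/service-account.name: cattle
type: kubernetes.io/service-account-token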
I am using Harbor (https://goharbor.io/) as a private container registry. I run Harbor using docker compose, and it is working fine. I can push/pull images to this private registry from a VM. I have already used the 'docker login' command to log in to this Harbor registry.
For Kubernetes, I am using k3s.
Now, I want to create a pod in Kubernetes using the image in this Harbor private repository. I referred to Harbor & Kubernetes documentations (https://goharbor.io/docs/1.10/working-with-projects/working-with-images/pulling-pushing-images/) & (https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) to pull the image.
As mentioned in Harbor documentation:
Kubernetes users can easily deploy pods with images stored in Harbor.
The settings are similar to those of any other private registry. There
are two issues to be aware of:
When your Harbor instance is hosting HTTP and the certificate is
self-signed, you must modify daemon.json on each work node of your
cluster. For information, see
https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry.
If your pod references an image under a private project, you must
create a secret with the credentials of a user who has permission to
pull images from the project. For information, see
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
I created the daemon.json file in /etc/docker:
{
  "insecure-registries" : ["my-harbor-server:443"]
}
As mentioned in Kubernetes documentation, I created the Secret using this command:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
Then I used a file called pod.yml to create the pod (using kubectl apply -f pod.yml):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: my-harbor-server/my-project/mayapp:v1.0
  imagePullSecrets:
  - name: regcred
However, when I checked the pod status, it showed 'ImagePullBackOff'. The pod logs show:
Error from server (BadRequest): container "myapp" in pod "myapp" is waiting to start: trying and failing to pull image
Is there any other step I have to do to pull this image from the Harbor private registry into Kubernetes? What is the reason that I cannot pull it?
The /etc/docker/daemon.json file configures the Docker engine. If your CRI is not the Docker shim, then this file will not apply to Kubernetes. For k3s, the registry configuration lives in /etc/rancher/k3s/registries.yaml. See https://rancher.com/docs/k3s/latest/en/installation/private-registry/ for details on configuring this file. It needs to be done on each host.
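A rough sketch of registries.yaml for this setup (assuming Harbor is reachable as my-harbor-server:443 over HTTPS with a self-signed certificate; the username and password are placeholders):
mirrors:
  "my-harbor-server:443":
    endpoint:
      - "https://my-harbor-server:443"
configs:
  "my-harbor-server:443":
    auth:
      username: <harbor user>
      password: <harbor password>
    tls:
      insecure_skip_verify: true
After editing the file, restart k3s on the node so the configuration is picked up.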
I am trying to get images in minikube from the Azure container registry. This keeps failing with an unauthorized error:
unauthorized: authentication required
I used kubectl create secret to add the credentials for the registry but it keeps failing.
what I tried so far:
I added the URL with and without https
I added the admin user and made a new service principle
I tried to add the secret to the default service account in hope something was wrong with the yaml
used minikube ssh to see if I could use docker login and docker pull (that worked).
I'm getting a bit desperate. What can I try next? How can I troubleshoot this better?
The kubectl create secret command stores the registry credentials in a Kubernetes Secret, which is what the kubelet uses to authenticate with the registry when pulling images for your pods.
I suspect you may have created your secret in the wrong namespace, given that your docker login and docker pull commands worked.
Pods can only reference image pull secrets in their own namespace, so this process needs to be done one time per namespace.
https://kubernetes.io/docs/concepts/containers/images/#using-azure-container-registry-acr
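For example (a sketch; my-app and the credential placeholders are illustrative), the secret has to exist in the namespace the pod runs in:
kubectl create secret docker-registry acr-secret \
  --namespace my-app \
  --docker-server=<your acr server>.azurecr.io \
  --docker-username=<service principal id> \
  --docker-password=<service principal password>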
You can create a secret and save it in your kubernetes context (in your case the minikube context).
First, be sure that you are pointing at the right context by running kubectl config get-contexts, and kubectl config use-context minikube to switch to the minikube context.
Create a secret in minikube context by running kubectl create secret docker-registry acr-secret --docker-server=<your acr server>.azurecr.io --docker-username=<your acr username> --docker-password=<your acr password>. This command generates acr-secret.
Reference the acr-secret in your deployment YAML files by adding imagePullSecrets inside each template.spec section:
imagePullSecrets:
- name: acr-secret
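In a full Deployment manifest this sits roughly as follows (a sketch; the deployment and image names are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your acr server>.azurecr.io/my-app:latest
      imagePullSecrets:
      - name: acr-secret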
I think there is a much simpler answer. First you need to install the Azure CLI and log in:
az login
After that, you can get the credentials for your Azure Container Registry
az acr login -n yoursupercoolregistry.azurecr.io --expose-token
This will give you something like this:
{
  "accessToken": "averylongtoken",
  "loginServer": "yoursupercoolregistry.azurecr.io"
}
With this information, you can enable registry-creds and configure a Docker Registry (rather than an Azure Container Registry):
minikube addons enable registry-creds
minikube addons configure registry-creds
You have to enter your server URL and 00000000-0000-0000-0000-000000000000 as the username.
Do you want to enable Docker Registry? [y/n]: y
-- Enter docker registry server url: yoursupercoolregistry.azurecr.io
-- Enter docker registry username: 00000000-0000-0000-0000-000000000000
And at the end, paste the token into the command line, hit enter, and you should get this:
✅ registry-creds was successfully configured
And then add imagePullSecrets to your file
spec:
  containers:
  - name: supercoolcontainer
    image: yoursupercoolregistry.azurecr.io/supercoolimage:1.0.2
  imagePullSecrets:
  - name: dpr-secret
When starting up a Kubernetes cluster, I load etcd plus the core kubernetes processes - kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler - as static pods from a private registry. This has worked in the past by ensuring that the $HOME environment variable is set to "/root" for kubelet, and then having /root/.docker/config.json defined with the credentials for the private docker registry.
When attempting to run Kubernetes 1.6, with CRI enabled, I get errors in the kubelet log saying it cannot pull the pause:3.0 container from my private docker registry due to no authentication.
Setting --enable-cri=false on the kubelet command line works, but when CRI is enabled, it doesn't seem to use the /root/.docker/config file for authentication.
Is there some new way to provide the docker credentials needed to load static pods when running with CRI enabled?
In 1.6, I managed to make it work with the following recipe in https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
You need to reference the newly created myregistrykey under the imagePullSecrets field in the pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: awesomeapps
spec:
  containers:
  - name: foo
    image: janedoe/awesomeapp:v1
  imagePullSecrets:
  - name: myregistrykey
It turns out that there is a deficiency in the CRI capabilities in Kubernetes 1.6. With CRI, the "pause" container - now called the "Pod Sandbox Image" - is treated as a special case, because it is an "implementation detail" of the container runtime whether or not you even need it. In the 1.6 release, the credentials applied to other containers, from /root/.docker/config.json for example, are not used when trying to pull the Pod Sandbox Image.
Thus, if you are trying to pull this image from a private registry, the CRI logic doesn't associate the credentials with the pull request. There is now a Kubernetes issue (#45738) to address this, targeted for 1.7.
In the meantime, an easy workaround is to pre-pull the "Pause" container into the node's local docker image cache before starting up the kubelet process.
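A rough sketch of that workaround, run on each node before the kubelet starts (the registry name is a placeholder and this assumes the kubelet's --pod-infra-container-image flag already points at your registry's copy of the pause image):
docker login my-registry.example.com
docker pull my-registry.example.com/pause:3.0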