Docker Desktop Kubernetes can't pull from private registry

I'm running Docker Desktop with the built-in Kubernetes cluster. I have an image in an on-prem GitLab instance. I created a project API key, and on the local machine I can do a docker push gitlab.myserver.com/group/project:latest, and similarly pull the image, after doing a docker login gitlab.myserver.com with the project bot username and the API key.
I create a Kubernetes secret with kubectl create secret docker-registry myserver --docker-server=gitlab.myserver.com --docker-username=project_42_bot --docker-password=API_KEY
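Under the hood, kubectl create secret docker-registry just builds a kubernetes.io/dockerconfigjson payload from those flags; a small sketch of roughly what it generates, useful for checking what the secret will actually contain:

```python
import base64
import json

def dockerconfigjson(server, username, password):
    """Build (approximately) the kubernetes.io/dockerconfigjson payload
    that `kubectl create secret docker-registry` generates from its flags."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps(
        {"auths": {server: {"username": username,
                            "password": password,
                            "auth": auth}}}
    )

cfg = dockerconfigjson("gitlab.myserver.com", "project_42_bot", "API_KEY")
print(cfg)
```

The real secret stores this JSON base64-encoded under the .dockerconfigjson key; you can inspect it with kubectl get secret myserver -o yaml.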
I then create a pod:
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - args:
    - data_generator.py
    image: gitlab.myserver.com/group/project:latest
    imagePullPolicy: Always
    name: foo
  imagePullSecrets:
  - name: myserver
but I get an "access forbidden" error on the pull.
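When docker login works but the in-cluster pull is forbidden, one common cause is that the registry host in the image reference doesn't exactly match a key under auths in the secret (another is the secret living in a different namespace than the pod). A rough sketch of that host-matching check, approximating Docker's reference parsing:

```python
import json

def registry_host(image):
    """Approximate Docker's rule for extracting the registry host from an
    image reference: the first path segment counts as a host only if it
    contains a dot or a port; otherwise the default registry is assumed."""
    if "/" not in image:
        return "docker.io"
    first = image.split("/", 1)[0]
    return first if ("." in first or ":" in first) else "docker.io"

def secret_covers_image(dockerconfigjson, image):
    """Check whether a .dockerconfigjson payload has credentials for the
    registry an image reference points at."""
    auths = json.loads(dockerconfigjson)["auths"]
    return registry_host(image) in auths
```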

Try creating a secret for registry login by running the command below:
kubectl create secret docker-registry <secret_name> --docker-server=<your.registry.domain.name> --docker-username=<user> --docker-password=<password> --docker-email=<your_email>
And then use this secret for deployment:
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      imagePullSecrets:
      - name: <secret_name>
Refer to the documentation and github link for more information.

Related

How to pass a Secret in Vault to imagePullSecrets to access an image in private registry in Kubernetes

I created a Secret in Vault instead of using the Kubernetes API, and would like to use that Vault secret to pull images from a private registry. I am unable to find a way to do so. The following is the example code, assuming I have all the labels needed for Vault access on the Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      containers:
      - name: test-app-exporter
        image: index.docker.io/xxxxx/test-app-exporter:3
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: [myregistrykey--Secret From Vault]
Try using a kubelet credential provider (https://kubernetes.io/docs/tasks/kubelet-credential-provider/kubelet-credential-provider/) so that the kubelet can run a Vault executable to fetch the image pull credentials, and the kubelet/container runtime can then pull the image using those credentials.
I had the same issue. You can use an external secret with the External Secrets Operator, more info here.
If you create an external secret, you can reference the resulting Kubernetes secret in imagePullSecrets, while the actual value is stored in Vault and kept in sync. I found these links very useful:
this one about creating an external secret linked with Vault:
https://www.digitalocean.com/community/tutorials/how-to-access-vault-secrets-inside-of-kubernetes-using-external-secrets-operator-eso
this one about configuring the secret as a dockerconfigjson secret type:
https://external-secrets.io/v0.7.1/guides/common-k8s-secret-types/#dockerconfigjson-example
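Following the dockerconfigjson guide linked above, a minimal sketch of an ESO ExternalSecret that renders a kubernetes.io/dockerconfigjson secret; the store name vault-backend and the Vault key are assumptions, not from the original answer:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: regcred
spec:
  refreshInterval: 1h
  secretStoreRef:
    # Assumes a ClusterSecretStore named vault-backend pointing at Vault.
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: regcred
    template:
      type: kubernetes.io/dockerconfigjson
  data:
  - secretKey: .dockerconfigjson
    remoteRef:
      # Assumed Vault key holding a pre-built .dockerconfigjson value.
      key: secret/docker-registry
      property: .dockerconfigjson
```

The synced regcred secret can then be listed under imagePullSecrets like any manually created registry secret.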
You can use an external secret, or also write a custom CRD solution, which ideally should sync or copy the secret from Vault into a Kubernetes secret.
Option 1
You have multiple options to use External Secret.
Steps for the external secret:
helm repo add external-secrets
helm install k8s-external-secrets external-secrets/kubernetes-external-secrets
Example ref:
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: imagepull-secret
  namespace: default
spec:
  kvVersion: 1
  backendType: vault
  vaultMountPoint: kubernetes
  vaultRole: user-role
  data:
  - name: IMAGE_PULL
  ...
Option 2
I have used this CRD personally; it is an easy way to integrate Vault: Vault-CRD
Installation guide & Dockercfg
Create a secret in Vault first.
Create the vault-crd YAML file.
Example:
apiVersion: "koudingspawn.de/v1"
kind: Vault
metadata:
  name: image-secret
spec:
  path: "secret/docker-hub"
  type: "DOCKERCFG"
Apply the YAML file to the cluster. It will fetch the secret from Vault and create a new secret in Kubernetes, which you can then reference directly from a Deployment.
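Once the synced secret exists, referencing it from a Deployment is the usual imagePullSecrets stanza; a sketch using the image-secret name from the example above:

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
      - name: image-secret
```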

imagePullSecrets on default service account don't seem to work

I am basically trying to pull GCR images from an Azure Kubernetes cluster.
I have the following for my default service account:
kubectl get serviceaccounts default -o yaml
apiVersion: v1
imagePullSecrets:
- name: gcr-json-key-stg
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-12-24T03:42:15Z"
  name: default
  namespace: default
  resourceVersion: "151571"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 7f88785d-05de-4568-b050-f3a5dddd8ad1
secrets:
- name: default-token-gn9vb
If I add the same imagePullSecret to individual deployments, it works, so the secret is correct. However, when I rely on the default service account, I get an ImagePullBackOff error, which on describing the pod confirms it's a permission issue.
Am I missing something?
I have made sure that my deployment is not configured with any other specific service account, so it should be using the default one.
OK, the problem was that the default service account to which I added the imagePullSecret wasn't in the same namespace as the deployment.
Once I patched the default service account in that namespace, it works perfectly well.
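That per-namespace fix is typically applied with kubectl patch serviceaccount default -n <namespace> -p '<json>'; a small sketch that builds the patch payload (the namespace is whatever your deployment runs in):

```python
import json

def sa_pull_secret_patch(secret_name):
    """Build the JSON patch that `kubectl patch serviceaccount default -p`
    expects for adding an image pull secret to the service account."""
    return json.dumps({"imagePullSecrets": [{"name": secret_name}]})

# e.g. kubectl patch serviceaccount default -n <your-namespace> -p '<patch>'
patch = sa_pull_secret_patch("gcr-json-key-stg")
print(patch)
```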
After you add the image pull secret to the service account, you need to set that service account on your pod or deployment. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app: helloworld
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: yourPrivateRegistry/image:tag
        ports:
        - containerPort: 80
      serviceAccountName: pull-image # your service account
And the service account pull-image looks like this:
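The ServiceAccount example appears to have been lost from this copy; a minimal sketch of what pull-image presumably looks like, where the secret name regcred is an assumption:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pull-image
imagePullSecrets:
- name: regcred  # assumed name of the docker-registry secret
```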

How to deploy gitlab-runner on kubernetes and automatically register the runner?

I'm new to GitLab CI/CD. I want to deploy gitlab-runner on Kubernetes, and I use Kubernetes to create two resources:
gitlab-runner-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner
  namespace: gitlab
data:
  config.toml: |
    concurrent = 4
    [[runners]]
      name = "Kubernetes Runner"
      url = "http://my-gitlab.com/ci"
      token = "token...."
      executor = "kubernetes"
      tag = "my-runner"
      [runners.kubernetes]
        namespace = "gitlab"
        image = "busybox"
gitlab-runner-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      containers:
      - args:
        - run
        image: gitlab/gitlab-runner:v11.11.3
        imagePullPolicy: Always
        name: gitlab-runner
        volumeMounts:
        - mountPath: /etc/gitlab-runner
          name: config
        - mountPath: /etc/ssl/certs
          name: cacerts
          readOnly: true
      restartPolicy: Always
      volumes:
      - configMap:
          name: gitlab-runner
        name: config
      - hostPath:
          path: /usr/share/ca-certificates/mozilla
        name: cacerts
The problem is that after creating the two resources with kubectl apply, I can't see the runner instance at http://my-gitlab.com/admin/runners. I suspect the reason is that I have not registered the runner. If I enter the runner pod (pod/gitlab-runner-69d894d7f8-pjrxn) and register the runner manually with gitlab-runner register, I can then see the runner instance at http://my-gitlab.com/admin/runners.
So am I doing anything wrong? Or does the runner have to be registered manually inside the pod?
Thanks.
Indeed you need to explicitly register runner on the GitLab server.
For example via:
gitlab-runner register --non-interactive \
--name $RUNNER_NAME \
--url $GITLAB_URL \
--registration-token $GITLAB_REGISTRATION_TOKEN \
--executor docker \
--docker-image $DOCKER_IMAGE_BUILDER \
--tag-list $GITLAB_RUNNER_TAG_LIST \
--request-concurrency=$GITLAB_RUNNER_CONCURRENCY
You can pass most of its configuration as arguments.
If you didn't create a config.toml, it will generate one for you, including the runner token received from the server on registration.
However, as you use Kubernetes, there is a simpler way.
GitLab provides great integration with Kubernetes: all you need to do is attach your cluster once to your project/group: https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster
Then installing a runner is just a few clicks in the UI, via what they call "managed apps": https://docs.gitlab.com/ee/user/clusters/applications.html#gitlab-runner
On that last page you can find links to the Helm chart they use, so you can even use it directly yourself.
And you can see there the specific call to register:
configmap.yaml#L65
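If you would rather drive that Helm chart yourself, the install is roughly the following; charts.gitlab.io is GitLab's official chart repository, and the values.yaml file (carrying at least gitlabUrl and the registration token) is assumed to be yours:

```shell
helm repo add gitlab https://charts.gitlab.io
helm install --namespace gitlab gitlab-runner -f values.yaml gitlab/gitlab-runner
```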

Are multiple imagePullSecrets allowed and used by Kubernetes to pull an image from a private registry?

I have a private registry (gitlab) where my docker images are stored.
For deployment, a secret is created that allows GKE to access the registry; the secret is called deploy-secret.
The secret's login information expires after a short time in the registry.
I additionally created a second, permanent secret that allows access to the Docker registry, named permanent-secret.
Is it possible to specify the Pod with two secrets? For example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: deploy-secret
  - name: permanent-secret
Will Kubernetes, when trying to re-pull the image later, recognize that the first secret does not work (does not allow authentication to the private registry) and then fall back successfully to the second secret?
Surprisingly, this works! I just tried it on my cluster: I added a fake registry credentials secret with the wrong values, put both secrets in my YAML like you did (below), and the pod got created and the container is running successfully:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:
      containers:
      - image: gitlab.myapp.com/my-image:tag
        name: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred-test
      - name: regcred
The regcred secret has the correct values and regcred-test is just a bunch of gibberish, so we can see that the incorrect secret is ignored.
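That fallback behaviour can be pictured as a simple loop over the listed credentials; a sketch (not the actual kubelet code) of trying each credential in turn until one is accepted:

```python
def pull_with_fallback(image, credentials, try_pull):
    """Sketch of the observed behaviour: each credential in imagePullSecrets
    order is tried in turn, and the pull succeeds if any one of them is
    accepted by the registry."""
    last_err = None
    for cred in credentials:
        try:
            return try_pull(image, cred)
        except PermissionError as err:
            last_err = err
    raise last_err if last_err else PermissionError("no credentials matched")
```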

Kubernetes CronJob Not Correctly Using Docker Secret

I have a Kubernetes cluster on which I am trying to set up a CronJob. The CronJob is in the default namespace, and the image pull secret I want to use is in the default namespace as well. I set imagePullSecrets in the CronJob to reference the secret I use to pull images from my private Docker registry. I can verify this secret is valid, because deployments in the same cluster and namespace use it to pull images successfully. However, when the CronJob pod starts, I see the following error:
no basic auth credentials
I understand this happens when the pod doesn't have credentials to pull the image from the registry. But I am using the same secret for my deployments in the same namespace, and they pull the image successfully. Is there a difference in the configuration of imagePullSecrets between Deployments and CronJobs?
Server Version: v1.9.3
CronJob config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: default
  name: my-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: "30 13 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          imagePullSecrets:
          - name: my-secret
          containers:
          - image: my-image
            name: my-cronjob
            command:
            - my-command
            args:
            - my-args
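One easy mistake with CronJobs is putting imagePullSecrets at the wrong nesting level, since the pod spec sits two layers deeper than in a Deployment (spec.jobTemplate.spec.template.spec). A small sketch that checks the placement in a manifest loaded as a dict:

```python
# CronJob manifest as a plain dict (what a YAML loader would give you);
# only the fields relevant to image pulling are shown.
cronjob = {
    "spec": {
        "jobTemplate": {
            "spec": {
                "template": {
                    "spec": {
                        "restartPolicy": "Never",
                        "imagePullSecrets": [{"name": "my-secret"}],
                        "containers": [
                            {"image": "my-image", "name": "my-cronjob"}
                        ],
                    }
                }
            }
        }
    }
}

# The pod spec is two levels deeper than in a Deployment.
pod_spec = cronjob["spec"]["jobTemplate"]["spec"]["template"]["spec"]
assert any(
    s["name"] == "my-secret" for s in pod_spec.get("imagePullSecrets", [])
)
```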