Kubernetes CronJob Not Correctly Using Docker Secret - kubernetes

I have a Kubernetes cluster that I am trying to set up a CronJob on. The CronJob is in the default namespace, and the image pull secret I want to use is in the default namespace as well. I set imagePullSecrets in the CronJob to reference the secret I use to pull images from my private Docker registry. I can verify this secret is valid, because deployments in the same cluster and namespace use it to pull Docker images successfully. However, when the CronJob pod starts, I see the following error:
no basic auth credentials
I understand this happens when the pod doesn't have credentials to pull the image from the Docker registry. But I am using the same secret for my deployments in the same namespace, and they successfully pull the image. Is there a difference in how imagePullSecrets is configured between Deployments and CronJobs?
Server Version: v1.9.3
CronJob config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: default
  name: my-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: "30 13 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          imagePullSecrets:
            - name: my-secret
          containers:
            - image: my-image
              name: my-cronjob
              command:
                - my-command
              args:
                - my-args

Related

docker desktop kubernetes can't pull from private registry

I'm running Docker Desktop with the built-in Kubernetes cluster. I've got an image in an on-prem GitLab instance. I create a project API key, and on the local machine I can do a docker push gitlab.myserver.com/group/project:latest, and similarly pull the image, after doing a docker login gitlab.myserver.com with the project bot username and the API key.
I create a kubernetes secret with kubectl create secret docker-registry myserver --docker-server=gitlab.myserver.com --docker-username=project_42_bot --docker-password=API_KEY
I then create a pod:
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - args:
        - data_generator.py
      image: gitlab.myserver.com/group/project:latest
      imagePullPolicy: Always
      name: foo
  imagePullSecrets:
    - name: myserver
but I get an "access forbidden" error on the pull.
Try to create a secret for registry login by running the command below:
kubectl create secret docker-registry <secret_name> --docker-server=<your.registry.domain.name> --docker-username=<user> --docker-password=<password> --docker-email=<your_email>
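For reference, this command produces a Secret of type kubernetes.io/dockerconfigjson, roughly equivalent to the manifest below (values are placeholders; the data payload is the base64 of a generated config.json holding your credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <secret_name>
type: kubernetes.io/dockerconfigjson
data:
  # base64 of {"auths":{"<your.registry.domain.name>":{"username":...,"password":...,"auth":...}}}
  .dockerconfigjson: <base64-encoded config.json>
```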
And then use this secret for deployment:
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      imagePullSecrets:
        - name: <secret_name>
Refer to the documentation and the GitHub link for more information.

Can anyone please explain how to pull private images in Kubernetes?

kubectl create secret docker-registry private-registry-key --docker-username="devopsrecipes" --docker-password="xxxxxx" --docker-email="username@example.com" --docker-server="https://index.docker.io/v1/"
secret "private-registry-key" created
This command is used for accessing private docker repos.
As referenced: http://blog.shippable.com/kubernetes-tutorial-how-to-pull-private-docker-image-pod
But I am not able to pull the image.
When I tried to access https://index.docker.io/v1/ directly, it gave a "page not found" error.
Please guide me.
You also need to reference the imagePullSecrets in the pod / deployment spec you create:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: private-registry-key
Read more about imagePullSecrets here.
I just tried creating the same on my cluster.
kubectl create secret docker-registry private-registry-key --docker-username="xx" --docker-password="xx" --docker-email="xx" --docker-server="https://index.docker.io/v1/"
Output:
secret/private-registry-key created
My YAML file looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: vaibhavjain882/ubuntubase:latest
      command: ["sleep", "30"]
  imagePullSecrets:
    - name: private-registry-key
NAME          READY   STATUS    RESTARTS   AGE
private-reg   1/1     Running   0          35s
Note: just verify that you are passing the correct Docker image name. In my case it's "vaibhavjain882/ubuntubase:latest".
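When checking image names, it also helps to know that the registry host is inferred from the first path component of the image reference. Here is a simplified sketch of that heuristic (the real resolver has more rules, e.g. a bare name with a tag like ubuntu:latest needs extra handling):

```shell
# Infer the registry host from an image reference (simplified heuristic):
# if the first path component contains "." or ":" or is "localhost",
# treat it as a registry host; otherwise Docker Hub is assumed.
registry_of() {
  first="${1%%/*}"
  case "$first" in
    *.*|*:*|localhost) echo "$first" ;;
    *) echo "index.docker.io" ;;
  esac
}

registry_of "vaibhavjain882/ubuntubase:latest"          # -> index.docker.io
registry_of "gitlab.myserver.com/group/project:latest"  # -> gitlab.myserver.com
```

This is why a Docker Hub image works with a secret whose --docker-server is https://index.docker.io/v1/, while a self-hosted registry needs a secret created for its own hostname.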

How does Kubernetes handle multiple remote Docker registries in a Pod definition using the imagePullSecrets list?

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the Pod definition should be something like this:
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: ...
  imagePullSecrets:
    - name: secret1
    - name: secret2
    - ...
    - name: secretN
Now I am not sure how Kubernetes will pick the right secret for each image. Will all secrets be tried one by one each time? How does Kubernetes handle the failed attempts? And could a certain number of unauthorized retries lead to some lock state in Kubernetes or the Docker registries?
Thanks
You can use the following script to combine authentication for two registries in one secret:
#!/bin/bash
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)

u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)

cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF

base64 -w0 docker_config.json > docker_config_b64.json

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
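Each "auth" value in the generated config.json is just the base64 of user:password, as the script computes. A quick round-trip sanity check of that encoding (hypothetical credentials, GNU base64 where -w0 disables line wrapping):

```shell
# "auth" fields in config.json are base64("user:password"); verify the round trip.
auth=$(printf '%s' 'user_1_here:password_1_here' | base64 -w0)
printf '%s' "$auth" | base64 -d   # prints user_1_here:password_1_here
```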
Kubernetes isn't going to try all the secrets until it finds the correct one. When you create the secret, you declare that it is for a Docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
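Decoding that .dockerconfigjson payload makes the structure visible: the auths object is keyed by registry hostname, which is what the kubelet matches against an image's registry host.

```shell
# Decode the example secret's .dockerconfigjson payload (from the output above).
# The "auths" keys are registry hostnames; credentials are looked up by host.
echo 'eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=' | base64 -d
# -> {"auths":{"docker.example.com":{"username":"kube","password":"PW_STRING","email":"my@email.com","auth":"a3ViZTpQV19TVFJJTkc="}}}
```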
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So, when you reference your container image as magic.example.com/magic-image in your YAML, Kubernetes has enough information to connect the dots and use the right secret to pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
    - name: user1-secret
    - name: user2-secret
  containers:
    - name: jenkins
      image: user1/jenkins
      imagePullPolicy: Always
    - name: busybox
      image: user2/busybox
      imagePullPolicy: Always
So, as this example describes, it's possible to have two or more Docker registry secrets with the same --docker-server value. Kubernetes will take care of it seamlessly.
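Conceptually, the merge described in the docs quoted above produces one credential map keyed by registry host; for two secrets pointing at different registries, the virtual .docker/config.json would look roughly like this (hostnames are illustrative):

```json
{
  "auths": {
    "registry-1.example.com": { "auth": "<base64 of user1:password1>" },
    "registry-2.example.com": { "auth": "<base64 of user2:password2>" }
  }
}
```

Lookup happens by the image's registry host, so no trial-and-error across different registries is needed; and when several credentials match the same host, the kubelet can try each matching credential in turn, which is consistent with the last answer on this page, where a bogus and a valid secret coexist.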

How to deploy gitlab-runner on kubernetes and automatically register the runner?

I'm new to GitLab CI/CD. I want to deploy gitlab-runner on Kubernetes, and I use Kubernetes to create two resources:
gitlab-runner-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner
  namespace: gitlab
data:
  config.toml: |
    concurrent = 4

    [[runners]]
      name = "Kubernetes Runner"
      url = "http://my-gitlab.com/ci"
      token = "token...."
      executor = "kubernetes"
      tag = "my-runner"
      [runners.kubernetes]
        namespace = "gitlab"
        image = "busybox"
gitlab-runner-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gitlab-runner
  namespace: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner
  template:
    metadata:
      labels:
        name: gitlab-runner
    spec:
      containers:
        - args:
            - run
          image: gitlab/gitlab-runner:v11.11.3
          imagePullPolicy: Always
          name: gitlab-runner
          volumeMounts:
            - mountPath: /etc/gitlab-runner
              name: config
            - mountPath: /etc/ssl/certs
              name: cacerts
              readOnly: true
      restartPolicy: Always
      volumes:
        - configMap:
            name: gitlab-runner
          name: config
        - hostPath:
            path: /usr/share/ca-certificates/mozilla
          name: cacerts
The problem is that after creating the two resources with kubectl apply, I can't see the runner instance at http://my-gitlab.com/admin/runners. I suspect the reason is that I have not registered the runner. I entered the runner pod (pod/gitlab-runner-69d894d7f8-pjrxn) and registered the runner manually with gitlab-runner register; after that I could see the runner instance at http://my-gitlab.com/admin/runners.
So did I do anything wrong? Or does the runner have to be registered manually inside the pod?
Thanks.
Indeed, you need to explicitly register the runner on the GitLab server.
For example via:
gitlab-runner register --non-interactive \
--name $RUNNER_NAME \
--url $GITLAB_URL \
--registration-token $GITLAB_REGISTRATION_TOKEN \
--executor docker \
--docker-image $DOCKER_IMAGE_BUILDER \
--tag-list $GITLAB_RUNNER_TAG_LIST \
--request-concurrency=$GITLAB_RUNNER_CONCURRENCY
You can pass most of its configuration as arguments.
If you didn't create config.toml, it will generate it for you, including the runner token received from the server on registration.
However, as you use Kubernetes, there is a simpler way.
GitLab provides great integration with Kubernetes; all you need to do is attach your cluster once to your project/group: https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster
And then installing a runner is just a few clicks in the UI, via what they call "managed apps": https://docs.gitlab.com/ee/user/clusters/applications.html#gitlab-runner
On this last page you can find links to the Helm chart that they use.
So you can even use it directly yourself.
And there you can see the specific call to register:
configmap.yaml#L65

Are multiple imagePullSecrets allowed and used by Kubernetes to pull an image from a private registry?

I have a private registry (gitlab) where my docker images are stored.
For deployment, a secret is created that allows GKE to access the registry. The secret is called deploy-secret.
The secret's login information expires after a short time.
I additionally created a second, permanent secret that allows access to the Docker registry, named permanent-secret.
Is it possible to specify the Pod with two secrets? For example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: deploy-secret
    - name: permanent-secret
Will Kubernetes, when trying to re-pull the image later, recognize that the first secret does not work (does not allow authentication to the private registry) and then fall back successfully to the second secret?
Surprisingly, this works! I just tried it on my cluster. I added a fake registry-credentials secret with the wrong values, put both secrets in my YAML like you did (below), and the pod got created and the container is running successfully:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector: {}
      containers:
        - image: gitlab.myapp.com/my-image:tag
          name: test
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred-test
        - name: regcred
The regcred secret has the correct values, and regcred-test is just a bunch of gibberish. So we can see that Kubernetes ignores the incorrect secret.