imagePullSecrets on default service account don't seem to work - kubernetes

I am basically trying to pull GCR images from Azure kubernetes cluster.
I have the following for my default service account:
kubectl get serviceaccounts default -o yaml
apiVersion: v1
imagePullSecrets:
- name: gcr-json-key-stg
kind: ServiceAccount
metadata:
  creationTimestamp: "2019-12-24T03:42:15Z"
  name: default
  namespace: default
  resourceVersion: "151571"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 7f88785d-05de-4568-b050-f3a5dddd8ad1
secrets:
- name: default-token-gn9vb
If I add the same imagePullSecret to individual deployments, it works. So, the secret is correct. However, when I use it via the default service account, I get an ImagePullBackOff error, and describing the pod confirms that it's a permission issue.
Am I missing something?
I have made sure that my deployment is not configured with any other specific service account, so it should be using the default service account.

OK, the problem was that the default service account I added the imagePullSecret to wasn't in the same namespace as my deployment.
Once I patched the default service account in that namespace, it worked perfectly.
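For anyone hitting the same issue, the patch can be applied with a one-liner. A sketch, assuming your deployment lives in a namespace called my-namespace (substitute your own namespace and secret name):

kubectl patch serviceaccount default -n my-namespace \
  -p '{"imagePullSecrets": [{"name": "gcr-json-key-stg"}]}'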

After you add the secret for pulling the image to the service account, you need to reference that service account in your pod or deployment. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  selector:
    matchLabels:
      app: helloworld # must match the pod template labels below
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: yourPrivateRegistry/image:tag
        ports:
        - containerPort: 80
      serviceAccountName: pull-image # your service account
And the service account pull-image would look something like this (a minimal sketch; registry-credentials is a placeholder for a docker-registry secret you've created):
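apiVersion: v1
kind: ServiceAccount
metadata:
  name: pull-image
imagePullSecrets:
- name: registry-credentials # placeholder: a kubernetes.io/dockerconfigjson secret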


Kubernetes: set environment variables from file?

My Kubernetes deployment has an initContainer which fetches a token from a URL. My app container (3rd party) then needs that token as an environment variable.
A possible approach would be: the initContainer creates a Kubernetes Secret with the token value; the app container uses the secret as an environment variable via env[].valueFrom.secretKeyRef.
Creating the Secret from the initContainer requires accessing the Kubernetes API from a Pod though, which tends to be a tad cumbersome. For example, directly accessing the REST API requires granting proper permissions to the pod's service account; otherwise, creating the secret will fail with
secrets is forbidden: User "system:serviceaccount:default:default"
cannot create resource "secrets" in API group "" in the namespace "default"
So I was wondering, isn't there any way to just write the token to a file on an emptyDir volume...something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
      - name: fetch-auth-token
        image: curlimages/curl
        command:
        - /bin/sh
        args:
        - -c
        - |
          echo "Fetching token..."
          url=https://gist.githubusercontent.com/MaxHorstmann/a99823d5aff66fe2ad4e7a4e2a2ee96b/raw/662c19aa96695e52384337bdbd761056bb324e72/token
          curl $url > /auth-token/token
        volumeMounts:
        - mountPath: /auth-token
          name: auth-token
      ...
      volumes:
      - name: auth-token
        emptyDir: {}
... and then somehow use that file to populate an environment variable in the app container, similar to env[].valueFrom.secretKeyRef, along the lines of:
containers:
- name: my-actual-app
  image: thirdpartyappimage
  env:
  - name: token
    valueFrom:
      fileRef:
        path: /auth-token/token
      # ^^^^ this does not exist
  volumeMounts:
  - mountPath: /auth-token
    name: auth-token
Unfortunately, there's no env[].valueFrom.fileRef.
I considered overwriting the app container's command with a shell script which loads the environment variable from the file before launching the main command; however, the container image doesn't even contain a shell.
Is there any way to set the environment variable in the app container from a file?
Creating the Secret from the initContainer requires accessing the Kubernetes API from a Pod though, which tends to be a tad cumbersome...
It's not actually all that bad; you only need to add a ServiceAccount, Role, and RoleBinding to your deployment manifests.
The ServiceAccount manifest is minimal, and you only need it if you don't want to grant permissions to the default service account in your namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secretmaker
Then your Role grants access to secrets (we need create and delete permissions, and having get and list is handy for debugging):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: env-example
  name: secretmaker
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - get
  - delete
  - list
A RoleBinding connects the ServiceAccount to the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: env-example
  name: secretmaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secretmaker
subjects:
- kind: ServiceAccount
  name: secretmaker
  namespace: env-example # must match the namespace the ServiceAccount and Deployment live in
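Before deploying, you can sanity-check the binding by impersonating the service account. A quick check, assuming the manifests above were applied in the env-example namespace:

kubectl auth can-i create secrets \
  --as=system:serviceaccount:env-example:secretmaker \
  -n env-example

This should print yes once the Role and RoleBinding are in place.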
And with those permissions in place, the Deployment is relatively simple:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: env-example
  name: env-example
  namespace: env-example
spec:
  selector:
    matchLabels:
      app: env-example
  template:
    metadata:
      labels:
        app: env-example
    spec:
      serviceAccountName: secretmaker
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          echo "Fetching token..."
          url=https://gist.githubusercontent.com/MaxHorstmann/a99823d5aff66fe2ad4e7a4e2a2ee96b/raw/662c19aa96695e52384337bdbd761056bb324e72/token
          curl $url -o /tmp/authtoken
          kubectl delete secret authtoken > /dev/null 2>&1
          kubectl create secret generic authtoken --from-file=AUTH_TOKEN=/tmp/authtoken
        image: docker.io/alpine/k8s:1.25.6
        name: create-auth-token
      containers:
      - name: my-actual-app
        image: docker.io/alpine/k8s:1.25.6
        command:
        - sleep
        - inf
        envFrom:
        - secretRef:
            name: authtoken
The application container here is a no-op that runs sleep inf; that gives you the opportunity to inspect the environment by running:
kubectl exec -it deployment/env-example -- env
Look for the AUTH_TOKEN variable created by our initContainer.
All the manifests mentioned here can be found in this repository.

Value of Kubernetes secret in environment variable seems incorrect

I'm deploying a test application onto kubernetes on my local computer (minikube) and trying to pass database connection details into a deployment via environment variables.
I'm passing in these details using two methods - a ConfigMap and a Secret. The username (DB_USERNAME) and connection url (DB_URL) are passed via a ConfigMap, while the DB password is passed in as a secret (DB_PASSWORD).
My issue is that while the values passed via the ConfigMap are fine, the DB_PASSWORD from the secret appears jumbled, as if there's some encoding issue.
My deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        envFrom:
        - configMapRef:
            name: gweb-cm
        - secretRef:
            name: password
My ConfigMap and Secret yaml
apiVersion: v1
data:
  DB_URL: jdbc:mysql://mysql/test?serverTimezone=UTC
  DB_USERNAME: webuser
  SPRING_PROFILES_ACTIVE: prod
  SPRING_DDL_AUTO: create
kind: ConfigMap
metadata:
  name: gweb-cm
---
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Generic
data:
  DB_PASSWORD: test
Not sure if I'm missing something in my Secret definition?
The secret value should be base64 encoded. Instead of test, use the output of
echo -n 'test' | base64
P.S. the Secret's type should be Opaque, not Generic
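Putting both fixes together, the corrected Secret would look something like this (dGVzdA== is the base64 encoding of test):

apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
data:
  DB_PASSWORD: dGVzdA== # output of: echo -n 'test' | base64

Alternatively, you can use the stringData field instead of data; it accepts plain-text values and Kubernetes encodes them for you.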

Hashicorp Vault for Kubernetes - Deployment can't get secret injected

I am currently doing a PoC on Vault for K8s, but I am having some issues injecting a secret into an example application. I have created a Service Account which is associated with a role, which is then associated with a policy that allows the service account to read secrets.
I have created a secret basic-secret, which I am trying to inject to my example application. The application is then associated with a Service Account. Below you can see the code for deploying the example application (Hello World) and the service account:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-secret
  labels:
    app: basic-secret
spec:
  selector:
    matchLabels:
      app: basic-secret
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/tls-skip-verify: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/basic-secret/helloworld"
        vault.hashicorp.com/agent-inject-template-helloworld: |
          {{- with secret "secret/basic-secret/helloworld" -}}
          {
            "username" : "{{ .Data.username }}",
            "password" : "{{ .Data.password }}"
          }
          {{- end }}
        vault.hashicorp.com/role: "basic-secret-role"
      labels:
        app: basic-secret
    spec:
      serviceAccountName: basic-secret
      containers:
      - name: app
        image: jweissig/app:0.0.1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: basic-secret
  labels:
    app: basic-secret
When I describe the pod (kubectl describe pod basic-secret-7d6777cdb8-tlfsw -n vault) of the deployment I get:
Furthermore, for logs (kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent -n vault) I get:
Error from server (BadRequest): container "vault-agent" in pod "basic-secret-7d6777cdb8-tlfsw" is waiting to start: PodInitializing
I am not sure why the Vault agent is not initializing. If someone has any idea what might be the issue, I would appreciate it a lot!
Best, William.
Grab your container logs with kubectl logs pods/basic-secret-7d6777cdb8-tlfsw vault-agent-init -n vault, because the vault-agent-init container has to finish first.
Does your policy allow access to that secret (secret/basic-secret/helloworld)?
Did you create your role (basic-secret-role) in the Kubernetes auth method? During role creation you can restrict which namespaces are authorized, so that might be the problem.
But let's see those agent-init logs first.
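For reference, a policy and role that would grant this access might look like the following. This is a sketch, not the asker's actual config; it assumes the KV v1 secrets engine is mounted at secret/ and the Kubernetes auth method at auth/kubernetes:

# Policy granting read access to the secret path used in the annotations
vault policy write basic-secret-policy - <<EOF
path "secret/basic-secret/*" {
  capabilities = ["read"]
}
EOF

# Kubernetes auth role binding the service account and namespace to the policy
vault write auth/kubernetes/role/basic-secret-role \
    bound_service_account_names=basic-secret \
    bound_service_account_namespaces=vault \
    policies=basic-secret-policy \
    ttl=1h

Note that bound_service_account_namespaces must include the namespace the pod actually runs in (vault, in this case).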

Are multiple imagePullSecrets allowed and used by Kubernetes to pull an image from a private registry?

I have a private registry (gitlab) where my docker images are stored.
For deployment a secret is created that allows GKE to access the registry. The secret is called deploy-secret.
The secret's login information expires after short time in the registry.
I additionally created a second, permanent secret that allows access to the docker registry, named permanent-secret.
Is it possible to specify two secrets for the Pod? For example:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: deploy-secret
  - name: permanent-secret
Will Kubernetes, when trying to re-pull the image later, recognize that the first secret does not work (does not allow authentication to the private registry) and then fallback successfully to the second secret?
Surprisingly, this works! I just tried it on my cluster. I added a fake registry-credentials secret with the wrong values, put both secrets in my yaml like you did (below), and the pod was created with its container running successfully:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: gitlab.myapp.com/my-image:tag
        name: test
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred-test
      - name: regcred
The regcred secret has the correct values and the regcred-test is just a bunch of gibberish. So we can see that it ignores the incorrect secret.
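For anyone reproducing this, registry credential secrets like regcred can be created with kubectl create secret docker-registry (all values below are placeholders for your own registry details):

kubectl create secret docker-registry regcred \
  --docker-server=gitlab.myapp.com \
  --docker-username=<username> \
  --docker-password=<token-or-password> \
  --docker-email=<email>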

How to configure a non-default serviceAccount on a deployment

My understanding of this doc page is that I can configure service accounts for Pods, and hopefully also Deployments, so I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount its certificate into the pods of a deployment.
How do I achieve something similar like in this example for a deployment?
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
As you need to specify a podSpec in a Deployment as well, you can configure the service account in the same way. Something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      ...
  template:
    # Below is the podSpec.
    metadata:
      name: ...
    spec:
      serviceAccountName: build-robot
      automountServiceAccountToken: false
      ...
Here is a kubernetes nginx-deployment.yaml where serviceAccountName: test-sa is used as a non-default service account.
Link: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
test-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-sa
  namespace: test-ns
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test-ns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: test-sa
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
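To verify the pod actually picked up the non-default service account, something like this should work (a sketch using the file names above):

kubectl create namespace test-ns
kubectl apply -f test-sa.yaml
kubectl apply -f nginx-deployment.yaml
kubectl get pods -n test-ns -o jsonpath='{.items[0].spec.serviceAccountName}'

The last command should print test-sa.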