Why can't a k8s pod find a key in a ConfigMap?

I'm having an issue with a Kubernetes pod that uses a ConfigMap. My pod fails to start, with the following error:
Warning Failed 10s (x7 over 2m16s) kubelet, docker-desktop Error: Couldn't find key URL in ConfigMap default/env-config
I created my ConfigMap as follows:
kubectl create configmap env-config --from-file env-config.yaml
This is my ConfigMap:
NAME         DATA   AGE
env-config   1      5m38s
Nates-MacBook-Pro:k8s natereed$ kubectl describe configmap env-config
Name:         env-config
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
env-config.yaml:
----
apiVersion: v1
kind: ConfigMap
data:
  AWS_BUCKET: mybucket
  AWS_PROFILE: dev
  AWS_REGION: us-east-2
  JWT_SECRET: foo
  POSTGRESS_DB: <mydb>
  POSTGRESS_HOST: <my host>
  URL: http://localhost:8100
metadata:
  name: env-config
It looks like the command I used to create the ConfigMap is wrong? It's not clear to me why it creates a map with a single key, "env-config.yaml".
The YAML file looks like this:
apiVersion: v1
kind: ConfigMap
data:
  AWS_BUCKET: mybucket
  AWS_PROFILE: dev
  AWS_REGION: us-east-2
  JWT_SECRET: foo
  POSTGRESS_DB: mydb
  POSTGRESS_HOST: postgreshost
  URL: http://localhost:8100
metadata:
  name: env-config
  namespace: default

I'd say the issue occurred because you are passing a full ConfigMap YAML definition as the argument to --from-file.
You could simply create it using:
kubectl create -f env-config.yaml
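For reference, once the ConfigMap has been created with kubectl create -f, a container can consume the URL key the way the error message implies. A minimal sketch of the relevant pod spec fragment (the container name and image are placeholders, not taken from the question):

spec:
  containers:
    - name: my-app              # placeholder name
      image: my-app:latest      # placeholder image
      env:
        - name: URL
          valueFrom:
            configMapKeyRef:
              name: env-config  # the ConfigMap created above
              key: URL          # the key the kubelet could not find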
Besides that, if you would like to create it using --from-file, you can define your file with only the parameters that you need. It would look something like this:
File name: env-config
AWS_PROFILE: dev
AWS_REGION: us-east-2
JWT_SECRET: foo
POSTGRESS_DB: <mydb>
POSTGRESS_HOST: <my host>
URL: http://localhost:8100
And then you can create the ConfigMap in the way you were doing before:
kubectl create configmap env-config --from-file env-config
This would create a ConfigMap like the following (kubectl describe configmap env-config):
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
env-config:
----
AWS_BUCKET: mybucket
AWS_PROFILE: dev
AWS_REGION: us-east-2
JWT_SECRET: foo
POSTGRESS_DB: <mydb>
POSTGRESS_HOST: <my host>
URL: http://localhost:8100

Events:  <none>
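Note that with --from-file the whole file still ends up as the value of a single key (env-config in the output above). If the goal is one ConfigMap entry per variable, so that a pod can reference the key URL directly, kubectl also supports --from-env-file with KEY=value lines. A minimal sketch, assuming the variables are saved in a hypothetical file named env-config.env:

# env-config.env contains one KEY=value pair per line, e.g.:
#   AWS_BUCKET=mybucket
#   URL=http://localhost:8100
kubectl create configmap env-config --from-env-file=env-config.env
kubectl get configmap env-config -o yaml   # each variable should now be its own key under .data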
Here you can find some useful information:
Create ConfigMaps from files
Define container environment variables using ConfigMap data

So you've got things a little mixed up. What you have there is a config map with one key named env-config.yaml, the value of which is a string containing YAML data for a config map with a bunch of keys, including URL. I'm guessing you tried using kubectl create cm --from-file instead of kubectl apply -f?

Related

How can you use a private gitlab container registry to pull an image in kubernetes?

I have a private docker registry hosted on gitlab and I would like to use this repository to pull images for my local kubernetes cluster:
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   68m
K8s is on v1.22.5 and is the single-node cluster that comes 'out of the box' with Docker Desktop. I have already built and pushed an image to the GitLab container registry, registry.gitlab.com. What I have done so far:
Executed the command docker login -u <username> -p <password> registry.gitlab.com
Modified the ~/.docker/config.json file to the following:
{
    "auths": {
        "registry.gitlab.com": {}
    },
    "credsStore": "osxkeychain"
}
Created and deployed a secret to the cluster with the file:
apiVersion: v1
kind: Secret
metadata:
  name: registry-key
data:
  .dockerconfigjson: <base-64-encoded-.config.json-file>
type: kubernetes.io/dockerconfigjson
Deployed an app with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      imagePullSecrets:
        - name: registry-key
      containers:
        - name: test-app
          image: registry.gitlab.com/<image-name>:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
The deployment is created successfully but upon inspection of the pod (kubectl describe pod) I find the following events:
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  21s               default-scheduler  Successfully assigned default/test-deployment-87b5747b5-xdsl9 to docker-desktop
  Normal   BackOff    19s               kubelet            Back-off pulling image "registry.gitlab.com/<image-name>:latest"
  Warning  Failed     19s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    7s (x2 over 20s)  kubelet            Pulling image "registry.gitlab.com/<image-name>:latest"
  Warning  Failed     7s (x2 over 19s)  kubelet            Failed to pull image "registry.gitlab.com/<image-name>:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://registry.gitlab.com/v2/<image-name>/manifests/latest": denied: access forbidden
  Warning  Failed     7s (x2 over 19s)  kubelet            Error: ErrImagePull
Any information about what might be causing these errors would be appreciated.
I managed to solve the issue by editing the default config.json produced by $ docker login:
{
    "auths": {
        "registry.gitlab.com": {}
    },
    "credsStore": "osxkeychain"
}
becomes
{
    "auths": {
        "registry.gitlab.com": {
            "auth": "<access-token-in-plain-text>"
        }
    }
}
Thanks Bala for suggesting this in the comments. I realise storing the access token in plain text in the file may not be secure but this can be changed to use a path if needed.
I also created the secret as per OzzieFZI's suggestion:
$ kubectl create secret docker-registry registry-key \
--docker-server=registry.gitlab.com \
--docker-username=<username> \
--docker-password="$(cat /path/to/token.txt)"
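To double-check that what the cluster ends up with matches the working config.json, the stored dockerconfigjson can be decoded and inspected; a small sketch, using the secret name registry-key from the question:

kubectl get secret registry-key -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d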
What password do you use?
Confirm that you are using a Personal Access Token with read/write access to the container registry. Your username should be your GitLab username.
I would suggest creating the docker registry secret using kubectl and a txt file with the token as its content; this way you do not have to encode the dockerconfigjson yourself. Here is an example.
$ kubectl create secret docker-registry registry-key \
--docker-server=registry.gitlab.com \
--docker-username=<username> \
--docker-password="$(cat /path/to/token.txt)"
See documentation on the command here
Here's something a bit more detailed in case anyone is having problems with this. Also, GitLab has introduced deploy tokens (from the Repository -> Deploy tokens tab), which means you do not need to use personal access tokens.
Create auth to put in secret resource
#!/bin/bash
# Usage: ./create_registry_secret.sh <GITLAB_DEPLOY_TOKEN>
if [ "$#" -ne 1 ]; then
  printf "Invalid number of arguments\n" >&2
  printf "./create_registry_secret.sh <GITLAB_DEPLOY_TOKEN>\n" >&2
  exit 1
fi

# Template for the .dockerconfigjson payload
secret_gen_string='{"auths":{"https://registry.gitlab.com":{"username":"{{USER}}","password":"{{TOKEN}}","email":"{{EMAIL}}","auth":"{{SECRET}}"}}}'
gitlab_user=<YOUR_DEPLOY_TOKEN_USER>
gitlab_token=$1
gitlab_email=<YOUR_EMAIL_OR_WHATEVER>
# The "auth" field is base64("user:token")
gitlab_secret=$(echo -n "$gitlab_user:$gitlab_token" | base64 -w 0)

# Fill in the template, then base64-encode the whole JSON for the Secret's data field
echo -n "$secret_gen_string" \
  | sed "s/{{USER}}/$gitlab_user/" \
  | sed "s/{{TOKEN}}/$gitlab_token/" \
  | sed "s/{{EMAIL}}/$gitlab_email/" \
  | sed "s/{{SECRET}}/$gitlab_secret/" \
  | base64 -w 0
Use the output of the script in secret resource
# A secret to pull container from gitlab registry
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: gitlab-pull-secret
data:
  .dockerconfigjson: <GENERATED_SECRET>
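A possible way to wire the two together, assuming the deploy token is exported as GITLAB_DEPLOY_TOKEN and the Secret above is saved as gitlab-pull-secret.yaml (both names are assumptions):

# Generate the value, substitute it into the Secret manifest, then apply it
GENERATED_SECRET=$(./create_registry_secret.sh "$GITLAB_DEPLOY_TOKEN")
sed "s|<GENERATED_SECRET>|$GENERATED_SECRET|" gitlab-pull-secret.yaml | kubectl apply -f -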
Reference the secret in container definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-test-deployment
  labels:
    app.kubernetes.io/name: gitlab-test
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: gitlab-test
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gitlab-test
    spec:
      containers:
        - name: my-gitlab-container
          image: registry.gitlab.com/group/project/image:tag
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
      # Include the authentication for gitlab container registry
      imagePullSecrets:
        - name: gitlab-pull-secret

Kubernetes service account secret is not listed

I created a secret of type service-account-token using the manifest below. The secret got created, but when I run kubectl get secrets, the service account secret is not listed. Where am I going wrong?
apiVersion: v1
kind: Secret
metadata:
  name: secret-sa-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  extra: YmFyCg==
kubectl create -f sa-secret.yaml
secret/secret-sa-sample created
It might have been created in the default namespace.
Specify the namespace explicitly using the -n $NS argument to kubectl.
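A quick way to find where it landed (the secret name is the one from the manifest above):

# List secrets in every namespace and filter for the one that was just created
kubectl get secrets --all-namespaces | grep secret-sa-sample
# Then inspect it in the namespace it turned up in
kubectl get secret secret-sa-sample -n <that-namespace>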

How can I determine whether Kubernetes is using authentication for an image repository?

I'm trying to investigate why a pod has a status of ImagePullBackOff.
If I kubectl describe the pod, I see an event listed:
Warning  Failed  5m42s (x4 over 7m2s)  kubelet  Failed to pull image
"**********************":
rpc error: code = Unknown desc = Error response from daemon:
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials. To authenticate your
request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
This is not expected, as I have docker authentication set for the default service account via a secret, as mentioned here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account
How can I determine whether it's using the correct authentication so I can further debug this issue?
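One way to see which credentials the pod would actually use is to check what the default service account references and then decode that secret; a small sketch, assuming the pod runs in the default namespace under the default service account:

# Which pull secrets does the service account reference?
kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets}'
# Decode the referenced secret and compare it with a known-good docker config
kubectl get secret <that-secret-name> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d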
Not really an answer to the question but a solution in my case:
Seems there is something wrong with kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}', as it does not seem to do anything...
Instead I edited the service account resource directly, with no patch...
Demonstrated here:
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl describe serviceaccount default
Name:                default
Namespace:           app-1
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-tqp58
Tokens:              default-token-tqp58
Events:              <none>
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl get serviceaccount -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2020-09-17T15:50:34Z"
    name: default
    namespace: app-1
    resourceVersion: "111538"
    selfLink: /api/v1/namespaces/app-1/serviceaccounts/default
    uid: 5fe21574-67bf-485c-b9aa-d09c1fe3350c
  secrets:
  - name: default-token-tqp58
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch -n app-1 serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
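For what it's worth, the patch above is likely a no-op because the field name is case-sensitive: the ServiceAccount field is imagePullSecrets, and an unknown lowercase key is silently ignored by the merge patch. A hedged sketch of the same patch with the correct casing (secret name and namespace taken from the output above):

kubectl patch serviceaccount default -n app-1 \
  -p '{"imagePullSecrets": [{"name": "gcp-cr-read-access"}]}'
kubectl describe serviceaccount default -n app-1   # "Image pull secrets" should now list gcp-cr-read-access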

kubectl apply -f works on PC but not in Gitlab Runner

I am trying to deploy to kubernetes using Gitlab CICD. No matter what I do, kubectl apply -f helloworld-deployment.yml --record in my .gitlab-ci.yml always returns that the deployment is unchanged:
$ kubectl apply -f helloworld-deployment.yml --record
deployment.apps/helloworld-deployment unchanged
Even if I change the tag on the image, or if the deployment doesn't exist at all. However, if I run kubectl apply -f helloworld-deployment.yml --record from my own computer, it works fine: it updates when a tag changes and creates the deployment when no deployment exists. Below is the .gitlab-ci.yml that I'm testing with:
image: docker:dind
services:
  - docker:dind

stages:
  - deploy

deploy-prod:
  stage: deploy
  image: google/cloud-sdk
  environment: production
  script:
    - kubectl apply -f helloworld-deployment.yml --record
Below is helloworld-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: registry.gitlab.com/repo/helloworld:test
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: regcred
Update:
This is what I see if I run kubectl rollout history deployments/helloworld-deployment and there is no existing deployment:
Error from server (NotFound): deployments.apps "helloworld-deployment" not found
If the deployment already exists, I see this:
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=helloworld-deployment.yml --record=true
With only one revision.
I did notice this time that when I changed the tag, the output from my Gitlab Runner was:
deployment.apps/helloworld-deployment configured
However, there were no new pods. When I ran it from my PC, then I did see new pods created.
Update:
Running kubectl get pods in the Gitlab runner shows two different pods from what I see on my PC.
I definitely only have one kubernetes cluster, but kubectl config view shows some differences (the server url is the same). The output for contexts shows different namespaces. Does this mean I need to set a namespace either in my yml file or pass it in the command? Here is the output from the Gitlab runner:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
  name: gitlab-deploy
contexts:
- context:
    cluster: gitlab-deploy
    namespace: helloworld-16393682-production
    user: gitlab-deploy
  name: gitlab-deploy
current-context: gitlab-deploy
kind: Config
preferences: {}
users:
- name: gitlab-deploy
  user:
    token: [MASKED]
And here is the output from my PC:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: URL
contexts:
- context:
    cluster: do-nyc3-helloworld
    user: do-nyc3-helloworld-admin
  name: do-nyc3-helloworld
current-context: do-nyc3-helloworld
kind: Config
preferences: {}
users:
- name: do-nyc3-helloworld-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - VALUE
      command: doctl
      env: null
It looks like Gitlab adds their own default for namespace:
<project_name>-<project_id>-<environment>
Because of this, I put this in the metadata section of helloworld-deployment.yml:
namespace: helloworld-16393682-production
And then it worked as expected. It was deploying before, but kubectl get pods didn't show it since that command was using the default namespace.
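For reference, a minimal sketch of where that line sits in the manifest (the namespace value is the GitLab-generated one shown above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
  namespace: helloworld-16393682-production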
Since GitLab uses a custom namespace, you need to add a namespace flag to your command to display your pods:
kubectl get pods -n helloworld-16393682-production
You can set the default namespace for kubectl commands. See here.
You can permanently save the namespace for all subsequent kubectl commands in that context.
In your case it could be:
kubectl config set-context --current --namespace=helloworld-16393682-production
Or, if you are using multiple clusters, you can switch between contexts using:
kubectl config use-context <context-name>
In this link you can see a lot of useful commands and configurations.
I hope it helps! =)

How to authenticate and access Kubernetes cluster for devops pipeline?

Normally you'd do ibmcloud login ⇒ ibmcloud ks cluster-config mycluster ⇒ copy and paste the export KUBECONFIG= and then you can run your kubectl commands.
But if this were being done for some automated devops pipeline outside of IBM Cloud, what is the method for getting authenticating and getting access to the cluster?
You should not copy your kubeconfig to the pipeline. Instead you can create a service account with permissions to a particular namespace and then use its credentials to access the cluster.
What I do is create a service account and role binding like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-tez-dev # account name
  namespace: tez-dev # namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tez-dev-full-access # role
  namespace: tez-dev
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods", "services"] # resources to which permissions are granted
  verbs: ["*"] # what actions are allowed
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tez-dev-view
  namespace: tez-dev
subjects:
- kind: ServiceAccount
  name: gitlab-tez-dev
  namespace: tez-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tez-dev-full-access
Then you can get the token for the service account using:
kubectl describe secrets -n <namespace> gitlab-tez-dev-token-<value>
The output:
Name: gitlab-tez-dev-token-lmlwj
Namespace: tez-dev
Labels: <none>
Annotations: kubernetes.io/service-account.name: gitlab-tez-dev
kubernetes.io/service-account.uid: 5f0dae02-7b9c-11e9-a222-0a92bd3a916a
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1042 bytes
namespace: 7 bytes
token: <TOKEN>
In the above command, <namespace> is the namespace in which you created the account, and <value> is the unique suffix that you will see when you do
kubectl get secret -n <namespace>
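If you'd rather script this than copy the token out of the describe output, it can be extracted directly; a small sketch, using the secret name from the example output above (the token is base64-encoded inside the Secret, hence the decode):

kubectl get secret -n tez-dev gitlab-tez-dev-token-lmlwj \
  -o jsonpath='{.data.token}' | base64 -d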
Copy the token to your pipeline environment variables or configuration and then you can access it in the pipeline. For example, in gitlab I do (only the part that is relevant here):
k8s-deploy-stage:
  stage: deploy
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  only:
    refs:
      - dev
  script:
    ######## CREATE THE KUBECFG ##########
    - kubectl config set-cluster ${K8S_CLUSTER_NAME} --server=${K8S_URL}
    - kubectl config set-credentials gitlab-tez-dev --token=${TOKEN}
    - kubectl config set-context tez-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-tez-dev --namespace=tez-dev
    - kubectl config use-context tez-dev-context
    ####### NOW COMMANDS WILL BE EXECUTED AS THE SERVICE ACCOUNT #########
    - kubectl apply -f deployment.yml
    - kubectl apply -f service.yml
    - kubectl rollout status -f deployment.yml
The KUBECONFIG environment variable is a list of paths to Kubernetes configuration files that define one or more (switchable) contexts for kubectl (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
Copy your Kubernetes configuration file to your pipeline agent (~/.kube/config by default) and optionally set the KUBECONFIG environment variable. If you have different contexts in your config file, you may want to remove the ones you don't need in your pipeline before copying it, or switch contexts using kubectl config use-context.
Everything you need to connect to your kube API server is inside that config: certs, tokens, etc.
If you don't want to copy a token into a file or want to use the API to automate the retrieval of the token, you can also execute some POST commands in order to programmatically retrieve your user token.
The full docs for this are here: https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install#kube_api
The key piece is retrieving your id token with the POST https://iam.bluemix.net/identity/token call.
The body will return an id_token that you can use in your Kubernetes API calls.
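As a rough illustration of that flow (the endpoint is the one named above; the grant type and field names follow IBM Cloud IAM's API-key grant and are best double-checked against the linked docs):

# Hedged sketch: exchange an IBM Cloud API key for an IAM token
curl -s -X POST "https://iam.bluemix.net/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
  --data-urlencode "apikey=${IBMCLOUD_API_KEY}"
# The JSON response contains the id_token mentioned above; pass it as a Bearer token, e.g.
#   curl -H "Authorization: Bearer <id_token>" https://<cluster-master-url>/api/v1/namespaces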