Zalando postgres operator issue with config - postgresql

Getting the following error with the Zalando Postgres operator. The default manifests were applied on a Kubernetes cluster (hosted on-prem) as provided here:
https://github.com/zalando/postgres-operator/tree/4a099d698d641b80c5aeee5bee925921b7283489/manifests
I checked the operator name, the ConfigMaps, and the service-account definitions for issues, but couldn't figure out much.
kubectl logs -f postgres-operator-944b9d484-9h796
2019/10/24 16:31:02 Spilo operator v1.2.0
2019/10/24 16:31:02 Fully qualified configmap name: default/postgres-operator
panic: configmaps "postgres-operator" is forbidden: User "system:serviceaccount:default:zalando-postgres-operator" cannot get resource "configmaps" in API group "" in the namespace "default"
goroutine 1 [running]:
github.com/zalando/postgres-operator/pkg/controller.(*Controller).initOperatorConfig(0xc0004a6000)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:102 +0x687
github.com/zalando/postgres-operator/pkg/controller.(*Controller).initController(0xc0004a6000)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:253 +0x825
github.com/zalando/postgres-operator/pkg/controller.(*Controller).Run(0xc0004a6000, 0xc000464660, 0xc000047a70)
/root/go/src/github.com/zalando/postgres-operator/pkg/controller/controller.go:348 +0x2f
main.main()
/workspace/cmd/main.go:82 +0x256
Any help here?

I have set up postgres-operator in my environment and it is working perfectly in my case. Please make sure that you have followed these steps:
Clone postgres-operator repo:
$ git clone https://github.com/zalando/postgres-operator
$ cd postgres-operator
The Zalando operator can be configured in two ways: using a classic ConfigMap, or using a CRD configuration object, which is more powerful. First, create the service account, cluster role, and binding:
$ kubectl create -f manifests/operator-service-account-rbac.yaml
serviceaccount/zalando-postgres-operator created
clusterrole.rbac.authorization.k8s.io/zalando-postgres-operator created
clusterrolebinding.rbac.authorization.k8s.io/zalando-postgres-operator created
In order to use the CRD configuration, you must change the operator deployment itself. Edit the last few lines of manifests/postgres-operator.yaml so they read:
env:
  # provided additional ENV vars can overwrite individual config map entries
  # - name: CONFIG_MAP_NAME
  #   value: "postgres-operator"
  # In order to use the CRD OperatorConfiguration instead, uncomment these lines and comment out the two lines above
  - name: POSTGRES_OPERATOR_CONFIGURATION_OBJECT
    value: postgresql-operator-default-configuration
The service account name in that configuration file does not match the one created by the operator service-account definition, so you must adjust it, and you must create the configuration object that is referenced. It lives in manifests/postgresql-operator-default-configuration.yaml. These are the values that must be set:
configuration:
  kubernetes:
    pod_environment_configmap: postgres-pod-config
    pod_service_account_name: zalando-postgres-operator
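For reference, a minimal sketch of what the full object in manifests/postgresql-operator-default-configuration.yaml could look like; the apiVersion and kind shown here are assumptions based on the operator's CRD, so double-check them against the file shipped in the repo:
# Sketch only - verify apiVersion/kind against the manifest in the repository.
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-default-configuration
configuration:
  kubernetes:
    pod_environment_configmap: postgres-pod-config
    pod_service_account_name: zalando-postgres-operator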
Let's create the operator and its configuration.
$ kubectl create -f manifests/postgres-operator.yaml
deployment.apps/postgres-operator created
Please wait a few minutes before typing the following command:
$ kubectl create -f postgresql-operator-default-configuration.yaml
operatorconfiguration.acid.zalan.do/postgresql-operator-default-configuration created
Now you will be able to see your pod running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-operator-599fd68d95-c8z67 1/1 Running 0 21m
You can also refer to this article; I hope it helps you.

Related

Helm 3 Deployment Order of Kubernetes Service Catalog Resources

I am using Helm v3.3.0 with Kubernetes 1.16.
The cluster has the Kubernetes Service Catalog installed, so external services implementing the Open Service Broker API spec can be instantiated as K8S resources - as ServiceInstances and ServiceBindings.
ServiceBindings reflect as K8S Secrets and contain the binding information of the created external service. These secrets are usually mapped into the Docker containers as environment variables or volumes in a K8S Deployment.
Now I am using Helm to deploy my Kubernetes resources, and I read here that...
The [Helm] install order of Kubernetes types is given by the enumeration InstallOrder in kind_sorter.go
In that file, the order mentions neither ServiceInstance nor ServiceBinding as resources, which would mean that Helm installs these resource types after everything in its InstallOrder list - in particular after Deployments. That seems to match the output of helm install --dry-run --debug run on my chart, where the order indicates that the K8S Service Catalog resources are applied last.
Question: What I cannot understand is why my Deployment does not fail to install with Helm.
After all, my Deployment seems to be deployed before the ServiceBinding is, and it is the Secret generated from the ServiceBinding that my Deployment references. I would expect the install to fail, since the Secret is not there yet when the Deployment is getting installed. But that is not the case.
Is that just a timing glitch / lucky coincidence, or is this something I can rely on, and why?
Thanks!
As said in the comment I posted:
In fact your Deployment is failing at the start with Status: CreateContainerConfigError. Your Deployment is created before the Secret from the ServiceBinding; it only starts working once the Secret from the ServiceBinding is available.
I wanted to give more insight, with an example, into why the Deployment didn't fail.
What is happening (simplified in order):
Deployment -> created and spawned a Pod
Pod -> fails with status CreateContainerConfigError due to the missing Secret
ServiceBinding -> creates the Secret in the background
Pod gets the required Secret and starts
The previously mentioned InstallOrder leaves ServiceInstance and ServiceBinding until last, per the comment on line 147.
Example
Assuming that:
There is a working Kubernetes cluster
Helm3 installed and ready to use
Following guides:
Kubernetes.io: Install Service Catalog using Helm
Magalix.com: Blog: Kubernetes Service Catalog
There is a Helm chart with following files in templates/ directory:
ServiceInstance
ServiceBinding
Deployment
Files:
ServiceInstance.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-instance
spec:
  clusterServiceClassExternalName: redis
  clusterServicePlanExternalName: 5-0-4
ServiceBinding.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
spec:
  instanceRef:
    name: example-instance
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu
          command:
            - sleep
            - "infinity"
          # part below responsible for getting secret as env variable
          env:
            - name: DATA
              valueFrom:
                secretKeyRef:
                  name: example-binding
                  key: host
Applying the above resources to check what is happening can be done in two ways:
First method is to use timestamp from $ kubectl get RESOURCE -o yaml
Second method is to use $ kubectl get RESOURCE --watch-only=true
First method
As said previously the Pod from the Deployment couldn't start as Secret was not available when the Pod tried to spawn. After the Secret was available to use, the Pod started.
The statuses this Pod had were the following:
Pending
ContainerCreating
CreateContainerConfigError
Running
This is a table with timestamps of Pod and Secret:
| Pod | Secret |
|-------------------------------------------|-------------------------------------------|
| creationTimestamp: "2020-08-23T19:54:47Z" | - |
| - | creationTimestamp: "2020-08-23T19:54:55Z" |
| startedAt: "2020-08-23T19:55:08Z" | - |
You can get these timestamps by invoking the commands below:
$ kubectl get pod pod_name -n namespace -o yaml
$ kubectl get secret secret_name -n namespace -o yaml
You can also get additional information with:
$ kubectl get event -n namespace
$ kubectl describe pod pod_name -n namespace
Second method
This method requires preparation before running the Helm chart. Open additional terminal windows (two in this particular case) and run:
$ kubectl get pod -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
$ kubectl get secret -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
After that apply your Helm chart.
Disclaimer!
The above commands will watch for changes in resources and display them with a timestamp from the OS. Please remember that these commands are for example purposes only.
The output for Pod:
21:54:47:534823000 NAME READY STATUS RESTARTS AGE
21:54:47:542107000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:553799000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:655593000 ubuntu-65976bb789-l48wz 0/1 ContainerCreating 0 0s
-> 21:54:52:001347000 ubuntu-65976bb789-l48wz 0/1 CreateContainerConfigError 0 4s
21:55:09:205265000 ubuntu-65976bb789-l48wz 1/1 Running 0 22s
The output for Secret:
21:54:47:385714000 NAME TYPE DATA AGE
21:54:47:393145000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:47:719864000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:51:182609000 understood-squid-redis Opaque 1 0s
21:54:52:001031000 understood-squid-redis Opaque 1 0s
-> 21:54:55:686461000 example-binding Opaque 6 0s
Additional resources:
Stackoverflow.com: Answer: Helm install in certain order
Alibabacloud.com: Helm charts and templates hooks and tests part 3
So to answer my own question (and thanks to #dawid-kruk and the folks on the Service Catalog SIG on Slack):
In fact, the initial start of my Pods (the ones referencing the Secret created out of the ServiceBinding) fails! It fails because the Secret is actually not there the moment K8S tries to start the pods.
Kubernetes has a self-healing mechanism, in the sense that it tries (and retries) to reach the target state of the cluster as described by the various deployed resources.
By Kubernetes retrying to get the pods running, eventually (when the Secret is finally there) all conditions will be satisfied for the pods to start up nicely. Therefore, eventually, everything is running as it should.
How could this be streamlined? One possibility would be for Helm to include the custom resources ServiceBinding and ServiceInstance into its ordered list of installable resources and install them early in the installation phase.
But even without that, Kubernetes actually deals with it just fine. The order of installation (in this case) really does not matter. And that is a good thing!

How to submit a kubectl job and pass the user as runas

I have a container that I want to run on Kubernetes, let's say image1.
When I run kubectl apply -f somePod.yml (which runs image1), how can I start the image as the user who ran the kubectl command?
It's not possible by design. Please find the explanation below:
In most cases Jobs create Pods, so I use Pods in my explanation. For Jobs it just means a slightly different YAML file.
$ kubectl explain job.spec.
$ kubectl explain job.spec.template.spec
Users run kubectl using user accounts, while Pods run using service accounts. There is no way to run a Pod "from a user account".
Note: in recent versions spec.ServiceAccount was replaced by spec.serviceAccountName
However, you can use user account credentials by running kubectl inside a Pod's container or making curl requests to Kubernetes api-server from inside a pod container.
Using Secrets is the most convenient way to do that.
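For instance, a rough sketch (the secret name, mount path, and image are illustrative, not from the original answer): store a kubeconfig in a Secret with kubectl create secret generic my-kubeconfig --from-file=config=$HOME/.kube/config, then mount it so kubectl inside the container uses those user credentials:
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-client
spec:
  containers:
    - name: kubectl
      image: bitnami/kubectl:latest   # any image that contains kubectl
      command: ["sleep", "infinity"]
      env:
        - name: KUBECONFIG            # kubectl reads its configuration from this path
          value: /etc/kubeconfig/config
      volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubeconfig
          readOnly: true
  volumes:
    - name: kubeconfig
      secret:
        secretName: my-kubeconfig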
What else you can do to differentiate users in the cluster:
create a namespace for each user
limit user permission to specific namespace
create default service account in that namespace.
This way, if the user creates a Pod without specifying spec.serviceAccountName, it will by default use the default service account from the Pod's namespace.
You can even set for the default service account the same permissions as for the user account. The only difference would be that service accounts exist in the specific namespace.
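A rough sketch of that per-user setup (the user name and permissions are illustrative):
# hypothetical user "alice"
kubectl create namespace alice
# limit the user account to that namespace
kubectl create rolebinding alice-edit --clusterrole=edit --user=alice --namespace=alice
# give the namespace's default service account the same permissions
kubectl create rolebinding alice-default-sa-edit --clusterrole=edit --serviceaccount=alice:default --namespace=alice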
If you need to use the same namespace for all users, you can use helm charts to set the correct serviceAccountName for each user (imagine you have service accounts with the same names as users) by using --set command line arguments as follows:
$ cat testchart/templates/job.yaml
...
serviceAccountName: {{ .Values.saname }}
...
$ export SANAME=$(kubectl config view --minify -o jsonpath='{.users[0].name}')
$ helm template ./testchart --set saname=$SANAME
---
# Source: testchart/templates/job.yaml
...
serviceAccountName: kubernetes-admin
...
You can also specify namespace for each user in the same way.
I am still not sure whether I understood your question correctly.
However, kubectl doesn't have an option to pass user or service account when creating jobs:
kubectl create job --help
Create a job with the specified name.
Examples:
# Create a job
kubectl create job my-job --image=busybox
# Create a job with command
kubectl create job my-job --image=busybox -- date
# Create a job from a CronJob named "a-cronjob"
kubectl create job test-job --from=cronjob/a-cronjob
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or
map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run=false: If true, only print the object that would be sent, without sending it.
--from='': The name of the resource to create a Job from (only cronjob is supported).
--image='': Image name to run.
-o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
--save-config=false: If true, the configuration of current object will be saved in its
annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to
perform kubectl apply on this object in the future.
--template='': Template string or path to template file to use when -o=go-template,
-o=go-template-file. The template format is golang templates
[http://golang.org/pkg/text/template/#pkg-overview].
--validate=true: If true, use a schema to validate the input before sending it
Usage:
kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...] [flags]
[options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
You can specify many factors inside your YAML definition. For example, you could create a ServiceAccount or specify runAsUser in a pod configuration. However, this requires a definition file instead of doing everything on the command line with kubectl.
Here you can find how to do it with the runAsUser parameter.
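A minimal sketch of that (the UID/GID values are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo
spec:
  securityContext:
    runAsUser: 1000    # all containers in this pod run as UID 1000
    runAsGroup: 3000   # primary group ID
    fsGroup: 2000      # group owning mounted volumes
  containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "id && sleep 3600"]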
You could also consider using a ServiceAccount. Here you have an article which might help you; however, you would need to create a specific ServiceAccount.
It would look similar to this:
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-sa
spec:
  serviceAccountName: demo-sa
  containers:
    - name: alpine
      image: alpine:3.9
      command:
        - "sleep"
        - "10000"
If this is for some labs or practice, you could also think about creating a customized Docker image using a Dockerfile.
Unfortunately, the previous options are hardcoded; any other solution would need a specific script and many pipelines.
In addition, as you mentioned in the title, you can use a ConfigMap to pass some values to the configuration.
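A short sketch of that (the ConfigMap name and key are illustrative): create it with kubectl create configmap app-config --from-literal=APP_MODE=debug, then reference it in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
    - name: app
      image: alpine:3.9
      command: ["sleep", "3600"]
      envFrom:
        - configMapRef:
            name: app-config   # every key in the ConfigMap becomes an environment variable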

How to create an environment variable in kubernetes container

I am trying to pass an environment variable in kubernetes container.
What have I done so far ?
Create a deployment
kubectl create deployment foo --image=foo:v1
Create a NodePort service and expose the port
kubectl expose deployment/foo --type=NodePort --port=9000
See the pods
kubectl get pods
Dump the configurations (so as to add the environment variable)
kubectl get deployments -o yaml > dev/deployment.yaml
kubectl get svc -o yaml > dev/services.yaml
kubectl get pods -o yaml > dev/pods.yaml
Add env variable to the pods
env:
  - name: FOO_KEY
    value: "Hellooooo"
Delete the svc, pods, and deployments
kubectl delete -f dev/ --recursive
Apply the configuration
kubectl apply -f dev/ --recursive
Verify env parameters
kubectl describe pods
Something weird
If I manually change the metadata in the pod YAML and hard-code the pod name, it gets the env variable. However, this time two pods come up: one with the hard-coded name and another with a hash suffix. For example, if the name I hard-coded was "foo", two pods, foo and foo-12314faf (example), would appear in "kubectl get pods". Can you explain why?
Question
Why does the verification step not show the environment variable?
The issue was resolved in the comment section.
If you want to set env on pods, I would suggest using the set subcommand.
kubectl set env --help will provide more detail, such as how to list existing env variables and create new ones.
Examples:
# Update deployment 'registry' with a new environment variable
kubectl set env deployment/registry STORAGE_DIR=/local
# List the environment variables defined on a deployments 'sample-build'
kubectl set env deployment/sample-build --list
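Applied to the deployment from the question (assuming it is still named foo), that could look like this:
# set the variable on the deployment; this triggers a rolling update of its pods
kubectl set env deployment/foo FOO_KEY=Hellooooo
# confirm it is now part of the pod template
kubectl set env deployment/foo --list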
A Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically launched directly on a cluster; instead, pods are usually managed by a ReplicaSet, which is managed by a Deployment.
The following thread discusses what-is-the-difference-between-a-pod-and-a-deployment.
You can add any number of env vars to your deployment file:
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
process.env.MONGO_URI
Or you can create a secret first, then use the newly created secret in any number of deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
process.env.MONGO_URI
process.env.JWT_KEY

gitlab + GKE + AutoDevops auto-deploy deploy fail. error: arguments in resource/name form must have a single resource and name. How to find a mistake?

I am new to GitLab CI. I am trying to use https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml to deploy a simple test Django app to the Kubernetes cluster attached to my GitLab project, using a custom chart https://gitlab.com/aidamir/citest/tree/master/chart. Everything goes well, but at the last moment it shows an error message from kubectl and fails. Here is the output of the pipeline:
Running with gitlab-runner 12.2.0 (a987417a)
on docker-auto-scale 72989761
Using Docker executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0 ...
Running on runner-72989761-project-13952749-concurrent-0 via runner-72989761-srm-1568200144-ab3eb4d8...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/myporject/kubetest/.git/
Created fresh repository.
From https://gitlab.com/myproject/kubetest
* [new branch] master -> origin/master
Checking out 3efeaf21 as master...
Skipping Git submodules setup
Authenticating with credentials from job payload (GitLab Registry)
$ auto-deploy check_kube_domain
$ auto-deploy download_chart
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Not installing Tiller due to 'client-only' flag having been set
"gitlab" has been added to your repositories
No requirements found in /builds/myproject/kubetest/chart/charts.
No requirements found in chart//charts.
$ auto-deploy ensure_namespace
NAME STATUS AGE
kubetest-13952749-production Active 46h
$ auto-deploy initialize_tiller
Checking Tiller...
Tiller is listening on localhost:44134
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
[debug] SERVER: "localhost:44134"
Kubernetes: &version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
$ auto-deploy create_secret
Create secret...
secret "gitlab-registry" deleted
secret/gitlab-registry replaced
$ auto-deploy deploy
secret "production-secret" deleted
secret/production-secret replaced
Deploying new release...
Release "production" has been upgraded.
LAST DEPLOYED: Wed Sep 11 11:12:21 2019
NAMESPACE: kubetest-13952749-production
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
production-djtest 1/1 1 1 46h
==> v1/Job
NAME COMPLETIONS DURATION AGE
djtest-update-static-auik5 0/1 3s 3s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-storage-pvc Bound nfs 10Gi RWX 3s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
djtest-update-static-auik5-zxd6m 0/1 ContainerCreating 0 3s
production-djtest-5bf5665c4f-n5g78 1/1 Running 0 46h
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
production-djtest ClusterIP 10.0.0.146 <none> 5000/TCP 46h
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kubetest-13952749-production -l "app.kubernetes.io/name=djtest,app.kubernetes.io/instance=production" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
error: arguments in resource/name form must have a single resource and name
ERROR: Job failed: exit code 1
Please help me find the reason for the error message.
I did look at the auto-deploy script from the image registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.1.0. There is a settings variable to disable the rollout status check:
if [[ -z "$ROLLOUT_STATUS_DISABLED" ]]; then
  kubectl rollout status -n "$KUBE_NAMESPACE" -w "$ROLLOUT_RESOURCE_TYPE/$name"
fi
So setting
variables:
  ROLLOUT_STATUS_DISABLED: "true"
prevents the job from failing. But I still have no answer as to why the script does not work with my custom chart. When I execute the status-checking command from my laptop, it shows no errors.
kubectl rollout status -n kubetest-13952749-production -w "deployment/production-djtest"
deployment "production-djtest" successfully rolled out
I also found a complaint about a similar issue (https://gitlab.com/gitlab-com/support-forum/issues/4737), but there is no activity on that post.
This is my gitlab-ci.yaml:
image: alpine:latest

variables:
  POSTGRES_ENABLED: "false"
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: ""  # https://gitlab.com/gitlab-org/gitlab-runner/issues/4501

stages:
  - build
  - test
  - deploy  # dummy stage to follow the template guidelines
  - review
  - dast
  - staging
  - canary
  - production
  - incremental rollout 10%
  - incremental rollout 25%
  - incremental rollout 50%
  - incremental rollout 100%
  - performance
  - cleanup

include:
  - template: Jobs/Deploy.gitlab-ci.yml  # https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/ci/templates/Jobs/Deploy.gitlab-ci.yml

variables:
  CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test
error: arguments in resource/name form must have a single resource and name
That issue you linked to has Closed (moved) in its status because it was moved from issue 66016, which has what I believe is the real answer:
Please try adding the following to your .gitlab-ci.yml:
variables:
  ROLLOUT_RESOURCE_TYPE: deployment
Using just Jobs/Deploy.gitlab-ci.yml omits the variables: block from Auto-DevOps.gitlab-ci.yml, which correctly sets that variable.
In your case, I think you just need to move that variables: entry up to the top, since (afaik) one cannot have two top-level variables: blocks. I'm actually genuinely surprised your .gitlab-ci.yml passed validation.
Separately, if you haven't seen it yet, you can set the TRACE variable to switch auto-deploy into set -x mode, which is super, super helpful in seeing exactly what it is trying to do. I believe your command was trying to run rollout status /whatever-name, and with just a slash it doesn't know what kind of name that is.
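For illustration, a sketch of what the merged top-level variables: block could look like, assuming the rest of the file stays as it is:
variables:
  POSTGRES_ENABLED: "false"
  DOCKER_DRIVER: overlay2
  ROLLOUT_RESOURCE_TYPE: deployment
  DOCKER_TLS_CERTDIR: ""
  CI_APPLICATION_REPOSITORY: eu.gcr.io/myproject/django-test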
I was facing this error in a different context: there shouldn't be spaces when you're passing multiple resource types.
kubectl get deploy, rs, po -l app=mynginx # wrong
kubectl get deploy,rs,po -l app=mynginx # right

Pod status as `CreateContainerConfigError` in Minikube cluster

I am trying to run Sonarqube service using the following helm chart.
The setup starts a MySQL and a Sonarqube service in the Minikube cluster, and the Sonarqube service talks to the MySQL service to store its data.
When I do helm install followed by kubectl get pods, I see the MySQL pod status as Running, but the Sonarqube pod status shows CreateContainerConfigError. I reckon it has to do with the volume mounting thingy: link. I am not quite sure how to fix it, though (pretty new to the Kubernetes environment and still learning :) )
This can be solved in various ways. I suggest running kubectl describe pod <pod-name>; you will likely see the cause of why the service you've been trying to run is failing. In my case, I found that some of my key-value pairs were missing from the ConfigMap used by the deployment.
I ran into this problem myself today as I was trying to create secrets and use them in my pod definition YAML file. It helps to check the output of kubectl get secrets and kubectl get configmaps, if you are using any of them, and validate whether the number of data items you wanted is listed correctly.
I recognized that in my case the problem was with how a secret with multiple data items was created: the output of kubectl get secrets <secret_name> showed only 1 data item, while I had specified 2 items in my secret_name_definition.yaml. This comes down to the difference between kubectl create -f secret_name_definition.yaml and kubectl create secret <secret_name> --from-file=secret_name_definition.yaml. With the former, all the items listed in the data section of the YAML are treated as key-value pairs, so the correct number of items shows up when we query with kubectl get secrets secret_name. With the latter, only the first data item in secret_name_definition.yaml is evaluated as a key-value pair, so kubectl get secrets secret_name shows only 1 data item, and this is when we see the error "CreateContainerConfigError".
Note that this problem wouldn't occur if we used kubectl create secret <secret_name> with the --from-literal= option, because then we have to repeat the --from-literal= prefix for every key-value pair we want to define.
Similarly, with the --from-file= option we still have to specify the prefix multiple times, once for each key-value pair; the difference is that with --from-literal we can pass the raw value of the key, whereas with --from-file the value must be in encoded form (i.e. the value of the key is now the output of echo raw_value | base64).
For example, say the keys are "username" and "password". If creating the secret using the command kubectl create -f secret_definition.yaml, we need to have the values for both "username" and "password" encoded, as mentioned in the "Create a Secret" section of https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
I would like to highlight the "Note:" section in https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/. Also, https://kubernetes.io/docs/concepts/configuration/secret/ has a very clear explanation of creating secrets.
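To make that concrete, a sketch of the two ways to create such a secret (names and values are illustrative):
# imperative: one --from-literal per key, raw values
kubectl create secret generic cloudsql-db-credentials \
  --from-literal=username=myuser \
  --from-literal=password=mypassword

# declarative (kubectl create -f secret_definition.yaml): values must be base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: cloudsql-db-credentials
type: Opaque
data:
  username: bXl1c2Vy          # echo -n myuser | base64
  password: bXlwYXNzd29yZA==  # echo -n mypassword | base64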
Also make sure that the deployment.yaml now has the correct definition for this container:
env:
  - name: DB_HOST
    value: 127.0.0.1
  # These secrets are required to start the pod.
  # [START cloudsql_secrets]
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  # [END cloudsql_secrets]
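You can then quickly verify that the referenced secret exists and carries both keys (names as in the snippet above; adjust to your namespace):
kubectl get secret cloudsql-db-credentials -o yaml   # data: should list both username and password
kubectl describe pod <pod-name>                      # events point at a missing key or secret, if any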
As quoted by others, "kubectl describe pods pod_name" would help, but in my case I only understood that the container wasn't being created in the first place, and the output of "kubectl logs pod_name -c container_name" didn't help much.
Recently, I had encountered the same CreateContainerConfigError error and after little debugging I found out that it was because I was using a kubernetes secret in my Deployment yaml, which was not actually present/created in that namespace where the pods were getting created.
Also, after reading the previous answers, I guess it's safe to say that this particular error centers around Kubernetes secrets!
Check that your secrets and config maps (kubectl get [secrets|configmaps]) already exist and are correctly referenced in the YAML descriptor file; in both cases an incorrect secret/configmap (not created, misspelled, etc.) results in CreateContainerConfigError.
As already pointed out in the answers, you can check the error with kubectl describe pod [pod name], and something like this should appear at the bottom of the output:
Warning Failed 85s (x12 over 3m37s) kubelet, gke-****-default-pool-300d3c89-9jkz
Error: configmaps "config-map-1" not found
UPDATE: From #alexis-wilke
The list of events can be ephemeral in some versions, and this message disappears quickly. As a rule of thumb, check the events list immediately when booting a pod; if you have CreateContainerConfigError without events, double-check secrets and config maps, as they can leave the pod in this state with no trace at some point.
I also ran into this issue, and the problem was due to an environment variable using a field ref, on a controller. The other controller and the worker were able to resolve the reference. We didn't have time to track down the cause of the issue and wound up tearing down the cluster and rebuilding it.
- name: DD_KUBERNETES_KUBELET_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Apr 02 16:35:46 ip-10-30-45-105.ec2.internal sh[1270]: E0402 16:35:46.502567 1270 pod_workers.go:186] Error syncing pod 3eab4618-5564-11e9-a980-12a32bf6e6c0 ("datadog-datadog-spn8j_monitoring(3eab4618-5564-11e9-a980-12a32bf6e6c0)"), skipping: failed to "StartContainer" for "datadog" with CreateContainerConfigError: "host IP unknown; known addresses: [{Hostname ip-10-30-45-105.ec2.internal}]"
Try using the --from-env-file option instead of --from-file and see if this problem disappears. I got the same error, and looking into the pod events suggested that the key-value pairs inside the mysecrets.txt file were not read properly. If you have only one line, Kubernetes takes the content of the file as the value and the filename as the key. To avoid this issue, read the file as an environment-variable file, as shown below.
mysecrets.txt:
MYSQL_PASSWORD=dfsdfsdfkhk
For example:
kubectl create secret generic secret-name --from-env-file=mysecrets.txt
kubectl create configmap configmap-name --from-env-file=myconfigs.txt