How can I determine whether Kubernetes is using authentication for an image repository? - kubernetes

I'm trying to investigate why a pod has a status of ImagePullBackOff.
If I kubectl describe the pod I see an event listed:
Warning Failed 5m42s (x4 over 7m2s) kubelet Failed
to pull image
"**********************":
rpc error: code = Unknown desc = Error response from daemon:
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials. To authenticate your
request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
This is not expected, as I have docker authentication set for the default service account via a secret, as mentioned here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account
How can I determine whether it's using the correct authentication so I can further debug this issue?
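For reference, the service account and the secret can be inspected directly to see what the kubelet will use (a quick check, assuming the secret referenced below, gcp-cr-read-access, lives in the pod's namespace):
kubectl get serviceaccount default -o yaml
kubectl get secret gcp-cr-read-access -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d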

Not really an answer to the question but a solution in my case:
There seems to be something wrong with kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}', as it does not seem to do anything...
Instead I edited the service account resource directly - no patch...
Demonstrated here:
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl describe serviceaccount default
Name: default
Namespace: app-1
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-tqp58
Tokens: default-token-tqp58
Events: <none>
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl get serviceaccount -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2020-09-17T15:50:34Z"
    name: default
    namespace: app-1
    resourceVersion: "111538"
    selfLink: /api/v1/namespaces/app-1/serviceaccounts/default
    uid: 5fe21574-67bf-485c-b9aa-d09c1fe3350c
  secrets:
  - name: default-token-tqp58
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch -n app-1 serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
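A likely culprit, for what it's worth: the ServiceAccount field is camelCase, imagePullSecrets, so the lowercase key in the patch above is most likely ignored by the strategic merge patch (hence "no change"). The camelCase form, untested here, would be:
kubectl patch -n app-1 serviceaccount default -p '{"imagePullSecrets": [{"name": "gcp-cr-read-access"}]}'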

Related

How can you use a private gitlab container registry to pull an image in kubernetes?

I have a private docker registry hosted on gitlab and I would like to use this repository to pull images for my local kubernetes cluster:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 68m
K8s is on v1.22.5 and is a single-node cluster that comes 'out of the box' with Docker Desktop. I have already built and deployed an image to the gitlab container registry registry.gitlab.com. What I have done already:
Executed the command docker login -u <username> -p <password> registry.gitlab.com
Modified the ~/.docker/config.json file to the following:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "credsStore": "osxkeychain"
}
Created and deployed a secret to the cluster with the file:
apiVersion: v1
kind: Secret
metadata:
  name: registry-key
data:
  .dockerconfigjson: <base-64-encoded-.config.json-file>
type: kubernetes.io/dockerconfigjson
Deployed an app with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      imagePullSecrets:
      - name: registry-key
      containers:
      - name: test-app
        image: registry.gitlab.com/<image-name>:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
The deployment is created successfully but upon inspection of the pod (kubectl describe pod) I find the following events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned default/test-deployment-87b5747b5-xdsl9 to docker-desktop
Normal BackOff 19s kubelet Back-off pulling image "registry.gitlab.com/<image-name>:latest"
Warning Failed 19s kubelet Error: ImagePullBackOff
Normal Pulling 7s (x2 over 20s) kubelet Pulling image "registry.gitlab.com/<image-name>:latest"
Warning Failed 7s (x2 over 19s) kubelet Failed to pull image "registry.gitlab.com/<image-name>:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://registry.gitlab.com/v2/<image-name>/manifests/latest": denied: access forbidden
Warning Failed 7s (x2 over 19s) kubelet Error: ErrImagePull
Please provide any information on what might be causing these errors.
I managed to solve the issue by editing the default config.json produced by $ docker login:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "credsStore": "osxkeychain"
}
becomes
{
  "auths": {
    "registry.gitlab.com": {
      "auth":"<access-token-in-plain-text>"
    }
  }
}
Thanks Bala for suggesting this in the comments. I realise storing the access token in plain text in the file may not be secure but this can be changed to use a path if needed.
I also created the secret as per OzzieFZI's suggestion:
$ kubectl create secret docker-registry registry-key \
--docker-server=registry.gitlab.com \
--docker-username=<username> \
--docker-password="$(cat /path/to/token.txt)"
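To double-check what ended up in the secret, the stored docker config can be decoded (assuming the secret name registry-key used above):
$ kubectl get secret registry-key -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d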
What password do you use?
Confirm if you are using a Personal Access Token with read/write access to the container registry. Your username should be the gitlab username.
I would suggest creating the docker registry secret using kubectl and a txt file with the token as the content; this way you do not have to encode the dockerconfigjson yourself. Here is an example.
$ kubectl create secret docker-registry registry-key \
--docker-server=registry.gitlab.com \
--docker-username=<username> \
--docker-password="$(cat /path/to/token.txt)"
See documentation on the command here
Here's something a bit more detailed in case anyone is having problems with this. Also, GitLab has introduced deploy tokens (from the repository -> deploy tokens tab), which means you do not need to use personal access tokens.
Create auth to put in secret resource
#!/bin/bash
if [ "$#" -ne 1 ]; then
  printf "Invalid number of arguments\n" >&2
  printf "./create_registry_secret.sh <GITLAB_DEPLOY_TOKEN>\n" >&2
  exit 1;
fi
# Template for the .dockerconfigjson payload expected by the registry
secret_gen_string='{"auths":{"https://registry.gitlab.com":{"username":"{{USER}}","password":"{{TOKEN}}","email":"{{EMAIL}}","auth":"{{SECRET}}"}}}'
gitlab_user=<YOUR_DEPLOY_TOKEN_USER>
gitlab_token=$1
gitlab_email=<YOUR_EMAIL_OR_WHATEVER>
# The "auth" entry is the base64 of user:token
gitlab_secret=$(echo -n "$gitlab_user:$gitlab_token" | base64 -w 0)
# Substitute the placeholders, then base64-encode the whole JSON document
# (| is used as the sed delimiter because base64 output may contain /)
echo -n $secret_gen_string \
  | sed "s|{{USER}}|$gitlab_user|" \
  | sed "s|{{TOKEN}}|$gitlab_token|" \
  | sed "s|{{EMAIL}}|$gitlab_email|" \
  | sed "s|{{SECRET}}|$gitlab_secret|" \
  | base64 -w 0
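For example, assuming the script is saved as create_registry_secret.sh (the name in its usage message), it is invoked with the deploy token as the only argument:
$ ./create_registry_secret.sh <GITLAB_DEPLOY_TOKEN>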
Use the output of the script in secret resource
# A secret to pull container from gitlab registry
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: gitlab-pull-secret
data:
  .dockerconfigjson: <GENERATED_SECRET>
Reference the secret in container definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-test-deployment
  labels:
    app.kubernetes.io/name: gitlab-test
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: gitlab-test
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gitlab-test
    spec:
      containers:
      - name: my-gitlab-container
        image: registry.gitlab.com/group/project/image:tag
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      # Include the authentication for gitlab container registry
      imagePullSecrets:
      - name: gitlab-pull-secret
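As a quick sanity check before applying the manifests, the deploy token can also be tested against the registry directly with docker (an extra step, not part of the original answer; placeholders as in the script above):
$ docker login registry.gitlab.com -u <YOUR_DEPLOY_TOKEN_USER> -p <GITLAB_DEPLOY_TOKEN>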

Unable to fetch Vault Token for Pod Service Account

I am using Vault CSI Driver on Charmed Kubernetes v1.19 where I'm trying to retrieve secrets from Vault for a pod running in a separate namespace (webapp) with its own service account (webapp-sa) following the steps in the blog.
As far as I have been able to understand so far, the Pod is trying to authenticate to the Kubernetes API so that it can later generate a Vault token to access the secret from Vault.
$ kubectl get po webapp
NAME READY STATUS RESTARTS AGE
webapp 0/1 ContainerCreating 0 22m
It appears to me there's some issue authenticating with the Kubernetes API.
The pod remains stuck in the ContainerCreating state with the message - failed to create a service account token for requesting pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35m default-scheduler Successfully assigned webapp/webapp to host-03
Warning FailedMount 4m38s (x23 over 35m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod webapp/webapp, err: rpc error: code = Unknown desc = error making mount request: **failed to create a service account token for requesting pod** {webapp xxxx webapp webapp-sa}: the server could not find the requested resource
I can get the Vault token using the CLI in the pod namespace:
$ vault write auth/kubernetes/login role=database jwt=$SA_JWT_TOKEN
Key Value
--- -----
token <snipped>
I do get the vault token using the API as well:
$ curl --request POST --data @payload.json https://127.0.0.1:8200/v1/auth/kubernetes/login
{
  "request_id":"1234",
  <snipped>
  "auth":{
    "client_token":"XyZ",
    "accessor":"abc",
    "policies":[
      "default",
      "webapp-policy"
    ],
    "token_policies":[
      "default",
      "webapp-policy"
    ],
    "metadata":{
      "role":"database",
      "service_account_name":"webapp-sa",
      "service_account_namespace":"webapp",
      "service_account_secret_name":"webapp-sa-token-abcd",
      "service_account_uid":"123456"
    },
    <snipped>
  }
}
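For completeness, the payload.json passed to the login endpoint would look roughly like this (field names per the Vault Kubernetes auth API; the values are placeholders, not taken from the original post):
{
  "jwt": "<SA_JWT_TOKEN>",
  "role": "database"
}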
Reference: https://www.vaultproject.io/docs/auth/kubernetes
As per the vault documentation, I've configured Vault with the Token Reviewer SA as follows:
$ cat vault-auth-service-account.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-token-review-binding
  namespace: vault
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: vault
Vault is configured with JWT from the Token Reviewer SA as follows:
$ vault write auth/kubernetes/config \
token_reviewer_jwt="<Token Reviewer service account JWT>" \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
kubernetes_ca_cert=@ca.crt
I have defined a Vault Role to allow the webapp-sa access to the secret:
$ vault write auth/kubernetes/role/database \
bound_service_account_names=webapp-sa \
bound_service_account_namespaces=webapp \
policies=webapp-policy \
ttl=72h
Success! Data written to: auth/kubernetes/role/database
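The role can be read back to confirm the bound service account and namespace took effect (a quick check with a standard vault read, not part of the original post):
$ vault read auth/kubernetes/role/database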
The webapp-sa is allowed access to the secret as per the Vault Policy defined as follows:
$ vault policy write webapp-policy - <<EOF
> path "secret/data/db-pass" {
> capabilities = ["read"]
> }
> EOF
Success! Uploaded policy: webapp-policy
The Pod and its SA are defined as follows:
$ cat webapp-sa-and-pod.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: webapp-sa
---
kind: Pod
apiVersion: v1
metadata:
  name: webapp
spec:
  serviceAccountName: webapp-sa
  containers:
  - image: registry/jweissig/app:0.0.1
    name: webapp
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        providerName: vault
        secretProviderClass: "vault-database"
Does anyone have any clue as to why the Pod won't authenticate with the Kubernetes API?
Do I have to enable flags on the kube-apiserver for the TokenReview API to work?
Is it enabled by default on Charmed Kubernetes v1.19?
Would be grateful for any help.
Regards,
Sana

Syncing Prometheus and Kubewatch with Kubernetes Cluster

I want to get all events that occurred in a Kubernetes cluster into a Python dictionary, using some API to extract data from events that occurred in the past. I found on the internet that this is possible by storing all Kubewatch data in Prometheus and accessing it later. I am unable to figure out how to set this up and see all past pod events in Python. Any alternative solutions to access past events are also appreciated. Thanks!
I'll describe a solution that is not complicated and I think meets all your requirements.
There are tools such as Eventrouter that take Kubernetes events and push them to a user specified sink. However, as you mentioned, you only need Pods events, so I suggest a slightly different approach.
In short, you can run the kubectl get events --watch command from within a Pod and collect the output from that command using a log aggregation system like Loki.
Below, I will provide a detailed step-by-step explanation.
1. Running kubectl command from within a Pod
To display only Pod events, you can use:
$ kubectl get events --watch --field-selector involvedObject.kind=Pod
We want to run this command from within a Pod. For security reasons, I've created a separate events-collector ServiceAccount with the view Role assigned and our Pod will run under this ServiceAccount.
NOTE: I've created a Deployment instead of a single Pod.
$ cat all-in-one.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: events-collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: events-collector-binding
subjects:
- kind: ServiceAccount
  name: events-collector
  namespace: default
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: events-collector
  name: events-collector
spec:
  selector:
    matchLabels:
      app: events-collector
  template:
    metadata:
      labels:
        app: events-collector
    spec:
      serviceAccountName: events-collector
      containers:
      - image: bitnami/kubectl
        name: test
        command: ["kubectl"]
        args: ["get","events", "--watch", "--field-selector", "involvedObject.kind=Pod"]
After applying the above manifest, the events-collector was created and collects Pod events as expected:
$ kubectl apply -f all-in-one.yml
serviceaccount/events-collector created
clusterrolebinding.rbac.authorization.k8s.io/events-collector-binding created
deployment.apps/events-collector created
$ kubectl get deploy,pod | grep events-collector
deployment.apps/events-collector 1/1 1 1 14s
pod/events-collector-d98d6c5c-xrltj 1/1 Running 0 14s
$ kubectl logs -f events-collector-d98d6c5c-xrltj
LAST SEEN TYPE REASON OBJECT MESSAGE
77s Normal Scheduled pod/app-1-5d9ccdb595-m9d5n Successfully assigned default/app-1-5d9ccdb595-m9d5n to gke-cluster-2-default-pool-8505743b-brmx
76s Normal Pulling pod/app-1-5d9ccdb595-m9d5n Pulling image "nginx"
71s Normal Pulled pod/app-1-5d9ccdb595-m9d5n Successfully pulled image "nginx" in 4.727842954s
70s Normal Created pod/app-1-5d9ccdb595-m9d5n Created container nginx
70s Normal Started pod/app-1-5d9ccdb595-m9d5n Started container nginx
73s Normal Scheduled pod/app-2-7747dcb588-h8j4q Successfully assigned default/app-2-7747dcb588-h8j4q to gke-cluster-2-default-pool-8505743b-p7qt
72s Normal Pulling pod/app-2-7747dcb588-h8j4q Pulling image "nginx"
67s Normal Pulled pod/app-2-7747dcb588-h8j4q Successfully pulled image "nginx" in 4.476795932s
66s Normal Created pod/app-2-7747dcb588-h8j4q Created container nginx
66s Normal Started pod/app-2-7747dcb588-h8j4q Started container nginx
2. Installing Loki
You can install Loki to store logs and process queries. Loki is like Prometheus, but for logs :). The easiest way to install Loki is to use the grafana/loki-stack Helm chart:
$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
$ helm repo update
...
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install loki grafana/loki-stack
$ kubectl get pods | grep loki
loki-0 1/1 Running 0 76s
loki-promtail-hm8kn 1/1 Running 0 76s
loki-promtail-nkv4p 1/1 Running 0 76s
loki-promtail-qfrcr 1/1 Running 0 76s
3. Querying Loki with LogCLI
You can use the LogCLI tool to run LogQL queries against a Loki server. Detailed information on installing and using this tool can be found in the LogCLI documentation. I'll demonstrate how to install it on Linux:
$ wget https://github.com/grafana/loki/releases/download/v2.2.1/logcli-linux-amd64.zip
$ unzip logcli-linux-amd64.zip
Archive: logcli-linux-amd64.zip
inflating: logcli-linux-amd64
$ mv logcli-linux-amd64 logcli
$ sudo cp logcli /bin/
$ whereis logcli
logcli: /bin/logcli
To query the Loki server from outside the Kubernetes cluster, you may need to expose it using the Ingress resource:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: loki-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: loki
          servicePort: 3100
        path: /
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/loki-ingress created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
loki-ingress <none> * <PUBLIC_IP> 80 19s
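Once the ADDRESS is populated, a quick reachability check is to hit Loki's readiness endpoint (assuming the standard /ready endpoint and the <PUBLIC_IP> above), which should answer "ready" once the server is up:
$ curl http://<PUBLIC_IP>/ready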
Finally, I've created a simple python script that we can use to query the Loki server:
NOTE: We need to set the LOKI_ADDR environment variable as described in the documentation. You need to replace the <PUBLIC_IP> with your Ingress IP.
$ cat query_loki.py
#!/usr/bin/env python3
import os
os.environ['LOKI_ADDR'] = "http://<PUBLIC_IP>"
os.system("logcli query '{app=\"events-collector\"}'")
$ ./query_loki.py
...
2021-07-02T10:33:01Z {} 2021-07-02T10:33:01.626763464Z stdout F 0s Normal Pulling pod/backend-app-5d99cf4b-c9km4 Pulling image "nginx"
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.836755152Z stdout F 0s Normal Scheduled pod/backend-app-5d99cf4b-c9km4 Successfully assigned default/backend-app-5d99cf4b-c9km4 to gke-cluster-1-default-pool-328bd2b1-288w
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.649954267Z stdout F 0s Normal Started pod/web-app-6fcf9bb7b8-jbrr9 Started container nginx
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.54819851Z stdout F 0s Normal Created pod/web-app-6fcf9bb7b8-jbrr9 Created container nginx
2021-07-02T10:32:59Z {} 2021-07-02T10:32:59.414571562Z stdout F 0s Normal Pulled pod/web-app-6fcf9bb7b8-jbrr9 Successfully pulled image "nginx" in 4.228468876s
...
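As a side note (not part of the original answer), the same LogQL query can also be sent straight to Loki's HTTP API without logcli, assuming the standard /loki/api/v1/query_range endpoint and the Ingress IP from above:
$ curl -G "http://<PUBLIC_IP>/loki/api/v1/query_range" \
    --data-urlencode 'query={app="events-collector"}' \
    --data-urlencode 'limit=50'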

Unable to sign in to Kubernetes dashboard?

After following all the steps to create the service account and role binding, I am unable to sign in.
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
and applying this yml file:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Getting the following error when I click on sign in for the config service:
{
  "status": 401,
  "plugins": [],
  "errors": [
    {
      "ErrStatus": {
        "metadata": {},
        "status": "Failure",
        "message": "MSG_LOGIN_UNAUTHORIZED_ERROR",
        "reason": "Unauthorized",
        "code": 401
      }
    }
  ]
}
Roman Marusyk is on the right path. This is what you have to do.
$ kubectl get secret -n kubernetes-dashboard
NAME TYPE DATA AGE
default-token-rqw2l kubernetes.io/service-account-token 3 9m8s
kubernetes-dashboard-certs Opaque 0 9m8s
kubernetes-dashboard-csrf Opaque 1 9m8s
kubernetes-dashboard-key-holder Opaque 2 9m8s
kubernetes-dashboard-token-5tqvd kubernetes.io/service-account-token 3 9m8s
From here you will get the kubernetes-dashboard-token-5tqvd, which is the secret holding the token to access the dashboard.
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F : '{print $2}'
ZXlK...
Now you will need to decode it:
echo -n ZXlK... | base64 -d
eyJhb...
Introduce the token in the sign-in page:
And you are in.
UPDATE
You can also do the 2 steps in one to get the secret and decode it:
$ kubectl get secret kubernetes-dashboard-token-5tqvd -n kubernetes-dashboard -oyaml | grep -m 1 token | awk -F ' ' '{print $2}' | base64 -d
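Equivalently, jsonpath can replace the grep/awk step (same secret name as above):
$ kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-5tqvd -o jsonpath='{.data.token}' | base64 -d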
namespace: kube-system
Usually, Kubernetes Dashboard runs in namespace: kubernetes-dashboard
Please ensure that you have the correct namespace for ClusterRoleBinding and ServiceAccount.
If you have been installing the dashboard according to the official doc, then it was installed to kubernetes-dashboard namespace.
You can check this post for reference.
EDIT 22-May-2020
The issue was that I was accessing the dashboard UI over HTTP from an external machine, using the default ClusterIP.
Indeed, the Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in; nothing will happen after clicking the 'Sign In' button on the login page.
That is discussed in details in this post on StackOverflow. There is a workaround for that behavior discussed in a same thread. Basically it is the same thing I've been referring to here: "You can check this post for reference. "

Why can't my k8s pod find a key in a ConfigMap?

I'm having an issue with a Kubernetes pod that uses a ConfigMap. My pod fails to start, with the following error:
Warning Failed 10s (x7 over 2m16s) kubelet, docker-desktop Error: Couldn't find key URL in ConfigMap default/env-config
I created my ConfigMap as follows:
kubectl create configmap env-config --from-file env-config.yaml
This is my ConfigMap:
NAME DATA AGE
env-config 1 5m38s
Nates-MacBook-Pro:k8s natereed$ kubectl describe configmap env-config
Name: env-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
env-config.yaml:
----
apiVersion: v1
kind: ConfigMap
data:
  AWS_BUCKET: mybucket
  AWS_PROFILE: dev
  AWS_REGION: us-east-2
  JWT_SECRET: foo
  POSTGRESS_DB: <mydb>
  POSTGRESS_HOST: <my host>
  URL: http://localhost:8100
metadata:
  name: env-config
It looks like the command to create the ConfigMap is wrong? It's not clear to me why it creates a map with a single key, "env-config.yaml".
The YAML file looks like this:
apiVersion: v1
kind: ConfigMap
data:
  AWS_BUCKET: mybucket
  AWS_PROFILE: dev
  AWS_REGION: us-east-2
  JWT_SECRET: foo
  POSTGRESS_DB: mydb
  POSTGRESS_HOST: postgreshost
  URL: http://localhost:8100
metadata:
  name: env-config
  namespace: default
I'd say that the issue occurred because you are passing a ConfigMap yaml definition as a parameter of --from-file.
You could simply create it using:
kubectl create -f env-config.yaml
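After that, the individual keys should resolve, which can be verified for the key the pod complains about (a quick sanity check, not from the original answer):
$ kubectl get configmap env-config -o jsonpath='{.data.URL}'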
Besides that, if you would like to create it using --from-file, then you can define your file with only the parameters that you need; it would be something like this:
File name: env-config
AWS_PROFILE: dev
AWS_REGION: us-east-2
JWT_SECRET: foo
POSTGRESS_DB: <mydb>
POSTGRESS_HOST: <my host>
URL: http://localhost:8100
And then you can create the ConfigMap in the way you were doing before:
kubectl create configmap env-config --from-file env-config
This would create a ConfigMap like this (kubectl describe configmap env-config):
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
env-config:
----
AWS_BUCKET: mybucket
AWS_PROFILE: dev
AWS_REGION: us-east-2
JWT_SECRET: foo
POSTGRESS_DB: <mydb>
POSTGRESS_HOST: <my host>
URL: http://localhost:8100
Events: <none>
Here you can find some useful information:
Create ConfigMaps from files
Define container environment variables using ConfigMap data
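As an additional note (not in the original answer): if one key per variable is the goal, kubectl can also read an env-style file of KEY=value lines via --from-env-file, which yields individual keys such as URL instead of a single file-named key, e.g. with a hypothetical file env-config.env:
$ kubectl create configmap env-config --from-env-file=env-config.env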
So you've got things a little mixed up. What you have there is a ConfigMap with one key named env-config.yaml, the value of which is a string containing YAML data for a ConfigMap with a bunch of keys, including URL. I'm guessing you used kubectl create cm --from-file instead of kubectl apply -f?