How can I get the cluster's CIDR in a pod? - kubernetes

How can I use the cluster CIDR (the IP address range containing all pod IP addresses) inside a pod? (Automatically, without putting it manually in an environment variable, ConfigMap or anywhere else.)
Example of what I would like to do:
env:
  - name: CLUSTER_CIDR
    valueFrom: # ??? does a ConfigMap like this exist ??? Or any other source for clusterCidr?
      configMapKeyRef:
        key: clusterCidr
        name: ...
my best partial solution:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: GUESSED_CLUSTER_CIDR
  value: $(POD_IP)/16
I can find clusterCidr inside the ConfigMap full-cluster-state in namespace kube-system, somewhere in the value of the key full-cluster-state. But that value is a string containing JSON, and it looks vendor-specific (in currentState.rkeConfig.services.kubeController.clusterCidr). I cannot extract part of the value in deployment.yaml, and I would prefer a vendor-independent solution.
I have no idea where to find the ComponentConfig mentioned in related issues, and I do not even know whether it is still in alpha.
Related k8s issues (all closed without a clear fix):
https://github.com/kubernetes/kubernetes/issues/25533
https://github.com/kubernetes/kubernetes/issues/46508
About finding the CIDR of the cluster manually:
How do you find the cluster & service CIDR of a Kubernetes cluster?
Older question about finding it programmatically: Kubernetes - Find out service ip range CIDR programatically
Using the CIDR for a trusted proxy, which is what I want to do: Kubernetes: add ingress internal ip to environment

I'm afraid there is no vendor-independent solution for this. Also, ComponentConfig is still an alpha feature, so there is not much proper documentation.
However, the best thing right now (even if it's not universal) is to use:
$ kubectl cluster-info dump | grep -m 1 cluster-cidr
Then you can create a new ConfigMap with the cluster CIDR value that was output, and refer to it in the pod as described in these docs.
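A minimal sketch of that two-step approach (the ConfigMap name cluster-cidr-config and the key clusterCidr are names I made up for illustration):
# Extract the CIDR from the (vendor-dependent) dump output and store it in a ConfigMap
CIDR=$(kubectl cluster-info dump | grep -m 1 cluster-cidr | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/[0-9]+')
kubectl create configmap cluster-cidr-config --from-literal=clusterCidr="$CIDR"
Then reference it in the pod spec:
env:
  - name: CLUSTER_CIDR
    valueFrom:
      configMapKeyRef:
        name: cluster-cidr-config
        key: clusterCidr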
Even if the concept is the same, you will have to apply a different approach in different environments. Unfortunately, as of today there is no single solution.
As additional information, I have already made a small comparison between kubeadm and Google Kubernetes Engine regarding CIDR. You can check out this thread for more information.

Related

How can I assign my host IP address to a Kubernetes ConfigMap?

I assigned my host IP address in the ConfigMap YAML, but my host IP address always changes.
Can I use my host MAC address instead, or is there any other possible solution?
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-configmap
data:
  display: 10.0.10.123:0.0
You can't put "the host" IP address into a ConfigMap. Consider a cluster with multiple nodes and multiple replicas of your Deployment: you could have three identical Pods running, all mounting the same ConfigMap, but all running on different hosts.
If you do need the host's IP address for some reason, you can use the downward API to get it:
# In your pod spec, not a ConfigMap
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Again, though, note that each replica could be running on a different node, so this is only useful if you can guarantee some resource is running on every node (maybe a Kubernetes DaemonSet is launching it). That configuration suggests an X Window System display server address, and typically this would be located outside the cluster, not on the nodes actually running the pods.
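If the display server really does run on every node (for example via a DaemonSet), a sketch of deriving the value from the node IP instead of hard-coding it in the ConfigMap, reusing the :0.0 suffix from the example above (Kubernetes expands $(VAR) references to earlier env vars in the same container):
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: DISPLAY
    value: $(HOST_IP):0.0   # e.g. 10.0.10.123:0.0, resolved per node at pod start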

GKE AppArmor profile is unconfined even though the node has it defined and working

I am trying to load an AppArmor profile I created on GKE, following some of the instructions below.
To apply the created AppArmor profile I followed these instructions:
https://cloud.google.com/container-optimized-os/docs/how-to/secure-apparmor#creating_a_custom_security_profile
which is just running apparmor_parser on the node(s), plus some follow-up instructions to re-apply the same profile when the node restarts.
Basically it comes down to running the following line:
/sbin/apparmor_parser --replace --write-cache /etc/apparmor.d/no_raw_net
and testing that a container with this profile is secured as expected.
As a second step I set the AppArmor profile name in an environment variable of the pod, as explained here:
https://cloud.google.com/migrate/anthos/docs/troubleshooting/app-armor-profile
Basically that means defining the pod like this:
spec:
  containers:
    - image: gcr.io/my-project/my-container:v1.0.0
      name: my-container
      env:
        - name: HC_APPARMOR_PROFILE
          value: "apparmor-profile-name"
      securityContext:
        privileged: true
On the host itself the AppArmor profile works as expected, but I cannot get this profile applied to the pod.
I also tried removing the securityContext section of the pod, which the GKE documentation defines with privileged: true.
Last but not least, I tried the Kubernetes pod annotation, a k8s feature that assigns a profile to a given container, as explained here:
https://kubernetes.io/docs/tutorials/security/apparmor/
With this, the pod looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-2
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-allow-write
spec:
  containers:
    - name: hello
      image: busybox
      command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
but again I had no luck applying the given profile.
I also tried applying a user-data config as custom metadata for the node instance's cloud-init, so it would add the profile I created to AppArmor as well, and I double-checked that creating the profile itself is not the issue. However, editing the cluster metadata is disabled after the cluster is created, and creating a new cluster node with custom user-data is not allowed, because user-data is reserved for the Container-Optimized OS user data defined by Google.
No matter what I do, I always end up either with an unconfined profile for the container or with "cri-containerd.apparmor.d (enforce)", depending on whether the security context is set to privileged or not...
Do you have any advice on how I can apply the given profile to a pod in GKE?
If I understood the question correctly, it seems like you are mixing up the profile's filename with the profile name.
annotations:
  container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/<profile-name>
Here, <profile-name> is the name of the profile; it is not the same as the filename of the profile. E.g. in the example below the filename is no_raw_net and the profile name is no-ping.
cat > /etc/apparmor.d/no_raw_net <<EOF
#include <tunables/global>
profile no-ping flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>
  network inet tcp,
  network inet udp,
  network inet icmp,
  deny network raw,
  deny network packet,
  file,
  mount,
}
EOF
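Under that assumption, the annotation on the example pod above should reference the profile name no-ping rather than the filename no_raw_net; a sketch:
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor-2
  annotations:
    # profile name taken from the "profile no-ping ..." line, not from the filename
    container.apparmor.security.beta.kubernetes.io/hello: localhost/no-ping
spec:
  containers:
    - name: hello
      image: busybox
      command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]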
As mentioned, I had indeed mixed up the naming. Besides that, I would also like to mention one more alternative: https://github.com/kubernetes-sigs/security-profiles-operator, which provides Kubernetes CRDs that integrate with AppArmor, seccomp, and SELinux.
Some of the implementations, such as AppArmor, still look like work in progress at the time of writing, and I hope this initiative moves forward.

Accessing kubernetes namespace (value) from inside a pod

I am fairly new to Kubernetes. I wanted to know whether a program running inside a pod can access the namespace in which the pod is running.
Let me explain my use case. There are two pods in my application's namespace. One has to be a StatefulSet and must have at least 3 replicas. The other pod (say POD-A) can be just a normal Deployment. Now POD-A needs to talk to a particular instance of the StatefulSet. I read in an article that this can be done using this address format -
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local.
In my application, the namespace part changes dynamically with each deployment. So can this value be read dynamically from a program running inside a pod?
Please help me if I have misunderstood something here. Any alternate/simpler solutions are also welcome. Thanks in advance!
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api has an example of this and more.
env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
You can get the namespace of a pod using the downward API: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api (expose the namespace as an environment variable).
Or, if a service account is mounted in the pod, the namespace the pod is living in can be found in the file /var/run/secrets/kubernetes.io/serviceaccount/namespace.
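For example, a sketch of building the per-instance address from inside POD-A (the StatefulSet name web, the headless Service name web-svc and the port are only illustrative):
# Namespace from the mounted service account token (or from MY_POD_NAMESPACE above)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
# Talk to ordinal 0 of the StatefulSet "web" behind the headless service "web-svc"
curl "http://web-0.web-svc.${NAMESPACE}.svc.cluster.local:8080/"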

kubernetes how to get cluster domain (such as svc.cluster.local) inside pod?

I deployed a squid proxy in each namespace because I want to access the services from outside the cluster via the squid proxy, so I need to add the line below to squid.conf so that I can access services using just the service names:
append_domain .${namespace}.svc.cluster.local
Here is my problem:
I can get ${namespace} via metadata.namespace inside a pod, but how can I get the cluster domain? Is it possible?
I've tried this, but it returned an error when creating the pod:
- name: POD_CLUSERDOMAIN
  valueFrom:
    fieldRef:
      fieldPath: metadata.clusterName
Thanks for your help.
Alright, I failed to get the cluster domain directly inside the pod, but I found another way to get there -- retrieve the whole host domain from the search domain in resolv.conf.
Here are the details:
keep the Dockerfile unmodified
add a command item to deployment.yaml:
image: squid:3.5.20
command: ["/bin/sh","-c"]
args: [ "echo append_domain .$(awk -v s=search '{if($1 == s)print $2}' /etc/resolv.conf) >> /etc/squid/squid.conf; /usr/sbin/squid -N" ]
This will add a line like append_domain .default.svc.cluster.local to the end of the file /etc/squid/squid.conf, so we can now access the services from outside the cluster via the squid proxy using just the service name.
The cluster domain is configured via kubelet parameters and must be the same throughout the whole cluster, so you can't get it from the pod's metadata; just use it as it is: svc.cluster.local
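If you would rather derive it at runtime than hard-code it, a sketch along the lines of the resolv.conf approach above (assuming the usual in-cluster resolv.conf, whose first search entry has the form <namespace>.svc.<cluster-domain>):
# First search domain, e.g. default.svc.cluster.local
FULL_DOMAIN=$(awk '$1 == "search" {print $2; exit}' /etc/resolv.conf)
# Strip "<namespace>.svc." to get the bare cluster domain, e.g. cluster.local
CLUSTER_DOMAIN=${FULL_DOMAIN#*.svc.}
echo "$CLUSTER_DOMAIN"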

Pod status as `CreateContainerConfigError` in Minikube cluster

I am trying to run the SonarQube service using the following helm chart.
The set-up is such that it starts a MySQL service and a SonarQube service in the minikube cluster, and the SonarQube service talks to the MySQL service to dump the data.
When I do helm install followed by kubectl get pods, I see the MySQL pod status as running, but the SonarQube pod status shows CreateContainerConfigError. I reckon it has to do with the volume mounting: link. Although I am not quite sure how to fix it (pretty new to the Kubernetes environment and still learning :) )
This can be solved in various ways; I suggest you go for kubectl describe pod <pod-name>, where you might now see the cause of why the service you have been trying to run is failing. In my case, I found that some of my key-value pairs were missing from the configmap used by the deployment.
I ran into this problem myself today as I was trying to create secrets and use them in my pod definition yaml file. It helps to check the output of kubectl get secrets and kubectl get configmaps, if you are using either of them, and to validate whether the number of data items you expected is listed correctly.
I recognized that in my case the problem showed up when creating secrets with multiple data items: the output of kubectl get secrets <secret_name> had only 1 data item, while I had specified 2 items in my secret_name_definition.yaml. This comes down to the difference between kubectl create -f secret_name_definition.yaml and kubectl create secret <secret_name> --from-file=secret_name_definition.yaml. With the former, all the items listed in the data section of the yaml are treated as key-value pairs, so the correct number of items shows up when we query with kubectl get secrets secret_name. With the latter, only the first data item in secret_name_definition.yaml is evaluated as a key-value pair, so kubectl get secrets secret_name shows only 1 data item, and that is when we see the error "CreateContainerConfigError".
Note that this problem wouldn't occur if we used kubectl create secret <secret_name> with the option --from-literal=, because then we repeat the prefix --from-literal= for every key-value pair we want to define.
Similarly, with the --from-file= option we still have to specify the prefix multiple times, once per key-value pair; the difference is that --from-literal takes the raw value, while in the yaml definition the value has to be the encoded form (i.e. the value of the key is now echo raw_value | base64 of it).
For example, say the keys are "username" and "password": if creating the secret using the command kubectl create -f secret_definition.yaml, we need to have the values for both "username" and "password" base64-encoded, as mentioned in the "Create a Secret" section of https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
I would like to highlight the "Note:" section in https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/ Also, https://kubernetes.io/docs/concepts/configuration/secret/ has a very clear explanation of creating secrets.
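For illustration, a sketch of creating a two-item secret either from literals or from files (the secret name cloudsql-db-credentials matches the deployment snippet below; the values and filenames are made up):
# Two equivalent ways to create a secret with the keys "username" and "password"
kubectl create secret generic cloudsql-db-credentials \
  --from-literal=username=dbuser \
  --from-literal=password='S3cr3t!'
# or, with one file per key (the key defaults to the filename unless overridden)
kubectl create secret generic cloudsql-db-credentials \
  --from-file=username=./username.txt \
  --from-file=password=./password.txt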
Also make sure that the deployment.yaml now has the correct definition for this container:
env:
  - name: DB_HOST
    value: 127.0.0.1
  # These secrets are required to start the pod.
  # [START cloudsql_secrets]
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  # [END cloudsql_secrets]
As noted by others, kubectl describe pods <pod_name> helps, but in my case I only understood that the container wasn't being created in the first place, and the output of kubectl logs <pod_name> -c <container_name> didn't help much.
Recently I encountered the same CreateContainerConfigError, and after a little debugging I found out that it was because I was using a Kubernetes secret in my Deployment yaml that was not actually present/created in the namespace where the pods were being created.
Also, after reading the previous answer, I think we can be fairly sure that this particular error is focused around Kubernetes secrets!
Check that your secrets and config maps (kubectl get [secrets|configmaps]) already exist and are correctly referenced in the YAML descriptor file; in both cases an incorrect secret/configmap (not created, misspelled, etc.) results in CreateContainerConfigError.
As already pointed out in the other answers, you can check the error with kubectl describe pod [pod name], and something like this should appear at the bottom of the output:
Warning Failed 85s (x12 over 3m37s) kubelet, gke-****-default-pool-300d3c89-9jkz
Error: configmaps "config-map-1" not found
UPDATE: From #alexis-wilke
The list of events can be ephemeral in some versions and this message disappears quickly. As a rule of thumb, check the events list immediately when booting a pod; or, if you have CreateContainerConfigError without events, double-check your secrets and config maps, as they can leave the pod in this state with no trace at some point.
I also ran into this issue, and the problem was due to an environment variable using a field ref, on a controller. The other controller and the worker were able to resolve the reference. We didn't have time to track down the cause of the issue and wound up tearing down the cluster and rebuilding it.
- name: DD_KUBERNETES_KUBELET_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Apr 02 16:35:46 ip-10-30-45-105.ec2.internal sh[1270]: E0402 16:35:46.502567 1270 pod_workers.go:186] Error syncing pod 3eab4618-5564-11e9-a980-12a32bf6e6c0 ("datadog-datadog-spn8j_monitoring(3eab4618-5564-11e9-a980-12a32bf6e6c0)"), skipping: failed to "StartContainer" for "datadog" with CreateContainerConfigError: "host IP unknown; known addresses: [{Hostname ip-10-30-45-105.ec2.internal}]"
Try using the option --from-env-file instead of --from-file and see if the problem disappears. I got the same error, and looking into the pod events suggested that the key-value pairs inside the mysecrets.txt file were not read properly. With --from-file, Kubernetes takes the whole content of the file as the value and the filename as the key, so the line inside the file is not split into a key-value pair. To avoid this issue, you need to read the file as an environment-variable file, as shown below.
mysecrets.txt:
MYSQL_PASSWORD=dfsdfsdfkhk
For example:
kubectl create secret generic secret-name --from-env-file=mysecrets.txt
kubectl create configmap configmap-name --from-env-file=myconfigs.txt
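For completeness, a sketch of consuming such a secret in a pod via envFrom (the secret name secret-name matches the command above; the image and command are only illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: env-from-secret-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo MYSQL_PASSWORD is set; sleep 3600"]
      envFrom:
        - secretRef:
            name: secret-name   # each line of mysecrets.txt becomes an env var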