I have several pods which belong to the same service. I need to share a value between all pods in this service.
Per my understanding, a shared volume won't work well, because the pods may end up on different nodes.
Having any kind of database (even the most lightweight one) exposed as a service to share this value would be overkill (though it's probably my backup plan).
I was wondering whether there is some k8s native way to share the value.
Put the values in a ConfigMap and mount it in the Pods. You can include the values of the ConfigMap in the containers of a Pod either as a volume or as environment variables.
See Configure a Pod to Use a ConfigMap in the Kubernetes documentation.
If the Pods need to update the shared values they can write to the ConfigMap (requires Kubernetes API permissions). However, in this case the ConfigMap must be included as a volume, since environment variable values from a ConfigMap are not updated when the ConfigMap changes.
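For example, a minimal sketch of that approach (the ConfigMap name shared-values, the key value, and the mount path /etc/shared are made up for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-values          # hypothetical ConfigMap holding the shared value
data:
  value: "42"
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/shared/value && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /etc/shared   # the shared value appears as the file /etc/shared/value
  volumes:
  - name: shared
    configMap:
      name: shared-values

When the ConfigMap is updated, the kubelet eventually refreshes the mounted file; environment variables populated from the same ConfigMap would not be refreshed.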
Related
Is it possible to create a Kubernetes service and pod in different namespaces, for example, having myweb-svc pointing to the actual running myweb-pod, while myweb-svc and myweb-pod are in different namespaces?
You can write a YAML manifest to create both the Pod and the Service in their respective namespaces. You need to specify the namespace field in the metadata section of both the Pod and the Service objects to indicate the namespace in which they should be created.
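As a rough sketch (the names and the namespace are made up), the namespace goes into the metadata of both objects; note that a Service with a selector can only select Pods in its own namespace, so with this approach both end up in the same namespace:

apiVersion: v1
kind: Pod
metadata:
  name: myweb-pod
  namespace: web             # namespace of the Pod
  labels:
    app: myweb
spec:
  containers:
  - name: myweb
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
  namespace: web             # a Service with a selector must live in the same namespace as the Pods it selects
spec:
  selector:
    app: myweb
  ports:
  - port: 80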
Also, if you want to point your Service to a Service in a different namespace or on another cluster, you can use a Service without a Pod selector.
Refer to this link on Understanding Kubernetes Objects for more information.
Kubernetes API objects that are connected together at the API layer generally need to be in the same namespace. So a Service can only connect to Pods in its own namespace; if a Pod references a ConfigMap or a Secret or a PersistentVolumeClaim, those need to be in the same namespace as well.
I want to know if different nodes can share Secrets and ConfigMaps. I went through the Kubernetes documentation at https://kubernetes.io/docs/concepts/configuration/secret/ but could not find exact information.
All Kubernetes resources are stored centrally in the etcd database and accessed through the Kubernetes API server. When using ConfigMaps or Secrets, the data inside them is directly embedded into the resource itself (i.e. unlike a PersistentVolume, for example, they do not just reference data stored somewhere else). This is also the reason why the size of a ConfigMap or Secret is limited.
As such they can be used on all Kubernetes nodes. When you have a Pod which is using them, the ConfigMaps or Secrets will be mapped to the node where the Pod is scheduled. So the files from the ConfigMap or Secret might exist on a given node, but those are just copies of the original ConfigMap or Secret stored centrally in the etcd database.
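For example, a Pod can consume a Secret no matter which node it is scheduled on; the Secret name db-credentials and the key password below are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $DB_PASSWORD && sleep 3600"]
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials   # hypothetical Secret, stored centrally in etcd and resolved via the API server
          key: password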
I am using Filebeat as a DaemonSet, and I would like each generated pod to export to a single port for Logstash.
Is there an approach to be used for this?
No. You cannot provide a different ConfigMap to the Pods of the same DaemonSet or Deployment. If you want each Pod of the DaemonSet to have a different configuration, you can mount a local volume (using hostPath) so that all the Pods take their configuration from that path, which can differ on each node. Or you can deploy different DaemonSets with different ConfigMaps and select different nodes for each of them.
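A rough sketch of the hostPath approach (the image tag, paths and names are assumptions): every copy of the Pod mounts the same host path, and each node keeps its own config file there.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.17.0
        volumeMounts:
        - name: node-config
          mountPath: /usr/share/filebeat/filebeat.yml
          subPath: filebeat.yml
      volumes:
      - name: node-config
        hostPath:
          path: /etc/filebeat     # each node provides its own filebeat.yml in this directory
          type: Directory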
As you can read here:
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
...a copy of a Pod based on a single template, and this is the reason why you cannot specify different ConfigMaps to be used by different Pods managed by a DaemonSet controller.
As an alternative you can configure many different DaemonSets, where each one is responsible for running a copy of the Pod specified in its template only on a specific set of nodes.
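A minimal sketch of that approach (labels, names and the ConfigMap are made up): each DaemonSet references its own ConfigMap and uses a nodeSelector so it only runs on the matching nodes.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: agent-group-a            # a second DaemonSet, agent-group-b, would reference a different ConfigMap
spec:
  selector:
    matchLabels:
      app: agent
      group: a
  template:
    metadata:
      labels:
        app: agent
        group: a
    spec:
      nodeSelector:
        config-group: a          # only nodes labelled config-group=a run this copy
      containers:
      - name: agent
        image: busybox
        volumeMounts:
        - name: config
          mountPath: /etc/agent
      volumes:
      - name: config
        configMap:
          name: agent-config-a   # hypothetical ConfigMap for this group of nodes

The nodes would have to be labelled accordingly, e.g. kubectl label node <node-name> config-group=a.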
Another alternative is using static pods:
It is possible to create Pods by writing a file to a certain directory watched by Kubelet. These are called static pods. Unlike DaemonSet, static Pods cannot be managed with kubectl or other Kubernetes API clients. Static Pods do not depend on the apiserver, making them useful in cluster bootstrapping cases. Also, static Pods may be deprecated in the future.
The whole procedure of creating a static Pod is described here.
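As a rough sketch: on a kubeadm-provisioned node the kubelet typically watches /etc/kubernetes/manifests (configurable via staticPodPath in the kubelet configuration), so placing an ordinary Pod manifest there is enough. The file name and Pod below are made up for illustration.

# /etc/kubernetes/manifests/static-web.yaml (path assumed; check the kubelet's staticPodPath)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80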
I hope it helps.
You can use a ConfigMap containing the config for each node and expose spec.nodeName as an environment variable to your Pods. Then each Pod knows which node it is running on and can decide which config it loads.
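A minimal sketch of exposing the node name via the Downward API (the ConfigMap with one key per node is an assumption for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: node-aware-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo running on $NODE_NAME && sleep 3600"]
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName   # Downward API: the name of the node this Pod was scheduled to
    volumeMounts:
    - name: per-node-config
      mountPath: /etc/config         # the Pod can then pick e.g. /etc/config/$NODE_NAME.conf at startup
  volumes:
  - name: per-node-config
    configMap:
      name: per-node-config          # hypothetical ConfigMap with one key per node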
I am new to Kubernetes. Can somebody please explain why there are multiple volume types, like:
configMap
emptyDir
projected
secret
downwardAPI
persistentVolumeClaim
For a few of them I am able to figure it out, like why we need a Secret instead of a ConfigMap.
For the rest, I am not able to understand the need.
Your question is too generic to answer fully; here are a few comments off the top of my head.
If the deployed Pod or its containers need configuration data, then you use a configMap resource; if there are secrets or passwords, the obvious choice is a secret resource.
If the deployed Pods want to use values such as POD_NAME, which are generated at schedule or run time, then they need to use the downwardAPI resource.
emptyDir shares its lifecycle with the deployed Pod: if the Pod dies, all of the data stored in the emptyDir volume is gone. If you want to persist the data, then you need to use the persistentVolume, persistentVolumeClaim and storageClass resources.
For further information, see k8s volumes.
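To make the lifecycle difference concrete, a sketch of a Pod using both kinds of volume (the claim name data-pvc is an assumption and would have to exist already):

apiVersion: v1
kind: Pod
metadata:
  name: storage-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch    # emptyDir: deleted together with the Pod
    - name: data
      mountPath: /data           # PVC-backed: survives Pod restarts and rescheduling
  volumes:
  - name: scratch
    emptyDir: {}
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc        # hypothetical pre-existing PersistentVolumeClaim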
configMap is used to make application-specific configuration data available to the container at run time.
downwardAPI is used to make Kubernetes metadata (like Pod namespace, Pod name, Pod IP, Pod labels, etc.) available to the container at run time.
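For instance, a downwardAPI volume can project that metadata into files inside the container (a minimal sketch; names and paths are made up):

apiVersion: v1
kind: Pod
metadata:
  name: metadata-demo
  labels:
    app: demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/labels /etc/podinfo/namespace && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels      # Pod labels exposed as a file
      - path: namespace
        fieldRef:
          fieldPath: metadata.namespace   # Pod namespace exposed as a file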
I created ConfigMap using kubectl and I can also see it using:
kubectl get cm
I am just curious where Kubernetes stores this data/information within the cluster. Does it store it in etcd? How do I view it, if it is stored in etcd?
Does it store it in any file/folder location, or anywhere else?
I mean, where does Kubernetes store it internally?
Yes, etcd is used for storing ConfigMaps and other resources you deploy to the cluster. See https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html and note https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264
You can view the content of the ConfigMap with kubectl get cm -o yaml, i.e. through the k8s API directly, as illustrated in https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/ You don't need to look inside etcd to see the content of a ConfigMap.
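The output is essentially the ConfigMap as the API server returns it, roughly of this shape (the name and data here are made up; real output also includes fields such as uid and creationTimestamp):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config              # hypothetical ConfigMap
  namespace: default
data:
  app.properties: |
    log.level=info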