I am fairly new to Kubernetes. I wanted to know whether a program running inside a pod can find out the namespace in which the pod is running.
Let me explain my use case. There are two pods in my application's namespace. One has to be a StatefulSet and must have at least 3 replicas. The other pod (say POD-A) can be a normal Deployment. Now POD-A needs to talk to a particular instance of the StatefulSet. I read in an article that it can be done using this address format -
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local.
In my application, the namespace part changes dynamically with each deployment. So can this value be read dynamically from a program running inside a pod?
Please help me if I have misunderstood something here. Any alternate/simpler solutions are also welcome. Thanks in advance!
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api has an example of this and more.
env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
You can get the namespace of a pod using the Downward API: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api (expose the namespace as an environment variable).
Or, if a service account is mounted in the pod, the namespace the pod is running in can be found in the file /var/run/secrets/kubernetes.io/serviceaccount/namespace.
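To illustrate, here is a minimal sketch in Java that reads the namespace from either source and builds the per-replica DNS name. The env variable MY_POD_NAMESPACE and the names my-statefulset / my-service are illustrative assumptions, not part of the question:

import java.nio.file.Files;
import java.nio.file.Paths;

public class PodNamespace {
    private static final String SA_NS_FILE =
            "/var/run/secrets/kubernetes.io/serviceaccount/namespace";

    public static void main(String[] args) throws Exception {
        // Prefer the env variable injected via the Downward API, if present.
        String ns = System.getenv("MY_POD_NAMESPACE");
        if (ns == null) {
            // Fall back to the service account mount, available in most pods.
            ns = new String(Files.readAllBytes(Paths.get(SA_NS_FILE))).trim();
        }
        // Address a specific StatefulSet replica (ordinal 0 here).
        String host = "my-statefulset-0.my-service." + ns + ".svc.cluster.local";
        System.out.println(host);
    }
}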
Related
How can I use the cluster CIDR (the IP address range containing all pod IP addresses) inside a pod? (Automatically, without putting it manually in an environment variable, ConfigMap, or anywhere else.)
Example of what I would like to do:
env:
  - name: CLUSTER_CIDR
    valueFrom: # ??? does a ConfigMap like this exist ??? Or any other source for clusterCidr?
      configMapKeyRef:
        key: clusterCidr
        name: ...
my best partial solution:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: GUESSED_CLUSTER_CIDR
  value: $(POD_IP)/16
I can find clusterCidr inside the ConfigMap full-cluster-state in the namespace kube-system, somewhere in the value of the key full-cluster-state. But this value is a string containing JSON, and it looks vendor-specific (in currentState.rkeConfig.services.kubeController.clusterCidr). I cannot extract part of that value in deployment.yaml, and I would prefer a vendor-independent solution.
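For completeness, a vendor-specific (RKE) sketch of pulling that value out on the command line, assuming jq is available and the JSON path matches the description above:

kubectl get configmap full-cluster-state -n kube-system \
  -o jsonpath="{.data['full-cluster-state']}" \
  | jq -r '.currentState.rkeConfig.services.kubeController.clusterCidr'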
I have no idea where to find the ComponentConfig mentioned in the related issues, and I do not even know whether it is still in alpha.
Related k8s issues (all closed without a clear fix):
https://github.com/kubernetes/kubernetes/issues/25533
https://github.com/kubernetes/kubernetes/issues/46508
About finding the CIDR of the cluster manually:
How do you find the cluster & service CIDR of a Kubernetes cluster?
Older, about finding it programmatically: Kubernetes - Find out service ip range CIDR programatically
Using the CIDR for a trusted proxy, which is what I want to do: Kubernetes: add ingress internal ip to environment
I'm afraid there is no vendor-independent solution for this. Also, ComponentConfig is still an alpha feature, so there is not enough proper documentation.
However, the best thing right now (even if it's not universal) is to use:
$ kubectl cluster-info dump | grep -m 1 cluster-cidr
Then you can create a new ConfigMap with the cluster CIDR value that was output, and refer to it in the pod as described in the docs.
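A minimal sketch of that approach; the ConfigMap name cluster-info, the key clusterCidr, and the CIDR value itself are placeholders for whatever the dump printed:

kubectl create configmap cluster-info --from-literal=clusterCidr=10.42.0.0/16

env:
  - name: CLUSTER_CIDR
    valueFrom:
      configMapKeyRef:
        name: cluster-info
        key: clusterCidr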
Even if the concept is the same, you will have to apply a different approach in different environments. Unfortunately, as of today there is no single solution.
As for additional information, I have already made a small comparison between kubeadm and Google Kubernetes Engine regarding CIDR. You can check out this thread for more information.
In a Hazelcast-based system deployed on Kubernetes, using auto-discovery by service label, I'm trying to get the Pod name that each node is deployed on. What I'm getting is indeed the pod name for the first node, but the service name for the second. For example, octane-deployment-blue-123c44bfb-xyzab (pod) and then 10-20-30-100.my-service.svc.cluster.local (service).
I'm fetching the values with:
HazelcastInstance hazelcastInstance = getInstance();
Member localMember = hazelcastInstance.getCluster().getLocalMember();
String name = localMember.getSocketAddress().getAddress().getHostName();
It seems that the name is determined by the auto-discovery mechanism.
Any way of getting this value?
The simple answer for how to get the Pod name is to skip the Hazelcast part entirely and just read the Pod name from the env variable HOSTNAME, or with the use of the Downward API like this:
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
That said, it's very weird that you receive the service name by executing localMember.getSocketAddress().getAddress().getHostName(). Seems like a bug to me. You can raise an issue with the steps to reproduce here: https://github.com/hazelcast/hazelcast-kubernetes
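In application code that is then just an environment lookup; a small sketch, assuming MY_POD_NAME was injected as above (HOSTNAME defaults to the pod name in Kubernetes):

public class PodName {
    public static void main(String[] args) {
        // Read the Pod name without going through Hazelcast.
        String podName = System.getenv("MY_POD_NAME");
        if (podName == null) {
            // HOSTNAME is set to the pod name by the kubelet by default.
            podName = System.getenv("HOSTNAME");
        }
        System.out.println(podName);
    }
}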
I'm trying to create YAML for a deployment with Kubernetes. I am using the same script for different environments, which are separated by namespace. Now, I need to access the namespace name within the deployment YAML, such as
"name": "$(namespace)"
in the YAML file. Is it possible to do so?
edit: sorry, I may have misunderstood your question. If you want access to the current namespace in which the Pod is running, you can inject it into the Pod's environment via an env: valueFrom: construct, described in greater detail here:
env:
  - name: MY_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
Omit the namespace: from the YAML and provide it to kubectl, as in kubectl --namespace=foo1 create -f my-thing.yaml (assuming, of course, you're using kubectl; the idea behind the answer is the same, just the mechanics will change if you use a different method).
You can also specify the default namespace in ~/.kube/config in the context, and address it that way: kubectl --context=server-foo1, which also allows associating different credentials with the different namespaces. They all boil down to the same effect in the end; it's just a matter of which is the most convenient for your case.
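For example, such a context with a default namespace can be created with standard kubectl commands (the cluster and user names are placeholders):

kubectl config set-context server-foo1 --cluster=my-cluster --user=my-user --namespace=foo1
kubectl config use-context server-foo1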
The most extreme(?) form is that you can also have multiple configs and switch between them via env KUBECONFIG=$TMPDIR/foo1.yaml kubectl create -f my-thing.yaml
I searched the documentation but was unable to find out whether I can run a pod in Kubernetes without the scheduler. If anyone can help with any pointers, it would be appreciated.
Update:
I can attach a label to a node and make the pod stick to that label, but that would still go through the scheduler. Is there any method that uses neither a DaemonSet nor the scheduler?
The scheduler just sets the spec.nodeName field on the pod. You can set that to a node name yourself if you know which node you want to run your pod, though you are then responsible for ensuring the node has sufficient resources to run the pod (enough memory, free host ports, etc… all things the scheduler is normally responsible for checking before it assigns a pod to a node)
You want static pods
Static pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. They do not have an associated replication controller; the kubelet daemon itself watches them and restarts them when they crash.
You can simply add a nodeName attribute to the pod definition. This field is normally filled out by the scheduler, so it is not mandatory.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
  nodeName: node01
If the pod has already been created and is in Pending state, you have to delete and recreate it with the new field; editing the nodeName attribute in place is not permitted.
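In practice that is just a delete and re-apply; the manifest file name nginx-pod.yaml is a placeholder:

kubectl delete pod nginx
kubectl apply -f nginx-pod.yaml  # the manifest now includes nodeName: node01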
All the answers given here would require a scheduler to run.
I think what you want to do is create the manifest file of the pod and put it in the default manifest directory of the node in question.
Default directory is /etc/kubernetes/manifests/
The pod will automatically be created and if you wish to delete it, just delete the manifest file.
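A minimal sketch of such a static pod manifest, saved for example as /etc/kubernetes/manifests/nginx-static.yaml on the target node (file and pod names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-static
spec:
  containers:
    - name: nginx
      image: nginx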
You can simply add a nodeName attribute to the pod definition
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: controlplane
  containers:
    - image: nginx
      name: nginx
Now, an important point: check the nodes listed by the command below, and then assign the pod to one of them:
kubectl get nodes
Forgive my ignorance, but I can't seem to find a way of using a YAML file to deploy a single-container pod (read: kind: Pod). It appears the only way to do it is to use a deployment YAML file (read: kind: Deployment) with a replica count of 1.
Is there really no way?
The reason I ask is that it would be nice to put everything in source control, including the one-offs like databases.
It would be awesome if there were a site listing all the available options you can use in a YAML file (like Vagrant's Vagrantfile). There isn't one, right?
Thanks!
You should be able to find pod yaml files easily. For example, the documentation has an example of a Pod being created.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec: # specification of the pod's contents
  restartPolicy: Never
  containers:
    - name: hello
      image: "ubuntu:14.04"
      command: ["/bin/echo", "hello", "world"]
One thing to note is that if a Deployment or a ReplicaSet creates a resource like a Pod on your behalf, there is no reason why you couldn't create the same resource directly.
kubectl get pod <pod-name> -o yaml should give you the YAML spec of a created pod.
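As for discovering the available options, kubectl itself can enumerate the fields of any resource; for example:

kubectl explain pod.spec             # documents each field of the pod spec
kubectl explain pod.spec.containers  # drills into a nested field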
There is also Kubernetes Charts, which serves as a repository of configuration for complex applications, using the Helm package manager. It would serve you well for deploying more complex applications.
Never mind, I figured it out. It's possible. You just use the multi-container YAML file (example found here: https://kubernetes.io/docs/user-guide/pods/multi-container/) but specify only one container.
I'd tried it before but had inadvertently mistyped the yaml formatting.
Thanks rubber ducky!