Can I access the VM's IP inside a pod?

I have installed Kubernetes on my VM, and I run my application inside a pod on Kubernetes.
Now I want to get the IP address of my VM in my application, which is running inside the pod.
Is there any way to do this?

Here's how you can get the host IP:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["ash", "-c", "echo $MY_HOST_IP && sleep 3600"]
    env:
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
kubectl logs busybox will print the IP for you.

Yes, it is possible to get the host IP and the hostname from a pod running on a specific node. Use the fields below to get those values:
spec.nodeName - the name of the node to which the pod is scheduled
status.hostIP - the IP of the node to which the pod is assigned
The following link is helpful: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
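The linked page covers the volume-file variant of the downward API. Note that spec.nodeName and status.hostIP are available only as environment variables, not as downwardAPI volume files, so a volume sketch like the one below (the pod and volume names are made up for illustration) is limited to metadata fields such as the pod name:
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-demo  # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/pod_name && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: pod_name
        fieldRef:
          fieldPath: metadata.name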


Kubernetes: how to get pod name I'm running in?

I want to run Python code inside a pod. The pod is created by Airflow, which I don't control.
I want to somehow get the name of the pod I'm running in.
How can it be done?
You can tell Kubernetes to inject an env variable for you:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
and then in Python you can access it like:
import os

pod_name = os.environ['MY_POD_NAME']
Or you can just read /etc/hostname, which defaults to the pod name:
with open('/etc/hostname') as f:
    pod_name = f.read().strip()  # strip the trailing newline
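Since the container command in the pod above is just env, a quick way to confirm the variable is set is to check the pod's logs:
kubectl logs dapi-test-pod | grep MY_POD_NAME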
Exposing Pod and Cluster Vars to Containers
Let's say you need some data about the Pod or the Kubernetes environment in your application, for example to add Pod information as metadata to logs, such as:
Pod IP
Pod Name
Service Account of Pod
NOTE: All of the Pod fields above can be exposed through the pod's manifest.
There are two ways to expose Pod fields to a running container:
Environment variables
Volume files
Example of environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-env
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      - name: log-sidecar
        image: busybox
        command: [ 'sh', '-c' ]
        args:
        - while true; do
            echo sync app logs;
            printenv POD_NAME POD_IP POD_SERVICE_ACCOUNT;
            sleep 20;
          done;
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
Kubernetes Docs
Source
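To verify the sidecar sees those values once the Deployment above is applied, tail its logs:
kubectl logs deployment/nginx-deployment-env -c log-sidecar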
One way can be:
In the Kubernetes cluster you are operating in, run
kubectl get pods
then look at the YAML of a pod with
kubectl get pod <pod-name> -o yaml
and in that output find the container images being used by the pod.
Identify the image tag that belongs to your container. When you build your image, the image has a name and a tag, which is pushed to some registry from where your pod pulls the image and starts the container.
You need to find that image name and tag in the pod YAML using the commands above.
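If you only need the image references, a jsonpath query keeps it to one line (<pod-name> is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].image}'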
Try the below :
# List all pods in all namespaces
kubectl get pods --all-namespaces
# List all pods in the current namespace
kubectl get pods -o wide
Then you can see more details using:
kubectl describe pod <pod-name>
You can also refer to the following Stack Overflow question and its answers.
get-current-pod

How to get consistent names for Pods in Kubernetes

I want to run this Docker image in Kubernetes:
https://hub.docker.com/_/rabbitmq
This is not a problem, and it is running. The problem is that I need to pass switches to the "docker run" command. For this image, when starting the container in Docker, you would run:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
The YAML file will then look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rb
  namespace: rabbittest
spec:
  selector:
    matchLabels:
      app: rb
  replicas: 1
  template:
    metadata:
      labels:
        app: rb
    spec:
      containers:
      - name: rb-container
        env:
        - name: HOSTNAME
          value: "rbnode001"
        image: rabbitmq:3-management
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        volumeMounts:
        - name: azure
          mountPath: /mnt/azure
        ports:
        - containerPort: 5672
      volumes:
      - name: azure
        azureFile:
          secretName: azure-secret
          shareName: rabbittest
          readOnly: false
My question is, how do I get Kubernetes to apply the --name and --hostname options when it runs the container?
First of all, you should use a StatefulSet for RabbitMQ.
In your StatefulSet, add these environment variables:
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace
- name: K8S_SERVICE_NAME
  value: rabbitmq-headless
- name: RABBITMQ_NODENAME
  value: $(POD_NAME).$(K8S_SERVICE_NAME).$(POD_NAMESPACE).svc.cluster.local
Here, K8S_SERVICE_NAME is the headless Service required by the StatefulSet.
Finally, RABBITMQ_NODENAME will hold the hostname.
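K8S_SERVICE_NAME must match an actual headless Service. A minimal sketch of one, reusing the names from the question (rabbitmq-headless, namespace rabbittest, and label app: rb are assumptions carried over):
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-headless
  namespace: rabbittest
spec:
  clusterIP: None  # headless: gives each StatefulSet pod a stable DNS record
  selector:
    app: rb
  ports:
  - name: amqp
    port: 5672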
Kubernetes's options are just different from Docker's options. Many options have equivalents in the pod spec object; some options don't have direct equivalents, or are split into multiple layers (volume mounts, published ports).
For the two options you mention:
docker run --name sets the name of the container. In Kubernetes you don't usually directly interact with containers, so you can't set their names. The metadata: {name: } sets the name of the Deployment, and generated Pods have names derived from that; the containers within a Pod also have names but it's very rare to use these.
docker run --hostname sets the name a container believes its hostname is. You almost never need this option; RabbitMQ is the notable exception. The pod spec has a hostname: option that can set this.
spec:
  template:
    spec:
      hostname: my-rabbit
      containers: [...]
As #Shahriar's answer suggests, a StatefulSet is a better match for deployments that need some backing persistent state. In this case the StatefulSet automatically sets the host name based on the pod identity, so you don't need to do anything.
(The important detail for RabbitMQ is that the hostname must be consistent across recreating the container, and if the hostname is always rb-0 from a StatefulSet that satisfies this rule. An autogenerated Docker container ID, or a variable Kubernetes Deployment pod name, will be different on every restart, and Rabbit will lose track of its state.)
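To make that concrete, a minimal StatefulSet skeleton for the question's Deployment might look like the sketch below (it reuses the names rb and rabbitmq-headless from above; resources and volumes are elided). Its single pod will always be named rb-0, with a matching stable hostname:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rb
  namespace: rabbittest
spec:
  serviceName: rabbitmq-headless  # must reference a headless Service
  replicas: 1
  selector:
    matchLabels:
      app: rb
  template:
    metadata:
      labels:
        app: rb
    spec:
      containers:
      - name: rb-container
        image: rabbitmq:3-management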

how to communicate with a daemonset pod from another pod on the same node?

I want a daemonset-redis where every node has its own cache and each deployment pod communicates with its local daemonset-redis. How do I achieve this? How do I reference the daemonset pod on the same node from within a container?
UPDATE:
I would rather not use the service option, and make sure each pod accesses its local daemonset.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: redislocal
spec:
  selector:
    matchLabels:
      name: redislocal
  template:
    metadata:
      labels:
        name: redislocal
    spec:
      hostNetwork: true
      containers:
      - name: redislocal
        image: redis:5.0.5-alpine
        ports:
        - containerPort: 6379
          hostPort: 6379
There is a way of not using a service.
You can Expose Pod Information to Containers Through Environment Variables,
and use status.hostIP to get the IP address of the node where the pod is running.
This was introduced in Kubernetes 1.7 (link).
You can add this to your pod or deployment YAML:
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
It will set a variable HOST_IP with the IP of the node the pod is running on; you can then use it to connect to the local DaemonSet.
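As a quick check from inside an application pod, assuming redis-cli is available in the image and the DaemonSet above with hostPort 6379:
redis-cli -h "$HOST_IP" -p 6379 ping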
You should define a service (selecting all Redis pods) and then communicate with Redis from other pods through it.

Kubernetes Master API Server IP

I have a k8s cluster with a pod living in it, and I have a requirement for the pod's process: the pod needs the cluster IP of the API server to manage some jobs. How can I set the API server name as an environment variable?
My pod YAML is shown below:
apiVersion: v1
kind: Pod
metadata:
  name: api-server-check
spec:
  containers:
  - name: container-1
    image: project_reg/pod:latest
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: CLUSTER_IP
      valueFrom:
        fieldRef:
          fieldPath: ???????? ### Problem is here I think.
If you have another suggestion for me, I will apply it to the pod YAML (shell script or whatever).
Thanks
You can use the cluster internal DNS to point to the Kubernetes API Server.
The API server should already be exposed as a service called "kubernetes" in the default namespace.
kubernetes.default.svc.cluster.local should resolve to the API server.
Also, if you dump the environment inside a running pod, you should see that there is already an environment variable with this information: KUBERNETES_SERVICE_HOST
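For example, from inside a pod you could check both of these (assuming curl is available in the image; the token and CA paths are the standard service-account mount):
# The injected variables:
env | grep KUBERNETES_SERVICE
# Calling the API server via its in-cluster DNS name:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc/version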

Service selection from only one pod of one statefulset

Is it possible to create a service that points only to one pod created by a StatefulSet?
The solutions that occur to me would be:
Putting the name of the pod as the provider.
Dynamic labels with the name of the pod.
As of Kubernetes 1.9 you can use: statefulset.kubernetes.io/pod-name
From the documentation:
"When the StatefulSet controller creates a Pod, it adds a label,
statefulset.kubernetes.io/pod-name, that is set to the name of the
Pod. This label allows you to attach a Service to a specific Pod in
the StatefulSet."
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label
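For example, a Service that targets only the first replica might look like this (web is an assumed StatefulSet name here, so its first pod is web-0; the port is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-0-only
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - port: 80
    targetPort: 80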
You can use the ordinal value of the StatefulSet pod to label the pod.
I would use kubectl in an initContainer to label the pods from within the pods created by the StatefulSet, and then use that label in the service selector spec.
Example init container:
initContainers:
- name: set-label
  image: lachlanevenson/k8s-kubectl:v1.8.5
  command:
  - sh
  - -c
  - '/usr/local/bin/kubectl label pod $POD_NAME select-id=${HOSTNAME##*-} --server=https://kubernetes.default --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt -n $POD_NAMESPACE'
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
example service selector:
selector:
  app: my-app
  select-id: <0|1|2 ordinal value>
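Note that for the kubectl label call in the init container to succeed, the pod's service account needs permission to patch pods. A minimal sketch of such a Role, assuming RBAC is enabled (bind it to the pod's service account with a RoleBinding):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-labeler
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "patch"]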