Is it possible to access pod information from inside the container? - kubernetes

I have a Deployment configured with 5 replicas. Inside each running container, I want to know the name of the pod replica it belongs to.
When I execute:
kubectl get pods
NAME READY STATUS RESTARTS AGE
test-581957695-cbjtm 1/1 Running 3 1d
test-581957695-dnv8s 1/1 Running 1 1d
test-581957695-fv467 1/1 Running 1 1d
test-581957695-m74lc 1/1 Running 0 1d
test-581957695-s6cx0 1/1 Running 1 1d
Is it possible to get the name "test-581957695-cbjtm" from inside the container?
Thank you.

You can use environment variables to expose pod information to the container.
https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
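For example, a minimal sketch of this approach added to the pod template of your Deployment (the container name, image and the variable names MY_POD_NAME/MY_POD_NAMESPACE are just placeholders):

spec:
  template:
    spec:
      containers:
        - name: app                     # your existing container
          image: your-image             # placeholder
          env:
            - name: MY_POD_NAME         # placeholder variable name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace

Inside the container, the replica's pod name (e.g. test-581957695-cbjtm) is then available as $MY_POD_NAME.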

Another option is to use the Kubernetes Downward API. Using it, you can expose pod/container information inside your container via a DownwardAPIVolumeFile or environment variables.
Currently, these are the pieces of information you can expose:
The node’s name
The Pod’s name
The Pod’s namespace
The Pod’s IP address
The Pod’s service account name
A Container’s CPU limit
A Container's CPU request
A Container’s memory limit
A Container’s memory request
In addition, the following information is available through DownwardAPIVolumeFiles (a volume sketch is shown below):
The Pod’s labels
The Pod’s annotations
For more information, please see https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api
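For example, a rough sketch of a downwardAPI volume that exposes the Pod's name and labels as files under /etc/podinfo (the pod name, volume name, mount path and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
  labels:
    app: example
spec:
  containers:
    - name: app
      image: busybox                  # illustrative image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: true
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "name"
            fieldRef:
              fieldPath: metadata.name
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels

The container can then read its own pod name from /etc/podinfo/name and its labels from /etc/podinfo/labels.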

Related

Redis Sentinel doesn't auto discover new slave instances

I've deployed the Redis Helm chart on k8s with Sentinel enabled.
I've set up the master-replicas topology with Sentinel, meaning one master and two slaves. Each pod is successfully running both the redis and sentinel containers:
NAME READY STATUS RESTARTS AGE IP NODE
my-redis-pod-0 2/2 Running 0 5d22h 10.244.0.173 node-pool-u
my-redis-pod-1 2/2 Running 0 5d22h 10.244.1.96 node-pool-j
my-redis-pod-2 2/2 Running 0 3d23h 10.244.1.145 node-pool-e
Now, I have a Python script that connects to Redis and discovers the master by passing it the pods' IPs.
from redis import StrictRedis
from redis.sentinel import Sentinel

sentinel = Sentinel([('10.244.0.173', 26379),
                     ('10.244.1.96', 26379),
                     ('10.244.1.145', 26379)],
                    sentinel_kwargs={'password': 'redispswd'})
host, port = sentinel.discover_master('mymaster')
redis_client = StrictRedis(host=host,
                           port=port,
                           password='redispswd')
Let's suppose the master node is on my-redis-pod-0. When I do kubectl delete pod to simulate a problem that causes the pod to be lost, Sentinel will promote one of the other slaves to master, and Kubernetes will give me a new pod with redis and sentinel:
NAME READY STATUS RESTARTS AGE IP NODE
my-redis-pod-0 2/2 Running 0 3m 10.244.0.27 node-pool-u
my-redis-pod-1 2/2 Running 0 5d22h 10.244.1.96 node-pool-j
my-redis-pod-2 2/2 Running 0 3d23h 10.244.1.145 node-pool-e
The question is: how can I tell Sentinel to add this new IP to the list automatically (without code changes)?
Thanks!
Instead of using IPs, you may use the DNS entries for a headless service.
A headless service is created by explicitly specifying
clusterIP: None
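For example, a minimal sketch of such a headless Service (the service name, namespace and selector label are assumptions that should match your Redis chart):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis
spec:
  clusterIP: None          # this is what makes the Service headless
  selector:
    app: redis             # assumed label on the Redis pods
  ports:
    - name: redis
      port: 6379
    - name: sentinel
      port: 26379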
Then you will be able to use the DNS entries as shown below, where redis-0 will be the master:
#syntax
pod_name.service_name.namespace.svc.cluster.local
#Example
redis-0.redis.redis.svc.cluster.local
redis-1.redis.redis.svc.cluster.local
redis-2.redis.redis.svc.cluster.local
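With those stable DNS names, the Python snippet from the question no longer needs to track pod IPs. A sketch (the hostnames assume the service and namespace are both named redis, as above; adjust to your chart's actual names):

from redis import StrictRedis
from redis.sentinel import Sentinel

# DNS names of the StatefulSet pods survive pod restarts, unlike pod IPs
sentinel = Sentinel(
    [('redis-0.redis.redis.svc.cluster.local', 26379),
     ('redis-1.redis.redis.svc.cluster.local', 26379),
     ('redis-2.redis.redis.svc.cluster.local', 26379)],
    sentinel_kwargs={'password': 'redispswd'})

host, port = sentinel.discover_master('mymaster')
redis_client = StrictRedis(host=host, port=port, password='redispswd')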
References:
What is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?
https://www.containiq.com/post/deploy-redis-cluster-on-kubernetes

Kubectl: No resources found even though there are pods running in the namespace

I have 2 pods running in the default namespace, as shown below:
NAMESPACE NAME READY STATUS RESTARTS AGE
default alpaca-prod 1/1 Running 0 36m
default alpaca-test 1/1 Running 0 4m26s
kube-system coredns-78fcd69978-xd7jw 1/1 Running 0 23h
But when I try to get deployments I do not see any
kubectl get deployments
No resources found in default namespace.
Can someone explain this behavior?
I am running k8s on Minikube.
I think these are pods which were spawned without a Deployment, StatefulSet or DaemonSet.
You can run a pod like this with a single command, e.g.:
kubectl run nginx-test --image=nginx -n default
Pods created via a DaemonSet usually end with -xxxxx
Pods created via a Deployment usually end with -xxxxxxxxxx-xxxxx
Pods created via a StatefulSet usually end with -0, -1, etc.
Pods created without an owning resource usually have exactly the name you specified, e.g. nginx-test, nginx, etc.
So my guess is that these are standalone Pod resources (the last option). You can confirm this by checking the pods' ownerReferences, as shown below.
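A sketch of that check (replace alpaca-prod with your pod name): a standalone pod prints nothing here, while a Deployment-managed pod would print ReplicaSet.

kubectl get pod alpaca-prod -n default -o jsonpath='{.metadata.ownerReferences[*].kind}'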

Is it possible to set one constant pod name in a kubernetes deployment?

Is it possible to avoid those hashes in pod name?
> kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-ingress-nginx-controller-599c688b77-nbvds 1/1 Running 0 11d
pgadmin-756f5949ff-mbkk9 1/1 Running 0 11d
postgres-postgresql-0 1/1 Running 0 11d
redis-master-5d9cfb54f8-8pbgq 1/1 Running 43 4d
For your requirement, a StatefulSet can fulfil your needs; with a Deployment this is not possible. A StatefulSet assigns names to its pods in an incremental fashion, like pgadmin-0, pgadmin-1 and so on. I would highly recommend checking this docs section, as StatefulSets come with very useful features such as rolling out pods sequentially and deleting them one at a time, etc.
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
No, it's not possible to avoid the hash in the pod name if you are using a Deployment. You can add labels/annotations of your own and select or operate on pods with those labels.
Pods created by a StatefulSet have a unique identity that includes an ordinal, i.e. redis-master-0, redis-master-1, redis-master-2, etc. If you are running a stateful workload such as Redis, I would suggest using a StatefulSet; a minimal sketch follows.
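As a rough sketch (the names, labels and image are illustrative, and a real workload would need its own configuration), a StatefulSet that yields stable pod names such as pgadmin-0 and pgadmin-1:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pgadmin
spec:
  serviceName: pgadmin            # must reference a headless Service
  replicas: 2
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin
          image: dpage/pgadmin4   # illustrative image
          ports:
            - containerPort: 80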

Kubernetes pods are Pending, not active

If I run this:
kubectl get pods -n kube-system
I get this output:
NAME READY STATUS RESTARTS AGE
coredns-6fdd4f6856-6bl64 0/1 Pending 0 1h
coredns-6fdd4f6856-xgrbm 0/1 Pending 0 1h
kubernetes-dashboard-65c76f6c97-c69jg 0/1 Pending 0 13m
Supposedly I need a Kubernetes scheduler in order to actually launch containers? Does anyone know how to initiate a kube-scheduler?
Rather than a Kubernetes scheduler issue, this looks like not having enough resources on your nodes (or not having any nodes at all in your cluster) to schedule any workloads. You can check your nodes with:
$ kubectl get nodes
Also, you are likely not able to see any control-plane resources in the kube-system namespace because you may be using a managed service like EKS or GKE. To see why the pods are Pending, you can also describe one of them, as shown below.
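The Events section at the bottom of the describe output usually states the scheduling failure reason (e.g. no nodes available, or insufficient CPU/memory). The pod name here is taken from the output above:

$ kubectl describe pod coredns-6fdd4f6856-6bl64 -n kube-system
$ kubectl describe nodes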

Autoscaler not scaling up leaving nodes in NotReady state and pods in Unknown state

I am running a cluster on GKE with a single node pool. It has 3 nodes and can scale from 1 to 99 nodes. The cluster uses the nginx-ingress controller.
On this cluster, I want to deploy apps. An app is scoped by a namespace and consists of 3 deployments and one ingress (defining paths to access the application from the internet). Each deployment runs a single replica of a container.
Deploying a couple of apps works fine, but deploying many apps (requiring the node pool to scale up) breaks everything:
All pods start having warnings (including those successfully deployed earlier)
kubectl get pods --namespace bcd
NAME READY STATUS RESTARTS AGE
actions-664b7d79f5-7qdkw 1/1 Unknown 1 35m
actions-664b7d79f5-v8s2m 1/1 Running 1 18m
core-85cb74f89b-ns49z 1/1 Unknown 1 35m
core-85cb74f89b-qqzfp 1/1 Running 1 18m
nlu-77899ddbf-8pd7k 1/1 Running 1 27m
All nodes become NotReady:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-clients-projects-default-pool-f9af73d4-gzwr NotReady <none> 42m v1.9.7-gke.6
gke-clients-projects-default-pool-f9af73d4-p5l2 NotReady <none> 21m v1.9.7-gke.6
gke-clients-projects-default-pool-f9af73d4-wnxc NotReady <none> 37m v1.9.7-gke.6
Deleting the namespace to remove all resources from the cluster also seems to fail: after a long while the pods remain, still in an Unknown state.
How can I safely add more apps and let the cluster autoscale?
The reason seems to be that, without knowing the resources needed by each pod, the scheduler schedules them onto any available node, potentially exhausting the available resources and putting the Docker daemon in an inconsistent state.
The solution is to specify resource requests and limits: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container
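For example, a sketch of one of the app's container specs with requests and limits (the container name, image and values are placeholders to adjust per workload):

spec:
  containers:
    - name: actions
      image: my-registry/actions:latest   # placeholder image
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m
          memory: 512Mi

With requests in place, the scheduler only puts a pod on a node that has enough free capacity, and the GKE cluster autoscaler can tell when it needs to add nodes for pending pods.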