What is the IP field in the output of the "kubectl describe pod" command? - kubernetes

This is my Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: pod-nginx-container
spec:
  containers:
  - name: nginx-alpine-container-1
    image: nginx:alpine
    ports:
    - containerPort: 80
Below is the output of my "kubectl describe pod" command:
C:\Users\so.user\Desktop\>kubectl describe pod pod-nginx-container
Name: pod-nginx-container
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 15 Feb 2021 23:44:22 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.0.29
IPs:
IP: 10.244.0.29
Containers:
nginx-alpine-container-1:
Container ID: cri-o://01715e35d3d809bdfe70badd53698d6e26c0022d16ae74f7053134bb03fa73d2
Image: nginx:alpine
Image ID: docker.io/library/nginx@sha256:01747306a7247dbe928db991eab42e4002118bf636dd85b4ffea05dd907e5b66
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 15 Feb 2021 23:44:24 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sxlc9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-sxlc9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sxlc9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m52s default-scheduler Successfully assigned default/pod-nginx-container to minikube
Normal Pulled 7m51s kubelet Container image "nginx:alpine" already present on machine
Normal Created 7m50s kubelet Created container nginx-alpine-container-1
Normal Started 7m50s kubelet Started container nginx-alpine-container-1
I couldn't understand what the IP address mentioned in the "IPs:" field of this output is. I am sure this is not my Node's IP, so I am wondering what IP this is. And please note that I have not exposed a Service; in fact, there is no Service in my Kubernetes cluster, so I am not able to figure this out.
Also, how are "Port" and "Host Port" different? From Googling I could understand a little bit, but if someone can explain with an example, that would be great.
NOTE: I have already Googled "explanation of kubectl describe pod command" and tried searching a lot, but I couldn't find my answers, so I am posting this question.

Pods
A pod in Kubernetes is the smallest deployment unit. A pod is a group of one or more containers. The containers in a pod share storage and network resources.
Pod networking
In Kubernetes, each pod is assigned a unique IP address; this IP address is local to the cluster. Containers within the same pod use localhost to communicate with each other. Networking with other pods or services is done with IP networking.
When you run kubectl describe pod <podname>, you see the IP address assigned to the pod.
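A couple of other ways to see the same pod IP (a quick sketch, using the pod name from the manifest above):
# wide output also shows the pod IP and the node it runs on
kubectl get pod pod-nginx-container -o wide
# or print just the IP from the pod's status
kubectl get pod pod-nginx-container -o jsonpath='{.status.podIP}'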
See Pod networking
Application networking in a cluster
A pod is a single instance of an application. You typically run an application as a Deployment with one or more replicas (instances). When upgrading a Deployment with a new version of your container image, new pods are created - this means that all your instances get new IP addresses.
To keep a stable network address for your application, create a Service - and always use the service name when sending traffic to other applications within the cluster. The traffic addressed to a service is load balanced to the replicas (instances).
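For example, a minimal Service could look roughly like this (a sketch; the name my-app and the app label are illustrative assumptions and must match your Deployment's pod labels):
apiVersion: v1
kind: Service
metadata:
  name: my-app              # other pods in the namespace can reach it at http://my-app
spec:
  selector:
    app: my-app             # selects the Deployment's pods by label
  ports:
  - port: 80                # port the Service listens on
    targetPort: 80          # containerPort of the pods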
Exposing an application outside the cluster
To expose an application to clients outside the cluster, you typically use an Ingress resource - it usually represents a load balancer (e.g. a cloud load balancer) with reverse proxy functionality - and routes traffic for specific paths to your services.
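A minimal Ingress sketch that routes a path to the Service above (the hostname is an illustrative assumption, and an Ingress controller must be running in the cluster):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: my-app.example.com      # assumed external hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app          # the Service sketched above
            port:
              number: 80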

That's the pod's IP.
Every Pod gets its own IP address.
When you create a Service, the Service will internally map to this pod's IP.
If you delete the pod and recreate it, you will notice a new IP. That's the reason why it is recommended to create a Service object, which keeps track of the pod's IP based on a label selector.
I don't know about the difference between the port and hostPort fields under the container spec.
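For reference, a sketch of how hostPort sits next to containerPort in a Pod spec (the pod name and hostPort value are illustrative assumptions, not from the question):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport-demo        # hypothetical name for illustration
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80            # the port the container listens on ("Port" in describe output)
      hostPort: 8080               # publishes that port on the node's own IP ("Host Port")
With hostPort set, the pod is reachable at <node-IP>:8080; when it shows as 0, as in the question's output, no host port is bound and the container port is only reachable via the pod IP or a Service.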

PodIP is the local IP of the pod within the cluster. Each pod gets a dynamic IP allocated to it.
You can see the explanation with the kubectl explain command:
kubectl explain po.status.podIP
IP address allocated to the pod. Routable at least within the cluster.
Empty if not yet allocated.

Related

Pod stuck in Pending state when trying to schedule it on AWS Fargate

I have an EKS cluster to which I've added support to work in hybrid mode (in other words, I've added a Fargate profile to it). My intention is to run only specific workloads on AWS Fargate while keeping the EKS worker nodes for other kinds of workload.
To test this out, my Fargate profile is defined to be:
Restricted to a specific namespace (let's say: mynamespace)
Has a specific label that pods need to match in order to be scheduled on Fargate (the label is: fargate: myvalue)
For testing k8s resources, I'm trying to deploy a simple nginx deployment, which looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: mynamespace
  labels:
    fargate: myvalue
spec:
  selector:
    matchLabels:
      app: nginx
      version: 1.7.9
      fargate: myvalue
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
        version: 1.7.9
        fargate: myvalue
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
When I try to apply this resource, I get the following:
$ kubectl get pods -n mynamespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-596c594988-x9s6n 0/1 Pending 0 10m <none> <none> 07c651ad2b-7cf85d41b2424e529247def8bda7bf38 <none>
Pod stays in the Pending state and it is never scheduled to the AWS Fargate instances.
This is a pod describe output:
$ kubectl describe pod nginx-deployment-596c594988-x9s6n -n mynamespace
Name: nginx-deployment-596c594988-x9s6n
Namespace: mynamespace
Priority: 2000001000
PriorityClassName: system-node-critical
Node: <none>
Labels: app=nginx
eks.amazonaws.com/fargate-profile=myprofile
fargate=myvalue
pod-template-hash=596c594988
version=1.7.9
Annotations: kubernetes.io/psp: eks.privileged
Status: Pending
IP:
Controlled By: ReplicaSet/nginx-deployment-596c594988
NominatedNodeName: 9e418415bf-8259a43075714eb3ab77b08049d950a8
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-784d2 (ro)
Volumes:
default-token-784d2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-784d2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
One thing that I can conclude from this output is that the correct Fargate profile was chosen:
eks.amazonaws.com/fargate-profile=myprofile
Also, I see that some value is added to the NOMINATED NODE field, but I'm not sure what it represents.
Any ideas or common problems that might be worth troubleshooting in this case? Thanks.
It turns out the problem was in the networking setup of the private subnets associated with the Fargate profile all along.
To give more info, here is what I initially had:
EKS cluster with several worker nodes where I've assigned only public subnets to the EKS cluster itself
When I tried to add a Fargate profile to the EKS cluster, I ran into a current limitation of Fargate: it is not possible to associate a profile with public subnets. In order to solve this, I created private subnets with the same tag as the public ones so that the EKS cluster is aware of them.
What I forgot was that I needed to enable connectivity from the VPC private subnets to the outside world (I was missing a NAT gateway). So I created a NAT gateway in a public subnet that is associated with EKS and added an additional entry to the route table associated with the private subnets, which looks like this:
0.0.0.0/0 nat-xxxxxxxx
This solved the problem I had above, although I'm not sure about the real reason why an AWS Fargate profile needs to be associated only with private subnets.
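Done by hand with the AWS CLI, the fix boils down to roughly the following (the subnet, allocation and route-table IDs are placeholders):
# allocate an Elastic IP and create a NAT gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-XXXX
# send all outbound traffic from the private subnets' route table through the NAT gateway
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-XXXX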
If you use the community module, all of this can be taken care of by the following config:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.44.0"
name = "vpc-module-demo"
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
single_nat_gateway = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_nat_gateway = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_vpn_gateway = false
enable_dns_hostnames = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
enable_dns_support = true # needed for fargate (https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf#page=135&zoom=100,96,764)
tags = {
"Name" = "terraform-eks-demo-node"
"kubernetes.io/cluster/${var.cluster-name}" = "shared"
}
}

How do I access a service on a kubernetes node from another node on the same cluster?

My service description:
kubectl describe service app-checklot --namespace=app-test-gl
Name: app-checklot
Namespace: app-test-gl
Labels: app=app-checklot
chart=app-checklot-0.1.0
heritage=Tiller
release=chkl
Annotations: <none>
Selector: app=app-checklot,release=chkl
Type: ClusterIP
IP: 10.99.252.76
Port: https 11080/TCP
TargetPort: 11080/TCP
Endpoints: 85.101.213.102:11080,85.101.213.103:11080
Session Affinity: None
Events: <none>
I am able to access the pods separately using their individual IPs:
http://85.101.213.102:11080/service
http://85.101.213.103:11080/service
Also the service using the IP (this needs to be configured from another node by means of the URL):
http://10.99.252.76:11080/service
What I would want is to access the service (app-checklot) using the service name in the URL - so that I needn't update the URL every time. Is this possible? If so, how?
From Documentation:
For example, if you have a Service called "my-service" in a Kubernetes
Namespace called "my-ns", a DNS record for "my-service.my-ns" is
created. Pods which exist in the "my-ns" Namespace should be able to
find it by simply doing a name lookup for "my-service". Pods which
exist in other Namespaces must qualify the name as "my-service.my-ns".
The result of these name lookups is the cluster IP.
Another application, deployed to the same namespace, would be able to call http://app-checklot:11080/service.
Yes, from within the cluster your service should be available at:
http://app-checklot.app-test-gl:11080/service
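A quick way to verify this from inside the cluster (the curl image is an assumption; any image that ships curl works):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://app-checklot.app-test-gl:11080/service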

Kubernetes, k8s: how to make a service URL?

I am learning k8s. My question is: how do I get a service URL in k8s the way the minikube command "minikube service xxx --url" does?
The reason I ask is that when a pod goes down and is created/initiated again, there should be no need to change the URL if clients visit the service URL. When
I deploy a pod as NodePort, I can access the pod with the host IP and port, but if it is recreated, the port changes.
My case is illustrated below: I have
one master(172.16.100.91) and
one node(hostname node3, 172.16.100.96)
I create the pods and services as below: hellocomm is deployed as NodePort, and helloext is deployed as ClusterIP. hellocomm and helloext are both
Spring Boot hello world applications.
docker build -t jshenmaster2/hellocomm:0.0.2 .
kubectl run hellocomm --image=jshenmaster2/hellocomm:0.0.2 --port=8080
kubectl expose deployment hellocomm --type NodePort
docker build -t jshenmaster2/helloext:0.0.1 .
kubectl run helloext --image=jshenmaster2/helloext:0.0.1 --port=8080
kubectl expose deployment helloext --type ClusterIP
[root@master2 shell]# kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hellocomm NodePort 10.108.175.143 <none> 8080:31666/TCP 8s run=hellocomm
helloext ClusterIP 10.102.5.44 <none> 8080/TCP 2m run=helloext
[root@master2 hello]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hellocomm-54584f59c5-7nxp4 1/1 Running 0 18m 192.168.136.2 node3
helloext-c455859cc-5zz4s 1/1 Running 0 21m 192.168.136.1 node3
In the above, my pod is deployed on node3 (172.16.100.96), so I can access hellocomm at 172.16.100.96:31666/hello.
With this scenario, one can easily see that when node3 goes down and a new pod is created/initiated, the port changes as well,
so my client loses the connection. I do not want this solution.
My current question is: since helloext is deployed as ClusterIP and it is also a service as shown above, does that mean the ClusterIP
10.102.5.44 and port 8080 would be the service URL, http://10.102.5.44:8080/hello?
Do I need to create the service with a yaml file again? What is the difference between a service created by command and one created by a yaml file? How do I write
the following yaml file if I have to create the service by yaml?
Below is the yaml definition template I need to fill. How do I fill it?
apiVersion: v1
kind: Service
metadata:
  name: string helloext
  namespace: string default
  labels:
  - name: string helloext
  annotations:
  - name: string hello world
spec:
  selector: [] ?
  type: string ?
  clusterIP: string anything I could give?
  sessionAffinity: string ? (yes or no)
  ports:
  - name: string helloext
    protocol: string tcp
    port: int 8081? (port used by host machine)
    targetPort: int 8080? (spring boot uses 8080)
    nodePort: int ?
status: since I am not using loadBalancer in the deployment, I could forget this.
  loadBalancer:
    ingress:
      ip: string
      hostname: string
NodePort, as the name suggests, opens a port directly on the node (actually on all nodes in the cluster) so that you can access your service. By default the node port is assigned randomly - that's why, when the Service is deleted and re-created, you get a new one. However, you can specify a port as well (3rd paragraph here) - and you will be able to access the service on the same port even after the pod has been re-created.
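A sketch of what a fixed NodePort looks like in a Service manifest, assuming the run=hellocomm label shown by kubectl get service -o wide above:
apiVersion: v1
kind: Service
metadata:
  name: hellocomm
spec:
  type: NodePort
  selector:
    run: hellocomm        # matches the pods created by "kubectl run hellocomm"
  ports:
  - port: 8080            # port on the Service's ClusterIP
    targetPort: 8080      # containerPort of the pod
    nodePort: 31666       # fixed node port (must fall in the default 30000-32767 range)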
The clusterIP is only accessible inside the cluster, as it's a private IP. Meaning, in a default scenario you can access this service from another container / node inside the cluster. You can exec / ssh into any running container/node and try it out.
Yaml files can be version controlled, documented, templatized (Helm), etc.
Check https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#servicespec-v1-core for details on each field.
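As a rough guide to filling in the template above for helloext (a sketch based on the values in the question; clusterIP and nodePort can simply be left out for a ClusterIP service, and the label is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: helloext
  namespace: default
  labels:
    app: helloext
spec:
  type: ClusterIP
  selector:
    run: helloext         # label set by "kubectl run helloext"
  ports:
  - name: http
    protocol: TCP
    port: 8080            # port exposed on the ClusterIP
    targetPort: 8080      # Spring Boot listens on 8080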
EDIT:
More detailed info on services here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
What about creating an Ingress and pointing it to the service to access it from outside the cluster?

Google Container Engine Auto deleting services/pods

I am testing Google Container Engine and everything was fine until I found this really weird issue.
bash-3.2# kubectl get services --namespace=es
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
elasticsearch-logging 10.67.244.176 <none> 9200/TCP name=elasticsearch-logging 5m
bash-3.2# kubectl describe service elasticsearch-logging --namespace=es
Name: elasticsearch-logging
Namespace: es
Labels: k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,kubernetes.io/name=Elasticsearch
Selector: name=elasticsearch-logging
Type: ClusterIP
IP: 10.67.248.242
Port: <unnamed> 9200/TCP
Endpoints: <none>
Session Affinity: None
No events.
After exactly 5 minutes, the service was deleted automatically.
kubectl get events --namespace=es
1m 1m 1 elasticsearch-logging Service DeletingLoadBalancer {service-controller } Deleting load balancer
1m 1m 1 elasticsearch-logging Service DeletedLoadBalancer {service-controller } Deleted load balancer
Anyone got a clue why? thanks.
The label kubernetes.io/cluster-service=true is a special, reserved label that shouldn't be used by user resources. That's used by a system process that manages the cluster's addons, like the DNS and kube-ui pods that you'll see in your cluster's kube-system namespace.
The reason your service is being deleted is because the system process is checking for resources with that label, seeing one that it doesn't know about, and assuming that it's something that it started previously that isn't meant to exist anymore. This is explained a little more in this doc about cluster addons.
In general, you shouldn't have any labels that are prefixed with kubernetes.io/ on your resources, since that's a reserved namespace.
After removing the following from metadata/labels in the yaml file, the problem went away:
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Elasticsearch"

Does the Google Container Engine support DNS based service discovery?

From the kubernetes docs I see that there is a DNS-based service discovery mechanism. Does Google Container Engine support this? If so, what's the format of the DNS name to discover a service running inside Container Engine? I couldn't find the relevant information in the Container Engine docs.
The DNS name for services is as follows: {service-name}.{namespace}.svc.cluster.local.
Assuming you configured kubectl to work with your cluster, you should be able to get your service and namespace details by following the steps below.
Get your namespace
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
kube-system <none> Active
You should ignore the kube-system entry, because that is for the cluster itself. All other entries are your namespaces. By default there will be one extra namespace called default.
Get your services
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
broker-partition0 name=broker-partition0,type=broker name=broker-partition0 10.203.248.95 5050/TCP
broker-partition1 name=broker-partition1,type=broker name=broker-partition1 10.203.249.91 5050/TCP
kubernetes component=apiserver,provider=kubernetes <none> 10.203.240.1 443/TCP
service-frontend name=service-frontend,service=frontend name=service-frontend 10.203.246.16 80/TCP
104.155.61.198
service-membership0 name=service-membership0,partition=0,service=membership name=service-membership0 10.203.246.242 80/TCP
service-membership1 name=service-membership1,partition=1,service=membership name=service-membership1 10.203.248.211 80/TCP
This command lists all the services available in your cluster. So for example, if I want to get the IP address of the service-frontend I can use the following DNS: service-frontend.default.svc.cluster.local.
Verify DNS with busybox pod
You can create a busybox pod and use it to execute the nslookup command to query the DNS server.
$ kubectl create -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
EOF
Now you can do an nslookup from the pod in your cluster.
$ kubectl exec busybox -- nslookup service-frontend.default.svc.cluster.local
Server: 10.203.240.10
Address 1: 10.203.240.10
Name: service-frontend.default.svc.cluster.local
Address 1: 10.203.246.16
Here you see that the Address 1 entry is the IP of the service-frontend service, the same as the IP address listed by kubectl get services.
It should work the same way as mentioned in the doc you linked to. Have you tried that? (i.e. "my-service.my-ns")