How to check what port a pod is listening on with kubectl, without looking at the Dockerfile? - kubernetes

I have a pod running and want to port forward so I can access the pod from the internal network.
I don't know what port it is listening on though; there is no service yet.
I describe the pod:
$ kubectl describe pod queue-l7wck
Name: queue-l7wck
Namespace: default
Priority: 0
Node: minikube/192.168.64.3
Start Time: Wed, 18 Dec 2019 05:13:56 +0200
Labels: app=work-queue
chapter=jobs
component=queue
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/queue
Containers:
queue:
Container ID: docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63
Image: gcr.io/kuar-demo/kuard-amd64:blue
Image ID: docker-pullable://gcr.io/kuar-demo/kuard-amd64@sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 18 Dec 2019 05:14:02 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mbn5b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-mbn5b:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mbn5b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/queue-l7wck to minikube
Normal Pulling 31h kubelet, minikube Pulling image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Pulled 31h kubelet, minikube Successfully pulled image "gcr.io/kuar-demo/kuard-amd64:blue"
Normal Created 31h kubelet, minikube Created container queue
Normal Started 31h kubelet, minikube Started container queue
Even the JSON has nothing:
$ kubectl get pods queue-l7wck -o json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"creationTimestamp": "2019-12-18T03:13:56Z",
"generateName": "queue-",
"labels": {
"app": "work-queue",
"chapter": "jobs",
"component": "queue"
},
"name": "queue-l7wck",
"namespace": "default",
"ownerReferences": [
{
"apiVersion": "apps/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "ReplicaSet",
"name": "queue",
"uid": "a9ec07f7-07a3-4462-9ac4-a72226f54556"
}
],
"resourceVersion": "375402",
"selfLink": "/api/v1/namespaces/default/pods/queue-l7wck",
"uid": "af43027d-8377-4227-b366-bcd4940b8709"
},
"spec": {
"containers": [
{
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imagePullPolicy": "Always",
"name": "queue",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"volumeMounts": [
{
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
"name": "default-token-mbn5b",
"readOnly": true
}
]
}
],
"dnsPolicy": "ClusterFirst",
"enableServiceLinks": true,
"nodeName": "minikube",
"priority": 0,
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"tolerations": [
{
"effect": "NoExecute",
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"tolerationSeconds": 300
},
{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"tolerationSeconds": 300
}
],
"volumes": [
{
"name": "default-token-mbn5b",
"secret": {
"defaultMode": 420,
"secretName": "default-token-mbn5b"
}
}
]
},
"status": {
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:14:02Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-12-18T03:13:56Z",
"status": "True",
"type": "PodScheduled"
}
],
"containerStatuses": [
{
"containerID": "docker://13780475170fa2c0d8e616ba1a3b1554d31f404cc0a597877e790cbf01838e63",
"image": "gcr.io/kuar-demo/kuard-amd64:blue",
"imageID": "docker-pullable://gcr.io/kuar-demo/kuard-amd64#sha256:1ecc9fb2c871302fdb57a25e0c076311b7b352b0a9246d442940ca8fb4efe229",
"lastState": {},
"name": "queue",
"ready": true,
"restartCount": 0,
"started": true,
"state": {
"running": {
"startedAt": "2019-12-18T03:14:02Z"
}
}
}
],
"hostIP": "192.168.64.3",
"phase": "Running",
"podIP": "172.17.0.2",
"podIPs": [
{
"ip": "172.17.0.2"
}
],
"qosClass": "BestEffort",
"startTime": "2019-12-18T03:13:56Z"
}
}
How do you check what port a pod is listening on with kubectl?
Update
If I exec into the pod and run netstat -tulpn as suggested in the comments I get:
$ kubectl exec -it queue-pfmq2 -- sh
~ $ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/kuard
But this method is not using kubectl.

Your container image has a port opened during the build (port 8080 in your case) using the EXPOSE instruction in the Dockerfile. Since the exposed port is baked into the image, Kubernetes does not keep track of it: Kubernetes does not need to take any steps to open the port itself.
Since Kubernetes is not responsible for opening the port, you won't be able to find the listening port using kubectl or by checking the pod YAML.
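If you have the image available locally, one way to see which port(s) were baked in at build time is to inspect the image config directly, for example with a Docker runtime (a quick local check, not something kubectl can do for you):
$ docker inspect --format '{{.Config.ExposedPorts}}' gcr.io/kuar-demo/kuard-amd64:blue
which should print something like map[8080/tcp:{}] if the image exposes 8080.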

Try a combination of kubectl and a Linux command to get the port the container is listening on:
kubectl exec <pod name here> -- netstat -tulpn
You can further pipe the result through grep to narrow the findings if required, e.g.:
kubectl exec <pod name here> -- netstat -tulpn | grep "search string"
Note: this will work only if your container's base image ships the netstat command; as per your Update section it seems it does.
The solution above is simply a smarter use of the two commands you already ran: first exec into the container (interactively, with -it), then list the listening ports inside the container.
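Applied to the pod from the question, that would look roughly like this:
kubectl exec queue-l7wck -- netstat -tulpn | grep LISTEN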

One answer suggested running netstat inside the container.
This only works if netstat is part of the container's image.
As an alternative, you can run netstat on the host, executing it in the container's network namespace.
Get the container's process ID on the host (this is the application running inside the container; one way to find it is sketched after this answer). Then change to the container's network namespace (run as root on the host):
host# PS1='container# ' nsenter -t <PID> -n
Setting the PS1 environment variable shows a different prompt while you are in the container's network namespace.
Get the listening ports in the container:
container# netstat -na
....
container# exit
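One way to find the container's PID in the first place, assuming a Docker runtime as in the question, is to read the container ID from the pod status with kubectl and then resolve it to a PID on the node:
$ kubectl get pod queue-l7wck -o jsonpath='{.status.containerStatuses[0].containerID}'
host# docker inspect --format '{{.State.Pid}}' <container-id>
Other runtimes have equivalent commands (for example crictl inspect on CRI-based setups).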

If whoever created the image added the right OpenShift label, then you can use the following command (unfortunately your image does not have the label):
skopeo inspect docker://image-url:tag | grep expose-service
e.g.
skopeo inspect docker://quay.io/redhattraining/loadtest:v1.0 | grep expose-service
output:
"io.openshift.expose-services": "8080:http"
So 8080 is the port exposed by the image.
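To list all labels on an image you could also pipe the full skopeo output through jq, for example (assuming jq is installed):
skopeo inspect docker://quay.io/redhattraining/loadtest:v1.0 | jq '.Labels'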
Hope this helps

Normally a container will be able to run curl, so you can use curl to check whether a port is open:
for port in 8080 50000 443 8443; do curl -I --connect-timeout 1 127.0.0.1:$port; done
This can be run with sh.
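To run the same loop in a pod without opening an interactive shell, a rough one-liner (using the pod name from the question above, and assuming the image ships both sh and curl) would be:
kubectl exec queue-l7wck -- sh -c 'for port in 8080 50000 443 8443; do curl -sI --connect-timeout 1 127.0.0.1:$port; done'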

Related

Unable to open service via kubectl proxy

➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
airflow-flower-service ClusterIP 172.20.119.107 <none> 5555/TCP 54d
airflow-service ClusterIP 172.20.76.63 <none> 80/TCP 54d
backend-service ClusterIP 172.20.39.154 <none> 80/TCP 54d
➜ kubectl proxy
xdg-open http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy/#q=ip-192-168-114-35
and it fails with
Error trying to reach service: 'dial tcp 10.0.102.174:80: i/o timeout'
However if I expose the service via kubectl port-forward I can open the service in the browser
kubectl port-forward service/backend-service 8080:80 -n edna
xdg-open HTTP://localhost:8080
So how do I open the service via that long URL (similar to how we open the Kubernetes dashboard)?
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
If I query the API with curl I see the output:
➜ curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/backend-service/
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "backend-service",
"namespace": "edna",
"selfLink": "/api/v1/namespaces/edna/services/backend-service",
"uid": "7163dd4e-e76d-4517-b0fe-d2d516b5dc16",
"resourceVersion": "6433582",
"creationTimestamp": "2020-08-14T05:58:45Z",
"labels": {
"app.kubernetes.io/instance": "backend-etl"
},
"annotations": {
"argocd.argoproj.io/sync-wave": "10",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{\"argocd.argoproj.io/sync-wave\":\"10\"},\"labels\":{\"app.kubernetes.io/instance\":\"backend-etl\"},\"name\":\"backend-service\",\"namespace\":\"edna\"},\"spec\":{\"ports\":[{\"port\":80,\"protocol\":\"TCP\",\"targetPort\":80}],\"selector\":{\"app\":\"edna-backend\"},\"type\":\"ClusterIP\"}}\n"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 80
}
],
"selector": {
"app": "edna-backend"
},
"clusterIP": "172.20.39.154",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {
}
}
}
Instead of your URL:
http://127.0.0.1:8001/api/v1/namespaces/edna/services/http:airflow-service:/proxy
Try without 'http:'
http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy
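You can verify the corrected path with curl before opening it in a browser, for example:
curl http://127.0.0.1:8001/api/v1/namespaces/edna/services/airflow-service/proxy/
If a Service exposes several ports, the general form is .../services/<name>:<port-name-or-number>/proxy/.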

eureka pod turns to pending state after running for a period of time in kubernetes cluster

I deployed a eureka pod in a Kubernetes cluster (v1.15.2). Today the pod turned to a pending state even though its actual state is running. Other services could not access eureka, and the pod status icon indicates that the pod is in a pending state. This is my StatefulSet manifest:
{
"kind": "StatefulSet",
"apiVersion": "apps/v1beta2",
"metadata": {
"name": "eureka",
"namespace": "dabai-fat",
"selfLink": "/apis/apps/v1beta2/namespaces/dabai-fat/statefulsets/eureka",
"uid": "92eefc3d-4601-4ebc-9414-8437f9934461",
"resourceVersion": "20195760",
"generation": 21,
"creationTimestamp": "2020-02-01T16:55:54Z",
"labels": {
"app": "eureka"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "eureka"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "eureka"
}
},
"spec": {
"containers": [
{
"name": "eureka",
"image": "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0",
"ports": [
{
"name": "server",
"containerPort": 8761,
"protocol": "TCP"
},
{
"name": "management",
"containerPort": 8081,
"protocol": "TCP"
}
],
"env": [
{
"name": "APP_NAME",
"value": "eureka"
},
{
"name": "POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
},
{
"name": "APP_OPTS",
"value": " --spring.application.name=${APP_NAME} --eureka.instance.hostname=${POD_NAME}.${APP_NAME} --registerWithEureka=true --fetchRegistry=true --eureka.instance.preferIpAddress=false --eureka.client.serviceUrl.defaultZone=http://eureka-0.${APP_NAME}:8761/eureka/"
},
{
"name": "APOLLO_META",
"valueFrom": {
"configMapKeyRef": {
"name": "fat-config",
"key": "apollo.meta"
}
}
},
{
"name": "ENV",
"valueFrom": {
"configMapKeyRef": {
"name": "fat-config",
"key": "env"
}
}
}
],
"resources": {
"limits": {
"cpu": "2",
"memory": "1Gi"
},
"requests": {
"cpu": "2",
"memory": "1Gi"
}
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 10,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"imagePullSecrets": [
{
"name": "regcred"
}
],
"schedulerName": "default-scheduler"
}
},
"serviceName": "eureka-service",
"podManagementPolicy": "Parallel",
"updateStrategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"partition": 0
}
},
"revisionHistoryLimit": 10
},
"status": {
"observedGeneration": 21,
"replicas": 1,
"readyReplicas": 1,
"currentReplicas": 1,
"updatedReplicas": 1,
"currentRevision": "eureka-5976977b7d",
"updateRevision": "eureka-5976977b7d",
"collisionCount": 0
}
}
This is the describe output of the pod in the pending state:
$ kubectl describe pod eureka-0
Name: eureka-0
Namespace: dabai-fat
Priority: 0
Node: uat-k8s-01/172.19.104.233
Start Time: Mon, 23 Mar 2020 18:40:11 +0800
Labels: app=eureka
controller-revision-hash=eureka-5976977b7d
statefulset.kubernetes.io/pod-name=eureka-0
Annotations: <none>
Status: Running
IP: 172.30.248.8
IPs: <none>
Controlled By: StatefulSet/eureka
Containers:
eureka:
Container ID: docker://5e5eea624e1facc9437fef739669ffeaaa5a7ab655a1297c4acb1e4fd00701ea
Image: registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0
Image ID: docker-pullable://registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka@sha256:7cd4878ae8efec32984a2b9eec623484c66ae11b9449f8306017cadefbf626ca
Ports: 8761/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 23 Mar 2020 18:40:18 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 2
memory: 1Gi
Environment:
APP_NAME: eureka
POD_NAME: eureka-0 (v1:metadata.name)
APP_OPTS: --spring.application.name=${APP_NAME} --eureka.instance.hostname=${POD_NAME}.${APP_NAME} --registerWithEureka=true --fetchRegistry=true --eureka.instance.preferIpAddress=false --eureka.client.serviceUrl.defaultZone=http://eureka-0.${APP_NAME}:8761/eureka/
APOLLO_META: <set to the key 'apollo.meta' of config map 'fat-config'> Optional: false
ENV: <set to the key 'env' of config map 'fat-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xnrwt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady True
PodScheduled True
Volumes:
default-token-xnrwt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xnrwt
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 360s
node.kubernetes.io/unreachable:NoExecute for 360s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 16h default-scheduler Successfully assigned dabai-fat/eureka-0 to uat-k8s-01
Normal Pulling 16h kubelet, uat-k8s-01 Pulling image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0"
Normal Pulled 16h kubelet, uat-k8s-01 Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/soa-eureka:v1.0.0"
Normal Created 16h kubelet, uat-k8s-01 Created container eureka
Normal Started 16h kubelet, uat-k8s-01 Started container eureka
How could this happen? What should I do to avoid this situation? After I restarted the eureka pod the problem disappeared, but I still want to know what caused it.
Sounds like a Kubernetes bug? Try to reproduce it on the current version of Kubernetes. You can also dive into the kubelet logs to see if there is anything useful in them.
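For example, you could compare the phase the API server reports with the kubelet logs on the node that ran the pod (uat-k8s-01 here), roughly like this (assuming a systemd-managed kubelet):
$ kubectl get pod eureka-0 -n dabai-fat -o jsonpath='{.status.phase}'
# on the node uat-k8s-01:
$ journalctl -u kubelet --since "2 days ago" | grep eureka-0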

How to access pod externally, i.e. bind localhost:8888 to cluster IP

I have a service running on localhost:8888 and trying to bind that to cluster's public IP so that I can open it up from my web browser. I created another service using the following yaml file:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "example-service"
},
"spec": {
"ports": [{
"port": 8888,
"targetPort": 8888
}],
"selector": {
"app": "example"
},
"type": "LoadBalancer"
}
}
Then I do kubectl describe services example-service:
Name: example-service
Namespace: spark-cluster
Labels: <none>
Selector: app=example
Type: LoadBalancer
IP: 10.3.0.66
LoadBalancer Ingress: a123b456c789.us-west-1.elb.amazonaws.com
Port: <unset> 8888/TCP
NodePort: <unset> 32767/TCP
Endpoints: <none>
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
14s 14s 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer
11s 11s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer
When I open up a123b456c789.us-west-1.elb.amazonaws.com:8888 in my web browser, it doesn't load. What are the correct steps to access my pod externally?
With your setup the application is available on the IP address of one of your nodes on port 32767 (the NodePort). If you want to set the NodePort yourself you need to change the manifest like this:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "example-service"
},
"spec": {
"ports": [{
"port": 8888,
"targetPort": 8888
"nodePort": 8888
}],
"selector": {
"app": "example"
},
"type": "LoadBalancer"
}
}
http://kubernetes.io/docs/user-guide/services/#type-nodeport
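Once the Service is created you can look up a node IP and the assigned NodePort to test it, for example:
kubectl get nodes -o wide
kubectl get service example-service -o jsonpath='{.spec.ports[0].nodePort}'
Note that by default a nodePort has to fall inside the cluster's NodePort range (30000-32767 unless it was changed), so a value like 8888 may be rejected by the API server.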

Google Cloud Container: Can not connect to mongodb service

I created a mongodb replication controller and a mongo service. I tried to connect to it from a different mongo pod just to test the connection. But that does not work
root@mongo-test:/# mongo mongo-service/mydb
MongoDB shell version: 3.2.0
connecting to: mongo-service/mydb
2015-12-09T11:05:55.256+0000 E QUERY [thread1] Error: network error while attempting to run command 'isMaster' on host 'mongo-service:27017' :
connect@src/mongo/shell/mongo.js:226:14
@(connect):1:6
exception: connect failed
I am not sure what I have done wrong in the configuration. I may have missed something here.
kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
mongo mongo mongo:latest name=mongo 1 9s
kubectl get pods
NAME READY STATUS RESTARTS AGE
mongo-6bnak 1/1 Running 0 1m
mongo-test 1/1 Running 0 21m
kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
kubernetes 10.119.240.1 <none> 443/TCP <none> 23h
mongo-service 10.119.254.202 <none> 27017/TCP name=mongo,role=mongo 1m
I configured the RC and Service with the following configs
mongo-rc
{
"metadata": {
"name": "mongo",
"labels": { "name": "mongo" }
},
"kind": "ReplicationController",
"apiVersion": "v1",
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": { "name": "mongo" }
},
"spec": {
"volumes": [
{
"name": "mongo-disk",
"gcePersistentDisk": {
"pdName": "mongo-disk",
"fsType": "ext4"
}
}
],
"containers": [
{
"name": "mongo",
"image": "mongo:latest",
"ports": [{
"name":"mongo",
"containerPort": 27017
}],
"volumeMounts": [
{
"name": "mongo-disk",
"mountPath": "/data/db"
}
]
}
]
}
}
}
}
mongo-service:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "mongo-service"
},
"spec": {
"ports": [
{
"port": 27017,
"targetPort": "mongo"
}
],
"selector": {
"name": "mongo",
"role": "mongo"
}
}
}
Almost a bit embarrassing.
The issue was that I used the selector "role" in the service but did not define it on the RC.
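A quick way to catch this kind of mismatch is to check whether the Service has any endpoints and whether its selector actually matches any pods, for example:
kubectl get endpoints mongo-service
kubectl get pods --selector name=mongo,role=mongo
With the original manifests the second command returns nothing, because the pods only carry the name=mongo label.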

Kubernetes Service does not get its external IP address

When I build a Kubernetes service in two steps (1. replication controller; 2. expose the replication controller) my exposed service gets an external IP address:
initially:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.241.95 80/TCP app=app-1 7s
and after about 30s:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.241.95 104.155.93.79 80/TCP app=app-1 35s
But when I do it in one step, providing the Service and the ReplicationController to kubectl create -f dir_with_2_files, the service gets created but it does not get an External IP:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
app-1 10.67.251.171 <none> 80/TCP app=app-1 2m
The <none> under External IP worries me.
For the Service I use the JSON file:
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "app-1"
},
"spec": {
"selector": {
"app": "app-1"
},
"ports": [
{
"port": 80,
"targetPort": 8000
}
]
}
}
and for the ReplicationController:
{
"apiVersion": "v1",
"kind": "ReplicationController",
"metadata": {
"name": "app-1"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"app": "app-1"
}
},
"spec": {
"containers": [
{
"name": "service",
"image": "gcr.io/sigma-cairn-99810/service:latest",
"ports": [
{
"containerPort": 8000
}
]
}
]
}
}
}
}
and to expose the Service manually I use the command:
kubectl expose rc app-1 --port 80 --target-port=8000 --type="LoadBalancer"
If you don't specify the type of a Service it defaults to ClusterIP. If you want the equivalent of expose you must:
1. Make sure your Service selects pods from the RC via matching label selectors.
2. Make the Service type=LoadBalancer.
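A minimal way to apply the second point to the Service that was already created with kubectl create -f (a sketch, using the app-1 Service from the question):
kubectl patch svc app-1 -p '{"spec":{"type":"LoadBalancer"}}'
After a short while the service should be assigned an external IP, just as it was with kubectl expose.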