How to format the output in Kubernetes? - kubernetes

I want to get specific output for a command like getting the nodeports and loadbalancer of a service. How do I do that?

The question doesn't say exactly what should be retrieved from Kubernetes, but I think I can provide a good baseline.
When you use Kubernetes, you are most probably using kubectl to interact with the kube-apiserver.
Some of the commands you can use to retrieve information from the cluster:
$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME
$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME
Example:
Let's assume that you have a Service of type LoadBalancer (I've redacted some output to be more readable):
$ kubectl get service nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  clusterIP: 10.2.151.123
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30531
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: A.B.C.D
Getting a nodePort from this output could be done like this:
kubectl get svc nginx -o jsonpath='{.spec.ports[*].nodePort}'
30531
Getting a loadBalancer IP from this output could be done like this:
kubectl get svc nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
A.B.C.D
You can also use kubectl with custom-columns:
kubectl get service -o=custom-columns=NAME:.metadata.name,IP:.spec.clusterIP
NAME         IP
kubernetes   10.2.0.1
nginx        10.2.151.123
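If you need the same values in a script without calling kubectl repeatedly, you can save the object once and query the dump. A sketch using jq (assumptions: jq is installed, and svc.json here is a trimmed, hypothetical dump standing in for `kubectl get svc nginx -o json`, with 203.0.113.10 in place of the redacted A.B.C.D):

```shell
# Hypothetical trimmed dump, mirroring the redacted Service above
# (in practice: kubectl get svc nginx -o json > svc.json).
cat > svc.json <<'EOF'
{
  "spec": {
    "ports": [
      {"nodePort": 30531, "port": 80, "protocol": "TCP", "targetPort": 80}
    ],
    "type": "LoadBalancer"
  },
  "status": {
    "loadBalancer": {"ingress": [{"ip": "203.0.113.10"}]}
  }
}
EOF

# Same fields the jsonpath queries above extract:
jq -r '.spec.ports[].nodePort' svc.json              # 30531
jq -r '.status.loadBalancer.ingress[0].ip' svc.json  # 203.0.113.10
```

jq's `.ports[]` iterates the array much like jsonpath's `[*]`, so the same approach covers services with several ports.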
There are a lot of possible ways to retrieve data with kubectl, which you can read more about in kubectl get --help:
-o, --output='': Output format. One of:
json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...
See custom columns, golang template and jsonpath template.
Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output
Additional resources:
Kubernetes.io: Docs: Reference: Kubectl: Overview
Github.com: Kubernetes client: Python - if you would like to retrieve this information with Python
Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value

If you want to extract just single values, perhaps as part of a script, then what you are searching for is -o jsonpath, as in this example:
kubectl get svc service-name -o jsonpath='{.spec.ports[0].port}'
which will extract just the value of the first port listed in the service spec.
docs - https://kubernetes.io/docs/reference/kubectl/jsonpath/
If you want to extract the whole definition of an object, such as a service, then what you are searching for is -o yaml, as in this example:
kubectl get svc service-name -o yaml
which will output the whole service definition in YAML format.
If you want a more user-friendly description of a resource, such as a service, then you are searching for the describe command, as in this example:
kubectl describe svc service-name
docs - https://kubernetes.io/docs/reference/kubectl/overview/#output-options

Related

k3s - Can't access my service based on service name

I have created a service like this:
apiVersion: v1
kind: Service
metadata:
  name: amen-sc
spec:
  ports:
  - name: http
    port: 3030
    targetPort: 8000
  selector:
    component: scc-worker
I am able to access this service from within pods of the same cluster (and namespace) using the IP address I get from kubectl get svc, but I am not able to access it using the service name, e.g. curl amen-sc:3030.
Please advise what could possibly be wrong.
I intend to expose certain pods only within my cluster and access them using the service-name:port format.
Make sure you have the DNS service configured and its corresponding pods running:
kubectl get svc -n kube-system -l k8s-app=kube-dns
and
kubectl get pods -n kube-system -l k8s-app=kube-dns
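If those DNS pods are healthy, the service name resolves via the standard Service DNS scheme. A minimal sketch of how the in-cluster name is composed (assumptions: the service lives in the default namespace and the cluster uses the default cluster.local domain):

```shell
# The full in-cluster DNS name of a Service is:
#   <service>.<namespace>.svc.<cluster-domain>
svc=amen-sc
ns=default            # assumption: service is in the default namespace
domain=cluster.local  # assumption: default cluster domain
echo "${svc}.${ns}.svc.${domain}"   # amen-sc.default.svc.cluster.local
```

From a pod in the same namespace the short name amen-sc works too, because the pod's /etc/resolv.conf search list fills in the rest; from inside a pod you could verify with curl amen-sc.default.svc.cluster.local:3030.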

Kubernetes Service get Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access the service on a Kubernetes node.
The application just listens for HTTP on 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
      - name: prometheus
        image: 34342/hello_world
        ports:
        - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9897
    nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure the pod works, since when I log in to the pod with
kubectl exec -it exporter-test-* -- sh and run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application itself should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or run curl nodeIp:30001 or curl clusterIp:8080 on a k8s node, I get Connection refused.
Has anyone had the same issue before? Appreciate any help! Thanks!
You are mixing two things here. The NodePort is the port the application is available on from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080.
Welcome to the community!
There is no issue with the behaviour you observed.
In short, a kubernetes cluster (Minikube in this case) has its own isolated network with internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which made it accessible on localhost:30001. You can check this by running on your host:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
exporter-test-service   NodePort   10.111.191.159   <none>        8080:30001/TCP   2m45s
# Test:
$ curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
exporter-test-service   LoadBalancer   10.111.191.159   10.111.191.159   8080:30001/TCP   18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
Why some of the options don't work:
Connection to the service by its DNS name + NodePort: the NodePort links the host IP and the node port to the service port inside the kubernetes cluster, but the internal DNS is not accessible outside the cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS name with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. the curlimages/curl image) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
I've attached useful links which will help you understand this difference.
Edit: DNS inside the deployed pod
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces, not only on the pod's loopback interface.
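The loopback-vs-all-interfaces difference can be reproduced outside Kubernetes. A minimal sketch using Python's built-in HTTP server as a stand-in for the real application (an assumption; port 9897 matches the example above):

```shell
# Bind to all interfaces (0.0.0.0) instead of loopback only (127.0.0.1);
# inside a pod, this is what makes the port reachable via the pod IP.
python3 -m http.server 9897 --bind 0.0.0.0 >/dev/null 2>&1 &
srv=$!
sleep 1
# The server answers on every interface, loopback included:
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9897/
kill "$srv"
```

With --bind 127.0.0.1 instead, only connections originating inside the same network namespace succeed, which is exactly why kubectl exec + curl 127.0.0.1:9897 worked while the Service did not.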

Kubernetes/kustomize Service endpoints abnormal behavior

We're using kustomize with kubernetes on our project.
I'm trying to implement access to an external service using an IP address, as described in this link:
https://medium.com/@ManagedKube/kubernetes-access-external-services-e4fd643e5097
Here's my service
---
kind: Service
apiVersion: v1
metadata:
  name: pgsql
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
    name: "pg"
  selector: {}
---
apiVersion: v1
kind: Endpoints
metadata:
  name: pgsql
subsets:
- addresses:
  - ip: 1.1.1.1
  ports:
  - port: 5432
    name: "pg"
When I apply it with kubectl (kubectl apply -k ...) I get a warning:
Warning: kubectl apply should be used on resource created by either
kubectl create --save-config or kubectl apply
However, this warning does not prevent the Endpoints and Service from being created.
kubectl get endpoints
NAME    ENDPOINTS           AGE
pgsql   172.12.xx.yy:5432   3m27s
Unfortunately, the IP address is different from the one I put in my yml (1.1.1.1).
If I apply a second time:
kubectl apply -k ...
kubectl get endpoints
NAME    ENDPOINTS      AGE
pgsql   1.1.1.1:5432   10s
I no longer get the warning above.
The endpoint is the one expected.
I expect the endpoint address to be the exact one (1.1.1.1:5432) from the first apply.
Any suggestions?
Thanks
It probably comes from the empty selector. Could you try removing it completely?
This is supposed to work only if your service doesn't have any selector

Is there a command to check which pods have a service applied

I have made a service.yaml and have created the service.
kind: Service
apiVersion: v1
metadata:
  name: cass-operator-service
spec:
  type: LoadBalancer
  ports:
  - port: 9042
    targetPort: 9042
  selector:
    name: cass-operator
Is there a way to check which pods the service applies to?
Using the above service, I want to connect to a cluster in Google Cloud running Kubernetes/Cassandra on external_ip:9042, but I am not able to.
kubectl get svc shows:
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
cass-operator-service   LoadBalancer   10.51.247.82   34.91.214.233   9042:31902/TCP   73s
So the service is probably listening on 9042 but forwarding to pods on 31902. I want both ports to be 9042. Is that possible?
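For context, each entry in a Service's ports list carries up to three different port numbers. An annotated sketch based on the output above (with a LoadBalancer service you connect to EXTERNAL-IP:port, i.e. 34.91.214.233:9042; 31902 is only the nodePort opened on each node):

```yaml
ports:
- port: 9042        # port the Service listens on (cluster IP / load-balancer frontend)
  targetPort: 9042  # port the container listens on; traffic is forwarded here
  nodePort: 31902   # port opened on every node, auto-allocated from 30000-32767
```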
The best way is to follow labels and selectors.
Your pod has a labels section, and the service uses it in its selector section; there are some examples in:
https://kubernetes.io/docs/concepts/services-networking/service/
You can find the selectors of your service with:
kubectl describe svc cass-operator-service
You can list your labels with:
kubectl get pods --show-labels
You can get pods by querying with a selector like the following:
kubectl get pods -l name=cass-operator
You can also list all pods which are serving traffic behind a kubernetes service by running:
kubectl get ep <service name> -o=jsonpath='{.subsets[*].addresses[*].ip}' | tr ' ' '\n' | xargs -I % kubectl get pods -o=name --field-selector=status.podIP=%
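The jsonpath half of that pipeline can be tried offline on a saved Endpoints object. A sketch using jq on a hypothetical dump (assumptions: jq is installed, and ep.json stands in for `kubectl get ep <service name> -o json`; the IPs are made up):

```shell
# Hypothetical Endpoints dump with two backing pods.
cat > ep.json <<'EOF'
{
  "subsets": [
    {"addresses": [{"ip": "10.8.0.11"}, {"ip": "10.8.0.12"}]}
  ]
}
EOF

# Equivalent of '{.subsets[*].addresses[*].ip}', one IP per line,
# ready to feed into the --field-selector=status.podIP=... lookup:
jq -r '.subsets[].addresses[].ip' ep.json
```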

kubectl expose pods using selector as a NodePort service via command-line

I have a requirement to expose pods using selector as a NodePort service from command-line. There can be one or more pods. And the service needs to include the pods dynamically as they come and go. For example,
NAMESPACE   NAME          READY   STATUS    RESTARTS   AGE
rakesh      rakesh-pod1   1/1     Running   0          4d18h
rakesh      rakesh-pod2   1/1     Running   0          4d18h
I can create a service that selects the pods I want using a service-definition yaml file -
apiVersion: v1
kind: Service
metadata:
  name: np-service
  namespace: rakesh
spec:
  type: NodePort
  ports:
  - name: port1
    port: 30005
    targetPort: 30005
    nodePort: 30005
  selector:
    abc.property: rakesh
However, I need to achieve the same via commandline. I tried the following -
kubectl -n rakesh expose pod --selector="abc.property: rakesh" --port=30005 --target-port=30005 --name=np-service --type=NodePort
and a few other variations without success.
Note: I understand that currently, there is no way to specify the node-port using command-line. A random port is allocated between 30000-32767. I am fine with that as I can patch it later to the required port.
Also note: These pods are not part of a deployment unfortunately. Otherwise, expose deployment might have worked.
So, it kind of boils down to selecting pods based on selector and exposing them as a service.
My kubernetes version: 1.12.5 upstream
My kubectl version: 1.12.5 upstream
You can do:
kubectl expose $(kubectl get po -l abc.property=rakesh -o name) --port 30005 --name np-service --type NodePort
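For reference, that expose command generates roughly the following Service (a sketch, assuming abc.property=rakesh is the label on the exposed pod; kubectl expose copies the pod's labels into the selector, and the nodePort is auto-assigned from 30000-32767):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: np-service
  namespace: rakesh
spec:
  type: NodePort
  ports:
  - port: 30005
    targetPort: 30005
    # nodePort: auto-assigned; can be patched to a fixed value afterwards
  selector:
    abc.property: rakesh
```

You can then pin the port afterwards, e.g. kubectl patch svc np-service -n rakesh --type='json' -p='[{"op":"replace","path":"/spec/ports/0/nodePort","value":30005}]'.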
You can create a NodePort service with a specified name using the create service nodeport command:
kubectl create service nodeport NAME [--tcp=port:targetPort] [--dry-run=server|client|none]
For example:
kubectl create service nodeport myservice --node-port=31000 --tcp=3000:80
You can check the kubectl reference for more:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-service-nodeport-em-