Kubernetes/kustomize Service endpoints abnormal behavior

We're using kustomize with kubernetes on our project.
I'm trying to implement access to an external service using an IP address, as described in this link:
https://medium.com/@ManagedKube/kubernetes-access-external-services-e4fd643e5097
Here's my service:
---
kind: Service
apiVersion: v1
metadata:
  name: pgsql
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
      name: "pg"
  selector: {}
---
apiVersion: v1
kind: Endpoints
metadata:
  name: pgsql
subsets:
  - addresses:
      - ip: 1.1.1.1
    ports:
      - port: 5432
        name: "pg"
When I apply it with the kubectl command (kubectl apply -k ...) I get a warning:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
However, this warning does not prevent the creation of the Endpoints and the Service.
kubectl get endpoints
NAME    ENDPOINTS           AGE
pgsql   172.12.xx.yy:5432   3m27s
Unfortunately, the IP address is different from the one I put in my yml (1.1.1.1).
If I apply a second time
kubectl apply -k ...
kubectl get endpoints
NAME    ENDPOINTS      AGE
pgsql   1.1.1.1:5432   10s
I no longer get the warning above, and the endpoint is the one expected.
But I expect the endpoint address to be the exact one I specified (1.1.1.1:5432) from the first apply onwards.
Any suggestions?
Thanks

It probably comes from the empty selector. Could you try to remove it completely?
This is supposed to work only if your service doesn't have any selector at all.
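For example, a sketch of the same Service with the selector field dropped entirely (everything else kept from your manifest):
---
kind: Service
apiVersion: v1
metadata:
  name: pgsql
spec:
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
      name: "pg"
With no selector present, the endpoints controller should leave the manually created Endpoints object alone, so the 1.1.1.1:5432 address should survive the first apply.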

Related

k3s - Can't access my service based on service name

I have created a service like this:
apiVersion: v1
kind: Service
metadata:
  name: amen-sc
spec:
  ports:
    - name: http
      port: 3030
      targetPort: 8000
  selector:
    component: scc-worker
I am able to access this service from within pods in the same cluster (and namespace) using the IP address I get from kubectl get svc, but I am not able to access it using the service name, e.g. curl amen-sc:3030.
Please advise what could possibly be wrong.
I intend to expose certain pods only within my cluster and access them using the service-name:port format.
Make sure you have the DNS service configured and the corresponding pods running:
kubectl get svc -n kube-system -l k8s-app=kube-dns
and
kubectl get pods -n kube-system -l k8s-app=kube-dns
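If both look healthy, you can also check resolution from inside the cluster with a throwaway pod; a quick sketch (the pod name and image here are just examples, busybox:1.28 because nslookup is broken in later busybox builds):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup amen-sc
If the name resolves there but not from your application pod, compare that pod's /etc/resolv.conf against the kube-dns service IP.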

Kubernetes Service gets Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access the service from a Kubernetes node.
This application just listens on HTTP at 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
        - name: prometheus
          image: 34342/hello_world
          ports:
            - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
    - port: 8080
      targetPort: 9897
      nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure the pod works correctly: when I log into the pod with kubectl exec -it exporter-test-* -- sh and run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application itself should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or run curl nodeIp:30001 or curl clusterIp:8080 on a k8s node, I get Connection refused.
Has anyone had the same issue before? I appreciate any help! Thanks!
You are mixing two things here. The NodePort is the port the application is available on from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080.
Welcome to the community!
There are no issues with the behaviour you observed.
In short, a kubernetes cluster (which is minikube in this case) has its own isolated network with internal DNS.
One way to access your service on the node: you specified a nodePort for your service, and this made the service accessible on localhost:30001. You can check it by running this on your host:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
exporter-test-service   NodePort   10.111.191.159   <none>        8080:30001/TCP   2m45s
# Test:
curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
exporter-test-service   LoadBalancer   10.111.191.159   10.111.191.159   8080:30001/TCP   18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
Why some of the options don't work:
Connection to the service by its DNS name + NodePort: the NodePort links the host IP and node port to the service port inside the kubernetes cluster, while the internal DNS is not accessible outside the kubernetes cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS name with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. image curlimages/curl) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
I've attached useful links below which will help you understand this difference.
Edit: DNS inside the deployed pod
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces, not just loopback, and can accept incoming requests from outside the pod.
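To confirm which address the process is actually bound to, you can inspect the listening sockets inside the pod; a sketch assuming the image ships ss (or netstat):
kubectl exec -it exporter-test-* -n datenlord-monitoring -- ss -lnt
A Local Address of 127.0.0.1:9897 means only loopback traffic is accepted, while 0.0.0.0:9897 (or *:9897) is reachable through the Service and NodePort. This also explains why kubectl port-forward worked: it connects from inside the pod's own network namespace, so a loopback-only listener still answers.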

How to format the output in Kubernetes?

I want to get specific output from a command, like getting the nodePorts and the loadBalancer IP of a service. How do I do that?
The question is pretty light on what exactly should be retrieved from Kubernetes, but I think I can provide a good baseline.
When you use Kubernetes, you are most probably using kubectl to interact with the kube-apiserver.
Some of the commands you can use to retrieve the information from the cluster:
$ kubectl get RESOURCE --namespace NAMESPACE RESOURCE_NAME
$ kubectl describe RESOURCE --namespace NAMESPACE RESOURCE_NAME
Example:
Let's assume that you have a Service of type LoadBalancer (I've redacted some of the output for readability):
$ kubectl get service nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  clusterIP: 10.2.151.123
  externalTrafficPolicy: Cluster
  ports:
    - nodePort: 30531
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: A.B.C.D
Getting a nodePort from this output could be done like this:
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
30531
Getting a loadBalancer IP from this output could be done like this:
kubectl get svc nginx -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
A.B.C.D
You can also use kubectl with custom-columns:
kubectl get service -o=custom-columns=NAME:.metadata.name,IP:.spec.clusterIP
NAME         IP
kubernetes   10.2.0.1
nginx        10.2.151.123
There are a lot of possible ways to retrieve data with kubectl, which you can read more about by following:
kubectl get --help:
-o, --output='': Output format. One of:
json|yaml|wide|name|custom-columns=...|custom-columns-file=...|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...
See custom columns, golang template and jsonpath template.
Kubernetes.io: Docs: Reference: Kubectl: Cheatsheet: Formatting output
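The go-template output mentioned in the help text works similarly; for example, pulling the same nodePort from the nginx service shown earlier:
kubectl get svc nginx -o go-template='{{(index .spec.ports 0).nodePort}}'
30531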
Additional resources:
Kubernetes.io: Docs: Reference: Kubectl: Overview
Github.com: Kubernetes client: Python - if you would like to retrieve this information with Python
Stackoverflow.com: Answer: How to parse kubectl describe output and get the required field value
If you want to extract just single values, perhaps as part of a script, then what you are searching for is -ojsonpath, such as in this example:
kubectl get svc service-name -ojsonpath='{.spec.ports[0].port}'
which will extract just the value of the first port listed in the service spec.
docs - https://kubernetes.io/docs/reference/kubectl/jsonpath/
If you want to extract the whole definition of an object, such as a service, then what you are searching for is -oyaml such as this example:
kubectl get svc service-name -oyaml
which will output the whole service definition, all in yaml format.
If you want to get a more user-friendly description of a resource, such as a service, then you are searching for a describe command, such as this example:
kubectl describe svc service-name
docs - https://kubernetes.io/docs/reference/kubectl/overview/#output-options

How can I debug a service in K8s which gets stuck on startup?

I am trying to refresh my K8s knowledge and am following this tutorial, but am running into some problems. My current cluster (minikube) contains one pod called kubia. This pod is alive and well and contains a simple webserver.
I want to expose that server via kubectl expose pod kubia --type=LoadBalancer --name kubia-http.
Problem: According to my K8s dashboard, kubia-http gets stuck on startup.
Debugging:
kubectl describe endpoints kubia-http gives me:
Name:         kubia-http
Namespace:    default
Labels:       run=kubia
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2020-11-20T15:41:29Z
Subsets:
  Addresses:          172.17.0.5
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  8080  TCP
Events:  <none>
When debugging I tried to answer the following questions:
1.) Is my service missing an endpoint?
kubectl get pods --selector=run=kubia gives me one kubia pod. So, I am not missing an endpoint.
2.) Does my service try to access the wrong port when communicating with the pod?
From my pod yaml:
containers:
  - name: kubia
    ports:
      - containerPort: 8080
        protocol: TCP
From my service yaml:
ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 32689
The service tries to access the correct port.
What is a good approach to debug this problem?
What does the output of the commands below look like?
kubectl get services kubia-http
kubectl describe services kubia-http
Does everything look normal there?
I think you are facing a similar issue to the one mentioned in this question.
So if kubectl get services kubia-http looks good, apart from the known expected behavior of the external IP staying pending on minikube, you should be able to access the service using the NodePort or the ClusterIP.
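On minikube you can also ask minikube itself for a reachable URL instead of waiting for an external IP; a quick sketch:
minikube service kubia-http --url
This prints a http://<node-ip>:<node-port> style URL that works from the host.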

What is the meaning of this kubernetes UI error message?

I am running 3 ubuntu server VMs on my local machine and trying to manage them with kubernetes.
The UI does not start by itself when using the start script, so I tried to start the UI manually using:
kubectl create -f addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
The first command succeeds, then I get the following for the second command:
error validating "addons/kube-ui/kube-ui-svc.yaml": error validating data: [field nodePort: is required, field port: is required]; if you choose to ignore these errors, turn validation off with --validate=false
So I tried editing the default kube-ui-svc file by adding a nodePort to the config:
apiVersion: v1
kind: Service
metadata:
  name: kube-ui
  namespace: kube-system
  labels:
    k8s-app: kube-ui
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeUI"
spec:
  selector:
    k8s-app: kube-ui
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30555
But then I get another error after adding in the nodePort:
The Service "kube-ui" is invalid. spec.ports[0].nodePort: invalid value '30555': cannot specify a node port with services of type ClusterIP
I cannot get the UI running at my master node's IP. kubectl get nodes returns correct information. Thanks.
I believe you're running into https://github.com/kubernetes/kubernetes/issues/8901 with the first error; can you set it to 0? Setting a NodePort with service.Type=ClusterIP doesn't make sense, so the second error is legit.
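If you do want the UI reachable on a node port, the service type has to say so. A sketch of the edited kube-ui-svc.yaml (only the type line is new relative to your edit):
apiVersion: v1
kind: Service
metadata:
  name: kube-ui
  namespace: kube-system
  labels:
    k8s-app: kube-ui
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeUI"
spec:
  type: NodePort
  selector:
    k8s-app: kube-ui
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30555
The UI would then be reachable at any node's IP on port 30555.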