Kubernetes service is configured but I get connection refused

I have a service configured on my Kubernetes cluster, but when I try to curl ip:port I get connection refused.
The following service is configured:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - name: control-center-web
      port: 8008
      protocol: TCP
      targetPort: 8008
$ kubectl describe svc frontend
Name:                     frontend
Namespace:                production
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"frontend","namespace":"production"},"spec":{"ports":[{"name":...
Selector:                 app=frontend
Type:                     NodePort
IP:                       <ip>
Port:                     control-center-web  8008/TCP
TargetPort:               8008/TCP
NodePort:                 control-center-web  7851/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---  ----                -------
  Normal  Type    14m  service-controller  LoadBalancer -> NodePort
Why do I keep getting connection refused for the IP and port that I took from the svc?

The Endpoints field of the service shows <none> instead of the IPs of the pods. This happens when the selector in the service (app: frontend) does not match the labels in the pod spec, so the service has no pods to route traffic to. You can verify this as shown below.
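A quick way to check, assuming the pods also live in the production namespace (these commands are generic, not from the question):

$ kubectl get pods -n production --show-labels
$ kubectl get endpoints frontend -n production

If the LABELS column does not contain app=frontend, either relabel the pods (usually via the Deployment's pod template) or change the service selector; once they match, the Endpoints list fills in and the NodePort starts accepting connections.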

Kubernetes service URL not responding to API call

I've been following multiple tutorials on how to deploy my (Spring Boot) API on Minikube. I already got it (user-service running on 8081) working in a Docker container with an API gateway (port 8080) and Eureka (port 8087), but for starters I just want it to run without those. Steps I took:
Push the Docker container or image (?) to Docker Hub; I don't know the proper term.
Create a deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kwetter-service
spec:
  type: LoadBalancer
  selector:
    app: kwetter
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8081
      nodePort: 30070
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kwetter-deployment
  labels:
    app: kwetter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kwetter
  template:
    metadata:
      labels:
        app: kwetter
    spec:
      containers:
        - name: user-api
          image: cazhero/s6-kwetter-backend_user:latest
          ports:
            - containerPort: 8081 # the port it runs on when I manually start it up
kubectl apply -f deployment.yaml
minikube service kwetter-service
It takes me to an empty site with url: http://192.168.49.2:30070 which I thought I could use to make API calls to, but apparently not. How do I make api calls to my application running on minikube?
Get svc returns:
NAME              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes        ClusterIP      10.96.0.1      <none>        443/TCP          4d4h
kwetter-service   LoadBalancer   10.106.42.56   <pending>     8080:30070/TCP   4d
describe svc kwetter-service:
Name:                     kwetter-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=kwetter
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.42.56
IPs:                      10.106.42.56
Port:                     <unset>  8080/TCP
TargetPort:               8081/TCP
NodePort:                 <unset>  30070/TCP
Endpoints:                172.17.0.4:8081
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---  ----                -------
  Normal  Type    6s   service-controller  LoadBalancer -> NodePort
Made an Ingress in the yaml, used kubectl get ing:
NAME              CLASS    HOSTS   ADDRESS   PORTS   AGE
kwetter-ingress   <none>   *                 80      49m
To make some things clear:
You need to have pushed your Docker image cazhero/s6-kwetter-backend_user:latest to Docker Hub; check that at https://hub.docker.com/ in your personal repository.
What's the output of minikube service kwetter-service? Does it print the URL http://192.168.49.2:30070?
Make sure your pod is running correctly with the following minikube commands:
# check pod status
minikube kubectl -- get pods
# if the pod is running, check its container logs (kubectl logs takes the pod name directly)
minikube kubectl -- logs kwetter-deployment-xxxx-xxxx
I see that you are using a LoadBalancer service. A LoadBalancer service is the standard way to expose a service to the internet; with this method, each service gets its own IP address.
Check the external IP:
kubectl get svc
Use the external IP and the port number in this format to access the application:
http://REPLACE_WITH_EXTERNAL_IP:8080
If you want to access the application using the NodePort (30070), use a NodePort service instead of a LoadBalancer service.
Refer to this documentation for more information on accessing applications through NodePort and LoadBalancer services.
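One caveat worth adding (not stated in the original answer, but standard Minikube behavior): on Minikube a LoadBalancer service keeps EXTERNAL-IP at <pending>, as in the kubectl get svc output above, unless minikube tunnel is running. A sketch of both access paths, assuming the app answers plain HTTP at / (the URL paths are illustrative):

# option 1: NodePort, using the minikube node URL printed by 'minikube service'
curl http://192.168.49.2:30070/

# option 2: run 'minikube tunnel' in a separate terminal so the
# LoadBalancer gets a real EXTERNAL-IP, then use the service port
minikube tunnel
kubectl get svc kwetter-service   # EXTERNAL-IP should no longer be <pending>
curl http://<EXTERNAL-IP>:8080/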

Kubernetes service port forwarding to multiple pods

I have configured a Kubernetes service and started 2 replicas/pods.
Here are the service settings and description.
apiVersion: v1
kind: Service
metadata:
  name: my-integration
spec:
  type: NodePort
  selector:
    camel.apache.org/integration: my-integration
  ports:
    - protocol: UDP
      port: 8444
      targetPort: 8444
      nodePort: 30006
kubectl describe service my-service
Detail:
Name:                     my-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 camel.apache.org/integration=xxx
Type:                     NodePort
IP Families:              <none>
IP:                       10.xxx.16.235
IPs:                      10.xxx.16.235
Port:                     <unset>  8444/UDP
TargetPort:               8444/UDP
NodePort:                 <unset>  30006/UDP
Endpoints:                10.xxx.1.39:8444,10.xxx.1.40:8444
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
What I can see is that 2 endpoints are listed; unfortunately, only one pod receives events. When I kill that pod, the other one starts receiving events.
My goal is to have both pods receive events in round robin, but I don't know how to configure the traffic policies.
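A likely factor here (my assumption, not part of the question): for UDP, kube-proxy balances per connection-tracking entry rather than per packet, so a single sender that keeps the same source IP/port stays pinned to one backend pod until its conntrack entry expires. You can inspect (and clear) those entries on the node, assuming the conntrack tool is installed:

# list conntrack entries for the NodePort (run on the node, as root)
conntrack -L -p udp --dport 30006

# deleting the entries forces kube-proxy to pick a backend again
conntrack -D -p udp --dport 30006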

Kubernetes services endpoints healthcheck

I've created a Kubernetes Service whose backend nodes aren't part of the Cluster but a fixed set of nodes (having fixed IPs), so I've also created an Endpoints resource with the same name:
apiVersion: v1
kind: Service
metadata:
  name: elk-svc
spec:
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: elk-svc
subsets:
  - addresses:
      - { ip: 172.21.0.40 }
      - { ip: 172.21.0.41 }
      - { ip: 172.21.0.42 }
    ports:
      - port: 9200
Description of Service and Endpoints:
$ kubectl describe svc elk-svc
Name:              elk-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"spec":{"ports":[{"port":9200,"protocol":"TCP"...
Selector:          <none>
Type:              ClusterIP
IP:                10.233.17.18
Port:              <unset>  9200/TCP
Endpoints:         172.21.0.40:9200,172.21.0.41:9200,172.21.0.42:9200
Session Affinity:  None
Events:            <none>
$ kubectl describe ep elk-svc
Name:         elk-svc
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"subsets":[{"addresses":[{"ip":"172.21.0.40"...
Subsets:
  Addresses:          172.21.0.40,172.21.0.41,172.21.0.42
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  9200  TCP
Events:  <none>
My pods are able to communicate with Elasticsearch using the internal cluster IP 10.233.17.18. Everything works fine!
My question is whether there's any kind of health-check mechanism for the Service I've created, so that if one of my Elasticsearch nodes goes down (e.g. 172.21.0.40), the Service is aware of it and no longer routes traffic to that node, but to the others. Is that possible?
Thanks.
This is not supported in k8s.
For more clarification, refer to this issue raised about your requirement:
https://github.com/kubernetes/kubernetes/issues/77738#issuecomment-491560980
For this use case, the best practice is to use a load balancer like HAProxy.
My suggestion is to have a reverse proxy such as nginx or HAProxy in front of the Elasticsearch nodes, which will do the health checks for those nodes; a sketch follows.
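A minimal HAProxy sketch of that idea, assuming the three node IPs from the question and Elasticsearch's standard /_cluster/health endpoint (all names here are illustrative):

# haproxy.cfg: health-checked round robin across the ES nodes
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend es_in
    bind *:9200
    default_backend es_nodes

backend es_nodes
    balance roundrobin
    option httpchk GET /_cluster/health
    server es1 172.21.0.40:9200 check
    server es2 172.21.0.41:9200 check
    server es3 172.21.0.42:9200 check

With the check keyword, HAProxy probes each node and stops sending traffic to any that fail the health check, which is exactly the behavior the plain Service + Endpoints pair cannot provide.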

Unable to connect to external load balancer even after exposing service in kubernetes

I have the following deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: family-tree-deployment
  labels:
    app: familytree
spec:
  replicas: 1
  selector:
    matchLabels:
      app: familytree
  template:
    metadata:
      labels:
        app: familytree
    spec:
      containers:
        - name: familytree
          image: index.docker.io/koustubh/familytree:v1.0
          ports:
            - containerPort: 8080
I could successfully create the deployment using kubectl create -f deploy.yml
Now, I simply exposed this deployment with the following command
kubectl expose deployment family-tree-deployment --type=LoadBalancer --name=familytree-service
The service was successfully created.
The output is
$ kubectl get svc
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
familytree-service   LoadBalancer   10.51.244.161   35.221.113.235   8080:30505/TCP   1h
$ kubectl describe svc familytree-service
Name:                     familytree-service
Namespace:                default
Labels:                   app=familytree
Annotations:              <none>
Selector:                 app=familytree
Type:                     LoadBalancer
IP:                       10.51.244.161
LoadBalancer Ingress:     35.221.113.235
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30505/TCP
Endpoints:                10.48.4.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
I could log in to the pod, and I made sure the service is working.
However, when I use the external IP of the load balancer and query my API, the connection times out.
I have made sure the firewall allows port 8080.
My application is running on port 8080.
The generated Service object looks perfectly valid, so we can rule out a label issue or a missing public IP address. Besides, you can access your Service internally, so the firewall rule was most likely applied incorrectly.
Please ensure you allow incoming traffic as follows (see the gcloud sketch below):
from the internet to the load balancer on TCP port 8080
from the load balancer to all Kubernetes nodes on TCP port 30505
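Assuming this cluster runs on GCP (the external IP pattern suggests GKE), a rough gcloud sketch; the rule name and node tag are placeholders, not from the question:

# open the service port and the node port towards the cluster's nodes
gcloud compute firewall-rules create allow-familytree \
    --allow tcp:8080,tcp:30505 \
    --source-ranges 0.0.0.0/0 \
    --target-tags <your-node-tag>

Once the rule is in place, curl http://35.221.113.235:8080/ should reach the pod instead of timing out.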

Set static external IP for my load balancer on GKE

I am trying to set up a static external IP for my load balancer on GKE but having no luck. Here is my Kubernetes service config file:
kind: Service
apiVersion: v1
metadata:
  name: myAppService
spec:
  selector:
    app: myApp
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
  type: LoadBalancer
  loadBalancerIP: *********
This doesn't work. I expect to see my external IP as ********* but it just says pending:
➜ git:(master) kubectl get services
NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes     *********    <none>        443/TCP          5m
myAppService   *********    <pending>     3001:30126/TCP   5m
More details:
➜ git:(master) kubectl describe services
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                *********
Port:              https  443/TCP
Endpoints:         *********
Session Affinity:  ClientIP
Events:            <none>

Name:              myAppService
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=myApp
Type:              LoadBalancer
IP:                *********
Port:              <unset>  3001/TCP
NodePort:          <unset>  30126/TCP
Endpoints:
Session Affinity:  None
Events:
  FirstSeen  LastSeen  Count  From                SubObjectPath  Type     Reason                      Message
  ---------  --------  -----  ----                -------------  ----     ------                      -------
  5m         20s       7      service-controller                 Normal   CreatingLoadBalancer        Creating load balancer
  5m         19s       7      service-controller                 Warning  CreatingLoadBalancerFailed  Error creating load balancer (will retry): Failed to create load balancer for service default/myAppService: Cannot EnsureLoadBalancer() with no hosts
Any ideas?
I've encountered the same problem, but after reading the docs carefully, it turned out that I was just reserving the static IP incorrectly.
A Service of type LoadBalancer creates a network load balancer, which is regional. Therefore, the static IP address you reserve also needs to be regional (in the region of your cluster).
When I changed to this solution, everything worked fine for me.
This got me stuck as well; I hope someone finds this helpful.
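For example (a sketch; the address name and region are illustrative, substitute your cluster's region):

# reserve a regional static IP in the same region as the cluster
gcloud compute addresses create my-app-ip --region us-east1

# print the reserved address, then put it in the Service's loadBalancerIP field
gcloud compute addresses describe my-app-ip --region us-east1 --format='value(address)'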
In addition to what Dirk said: if you happen to reserve a global static IP address, as opposed to a regional one, you need to use an Ingress as described in the documentation: Configuring Domain Names with Static IP Addresses, specifically step 2b.
So basically you reserve the static IP: gcloud compute addresses create helloweb-ip --global
and add an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloweb
  # this is where you add your reserved ip
  annotations:
    kubernetes.io/ingress.global-static-ip-name: helloweb-ip
  labels:
    app: hello
spec:
  backend:
    serviceName: helloweb-backend
    servicePort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: helloweb-backend
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
The doc also describes how to assign a static IP if you choose type "LoadBalancer", under step 2a.
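For completeness, a rough sketch of that step-2a variant (the loadBalancerIP value here is illustrative; use the regional address you actually reserved):

apiVersion: v1
kind: Service
metadata:
  name: helloweb
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10  # illustrative: your reserved regional static IP
  selector:
    app: hello
    tier: web
  ports:
    - port: 80
      targetPort: 8080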