Kubernetes Load Balancer is not accessible - kubernetes

I am trying to host the Kubernetes deployment below (deployment frontend) in an AWS EKS cluster. The deployment, service, and ingress are all created successfully, but when I try to access the Load Balancer DNS from outside the cluster, the LoadBalancer is not reachable.
Can someone please point out the reason?
**The deployment-2048 code below works and its Load Balancer is accessible, but this is not the case for the frontend deployment.**
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-03-09T14:08:45Z"
  generation: 2
  name: frontend
  namespace: default
  resourceVersion: "2864"
  uid: a7682f3b-dffa-498f-be47-b231cce0720a
spec:
  minReadySeconds: 20
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: webapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: webapp
    spec:
      containers:
      - image: kodekloud/webapp-color:v2
        imagePullPolicy: IfNotPresent
        name: simple-webapp
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 4
  observedGeneration: 2
  readyReplicas: 4
  replicas: 4
  updatedReplicas: 4
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: service-1
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: LoadBalancer
  selector:
    name: webapp
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: ingress-1
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: service-1
          servicePort: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: service-2048
          servicePort: 80

In your original question (before the edits and additional information), the frontend deployment had misconfigured port values.
port exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with the service on that port.
targetPort is the port to which the service forwards requests, i.e. the port your pod is listening on. The application in the container must also be listening on this port.
containerPort defines the port on which the app can be reached inside the container.
In short, the containerPort in the deployment must have the same value as the targetPort in the Service.
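A minimal sketch of how the three values line up (the names here are illustrative, not taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: example-app
  template:
    metadata:
      labels:
        name: example-app
    spec:
      containers:
      - name: example-app
        image: nginx
        ports:
        - containerPort: 80   # the port the app actually listens on
---
apiVersion: v1
kind: Service
metadata:
  name: example-svc
spec:
  selector:
    name: example-app
  ports:
  - port: 80         # port the Service exposes inside the cluster
    targetPort: 80   # must equal the containerPort above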
User #herbertgoto had the right idea, but unfortunately didn't specify exactly what should be done. After you changed containerPort from 8080 to 80 it should have worked; I suspect the change was not propagated to all resources (the ingress resource needed to be recreated and the pods redeployed).
One of the first troubleshooting steps should be to check whether your container is listening on the proper port. That's why I asked for the $ netstat output.
Another useful check is $ kubectl get ep, which lists the service endpoints.
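For illustration, a healthy service should list one pod IP per ready replica (the output below is made up):
$ kubectl get ep service-1
NAME        ENDPOINTS                                                  AGE
service-1   192.168.1.5:80,192.168.2.7:80,192.168.3.2:80 + 1 more...   5m
An empty ENDPOINTS column usually means the selector matches no pods, or the matching pods are not ready.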
Note
If you skip targetPort in a service and set only port, Kubernetes automatically assigns targetPort the same value as port.
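For example, these two definitions behave identically (a hypothetical service):
spec:
  ports:
  - port: 8080   # targetPort is implied to be 8080

spec:
  ports:
  - port: 8080
    targetPort: 8080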
When I kept only containerPort and targetPort at 8080, and the others at 80, why did it work like this?
The default HTTP port is 80, so when you create a service with port 80, clients don't need to specify anything extra. When you set the service port to 8080, clients must specify it explicitly.
I've created a service with port 8080 on my GKE cluster:
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes   ClusterIP      10.104.0.1      <none>          443/TCP          153m
my-nginx     LoadBalancer   10.104.14.137   34.91.230.207   8080:31311/TCP   9m39s
$ curl 34.91.230.207
curl: (7) Failed to connect to 34.91.230.207 port 80: Connection timed out
$ curl 34.91.230.207:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
...
Responses from the browser (screenshots): one using only the external IP, and one using externalIP:8080.
As you can see, I had to specify port 8080 in the browser and in the curl command whenever I wasn't using the default port 80.
Conclusions:
Deployment containerPort and Service targetPort must have the same value. When you use a service with a port other than 80, you need to specify it by appending :<portNumber>. That's why in almost all guides on the internet you see the service port set to 80.

In the original question, containerPort was set to 8080 while the service targetPort was 80.

Related

How to connect 2 webapps in 2 NodePort services?

I have a single node k8s cluster with 2 web applications running on 2 NGINX k8s pods.
nginx-deployment1 --> WEBAPP1 --> nginx-svc-app1 --> <K8s_controler_IP>:30080/webapp1
nginx-deployment2 --> WEBAPP2 --> nginx-svc-app2 --> <K8s_controler_IP>:30081/webapp2
It's connecting only via the respective NodePort, but not via <K8s_controler_IP>:30080/webapp1 and <K8s_controler_IP>:30081/webapp2. Could you please help me understand what I am missing?
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment1
  labels:
    name: nginx-app1
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app1
  template:
    metadata:
      labels:
        name: nginx-app1
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: app1_port
  selector:
    name: nginx-app1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment2
  labels:
    name: nginx-app2
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx-app2
  template:
    metadata:
      labels:
        name: nginx-app2
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app2
  labels:
    name: nginx-svc-app2
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30081
    name: http
  selector:
    name: nginx-app2
Your port configuration is incorrect: you need to set targetPort: 80 for both services. When you use the NodePort service type and specify the nodePort explicitly, the port field is secondary; it is the port on which the service endpoint itself receives incoming traffic inside the cluster. So, specifying both means:
Receive incoming traffic on an endpoint with port 80.
Receive incoming traffic on node port 30080.
But you are not specifying a targetPort, which is the port that receives the traffic in the pods of the deployment. So the traffic is coming in, but not being forwarded where you intended.
You need to add targetPort: 80 to both services (note that port is a required field in a Service spec, so it cannot simply be removed).
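A corrected version of the first service could look like the sketch below (the second one is analogous). As a side note, the original port name app1_port would likely be rejected anyway, since Kubernetes port names may only contain lowercase alphanumerics and '-':
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-app1
  labels:
    name: nginx-svc-app1
spec:
  type: NodePort
  ports:
  - name: app1-port   # '-' instead of '_': underscores are not allowed in port names
    port: 80
    targetPort: 80    # the port the nginx containers listen on
    nodePort: 30080
  selector:
    name: nginx-app1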
Additionally, you're not running 2 pods; you are running 2 deployments, each with 3 pods (replicas). The traffic you send to each service will be distributed by the service across all the pods that match its selector, i.e. all 3 of your pods. It's important to understand how this communication takes place.
Also, k8s_controller_ip is an incorrect way of describing the IP address; use the term k8s node IP (or k8s_master_node_ip) instead. The IP address belongs to a node running the cluster, not to any controller.

Ingress fanout gets Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds

I have 2 services and want to create an ingress fanout for them. One service runs properly; the other one says:
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
I opened all firewall settings. Here are some details:
bahaddin#b k get ingress -n ingress-nginx
NAME             CLASS    HOSTS   ADDRESS        PORTS   AGE
fanout-ingress   <none>   *       35.201.67.49   80      2m57s
bahaddin#bahaddin-ThinkPad-E15-Gen-2:~/projects/personal/exposer/k8s-ingress$ k get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.12.1.185    34.134.200.199   80:30803/TCP,443:32306/TCP   10m
ingress-nginx-controller-admission   ClusterIP      10.12.0.83     <none>           443/TCP                      10m
web                                  NodePort       10.12.12.129   <none>           8080:30702/TCP               7m43s
web2                                 NodePort       10.12.5.55     <none>           8080:30160/TCP               7m42s
Here you can see that the external IP of ingress-nginx-controller (34.134.200.199) differs from the fanout-ingress address. When fanout-ingress was first created, it had the same IP as ingress-nginx-controller (34.134.200.199); I cannot understand why it gained the new IP (35.201.67.49) a few seconds later. I would be happy to get an answer to this as well.
The main question: when I curl http://35.201.67.49/v2/ I get the result properly, but http://35.201.67.49/web1/hello - another service, defined in the service, ingress, and deployment files below - is not reachable. When I curl 35.201.67.49/web1/hello I get an error:
Error: Server Error The server encountered a temporary error and could
not complete your request.
Please try again in 30 seconds.
fanout-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  namespace: ingress-nginx
spec:
  rules:
  - http:
      paths:
      - path: /web1/*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/*
        backend:
          serviceName: web2
          servicePort: 8080
Deployments and services:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: bago1/web1:latest
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web2
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      run: web2
  template:
    metadata:
      labels:
        run: web2
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:2.0
        imagePullPolicy: IfNotPresent
        name: web2
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: ingress-nginx
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: web2
  namespace: ingress-nginx
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web2
  type: NodePort

I can't curl nginx which I deployed on k8s cluster

my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
my service yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
And then I curl 10.104.239.140, but get an error: curl: (7) Failed connect to 10.104.239.140:80; Connection timed out
Can someone tell me what's wrong?
Welcome to SO. The service you've deployed is of type ClusterIP, which means it can only be accessed from within the cluster. In your case, you're trying to access it from outside the cluster, so the connection timed out.
What you can do is deploy a service of type NodePort or LoadBalancer to access it from outside the cluster. You can read more about the different service types here.
Your service would end up looking something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort ## or LoadBalancer (supported by cloud providers like AWS)
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
    nodePort: 30001
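Once applied, the service should be reachable from outside the cluster on any node's address (the node IP below is a placeholder):
$ curl <node-ip>:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...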

Cannot access file inside Kubernetes cluster that has load balancer externally

I have the cluster setup below in AKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hpa-example
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-example
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: http-port
    protocol: TCP
  selector:
    app: hpa-example
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
The idea of this is to test autoscaling.
I need to have this available externally, so I added:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001
  type: LoadBalancer
This now gives me an external IP; however, I cannot connect to it in Postman or via a browser.
What have I missed?
I have tried changing the ports between 80 and 31001, but that makes no difference.
As posted by user #David Maze:
What's the exact URL you're trying to connect to? What error do you get? (On the load-balancer-autoscaler service, the targetPort needs to match the name or number of a ports: in the pod, or you could just change the hpa-example service to type: LoadBalancer.)
I reproduced your scenario and found an issue in your configuration that could prevent you from connecting to this Deployment.
From the perspective of the Deployment and the Service of type NodePort, everything seems to work okay.
The Service of type LoadBalancer, on the other hand, is a different story:
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 31001 # <--- CULPRIT
  type: LoadBalancer
This definition sends your traffic directly to the pods on port 31001, but it should send it to port 80 (the port your app responds on). You can change it to either:
targetPort: 80
targetPort: http-port
You could also change the Service of type NodePort (hpa-example) to LoadBalancer, as pointed out by user #David Maze!
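For reference, a corrected manifest could look like this (a sketch using the first option):
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-autoscaler
spec:
  selector:
    app: hpa-example
  ports:
  - port: 31001
    targetPort: 80   # or targetPort: http-port
  type: LoadBalancer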
After changing this definition you will be able to run:
$ kubectl get service
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
load-balancer-autoscaler   LoadBalancer   10.4.32.146   AA.BB.CC.DD   31001:31497/TCP   9m41s
Then curl AA.BB.CC.DD:31001 and get the reply OK!
I encourage you to look at these additional resources on Kubernetes services:
Docs.microsoft.com: AKS: Network: Services
Stackoverflow.com: Questions: Difference between nodePort and LoadBalancer service types
Kubernetes.io: Docs: Concepts: Service

Configure Ingress Kubernetes - accessible only on single node

I set up ingress on my Kubernetes cluster, which runs on VMware virtual machines, following the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After the creation of the ingress/controllers, I need to get the IP where the nginx-controller runs:
nginx-ingress-rc-kgfmd   1/1   Running   0   21h   172.16.5.5   x.x.x.12
It usually runs on either x.x.x.12 or x.x.x.13, and when I then do this, it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 https://master.federated.fds/coffee
where master.federated.fds is the DNS-resolvable name of the master.
I need to make it work without the help of an IP address, using only the DNS-resolvable name, or at least with any of the node IPs.
E.g. http://node2.federated.fds/coffee; when I curl this I get a Connection refused error.
Updating with specifications
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: jciamaster.federated.fds
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
nginx ingress controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs, not on any other node. Could someone please let me know how to access the application through all the node IPs, or through a URL like jciamaster.federated.fds?
Thanks,
Update:
Tried to expose the nginx controller with a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        #args:
        #- -v=3
        #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.x:30000/coffee it just hangs and does nothing. Am I doing anything wrong?
You can expose the nginx controller Pod with a NodePort Service; then you can access it on all nodes.
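A minimal sketch of such a Service follows. Note that the RC template above labels its pods app: nginx-ingress, while the Service in your update selects name: nginx-ingress; that selector matches no pods, which alone would explain the hanging requests:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ing-svc
spec:
  type: NodePort
  selector:
    app: nginx-ingress   # must match the pod template labels, not 'name: nginx-ingress'
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30000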