Can't curl AKS Load Balancer service - kubernetes

I have an AKS cluster with default settings. I'm trying to create a very simple Deployment/Service. The Service is type LoadBalancer. I see the service is created, however I cannot curl the service's public IP. I don't even get an error; curl just hangs.
$ kubectl get all --show-labels
NAME                         READY   STATUS    RESTARTS   AGE    LABELS
pod/myapp-79579b5b68-npb2g   1/1     Running   0          104m   app=myapp,pod-template-hash=79579b5b68

NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    LABELS
service/kubernetes      ClusterIP      10.0.0.1       <none>        443/TCP          26h    component=apiserver,provider=kubernetes
service/myapp-service   LoadBalancer   10.0.223.167   $PUBLIC_IP    8080:31000/TCP   104m   <none>

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE    LABELS
deployment.apps/myapp   1/1     1            1           104m   app=myapp

NAME                               DESIRED   CURRENT   READY   AGE    LABELS
replicaset.apps/myapp-79579b5b68   1         1         1       104m   app=myapp,pod-template-hash=79579b5b68
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: nginx:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080   # container port of the Deployment; kubectl describe pod <podname> | grep Port
      nodePort: 31000    # http://external-ip:nodePort

First check the port wiring: the nginx:latest image listens on port 80 by default, while both the containerPort and the Service's targetPort point at 8080, so the backends never answer and curl just hangs. Beyond that, depending on your requirements you can create an internal or public load balancer attached to the application Service; after that you can access the service from outside the k8s cluster.
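As a sketch, here is a corrected Service for that Deployment, assuming the pod keeps serving on nginx's default port 80; the commented annotation is the AKS switch for an internal rather than public load balancer:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
  # For an internal Azure load balancer instead of a public IP, add:
  # annotations:
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
    - port: 8080       # port exposed by the load balancer
      targetPort: 80   # nginx:latest listens on 80, not 8080
      nodePort: 31000

Remember to change the Deployment's containerPort to 80 as well so the two stay in sync.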

Related

Nginx minikube ingress: 503 Server error

I am trying to use minikube to deploy a sample Flask app, but I am getting a 503 nginx error. Please note I am able to access the app using the NodePort service config.
I checked the minikube IP, which is mapped to localhost, and tried to access the app, but I get the 503 error. Not sure if I missed anything. I enabled the minikube ingress addon for nginx.
Here are my files -
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-deployment
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: flaskapp
          image: <repo>/sample-flask-app:1.0
          ports:
            - containerPort: 5000
          env:
            - name: APPLICATION_SETTINGS
              value: prd_config.py
      imagePullSecrets:
        - name: jfrog-secret
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: flaskapp-service
  labels:
    app: flaskapp
spec:
  selector:
    app: flaskapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flaskapp-ingress
  labels:
    app: flaskapp
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
    - host: mydashboard.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flaskapp-service
                port:
                  number: 5000
Ingress status:
minikube kubectl -- get ingress flaskapp-ingress
NAME               CLASS   HOSTS             ADDRESS     PORTS   AGE
flaskapp-ingress   nginx   mydashboard.com   localhost   80      18m
Cluster status:
minikube kubectl -- get all
NAME                                       READY   STATUS    RESTARTS       AGE
pod/flaskapp-deployment-7f59f96fd5-j9mv9   1/1     Running   1 (103m ago)   15h

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/flaskapp-deployment   ClusterIP   10.103.143.58   <none>        5000/TCP   34m
service/flaskapp-service      ClusterIP   10.111.242.99   <none>        5000/TCP   15h
service/kubernetes            ClusterIP   10.96.0.1       <none>        443/TCP    35h

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flaskapp-deployment   1/1     1            1           15h

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/flaskapp-deployment-7f59f96fd5   1         1         1       15h
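A host-based Ingress like this one is usually tested by resolving the hostname to the minikube IP rather than browsing to localhost; a quick sketch, assuming the ingress addon is enabled and minikube ip returns the node address:

# One-off request, resolving the Ingress host to the minikube IP
curl --resolve mydashboard.com:80:$(minikube ip) http://mydashboard.com/

# Or map the host permanently and use the browser
echo "$(minikube ip) mydashboard.com" | sudo tee -a /etc/hosts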

Kubernetes External IP is working only in the cluster

I am new to Kubernetes and I am trying to host a testing site. I have pods running as below:
NAME                              READY   STATUS    RESTARTS   AGE
sasank-website-78864ff54b-656ld   1/1     Running   0          30m
sasank-website-78864ff54b-qdn65   1/1     Running   0          30m
Deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sasank-website
  labels:
    app: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
        - name: webtesting
          image: 9110727495/userdetails:latest
          ports:
            - containerPort: 80
Service file used:
apiVersion: v1
kind: Service
metadata:
  name: testingsite
  labels:
    app: website
spec:
  type: NodePort
  externalIPs:
    - 192.168.1.10
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: website

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>         443/TCP        102m
testingsite   NodePort    10.96.246.110   192.168.1.10   80:31438/TCP   5m9s
When I try to access the IP on port 31438 it refuses to connect, but inside the cluster it is served on port 80. When I try the same IP from outside the cluster, it refuses to connect even on port 80. I am not sure how to understand this. Please help. Thank you.
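For context, a NodePort service answers on each node's own IP at the allocated port, while spec.externalIPs only answers if that IP actually routes to one of the nodes; a quick sketch of the two paths (node IP read from kubectl get nodes -o wide, placeholder to fill in):

kubectl get nodes -o wide       # note a node's INTERNAL-IP
curl http://<node-ip>:31438     # NodePort path: node IP plus allocated port
curl http://192.168.1.10:80     # externalIP path: works only if this IP lands on a node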

Kubernetes cluster - deploy httpd and access it from outside

I created my Kubernetes cluster and I am trying to deploy this yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 1
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30020
      name: httpd-port
  type: NodePort
This is the configuration:
[root@BCA-TST-K8S01 httpd-deploy]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP          NODE               NOMINATED NODE   READINESS GATES
pod/httpd-deployment-57fc687dcc-rggx9   1/1     Running   0          8m51s   10.44.0.1   bcc-tst-docker02   <none>           <none>

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
service/httpd-service   NodePort    10.102.138.175   <none>        8080:30020/TCP   8m51s   app=httpd-app
service/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP          134m    <none>

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
deployment.apps/httpd-deployment   1/1     1            1           8m51s   httpd        httpd    app=httpd

NAME                                          DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR
replicaset.apps/httpd-deployment-57fc687dcc   1         1         1       8m51s   httpd        httpd    app=httpd,pod-template-hash=57fc687dcc
But I can't connect to the worker node or via the cluster IP:
curl http://bcc-tst-docker02:30020
curl: (7) Failed to connect to bcc-tst-docker02 port 30020: Connection refused
How can I fix the problem?
How can I expose the cluster using the internal master IP (for example, I need to access httpd-deploy from the master IP 10.100.170.150, opening a browser on the same network)?
UPDATE:
I modified my yaml file as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd-app
  replicas: 2
  template:
    metadata:
      labels:
        app: httpd-app
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  externalIPs:
    - 10.100.170.150   # K8s master IP
  externalTrafficPolicy: Cluster
  ports:
    - name: httpd-port
      protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30020
  selector:
    app: httpd-app
  sessionAffinity: None
  type: LoadBalancer
And these are the results after I ran the apply command:
[root@K8S01 LoadBalancer]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP          NODE               NOMINATED NODE   READINESS GATES
pod/httpd-deployment-65d64d47c5-72xp4   1/1     Running   0          60s   10.44.0.2   bcc-tst-docker02   <none>           <none>
pod/httpd-deployment-65d64d47c5-fc645   1/1     Running   0          60s   10.36.0.1   bca-tst-docker01   <none>           <none>

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE   SELECTOR
service/http-service   LoadBalancer   10.100.236.203   10.100.170.150   8080:30020/TCP   60s   app=httpd-app
service/kubernetes     ClusterIP      10.96.0.1        <none>           443/TCP          13d   <none>

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
deployment.apps/httpd-deployment   2/2     2            2           60s   httpd        httpd    app=httpd-app

NAME                                          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES   SELECTOR
replicaset.apps/httpd-deployment-65d64d47c5   2         2         2       60s   httpd        httpd    app=httpd-app,pod-template-hash=65d64d47c5

But now when I try to connect to httpd using the K8s IP I receive this error:
[root@K8S01 LoadBalancer]# curl http://10.100.170.150:8080
curl: (7) Failed to connect to 10.100.170.150 port 8080: No route to host
[root@K8S01 LoadBalancer]# curl http://10.100.236.203:8080
curl: (7) Failed to connect to 10.100.236.203 port 8080: No route to host
If I connect directly to a node it works:
[root@K8S01 LoadBalancer]# curl http://bca-tst-docker01:30020
<html><body><h1>It works!</h1></body></html>
[root@K8S01 LoadBalancer]# curl http://bcc-tst-docker02:30020
<html><body><h1>It works!</h1></body></html>
You are getting the connection refused because the service does not have any endpoints behind it: its label selector differs from the one at the deployment level.
The Deployment's pods carry the label app: httpd, while the Service is trying to select pods labelled app: httpd-app. Below you can find the corrected selector:
kind: Service
apiVersion: v1
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd   # <-- must match the pod label from the Deployment
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30020
      name: httpd-port
  type: NodePort
You can always verify whether the service has endpoints. The Kubernetes docs have a great section about debugging services, and one of its steps is called: Does the Service have any Endpoints?
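A minimal sketch of that check, using the names from this question:

# An empty ENDPOINTS column means the selector matched no pods
kubectl get endpoints httpd-service

# Compare the Service's selector with the actual pod labels
kubectl describe service httpd-service | grep -i selector
kubectl get pods --show-labels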

GCP GKE load balancer connection refused

I'm doing a deployment on the GKE service, and when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a load-balancing service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
        - name: bonsai-onboarding
          image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
          ports:
            - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-8586b9b699-flhbn   1/1     Running   0          3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9   1/1     Running   0          3h23m

NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes      ClusterIP      XX.xx.yy.YY   <none>        443/TCP          29d
service/lb-onboarding   LoadBalancer   XX.xx.yy.YY   XX.xx.yy.YY   3000:32618/TCP   3h
Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.
I think it is something about the network, because I ran the following tests from my local machine:
ping [load balancer IP] ---> correct
telnet [load balancer IP] 3000 ---> correct
From Cloud Shell I forwarded port 3000 to 8080, and from another Cloud Shell a curl http://localhost:8080 works fine.
Any idea about the problem?
Thanks in advance
I've changed your deployment a little to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
        - name: bonsai-onboarding
          image: nginx:latest
          ports:
            - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-7bdf584499-j2nv7   1/1     Running   0          6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh   1/1     Running   0          6m58s

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
service/kubernetes      ClusterIP      10.XXX.XXX.1     <none>           443/TCP          8m35s
service/lb-onboarding   LoadBalancer   10.XXX.XXX.230   35.XXX.XXX.235   3000:31637/TCP   67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is listening on that port. Also ensure your firewall rules are not blocking the NodePort.
gcloud compute firewall-rules create myservice --allow tcp:3000
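To confirm the rule exists afterwards, something like:

gcloud compute firewall-rules list
gcloud compute firewall-rules describe myservice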

Minikube unable to expose service with yaml

Trying to run a local registry. I have the following configuration:
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - mountPath: '/registry'
              name: registry-volume
      volumes:
        - name: registry-volume
          hostPath:
            path: '/data'
            type: Directory
Service:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
It all works well when I create deployment/service. kubectl shows status as Running for both service and deployment:
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/registry   1         1         1            1           30m

NAME                     DESIRED   CURRENT   READY   AGE
rs/registry-6549cbc974   1         1         1       30m
NAME                           READY   STATUS    RESTARTS   AGE
po/registry-6549cbc974-mmqpj   1/1     Running   0          30m

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
svc/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP          37m
svc/registry     NodePort    10.0.0.6     <none>        5000:31001/TCP   7m
However, when I try to get the external URL for the service using minikube service registry --url, it times out and fails with: Waiting, endpoint for service is not ready yet....
When I delete the service (keeping the deployment intact) and manually expose the deployment using kubectl expose deployment registry --type=NodePort, I am able to get it working.
You need to specify the correct spec.selector in the registry Service manifest; the pod template's label is app: registry, not role: registry:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
    - name: registry
      nodePort: 31001
      port: 5000
      protocol: TCP
Now the registry service correctly points to the registry pod:
$ kubectl get endpoints
NAME         ENDPOINTS         AGE
kubernetes   10.0.2.15:8443    14m
registry     172.17.0.4:5000   4s
And you can get external url as well:
$ minikube service registry --url
http://192.168.99.106:31001
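From there, a quick sanity check against the registry itself; /v2/_catalog is part of the Docker Registry HTTP API and returns an empty repository list on a fresh registry:

$ curl http://192.168.99.106:31001/v2/_catalog
{"repositories":[]}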