Kubernetes cluster - deploy httpd and access it from outside the cluster

I created my Kubernetes cluster and I am trying to deploy this yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd
  replicas: 1
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd-app
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30020
    name: httpd-port
  type: NodePort
This is the configuration:
[root@BCA-TST-K8S01 httpd-deploy]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP          NODE               NOMINATED NODE   READINESS GATES
pod/httpd-deployment-57fc687dcc-rggx9   1/1     Running   0          8m51s   10.44.0.1   bcc-tst-docker02   <none>           <none>

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
service/httpd-service   NodePort    10.102.138.175   <none>        8080:30020/TCP   8m51s   app=httpd-app
service/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP          134m    <none>

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
deployment.apps/httpd-deployment   1/1     1            1           8m51s   httpd        httpd    app=httpd

NAME                                          DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR
replicaset.apps/httpd-deployment-57fc687dcc   1         1         1       8m51s   httpd        httpd    app=httpd,pod-template-hash=57fc687dcc
But I can't connect, either through the worker node or through the cluster IP:
curl http://bcc-tst-docker02:30020
curl: (7) Failed to connect to bcc-tst-docker02 port 30020: Connection refused
How can I fix the problem?
How can I expose the deployment using the internal master IP? (For example, I need to access httpd-deploy from the master IP 10.100.170.150 by opening a browser on the same network.)
UPDATE:
I modified my yaml file as below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
spec:
  selector:
    matchLabels:
      app: httpd-app
  replicas: 2
  template:
    metadata:
      labels:
        app: httpd-app
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: http-service
spec:
  externalIPs:
  - 10.100.170.150   # --> K8S master IP
  externalTrafficPolicy: Cluster
  ports:
  - name: httpd-port
    protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30020
  selector:
    app: httpd-app
  sessionAffinity: None
  type: LoadBalancer
And this is the result after I run the apply command:
[root@K8S01 LoadBalancer]# kubectl get all -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP          NODE               NOMINATED NODE   READINESS GATES
pod/httpd-deployment-65d64d47c5-72xp4   1/1     Running   0          60s   10.44.0.2   bcc-tst-docker02   <none>           <none>
pod/httpd-deployment-65d64d47c5-fc645   1/1     Running   0          60s   10.36.0.1   bca-tst-docker01   <none>           <none>

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE   SELECTOR
service/http-service   LoadBalancer   10.100.236.203   10.100.170.150   8080:30020/TCP   60s   app=httpd-app
service/kubernetes     ClusterIP      10.96.0.1        <none>           443/TCP          13d   <none>

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
deployment.apps/httpd-deployment   2/2     2            2           60s   httpd        httpd    app=httpd-app

NAME                                          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES   SELECTOR
replicaset.apps/httpd-deployment-65d64d47c5   2         2         2       60s   httpd        httpd    app=httpd-app,pod-template-hash=65d64d47c5
but now when I try to connect to httpd using the K8S master IP I receive this error:
[root@K8S01 LoadBalancer]# curl http://10.100.170.150:8080
curl: (7) Failed to connect to 10.100.170.150 port 8080: No route to host
[root@K8S01 LoadBalancer]# curl http://10.100.236.203:8080
curl: (7) Failed to connect to 10.100.236.203 port 8080: No route to host
If I try to connect directly to the nodes, I can connect:
[root@K8S01 LoadBalancer]# curl http://bca-tst-docker01:30020
<html><body><h1>It works!</h1></body></html>
[root@K8S01 LoadBalancer]# curl http://bcc-tst-docker02:30020
<html><body><h1>It works!</h1></body></html>
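A note on the "No route to host" error: it usually means a firewall is actively rejecting the connection rather than Kubernetes dropping it. A quick check on the master, assuming firewalld on CentOS (the port numbers below are the ones from the service above):
sudo firewall-cmd --list-all                        # see which ports are currently allowed
sudo firewall-cmd --permanent --add-port=8080/tcp   # allow the service port
sudo firewall-cmd --permanent --add-port=30020/tcp  # allow the NodePort
sudo firewall-cmd --reload                          # apply the permanent rules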

You are getting the connection refused because the service does not have any endpoints behind it: its label selector does not match the labels at the deployment level.
The pods carry the label app: httpd, while the service is trying to select pods labeled app: httpd-app. Below you can find the corrected selector:
kind: Service
apiVersion: v1
metadata:
  name: httpd-service
spec:
  selector:
    app: httpd   # <------- must match the pod template label
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30020
    name: httpd-port
  type: NodePort
You can always verify whether the service has endpoints. The Kubernetes documentation has a great section about debugging services, and one of its steps is called: Does the Service have any Endpoints?
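For example, a minimal check (the endpoint IP and age shown are illustrative, taken from the pod output above):
kubectl get endpoints httpd-service
NAME            ENDPOINTS      AGE
httpd-service   10.44.0.1:80   9m
If the ENDPOINTS column shows <none>, the selector does not match any running pod's labels.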

Related

Can't curl AKS Load Balancer service

I have an AKS cluster with default settings. I'm trying to create a very simple Deployment/Service. The Service is of type LoadBalancer. I see the service is created; however, I cannot curl the service's public IP. I don't even get an error, curl just hangs.
$ kubectl get all --show-labels
NAME                         READY   STATUS    RESTARTS   AGE    LABELS
pod/myapp-79579b5b68-npb2g   1/1     Running   0          104m   app=myapp,pod-template-hash=79579b5b68

NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE    LABELS
service/kubernetes      ClusterIP      10.0.0.1       <none>        443/TCP          26h    component=apiserver,provider=kubernetes
service/myapp-service   LoadBalancer   10.0.223.167   $PUBLIC_IP    8080:31000/TCP   104m   <none>

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE    LABELS
deployment.apps/myapp   1/1     1            1           104m   app=myapp

NAME                               DESIRED   CURRENT   READY   AGE    LABELS
replicaset.apps/myapp-79579b5b68   1         1         1       104m   app=myapp,pod-template-hash=79579b5b68
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080   # container port of Deployment; kubectl describe pod <podname> | grep Port
    nodePort: 31000    # http://external-ip:nodePort
Depending on your requirements, you can create an internal or public load balancer attached to the application service. After that, you can access the service from outside the k8s cluster.
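For reference, a minimal sketch of an internal load balancer on AKS using the standard Azure annotation (the service name and ports are taken from the question above):
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    # standard AKS annotation to request an internal (VNet-scoped) load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
Separately, note that the nginx:latest image used in the deployment listens on port 80 by default, so with targetPort: 8080 the pods themselves would refuse the connection unless the image is reconfigured.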

Kubernetes External IP is working only in the cluster

I am new to Kubernetes and I am trying to host a testing site. I have pods running as below:
NAME                              READY   STATUS    RESTARTS   AGE
sasank-website-78864ff54b-656ld   1/1     Running   0          30m
sasank-website-78864ff54b-qdn65   1/1     Running   0          30m
Deployment file used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sasank-website
  labels:
    app: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: webtesting
        image: 9110727495/userdetails:latest
        ports:
        - containerPort: 80
Service file used:
apiVersion: v1
kind: Service
metadata:
  name: testingsite
  labels:
    app: website
spec:
  type: NodePort
  externalIPs:
  - 192.168.1.10
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: website
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>         443/TCP        102m
testingsite   NodePort    10.96.246.110   192.168.1.10   80:31438/TCP   5m9s
When I try to access the IP on port 31438 it refuses to connect, even though port 80 works inside the cluster. When I try to access the same IP from outside the cluster, it refuses to connect even on port 80. I am not sure how to understand this. Please help. Thank you.
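A first check, following the same endpoint-debugging pattern as the answer above (resource names are taken from the question; the node IP is a placeholder):
kubectl get endpoints testingsite     # should list the two pod IPs on port 80
kubectl get nodes -o wide             # find a node's InternalIP
curl http://<node-internal-ip>:31438  # a NodePort is served on every node IP, not on the externalIP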

Simple kubernetes deployment on minikube with helm 3 not working (can't reach app)

I am trying to deploy a simple FLASK app (python web framework) on a Kubernetes cluster. I am using minikube.
Here's my Helm 3 stuff:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app-deployment
  labels:
    app: flask-app
    some: label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-app-pod
  template:
    metadata:
      labels:
        app: flask-app-pod
    spec:
      containers:
      - name: flask-app-container
        image: flask_app:0.0.1
        imagePullPolicy: Never
        ports:
        - name: app
          containerPort: 5000
          protocol: TCP
        securityContext:   # root access for debugging
          allowPrivilegeEscalation: false
          runAsUser: 0
Service:
apiVersion: v1
kind: Service
metadata:
  name: flak-app-service
  labels:
    service: flask-app-services
spec:
  type: NodePort
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
    name: https
  selector:
    app: flask-app-pod
Chart:
apiVersion: v2
name: flask-app
type: application
version: 0.0.1
appVersion: 0.0.1
I deploy this by doing helm install test-chart/ --generate-name.
Sample output of kubectl get all:
NAME                                       READY   STATUS    RESTARTS   AGE
pod/flask-app-deployment-d94b86cc9-jcmxg   1/1     Running   0          8m19s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/flak-app-service   NodePort    10.98.48.114   <none>        5000:30317/TCP   8m19s
service/kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP          7d2h

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flask-app-deployment   1/1     1            1           8m19s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/flask-app-deployment-d94b86cc9   1         1         1       8m19s
I exec'd into the pod to check if it's listening on the correct port; it looks fine (netstat output):
Proto   Recv-Q   Send-Q   Local Address   Foreign Address   State    PID/Program name
tcp     0        0        0.0.0.0:5000    0.0.0.0:*         LISTEN   1/python3
My Dockerfile should be fine. I can create a container and call the app when running it as a "normal" docker container.
Must be something stupid. What am I not seeing here?
I would expect to be able to go to https://localhost:30317, which gets forwarded to the service listening internally on port 5000, which forwards it to the pod that also listens on port 5000.
To find out where the traffic is breaking, you can port-forward at each level (with kubectl port-forward the first port is the local one, so local port 12345 below is arbitrary and the remote port is the one the app listens on):
kubectl port-forward pods/flask-app-deployment-d94b86cc9-jcmxg 12345:5000
or
kubectl port-forward deployment/flask-app-deployment 12345:5000
or
kubectl port-forward service/flak-app-service 12345:5000
depending upon where you want to debug, then curl http://localhost:12345.
Also please validate by running netstat -tunlp whether your host is listening on the allotted port or not.
Hope this solves your error, or let me know if it does not.
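One more thing worth checking, assuming the default minikube setup: a NodePort is exposed on the minikube VM's IP, not on localhost, so the URL should be built from the node IP rather than localhost:30317:
minikube ip                               # prints the node IP
minikube service flak-app-service --url   # prints the reachable URL for the NodePort service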

GCP GKE load balancer connection refused

I'm doing a deployment on the GKE service, and when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a load-balancer service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-8586b9b699-flhbn   1/1     Running   0          3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9   1/1     Running   0          3h23m

NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/kubernetes      ClusterIP      XX.xx.yy.YY   <none>        443/TCP          29d
service/lb-onboarding   LoadBalancer   XX.xx.yy.YY   XX.xx.yy.YY   3000:32618/TCP   3h
Then when I tried to connect, the error was ERR_CONNECTION_REFUSED.
I think it is about the network, because I did the following tests from my local machine:
Ping [load balancer IP] ---> Correct
Telnet [load balancer IP] 3000 ---> Correct
From Cloud Shell I forwarded port 3000 to 8080, and in another Cloud Shell I ran curl http://localhost:8080, and it works fine.
Any idea about the problem?
Thanks in advance
I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/bonsai-onboarding-7bdf584499-j2nv7   1/1     Running   0          6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh   1/1     Running   0          6m58s

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
service/kubernetes      ClusterIP      10.XXX.XXX.1     <none>           443/TCP          8m35s
service/lb-onboarding   LoadBalancer   10.XXX.XXX.230   35.XXX.XXX.235   3000:31637/TCP   67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is listening on that port. Also ensure your firewall rules are not blocking the NodePort.
gcloud compute firewall-rules create myservice --allow tcp:3000
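To see whether such a rule already exists before creating one, a quick check (assuming gcloud is configured for the right project):
gcloud compute firewall-rules list | grep 3000   # look for an existing rule covering tcp:3000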

kube-proxy Couldn't find an endpoint for default/tomcat:http: missing service entry

I am using CentOS 7.
My Pod:
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
spec:
  containers:
  - image: ec2-73-99-254-8.eu-central-1.compute.amazonaws.com:5000/tom
    name: tomcat
    command: ["sh","-c","/opt/tomcat/bin/deploy-and-run.sh"]
    volumeMounts:
    - mountPath: /maven
      name: app-volume
    ports:
    - containerPort: 8080
  volumes:
  - name: app-volume
    hostPath:
      path: /maven
My Service:
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    name: tomcat
The services look like this:
# kubectl get svc
NAME         CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR      AGE
kubernetes   10.254.0.1      <none>        443/TCP   <none>        14h
tomcat       10.254.206.26   <none>        80/TCP    name=tomcat   13h
And Pods:
# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
tomcat   1/1     Running   0          13h
And when I run curl:
curl 10.254.206.26
curl: (56) Recv failure: Connection reset by peer
The kube-proxy logs at that moment show something like this:
kube-proxy[22273]: Couldn't find an endpoint for default/tomcat:http: missing service entry
kube-proxy[22273]: Failed to connect to balancer: missing service entry
But when I run curl directly against the pod IP address on port 8080, it works fine.
When I run the command kubectl get endpoints:
NAME         ENDPOINTS             AGE
kubernetes   195.234.109.11:6443   14h
tomcat       <none>                14h
The ENDPOINTS field showing <none> for tomcat in this output looks strange.
What's wrong?
Services work by matching labels. You are attempting to match based on the name of your pod, but the pod does not define any labels at all. Try changing the metadata of your pod to
metadata:
  name: tomcat
  labels:
    name: tomcat
and see if that helps.
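Once the label is in place (it can also be added to the running pod without recreating it), the service should pick up an endpoint; a quick verification, with illustrative output:
kubectl label pod tomcat name=tomcat   # add the label to the running pod
kubectl get endpoints tomcat           # should now show something like 10.x.x.x:8080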