Unable to connect to pod using ingress - kubernetes

I am trying to run a Python Flask application that listens on port 5000 in Kubernetes. I have created the Deployment, Service and Ingress. It is not reachable via the domain name I added to the hosts file, but the application does respond when I access it through port forwarding.
I have tried a lot of configuration changes, but nothing worked.
Please let me know your suggestions.
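The port-forward check that did work was roughly the following (namespace and deployment name as in the manifests below):
kubectl port-forward -n production deployment/web-app 5000:5000
curl http://localhost:5000/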
kind: Deployment
metadata:
  name: web-app
  namespace: production
  labels:
    app: web-app
    platform: python
spec:
  replicas:
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: XXXXXX/XXXXXX:XXXXXX
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
apiVersion: v1
kind: Service
metadata:
  name: web-app
  namespace: production
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  selector:
    run: web-app
kind: Ingress
metadata:
  name: name-virtual-host-ingress
  namespace: production
spec:
  rules:
  - host: first.bar.com
    http:
      paths:
      - backend:
          serviceName: web-app
          servicePort: 5000
kubectl get all -n production
NAME READY STATUS RESTARTS AGE
pod/web-app-559df5fc4-67nbn 1/1 Running 0 24m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/web-app ClusterIP 10.100.122.15 <none> 5000/TCP 24m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/web-app 1 1 1 1 24m
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-app-559df5fc4 1 1 1 24m
kubectl get ing -n production
NAME HOSTS ADDRESS PORTS AGE
name-virtual-host-ingress first.bar.com 80 32s
kubectl get ep web-app -n production
NAME ENDPOINTS AGE
web-app <none> 23m

You need to run an Ingress Controller. The Prerequisites section of https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites says:
You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
One example would be https://kubernetes.github.io/ingress-nginx/deploy/. Be sure to run the Mandatory Command and the one that pertains to your provider. You can then get the service to see the assigned IP:
kubectl get -n ingress-nginx svc/ingress-nginx
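For example, on a bare-metal cluster the pinned deploy manifest can be applied directly (the version below is only an illustration; check the install page for the current manifest URL), after which the controller pod and service should show up in the ingress-nginx namespace:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx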

Related

cannot access application via service in kubernetes [duplicate]

This question already has answers here:
Expose port in minikube
In Kubernetes (I am using minikube) I have deployed the following Deployment using kubectl apply -f nginx-deployment.yaml:
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
I get deployment.apps/nginx-deployment created as an output, and when I run kubectl get deployment I get:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 22s
I have also deployed the following Service using the kubectl apply -f nginx-service.yml command:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: "http"
    port: 80
    targetPort: 80
    nodePort: 30080
The output is service/nginx-service created and the output of kubectl get service is:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 127d
nginx-service NodePort 10.99.253.196 <none> 80:30080/TCP 75s
However, when I try to access the app by entering 10.99.253.196 into the browser, it doesn't load and when I try localhost:30080 it says Unable to connect. Could someone help me to understand why this is happening/provide further directions for troubleshooting?
Since you are using minikube, you might need to run minikube service nginx-service --url; this will create a tunnel to the cluster and expose the service.
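A minimal way to check it (assuming the node IP is reachable from your machine, which depends on the minikube driver) is either the tunnel command or building the URL yourself from the node IP and the NodePort declared above:
minikube service nginx-service --url
curl http://$(minikube ip):30080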

Kubernetes (on-premises) Metallb LoadBalancer and sticky sessions

I installed one Kubernetes master and two Kubernetes workers on-premises.
Then I installed MetalLB as the load balancer using the commands below:
$ kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
vim config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.100.170.200-10.100.170.220
kubectl apply -f config-map.yaml
kubectl describe configmap config -n metallb-system
I created my YAML files as below:
myapp-tst-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-tst-deployment
  labels:
    app: myapp-tst
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp-tst
  template:
    metadata:
      labels:
        app: myapp-tst
    spec:
      containers:
      - name: myapp-tst
        image: myapp-tomcat
        securityContext:
          privileged: true
          capabilities:
            add:
            - SYS_ADMIN
myapp-tst-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-tst-service
  labels:
    app: myapp-tst
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  ports:
  - name: myapp-tst-port
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: myapp-tst
  sessionAffinity: None
myapp-tst-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-tst-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: myapp-tst-service
          servicePort: myapp-tst-port
I ran kubectl apply -f for all three files, and this is my result:
kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/myapp-tst-deployment-54474cd74-p8cxk 1/1 Running 0 4m53s 10.36.0.1 bcc-tst-docker02 <none> <none>
pod/myapp-tst-deployment-54474cd74-pwlr8 1/1 Running 0 4m53s 10.44.0.2 bca-tst-docker01 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/myapp-tst-service LoadBalancer 10.110.184.237 10.100.170.15 80:30080/TCP 4m48s app=myapp-tst,tier=backend
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/myapp-tst-deployment 2/2 2 2 4m53s myapp-tst mferraramiki/myapp-test app=myapp-tst
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/myapp-tst-deployment-54474cd74 2 2 2 4m53s myapp-tst myapp/myapp-test app=myapp-tst,pod-template-hash=54474cd74
But when I connect using the LoadBalancer external IP (10.100.170.15), the browser request is sent to one pod; if I refresh or open a new tab (on the same URL), the request is redirected to another pod.
When a user types the URL in the browser, I need him to stay connected to a specific pod for the whole session, not to be switched to other pods.
How can I solve this problem, if it is possible?
In my VM setup I resolved this issue using sticky sessions; how can I enable them on the LB or in the Kubernetes components?
In the myapp-tst-service.yaml file the "sessionAffinity" is set to "None".
You should try to set it to "ClientIP".
From page https://kubernetes.io/docs/concepts/services-networking/service/ :
"If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on the client's IP addresses by setting service.spec.sessionAffinity to "ClientIP" (the default is "None"). You can also set the maximum session sticky time by setting service.spec.sessionAffinityConfig.clientIP.timeoutSeconds appropriately. (the default value is 10800, which works out to be 3 hours)."

Kubernetes Ingress issue baremetal

I am new to Kubernetes. I installed a 3-node k8s cluster through kubeadm on my personal laptop on top of VMware Workstation:
a master and 2 worker nodes.
I have deployed the nginx ingress controller through the URL below, and the nginx ingress pods seem to be working fine. I have deployed an httpd pod, service and ingress to point to the HTTP server, but I am not able to reach the HTTP URL; all files are pasted below.
However, I didn't deploy any load balancer (HAProxy/MetalLB), so I am in a dilemma about whether a LoadBalancer or proxy is required to make ingress work on a bare-metal multi-node cluster.
# nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
[root@kube-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master01 Ready master 197d v1.19.0
kube-node01.example.com Ready worker 197d v1.19.0
kube-node02.example.com Ready worker 197d v1.19.0
[root@kube-master01 ~]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-5zcd5 0/1 Completed 0 41h
ingress-nginx-controller-67897c9494-pt5nl 1/1 Running 0 3h4m
[root@minikube01 httpd]# cat httpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment
  labels:
    app: http-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
      - name: http-server
        image: httpd
        ports:
        - containerPort: 80
[root@minikube01 httpd]# cat httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  selector:
    app: http-server
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 80
[root@minikube01 httpd]# cat httpd-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpd-ingress
spec:
  rules:
  - host: httpd.com
    http:
      paths:
      - backend:
          serviceName: httpd-service
          servicePort: 8081
The same files above work fine on a single minikube node without any issues.
Any assistance is appreciated.
Thanks in Advance
Niru

GCP GKE load balancer connection refused

I'm doing a deployment on GKE, and I find that when I try to access the page I get the message
ERR_CONNECTION_REFUSED
I have defined a load balancing service for the deployment, and the configuration is as follows.
This is the .yaml for the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1"
        ports:
        - containerPort: 3000
This is the service .yaml file.
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
This is working fine, and all is green in GKE :)
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-8586b9b699-flhbn 1/1 Running 0 3h23m
pod/bonsai-onboarding-8586b9b699-p9sn9 1/1 Running 0 3h23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.xx.yy.YY <none> 443/TCP 29d
service/lb-onboarding LoadBalancer XX.xx.yy.YY XX.xx.yy.YY 3000:32618/TCP 3h
Then when I try to connect, the error is ERR_CONNECTION_REFUSED.
I think it is about the network, because I did the following tests from my local machine:
Ping [load balancer IP] ---> Correct
Telnet [Load Balancer IP] 3000 ---> Correct
From Cloud Shell I forwarded port 3000 to 8080, and in another Cloud Shell a curl to http://localhost:8080 worked fine.
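For reference, the forwarding test was roughly this (service name taken from the manifest above):
kubectl port-forward svc/lb-onboarding 8080:3000
curl http://localhost:8080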
Any idea about the problem?
Thanks in advance
I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:
deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bonsai-onboarding
spec:
  selector:
    matchLabels:
      app: bonsai-onboarding
  replicas: 2
  template:
    metadata:
      labels:
        app: bonsai-onboarding
    spec:
      containers:
      - name: bonsai-onboarding
        image: nginx:latest
        ports:
        - containerPort: 80
service:
apiVersion: v1
kind: Service
metadata:
  name: lb-onboarding
spec:
  type: LoadBalancer
  selector:
    app: bonsai-onboarding
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 80
and it works out of the box:
kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/bonsai-onboarding-7bdf584499-j2nv7 1/1 Running 0 6m58s
pod/bonsai-onboarding-7bdf584499-vc7kh 1/1 Running 0 6m58s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.XXX.XXX.1 <none> 443/TCP 8m35s
service/lb-onboarding LoadBalancer 10.XXX.XXX.230 35.XXX.XXX.235 3000:31637/TCP 67s
and I'm able to reach 35.XXX.XXX.235:3000 from any IP:
Welcome to nginx!
...
Thank you for using nginx.
You can check if your app is reachable using this command:
nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.
Ensure containerPort is defined in the spec of the deployment/statefulset/pod and that the application is actually listening on that port. Also ensure your firewall rules are not blocking the NodePort.
gcloud compute firewall-rules create myservice --allow tcp:3000
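As a quick sketch of how to verify both points (the pod name is a placeholder, and the listening check assumes the image ships netstat):
kubectl exec -it <pod-name> -- netstat -lntp
gcloud compute firewall-rules list --filter="name=myservice"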

Minikube unable to expose service with yaml

Trying to run a local registry. I have the following configuration:
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: registry
  labels:
    app: registry
    role: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - name: registry
        image: registry:latest
        ports:
        - containerPort: 5000
        volumeMounts:
        - mountPath: '/registry'
          name: registry-volume
      volumes:
      - name: registry-volume
        hostPath:
          path: '/data'
          type: Directory
Service:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    role: registry
  type: NodePort
  ports:
  - name: registry
    nodePort: 31001
    port: 5000
    protocol: TCP
It all works well when I create deployment/service. kubectl shows status as Running for both service and deployment:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/registry 1 1 1 1 30m
NAME DESIRED CURRENT READY AGE
rs/registry-6549cbc974 1 1 1 30m
NAME READY STATUS RESTARTS AGE
po/registry-6549cbc974-mmqpj 1/1 Running 0 30m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 37m
svc/registry NodePort 10.0.0.6 <none> 5000:31001/TCP 7m
However, when I try to get the external URL for the service using minikube service registry --url, it times out/fails with: Waiting, endpoint for service is not ready yet....
When I delete the service (keeping deployment intact), and manually expose the deployment using kubectl expose deployment registry --type=NodePort, I am able to get it working.
Minikube log can be found here.
You need to specify the correct spec.selector in the registry Service manifest: the Pod template only carries the label app: registry, while the Service selects role: registry, so no endpoints are created. Use app: registry instead:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: default
  labels:
    app: registry
spec:
  selector:
    app: registry
  type: NodePort
  ports:
  - name: registry
    nodePort: 31001
    port: 5000
    protocol: TCP
Now registry service correctly points to the registry pod:
$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:8443 14m
registry 172.17.0.4:5000 4s
And you can get external url as well:
$ minikube service registry --url
http://192.168.99.106:31001
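To double-check that the registry itself responds at that URL, hitting the Docker registry v2 catalog endpoint should return a small JSON document (address taken from the output above):
curl http://192.168.99.106:31001/v2/_catalog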