Ingress -> ClusterIP backend - getting ERR_CONNECTION_REFUSED - kubernetes

I have an Ingress defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-ingress
  namespace: wordpress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 25m
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: 6002
The backend is a ClusterIP Service listening on port 6002.
When I try to reach the Ingress by its ADDRESS in the browser, I get ERR_CONNECTION_REFUSED.
I suspect it has to do with the backend?
Q: What could the problem be, and how do I analyze it to make it work?
See the picture below. It is on GCP; all IPs are resolved, and everything seems connected.
The nginx-ingress (ingress controller and default backend) was installed as a Helm chart:
helm install --namespace wordpress --name wp-nginx-ingress stable/nginx-ingress --tls
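Since ERR_CONNECTION_REFUSED means nothing answered on that address and port at all, the analysis can start at the controller rather than the backend. A rough checklist (a sketch; exact resource names depend on the chart release, so list first and adjust):
# Does the controller's LoadBalancer Service exist and have an EXTERNAL-IP?
kubectl -n wordpress get svc
# Is the controller pod Running, and does it log errors?
kubectl -n wordpress get pods
kubectl -n wordpress logs <controller-pod> --tail=50
# Did the controller admit the Ingress and assign it an address?
kubectl -n wordpress describe ingress wp-ingress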
UPDATE:
I do not use HTTPS for the backend yet, so I tried removing the redirect annotation nginx.ingress.kubernetes.io/ssl-redirect: "true" from the Ingress YAML. It did not help.
UPDATE 2: The wordpress Service YAML, taken from the YAML tab of the running Service in GCP -> Kubernetes Engine -> Services:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-30T04:11:12Z"
  labels:
    app.kubernetes.io/instance: wordpress
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: wordpress
    helm.sh/chart: wordpress-9.0.4
  name: wordpress
  namespace: wordpress
  resourceVersion: "2518308"
  selfLink: /api/v1/namespaces/wordpress/services/wordpress
  uid: 7dac1a73-723c-11ea-af1a-42010a800084
spec:
  clusterIP: xxx.xx.xxx.xx
  ports:
  - name: http
    port: 6002
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/instance: wordpress
    app.kubernetes.io/name: wordpress
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
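One thing worth verifying in this Service is that its selector actually matches the running pods; if it matches nothing, the Service has no endpoints and connections are refused. A quick check:
# If the selector matches no pods, the ENDPOINTS column is empty
kubectl -n wordpress get endpoints wordpress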
UPDATE 3:
I tried:
kubectl -n wordpress exec -it wordpress-xxxx-xxxx -- /bin/bash
curl http://wordpress.wordpress.svc.cluster.local:6002
and it works: it returns the HTML from WordPress.
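Since the Service answers from inside the cluster, the backend looks fine and the break is in front of it. As a next step, a sketch of testing the controller from outside (ADDRESS being the external IP shown on the Ingress; the Host header makes nginx match the Ingress rule):
curl -v  http://ADDRESS/ -H 'Host: my.domain.com'
curl -vk https://ADDRESS/ -H 'Host: my.domain.com'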

Unable to reach the URL AWS provides for my LoadBalancer service in EKS

Why is my LoadBalancer service in Kubernetes not reachable?
I have deployed the nginx-ingress-controller Helm chart, and it has a Service of type LoadBalancer in EKS. This Service receives a URL (EXTERNAL-IP), and that URL resolves to an IP, but when I try to reach the URL it is not reachable.
I did kubectl port-forward -n ingress-nginx services/ingress-nginx-controller 8080:80 and then I can reach nginx at localhost:8080, so I know the problem is reaching the service itself from the internet. I've checked the VPC, subnets, security groups, and inbound/outbound rules, and they all seem OK.
Can anyone provide some guidance on how to troubleshoot this issue?
This is the definition of the service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:xxx:certificate/xxx
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
    helm.sh/chart: ingress-nginx-4.4.2
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: xxx
  clusterIPs:
  - xxx
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: xxx
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: xxx
    port: 443
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
This is the command I'm using to deploy the nginx helm chart:
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace --version 4.4.2 -f values.yaml
And this is my values.yaml:
controller:
  config:
    allow-snippet-annotations: "true"
    http-snippet: |
      server {
        listen 2443;
        return 308 https://$host$request_uri;
      }
    use-forwarded-headers: "false"
  service:
    enabled: true
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:xxx:certificate/xxx
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    ports:
      http: 80
      https: 443
    targetPorts:
      http: http
      https: http
    type: LoadBalancer
To reach the nginx service, I either open it in the browser or just run:
curl xxx-xxx.elb.eu-central-1.amazonaws.com
but I always get "This site can't be reached".
First and foremost: give up. The nginx controller just won't work properly with ACM; I wasted enormous hours before accepting this and moving on.
Now that you have too, here's the approach I employed just yesterday, and it worked brilliantly.
Install the nginx controller as a Helm release without making any changes to the controller service. This will create a CLB (instead of an NLB), and that is fine. The NLB is mentioned in guides on the internet as a crutch to get the ACM certificate working, but all it does is produce redirect loops.
MOST IMPORTANT: please go through this to install cert-manager to manage Let's Encrypt certificates for you; ACM won't let you export a certificate. If you already have an external certificate, good: just put it in the secret referenced by the TLS definition below. Otherwise, install cert-manager; if its Helm install takes too long, you didn't set the CRD installation flag to true. THIS IS CRUCIAL, and if it gets stuck you will need to uninstall the release and retry properly.
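For reference, a minimal sketch of the pieces referred to above: the cert-manager install with the CRD flag enabled, plus a ClusterIssuer matching the "letsencrypt" cluster-issuer annotation used in the Ingress below (the email is a placeholder to adapt):
# cert-manager with CRDs; forgetting the flag is what makes the install hang
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder: your contact email
    privateKeySecretRef:
      name: letsencrypt-account-key   # secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                # solve HTTP-01 challenges via nginx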
Here's an example of an nginx Ingress you can adapt to your needs:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yeaboi-lb
  namespace: yeaboi
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      rewrite ^/(/?)$ /yeaboi$1 break;
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt"
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  tls:
  - secretName: yeaboi-tls
    hosts:
    - yeaboi.io
  rules:
  - host: yeaboi.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: yeaboi-web
            port:
              number: 80
  ingressClassName: nginx
Here's an example of the yeaboi-web service you can use in conjunction with the above Ingress (make sure targetPort matches the port your deployment exposes):
apiVersion: v1
kind: Service
metadata:
  name: yeaboi-web
  namespace: yeaboi
  labels:
    app: yeaboi-web
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
  selector:
    app: yeaboi-web
Point your domain to the CLB created by nginx (and, just in case, make sure it has your EC2 instances in InService status).
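To find the CLB hostname to point the domain at, something like this works (the service name is assumed from the chart's default naming):
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'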
Enjoy having a working nginx ingress you can customize heavily (unlike the ALB ingress, which is severely limited in comparison).

Ingress returns 404

Rancher ingress returns 404 for my service.
Setup: I have 6 VMs: one Rancher server at x.x.x.51 (where the DNS name domain.company points, with TLS) and 5 cluster VMs (one master and 4 workers, x.x.x.52-56).
My service, gvm-gsad, running in the gvm namespace:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    meta.helm.sh/release-name: gvm
    meta.helm.sh/release-namespace: gvm
  creationTimestamp: "2021-11-15T21:14:21Z"
  labels:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gvm-gsad
    app.kubernetes.io/version: "21.04"
    helm.sh/chart: gvm-1.3.0
  name: gvm-gsad
  namespace: gvm
  resourceVersion: "3488107"
  uid: c1ddfdfa-3799-4945-841d-b6aa9a89f93a
spec:
  clusterIP: 10.43.195.239
  clusterIPs:
  - 10.43.195.239
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: gsad
    port: 80
    protocol: TCP
    targetPort: gsad
  selector:
    app.kubernetes.io/instance: gvm-gvm
    app.kubernetes.io/name: gvm-gsad
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
My Ingress configuration (the ingress controller is the default one from Rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["172.16.12.53"],"port":443,"protocol":"HTTPS","serviceName":"gvm:gvm-gsad","ingressName":"gvm:gvm","hostname":"dtl.miproad.ad","path":"/gvm","allNodes":true}]'
  creationTimestamp: "2021-11-16T19:22:45Z"
  generation: 10
  name: gvm
  namespace: gvm
  resourceVersion: "3508472"
  uid: e99271a8-8553-45c8-b027-b259a453793c
spec:
  rules:
  - host: domain.company
    http:
      paths:
      - backend:
          service:
            name: gvm-gsad
            port:
              number: 80
        path: /gvm
        pathType: Prefix
  tls:
  - hosts:
    - domain.company
status:
  loadBalancer:
    ingress:
    - ip: x.x.x.53
    - ip: x.x.x.54
    - ip: x.x.x.55
    - ip: x.x.x.56
When I access it at https://domain.company/gvm, I get a 404.
However, when I change the service to NodePort, I can access it at x.x.x.52:PORT normally, meaning the deployment is running fine and the issue is some configuration in the Ingress.
I checked this one: rancher 2.x thru ingress controller returns 404, but it does not help.
Thank you in advance!
Figured out the solution.
domain.company points to Rancher (x.x.x.51), while the ingress runs on x.x.x.53, .54, .55, and .56.
So the solution is to create a new DNS record, gvm.domain.company, pointing to any of the ingress nodes (x.x.x.53-.56); you can put a load balancer in front or use round-robin DNS.
Then the Ingress host is gvm.domain.company and the path is "/", as sketched below.
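A minimal sketch of the updated rule, assuming everything else in the Ingress stays the same:
spec:
  rules:
  - host: gvm.domain.company
    http:
      paths:
      - backend:
          service:
            name: gvm-gsad
            port:
              number: 80
        path: /
        pathType: Prefix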
Hope it helps others!

Minikube NGINX Ingress returns 404 Not Found

I created a Deployment, a Service, and an Ingress to be able to access an NGINX web server from my host, but I keep getting 404 Not Found. After many hours of troubleshooting, I'm at a point where some help would be very welcome.
The steps and related YAML files are below.
Enable Minikube NGINX Ingress controller
minikube addons enable ingress
Create NGINX web server deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-white
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-webserver-white
  template:
    metadata:
      labels:
        app: nginx-webserver-white
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
Create a ClusterIP Service to manage access to the pods
apiVersion: v1
kind: Service
metadata:
  name: webserver-white-svc
  labels:
    run: webserver-white-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-webserver-white
Create an Ingress to access the service from outside the cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver-white-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: webserver-white-svc
      port:
        number: 80
  rules:
  - host: white.example.com # This is pointing to the control plane IP
    http:
      paths:
      - backend:
          service:
            name: webserver-white-svc
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
Tests
When connecting to one pod and executing curl http://localhost, it returns the NGINX homepage HTML, so the pod looks good.
When creating a test pod and executing curl http://<service-cluster-ip>, it returns the NGINX homepage HTML, so the service looks good.
When connecting to the ingress nginx controller pod and executing curl http://<service-cluster-ip>, it also returns the NGINX homepage HTML, so the connection between the ingress controller and the service looks good.
When connecting to the control plane with minikube ssh and executing ping <nginx-controller-ip>, I can see that it reaches the nginx controller.
I tested the same with a NodePort Service instead of ClusterIP and noticed that I could access the NGINX homepage through the node port, but not through the Ingress.
Any idea what I could be doing wrong, and/or what else I could do to troubleshoot this issue?
Other notes
minikube version: v1.23.0
kubectl version on the client and server: v1.22.1
OS: Ubuntu 18.04.5 LTS (Bionic Beaver)
UPDATE/SOLUTION:
The solution was to add the missing annotation kubernetes.io/ingress.class: "nginx" to the Ingress.
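For clarity, this is where the annotation goes in the Ingress manifest above (only the metadata changes):
metadata:
  name: webserver-white-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"          # the missing annotation
    nginx.ingress.kubernetes.io/rewrite-target: /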

AKS ingress address is empty. Grafana not being exposed through ingress

I have an AKS cluster where I am running my application without any problem. I have two deployments (backend and frontend), and I am using a Service of type ClusterIP and an ingress controller to expose them publicly. I am trying to do the same with my Grafana deployment, but it's not working: I am getting a 404 error. If I do a port-forward, I can access it at http://localhost:3000 and log in successfully. The failing part is the Ingress: it is created, but with an empty address. I have three Ingresses; two of them show the public IP in the ADDRESS column and are accessible through their URLs, while the Grafana URL is not working.
Here is my Grafana Ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: loki-grafana-ingress
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  tls:
  - hosts:
    - xxx.xxx.me
    secretName: sec-tls
  rules:
  - host: xxx.xxx.me
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          serviceName: loki-grafana
          servicePort: 80
status:
  loadBalancer: {}
Attached is a screenshot of the created Ingresses.
Grafana Service:
kind: Service
apiVersion: v1
metadata:
  name: loki-grafana
  namespace: ingress-basic
  labels:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 6.7.0
    helm.sh/chart: grafana-5.7.10
  annotations:
    meta.helm.sh/release-name: loki
    meta.helm.sh/release-namespace: ingress-basic
spec:
  ports:
  - name: service
    protocol: TCP
    port: 80
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: loki
    app.kubernetes.io/name: grafana
  clusterIP: 10.0.189.19
  type: ClusterIP
  sessionAffinity: None
status:
  loadBalancer: {}
When doing kubectl port-forward --namespace ingress-basic service/loki-grafana 3000:80, Grafana is reachable at localhost:3000. [screenshots: Grafana on localhost; Grafana URL]
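An empty ADDRESS on an Ingress usually means no controller has admitted it. Two checks that can narrow this down (the label selector here is an assumption; adjust it to however the controller was installed):
# Events often explain why the Ingress was not admitted
kubectl -n ingress-basic describe ingress loki-grafana-ingress
# Does the controller log that it synced (or rejected) this Ingress?
kubectl -n ingress-basic logs -l app.kubernetes.io/name=ingress-nginx --tail=50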

How to get around specifying the "Host" header to access services pointed to by Ingress controllers?

My ingress controller is working, and I am able to access the service outside the cluster at http://(externalIP)/path using an HTTP GET request from a REST client. However, I had to set the "Host" header to the host value of my Ingress resource for this to work. Because of this, I am not able to hit http://(externalIP)/path from my web browser. Is there any way to enable access from an external web browser without having to specify "Host" in the request header?
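For illustration, the kind of request that works versus the one that fails (host value taken from the Ingress resource below):
# works: the Host header matches the Ingress rule
curl http://<externalIP>/path -H 'Host: master1.saas.com'
# fails: no matching host, so the controller falls back to its default backend
curl http://<externalIP>/path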
Ingress Resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: master1.saas.com
    http:
      paths:
      - backend:
          serviceName: gen-devops
          servicePort: 10311
        path: /*
Ingress Service:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  externalIPs:
  - 172.16.32.85
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
I assume you want to use this for testing.
If you are using any *nix-flavored OS (macOS, Linux), you can add an entry to your /etc/hosts file, something like this:
172.16.32.85 master1.saas.com
If you are using a Windows box, you can add the same entry in C:\Windows\System32\Drivers\etc\hosts.