How to expose a service outside a Kubernetes cluster via ingress? - kubernetes

I'm struggling to expose a service in an AWS cluster to the outside and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several aspects.
First, I've created a deployment which should work without any configuration. Based on this article, I did
kubectl create namespace tests
created a file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello from the first deployment!
created ingress.yaml and applied it (kubectl apply -f .\probes\ingress.yaml -n tests)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
spec:
  rules:
    - host: test.projectname.org
      http:
        paths:
          - pathType: Prefix
            path: "/test"
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80
    - host: test2.projectname.org
      http:
        paths:
          - pathType: Prefix
            path: "/test2"
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80
  ingressClassName: nginx
Second, I can see that DNS actually points to the cluster and the ingress rules are applied:
if I open http://test.projectname.org/test or any irrelevant path (http://test.projectname.org/test3), I'm shown NET::ERR_CERT_AUTHORITY_INVALID, but
if I use "open anyway" in the browser, irrelevant paths give ERR_TOO_MANY_REDIRECTS while http://test.projectname.org/test gives Cannot GET /test
Now, TLS issues aside (those deserve a separate question), why do I get Cannot GET /test? It looks like the ingress controller (ingress-nginx) got the rules (otherwise it wouldn't discriminate between paths; that's why I don't show the DNS settings, although they are described in the previous question), but instead of showing the simple hello-kubernetes page at /test it returns this simple 404 message. Why is that? What could possibly go wrong? How do I debug this?
Some debug info:
kubectl version --short tells Kubernetes Client Version is v1.21.5 and Server Version is v1.20.7-eks-d88609
kubectl get ingress -n tests shows that hello-kubernetes-ingress does exist, with the nginx class, the 2 expected hosts, and an address equal to the one shown for the load balancer in the AWS console
kubectl get all -n tests shows
NAME                                          READY   STATUS    RESTARTS   AGE
pod/hello-kubernetes-first-6f77d8ff99-gjw5d   1/1     Running   0          5h4m
pod/hello-kubernetes-first-6f77d8ff99-ptwsn   1/1     Running   0          5h4m
pod/hello-kubernetes-first-6f77d8ff99-x8w87   1/1     Running   0          5h4m

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/hello-kubernetes-first   ClusterIP   10.100.18.189   <none>        80/TCP    5h4m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-kubernetes-first   3/3     3            3           5h4m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-kubernetes-first-6f77d8ff99   3         3         3       5h4m
ingress-nginx was installed before me via the following chart:
apiVersion: v2
name: nginx
description: A Helm chart for Kubernetes
type: application
version: 4.0.6
appVersion: "1.0.4"
dependencies:
  - name: ingress-nginx
    version: 4.0.6
    repository: https://kubernetes.github.io/ingress-nginx
and the values overrides applied with the chart differ from the original ones (well, those got updated since the installation) mostly in extraArgs: default-ssl-certificate: "nginx-ingress/dragon-family-com" is uncommented
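(For context, that override corresponds to something like the following in the values file; only the default-ssl-certificate flag and the secret name come from above, the nesting under the ingress-nginx dependency is an assumption about how the umbrella chart is laid out.)
ingress-nginx:
  controller:
    extraArgs:
      # served whenever a host has no matching tls entry in an Ingress
      default-ssl-certificate: "nginx-ingress/dragon-family-com"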
PS To answer Andrew, I did try to set up HTTPS, but it seemingly didn't help, so I hadn't included what I tried in the initial question. Yet, here's what I did:
installed cert-manager, currently without a custom chart: kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
based on cert-manager's tutorial and an SO question, created a ClusterIssuer with the following config:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-backoffice
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
    privateKeySecretRef: # this secret will be created in the namespace of cert-manager
      name: letsencrypt-backoffice-private-key
    # email: <will be used for urgent alerts about expiration etc>
    solvers:
      # TODO: add for each domain/second-level domain/*.projectname.org
      - selector:
          dnsZones:
            - test.projectname.org
            - test2.projectname.org
        # haven't made it work yet, so switched to the simpler to configure http01 challenge
        # dns01:
        #   route53:
        #     region: ... # that of load balancer (but we also have ...)
        #     accessKeyID: <of IAM user with access to Route53>
        #     secretAccessKeySecretRef: # created that
        #       name: route53-credentials-secret
        #       key: secret-access-key
        #     role: arn:aws:iam::645730347045:role/cert-manager
        http01:
          ingress:
            class: nginx
and applied it via kubectl apply -f issuer.yaml
created 2 certificates in the same file and applied it again:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate
spec:
  secretName: tls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test.projectname.org
  dnsNames:
    - test.projectname.org
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate-2
spec:
  secretName: tls-secret-2
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test2.projectname.org
  dnsNames:
    - test2.projectname.org
made sure that the certificates are issued correctly (skipping the painful part; the result is: kubectl get certificates shows that both certificates have READY = true and both tls secrets are created)
figured that my ingress is in another namespace and that tls secrets referenced in an ingress spec can only live in the same namespace (haven't tried the wildcard certificate and the --default-ssl-certificate option yet), so I copied each of them to the tests namespace:
opened the existing secret, like kubectl edit secret tls-secret-2, and copied the data and annotations
created an empty (Opaque) secret in tests: kubectl create secret generic tls-secret-2-copy -n tests
opened it (kubectl edit secret tls-secret-2-copy -n tests) and inserted the data and annotations (a one-liner for this copy is sketched after the tls snippet below)
in ingress spec, added the tls bit:
tls:
  - hosts:
      - test.projectname.org
    secretName: tls-secret-copy
  - hosts:
      - test2.projectname.org
    secretName: tls-secret-2-copy
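(The manual copy above can also be done in one go; a sketch assuming the source secret lives in the default namespace and jq is available:)
kubectl get secret tls-secret-2 -o json \
  | jq '.metadata.name = "tls-secret-2-copy"
        | del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid,
              .metadata.creationTimestamp, .metadata.managedFields)' \
  | kubectl apply -n tests -f -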
I hoped that this would help, but actually it made no difference (I get ERR_TOO_MANY_REDIRECTS for irrelevant paths, a redirect from http to https, NET::ERR_CERT_AUTHORITY_INVALID at https, and Cannot GET /test if I insist on getting to the page).

Since you've used your own answer to complement the question, I'll kind of answer all the things you asked, while providing a divide-and-conquer strategy for troubleshooting Kubernetes networking.
At the end I'll give you some nginx and IP answers.
This is correct
- host: test3.projectname.org
  http:
    paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
Breaking down troubleshooting with Ingress
DNS
Ingress
Service
Pod
Certificate
1. DNS
You can use the dig command to query the DNS:
dig google.com
2. Ingress
The ingress controller doesn't look at the IP, it just looks at the headers.
You can force a host using any tool that lets you change the headers, like curl:
curl --header 'Host: test3.projectname.org' http://123.123.123.123 (your public IP)
3. Service
You can be sure that your service is working by creating an ubuntu/centos pod, using kubectl exec -it podname -- bash and trying to curl your service from within the cluster, for example:
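A rough sketch (the image and package steps are just one way to do it; the service DNS name below follows the namespace/service names from the question):
# throwaway pod for in-cluster testing (deleted when you exit the shell)
kubectl run -it --rm debug --image=ubuntu -n tests -- bash
# inside the pod:
apt-get update && apt-get install -y curl
# should return the hello-kubernetes HTML if the Service and pods are fine
curl -i http://hello-kubernetes-first.tests.svc.cluster.local/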
4. Pod
You're getting this
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
This part, GET /test2, means that the request got the address from the DNS, went all the way from the internet, found your cluster, found your ingress controller, got through the service and reached your pod. Congrats! Your ingress is working!
But why is it returning 404?
The path that was passed to the service and from the service to the pod is /test2
Do you have a file called test2 that nginx can serve? Do you have an upstream config in nginx that has a test2 prefix?
That's why you're getting a 404 from nginx, not from the ingress controller.
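Another way to check what the application itself returns for a given path, bypassing the ingress entirely, is to port-forward the Service (a sketch using the names from the question):
# forward local port 8080 to the Service's port 80
kubectl port-forward svc/hello-kubernetes-first -n tests 8080:80
# in another terminal:
curl -i http://localhost:8080/        # should return the hello page
curl -i http://localhost:8080/test2   # if this is 404, the 404 comes from the app, not the ingress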
Those IPs are internal; remember, the internet traffic ended at the cluster border, and now you're in an internal network. Here's a rough sketch of what's happening.
Let's say that you're accessing it from your laptop. Your laptop has the IP 192.168.123.123, but your home has the address 7.8.9.1, so when your request hits the cluster, the cluster sees 7.8.9.1 requesting test3.projectname.org.
The cluster looks for the ingress controller, which finds a suitable configuration and passes the request down to the service, which passes the request down to the pod.
So,
your router can see your private IP (192.168.123.123)
Your cluster(ingress) can see your router's IP (7.8.9.1)
Your service can see the ingress's IP (192.168.?.?)
Your pod can see the service's IP (192.168.14.57)
It's a game of pass around.
If you want to see the public IP in your nginx logs, you need to customize the log format to include the X-Real-IP header, which is usually where load balancers/ingresses/ambassador/proxies put the actual requester's public IP.
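On the ingress-nginx side, when the controller sits behind an AWS load balancer you typically also have to tell it to trust the forwarded headers. A minimal sketch of the corresponding Helm values override (use-forwarded-headers is a real controller ConfigMap option; the nesting mirrors the chart shown earlier and is an assumption):
ingress-nginx:
  controller:
    config:
      # trust X-Forwarded-For / X-Real-IP set by the load balancer in front of the controller
      use-forwarded-headers: "true"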

Well, I haven't figured this out for ArgoCD yet (edit: figured it out, but the solution is ArgoCD-specific), but for this test service it seems that path resolution is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it via an A entry to the load balancer (as "alias" in the AWS console), and then added a new rule with the / path to the ingress, like this:
- host: test3.projectname.org
  http:
    paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
I've finally got the hello kubernetes thing on http://test3.projectname.org. I have succeeded with TLS after a number of attempts/research and some help in a separate question.
But I haven't succeeded with actual debugging: looking at kubectl logs -n nginx <pod name, see kubectl get pod -n nginx> doesn't really help me understand what path was passed to the service, and the output is rather difficult to read (I can't even figure out where those IPs come from: they are not mine, not the LB's, and not the cluster IP of the service; nor do I understand what tests-hello-kubernetes-first-80 stands for – it's just a concatenation of namespace, service name and port; no object has such a name, including the ingress):
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
[tests-hello-kubernetes-first-80] [] 192.168.49.95:8080 144 0.000 404 <some hash>
Any more pointers on debugging will be helpful; suggestions regarding correct path rewriting for nginx-ingress are also welcome (one possible approach is sketched below).
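Regarding rewriting: one commonly used approach with ingress-nginx is a capture group in the path combined with the rewrite-target annotation, so that /test/... is forwarded to the backend as /... . A sketch adapted from the ingress-nginx rewrite documentation to the manifest above (whether you actually want the prefix stripped depends on the app):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 is whatever the (.*) group below captured
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: test.projectname.org
      http:
        paths:
          - pathType: ImplementationSpecific
            path: /test(/|$)(.*)
            backend:
              service:
                name: hello-kubernetes-first
                port:
                  number: 80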

Related

GKE Ingres and GCE Load balancer : Always get 404

I'm trying to deploy 2 applications (behind 2 separate Deployment objects). I have 1 Service per Deployment, of type NodePort.
application1_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: application1-service
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    run: application1
  type: NodePort
application2_service.yaml is the exact same (except for name and run)
I use an Ingress to make the 2 services available,
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip"
    networking.gke.io/managed-certificates: "my-certificate"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "my.host.com"
      http:
        paths:
          - path: /*
            backend:
              serviceName: application1-service
              servicePort: 80
          - path: /application2/*
            backend:
              serviceName: application2-service
              servicePort: 80
I also create a ManagedCertificate object, to be able to handle HTTPS requests.
managed_certificate.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-certificate
spec:
  domains:
    - my.host.com
The weird thing here is that curl https://my.host.com/ works fine and I can access my service, but when I try curl https://my.host.com/application2/, I keep getting 404 Not Found.
Why is the root working and not the other?
Additional info:
The ManagedCertificate is valid and works fine with /.
application1 and application2 are the exact same app and if I swap them in the ingress, the output is the same.
Thanks for your help!
EDIT:
Here is the 404 I get when I try to access application2
Don't know if it can help but here is also the part of the Ingress access logs showing the 404
I think you can't use the same port for 2 different applications, because this port is used on every node to route to one app.
From the docs:
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So in your case one app is already using port 80; you could try to use a different one for application2.
I have replicated the problem without a domain and managed Certificate. For me it's working fine using the Load Balancer IP address.
(ingress-316204)$ curl http://<LB IP address>/v2/
Hello, world!
Version: 2.0.0
Hostname: web2-XXX
(ingress-316204)$ curl http://<LB IP address>/
Hello, world!
Version: 1.0.0
Hostname: web-XXX
Ingress configuration seems to be correct as per the doc. Check if curl https://<LB IP address>/application2/ is working or not; if it's working, then there might be some issue with the host name.
Check if you have updated the hosts file (/etc/hosts) with the line <LB IP address> my.host.com.
Check if host, path and backend are configured correctly in the Load Balancer configuration.
If you're still having the problem, check that port, nodePort and targetPort are configured correctly, or else share the output of 'kubectl describe ing my-ingress' for further investigation (see also the curl sketch below for taking DNS out of the picture).
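One quick way to take DNS and /etc/hosts out of the equation is curl's --resolve option, which pins the host name to the load balancer IP for a single request (a sketch; substitute the real LB IP):
# pretend my.host.com resolves to the LB, without touching /etc/hosts
curl -v --resolve my.host.com:443:<LB IP address> https://my.host.com/application2/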
Answering my own question:
After searching for days, I ultimately found the reason for the problem.
Everything was fine with the cluster and the configs; my problem came from my Flask API.
All the URLs were like this one:
@app.route("/my_function")
So it was working fine on the root path with my.host.com/my_function, but when I was typing my.host.com/application1/my_function it wasn't working...
I just changed my app to
@app.route("/application1/my_function")
Everything works fine now :) Hope it will help!

Nginx Ingress returns 502 Bad Gateway on Kubernetes

I have a Kubernetes cluster deployed on AWS (EKS). I deployed the cluster using the “eksctl” command line tool. I’m trying to deploy a Dash python app on the cluster without success. The default port for Dash is 8050. For the deployment I used the following resources:
pod
service (ClusterIP type)
ingress
You can check the resource configuration files below:
pod-configuration-file.yml
kind: Pod
apiVersion: v1
metadata:
  name: dashboard-app
  labels:
    app: dashboard
spec:
  containers:
    - name: dashboard
      image: my_image_from_ecr
      ports:
        - containerPort: 8050
service-configuration-file.yml
kind: Service
apiVersion: v1
metadata:
  name: dashboard-service
spec:
  selector:
    app: dashboard
  ports:
    - port: 8050 # exposed port
      targetPort: 8050
ingress-configuration-file.yml (host based routing)
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.my_domain
      http:
        paths:
          - backend:
              serviceName: dashboard-service
              servicePort: 8050
            path: /
I followed the steps below:
kubectl apply -f pod-configuration-file.yml
kubectl apply -f service-configuration-file.yml
kubectl apply -f ingress-configuration-file.yml
I also noticed that the pod deployment works as expected:
kubectl logs my_pod:
and the output is:
Dash is running on http://127.0.0.1:8050/
Warning: This is a development server. Do not use app.run_server
in production, use a production WSGI server like gunicorn instead.
* Serving Flask app "annotation_analysis" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
You can see from the ingress configuration file that I want to do host based routing using my domain. For this to work, I have also deployed an nginx-ingress. I have also created an "A" record set using Route53 that maps the "dashboard.my_domain" to the nginx-ingress:
kubectl get ingress
and the output is:
NAME                HOSTS                 ADDRESS                                      PORTS   AGE
dashboard-ingress   dashboard.my_domain   nginx-ingress.elb.aws-region.amazonaws.com   80      93s
Moreover,
kubectl describe ingress dashboard-ingress
and the output is:
Name:             dashboard-ingress
Namespace:        default
Address:          nginx-ingress.elb.aws-region.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host            Path  Backends
  ----            ----  --------
  host.my-domain
                  /     dashboard-service:8050 (192.168.36.42:8050)
Annotations:
  nginx.ingress.kubernetes.io/force-ssl-redirect: false
  nginx.ingress.kubernetes.io/rewrite-target: /
  nginx.ingress.kubernetes.io/ssl-redirect: false
Events:  <none>
Unfortunately, when I try to access the Dash app in the browser, I get a 502 Bad Gateway error from the nginx. Could you please help me, because my Kubernetes knowledge is limited.
Thanks in advance.
It had nothing to do with Kubernetes or AWS settings. I had to change my python Dash code from:
if __name__ == "__main__":
    app.run_server(debug=True)
to:
if __name__ == "__main__":
    app.run_server(host='0.0.0.0', debug=True)
The addition of host='0.0.0.0' did the trick!
I think you'll need to check whether any other service is exposed at path / on the same host.
Secondly, try removing the rewrite-target annotation. Also, can you please update your question with the output of kubectl describe ingress <ingress_name>?
I would also suggest using the backend-protocol annotation with the value HTTP (the default value; you can avoid using this if the dashboard application is not SSL-configured and only this application is served at the said host). But you may need to add it if multiple applications are served at this host: create one Ingress with backend-protocol: HTTP for non-SSL services, and another with backend-protocol: HTTPS to serve traffic to SSL-enabled services (see the sketch below).
For more information on the backend-protocol annotation, kindly refer to this link.
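For illustration, the annotation sits on the Ingress like this (a sketch mirroring the question's manifest; the apiVersion assumes the older networking.k8s.io/v1beta1 API that the serviceName/servicePort syntax implies, and HTTPS only makes sense if the dashboard pod itself terminates TLS):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    # protocol the ingress controller uses when talking to the backend pods;
    # HTTP is the default, HTTPS is only for backends that terminate TLS themselves
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
  rules:
    - host: dashboard.my_domain
      http:
        paths:
          - path: /
            backend:
              serviceName: dashboard-service
              servicePort: 8050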
I have often faced this issue in my Ingress Setup and these steps have helped me resolve it.

Google Kubernetes Ingress health check always failing

I have configured a web application pod exposed via apache on port 80. I'm unable to configure a service + ingress for accessing from the internet. The issue is that the backend services always report as UNHEALTHY.
Pod Config:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: webapp
  name: webapp
  namespace: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webapp
  template:
    metadata:
      labels:
        name: webapp
    spec:
      containers:
        - image: asia.gcr.io/my-app/my-app:latest
          name: webapp
          ports:
            - containerPort: 80
              name: http-server
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    name: webapp
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 80
Ingress Config:
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  backend:
    serviceName: webapp-service
    servicePort: 50000
This results in backend services reporting as UNHEALTHY.
The health check settings:
Path: /
Protocol: HTTP
Port: 32463
Proxy protocol: NONE
Additional information: I've tried a different approach of exposing the deployment as a load balancer with external IP and that works perfectly. When trying to use a NodePort + Ingress, this issue persists.
With GKE, the health check on the Load balancer is created automatically when you create the ingress. Since the HC is created automatically, so are the firewall rules.
Since you have no readinessProbe configured, the LB has a default HC created (the one you listed). To debug this properly, you need to isolate where the point of failure is.
First, make sure your pod is serving traffic properly;
kubectl exec [pod_name] -- wget localhost:80
If the application has curl built in, you can use that instead of wget.
If the application has neither wget nor curl, skip to the next step.
Get the following output and keep track of it:
kubectl get po -l name=webapp -o wide
kubectl get svc webapp-service
You need to keep the service and pod cluster IPs.
SSH to a node in your cluster and run sudo toolbox bash
Install curl:
apt-get install curl
Test the pods to make sure they are serving traffic within the cluster:
curl -I [pod_clusterIP]:80
This needs to return a 200 response
Test the service:
curl -I [service_clusterIP]:80
If the pod is not returning a 200 response, the container is either not working correctly or the port is not open on the pod.
If the pod is working but the service is not, there is an issue with the routes in your iptables, which are managed by kube-proxy, and that would be an issue with the cluster.
Finally, if both the pod and the service are working, there is an issue with the Load Balancer health checks, and that is something Google needs to investigate.
As Patrick mentioned, the checks will be created automatically by GCP.
By default, GKE will use readinessProbe.httpGet.path for the health check.
But if there is no readinessProbe configured, then it will just use the root path /, which must return an HTTP 200 (OK) response (and that's not always the case, for example, if the app redirects to another path, then the GCP health check will fail).
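For example, the containers section of the Deployment above could gain a probe like this (a sketch; the path and timings are assumptions about the app, and GKE will then model its health check after the probe):
    containers:
      - image: asia.gcr.io/my-app/my-app:latest
        name: webapp
        ports:
          - containerPort: 80
            name: http-server
        readinessProbe:
          httpGet:
            path: /        # must return HTTP 200 for the LB health check to pass
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10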

How do I make my admin ui of cockroachdb publicly available via traefik ingress controller on kubernetes?

Kubernetes dedicated cockroachdb node - accessing admin ui via traefik ingress controller fails - page isn't redirecting properly
I have a dedicated kubernetes node running cockroachdb. The pods get scheduled and everything is setup. I want to access the admin UI from a subdomain like so: cockroachdb.hostname.com. I have done this with traefik dashboard and ceph dashboard so I know my ingress setup is working. I even have cert-manager running to have https enabled. I get the error from the browser that the page is not redirecting properly.
Do I have to specify the host name somewhere special?
I have tried adding this with no success: --http-host cockroachdb.hostname.com
This dedicated node has its own public IP, which is not mapped to hostname.com. I think I need to change a setting in CockroachDB, but I don't know which one because I am new to it.
Does anyone know how to publish admin UI via an ingress?
EDIT01: Added ingress and service config files
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-public
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/ssl-temporary-redirect: "true"
    ingress.kubernetes.io/ssl-host: "cockroachdb.hostname.com"
    traefik.frontend.rule: "Host:cockroachdb.hostname.com,www.cockroachdb.hostname.com"
    traefik.frontend.redirect.regex: "^https://www.cockroachdb.hostname.com(.*)"
    traefik.frontend.redirect.replacement: "https://cockroachdb.hostname.com/$1"
spec:
  rules:
    - host: cockroachdb.hostname.com
      http:
        paths:
          - path: /
            backend:
              serviceName: cockroachdb-public
              servicePort: http
    - host: www.cockroachdb.hostname.com
      http:
        paths:
          - path: /
            backend:
              serviceName: cockroachdb-public
              servicePort: http
  tls:
    - hosts:
        - cockroachdb.hostname.com
        - www.cockroachdb.hostname.com
      secretName: cockroachdb-secret
Service:
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
    # The main port, served by gRPC, serves Postgres-flavor SQL, internode
    # traffic and the cli.
    - port: 26257
      targetPort: 26257
      name: grpc
    # The secondary port serves the UI as well as health and debug endpoints.
    - port: 8080
      targetPort: 8080
      name: http
  selector:
    app: cockroachdb
EDIT02:
I can access the Admin UI page now, but only by going to the external IP address of the server on port 8080. I think I need to tell my server that its IP address is mapped to the correct subdomain?
EDIT03:
On both scheduled traefik-ingress pods the following logs are created:
time="2019-04-29T04:31:42Z" level=error msg="Service not found for default/cockroachdb-public"
Your referencing looks good on the ingress side. You are using quite a few redirects; unless you really know what each one is accomplishing, don't use them, as you might end up in an infinite loop of redirects (a stripped-down example is sketched after the list below).
You can take a look at the following logs and methods to debug:
Run kubectl logs <traefik pod> and see the last batch of logs.
Run kubectl get service, and from what I hear, this is likely your main issue. Make sure your service exists in the default namespace.
Run kubectl port-forward svc/cockroachdb-public 8080:8080 and try connecting to it through localhost:8080 and see terminal for potential error messages.
Run kubectl describe ingress cockroachdb-public and look at the events, this should give you something to work with.
Try accessing the service from another pod you have running ping cockroachdb-public.default.svc.cluster.local and see if it resolves the IP address.
Take a look at your clusterrolebindings and serviceaccount, it might be limited and not have permission to list services in the default namespace: kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default
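If the redirect annotations turn out to be the culprit, a stripped-down version of the Ingress to test with might look like this (same names as in the question; only the routing rule and the class annotation are kept):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-public
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: cockroachdb.hostname.com
      http:
        paths:
          - path: /
            backend:
              serviceName: cockroachdb-public
              servicePort: http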

Allow pod to access domain names of service through ingress

I have a service and ingress setup on my minikube kubernetes cluster which exposes the domain name hello.life.com
Now I need to access this domain from inside another pod as
curl http://hello.life.com
and this should return proper html
My service is as follows:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-svc
  namespace: abc
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    name: bulging-zorse-key
  type: ClusterIP
status:
  loadBalancer: {}
My ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: bulging-zorse-key
    chart: key-0.1.0
    heritage: Tiller
    release: bulging-zorse
  name: bulging-zorse-key-ingress
  namespace: dev
spec:
  rules:
    - host: hello.life.com
      http:
        paths:
          - backend:
              serviceName: bulging-zorse-key-svc
              servicePort: 80
            path: /
status:
  loadBalancer:
    ingress:
      - {}
Can someone please help me figure out what changes I need to make to get it working?
Thanks in advance!!!
I found a good explanation of your problem and the solution in the Custom DNS Entries For Kubernetes article:
Suppose we have a service, foo.default.svc.cluster.local that is available to outside clients as foo.example.com. That is, when looked up outside the cluster, foo.example.com will resolve to the load balancer VIP - the external IP address for the service. Inside the cluster, it will resolve to the same thing, and so using this name internally will cause traffic to hairpin - travel out of the cluster and then back in via the external IP.
The solution is:
Instead, we want foo.example.com to resolve to the internal ClusterIP, avoiding the hairpin.
To do this in CoreDNS, we make use of the rewrite plugin. This plugin can modify a query before it is sent down the chain to whatever backend is going to answer it.
To get the behavior we want, we just need to add a rewrite rule mapping foo.example.com to foo.default.svc.cluster.local:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name foo.example.com foo.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-09T15:02:52Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "8309112"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: a2ef5ff1-141f-11e9-9043-42010a9c0003
Note: In your case, you have to put ingress service name as a destination for the alias.
(E.g.: rewrite name hello.life.com ingress-service-name.ingress-namespace.svc.cluster.local) Make sure you're using correct service name and namespace name.
Once we add that to the ConfigMap via kubectl edit configmap coredns -n kube-system or kubectl apply -f patched-coredns-deployment.yaml -n kube-system, we have to wait 10-15 minutes. Recent CoreDNS versions include the reload plugin.
reload
Name
reload - allows automatic reload of a changed Corefile.
Description
This plugin periodically checks if the Corefile has changed by reading it and calculating its MD5 checksum. If the file has changed, it reloads CoreDNS with the new Corefile. This eliminates the need to send a SIGHUP or SIGUSR1 after changing the Corefile.
The reloads are graceful - you should not see any loss of service when the reload happens. Even if the new Corefile has an error, CoreDNS will continue to run the old config and an error message will be printed to the log. But see the Bugs section for failure modes.
In some environments (for example, Kubernetes), there may be many CoreDNS instances that started very near the same time and all share a common Corefile. To prevent these all from reloading at the same time, some jitter is added to the reload check interval. This is jitter from the perspective of multiple CoreDNS instances; each instance still checks on a regular interval, but all of these instances will have their reloads spread out across the jitter duration. This isn't strictly necessary given that the reloads are graceful, and can be disabled by setting the jitter to 0s.
Jitter is re-calculated whenever the Corefile is reloaded.
Running our test pod, we can see this works:
$ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
If you don't see a command prompt, try pressing enter.
/ # host foo
foo.default.svc.cluster.local has address 10.0.0.72
/ # host foo.example.com
foo.example.com has address 10.0.0.72
/ # host bar.example.com
Host bar.example.com not found: 3(NXDOMAIN)
/ #