Cannot access Kubernetes Ingress (Istio) on GKE

I set up Istio (Kubernetes Ingress mode, NOT Istio Gateway) on GKE. However, I cannot access it from outside using curl:
kubectl get svc -n istio-system | grep ingressgateway
istio-ingressgateway   LoadBalancer   10.48.11.240   35.222.111.100   15020:30115/TCP,80:31420/TCP,443:32019/TCP,31400:31267/TCP,15029:30180/TCP,15030:31055/TCP,15031:32226/TCP,15032:30437/TCP,15443:31792/TCP   41h
curl 35.222.111.100
curl: (7) Failed to connect to 35.222.111.100 port 80: Connection refused
This is the config of the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: in-keycloak
            port:
              number: 8080
This is the config of the Service:
apiVersion: v1
kind: Service
metadata:
  name: in-keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
If I use the same config with Docker Desktop on my local machine (macOS), it works fine.

There are two requirements that must be met to make Istio work on a private GKE cluster.
1. To make Istio work on GKE you should follow these instructions to prepare a GKE cluster for Istio. They include opening port 15017 so that Istio can work.
For private GKE clusters
An automatically created firewall rule does not open port 15017. This is needed by the Pilot discovery validation webhook.
To review this firewall rule for master access:
$ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"
To replace the existing rule and allow master access:
$ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017
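For example, if the list command returns a rule named gke-mycluster-abc123-master (a hypothetical name; use whatever name your cluster actually reports), the update would be:
$ gcloud compute firewall-rules update gke-mycluster-abc123-master --allow tcp:10250,tcp:443,tcp:15017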
2. Comparing your config to the Istio documentation, I would say your ingress is not properly configured. Below you can find an ingress resource from the documentation that you might try to use instead:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: istio
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
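Once the IngressClass and Ingress are applied, a quick way to verify the routing is to curl the ingress gateway's external IP (35.222.111.100 from the question) with an explicit Host header matching the rule above:
# the Host header must match the host in the ingress rule
curl -v -H "Host: httpbin.example.com" http://35.222.111.100/
If the gateway is routing correctly, you should get a response from the backend instead of "connection refused".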


How to access an application/container from dns/hostname in k8s?

I have a k8s cluster where I deploy some containers.
The cluster is accessible at microk8s.hostname.internal.
At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.
And this works great.
Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.
How do I do this?
Currently installed addons in microk8s:
aasa#bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
Update 1:
If I port-forward to my service it works.
I have tried this ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: jupyter-notebook
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
  - host: jupyter.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
But I can't access it or ping it. Chrome says:
jupyter.microk8s.hostname.internal’s server IP address could not be found.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  namespace: jupyter-notebook
spec:
  ports:
  - name: 7070-8888
    port: 7070
    protocol: TCP
    targetPort: 8888
  selector:
    app: jupyternotebook
  type: ClusterIP
status:
  loadBalancer: {}
I can of course ping microk8s.hostname.internal.
Update 2:
The ingress that is working today, which has a context path (microk8s.boliden.internal/myapplication), looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: jupyter-ingress
  namespace: jupyter-notebook
spec:
  rules:
  - http:
      paths:
      - path: "/jupyter-notebook/?(.*)"
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.
To do this you would have to configure a kube service and a kube ingress, and then configure your DNS.
Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal.
You could use dnsmasq to allow for wildcard resolution, e.g. *.microk8s.hostname.internal, as sketched below.
You can test the DNS resolution using nslookup or dig.
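A minimal dnsmasq sketch, assuming your node's IP is 192.168.1.10 (a hypothetical address; substitute your own). The address directive resolves the domain and every subdomain under it to that IP:
# /etc/dnsmasq.conf (or a drop-in file under /etc/dnsmasq.d/)
# wildcard: *.microk8s.hostname.internal -> 192.168.1.10
address=/microk8s.hostname.internal/192.168.1.10
After restarting dnsmasq, verify the resolution:
nslookup otherapplication.microk8s.hostname.internal
dig +short otherapplication.microk8s.hostname.internal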
You can copy the same ingress and update its name and the host inside it; that's all the change you need.
For ref:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: second-ingress  # make sure to update the name, else it will overwrite the ingress with the same name
spec:
  rules:
  - host: otherapplication.microk8s.hostname.internal
    http:
      paths:
      - path: /
        backend:
          serviceName: service-name
          servicePort: service-port
You can create the subdomain with ingress: just update the host in the ingress and add the necessary serviceName and servicePort to route traffic to the specific service.
Feel free to append the necessary fields, and annotations if any, to the above ingress from the existing ingress which is working for you.
If you are running it locally you might have to map the IP to the subdomain in your local /etc/hosts file:
/etc/hosts
<IP address>   otherapplication.microk8s.hostname.internal

Correct way to expose ingress service using baremetal Kubernetes Cluster

I have the following topology in my Kubernetes cluster: 2 nodes, 1 master and 1 worker node.
Now I created an application with my deployment.yml and my service.yml, using a NodePort configuration, see:
apiVersion: v1
kind: Service
metadata:
  name: administrativo-service
spec:
  type: NodePort
  selector:
    app: administrativo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Now I need to access this API using my DNS, something like myapi.localdns, so I followed these steps to install an nginx-based ingress controller:
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
After an hour, this was the pod status in the ingress-nginx namespace (screenshot omitted).
And finally, this is my Ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: administrativo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapi.localdns
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: administrativo-service
            port:
              number: 80
Well, my idea is to create an entry in my company's DNS pointing to myapi.localdns, but to do that I need the ingress address, which doesn't show up in my ingress resource.
I solved the problem using these steps:
1. First, create in my company DNS the CNAMEs pointing to my Kubernetes worker node IP.
2. Reinstall the ingress-nginx controller using the bare-metal configuration: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/baremetal/deploy.yaml.
3. Change the deploy.yaml to use NodePort before running kubectl apply.
4. Use externalIPs to expose my service on port 80, as sketched below.
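As a sketch of step 4, assuming the worker node's IP is 192.168.1.20 (hypothetical; the names and labels follow the upstream ingress-nginx manifest and may differ in your deploy.yaml), the controller Service would look roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalIPs:
  - 192.168.1.20   # worker node IP (hypothetical); traffic to this IP on ports 80/443 reaches nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller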

Minikube ingress controller not forwarding request to deployed service properly

I have the following setup in a minikube cluster:
A Spring Boot app deployed in the cluster with name: opaapp and containerPort: 9999
A Service used to expose the app, as below:
apiVersion: v1
kind: Service
metadata:
  name: opaapp
  namespace: default
  labels:
    app: opaapp
spec:
  selector:
    app: opaapp
  ports:
  - name: http
    port: 9999
    targetPort: 9999
  type: NodePort
I created an ingress controller and an ingress resource as below:
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: opaapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: opaapp.info
    http:
      paths:
      - path: /
        backend:
          serviceName: opaapp
          servicePort: 9999
I have set up the hosts file as below:
172.17.0.2 opaapp.info
Now, if I access the service as
http://opaapp.info:32746/api/ping, I get the response back.
But if I try to access
http://opaapp.info/api/ping, I get a 404 error.
I am not able to find the error in the configuration.
The nginx ingress controller has been exposed via NodePort 32746, which means nginx is not listening on ports 80/443 in the host's (172.17.0.2) network; rather, nginx is listening on ports 80/443 on the Kubernetes pod network, which is different from the host network. Hence accessing it via http://opaapp.info/api/ping does not work. To make it work the way you are expecting, the nginx ingress controller needs to be deployed with the hostNetwork: true option (sketched below) so that it can listen on ports 80/443 directly in the host (172.17.0.2) network, as discussed here.
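A minimal sketch of that change, as an excerpt of the controller Deployment's pod template (the exact Deployment name and namespace depend on how the controller was installed):
spec:
  template:
    spec:
      hostNetwork: true                    # bind nginx directly to the node's network so it owns ports 80/443 on 172.17.0.2
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working when hostNetwork is enabled
After the controller pod restarts, http://opaapp.info/api/ping should reach nginx on port 80.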

Ingress without IP address

I created an ingress to expose my internal service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 80
But when I get this ingress, it shows that it has no IP address.
NAME          HOSTS         ADDRESS   PORTS   AGE
app-ingress   example.com             80      10h
The service is shown below.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - name: my-app
    nodePort: 32000
    port: 3000
    targetPort: 3000
  type: NodePort
Note: I'm guessing because of the other question you asked that you are trying to create an ingress on a manually created cluster with kubeadm.
As described in the docs, in order for ingress to work, you need to install ingress controller first. An ingress object itself is merely a configuration slice for the installed ingress controller.
An nginx-based controller is one of the most popular choices. Similarly to services, in order to get a single failover-enabled VIP for your ingress, you need to use MetalLB (a configuration sketch follows below). Otherwise you can deploy ingress-nginx over a node port: see details here.
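For reference, a minimal MetalLB Layer 2 sketch using the legacy ConfigMap format (newer MetalLB releases configure this through IPAddressPool and L2Advertisement custom resources instead; the address range is hypothetical):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # LAN range MetalLB may assign to LoadBalancer services (hypothetical)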
Finally, servicePort in your ingress object should be 3000, the same as the port of your service.
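That is, the same manifest as above with only the servicePort changed:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-app
          servicePort: 3000   # must match the service's port (3000), not its targetPort or nodePort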

kubernetes: access Ingress within a Pod

I have an Ingress object set up to route traffic to the appropriate Service based on the Url path. I would like to access/expose this Ingress object within another Pod. I'm wondering if this is possible?
I tried to set up a Service on the Ingress but that didn't seem to work.
So, for whatever reason (SSR, lots of microservices, etc.) you want to access k8s resources using their ingress path mapping, instead of calling each service by its internal name.
For example, you have an ingress config like that:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-service
          servicePort: 80
      - path: /api/cart/?(.*)
        backend:
          serviceName: cart-service
          servicePort: 80
and you want to access auth-service using http://example.com/api/users instead of http://auth-service.
All you have to do is replace the domain part (example.com in our case) with the ingress service URL. It depends on your configuration and environment, but usually it looks like http://[SERVICE_NAME].[NAMESPACE], for example:
GCP - http://ingress-nginx-controller.ingress-nginx
Helm ingress nginx - http://my-release-ingress-nginx-controller (here we are using only the service name part, because Helm installs ingress in the default namespace)
Minikube - if you are using the minikube ingress addon, you might run into a problem where you cannot access the ingress; in that case just use the Helm version (don't disable the ingress addon - just install the Helm version alongside it).
Get namespaces: kubectl get namespaces
Get service names inside a namespace: kubectl get services -n [NAMESPACE]
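As a quick usage sketch from inside any pod (the service URL assumes the GCP-style name above; if your rules match on a host, pass it explicitly):
# cluster DNS resolves the controller service; the Host header makes nginx apply the example.com rules
curl -H "Host: example.com" http://ingress-nginx-controller.ingress-nginx/api/users/123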
If you have assigned a host name, you also have to provide the domain name and IP address of the cluster to the /etc/hosts file. When you access a service via Ingress from outside the cluster, this is the file that is consulted for host name resolution.
However, a pod running inside a cluster does not have access to this /etc/hosts file. It has its own /etc/hosts file. To use ingress, the pod needs to have the same domain name and IP address entry in its own /etc/hosts file.
To achieve this, you have to use hostAliases. Here's a sample of how that works:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  hostAliases:
  - ip: <IP address>
    hostnames:
    - <host name>
For more detail on hostAliases, see the Kubernetes documentation.
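For illustration, a filled-in sketch where both values are hypothetical: 10.96.0.15 stands in for the ingress controller Service's ClusterIP, and example.com for the host used in the ingress rules:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical pod name
spec:
  hostAliases:
  - ip: "10.96.0.15"        # hypothetical ClusterIP of the ingress controller service
    hostnames:
    - "example.com"         # the host from the ingress rules
  containers:
  - name: app
    image: nginx            # placeholder image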
I have spent so much time on this issue. I found a very simple solution. I am using Docker Desktop for Mac 3.3.1.
My Kubernetes version: 1.19.7
I am trying to access the UI URL from another pod running in the cluster.
My UI Service:
apiVersion: v1
kind: Service
metadata:
  name: my-ui-service
spec:
  type: LoadBalancer
  selector:
    app: my-ui
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Ingress for the service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my-site.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-ui-service
            port:
              number: 8080
I have used NGINX Ingress Controller.
Command to run the Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
Once the controller is ready, run this command to see the status of the ingress:
kubectl get ingress
Now see the description of the ingress:
kubectl describe ingress my-ingress
Here you will find:
Rules:
  Host         Path  Backends
  ----         ----  --------
  my-site.com
               /     my-ui-service:8080 (10.1.2.198:8080)
In any pod in the cluster you can access the domain my-site.com by using my-ui-service:8080.
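For example, from a hypothetical pod named demo-pod:
kubectl exec -it demo-pod -- curl http://my-ui-service:8080/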
Inside your cluster your pods use services to reach other pods.
From outside the cluster a client may use ingress to reach services.
An Ingress resource allows connections to services.
So your pod needs to be reachable through a service (my-svc-N in the following example), which you're going to use in your ingress definition.
Take a look at this example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: my-kube.info
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc-1
          servicePort: 80
  - host: cheeses.all
    http:
      paths:
      - path: /aaa
        backend:
          serviceName: my-svc-2
          servicePort: 80
      - path: /bbb
        backend:
          serviceName: my-svc-3
          servicePort: 80
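To exercise these rules from inside a pod, the same pattern as above applies: call the ingress controller's service and set the Host header to select a rule (the service and namespace names below assume a standard ingress-nginx install):
# routes to my-svc-2 via the cheeses.all /aaa rule
curl -H "Host: cheeses.all" http://ingress-nginx-controller.ingress-nginx/aaa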