How to access an application/container from dns/hostname in k8s?

I have a k8s cluster where I deploy some containers.
The cluster is accessible at microk8s.hostname.internal.
At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.
And this works great.
Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.
How do I do this?
Currently installed addons in microk8s:
aasa#bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
  enabled:
    dashboard        # (core) The Kubernetes dashboard
    dns              # (core) CoreDNS
    helm             # (core) Helm - the package manager for Kubernetes
    helm3            # (core) Helm 3 - the package manager for Kubernetes
    ingress          # (core) Ingress controller for external access
    metrics-server   # (core) K8s Metrics Server for API access to service metrics
Update 1:
If I port-forward to my service, it works.
I have tried this ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: jupyter-notebook
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
  - host: jupyter.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
But I can't access or ping it. Chrome says:
jupyter.microk8s.hostname.internal’s server IP address could not be found.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  namespace: jupyter-notebook
spec:
  ports:
  - name: 7070-8888
    port: 7070
    protocol: TCP
    targetPort: 8888
  selector:
    app: jupyternotebook
  type: ClusterIP
status:
  loadBalancer: {}
I can of course ping microk8s.hostname.internal.
Update 2:
The ingress that is working today, with a context path (microk8s.hostname.internal/jupyter-notebook), looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: jupyter-ingress
  namespace: jupyter-notebook
spec:
  rules:
  - http:
      paths:
      - path: "/jupyter-notebook/?(.*)"
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.

To do this you need to configure a Kubernetes Service, a Kubernetes Ingress, and then configure your DNS.
Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal.
You could use dnsmasq to allow wildcard resolution, e.g. *.microk8s.hostname.internal.
You can test the DNS resolution using nslookup or dig, as shown below.
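As a minimal sketch of the dnsmasq approach (assuming dnsmasq runs on the machine that resolves your internal names, and 192.168.1.50 is a placeholder for the host/ingress IP):

# /etc/dnsmasq.d/microk8s.conf (hypothetical file name)
# Resolve the domain and every subdomain of it to the cluster's ingress IP
address=/microk8s.hostname.internal/192.168.1.50

# Apply and verify
sudo systemctl restart dnsmasq
nslookup otherapplication.microk8s.hostname.internal
dig +short jupyter.microk8s.hostname.internal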

You can copy the same ingress and update its name and the host inside it; that is all the change you need.
For ref:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress   # make sure to update the name, else it will overwrite the existing ingress if it is the same
spec:
  rules:
  - host: otherapplication.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-name   # your service's name
            port:
              number: 80         # your service's port
You can create the subdomain with ingress; just update the host in the ingress and set the service name and port to route traffic to the specific service.
Feel free to append any other fields, and annotations if any, from the existing ingress that is working for you.
If you are running it locally you might have to map the IP to the subdomain in your local /etc/hosts file (note the hosts file format is IP first, then hostname):
/etc/hosts
<IP address> otherapplication.microk8s.hostname.internal

Related

Link service deployment Kubernetes

I have two services running on k8s and I am using an ingress to access them. One of the services needs access to the other via an env var; I set it to the cluster IP and port of the required service, but it seems to be unreachable.
User Deployment yaml
...
- name: WALLET_SERVICE_URL
  value: 'http://10.103.xxx.xx:30611'
- name: WALLET_SERVICE_API_VERSION
  value: /api/v1
...
my Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /api/v1$uri
spec:
  ingressClassName: nginx
  rules:
  - host: demo.localdev.me
    http:
      paths:
      - path: /user
        pathType: Prefix
        backend:
          service:
            name: accounts-service
            port:
              number: 80
      - path: /wallet
        pathType: Prefix
        backend:
          service:
            name: wallet-service
            port:
              number: 80
Wallet service
apiVersion: v1
kind: Service
metadata:
  name: wallet-service
spec:
  selector:
    app: wallet-service
  ports:
  - port: 80
    targetPort: 3007
  type: NodePort
Use ClusterIP for wallet-service. There's no reason to use NodePort; the ingress controller will handle routing to the internal IP.
Your WALLET_SERVICE_URL value should point to the service by DNS name, using the port you define in your ClusterIP service, i.e. http://wallet-service:80.
Unless wallet-service should be accessible from outside the cluster, you don't need to configure ingress for it.
Ingress is for traffic from outside the cluster; on the internal network you can use the DNS name of your service. You can read more about service DNS in the docs.
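A minimal sketch of that change (assuming the wallet pods still listen on 3007 and both services live in the same namespace):

apiVersion: v1
kind: Service
metadata:
  name: wallet-service
spec:
  type: ClusterIP      # internal-only; no NodePort needed
  selector:
    app: wallet-service
  ports:
  - port: 80           # other pods reach it at http://wallet-service:80
    targetPort: 3007   # the port the wallet container listens on

and in the user deployment:

- name: WALLET_SERVICE_URL
  value: 'http://wallet-service:80'   # or http://wallet-service.<namespace>.svc.cluster.local:80 across namespaces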

Correct way to expose ingress service using baremetal Kubernetes Cluster

I have the following topology in my Kubernetes cluster: 2 nodes, 1 master and 1 worker node.
Now I created an application with my deployment.yml and my service.yml, using a NodePort configuration, see:
apiVersion: v1
kind: Service
metadata:
  name: administrativo-service
spec:
  type: NodePort
  selector:
    app: administrativo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Now I need to access this API using my DNS, something like myapi.localdns, so I followed these steps to install an ingress controller based on nginx:
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
After 1 hour, this was the pod status in the ingress-nginx namespace (screenshot omitted).
And finally, this is my Ingress yml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: administrativo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapi.localdns
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: administrativo-service
            port:
              number: 80
Well, my idea is to create an entry in my company DNS pointing to this name, myapi.localdns, but to do that I need the ingress address, which doesn't show up in my ingress resource.
I solved the problem using these steps:
1. First, create the CNAMEs in my company DNS pointing to my Kubernetes worker node IP.
2. Reinstall the ingress-nginx controller using the bare-metal configuration: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.5/deploy/static/provider/baremetal/deploy.yaml
3. Change the deploy.yaml to use NodePort before running kubectl apply.
4. Use externalIPs to expose my service on port 80, as sketched below.
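A minimal sketch of step 4 (assuming 10.0.0.20 is a placeholder for the worker node IP that the company DNS CNAMEs point at):

apiVersion: v1
kind: Service
metadata:
  name: administrativo-service
spec:
  type: NodePort
  selector:
    app: administrativo
  externalIPs:
  - 10.0.0.20      # traffic arriving at this IP on port 80 is routed to the service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080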

Cannot access Kubernetes Ingress (Istio) on GKE

I set up Istio (Kubernetes Ingress mode, NOT Istio Gateway) on GKE. However, I cannot access it from outside using curl:
kubectl get svc -n istio-system | grep ingressgateway
istio-ingressgateway   LoadBalancer   10.48.11.240   35.222.111.100   15020:30115/TCP,80:31420/TCP,443:32019/TCP,31400:31267/TCP,15029:30180/TCP,15030:31055/TCP,15031:32226/TCP,15032:30437/TCP,15443:31792/TCP   41h

curl 35.222.111.100
curl: (7) Failed to connect to 35.222.111.100 port 80: Connection refused
This is the config of Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: in-keycloak
            port:
              number: 8080
This is the config of the Service:
apiVersion: v1
kind: Service
metadata:
  name: in-keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
If I use the same config with Docker Desktop on my local machine (macOS), it works fine.
There are two things that must be met on GKE to make it work with Istio on a private cluster.
1. To make Istio work on GKE, you should follow these instructions to prepare a GKE cluster for Istio. This includes opening port 15017 so Istio can work.
For private GKE clusters
An automatically created firewall rule does not open port 15017. This is needed by the Pilot discovery validation webhook.
To review this firewall rule for master access:
$ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"
To replace the existing rule and allow master access:
$ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017
2. Comparing with the Istio documentation, I would say your ingress is not properly configured. Below is an ingress resource from the documentation that you might try instead:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: istio
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
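Note that networking.k8s.io/v1beta1 was removed in Kubernetes 1.22; on newer clusters the same two resources would use networking.k8s.io/v1 (a sketch, keeping the httpbin service from the Istio docs):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: istio
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin
            port:
              number: 8000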

Minikube change address of ingress addon

When I deploy ingress using the ingress addon (minikube addons enable ingress), its address is set to 192.168.49.2:
NAME           CLASS    HOSTS             ADDRESS        PORTS     AGE
some-ingress   <none>   application.com   192.168.49.2   80, 443   86s
How do I change it to 127.0.0.1 (or external ip) to be able to receive requests from outside?
UPD:
Using vm_driver=docker; minikube ip returns 192.168.49.2.
UPD2: Ingress config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - application.com
    secretName: tls-secret
  rules:
  - host: application.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: application-back
            port:
              number: 80
Actually, you don't have to change it in order to receive external requests.
The rules: part of your configuration:
spec:
  rules:
  - host: application.com
expects a DNS record with the application.com name. Later in the guide you can see:
Add the following line to the bottom of the /etc/hosts file.
Note: If you are running Minikube locally, use minikube ip to get the
external IP. The IP address displayed within the ingress list will be
the internal IP.
172.17.0.15 hello-world.info
This sends requests from hello-world.info to Minikube.
That entry points to your ingress IP or Minikube IP. Of course you need to adjust your IP and DNS name.
If you would like to curl the IP instead of the DNS name, you can do so by removing the host rule from your ingress. For example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: application-back
            port:
              number: 80
After that you should be able to curl -Lk the IP with no issues.
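For example (assuming the docker driver, where minikube ip typically returns 192.168.49.2):

$ minikube ip
192.168.49.2
$ curl -Lk http://192.168.49.2/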

kubernetes: access Ingress within a Pod

I have an Ingress object set up to route traffic to the appropriate Service based on the Url path. I would like to access/expose this Ingress object within another Pod. I'm wondering if this is possible?
I tried to set up a Service on the Ingress but that didn't seem to work.
So, for whatever reason (SSR, lots of microservices, etc.) you want to access k8s resources using their ingress path mapping instead of calling each service by its internal name.
For example, you have an ingress config like that:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-service
          servicePort: 80
      - path: /api/cart/?(.*)
        backend:
          serviceName: cart-service
          servicePort: 80
and you want to access auth-service using http://example.com/api/users instead of http://auth-service.
All you have to do is replace the domain part (example.com in our case) with the ingress service URL. It depends on your configuration and environment, but usually it looks like http://[SERVICE_NAME].[NAMESPACE], for example:
GCP - http://ingress-nginx-controller.ingress-nginx
Helm ingress-nginx - http://my-release-ingress-nginx-controller (here we use only the service-name part, because Helm installs the ingress in the default namespace)
Minikube - if you are using the minikube ingress addon, you might run into a problem where you cannot access the ingress; in that case just use the Helm version (don't disable the ingress addon, just install the Helm version alongside it)
Get namespaces: kubectl get namespaces
Get service names inside a namespace: kubectl get services -n [NAMESPACE]
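One caveat: when an ingress rule pins a host (example.com above), nginx routes on the HTTP Host header, so a request sent directly to the controller's service may need that header set explicitly. A quick test from inside a pod (using the GCP-style service name from the list above):

# Present the host the ingress rule expects while hitting the controller's service
curl -H "Host: example.com" http://ingress-nginx-controller.ingress-nginx/api/users/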
If you have assigned a host name, you also have to add the domain name and the cluster's IP address to the /etc/hosts file. When you access a service via ingress from outside the cluster, this is the file that is consulted for host-name resolution.
However, a pod running inside the cluster does not have access to that /etc/hosts file; it has its own. To use the ingress, the pod needs the same domain-name and IP-address entry in its own /etc/hosts file.
To achieve this, you have to use hostAliases. Here's a sample of how that works:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  hostAliases:
  - ip: <IP address>
    hostnames:
    - <host name>
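For instance, with hypothetical values filled in (10.108.5.6 standing in for your ingress or cluster IP, my-site.com for your ingress host):

apiVersion: v1
kind: Pod
metadata:
  name: client-pod            # hypothetical name
spec:
  hostAliases:
  - ip: "10.108.5.6"          # the pod resolves my-site.com to this IP
    hostnames:
    - "my-site.com"
  containers:
  - name: app
    image: curlimages/curl    # any image; curl here just for testing
    command: ["sleep", "infinity"]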
For more detail on hostAliases, see the Kubernetes documentation.
I have spent so much time on this issue and found a very simple solution. I am using Docker Desktop for Mac 3.3.1.
My Kubernetes version: 1.19.7
I am trying to access a UI URL from another pod running in the cluster.
My UI Service
apiVersion: v1
kind: Service
metadata:
  name: my-ui-service
spec:
  type: LoadBalancer
  selector:
    app: my-ui
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Ingress for the service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my-site.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-ui-service
            port:
              number: 8080
I have used NGINX Ingress Controller.
Command to run the Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
Once the controller is ready, run this command to see the status of the ingress:
kubectl get ingress
Now see the description of the ingress:
kubectl describe ingress my-ingress
Here you will find:
Rules:
  Host          Path  Backends
  ----          ----  --------
  my-site.com
                /     my-ui-service:8080 (10.1.2.198:8080)
From any pod in the cluster you can access what the domain my-site.com serves by using my-ui-service:8080.
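A quick way to verify from a throwaway pod (assuming the curlimages/curl image is reachable from your cluster):

kubectl run tmp --rm -it --image=curlimages/curl --restart=Never -- curl http://my-ui-service:8080/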
Inside your cluster, pods use services to reach other pods.
From outside the cluster, a client may use ingress to reach services.
An Ingress resource allows connections to services, so your pod needs to be reachable through a service (my-svc-N in the following example), which you then use in your ingress definition.
Take a look at this example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: my-kube.info
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc-1
          servicePort: 80
  - host: cheeses.all
    http:
      paths:
      - path: /aaa
        backend:
          serviceName: my-svc-2
          servicePort: 80
      - path: /bbb
        backend:
          serviceName: my-svc-3
          servicePort: 80