Access service in remote Kubernetes cluster using ingress

I'm attempting to access a service in an existing Kubernetes cluster deployed on a remote machine. I've configured the cluster to be accessible through kubectl from my local Mac.
$ kubectl cluster-info
Kubernetes master is running at https://192.168.58.114:6443
KubeDNS is running at https://192.168.58.114:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
The ingress configuration for the service I want to connect to is:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: gw-ingress
  namespace: vick-system
  selfLink: /apis/extensions/v1beta1/namespaces/vick-system/ingresses/gw-ingress
  uid: 52b62da6-01c1-11e9-9f59-fa163eb296d8
  resourceVersion: '2695'
  generation: 1
  creationTimestamp: '2018-12-17T06:02:23Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/affinity":"cookie","nginx.ingress.kubernetes.io/session-cookie-hash":"sha1","nginx.ingress.kubernetes.io/session-cookie-name":"route"},"name":"gw-ingress","namespace":"vick-system"},"spec":{"rules":[{"host":"wso2-apim-gateway","http":{"paths":[{"backend":{"serviceName":"gateway","servicePort":8280},"path":"/"}]}}],"tls":[{"hosts":["wso2-apim-gateway"]}]}}
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
  tls:
    - hosts:
        - wso2-apim-gateway
  rules:
    - host: wso2-apim-gateway
      http:
        paths:
          - path: /
            backend:
              serviceName: gateway
              servicePort: 8280
status:
  loadBalancer:
    ingress:
      - ip: 172.17.17.100
My list of services is:
My /etc/hosts file looks like below:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
172.17.17.100 wso2-apim-gateway wso2-apim wso2sp-dashboard
What is the URL I should use to access this service from my local browser? Should I do any more configurations?

The easiest way to access this would be a port-forward, which requires no modification of your hosts file.
kubectl -n vick-system port-forward svc/wso2sp-dashboard 9643
This will allow you to browse to http://localhost:9643 and access that service.
Please note, the svc/name syntax is only supported in kubectl >= 1.10

Related

How to access an application/container from dns/hostname in k8s?

I have a k8s cluster where I deploy some containers.
The cluster is accessible at microk8s.hostname.internal.
At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.
And this works great.
Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.
How do I do this?
Currently installed addons in microk8s:
aasa#bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
Update 1:
If I port-forward to my service, it works.
I have tried this ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: jupyter-notebook
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
    - host: jupyter.microk8s.hostname.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jupyter-service
                port:
                  number: 7070
But I can't access it, nor ping it. Chrome says:
jupyter.microk8s.hostname.internal’s server IP address could not be found.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  namespace: jupyter-notebook
spec:
  ports:
    - name: 7070-8888
      port: 7070
      protocol: TCP
      targetPort: 8888
  selector:
    app: jupyternotebook
  type: ClusterIP
status:
  loadBalancer: {}
I can of course ping microk8s.hostname.internal.
Update 2:
The ingress that is working today, which has a context path (microk8s.boliden.internal/myapplication), looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: jupyter-ingress
  namespace: jupyter-notebook
spec:
  rules:
    - http:
        paths:
          - path: "/jupyter-notebook/?(.*)"
            pathType: Prefix
            backend:
              service:
                name: jupyter-service
                port:
                  number: 7070
This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.
To do this you would have to configure a Kubernetes Service, an Ingress, and then configure your DNS.
Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal.
You could use dnsmasq to allow for wildcard resolution, e.g. *.microk8s.hostname.internal.
You can test the DNS resolution using nslookup or dig.
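For example, a dnsmasq wildcard rule might look like the fragment below (the IP is a placeholder for your ingress/cluster address):

```
# dnsmasq configuration: resolve every *.microk8s.hostname.internal name to one address
address=/microk8s.hostname.internal/<ingress-ip>
# verify afterwards with: nslookup otherapplication.microk8s.hostname.internal
```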
You can copy the same ingress and update its name and the host inside it; that's all the change you need.
For ref:
kind: Ingress
metadata:
  name: second-ingress # make sure to update the name, else it will overwrite the existing ingress if it's the same
spec:
  rules:
    - host: otherapplication.microk8s.hostname.internal
      http:
        paths:
          - path: /
            backend:
              serviceName: service-name
              servicePort: service-port
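The snippet above uses the legacy serviceName/servicePort backend syntax; on a cluster using networking.k8s.io/v1 (as in the question), a sketch of the equivalent would be (service name and port are placeholders, as in the original):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress
spec:
  rules:
    - host: otherapplication.microk8s.hostname.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-name # placeholder
                port:
                  number: 80 # placeholder
```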
You can create the subdomain with ingress; just update the host in the ingress and add the necessary serviceName and servicePort to route traffic to the specific service.
Feel free to append the necessary fields, and annotations if any, to the above ingress from the existing ingress which is working for you.
If you are running it locally you might have to map the IP to the subdomain in your local /etc/hosts file:
/etc/hosts
<IP address> otherapplication.microk8s.hostname.internal

Nginx ingress configuration to expose different ports of a cluster ip

I am trying to install Neo4j on a Kubernetes cluster. I have installed the Helm chart
neo4j/neo4j-standalone and selected the service type ClusterIP.
The service exposes 3 ports as can be seen
kubectl get svc neo4j-dev-neo4j -n neo4j
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
neo4j-dev-neo4j ClusterIP 10.100.228.68 <none> 7474/TCP,7473/TCP,7687/TCP 154m
I have created an ingress which exposes port 7474 for the browser UI. This part opens up correctly.
Below is the ingress:
apiVersion: v1
items:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
        ingress.kubernetes.io/rewrite-target: /
        kubernetes.io/ingress.class: nginx
      creationTimestamp: "2022-06-15T06:56:56Z"
      generation: 19
      name: neo4j-ingress
      namespace: neo4j
      resourceVersion: "5602507"
      uid: f1548432-f051-4a44-9c98-4f23ee0ace33
    spec:
      rules:
        - host: neo4j.dev.com
          http:
            paths:
              - backend:
                  service:
                    name: neo4j-dev-neo4j
                    port:
                      number: 7474
                path: /
                pathType: Prefix
              - backend:
                  service:
                    name: neo4j-dev-neo4j
                    port:
                      number: 7687
                path: /db
                pathType: Prefix
      tls:
        - hosts:
            - neo4j.dev.com
          secretName: dev-tls
    status:
      loadBalancer:
        ingress:
          - hostname: ALB
kind: List
metadata:
  resourceVersion: ""
After accessing the UI, we also need to access the DB, which is on port 7687; that I am not able to connect to from the UI.
I want to know what the ingress configuration should be; I have tried the above but no luck.
I have tried connecting to the DB from within the cluster and it connects fine. I am not able to connect to the DB from the UI, so I suppose I have to expose the DB port 7687 as well in the same ingress, which I have tried.
kubectl run --rm -it --namespace "neo4j" --image "neo4j:4.4.6" cypher-shell -- cypher-shell -a "neo4j://neo4j-dev.neo4j.svc.cluster.local:7687" -u neo4j -p
If you don't see a command prompt, try pressing enter.
Connected to Neo4j using Bolt protocol version 4.4 at neo4j://neo4j-dev.neo4j.svc.cluster.local:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j#neo4j>
How can I connect to the DB from the UI?

Cannot access to Kubernetes Ingress (Istio) on GKE

I set up Istio (Kubernetes Ingress mode, NOT Istio Gateway) on GKE. However, I cannot access it from outside using curl:
kubectl get svc -n istio-system | grep ingressgateway
istio-ingressgateway LoadBalancer 10.48.11.240 35.222.111.100
15020:30115/TCP,80:31420/TCP,443:32019/TCP,31400:31267/TCP,15029:30180/TCP,15030:31055/TCP,15031:32226/TCP,15032:30437/TCP,15443:31792/TCP
41h
curl 35.222.111.100
curl: (7) Failed to connect to 35.222.111.100 port 80: Connection refused
This is the config of Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: in-keycloak
                port:
                  number: 8080
This is the config of the Service:
apiVersion: v1
kind: Service
metadata:
  name: in-keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: ClusterIP
If I use the same config for Docker Desktop on local machine (MacOS), it works fine.
There are two things that must be met on GKE to make it work with Istio on a private cluster.
1. To make Istio work on GKE you should follow these instructions to prepare a GKE cluster for Istio. This also includes opening port 15017 so Istio can work.
For private GKE clusters
An automatically created firewall rule does not open port 15017. This is needed by the Pilot discovery validation webhook.
To review this firewall rule for master access:
$ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"
To replace the existing rule and allow master access:
$ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017
2. Comparing with the Istio documentation, I would say your ingress is not properly configured; below you can find an ingress resource from the documentation that you might try to use instead:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: istio
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: httpbin
              servicePort: 8000

Could not access Kubernetes Ingress in Browser on Windows Home with Minikube?

I am facing a problem: I cannot access the Kubernetes Ingress in the browser using its IP. I have installed K8s and Minikube on Windows 10 Home.
I am following this official document - https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
First, I created the deployment by running the below command on Minikube.
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
The deployment gets created, which can be seen in the below image:
Next, I exposed the deployment that I created above. For this I ran the below command.
kubectl expose deployment web --type=NodePort --port=8080
This created a service which can be seen by running the below command:
kubectl get service web
The screenshot of the service is shown below:
I am now able to visit the service in the browser by running the below command:
minikube service web
In the below screenshot you can see I am able to view it on the browser.
Next, I created an Ingress by running the below command:
kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml
By the way, the ingress YAML is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
The ingress gets created and I can verify it by running the below command:
kubectl get ingress
The screenshot for this is given below:
The ingress IP is listed as 192.168.49.2. That means if I open it in the browser it should load, but unfortunately it does not. It shows "site can't be reached". See the below screenshot.
What is the problem? Please provide a solution.
I also added the mapping to the etc\hosts file.
192.168.49.2 hello-world.info
Then I also tried opening hello-world.info on the browser but no luck.
In the below picture I have pinged hello-world.info, which resolves to IP address 192.168.49.2. This shows the etc\hosts mapping is correct:
I also did curl to the minikube IP and to hello-world.info, and both time out. See below image:
The kubectl describe services web provides the following details:
Name: web
Namespace: default
Labels: app=web
Annotations: <none>
Selector: app=web
Type: NodePort
IP: 10.100.184.92
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31880/TCP
Endpoints: 172.17.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The kubectl describe ingress example-ingress gives the following output:
Name: example-ingress
Namespace: default
Address: 192.168.49.2
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
hello-world.info
/ web:8080 (172.17.0.4:8080)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1
Events: <none>
Kindly help. Thank you.
I'm having the same issue as the OP, and things only work inside minikube ssh. Sharing my ingress.yaml below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  defaultBackend:
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
    - host: myapp-com # domain (i.e. need to change host table)
      http:
        paths: # the paths below only work when there is more than one path; with only one path, / is always used
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service # internal service
                port:
                  number: 8080 # port number that the internal service exposes
          - path: /e($|/)(.*)
            pathType: Prefix
            backend:
              service:
                name: express-service # internal service
                port:
                  number: 3000 # port number that the internal service exposes
In my case (Win10 + minikube + ingress minikube addon) the following helped:
Set the custom domain's IP to 127.0.0.1 in the %WINDIR%\System32\drivers\etc\hosts file, i.e. by adding the line 127.0.0.1 my-k8s.com
Get the ingress pod name: kubectl get pods -n ingress-nginx
Start port forwarding: kubectl -n ingress-nginx port-forward pod/ingress-nginx-controller-5d88495688-dxxgw --address 0.0.0.0 80:80 443:443, where you should replace ingress-nginx-controller-5d88495688-dxxgw with your ingress pod name.
Enjoy using ingress on the custom domain in any browser (but only while port forwarding is active).
Make sure pod-to-pod communication is open in the minikube cluster. You can enable it by running the below commands:
minikube ssh
sudo ip link set docker0 promisc on
Make sure to enable the minikube ingress and ingress-dns addons:
minikube addons enable ingress
minikube addons enable ingress-dns
For those wondering, this is a known issue with minikube; ingress is supported out-of-the-box on Linux only.
minikube tunnel is a good fix, see this answer.
Try removing this annotation.
nginx.ingress.kubernetes.io/rewrite-target: /$1
And add this annotation:
annotations:
  nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
  kubernetes.io/ingress.class: nginx
  ## tells ingress to check for regex in the config file
  nginx.ingress.kubernetes.io/use-regex: "true"
Also, update your route as:
- path: /?(.*) ## instead of just '/'
  pathType: Prefix
  backend:
    service:
      name: web
      port:
        number: 8080
I had the same problem. The easiest solution I found was to modify the Windows hosts file, but instead of using the minikube ip, use 127.0.0.1, and in another terminal run:
$ minikube tunnel
With this you can open hello-world.info in the browser.
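For reference, the resulting hosts file entry described above would look like this:

```
# %WINDIR%\System32\drivers\etc\hosts - point the ingress host at the tunnel, not the minikube IP
127.0.0.1 hello-world.info
```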
I believe that if you check the ingress details you will find the right IP
kubectl describe ingress example-ingress
Check the Docs for more details about ingress
If the above doesn't help, try this manifest. Check this source:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: test
                port:
                  number: 1111
If you're running an Ingress controller on any OS other than Linux you need to pay attention to the message displayed when you enable the Ingress addon. To wit...
PS C:\Development\kubernetes\service\ingress> minikube addons enable ingress
ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
  ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.2.1
  ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
  ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
Verifying ingress addon...
The 'ingress' addon is enabled
PS C:\Development\kubernetes\service\ingress>
The thing to take away from this is that, on an OS other than Linux, the IP address is 127.0.0.1, NOT whatever IP you see when you run kubectl get ingress. This is because on an OS other than Linux you need minikube tunnel running as a 'bridge' between 127.0.0.1 and whatever IP the Ingress controller is using. It's 127.0.0.1 you need to reference in your hosts file, not the IP shown by kubectl get ingress. Good luck.

Access .NET Core app on Kubernetes on both http and https

Being new to Kubernetes, I am trying to make a simple .NET Core 3 MVC app run on Kubernetes and reply on port 443 as well as port 80. I have a working Docker-Compose setup which I am trying to port to Kubernetes.
Running Docker Desktop CE with nginx-ingress on Win 10 Pro.
So far it is working on port 80. (http://mymvc.local on host Win 10 - hosts file redirects mymvc.local to 127.0.0.1)
My MVC app is running behind service mvc on port 5000.
I've made a self-signed certificate for the domain 'mymvc.local', which is working in the Docker-Compose setup.
This is my ingress file
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mvc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - mymvc.local
      secretName: mvcsecret-tls
  rules:
    - host: mymvc.local
      http:
        paths:
          - path: /
            backend:
              serviceName: mvc
              servicePort: 5000
This is my secrets file (keys abbreviated):
apiVersion: v1
kind: Secret
metadata:
  name: mvcsecret-tls
data:
  tls.crt: MIIDdzCCAl+gAwIBAgIUIok60uPHId5kve+/bZAw/ZGftIcwDQYJKoZIhvcNAQELBQAwKTELMAkGBxGjAYBgN...
  tls.key: MIIEvwIBADANBgkqhkiG9w0BAQEFAASCBKkwggSlAgEAAoIBAQDPGN6yq9yzxvDL8fEUJChqlnaTQW6bQX+H0...
type: kubernetes.io/tls
kubectl describes the ingress as follows:
Name: mvc-ingress
Namespace: default
Address: localhost
Default backend: default-http-backend:80 (<none>)
TLS:
mvcsecret-tls terminates mymvc.local
Rules:
Host Path Backends
---- ---- --------
mymvc.local
/ mvc:5000 (10.1.0.27:5000)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 11m nginx-ingress-controller Ingress default/mvc-ingress
Normal UPDATE 11m nginx-ingress-controller Ingress default/mvc-ingress
In my Docker-Compose setup, I have an Nginx reverse proxy redirecting 80 and 443 to my MVC service, but I figured that is the role of ingress on Kubernetes?
My service YAML:
apiVersion: v1
kind: Service
metadata:
  name: mvc
  labels:
    app: mymvc
spec:
  ports:
    - name: "mvc"
      port: 5000
      targetPort: 5000
  selector:
    app: mymvc
  type: ClusterIP
EDIT:
Adding 'nginx.ingress.kubernetes.io/rewrite-target: /' to the ingress annotations makes the HTTPS forward work, but the certificate presented is the 'Kubernetes Ingress Controller Fake Certificate', not my self-signed one.
The solution turned out to be the addition of a second kind of certificate.
Instead of using the secrets file above (where I pasted the contents of my certificate files), I told kubectl to use my certificate files directly:
kubectl create secret tls mvcsecret-tls --key MyCert.key --cert MyCert.crt
kubectl create secret generic tls-rootca --from-file=RootCA.pem
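Before creating the secret, you can sanity-check that the key and certificate actually belong together. This is a sketch assuming openssl is available; the file paths and CN are illustrative stand-ins for the MyCert.key/MyCert.crt pair above:

```shell
# Generate a throwaway self-signed pair (stand-in for MyCert.key/MyCert.crt),
# then confirm the public key embedded in the cert matches the private key.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/MyCert.key -out /tmp/MyCert.crt -subj "/CN=mymvc.local" 2>/dev/null
cert_pub=$(openssl x509 -in /tmp/MyCert.crt -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in /tmp/MyCert.key -pubout 2>/dev/null | openssl sha256)
# Identical digests mean the files form a valid pair for kubectl create secret tls.
[ "$cert_pub" = "$key_pub" ] && echo "key matches cert"
```

If the two digests match, the pair is consistent and any remaining TLS problem lies elsewhere (e.g. the ingress not picking up the secret).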