Nginx ingress configuration to expose different ports of a cluster ip - kubernetes

I am trying to install Neo4j on a Kubernetes cluster. I have installed the helm chart
neo4j/neo4j-standalone and selected the service type ClusterIP.
The service exposes 3 ports, as can be seen:
kubectl get svc neo4j-dev-neo4j -n neo4j
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
neo4j-dev-neo4j ClusterIP 10.100.228.68 <none> 7474/TCP,7473/TCP,7687/TCP 154m
I have created an ingress which exposes port 7474, which serves the browser UI. This part opens correctly.
Below is the ingress:
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      ingress.kubernetes.io/rewrite-target: /
      kubernetes.io/ingress.class: nginx
    creationTimestamp: "2022-06-15T06:56:56Z"
    generation: 19
    name: neo4j-ingress
    namespace: neo4j
    resourceVersion: "5602507"
    uid: f1548432-f051-4a44-9c98-4f23ee0ace33
  spec:
    rules:
    - host: neo4j.dev.com
      http:
        paths:
        - backend:
            service:
              name: neo4j-dev-neo4j
              port:
                number: 7474
          path: /
          pathType: Prefix
        - backend:
            service:
              name: neo4j-dev-neo4j
              port:
                number: 7687
          path: /db
          pathType: Prefix
    tls:
    - hosts:
      - neo4j.dev.com
      secretName: dev-tls
  status:
    loadBalancer:
      ingress:
      - hostname: ALB
kind: List
metadata:
  resourceVersion: ""
After accessing the UI, we also need to access the database, which is on port 7687; that is what I am not able to connect to from the UI.
I want to know what the ingress configuration should be; I have tried the above, but no luck.
I have tried connecting to the database from within the cluster and it connects fine. Since I am not able to connect to it from the UI, I suppose I have to expose the db port 7687 in the same ingress as well, which I have tried:
kubectl run --rm -it --namespace "neo4j" --image "neo4j:4.4.6" cypher-shell -- cypher-shell -a "neo4j://neo4j-dev.neo4j.svc.cluster.local:7687" -u neo4j -p
If you don't see a command prompt, try pressing enter.
Connected to Neo4j using Bolt protocol version 4.4 at neo4j://neo4j-dev.neo4j.svc.cluster.local:7687 as user neo4j.
Type :help for a list of available commands or :exit to exit the shell.
Note that Cypher queries must end with a semicolon.
neo4j@neo4j>
How can I connect to the database from the UI?
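Note: Bolt (port 7687) is a plain TCP protocol rather than HTTP, so for non-browser drivers an HTTP path rule like /db generally cannot route it. One approach documented for the ingress-nginx controller is to expose the port through its tcp-services ConfigMap; the sketch below assumes the controller runs in the ingress-nginx namespace and was started with --tcp-services-configmap pointing at this ConfigMap, so adjust the names to your installation:

```yaml
# Hedged sketch, not a verified fix: expose Bolt (7687) as a raw TCP stream
# through ingress-nginx. ConfigMap name/namespace must match the controller's
# --tcp-services-configmap flag.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external port>": "<namespace>/<service>:<service port>"
  "7687": "neo4j/neo4j-dev-neo4j:7687"
```

The controller's own Service must then also publish port 7687; the exact wiring depends on how the controller was installed.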

Related

Why are requests to the deployment not working via the service or via the ingress?

I installed minikube v1.29.0 on macOS.
I created an API endpoint in Flask and built it into a docker image:
FROM debian:latest
COPY . /app
WORKDIR /app
RUN pip3 install --no-cache-dir -r requirements.txt
CMD ["uwsgi", "--socket", "0.0.0.0:5001", "--protocol=http", "-w", "wsgi:app", "--ini", "wsgi.ini"]
Then I loaded the docker image into minikube:
minikube image load drnoreg/devops_blog:0.0.1
and checked that it is there:
% minikube image ls
docker.io/drnoreg/devops_blog:0.0.1
I created the deployment, service, and ingress YAML:
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-blog
spec:
  selector:
    matchLabels:
      run: devops-blog
  replicas: 1
  template:
    metadata:
      labels:
        run: devops-blog
    spec:
      containers:
      - name: devops-blog
        image: docker.io/drnoreg/devops_blog:0.0.1
        ports:
        - name: pod-port
          containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: NodePort
  ports:
  - name: pod-port
    port: 5001
    targetPort: 5001
    protocol: TCP
    nodePort: 30001
  selector:
    run: devops-blog
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
spec:
  rules:
  - host: devops-blog.cluster.local
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: devops-blog
            port:
              number: 5001
I created the namespace:
kubectl create namespace devops-blog
set the current namespace:
kubectl config set-context --current --namespace=devops-blog
and created the deployment, service, and ingress:
kubectl create -f app.yaml
Then I tried port-forwarding to check that the Flask API works:
kubectl port-forward devops-blog-f666d8cd7-njp95 5001:5001
Forwarding from 127.0.0.1:5001 -> 5001
Forwarding from [::1]:5001 -> 5001
Handling connection for 5001
Handling connection for 5001
The Flask API service in minikube is working.
% kubectl get service -n devops-blog -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
devops-blog NodePort 10.99.37.126 <none> 5001:30001/TCP 45s run=devops-blog
% kubectl get pod -n devops-blog -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
devops-blog-f666d8cd7-b9n7j 1/1 Running 0 57s 10.244.0.34 minikube <none> <none>
% kubectl get node -n devops-blog -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane 16h v1.26.1 192.168.49.2 <none> Ubuntu 20.04.5 LTS 5.10.47-linuxkit docker://20.10.23
Now I try to reach the API via the minikube NodePort service:
% telnet 192.168.49.2 30001
Trying 192.168.49.2...
not working
I added this to /etc/hosts:
127.0.0.1 devops-blog.cluster.local
and tried to reach the API via the minikube ingress:
% telnet devops-blog.cluster.local 80
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
not working either.
Why are requests to the deployment failing via both the service and the ingress?
How can I solve this problem?
In case you did not enable the ingress addon, enable it by executing the following command:
$ minikube addons enable ingress
Instead of a NodePort service, try using a ClusterIP service for the app, and when you create the ingress give this service as the backend, like this:
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: ClusterIP
  ports:
  - name: pod-port
    port: 5001
    targetPort: 5001
    protocol: TCP
  selector:
    run: devops-blog
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # since you are using localhost
spec:
  rules:
  - host: devops-blog.cluster.local
    http:
      paths:
      - pathType: ImplementationSpecific
        path: /
        backend:
          service:
            name: devops-blog
            port:
              number: 5001
Once the ingress has been assigned an IP, try opening http://devops-blog.cluster.local/ in a local browser, or curl it: curl http://devops-blog.cluster.local/.
Note: in case you are deploying this app in the cloud, try a LoadBalancer service.
Try this tutorial, as it explains the setup in detail.

How to access an application/container from dns/hostname in k8s?

I have a k8s cluster where I deploy some containers.
The cluster is accessible at microk8s.hostname.internal.
At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.
And this works great.
Now I would like to deploy another application/container, but I would like it to be accessible like this: otherapplication.microk8s.hostname.internal.
How do I do this?
Currently installed addons in microk8s:
aasa#bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
Update 1:
If I port-forward to my service, it works.
I have tried this ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: jupyter-notebook
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
  - host: jupyter.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
But I can't access it or even ping it. Chrome says:
jupyter.microk8s.hostname.internal's server IP address could not be found.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  namespace: jupyter-notebook
spec:
  ports:
  - name: 7070-8888
    port: 7070
    protocol: TCP
    targetPort: 8888
  selector:
    app: jupyternotebook
  type: ClusterIP
status:
  loadBalancer: {}
I can of course ping microk8s.hostname.internal.
Update 2:
The ingress that is working today, which has a context path (microk8s.boliden.internal/myapplication), looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: jupyter-ingress
  namespace: jupyter-notebook
spec:
  rules:
  - http:
      paths:
      - path: "/jupyter-notebook/?(.*)"
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.
To do this you would have to configure a kube service, a kube ingress, and then configure your DNS.
Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal.
You could use dnsmasq to allow for wildcard resolution, e.g. *.microk8s.hostname.internal.
You can test the DNS resolution using nslookup or dig.
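As a rough sketch of the dnsmasq option (the file path and IP below are placeholders; substitute your machine's config location and your cluster's address), a single address line gives wildcard resolution for the whole subdomain:

```ini
# dnsmasq.conf (location varies by OS, e.g. /usr/local/etc/dnsmasq.conf via Homebrew)
# Resolve *.microk8s.hostname.internal to the cluster's IP (placeholder value):
address=/microk8s.hostname.internal/192.168.1.50
```

After restarting dnsmasq, nslookup otherapplication.microk8s.hostname.internal should return that IP, assuming your system resolver points at dnsmasq.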
You can copy the same ingress and update its name and the host inside it; that's all you need to change.
For ref:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress # make sure to update the name, else it will overwrite the existing ingress
spec:
  rules:
  - host: otherapplication.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-name
            port:
              number: service-port
You can create the subdomain with ingress; just update the host in the ingress and set the service name and port to route traffic to the specific service.
Feel free to append any necessary fields and annotations to the above ingress from the existing ingress that is working for you.
If you are running it locally you might have to map the IP to the subdomain in your local /etc/hosts file:
/etc/hosts
<IP address> otherapplication.microk8s.hostname.internal

Problem configuring websphere application server behind ingress

I am running websphere application server deployment and service (type LoadBalancer). The websphere admin console works fine at URL https://svcloadbalancerip:9043/ibm/console/logon.jsp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
was-svc LoadBalancer x.x.x.x x.x.x.x 9080:30810/TCP,9443:30095/TCP,9043:31902/TCP,7777:32123/TCP,31199:30225/TCP,8880:31027/TCP,9100:30936/TCP,9403:32371/TCP 2d5h
But if I configure that websphere service behind ingress using an ingress file like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /ibm/console/logon.jsp
        backend:
          serviceName: was-svc
          servicePort: 9043
      - path: /v1
        backend:
          serviceName: web
          servicePort: 8080
then the URL https://ingressip/ibm/console/logon.jsp doesn't work.
I have tried the rewrite annotation too.
Can anyone help me deploy the ibmcom/websphere-traditional docker image in kubernetes using a deployment and service, with the service mapped behind the ingress, so that the websphere admin console can be opened from the ingress?
There is a helm chart available from the IBM team which includes the ingress resource as well. In your code snippet, you are also missing the SSL-related annotations.
https://hub.helm.sh/charts/ibm-charts/ibm-websphere-traditional
https://github.com/IBM/charts/tree/master/stable/ibm-websphere-traditional
I have added the virtual host configuration needed for the admin console to work on port 443 in the following code sample.
Please note: exposing the admin console on the ingress is not good practice. Configuration should be done via wsadmin or by extending the base Dockerfile. Any changes done through the console will be lost when the container restarts.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: websphere
spec:
  type: NodePort
  ports:
  - name: admin
    port: 9043
    protocol: TCP
    targetPort: 9043
    nodePort: 30510
  - name: app
    port: 9443
    protocol: TCP
    targetPort: 9443
    nodePort: 30511
  selector:
    run: websphere
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: websphere-admin-vh
  namespace: default
data:
  ingress_vh.props: |+
    #
    # Header
    #
    ResourceType=VirtualHost
    ImplementingResourceType=VirtualHost
    ResourceId=Cell=!{cellName}:VirtualHost=admin_host
    AttributeInfo=aliases(port,hostname)
    #
    #
    #Properties
    #
    443=*
    EnvironmentVariablesSection
    #
    #
    #Environment Variables
    cellName=DefaultCell01
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: websphere
  name: websphere
spec:
  containers:
  - image: ibmcom/websphere-traditional
    name: websphere
    volumeMounts:
    - name: admin-vh
      mountPath: /etc/websphere/
    ports:
    - name: app
      containerPort: 9443
    - name: admin
      containerPort: 9043
  volumes:
  - name: admin-vh
    configMap:
      name: websphere-admin-vh
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - http:
      paths:
      - path: /ibm/console
        backend:
          serviceName: websphere
          servicePort: 9043
Exposing both the adminhost and defaulthost via ingress isn't possible, or at least I've never figured out how to accomplish it. The crux of the issue is that ingress listens on port 80 or port 443 and forwards your request to the corresponding port on the container. Thus, the Host header of your request contains that port. I don't know enough about WAS channels/virtualhosts to understand how this works exactly, but in order for accessing WAS endpoints over any port other than the one listed for the endpoint in WAS config to work, the websphere-traditional image has to set a property to extract the port it should use for things like checking against virtualhost hostalias entries and issuing redirects from the Host header (com.ibm.ws.webcontainer.extractHostHeaderPort).
The problem becomes, when it uses that port, that port needs to be listed as a host alias for the virtual host in order for the traffic to be let through to the application. And since a combination of wildcard host and specific port can only be a host alias on one virtual host at a time, they were set up as host aliases on defaulthost so that web applications will work via ingress, but this makes it impossible to also access the admin console since that is served via a separate virtualhost which doesn't (and as far as I know can't) have the host alias entries set up to allow traffic with port 443 in its host header through. I haven't had to figure out how to get this working because kubectl port-forward has been sufficient to get at the admin console for the times I've needed to consult something, and you can't make changes anyway because they'll disappear when the pod restarts and a new one is started from the same (unchanged) image.

Find why i am getting 502 Bad gateway error on kubernetes

I am using kubernetes. I have an ingress which talks to my container's service. We have exposed a web API, which works fine, but we keep getting 502 bad gateway errors. I am new to kubernetes and I have no clue how to go about debugging this issue. The server is a nodejs server connected to a database. Is there anything wrong with the configuration?
My deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress Service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you get this 502 gateway error because you don't have the ingress controller configured correctly.
Please try to do it with an installed ingress controller as in the example below; it will handle everything automatically.
Nginx Ingress step by step:
1) Install helm
2) Install nginx controller using helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create 2 services. You can get their details via
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 29d
nginx-ingress-controller LoadBalancer 10.39.243.140 35.X.X.15 80:32324/TCP,443:31425/TCP 19m
nginx-ingress-default-backend ClusterIP 10.39.252.175 <none> 80/TCP 19m
nginx-ingress-controller - in short, it deals with requests to the Ingress and directs traffic
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts that the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
and add an additional line to the index page, so we can see which pod we connect to later:
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This will automatically create a service, which will look like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it's time to create Ingress.yaml and deploy it. Each rule in the ingress needs to be specified. Here I have 2 services; each service specification starts with -host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check Ingress and hosts
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
two-svc-ingress my.pod.svc,nginx.test.svc 35.228.230.6 80 57m
8) Explanation of why I installed Ingress.
Connect to the ingress controller pod:
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I installed the nginx ingress earlier, after deploying Ingress.yaml the nginx-ingress-controller found the changes and automatically added the necessary code.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration, only the headers.
start server my.pod.svc
start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
$ kubectl get svc to get your nginx-ingress-controller external IP
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind that the Ingress needs to be in the same namespace as the services. If you have services in many namespaces, you need to create an Ingress for each namespace.
I would need to set up a cluster in order to test your yml files.
Just to help you debug, follow these steps:
1- Get the logs of the my-pod container using kubectl logs my-pod-container-name and make sure everything is working.
2- Use port-forward to expose your container and test it.
3- Make sure the service is working properly; change its type to LoadBalancer so you can reach it from outside the cluster.
If all three things work, there is a problem with your ingress configuration.
I am not sure if I explained it in a detailed way; let me know if something is not clear.
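For step 3, a temporary way to test external reachability is to switch the service type (a sketch based on the my-pod-serv service from the question; revert to the original type after testing):

```yaml
# Same service as in the question, with type switched to LoadBalancer
# purely to debug whether the backend is reachable from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
```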

Access service in remote Kubernetes cluster using ingress

I'm attempting to access a service in an existing kubernetes cluster deployed on a remote machine. I've configured the cluster to be accessible through kubectl from my local Mac.
$ kubectl cluster-info
Kubernetes master is running at https://192.168.58.114:6443
KubeDNS is running at https://192.168.58.114:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
The ingress configuration for the service I want to connect to is:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: gw-ingress
  namespace: vick-system
  selfLink: /apis/extensions/v1beta1/namespaces/vick-system/ingresses/gw-ingress
  uid: 52b62da6-01c1-11e9-9f59-fa163eb296d8
  resourceVersion: '2695'
  generation: 1
  creationTimestamp: '2018-12-17T06:02:23Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/affinity":"cookie","nginx.ingress.kubernetes.io/session-cookie-hash":"sha1","nginx.ingress.kubernetes.io/session-cookie-name":"route"},"name":"gw-ingress","namespace":"vick-system"},"spec":{"rules":[{"host":"wso2-apim-gateway","http":{"paths":[{"backend":{"serviceName":"gateway","servicePort":8280},"path":"/"}]}}],"tls":[{"hosts":["wso2-apim-gateway"]}]}}
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
  tls:
  - hosts:
    - wso2-apim-gateway
  rules:
  - host: wso2-apim-gateway
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 8280
status:
  loadBalancer:
    ingress:
    - ip: 172.17.17.100
My list of services is: (screenshot omitted)
My /etc/hosts file looks like below:
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
172.17.17.100 wso2-apim-gateway wso2-apim wso2sp-dashboard
What URL should I use to access this service from my local browser? Do I need any further configuration?
The easiest way to access this would be a port-forward, which requires no modification of your hosts file.
kubectl -n vick-system port-forward svc/wso2sp-dashboard 9643
This will allow you to browse to http://localhost:9643 and access that service.
Please note, the svc/name syntax is only supported in kubectl >= 1.10