Problem configuring websphere application server behind ingress - kubernetes

I am running a WebSphere Application Server deployment and a service of type LoadBalancer. The WebSphere admin console works fine at the URL https://svcloadbalancerip:9043/ibm/console/logon.jsp
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
was-svc LoadBalancer x.x.x.x x.x.x.x 9080:30810/TCP,9443:30095/TCP,9043:31902/TCP,7777:32123/TCP,31199:30225/TCP,8880:31027/TCP,9100:30936/TCP,9403:32371/TCP 2d5h
But if I configure that WebSphere service behind an ingress, using an Ingress manifest like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /ibm/console/logon.jsp
        backend:
          serviceName: was-svc
          servicePort: 9043
      - path: /v1
        backend:
          serviceName: web
          servicePort: 8080
the URL https://ingressip//ibm/console/logon.jsp doesn't work.
I have tried the rewrite annotation too.
Can anyone help me deploy the ibmcom/websphere-traditional Docker image in Kubernetes using a deployment and a service, with the service mapped behind the ingress, so that the WebSphere admin console can somehow be opened through the ingress?
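For reference, the rewrite attempt looked roughly like this (the capture-group path below is illustrative, not my exact config):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # rewrite /console/... on the ingress to /ibm/console/... on the backend
    nginx.ingress.kubernetes.io/rewrite-target: /ibm/console/$1
spec:
  rules:
  - http:
      paths:
      - path: /console/(.*)
        backend:
          serviceName: was-svc
          servicePort: 9043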

There is a Helm chart available from the IBM team which includes the ingress resource. In your code snippet, you are also missing the SSL-related annotations.
https://hub.helm.sh/charts/ibm-charts/ibm-websphere-traditional
https://github.com/IBM/charts/tree/master/stable/ibm-websphere-traditional
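If you want to go the chart route, a Helm 3-style install straight from the GitHub repository above might look roughly like this (the release name is a placeholder, and the chart may require additional values such as license acceptance; check its README):

git clone https://github.com/IBM/charts.git
helm install my-was ./charts/stable/ibm-websphere-traditional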
I have added the virtual host configuration needed for the admin console to work on port 443 in the following code sample.
Please note: exposing the admin console through the ingress is not good practice. Configuration should be done via wsadmin or by extending the base Dockerfile; any changes made through the console will be lost when the container restarts.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: websphere
spec:
  type: NodePort
  ports:
  - name: admin
    port: 9043
    protocol: TCP
    targetPort: 9043
    nodePort: 30510
  - name: app
    port: 9443
    protocol: TCP
    targetPort: 9443
    nodePort: 30511
  selector:
    run: websphere
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: websphere-admin-vh
  namespace: default
data:
  ingress_vh.props: |+
    #
    # Header
    #
    ResourceType=VirtualHost
    ImplementingResourceType=VirtualHost
    ResourceId=Cell=!{cellName}:VirtualHost=admin_host
    AttributeInfo=aliases(port,hostname)
    #
    #
    # Properties
    #
    443=*
    EnvironmentVariablesSection
    #
    #
    # Environment Variables
    cellName=DefaultCell01
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: websphere
  name: websphere
spec:
  containers:
  - image: ibmcom/websphere-traditional
    name: websphere
    volumeMounts:
    - name: admin-vh
      mountPath: /etc/websphere/
    ports:
    - name: app
      containerPort: 9443
    - name: admin
      containerPort: 9043
  volumes:
  - name: admin-vh
    configMap:
      name: websphere-admin-vh
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - http:
      paths:
      - path: /ibm/console
        backend:
          serviceName: websphere
          servicePort: 9043

Exposing both the adminhost and defaulthost via ingress isn't possible, or at least I've never figured out how to accomplish it. The crux of the issue is that the ingress listens on port 80 or 443 and forwards your request to the corresponding port on the container, so the Host header of your request contains that port. I don't know enough about WAS channels/virtual hosts to understand exactly how this works, but in order for WAS endpoints to be accessible over any port other than the one listed for that endpoint in the WAS config, the websphere-traditional image has to set a property (com.ibm.ws.webcontainer.extractHostHeaderPort) so that the port used for things like checking virtual host alias entries and issuing redirects is extracted from the Host header.
The problem then becomes that this port needs to be listed as a host alias for the virtual host in order for the traffic to be let through to the application. Since a combination of wildcard host and specific port can only be a host alias on one virtual host at a time, those aliases were set up on defaulthost so that web applications work via ingress. That makes it impossible to also reach the admin console, because it is served via a separate virtual host which doesn't (and as far as I know can't) have host alias entries that allow traffic with port 443 in its Host header through. I haven't had to figure out how to get this working because kubectl port-forward has been sufficient to get at the admin console the few times I've needed to consult something, and you can't make changes anyway because they'll disappear when the pod restarts and a new one is started from the same (unchanged) image.
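For reference, a minimal port-forward sketch for that approach (assuming the pod from the answer above is named websphere and the console listens on 9043):

kubectl port-forward pod/websphere 9043:9043
# then open https://localhost:9043/ibm/console/logon.jsp in a browser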

Related

How can I resolve connection issues between my frontend and backend pods in Minikube?

I am trying to deploy an application to Minikube. However, I am having issues connecting the frontend pod to the backend pod.
Each Deployment has a ClusterIP service and a NodePort service.
I access the frontend via the browser by executing the command minikube service frontend-entrypoint. When the frontend tries to query the backend it requests the URL http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial, but the response status is (failed) net::ERR_EMPTY_RESPONSE.
If I access the frontend via the command line, executing kubectl exec -it react-deployment-xxxxxxxxxx-xxxxx -- sh, and inside it run curl -X GET "http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial", I get what I expect.
So, I understand that NodePorts are used to route external traffic to services inside the cluster by opening a specific port on each node in the cluster and forwarding traffic from that port to the service, and that ClusterIPs, on the other hand, are used to expose services only within the cluster and are not directly accessible from outside the cluster. What I don't understand is why, when reaching the frontend via the browser, it is not able to connect internally to the backend. Once I am interacting with the frontend, I assumed I was inside the cluster...
I tried to expose the cluster using other services such as Ingress or LoadBalancer, but I didn't have success connecting to the frontend, so I rolled back to the NodePort solution.
References:
Kubernetes Guide - Deploying a machine learning app built with Django, React and PostgreSQL using Kubernetes
Exposing External-Facing Services In Kubernetes
Files:
component_backend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-entrypoint
spec:
  selector:
    component: fastapi
  ports:
  - name: http2
    port: 8000
    targetPort: 8000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: fastapi
  template:
    metadata:
      labels:
        component: fastapi
    spec:
      containers:
      - name: fastapi-container
        image: xxx/yyy:zzz
        ports:
        - containerPort: 8000
        env:
        - name: DB_USERNAME
          valueFrom:
            configMapKeyRef:
              name: app-variables
              key: DB_USERNAME
        [...]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
  - port: 8000
    targetPort: 8000
  externalIPs:
  - <minikube ip>
componente_frontend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend-entrypoint
spec:
  selector:
    component: react
  ports:
  - name: http1
    port: 3000
    targetPort: 3000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
      - name: react-container
        image: xxx/yyy:zzz
        ports:
        - containerPort: 3000
        env:
        - name: BASELINE_API_URL
          valueFrom:
            configMapKeyRef:
              name: app-variables
              key: BASELINE_API_URL
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: react-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: react
  ports:
  - port: 3000
    targetPort: 3000
  externalIPs:
  - <minikube ip>
BASELINE_API_URL is declared with the backend ClusterIP service name (i.e., fastapi-cluster-ip-service).
ingress_service.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: sfs.baseline
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-entrypoint
            port:
              name: http1
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-entrypoint
            port:
              name: http2
You have to think about this: if you call the API through your browser or Postman from your host, they don't know anything about the URL that exists inside your pod.
For production an ingress configuration is needed, so you can call the API like:
https://api.mydomain.com/api/myroute
When you deploy the frontend, the paths inside your code should be created dynamically.
Inside your code, define the paths with environment variables and use those variables.
On Kubernetes, define a ConfigMap with the paths and bind it to your container.
So when you call the frontend it will have the right paths.
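A minimal sketch of that idea, reusing the app-variables ConfigMap name and the BASELINE_API_URL key from your manifests (the URL value here is only an example, pointing at the ingress host rather than the internal ClusterIP service name; the env binding in componente_frontend.yaml can stay as it is):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  # a URL the browser can actually reach, not the in-cluster service name
  BASELINE_API_URL: "http://sfs.baseline/api"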
Your frontend can be reached locally with the IP address of your master node and the NodePort.
To avoid using the IP address, you can create entries in your local hosts file:
nodesipaddress mydomain.local
nodesipaddress api.mydomain.local
So from your browser you can reach the frontend with mydomain.local:nodeportOfFrontend,
and your frontend code should call the backend with api.mydomain.local:nodeportOfApi.
If you enable ingress inside your cluster, create an ingress resource in your deployment, and use a service of type LoadBalancer, then you can call the frontend and API without the NodePort.
If you run into issues with that, please post all your Kubernetes YAMLs: Deployments, Services, ConfigMaps, and the Ingress if you decide to use it.
UPDATE
Check if you have ingress enabled on Minikube.
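For example, with the standard Minikube addon commands (adjust to your setup):

minikube addons list
minikube addons enable ingress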
Modify your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: mydomain.local
    http:
      paths:
      - path: /
I'm not sure if Minikube supports ClusterIP.
Change the type to LoadBalancer in your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
  - port: 8000
    targetPort: 8000
Verify your service:
kubectl get svc -o wide
kubectl describe services my-service
Check whether the external IP is shown as pending; if so, add externalIPs:
ports:
- port: 8000
  targetPort: 8000
externalIPs:
- xxx.xxx.xxx.xxx # your minikube ip
However, I would first try to create a service of type NodePort and then access it with xxx.xxx.xxx.xxx:NodePort.
In the YAMLs you posted I see only the backend.
Keep in mind that whether you use ingress or NodePort, your code must be adapted:
your frontend must call the API via the API's ingress domain, or via xxx.xxx.xxx.xxx:NodePort / mydomain.local/api.
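As a quick check of the NodePort route (the backend NodePort placeholder below is whatever port Kubernetes assigned to backend-entrypoint):

kubectl get svc frontend-entrypoint backend-entrypoint
curl "http://$(minikube ip):<backend-nodeport>/api/v1/baseline/building_type?building_type=commercial"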
I think you are not understanding how the frontend part of a website works.
To display the website in your browser, you actually download all the required content from the server/pod where you host the frontend.
Once the frontend is downloaded and displayed in your browser, you then try to hit the backend directly from your browser/computer using either an IP address or a URL.
When using a URL, your browser will try to resolve the hostname using a DNS server.
So when you load a website in your browser, you are not in the server/pod, and you cannot resolve the URL because that URL is not mapped to any IP address (or not to your server).
That is also why it works when you go inside the pod using kubectl exec: you are inside the network and you are using the internal DNS.
As David said, to make this work you need to call the backend from the frontend using some kind of ingress.
And you will also need to create a DNS entry on your domain (if using a URL).
If not, you can directly use the IP of the pod/service.
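For a local setup, a minimal sketch is a hosts-file entry mapping the ingress host from your ingress_service.yaml to your cluster IP (<minikube ip> is a placeholder, as in your manifests), so the browser can resolve it and the frontend can call http://sfs.baseline/api/...:

# /etc/hosts
<minikube ip>   sfs.baseline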

How to access an application/container from dns/hostname in k8s?

I have a k8s cluster where I deploy some containers.
The cluster is accessible at microk8s.hostname.internal.
At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.
And this works great.
Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.
How do I do this?
Currently installed addons in microk8s:
aasa#bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
enabled:
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metrics-server # (core) K8s Metrics Server for API access to service metrics
Update 1:
If I port-forward to my service, it works.
I have tried this ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: jupyter-notebook
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
  - host: jupyter.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
But I can't access it or ping it. Chrome says:
jupyter.microk8s.hostname.internal’s server IP address could not be found.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  namespace: jupyter-notebook
spec:
  ports:
  - name: 7070-8888
    port: 7070
    protocol: TCP
    targetPort: 8888
  selector:
    app: jupyternotebook
  type: ClusterIP
status:
  loadBalancer: {}
I can of course ping microk8s.hostname.internal.
Update 2:
The ingress that is working today, with the context path microk8s.boliden.internal/myapplication, looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: jupyter-ingress
  namespace: jupyter-notebook
spec:
  rules:
  - http:
      paths:
      - path: "/jupyter-notebook/?(.*)"
        pathType: Prefix
        backend:
          service:
            name: jupyter-service
            port:
              number: 7070
This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.
To do this you would have to configure a Kubernetes Service, a Kubernetes Ingress, and then configure your DNS.
Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal.
You could use dnsmasq to allow for wildcard resolution, e.g. *.microk8s.hostname.internal.
You can test the DNS resolution using nslookup or dig.
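For example, a dnsmasq wildcard entry plus a quick check might look like this (the node IP is a placeholder for whichever node your ingress controller listens on):

# dnsmasq configuration: resolve *.microk8s.hostname.internal to the ingress node
address=/microk8s.hostname.internal/<node IP>

nslookup otherapplication.microk8s.hostname.internal
dig otherapplication.microk8s.hostname.internal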
You can copy the same ingress and just update its name and the host inside it; that's all the change you need.
For reference:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress  # make sure to update the name, else it will overwrite the existing ingress if the name is the same
spec:
  rules:
  - host: otherapplication.microk8s.hostname.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-name   # your service's name
            port:
              number: service-port   # your service's port number
You can create the subdomain with the ingress: just update the host in the ingress and add the necessary service name and port to route traffic to the specific service.
Feel free to append any other necessary fields and annotations to the above ingress from the existing ingress which is working for you.
If you are running it locally, you might have to map the IP to the subdomain in your local /etc/hosts file:
/etc/hosts
<IP address> otherapplication.microk8s.hostname.internal

Microk8s reaches internet but not internal network

I've just installed Ubuntu 22.04 on a VMware virtual server and started using microk8s. The server is part of a local network in which there are some servers, including Microsoft AD and IIS servers that handle the network.
I have Docker installed on the Ubuntu system and can run all the containers of the web app with no problem via Docker. In particular, I have a service (a container) that connects to the Windows AD server of the local network to authenticate users of the web app. On the host it works with no problem: it can reach the AD server and also other servers in the network and do all the necessary operations.
On the other hand, when run on kubernetes via microk8s, all the services work, they are all reachable from the local network, while at the same time the containers can reach the external network (outside our local network, e.g. www.google.com). Only the internal network seems to be unreachable, for which I always get a timeout error.
What I tried (but did not work)
External service
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
Check the DNS resolution on the host, which gets copied into the container
Note
I'm not sure what kind of commands should be run in order to provide the most useful information about the configuration, so I'll be iterating over this question, extending it with logs and other meaningful information.
Thanks
Edit 11/10/2022
I've enabled the following addons:
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for Kubernetes
ingress # (core) Ingress controller for external access
metallb # (core) Loadbalancer for your Kubernetes cluster
Another strange thing is that the containers can access the Postgres database on the host via the host's IP address (10.1.1.xxx).
Edit 2 12/10/2022
Here's the ingress YAML file:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---
#
# Ingress
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /api/erp(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: erp-service
            port:
              number: 8000
      - path: /api/auth(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 8000
      - path: /()(.*)
        pathType: Prefix
        backend:
          service:
            name: ui-service
            port:
              number: 3000
I can access the UI, and by using the host's IP and /api/auth I can access the online Swagger/OpenAPI documentation.
To this day I haven't found any solution other than to circumvent the request and use a "proxy" endpoint, as suggested in
Accessing an external InfluxDb Database from a microk8s pod using selectorless service and manual endpoint?
Basically, it creates a Service that can be accessed from within the cluster and an Endpoints object that points to the external resource.
The actual source config, taken from that answer:
kind: Service
apiVersion: v1
metadata:
  name: influxdb-service-lb
  #namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerIP: 10.1.2.61
  # selector:
  #   app: grafana
  ports:
  - name: http
    protocol: TCP
    port: 8086
    targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service-lb
subsets:
- addresses:
  - ip: 10.1.2.220
  ports:
  - name: influx
    protocol: TCP
    port: 8086
If I manage to find a solution, I'll update this answer.

Kubernetes nginx ingress session affinity across ports

I have a legacy application we've started running in Kubernetes. The application listens on two different ports, one for the general web page and another for a web service. In the long run we may try to change some of this but for the moment we're trying to get the legacy application to run as is. The current configuration has a single service for both ports:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081
Then I'm using a single ingress to route traffic to the correct service port based on path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
  - host: myapp.test.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 8080
        path: /app
      - backend:
          serviceName: app
          servicePort: 8081
        path: /service
This works great for routing. Requests coming into the ingress get routed to the correct service port based on path. However, the problem I have is that for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.
You can see I tried adding the upstream-hash-by annotation. This seemed to ensure that all requests to 8080 from one client went to the same pod and all requests to 8081 from one client went to the same pod, but not that those are the same pod for any one client. When I run with a single pod instance everything is great, but when I start spinning up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and in this application that does not currently work.
I have tried other annotations in the ingress, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as trying to add sessionAffinity: ClientIP to the service, but so far nothing seems to work. The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
Session persistence settings will only work if you set the kube-proxy settings such that requests are forwarded to the local pod only and not to random pods across the cluster.
You can do this by setting the service-level setting:
service.spec.externalTrafficPolicy: Local
You can read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
After doing this, your ingress annotations should work. I have tested this with an external load balancer only, not with an ingress, though.
Keeping everything else the same, having this service definition should work:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: service
    port: 8081
    protocol: TCP
    targetPort: 8081

Ingress cannot resolve NodePort IP in GKE

I have an ingress defined as:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: zaz-address
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: foo-bar-com
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz/*
        backend:
          serviceName: zaz-service
          servicePort: 8080
Then the service zaz-service is a NodePort defined as:
apiVersion: v1
kind: Service
metadata:
  name: zaz-service
  namespace: default
spec:
  clusterIP: 10.27.255.88
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32455
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: zap
  sessionAffinity: None
  type: NodePort
The NodePort service is successfully selecting the two pods behind it that serve my service. I can see in the GKE services list that the NodePort service has an IP that looks internal.
When I check the ingress in the same interface, it also looks fine, but it is serving zero pods.
When I describe the ingress, on the other hand, I can see:
Rules:
  Host         Path     Backends
  ----         ----     --------
  foo.bar.com
               /zaz/*   zaz-service:8080 (<none>)
It looks like the ingress is unable to resolve the service IP. What am I doing wrong here? I cannot access the service through the external domain name; I am getting a 404 error.
How can I make the ingress translate the service name zaz-service into the proper IP so it can redirect traffic there?
It seems like wildcards in the path are not supported yet.
Is there any reason not to just use the following in your case?
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz
        backend:
          serviceName: zaz-service
          servicePort: 8080
My mistake was, as expected, not reading the documentation thoroughly.
The port stated in the Ingress path is not a "forwarding" mechanism but a "filtering" one. In my head it made sense that it would redirect HTTP(S) traffic to port 8080, which is the port the Service behind it was listening on, and the Pod behind the service too.
In reality, it would not route traffic that was not on port 8080 to the service. To make it cleaner, I changed the port in the Ingress from 8080 to 80, and in the Service I changed the front-facing port from 8080 to 80 too.
Now all requests coming from the internet can reach the server successfully.
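A sketch of the adjusted pieces as I understand the fix described above (only the relevant fields are shown, the names are from the question, and it is assumed the pod itself still listens on 8080):

apiVersion: v1
kind: Service
metadata:
  name: zaz-service
spec:
  type: NodePort
  ports:
  - port: 80          # front-facing port changed from 8080 to 80
    targetPort: 8080  # the pod still listens on 8080
    protocol: TCP
  selector:
    app: zap
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /zaz
        backend:
          serviceName: zaz-service
          servicePort: 80   # matches the service port above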