I've just installed Ubuntu 22.04 on a VMware virtual server and started using microk8s. The server is part of a local network that contains several other servers, including the Microsoft AD and IIS servers that manage the network.
I have Docker installed on the Ubuntu system and can run all of the web app's containers via Docker with no problem. In particular, I have a service (a container) that connects to the Windows AD server on the local network to authenticate the web app's users. On the host it works fine: it can reach the AD server as well as the other servers on the network and perform all the necessary operations.
When the same containers run on Kubernetes via microk8s, on the other hand, all the services work and are reachable from the local network, and the containers can reach the external network (outside our LAN, e.g. www.google.com). Only the internal network seems to be unreachable from the containers; every request to it ends in a timeout.
What I tried (but did not work)
External service
[https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors][1]
Checking the DNS resolution on the host, which gets copied into the container
Note
I'm not sure which commands should be run to provide the most useful information about the configuration, so I'll be iterating over this question, extending it with logs and other meaningful information.
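For reference, a quick way to test reachability from inside the cluster is a throwaway debug pod; this is only a sketch, and ad.example.local stands in for the real AD host name:

# run a temporary busybox pod and test name resolution / reachability from it
microk8s kubectl run tmp-debug --rm -it --image=busybox:1.36 --restart=Never -- \
  sh -c "nslookup ad.example.local; ping -c 2 ad.example.local"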
Thanks
Edit 11/10/2022
I've enabled the following addons:
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns          # (core) CoreDNS
    ha-cluster   # (core) Configure high availability on the current node
    helm         # (core) Helm - the package manager for Kubernetes
    helm3        # (core) Helm 3 - the package manager for Kubernetes
    ingress      # (core) Ingress controller for external access
    metallb      # (core) Loadbalancer for your Kubernetes cluster
Another strange thing is that the containers can access the Postgres database on the host via the host's IP address (10.1.1.xxx).
Edit 2 12/10/2022
Here's the ingress YAML file:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
---
#
# Ingress
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /api/erp(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: erp-service
                port:
                  number: 8000
          - path: /api/auth(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-service
                port:
                  number: 8000
          - path: /()(.*)
            pathType: Prefix
            backend:
              service:
                name: ui-service
                port:
                  number: 3000
I can access the UI, and by using the host's IP with /api/auth I can reach the Swagger/OpenAPI documentation.
[1]: https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
To this day I haven't found a real solution; the only workaround has been to sidestep the request and use a "proxy" endpoint, as suggested in
Accessing an external InfluxDb Database from a microk8s pod using selectorless service and manual endpoint?
Basically, it creates a service that can be reached from inside the cluster and an Endpoints object that points to the external resource.
The actual config, taken from that answer:
kind: Service
apiVersion: v1
metadata:
  name: influxdb-service-lb
  #namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerIP: 10.1.2.61
  # selector:
  #   app: grafana
  ports:
    - name: http
      protocol: TCP
      port: 8086
      targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
  name: influxdb-service-lb
subsets:
  - addresses:
      - ip: 10.1.2.220
    ports:
      - name: influx
        protocol: TCP
        port: 8086
If I manage to find a proper solution, I'll update this answer.
Related
I have a k8s cluster where I deploy some containers.
The cluster is accessible at microk8s.hostname.internal.
At this moment I have an application/container deployed that is accessible here: microk8s.hostname.internal/myapplication with the help of a service and an ingress.
And this works great.
Now I would like to deploy another application/container but I would like it accessible like this: otherapplication.microk8s.hostname.internal.
How do I do this?
Currently installed addons in microk8s:
aasa#bolsrv0891:/snap/bin$ microk8s status
microk8s is running
high-availability: no
addons:
  enabled:
    dashboard        # (core) The Kubernetes dashboard
    dns              # (core) CoreDNS
    helm             # (core) Helm - the package manager for Kubernetes
    helm3            # (core) Helm 3 - the package manager for Kubernetes
    ingress          # (core) Ingress controller for external access
    metrics-server   # (core) K8s Metrics Server for API access to service metrics
Update 1:
If I port-forward to my service, it works.
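The port-forward I use looks roughly like this (the local port choice is arbitrary):

kubectl -n jupyter-notebook port-forward service/jupyter-service 7070:7070
# the notebook is then reachable on http://localhost:7070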
I have tried this ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  namespace: jupyter-notebook
  annotations:
    kubernetes.io/ingress.class: public
spec:
  rules:
    - host: jupyter.microk8s.hostname.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jupyter-service
                port:
                  number: 7070
But I can't access it, nor even ping it. Chrome says:
jupyter.microk8s.hostname.internal’s server IP address could not be found.
My service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: jupyter-service
  namespace: jupyter-notebook
spec:
  ports:
    - name: 7070-8888
      port: 7070
      protocol: TCP
      targetPort: 8888
  selector:
    app: jupyternotebook
  type: ClusterIP
status:
  loadBalancer: {}
I can of course ping microk8s.hostname.internal.
Update 2:
The ingress that is working today, which uses a context path (microk8s.boliden.internal/myapplication), looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: jupyter-ingress
  namespace: jupyter-notebook
spec:
  rules:
    - http:
        paths:
          - path: "/jupyter-notebook/?(.*)"
            pathType: Prefix
            backend:
              service:
                name: jupyter-service
                port:
                  number: 7070
This is accessible externally by accessing microk8s.hostname.internal/jupyter-notebook.
To do this you would have to configure a Kubernetes Service, a Kubernetes Ingress, and then configure your DNS.
Adding an entry to the hosts file would allow DNS resolution of otherapplication.microk8s.hostname.internal.
You could use dnsmasq to allow for wildcard resolution, e.g. *.microk8s.hostname.internal.
You can test the DNS resolution using nslookup or dig.
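A minimal sketch of what that could look like; the IP 192.168.1.50 is only a placeholder for the node running your ingress controller:

# dnsmasq wildcard entry, e.g. in /etc/dnsmasq.d/microk8s.conf:
# resolves microk8s.hostname.internal and all of its subdomains to the node
address=/microk8s.hostname.internal/192.168.1.50

# verify the resolution afterwards
nslookup otherapplication.microk8s.hostname.internal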
You can copy the existing ingress and update its name and the host inside it; that's the only change you need.
For ref:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: second-ingress   # <<- make sure to update the name, else it will overwrite the existing ingress if it's the same
spec:
  rules:
    - host: otherapplication.microk8s.hostname.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-name        # your service's name
                port:
                  number: service-port    # your service's port
You can create the subdomain with an ingress: just update the host in the ingress and point the backend at the right service name and port to route traffic to that specific service.
Feel free to copy over any other required fields and annotations from the existing ingress that is already working for you.
If you are running it locally, you might have to map the subdomain to the IP in your local /etc/hosts file:
/etc/hosts
<IP address>   otherapplication.microk8s.hostname.internal
We want to reach only node-local services via Ingress, using K3s (1.23) and Traefik.
We have an NGINX gateway running as a DaemonSet on all nodes, exposed as a NodePort 30123 called gateway with externalTrafficPolicy: Local. When we hit https://node1:30123 we consistently get a response only from a local pod of the NGINX instance on node1. ✔
We also have our Traefik ingress controller running as a DaemonSet, exposed as a NodePort 30999 with externalTrafficPolicy: Local. When we hit https://node1:30999 we get a load-balanced answer ❌🤷‍♂️:
answer from node1
answer from node2
answer from node1
answer from node3
etc
How can we ensure that https://node1:30999 only gets routed to local pods?
Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: my-ingress
spec:
  rules:
    - host: node1
      http:
        paths:
          - backend:
              service:
                name: gateway
                port:
                  number: 8443
            path: /
            pathType: Prefix
    - host: node2
      http:
        paths:
          - backend:
              service:
                name: gateway
                port:
                  number: 8443
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - node1
        - node2
      secretName: tls-secret
Gateway Service
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: gateway
spec:
  clusterIP: ***
  clusterIPs:
    - ***
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: https
      nodePort: 30123
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: gateway
  sessionAffinity: None
  type: NodePort
The reason we want only local traffic is that Kubernetes is too slow at evicting pods/endpoints from a service when a node goes down: it continues to send traffic to dead nodes/pods for minutes after a node disappears. We use an external load balancer with active health checks every 2s to avoid this problem. However, even if our LB targets only healthy nodes, Kubernetes Services still have invalid endpoints and round-robin traffic into nowhere.
I guess this might be because you are running Traefik as a DaemonSet, which behaves like a shared daemon running on all nodes, so every node then applies the Local traffic policy to the request.
So basically, running the ingress controller as a DaemonSet and setting the controller service's traffic policy to Local results in behavior that is effectively the same as the Cluster policy.
Furthermore, the idea of the ingress controller is to route traffic to a specific service in the cluster. Accessing a node directly on the ingress controller's service port will not trigger any forwarding rules. You should instead create an Ingress resource that maps to some existing service (e.g. an nginx test pod). That service's traffic policy must then be set to Local as well to see the real source IP in the service's logs.
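A minimal sketch of such a backend service, assuming a hypothetical nginx-test deployment; the internalTrafficPolicy field (available since Kubernetes 1.22) is an extra suggestion of mine for the Traefik-to-backend hop, not something stated above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-test                  # hypothetical backend referenced by the Ingress
spec:
  type: NodePort
  externalTrafficPolicy: Local      # node-port traffic is served only by pods on the receiving node
  internalTrafficPolicy: Local      # in-cluster traffic (e.g. from Traefik) also stays node-local
  selector:
    app: nginx-test
  ports:
    - name: http
      port: 8080
      targetPort: 8080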
I am running a WebSphere Application Server deployment and service (type LoadBalancer). The WebSphere admin console works fine at the URL https://svcloadbalancerip:9043/ibm/console/logon.jsp
NAME      TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                                                                      AGE
was-svc   LoadBalancer   x.x.x.x      x.x.x.x       9080:30810/TCP,9443:30095/TCP,9043:31902/TCP,7777:32123/TCP,31199:30225/TCP,8880:31027/TCP,9100:30936/TCP,9403:32371/TCP    2d5h
But if I put that WebSphere service behind an ingress using an ingress file like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /ibm/console/logon.jsp
            backend:
              serviceName: was-svc
              servicePort: 9043
          - path: /v1
            backend:
              serviceName: web
              servicePort: 8080
The URL https://ingressip/ibm/console/logon.jsp doesn't work.
I have tried the rewrite annotation too.
Can anyone help me deploy the ibmcom/websphere-traditional Docker image in Kubernetes using a deployment and a service, with the service mapped behind the ingress, so that the WebSphere admin console can somehow be opened through the ingress?
There is a Helm chart available from the IBM team that includes the ingress resource. In your code snippet you are also missing the SSL-related annotations.
https://hub.helm.sh/charts/ibm-charts/ibm-websphere-traditional
https://github.com/IBM/charts/tree/master/stable/ibm-websphere-traditional
I have added the virtual host configuration needed for the admin console to work on port 443 in the following code sample.
Please note: exposing the admin console via the ingress is not good practice. Configuration should be done via wsadmin or by extending the base Dockerfile. Any changes made through the console will be lost when the container restarts.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: websphere
spec:
  type: NodePort
  ports:
    - name: admin
      port: 9043
      protocol: TCP
      targetPort: 9043
      nodePort: 30510
    - name: app
      port: 9443
      protocol: TCP
      targetPort: 9443
      nodePort: 30511
  selector:
    run: websphere
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: websphere-admin-vh
  namespace: default
data:
  ingress_vh.props: |+
    #
    # Header
    #
    ResourceType=VirtualHost
    ImplementingResourceType=VirtualHost
    ResourceId=Cell=!{cellName}:VirtualHost=admin_host
    AttributeInfo=aliases(port,hostname)
    #
    #
    #Properties
    #
    443=*
    EnvironmentVariablesSection
    #
    #
    #Environment Variables
    cellName=DefaultCell01
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: websphere
  name: websphere
spec:
  containers:
    - image: ibmcom/websphere-traditional
      name: websphere
      volumeMounts:
        - name: admin-vh
          mountPath: /etc/websphere/
      ports:
        - name: app
          containerPort: 9443
        - name: admin
          containerPort: 9043
  volumes:
    - name: admin-vh
      configMap:
        name: websphere-admin-vh
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-check
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - http:
        paths:
          - path: /ibm/console
            backend:
              serviceName: websphere
              servicePort: 9043
Exposing both the adminhost and defaulthost via ingress isn't possible, or at least I've never figured out how to accomplish it. The crux of the issue is that ingress listens on port 80 or port 443 and forwards your request to the corresponding port on the container. Thus, the Host header of your request contains that port. I don't know enough about WAS channels/virtualhosts to understand how this works exactly, but in order for accessing WAS endpoints over any port other than the one listed for the endpoint in WAS config to work, the websphere-traditional image has to set a property to extract the port it should use for things like checking against virtualhost hostalias entries and issuing redirects from the Host header (com.ibm.ws.webcontainer.extractHostHeaderPort).
The problem then becomes that, when it uses that port, the port needs to be listed as a host alias for the virtual host in order for the traffic to be let through to the application. And since a combination of wildcard host and specific port can only be a host alias on one virtual host at a time, those aliases were set up on defaulthost so that web applications work via ingress; this makes it impossible to also reach the admin console, since it is served via a separate virtual host which doesn't (and as far as I know can't) have the host alias entries needed to allow traffic with port 443 in its Host header through. I haven't had to figure out how to get this working because kubectl port-forward has been sufficient to get at the admin console the few times I've needed to consult something, and you can't make changes anyway because they'll disappear when the pod restarts and a new one is started from the same (unchanged) image.
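For what it's worth, the port-forward I use for that looks something like this (the pod name websphere matches the other answer's example and may differ in your setup):

kubectl port-forward pod/websphere 9043:9043
# then open https://localhost:9043/ibm/console/logon.jsp in a browser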
I have a legacy application we've started running in Kubernetes. The application listens on two different ports, one for the general web page and another for a web service. In the long run we may try to change some of this but for the moment we're trying to get the legacy application to run as is. The current configuration has a single service for both ports:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: my-app
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: service
      port: 8081
      protocol: TCP
      targetPort: 8081
Then I'm using a single ingress to route traffic to the correct service port based on path:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$remote_addr"
spec:
  rules:
    - host: myapp.test.com
      http:
        paths:
          - backend:
              serviceName: app
              servicePort: 8080
            path: /app
          - backend:
              serviceName: app
              servicePort: 8081
            path: /service
This works great for routing. Requests coming into the ingress get routed to the correct service port based on path. However, the problem I have is that for this legacy app to work, requests to both ports 8080 and 8081 need to be routed to the same pod for each client.

You can see I tried adding the upstream-hash-by annotation. This seemed to ensure that all requests to 8080 from one client went to the same pod and all requests to 8081 from one client went to the same pod, but not that those are the same pod for any one client. When I run a single pod instance everything is great, but when I start spinning up additional pods, some clients get /app requests routed to one pod and /service requests routed to another, and in this application that does not currently work.

I have tried other annotations in the ingress, including nginx.ingress.kubernetes.io/affinity: "cookie" and nginx.ingress.kubernetes.io/affinity-mode: "persistent", as well as adding sessionAffinity: ClientIP to the service, but so far nothing seems to work. The goal is that all requests to either path get routed to the same pod for any one client. Any help would be greatly appreciated.
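For reference, the cookie-affinity variant I tried looked roughly like this (only the metadata block is shown; the rest of the Ingress is unchanged from the one above, and this is a sketch of an attempt, not a working config):

metadata:
  name: app
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"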
Session persistence settings will only work if you configure kube-proxy to forward requests to local pods only, not to random pods across the cluster.
You can do this at the service level by setting:
service.spec.externalTrafficPolicy: Local
You can read more here:
https://kubernetes.io/docs/tutorials/services/source-ip/
After doing this, your ingress annotations should work. I have tested this with an external load balancer only, though, not with an ingress.
Keeping everything else the same, this service definition should work:
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  externalTrafficPolicy: Local
  selector:
    app: my-app
  ports:
    - name: web
      port: 8080
      protocol: TCP
      targetPort: 8080
    - name: service
      port: 8081
      protocol: TCP
      targetPort: 8081
I want to host a website (simple nginx+php-fpm) on Google Container Engine. I built a replication controller that controls the nginx and php-fpm pod. I also built a service that can expose the site.
How do I link my service to a public (and reserved) IP Address so that the webserver sees the client IP addresses?
I tried creating an ingress. It provides the client IP through an extra HTTP header. Unfortunately, ingress does not support reserved IPs yet:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.org
      http:
        paths:
          - backend:
              serviceName: example-web
              servicePort: 80
            path: /
I also tried creating a service with a reserved IP. This gives me a public IP address but I think the client IP is lost:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
    - port: 80
      targetPort: 80
  loadBalancerIP: "10.10.10.10"
  type: LoadBalancer
I would set up the HTTP load balancer manually, but I didn't find a way to configure a cluster IP as a backend for the load balancer.
This seems like a very basic use case to me and stands in the way of using Container Engine in production. What am I missing? Where am I wrong?
As you are running on Google Container Engine, you could set up a Compute Engine HTTP load balancer for your static IP. The target proxy will add X-Forwarded-* headers for you.
Set up your Kubernetes service with type NodePort and add a nodePort field. This way the nodePort is accessible via kube-proxy on every node's IP address, regardless of where the pod is running:
apiVersion: v1
kind: Service
metadata:
  name: 'example-web'
spec:
  selector:
    app: example-web
  ports:
    - nodePort: 30080
      port: 80
      targetPort: 80
  type: NodePort
Create a backend service with an HTTP health check on port 30080 for your instance group (the cluster nodes).
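A rough sketch of those steps with gcloud; the names and the instance group are placeholders, you would typically also set a matching named port on the instance group, and the full setup additionally needs a URL map, a target HTTP proxy, and a global forwarding rule bound to your reserved IP:

# health check against the NodePort exposed above
gcloud compute health-checks create http example-hc --port 30080 --request-path /

# backend service that points at the cluster's node instance group
gcloud compute backend-services create example-backend \
    --protocol HTTP --health-checks example-hc --global
gcloud compute backend-services add-backend example-backend \
    --instance-group <node-instance-group> --instance-group-zone <zone> --global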