I have a traditional server app stack:
- database
- app (Ruby)
- microservice (Node)
The app is available at https://example.com.
My users want isolated personal apps (for high availability) with full database access via a connection string.
So we need server app stacks:
- personal (isolated) databases
- personal (isolated) apps
- personal (isolated) microservices
Apps must be available at http://cloud.example.com/userX, where userX is the user's login.
I think each user should have their own namespace, so the personal database, application, and microservice all belong to that namespace.
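For illustration, each user's stack would then live in a namespace named after the user (a trivial sketch; user1 is a placeholder):
apiVersion: v1
kind: Namespace
metadata:
  name: user1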
I also currently have one Ingress (in the kube-public namespace) for all users' apps:
# ? apiVersion: networking.k8s.io/v1beta1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mgrs
  namespace: kube-public
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - cloud.example.com
    secretName: cloud-tls
  rules:
  - host: cloud.example.com
    http:
      paths:
      - path: /user1
        backend:
          serviceName: user1-service
          servicePort: 80
      - path: /user2
        backend:
          serviceName: user2-service
          servicePort: 80
...
How is this possible with Kubernetes? Maybe I need a separate Ingress per user?
Or would it be easier to use hosts like userX.example.com instead of cloud.example.com/userX?
One approach is to use a single Nginx instance as a dynamic proxy to the services. For that, you could add a ConfigMap that routes dynamically to the user's service.
If you use one namespace and put the user name in the service name, you can use this config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
      listen 80;
      resolver kube-dns.kube-system.svc.cluster.local valid=5s;
      location ~ /(.*) {
        proxy_pass http://$1-service.default.svc.cluster.local;
      }
    }
If you use one namespace per user and put the user name in both the service name and the namespace, you can use something like this config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
      listen 80;
      resolver kube-dns.kube-system.svc.cluster.local valid=5s;
      location ~ /(.*) {
        proxy_pass http://$1-service.$1.svc.cluster.local;
      }
    }
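In either case, the ConfigMap has to be mounted into the Nginx pod. A minimal sketch of such a Deployment (name and image tag are illustrative; the stock nginx image includes conf.d/*.conf inside its http context, so the server block above is picked up):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-proxy               # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-proxy
  template:
    metadata:
      labels:
        app: user-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d   # the nginx.conf key lands here as conf.d/nginx.conf
      volumes:
      - name: config
        configMap:
          name: nginx-config-dns-file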
Another possibility, aligned with this one, is to use the Nginx ingress controller and take advantage of Nginx as the ingress controller together with its configuration options to achieve what you want.
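For example, each user's namespace could carry its own Ingress for that user's path; the Nginx ingress controller merges rules that share a host. A hedged sketch for user1, using the current networking.k8s.io/v1 API (names are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user1-ingress
  namespace: user1
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cloud.example.com
    secretName: cloud-tls
  rules:
  - host: cloud.example.com
    http:
      paths:
      - path: /user1           # this namespace only claims its own path
        pathType: Prefix
        backend:
          service:
            name: user1-service
            port:
              number: 80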
Related
My modern application, hosted in Kubernetes, works based on the host and path app.dev.sandbox.com/ (root / path). I need to access an external legacy application, hosted on EC2/VM with the host name app-legacy.dev.sandbox.com, via the Kubernetes ingress. I want to be able to access the legacy site's pages under the same host name as the application deployed on Kubernetes: app.dev.sandbox.com/nbs/login or app.dev.sandbox.com/nbs/homepage.do should actually request app-legacy.dev.sandbox.com/nbs/login or app-legacy.dev.sandbox.com/nbs/homepage.do, but preserve the modern host name app.dev.sandbox.com when displaying the page.
This is needed because we are using the strangler fig pattern to navigate between legacy and modern pages seamlessly. The request needs to be captured by the Kubernetes ingress when the user clicks certain links on the legacy pages. I am currently using the following ExternalName service and Ingress resource:
apiVersion: v1
kind: Service
metadata:
  name: nbs-legacy
spec:
  type: ExternalName
  externalName: app-legacy.dev.sandbox.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-production"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/preserve-host: "true"
spec:
  ingressClassName: nginx
  tls:
  - secretName: app.dev.sandbox.com
    hosts:
    - app.dev.sandbox.com
  rules:
  - host: app.dev.sandbox.com
    http:
      paths:
      - path: /nbs # I would like the path to be appended to the nbs-legacy request. For example, /nbs/login should be appended when requesting app-legacy.dev.sandbox.com/nbs/login
        pathType: Exact
        backend:
          service:
            name: nbs-legacy
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: modern-service
            port:
              number: 80
I am using the nginx ingress controller. I tried the above manifests, but they did not work.
I am expecting the following:
- Navigate to the external legacy application seamlessly while preserving the Kubernetes host name.
- Capture the path from the request and pass it to the legacy host. For example, /nbs/login or /nbs/dashboard should be passed along when requesting legacy application pages.
- Avoid routing anything that starts with /nbs (e.g. /nbs/abc) to the default root / path.
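For what it's worth, one direction that matches these expectations (a hedged sketch, not verified against this setup) is to match /nbs with pathType: Prefix and drop the rewrite-target annotation so the path reaches the backend unchanged, while the nginx.ingress.kubernetes.io/upstream-vhost annotation sets the Host header sent to the legacy server:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nbs-legacy-ingress       # hypothetical name, kept separate from the main Ingress
  annotations:
    # Host header sent upstream; the browser keeps seeing app.dev.sandbox.com
    nginx.ingress.kubernetes.io/upstream-vhost: app-legacy.dev.sandbox.com
spec:
  ingressClassName: nginx
  rules:
  - host: app.dev.sandbox.com
    http:
      paths:
      - path: /nbs               # Prefix matches /nbs/login, /nbs/homepage.do, ...
        pathType: Prefix
        backend:
          service:
            name: nbs-legacy
            port:
              number: 80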
I have the following need:
There is an API that may be accessed only from allowlisted IPs. I'd like to make this API available publicly.
I thought about the following solution:
Create a Service of type ExternalName:
kind: Service
apiVersion: v1
metadata:
  name: my-svc
spec:
  type: ExternalName
  externalName: restricted-api.com
Create an ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - mysite.com
    secretName: mysite-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: my-svc
            port:
              name: https
Is my understanding correct that, with such a setup, when I call https://example.com/request, at the K8s level the request will be sent to https://restricted-api.com/request? The caller would not know that there is communication with restricted-api.com. Since the clients' IPs are dynamic, restricted-api.com would not allow them to call it directly.
The k8s IP is static and I could allowlist it.
OK, if these are just your thoughts so far, I would recommend looking into this setting:
externalTrafficPolicy: Local is a field on the Kubernetes Service resource that can be set to preserve the client source IP. When this value is set, the actual IP address of a client (e.g., a browser or mobile application) is propagated to the Kubernetes service instead of the IP address of the node.
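A minimal sketch of where the field sits, assuming a LoadBalancer Service in front of an ingress controller (names and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443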
You can find more information in the official Kubernetes docs.
Feel free to reach out again if you start implementing this and run into any issues.
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that can be configured to listen on one or more interfaces (external IPs).
I am not even sure whether that is easily possible, or whether I should just switch to an external solution such as HAProxy or a standalone nginx.
Required behavior:
192.168.0.1-H"domain.com":443/frontend -> 192.168.0.1 (eth0) -> ingress -> service-frontend
192.168.0.1-H"domain.com":443/backend -> 192.168.0.1 (eth0) -> ingress -> service-backend
88.88.88.88-H"domain.com":443/frontend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
88.88.88.88-H"domain.com":443/backend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
And then later the eth1 interface should be able to be switched on, so that requests on that interface behave the same as on eth0.
I would like to be able to deploy multiple instances of services for load balancing. I would also like to keep the configuration in my namespace (if possible) so I can always delete and apply everything at once.
I'm using this guide as a reference: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
I was able to get something working with minikube, but obviously I could not expose any external IPs, and performance was quite bad. For that, I just configured a "kind: Ingress" and that was it.
So far, the default ingress controller on microk8s seems to listen on all interfaces, and I can only configure it in its own namespace. Defining my own Ingress seems to have no effect.
For the above scenario, you have to deploy multiple Nginx ingress controllers and give each a different ingress class name.
Official documentation: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
So in this scenario, you create one Kubernetes Service with a LoadBalancer IP per controller; each points to the respective controller deployment, and the corresponding class is referenced in your Ingress objects.
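Roughly, following that document, the second controller gets its own IngressClass whose spec.controller value matches the extra controller's --controller-class flag; a hedged sketch (class, controller, and service names are illustrative):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal                          # illustrative second class
spec:
  controller: k8s.io/internal-ingress-nginx     # must match the controller's --controller-class
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app                            # illustrative
spec:
  ingressClassName: nginx-internal              # routes this Ingress to the second controller
  rules:
  - host: internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-svc
            port:
              number: 80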
If you want to use multiple domains with a single ingress controller, you can do so easily by specifying the host in each Ingress.
Example for two domains:
bar.foo.dev
foo.bar.dev
YAML example
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-bar
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - bar.foo.dev
    secretName: tls-secret-bar
  rules:
  - host: bar.foo.dev
    http:
      paths:
      - backend:
          serviceName: barfoo
          servicePort: 80
        path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-foo
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - foo.bar.dev
    secretName: tls-secret-foo
  rules:
  - host: foo.bar.dev
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 9000
        path: /(.*)
One potential fix was much simpler than anticipated: no messing with MetalLB or anything else was needed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
...
This does not answer the question of splitting an Ingress across multiple interfaces, but it does solve the problem of restricting public access.
By default, a bare-metal ingress will listen on all interfaces, which might be a security issue.
This solution works without enabling the ingress add-on on microk8s:
Install the ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml
Create your deployment and service, and add this Ingress resource (all in the same namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
  name: ingress-resource
  namespace: namespace-name
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: service-name
            port:
              number: service-port
        path: /namespace-name/service-name(/|$)(.*)
        pathType: Prefix
kubectl get svc -n ingress-nginx
Now get either the CLUSTER-IP or the EXTERNAL-IP, and:
curl <ip>/namespace-here/service-here
I'm trying to set up a reverse proxy using nginx-ingress, but I cannot find a way to apply the reverse proxy to only certain paths.
For example, I want to reverse proxy http://myservice.com/about/* to CDN static resources, and have all other paths served by my service (in the example, the 'my-service-web' service).
In k8s terms, the CDN here means a "public external service".
The desired result:
http://myservice.com/about/* -> reverse proxy from CDN (external service)
http://myservice.com/* -> my-service-web (internal service)
Here is my ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-web
  namespace: my-service
  annotations:
    kubernetes.io/ingress.class: nginx-ingress
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ /about/(.*) {
        proxy_pass https://CDN_URL/$1${is_args}${args};
        ......and other proxy settings
      }
spec:
  rules:
  - host: myservice.com
    http:
      paths:
      - path: /about
        # ...how do I configure this?
      - path: /*
        backend:
          serviceName: my-service-web
          servicePort: 80
How do I set the rules and annotations?
You can create a Service of type ExternalName that points to your external service (the CDN); this is well explained in this blog post. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-service
spec:
  type: ExternalName
  externalName: FQDN
Then use it in your Ingress rules by referring to the service name.
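A hedged sketch of such a rule (FQDN above is a placeholder; backend-protocol is set on the assumption the CDN is reached over HTTPS, and since that annotation applies to the whole Ingress, the /about rule is kept in its own Ingress while other paths stay on the existing one):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-about       # illustrative, separate from my-service-web
  namespace: my-service
  annotations:
    # assumption: the CDN endpoint speaks HTTPS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: myservice.com
    http:
      paths:
      - path: /about
        pathType: Prefix
        backend:
          service:
            name: my-service   # the ExternalName Service above
            port:
              number: 443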
On GKE, K8s Ingresses are backed by load balancers provided by Compute Engine, which have some cost. For example, over two months I paid €16.97.
In my cluster I have three namespaces (default, dev, and prod), so to reduce cost I would like to avoid spawning three load balancers. The question is how to configure the current one to point to the right namespace.
GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.
I would like to do something like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: dev
  annotations: # activate SSL certificate
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
  - host: dev.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: dev-service # This is the current case; 'dev-service' is a NodePort
          servicePort: http
  - host: domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service # This service lives in the 'dev' namespace and is of type ExternalName. Its final purpose is to point to the real target service living in the 'prod' namespace.
          servicePort: http
  - host: www.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: prod-service
          servicePort: http
As GKE requires the service to be a NodePort, I am stuck with prod-service.
Any help will be appreciated.
Thanks a lot.
OK, here is what I have been doing: I have only one Ingress, with a single backend service pointing to nginx.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80
In your nginx deployment/controller you can define ConfigMaps with typical nginx configuration. This way you use one Ingress and target multiple namespaces.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;
      server_name _;
      location / {
        add_header Content-Type text/plain;
        return 200 "OK.";
      }
      location /segmentation {
        proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
      }
    }
And the Deployment will use the above nginx config via the ConfigMap:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity will not let two nginx pods run on the same node machine
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-configs
          mountPath: /etc/nginx/conf.d
        livenessProbe:
          httpGet:
            path: /
            port: 80
      # Load the configuration files for nginx
      volumes:
      - name: nginx-configs
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: "TCP"
    nodePort: 32111
    port: 80
This way you can take advantage of Ingress features like TLS/SSL termination (managed by Google or by cert-manager), and if you want you can also keep your more complex configuration inside nginx.
Use @Prata's way, but with one change: do not route prod traffic via nginx; route it directly to the service from the load balancer, and use nginx only for non-prod traffic (e.g. staging).
The reason is that Google's HTTPS load balancer uses container-native load balancing (link), which routes traffic directly to healthy pods, saving hops and being efficient, so why not use it for production?
One alternative (and probably the most flexible GCP-native) solution for HTTP(S) load balancing is to use standalone NEGs (network endpoint groups). This requires you to set up all parts of the load balancer yourself (URL maps, health checks, etc.).
There are multiple benefits, such as:
One load-balancer can serve multiple namespaces
The same load-balancer can integrate other backends as well (like other instance groups outside your cluster)
You can still use container native load balancing
One challenge of this approach is that it is not "GKE native": the routes will still exist even if you delete the underlying service. This approach is therefore best maintained through tools like Terraform, which give you holistic control over your GCP deployment.
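On the cluster side, a standalone NEG is requested with the cloud.google.com/neg annotation on the Service; a minimal sketch (service name, selector, and ports are illustrative, and the URL map, backend service, and health checks still have to be created in GCP, e.g. via Terraform):
apiVersion: v1
kind: Service
metadata:
  name: prod-service                                  # illustrative
  annotations:
    # ask GKE to create a standalone NEG for port 80
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
spec:
  selector:
    app: prod-app                                     # illustrative
  ports:
  - port: 80
    targetPort: 8080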