Ensure ingress rule creation - kubernetes

My problem: I create an Ingress rule via kubernetes-client:
try (InputStream is = IOUtils.toInputStream(crd, StandardCharsets.UTF_8)) {
    client.load(is).inNamespace(namespaceName).createOrReplace();
}
Here crd is a YAML string with an ingress rule like:
spec:
  ingressClassName: nginx
  rules:
    - host: {{host}}
      http:
        paths:
          - backend:
              service:
                name: {{service_name}}
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
But for my task I need to ensure that the rule is actually created, i.e. wait for it or check its status in a loop.
What is the best way to do that?
Thanks!

I think you might be able to do it with waitUntilCondition:
try (KubernetesClient client = new KubernetesClientBuilder().build()) {
    // Load the Ingress definition from the classpath YAML
    Ingress ingress = client.network().v1().ingresses()
        .load(IngressRuleWaitUntilCondition.class.getResourceAsStream("/ingress-rule.yml"))
        .get();
    // Create it (or replace an existing one) in the target namespace
    client.resource(ingress)
        .inNamespace("default")
        .createOrReplace();
    // Block until the condition holds, or fail after 2 minutes
    client.network().v1()
        .ingresses()
        .inNamespace("default")
        .resource(ingress)
        .waitUntilCondition(i -> !i.getSpec().getRules().isEmpty(), 2, TimeUnit.MINUTES);
}
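Note that the condition above only inspects the spec, which is populated as soon as the API server accepts the object, so it may pass immediately. If "ensure created" means the controller has actually picked the rule up, a common signal is an address appearing under status.loadBalancer.ingress, which most controllers (e.g. ingress-nginx) publish. A rough sketch under that assumption; the resource name is illustrative:

try (KubernetesClient client = new KubernetesClientBuilder().build()) {
    client.network().v1().ingresses()
        .inNamespace("default")
        .withName("my-ingress") // hypothetical; use ingress.getMetadata().getName()
        .waitUntilCondition(
            // wait until the controller has published a load-balancer address
            i -> i != null
                && i.getStatus() != null
                && i.getStatus().getLoadBalancer() != null
                && !i.getStatus().getLoadBalancer().getIngress().isEmpty(),
            2, TimeUnit.MINUTES); // throws KubernetesClientTimeoutException on timeout
}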

Related

How to have multiple hosts with Terraform and kubernetes_ingress_v1

The question is simple, yet I cannot find a single example online... basically, how do you write this in Terraform with kubernetes_ingress_v1? I have an app where I am using subdomains for each of the components due to overlapping paths.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: "/bar"
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: "*.foo.com"
      http:
        paths:
          - pathType: Prefix
            path: "/foo"
            backend:
              service:
                name: service2
                port:
                  number: 80
With kubernetes_ingress_v1, to have multiple hosts on the same ingress I had to use multiple rule blocks:
resource "kubernetes_ingress_v1" "monitoring" {
metadata {
name = "monitoring-alb"
namespace = "monitoring"
annotations = {
"kubernetes.io/ingress.class" = "alb"
#More annotation##
}
}
spec {
rule {
host = "monitoring.mydomain.io"
http {
path {
path_type = "ImplementationSpecific"
backend {
service {
name = "grafana"
port {
number = 80
}
}
}
}
}
}
rule {
host = "logs.mydomain.io"
http {
path {
path_type = "ImplementationSpecific"
backend {
service {
name = "loki-distributed-gateway"
port {
number = 80
}
}
}
}
}
}
}
}
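If the set of hosts keeps growing, a dynamic block can generate the rule blocks from a map instead of repeating them by hand. A minimal sketch, assuming a hypothetical ingress_hosts variable that maps host names to service names:

variable "ingress_hosts" {
  type = map(string)
  default = {
    "monitoring.mydomain.io" = "grafana"
    "logs.mydomain.io"       = "loki-distributed-gateway"
  }
}

resource "kubernetes_ingress_v1" "monitoring" {
  metadata {
    name      = "monitoring-alb"
    namespace = "monitoring"
  }
  spec {
    # one rule block is generated per map entry
    dynamic "rule" {
      for_each = var.ingress_hosts
      content {
        host = rule.key
        http {
          path {
            path_type = "ImplementationSpecific"
            backend {
              service {
                name = rule.value
                port {
                  number = 80
                }
              }
            }
          }
        }
      }
    }
  }
}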

How to deny a path in a k8s ingress

I would like to block /public/configs in my k8s ingress.
My current settings don't work.
- host: example.com
  http:
    paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-myapp
            port:
              number: 80
      - path: /public/configs
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-myapp
            port:
              number: 88  # fake port
Is there any better (easy) way?
1- Create a dummy service and route the path to it:
- path: /public/configs
  pathType: ImplementationSpecific
  backend:
    service:
      name: dummy-service
      port:
        number: 80
2- Use a server-snippet as below to return 403 or any other error you want:
a) for the Kubernetes ingress-nginx controller:
annotations:
  nginx.ingress.kubernetes.io/server-snippet: |
    location ~* "^/public/configs" {
      deny all;
      return 403;
    }
b) for the NGINX (nginx.org) ingress controller:
annotations:
  nginx.org/server-snippet: |
    location ~* "^/public/configs" {
      deny all;
      return 403;
    }
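For context, a minimal sketch of where the server-snippet annotation sits on the Ingress object (the metadata name is illustrative, the rest mirrors the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress  # illustrative name
  annotations:
    # return 403 for the blocked path before it reaches the backend
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* "^/public/configs" {
        deny all;
        return 403;
      }
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-myapp
                port:
                  number: 80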

Setting up Kubernetes Ingress proxy-body-size based on request method

I've been trying to set up a max body size in the Ingress controller based on the HTTP method of a given path.
Basically the POST method should allow 3m as max size and all the other methods should allow 1m.
Right now my main idea is to do something like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-custom-service
  namespace: development
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx-dev"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      internal;
      rewrite ^ $original_uri break;
    nginx.ingress.kubernetes.io/server-snippet: |
      location /api/v1/my-endpoint {
        if ( $request_method = POST) {
          set $target_destination '/_post';
          client_max_body_size 3M;
        }
        if ( $request_method != POST) {
          set $target_destination '/_not_post';
          client_max_body_size 1M;
        }
        set $original_uri $uri;
        rewrite ^ $target_destination last;
      }
spec:
  tls:
  rules:
    - host: my-host.com
      http:
        paths:
          - path: /_post
            backend:
              serviceName: my-service
              servicePort: 8080
          - path: /_not_post
            backend:
              serviceName: my-service
              servicePort: 8080
But with this configuration I'm getting an error in the pod.
Is there any way I can correctly set up the max body size via the ingress controller?
Try changing your annotations to use the configuration-snippet:
nginx.ingress.kubernetes.io/configuration-snippet: |
  location /upload-path {
    client_max_body_size 8M;
  }
Read more at: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet
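If the per-method distinction turns out to be optional, the documented proxy-body-size annotation sets a single limit for the whole Ingress. A minimal sketch reusing the question's names, on the current networking.k8s.io/v1 schema:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-custom-service
  namespace: development
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "3m"  # applies to every method
spec:
  rules:
    - host: my-host.com
      http:
        paths:
          - path: /api/v1/my-endpoint
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080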

Force Kubernetes ingress CNAME format

With Kubernetes, in a multi-tenant environment controlled by RBAC, when creating a new Ingress CNAME I would like to force the CNAME format to be:
${service}.${namespace}.${cluster}.kube.infra
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${service}
spec:
  tls:
    - hosts:
        - ${service}.${namespace}.${cluster}.kube.infra
      secretName: conso-elasticsearch-ssl
  rules:
    - host: ${service}.${namespace}.${cluster}.kube.infra
      http:
        paths:
          - path: /
            backend:
              serviceName: ${service}
              servicePort: 9200
Is it possible?
You can do it by writing a validating admission webhook which validates the Ingress YAML and rejects it if the CNAME format is not the one you want. A better way is to use Open Policy Agent (OPA) and write a Rego policy. Here is a guide on how to perform policy-driven validation of ingress using OPA.
package kubernetes.admission

import data.kubernetes.namespaces

operations = {"CREATE", "UPDATE"}

deny[msg] {
    input.request.kind.kind == "Ingress"
    operations[input.request.operation]
    host := input.request.object.spec.rules[_].host
    not fqdn_matches_any(host, valid_ingress_hosts)
    msg := sprintf("invalid ingress host %q", [host])
}

valid_ingress_hosts = {
    # valid hosts
}

fqdn_matches_any(str, patterns) {
    fqdn_matches(str, patterns[_])
}

fqdn_matches(str, pattern) {
    # validation logic for wildcard patterns
}

fqdn_matches(str, pattern) {
    not contains(pattern, "*")
    str == pattern
}

Kubernetes Websockets using Socket.io, ExpressJS and Nginx Ingress

I want to connect a React Native application using Socket.io to a server that is inside a Kubernetes Cluster hosted on Google Cloud Platform (GKE).
There seems to be an issue with the Nginx Ingress Controller declaration but I cannot find it.
I have tried adding nginx.org/websocket-services; rewriting my backend code so that it uses a separate NodeJS server (a simple HTTP server) on port 3004, then exposing it via the Ingress Controller under a different path than the one on port 3003; and multiple other suggestions from other SO questions and GitHub issues.
Information that might be useful:
Cluster master version: 1.15.11-gke.15
I use a Load Balancer managed with Helm (stable/nginx-ingress) with RBAC enabled
All deployments and services are within the namespace gitlab-managed-apps
The error I receive when trying to connect to socket.io is: Error: websocket error
For the front-end part, the code is as follows:
App.js
const socket = io('https://example.com/app-sockets/socketns', {
  reconnect: true,
  secure: true,
  transports: ['websocket', 'polling']
});
I expect the above to connect me to a socket.io namespace called socketns.
The backend code is:
app.js
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);
const redis = require('socket.io-redis');

io.set('transports', ['websocket', 'polling']);

io.adapter(redis({
  host: process.env.NODE_ENV === 'development' ? 'localhost' : 'redis-cluster-ip-service.gitlab-managed-apps.svc.cluster.local',
  port: 6379
}));

io.of('/').adapter.on('error', function(err) { console.log('Redis Adapter error! ', err); });

const nsp = io.of('/socketns');
nsp.on('connection', function(socket) {
  console.log('connected!');
});

server.listen(3003, () => {
  console.log('App listening to 3003');
});
The ingress service is:
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
    nginx.org/websocket-services: "app-sockets-cluster-ip-service"
  name: ingress-service
  namespace: gitlab-managed-apps
spec:
  tls:
    - hosts:
        - example.com
      secretName: letsencrypt-prod
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: app-cms-cluster-ip-service
              servicePort: 3000
            path: /?(.*)
          - backend:
              serviceName: app-users-cluster-ip-service
              servicePort: 3001
            path: /app-users/?(.*)
          - backend:
              serviceName: app-sockets-cluster-ip-service
              servicePort: 3003
            path: /app-sockets/?(.*)
          - backend:
              serviceName: app-sockets-cluster-ip-service
              servicePort: 3003
            path: /app-sockets/socketns/?(.*)
The solution is to remove the nginx.ingress.kubernetes.io/rewrite-target: /$1 annotation.
Here is a working configuration (please note that the apiVersion has changed since the question was asked).
Ingress configuration
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  name: ingress-service
  namespace: default
spec:
  tls:
    - hosts:
        - example.com
      secretName: letsencrypt-prod
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              service:
                name: app-sockets-cluster-ip-service
                port:
                  number: 3003
            path: /app-sockets/?(.*)
            pathType: Prefix
On the service (Express.js):
app.js
// Context from the question's app.js: the HTTP server the socket.io
// instance attaches to.
const express = require('express');
const app = express();
const server = require('http').createServer(app);

const redisAdapter = require('socket.io-redis');
const io = require('socket.io')(server, {
  path: `${ global.NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
  cors: {
    origin: '*',
    methods: ['GET', 'POST'],
  },
});

io.adapter(redisAdapter({
  host: global.REDIS_HOST,
  port: 6379,
}));

io.of('/').adapter.on('error', err => console.log('Redis Adapter error! ', err));

io.on('connection', () => {
  //...
});
The global.NODE_ENV === 'development' ? '' : '/app-sockets' bit is related to an issue in development. If you change it here, you must also change it in the snippet below.
In development the service is under http://localhost:3003 (sockets endpoint is http://localhost:3003/sockets).
In production the service is under https://example.com/app-sockets (sockets endpoint is https://example.com/app-sockets/sockets).
On frontend
connectToWebsocketsService.js
/**
 * Connect to a websockets service
 * @param tokens {Object}
 * @param successCallback {Function}
 * @param failureCallback {Function}
 */
export const connectToWebsocketsService = (tokens, successCallback, failureCallback) => {
  // SOCKETS_URL = NODE_ENV === 'development' ? 'http://localhost:3003' : 'https://example.com/app-sockets'
  const socket = io(`${ SOCKETS_URL.replace('/app-sockets', '') }`, {
    path: `${ NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
    reconnect: true,
    secure: true,
    transports: ['polling', 'websocket'], // required
    query: {
      // optional
    },
    auth: {
      ...generateAuthorizationHeaders(tokens), // optional
    },
  });

  socket.on('connect', successCallback(socket));
  socket.on('reconnect', successCallback(socket));
  socket.on('connect_error', failureCallback);
};
Note: I wasn't able to do it on the project mentioned in the question, but I have on another project which is hosted on EKS, not GKE. Feel free to confirm if this works for you on GKE as well.
Just change the annotation to
nginx.ingress.kubernetes.io/websocket-services: "app-sockets-cluster-ip-service"
instead of
nginx.org/websocket-services: "app-sockets-cluster-ip-service"
Most likely it will resolve your issue.