With Kubernetes, in a multi-tenant environment controlled by RBAC, I would like to force the CNAME of any newly created Ingress to follow the format:
${service}.${namespace}.${cluster}.kube.infra
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${service}
spec:
  tls:
  - hosts:
    - ${service}.${namespace}.${cluster}.kube.infra
    secretName: conso-elasticsearch-ssl
  rules:
  - host: ${service}.${namespace}.${cluster}.kube.infra
    http:
      paths:
      - path: /
        backend:
          serviceName: ${service}
          servicePort: 9200
Is it possible?
You can do it by writing a validating admission webhook that validates the Ingress manifest and rejects it if the host format is not the one you want. A better way is to use Open Policy Agent (OPA) and write a Rego policy. Here is a guide on how to perform policy-driven validation of Ingress objects using OPA.
package kubernetes.admission

import data.kubernetes.namespaces

operations = {"CREATE", "UPDATE"}

deny[msg] {
    input.request.kind.kind == "Ingress"
    operations[input.request.operation]
    host := input.request.object.spec.rules[_].host
    not fqdn_matches_any(host, valid_ingress_hosts)
    msg := sprintf("invalid ingress host %q", [host])
}

# valid hosts for the request's namespace, e.g. read from an
# "ingress-whitelist" annotation such as "*.kube.infra"
valid_ingress_hosts = {host |
    whitelist := namespaces[input.request.namespace].metadata.annotations["ingress-whitelist"]
    hosts := split(whitelist, ",")
    host := hosts[_]
}

fqdn_matches_any(str, patterns) {
    fqdn_matches(str, patterns[_])
}

# wildcard pattern: "*.kube.infra" matches "svc.ns.cluster.kube.infra"
fqdn_matches(str, pattern) {
    startswith(pattern, "*.")
    suffix := trim_prefix(pattern, "*")
    endswith(str, suffix)
}

# exact match for patterns without a wildcard
fqdn_matches(str, pattern) {
    not contains(pattern, "*")
    str == pattern
}
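For the webhook route, OPA (typically deployed together with kube-mgmt) still has to be registered with the API server through a ValidatingWebhookConfiguration. A minimal sketch, assuming OPA is exposed by a Service named opa in an opa namespace; the names and the CA bundle are placeholders:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-policy
webhooks:
- name: validating-webhook.openpolicyagent.org
  clientConfig:
    service:
      name: opa              # placeholder: the Service fronting OPA
      namespace: opa         # placeholder: the namespace OPA runs in
    caBundle: <base64 CA>    # placeholder: CA that signed OPA's serving cert
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: ["networking.k8s.io", "extensions"]
    apiVersions: ["*"]
    resources: ["ingresses"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail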
My problem: I create an ingress rule via kubernetes-client:
try (InputStream is = IOUtils.toInputStream(crd, StandardCharsets.UTF_8)) {
    client.load(is).inNamespace(namespaceName).createOrReplace();
}
Here, is is loaded from a YAML file with an ingress rule like:
spec:
  ingressClassName: nginx
  rules:
  - host: {{host}}
    http:
      paths:
      - backend:
          service:
            name: {{service_name}}
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
But in my task I need to ensure that the rule is actually created, so I need to wait or check its status in a loop.
What is the best way to do it?
Thanks!
I think you might be able to do it with waitUntilCondition:
try (KubernetesClient client = new KubernetesClientBuilder().build()) {
    // load the Ingress definition from the classpath
    Ingress ingress = client.network().v1().ingresses()
            .load(IngressRuleWaitUntilCondition.class.getResourceAsStream("/ingress-rule.yml"))
            .get();
    // create (or update) it in the target namespace
    client.resource(ingress)
            .inNamespace("default")
            .createOrReplace();
    // block until the condition holds, or time out after 2 minutes
    client.network().v1()
            .ingresses()
            .inNamespace("default")
            .resource(ingress)
            .waitUntilCondition(i -> !i.getSpec().getRules().isEmpty(), 2, TimeUnit.MINUTES);
}
I've been trying to set up a max body size in the Ingress controller based on the HTTP method of a given path.
Basically the POST method should allow 3m as max size and all the other methods should allow 1m.
Right now my main idea was to do something like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-custom-service
  namespace: development
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx-dev"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      internal;
      rewrite ^ $original_uri break;
    nginx.ingress.kubernetes.io/server-snippet: |
      location /api/v1/my-endpoint {
        if ( $request_method = POST) {
          set $target_destination '/_post';
          client_max_body_size 3M;
        }
        if ( $request_method != POST) {
          set $target_destination '/_not_post';
          client_max_body_size 1M;
        }
        set $original_uri $uri;
        rewrite ^ $target_destination last;
      }
spec:
  tls:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /_post
        backend:
          serviceName: my-service
          servicePort: 8080
      - path: /_not_post
        backend:
          serviceName: my-service
          servicePort: 8080
But then I'm getting the following error in the pod:
Is there any way I can correctly set up the max body size via the ingress controller?
Try changing your annotations to use the configuration-snippet:
nginx.ingress.kubernetes.io/configuration-snippet: |
  location /upload-path {
    client_max_body_size 8M;
  }
Read more at : https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet
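If you only need one limit for the whole path (rather than one limit per HTTP method), the documented proxy-body-size annotation avoids snippets entirely. A minimal sketch, reusing the names from the question:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-custom-service
  namespace: development
  annotations:
    kubernetes.io/ingress.class: "nginx-dev"
    nginx.ingress.kubernetes.io/proxy-body-size: "3m"   # one limit for all methods on this Ingress
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /api/v1/my-endpoint
        backend:
          serviceName: my-service
          servicePort: 8080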
I'm trying to create an ingress resource with Terraform. I receive the following error message:
Error: Failed to create Ingress 'jenkins/jenkins-ingress' because: the
server could not find the requested resource (post ingresses.extensions)

  with kubernetes_ingress.jenkins-ingress,
  on main.tf line 160, in resource "kubernetes_ingress" "jenkins-ingress":
  160: resource "kubernetes_ingress" "jenkins-ingress" {
My Terraform resource looks like this:
resource "kubernetes_ingress" "jenkins-ingress" {
  metadata {
    name      = "${var.name}-ingress"
    namespace = var.namespace
    annotations = {
      "ingress.kubernetes.io/rewrite-target" = "/"
      "kubernetes.io/ingress.class"          = "nginx"
    }
  }

  spec {
    rule {
      host = "domain.com"
      http {
        path {
          path = "/"
          backend {
            service_name = var.name
            service_port = 8080
          }
        }
      }
    }
  }
}
If I create the ingress from a YAML manifest instead, it works:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 8080
What strikes me is the difference between rule (in the kubernetes_ingress resource) and rules (in the YAML). Any ideas?
I was getting the same error.
Try using kubernetes_ingress_v1 instead of kubernetes_ingress: the v1 resource uses networking.k8s.io/v1, whereas the older resource uses the removed v1beta1 Ingress API (note the post ingresses.extensions in the error), which current clusters no longer serve.
I want to connect a React Native application using Socket.io to a server that is inside a Kubernetes Cluster hosted on Google Cloud Platform (GKE).
There seems to be an issue with the Nginx Ingress Controller declaration but I cannot find it.
I have tried adding nginx.org/websocket-services; rewriting my backend code so that it uses a separate NodeJS server (a simple HTTP server) on port 3004, then exposing it via the Ingress Controller under a different path than the one on port 3003; and multiple other suggestions from other SO questions and Github issues.
Information that might be useful:
Cluster master version: 1.15.11-gke.15
I use a Load Balancer managed with Helm (stable/nginx-ingress) with RBAC enabled
All deployments and services are within the namespace gitlab-managed-apps
The error I receive when trying to connect to socket.io is: Error: websocket error
For the front-end part, the code is as follows:
App.js
const socket = io('https://example.com/app-sockets/socketns', {
  reconnect: true,
  secure: true,
  transports: ['websocket', 'polling']
});
I expect the above to connect me to a socket.io namespace called socketns.
The backend code is:
app.js
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);
const redis = require('socket.io-redis');

io.set('transports', ['websocket', 'polling']);

io.adapter(redis({
  host: process.env.NODE_ENV === 'development' ? 'localhost' : 'redis-cluster-ip-service.gitlab-managed-apps.svc.cluster.local',
  port: 6379
}));

io.of('/').adapter.on('error', function(err) { console.log('Redis Adapter error! ', err); });

const nsp = io.of('/socketns');

nsp.on('connection', function(socket) {
  console.log('connected!');
});

server.listen(3003, () => {
  console.log('App listening to 3003');
});
The ingress service is:
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
    nginx.org/websocket-services: "app-sockets-cluster-ip-service"
  name: ingress-service
  namespace: gitlab-managed-apps
spec:
  tls:
  - hosts:
    - example.com
    secretName: letsencrypt-prod
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: app-cms-cluster-ip-service
          servicePort: 3000
        path: /?(.*)
      - backend:
          serviceName: app-users-cluster-ip-service
          servicePort: 3001
        path: /app-users/?(.*)
      - backend:
          serviceName: app-sockets-cluster-ip-service
          servicePort: 3003
        path: /app-sockets/?(.*)
      - backend:
          serviceName: app-sockets-cluster-ip-service
          servicePort: 3003
        path: /app-sockets/socketns/?(.*)
The solution is to remove the nginx.ingress.kubernetes.io/rewrite-target: /$1 annotation.
Here is a working configuration (note that the apiVersion has changed since the question was asked):
Ingress configuration
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  name: ingress-service
  namespace: default
spec:
  tls:
  - hosts:
    - example.com
    secretName: letsencrypt-prod
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: app-sockets-cluster-ip-service
            port:
              number: 3003
        path: /app-sockets/?(.*)
        pathType: Prefix
On the service (Express.js):
app.js
const redisAdapter = require('socket.io-redis');

const io = require('socket.io')(server, {
  path: `${global.NODE_ENV === 'development' ? '' : '/app-sockets'}/sockets/`,
  cors: {
    origin: '*',
    methods: ['GET', 'POST'],
  },
});

io.adapter(redisAdapter({
  host: global.REDIS_HOST,
  port: 6379,
}));

io.of('/').adapter.on('error', err => console.log('Redis Adapter error! ', err));

io.on('connection', () => {
  //...
});
The global.NODE_ENV === 'development' ? '' : '/app-sockets' bit accounts for the path differing between development and production. If you change it here, you must also change it in the snippet below.
In development the service is under http://localhost:3003 (sockets endpoint is http://localhost:3003/sockets).
In production the service is under https://example.com/app-sockets (sockets endpoint is https://example.com/app-sockets/sockets).
On the frontend
connectToWebsocketsService.js
/**
 * Connect to a websockets service
 * @param tokens {Object}
 * @param successCallback {Function}
 * @param failureCallback {Function}
 */
export const connectToWebsocketsService = (tokens, successCallback, failureCallback) => {
  // SOCKETS_URL = NODE_ENV === 'development' ? 'http://localhost:3003' : 'https://example.com/app-sockets'
  const socket = io(`${SOCKETS_URL.replace('/app-sockets', '')}`, {
    path: `${NODE_ENV === 'development' ? '' : '/app-sockets'}/sockets/`,
    reconnect: true,
    secure: true,
    transports: ['polling', 'websocket'], // required
    query: {
      // optional
    },
    auth: {
      ...generateAuthorizationHeaders(tokens), // optional
    },
  });

  socket.on('connect', successCallback(socket));
  socket.on('reconnect', successCallback(socket));
  socket.on('connect_error', failureCallback);
};
Note: I wasn't able to verify this on the project mentioned in the question, but I have on another project, which is hosted on EKS rather than GKE. Feel free to confirm whether this works for you on GKE as well.
Just change the annotation to
nginx.ingress.kubernetes.io/websocket-services: "app-sockets-cluster-ip-service"
instead of
nginx.org/websocket-services: "app-sockets-cluster-ip-service"
Most likely this will resolve your issue.
I'm new to OpenShift and k8s. I'm not sure what the difference is between these two terms: an OpenShift Route vs a k8s Ingress?
Ultimately they are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy etc. In time it was seen as useful to have something like this in Kubernetes, so, using OpenShift's Route as a starting point for what could be done, Ingress was developed for Kubernetes. The Ingress version went for a more generic, rules-based system, so how you specify them looks different, but the intent is to be able to do effectively the same thing.
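To make the comparison concrete, here is a minimal sketch of an OpenShift Route; the host, Service name, and port are placeholders:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route              # placeholder
spec:
  host: app.example.com       # placeholder, like spec.rules[].host in an Ingress
  to:
    kind: Service
    name: my-service          # placeholder: the Service to route to
  port:
    targetPort: 8080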
The following Go code will create a route in OCP.
OCP will treat the Ingress as a Route in the same way.
// build the ingress/route object
func (r *ReconcileMobileSecurityService) buildAppIngress(m *mobilesecurityservicev1alpha1.MobileSecurityService) *v1beta1.Ingress {
    ls := getAppLabels(m.Name)
    hostName := m.Name + "-" + m.Namespace + "." + m.Spec.ClusterHost + ".nip.io"
    ing := &v1beta1.Ingress{
        TypeMeta: v1.TypeMeta{
            APIVersion: "extensions/v1beta1",
            Kind:       "Ingress",
        },
        ObjectMeta: v1.ObjectMeta{
            Name:      m.Name,
            Namespace: m.Namespace,
            Labels:    ls,
        },
        Spec: v1beta1.IngressSpec{
            Backend: &v1beta1.IngressBackend{
                ServiceName: m.Name,
                ServicePort: intstr.FromInt(int(m.Spec.Port)),
            },
            Rules: []v1beta1.IngressRule{
                {
                    Host: hostName,
                    IngressRuleValue: v1beta1.IngressRuleValue{
                        HTTP: &v1beta1.HTTPIngressRuleValue{
                            Paths: []v1beta1.HTTPIngressPath{
                                {
                                    Backend: v1beta1.IngressBackend{
                                        ServiceName: m.Name,
                                        ServicePort: intstr.FromInt(int(m.Spec.Port)),
                                    },
                                    Path: "/",
                                },
                            },
                        },
                    },
                },
            },
        },
    }
    // Set MobileSecurityService instance as the owner and controller
    controllerutil.SetControllerReference(m, ing, r.scheme)
    return ing
}