kubernetes_ingress (Kubernetes provider v2.6.1) - Failed to create Ingress

I'm trying to create an Ingress resource with Terraform, but I receive the following error message:
│ Error: Failed to create Ingress 'jenkins/jenkins-ingress' because: the server could not find the requested resource (post ingresses.extensions)
│
│   with kubernetes_ingress.jenkins-ingress,
│   on main.tf line 160, in resource "kubernetes_ingress" "jenkins-ingress":
│  160: resource "kubernetes_ingress" "jenkins-ingress" {
My Terraform resource looks like this:
resource "kubernetes_ingress" "jenkins-ingress" {
metadata {
name = "${var.name}-ingress"
namespace = var.namespace
annotations = {
"ingress.kubernetes.io/rewrite-target" = "/"
"kubernetes.io/ingress.class" = "nginx"
}
}
spec {
rule {
host = "domain.com"
http {
path {
path = "/"
backend {
service_name = var.name
service_port = 8080
}
}
}
}
}
}
If I create the Ingress from a YAML manifest instead, it works:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins
                port:
                  number: 8080
What strikes me is the difference between rule in the Terraform resource (kubernetes_ingress) and rules in the YAML. Any ideas?

I was getting the same error.
Try using kubernetes_ingress_v1 instead of kubernetes_ingress: the v1 resource uses the networking.k8s.io/v1 API, while kubernetes_ingress still posts to the removed beta Ingress API (hence the "post ingresses.extensions" in the error).
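For reference, the resource from the question converted to kubernetes_ingress_v1 might look like the sketch below (based on the provider's v1 schema, which adds path_type and nests the backend service):
resource "kubernetes_ingress_v1" "jenkins-ingress" {
  metadata {
    name      = "${var.name}-ingress"
    namespace = var.namespace
    annotations = {
      "ingress.kubernetes.io/rewrite-target" = "/"
      "kubernetes.io/ingress.class"          = "nginx"
    }
  }
  spec {
    rule {
      host = "domain.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = var.name
              port {
                number = 8080
              }
            }
          }
        }
      }
    }
  }
}
This also explains the rule vs. rules observation: the Terraform schema uses a repeatable singular rule block where the YAML uses a rules list.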

Related

Routing external traffic through a load balancer to an ingress, or through an ingress only, on AKS?

I have an AKS cluster with its LoadBalancer configured (following https://learn.microsoft.com/en-us/azure/aks/internal-lb) so that it gets its IP from a PublicIP (all provisioned with Terraform) and targets the cluster ingress deployed with Helm:
resource "kubernetes_service" "server-loadbalacer" {
metadata {
name = "server-loadbalacer-svc"
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "fixit-resource-group"
}
}
spec {
type = "LoadBalancer"
load_balancer_ip = var.public_ip_address
selector = {
name = "ingress-service"
}
port {
name = "server-port"
protocol = "TCP"
port = 8080
}
}
}
Then with Helm I deploy a Node.js server listening on port 3000, a MongoDB replica set, and a Neo4j cluster.
I set up a Service for the server, receiving on port 3000 and targeting port 3000:
apiVersion: v1
kind: Service
metadata:
  name: server-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: fixit-server-pod
  ports:
    - name: server-clusterip-service
      protocol: TCP
      port: 3000 # service port
      targetPort: 3000 # port on which the app is listening
Then the Ingress routes traffic to the correct service, e.g. the server:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: ingress-service
spec:
  rules:
    - host: fixit.westeurope.cloudapp.azure.com # dns from Azure PublicIP
      http:
        paths:
          - path: '/server/*'
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 3000
          - path: '/neo4j/*'
            pathType: Prefix
            backend:
              service:
                name: fixit-cluster
                port:
                  number: 7687
                  number: 7474
                  number: 7473
          - path: '/neo4j-admin/*'
            pathType: Prefix
            backend:
              service:
                name: fixit-cluster-admin
                port:
                  number: 6362
                  number: 7687
                  number: 7474
                  number: 7473
I'm expecting to go to http://fixit.westeurope.cloudapp.azure.com:8080/server/api and see the server's response for the /api endpoint, but the request fails with a browser timeout.
Pods and services deployed on the cluster are
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get pod
NAME                                           READY   STATUS    RESTARTS   AGE
fixit-cluster-0                                1/1     Running   0          27m
fixit-server-868f657b64-hvmxq                  1/1     Running   0          27m
mongo-rs-0                                     2/2     Running   0          27m
mongodb-kubernetes-operator-7c5666c957-sscsf   1/1     Running   0          4h35m
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get svc
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                               AGE
fixit-cluster              ClusterIP      10.0.230.247   <none>         7687/TCP,7474/TCP,7473/TCP            27m
fixit-cluster-admin        ClusterIP      10.0.132.24    <none>         6362/TCP,7687/TCP,7474/TCP,7473/TCP   27m
kubernetes                 ClusterIP      10.0.0.1       <none>         443/TCP                               4h44m
mongo-rs-svc               ClusterIP      None           <none>         27017/TCP                             27m
server-clusterip-service   ClusterIP      10.0.242.65    <none>         3000/TCP                              27m
server-loadbalacer-svc     LoadBalancer   10.0.149.160   52.174.18.27   8080:32660/TCP                        4h41m
The ingress is deployed as
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe ingress ingress-service
Name:             ingress-service
Labels:           app.kubernetes.io/managed-by=Helm
                  name=ingress-service
Namespace:        default
Address:
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                                 Path            Backends
  ----                                 ----            --------
  fixit.westeurope.cloudapp.azure.com
                                       /server/*       server-clusterip-service:3000 (<none>)
                                       /neo4j/*        fixit-cluster:7473 (<none>)
                                       /neo4j-admin/*  fixit-cluster-admin:7473 (<none>)
Annotations:      kubernetes.io/ingress.class: nginx
                  meta.helm.sh/release-name: fixit-cluster
                  meta.helm.sh/release-namespace: default
Events:           <none>
and the server service is
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc server-clusterip-service
Name:              server-clusterip-service
Namespace:         default
Labels:            app.kubernetes.io/managed-by=Helm
Annotations:       meta.helm.sh/release-name: fixit-cluster
                   meta.helm.sh/release-namespace: default
Selector:          app=fixit-server-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.160.206
IPs:               10.0.160.206
Port:              server-clusterip-service  3000/TCP
TargetPort:        3000/TCP
Endpoints:         10.244.0.15:3000
Session Affinity:  None
Events:            <none>
I tried setting the paths with and without /* but it won't connect in either case.
Is this setup even the right way to route external traffic to the cluster, or should I use just the Ingress? I see that this setup was given as the solution (1st answer) to the question Kubernetes Load balancer without Label Selector, and though it looks like we're in the same situation, I'm on AKS, and the Azure docs https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli are making me doubt my current setup.
If this setup isn't nonsense, can you spot what I'm configuring wrongly?
Many many thanks for the help.
UPDATE
As mentioned at https://learnk8s.io/terraform-aks, the option http_application_routing_enabled = true at cluster creation installs add-ons:
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get pods -n kube-system | grep addon
addon-http-application-routing-external-dns-5d48bdffc6-q98nx       1/1   Running   0     26m
addon-http-application-routing-nginx-ingress-controller-5bcrf87    1/1   Running   0     26m
So the Ingress should point to that controller in its annotations and not specify a host. I changed the Ingress manifest to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    # kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.class: addon-http-application-routing
    # nginx.ingress.kubernetes.io/rewrite-target: /
  labels:
    name: ingress-service
spec:
  rules:
    # - host: fixit.westeurope.cloudapp.azure.com # server.com
    - http:
        paths:
          - path: '/server/*' # service
          # - path: '/server' # service doesn't get an IP address
          # - path: '/*'
          # - path: '/'
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 3000
          # - path: '/neo4j/*'
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: fixit-cluster
          #       port:
          #         number: 7687
          #         number: 7474
          #         number: 7473
          # - path: '/neo4j-admin/*'
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: fixit-cluster-admin
          #       port:
          #         number: 6362
          #         number: 7687
          #         number: 7474
          #         number: 7473
and its output is now
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get ingress
NAME              CLASS    HOSTS   ADDRESS          PORTS   AGE
ingress-service   <none>   *       108.143.71.248   80      7s
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe ingress ingress-service
Name:             ingress-service
Labels:           app.kubernetes.io/managed-by=Helm
                  name=ingress-service
Namespace:        default
Address:          108.143.71.248
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path       Backends
  ----        ----       --------
  *
              /server/*  server-clusterip-service:3000 (10.244.0.21:3000)
Annotations:  kubernetes.io/ingress.class: addon-http-application-routing
              meta.helm.sh/release-name: fixit-cluster
              meta.helm.sh/release-namespace: default
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    20s (x2 over 27s)  nginx-ingress-controller  Scheduled for sync
Now going to http://108.143.71.248/server/api in the browser shows an Nginx 404 page.
I finally found the problem: it was my setup. I was using the default ingress controller and load balancer that get created when you set http_application_routing_enabled = true at cluster creation, which the docs discourage for production use (https://learn.microsoft.com/en-us/azure/aks/http-application-routing). The proper implementation is to install an ingress controller (https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli), which hooks up to the internal load balancer, so there is no need to create one yourself. The ingress controller will accept an IP address for the load balancer, but you have to create the PublicIP in the node resource group, because it is going to look for it there and not in the cluster resource group; see the difference between the two here: https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks.
So the working configuration is now:
main
terraform {
  required_version = ">=1.1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }
}

provider "azurerm" {
  features {
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_subscription_tenant_id
  client_id       = var.service_principal_appid
  client_secret   = var.service_principal_password
}

provider "kubernetes" {
  host                   = module.cluster.host
  client_certificate     = base64decode(module.cluster.client_certificate)
  client_key             = base64decode(module.cluster.client_key)
  cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
}

provider "helm" {
  kubernetes {
    host                   = module.cluster.host
    client_certificate     = base64decode(module.cluster.client_certificate)
    client_key             = base64decode(module.cluster.client_key)
    cluster_ca_certificate = base64decode(module.cluster.cluster_ca_certificate)
  }
}

module "cluster" {
  source                   = "./modules/cluster"
  location                 = var.location
  vm_size                  = var.vm_size
  resource_group_name      = var.resource_group_name
  node_resource_group_name = var.node_resource_group_name
  kubernetes_version       = var.kubernetes_version
  ssh_key                  = var.ssh_key
  sp_client_id             = var.service_principal_appid
  sp_client_secret         = var.service_principal_password
}

module "ingress-controller" {
  source            = "./modules/ingress-controller"
  public_ip_address = module.cluster.public_ip_address
  # depends_on takes whole module references, not individual outputs
  depends_on = [
    module.cluster
  ]
}
cluster
resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "test"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
name = "server_cluster"
### choose the resource goup to use for the cluster
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
### decide the name of the cluster "node" resource group, if unset will be named automatically
node_resource_group = var.node_resource_group_name
dns_prefix = "fixit"
kubernetes_version = var.kubernetes_version
# sku_tier = "Paid"
default_node_pool {
name = "default"
node_count = 1
min_count = 1
max_count = 3
vm_size = var.vm_size
type = "VirtualMachineScaleSets"
enable_auto_scaling = true
enable_host_encryption = false
# os_disk_size_gb = 30
}
service_principal {
client_id = var.sp_client_id
client_secret = var.sp_client_secret
}
tags = {
Environment = "Production"
}
linux_profile {
admin_username = "azureuser"
ssh_key {
key_data = var.ssh_key
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "basic"
}
http_application_routing_enabled = false
depends_on = [
azurerm_resource_group.resource_group
]
}
resource "azurerm_public_ip" "public-ip" {
name = "fixit-public-ip"
location = var.location
# resource_group_name = var.resource_group_name
resource_group_name = var.node_resource_group_name
allocation_method = "Static"
domain_name_label = "fixit"
# sku = "Standard"
depends_on = [
azurerm_kubernetes_cluster.server_cluster
]
}
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
ingress service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  # namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
spec:
  ingressClassName: nginx
  rules:
    # - host: fixit.westeurope.cloudapp.azure.com # dns from Azure PublicIP
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
...
other services omitted
Hope this helps to get the setup right.
Cheers.

Share resources in different Helm releases using Terraform

Deploying a Kubernetes Deployment, Service and Ingress in EKS using Helm. Terraform is used to deploy the Helm releases via a custom chart.
Deploying two Helm releases using Terraform:
Release 1 (game-app): consists of Deployment and Service manifests
Release 2 (game-app-ingress): consists of the Ingress manifest. The Ingress resource points to the Kubernetes Service created in Release 1.
Problem: if I use the Helm install CLI command to deploy the Kubernetes manifests in two different releases, everything works fine, with no errors. But if I use Terraform to deploy the two Helm releases, I get the error below:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Service "game-app-service" in namespace "ns-fargate-app" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "game-app-ingress": current value is "game-app"
│
│ with module.kubernetes_app_helm.helm_release.game_app_ingress,
│ on modules/kubernetes-app-helm/main.tf line 42, in resource "helm_release" "game_app_ingress":
│ 42: resource "helm_release" "game_app_ingress" {
I tried a lot of options to fix the issue, but nothing worked. I also followed the solutions mentioned in these links: https://github.com/helm/helm/pull/7649 and helm not creating the resources.
What I am not able to understand is why Terraform is unable to deploy the second Helm release and share the Kubernetes Service resource created in the other Helm release, whereas the Helm CLI gives no issues.
Any help to resolve the issue would be appreciated.
Kubernetes Manifests:
Used as Helm Release 1 from Chart: Helm release name - game-app
These two annotations are added by Helm while deploying through Terraform; the chart template does not generate them:
meta.helm.sh/release-name: <helm release name>
meta.helm.sh/release-namespace: <namespace name>
---
# Source: game-app-full/templates/game-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: ns-fargate-app
  name: sample-game-app
  annotations:
    meta.helm.sh/release-name: game-app-rel
    meta.helm.sh/release-namespace: ns-fargate-app
  labels:
    helm.sh/chart: game-app-full-0.1.0
    app.kubernetes.io/instance: dev-rel
    app.kubernetes.io/name: game-app-full
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "1.16.0"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/instance: dev-rel
    app.kubernetes.io/name: game-app-full
    app.kubernetes.io/managed-by: Helm
---
# Source: game-app-full/templates/game-app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-game-app
  namespace: ns-fargate-app
  annotations:
    meta.helm.sh/release-name: game-app-rel
    meta.helm.sh/release-namespace: ns-fargate-app
  labels:
    helm.sh/chart: game-app-full-0.1.0
    app.kubernetes.io/instance: dev-rel
    app.kubernetes.io/name: game-app-full
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "1.16.0"
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: dev-rel
      app.kubernetes.io/name: game-app-full
      app.kubernetes.io/managed-by: Helm
  replicas: 3
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: dev-rel
        app.kubernetes.io/name: game-app-full
        app.kubernetes.io/managed-by: Helm
    spec:
      containers:
        - image: "public.ecr.aws/l6m2t8p7/docker-2048:latest"
          imagePullPolicy: Always
          name: game-app-full
          ports:
            - containerPort: 80
Used as Helm Release 2 from Chart: Helm release name - game-app-ingress
---
# Source: game-app-full/templates/game-app-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ns-fargate-app
  name: sample-game-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
    meta.helm.sh/release-name: game-app-ingress-rel
    meta.helm.sh/release-namespace: ns-fargate-app
  labels:
    helm.sh/chart: game-app-full-0.1.0
    app.kubernetes.io/instance: dev-rel
    app.kubernetes.io/name: game-app-full
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "1.16.0"
spec:
  rules:
    - http:
        paths:
          - path: /foo/
            pathType: Prefix
            backend:
              service:
                name: sample-game-app
                port:
                  number: 80
          - path: /bar/
            pathType: Prefix
            backend:
              service:
                name: sample-game-app
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sample-game-app
                port:
                  number: 80
Terraform Module for Helm Release:
variable "game_app_chart_version" {
default = "0.1.0"
}
variable "game_app_ingress_chart_version" {
default = "0.1.0"
}
variable "game_app_full_chart_version" {
default = "0.1.0"
}
variable "app_namespace" {
description = "Kubernetes namespace name in which the application will be deployed "
type = string
default = null
}
locals {
application_helm_repo = "https://git4suvendu.github.io/application-helm-charts/"
game_app_chart_name = "game-app"
game_app_chart_version = var.game_app_chart_version
game_app_release_name = "game-app-rel"
game_app_ingress_chart_name = "game-app-ingress"
game_app_ingress_chart_version = var.game_app_ingress_chart_version
game_app_ingress_release_name = "game-app-ingress-rel"
game_app_full_chart_name = "game-app-full"
game_app_full_chart_version = var.game_app_full_chart_version
game_app_full_release_name = "game-app-full-rel"
}
##### Deploying application with Kubernetes Manifests (Deployment, Service). NO Ingress will be deployed ################
resource "helm_release" "game_app" {
name = local.game_app_release_name
repository = local.application_helm_repo
chart = local.game_app_chart_name
version = local.game_app_chart_version
namespace = var.app_namespace
create_namespace = true
atomic = true
timeout = 900
cleanup_on_fail = true
force_update = true
recreate_pods = true
set {
name = "replicaCount"
value = 6
type = "auto"
}
set {
name = "image.repository"
value = "public.ecr.aws/l6m2t8p7/docker-2048"
type = "string"
}
set {
name = "image.tag"
value = "latest"
type = "string"
}
set {
name = "namespace.enabled"
value = "false"
type = "auto"
}
set {
name = "fullnameOverride"
value = "sample-game-app"
type = "string"
}
}
##### Deploying application with the Kubernetes Ingress manifest only #####
resource "helm_release" "game_app_ingress" {
name = local.game_app_ingress_release_name
repository = local.application_helm_repo
chart = local.game_app_chart_name
version = local.game_app_chart_version
namespace = var.app_namespace
create_namespace = true
atomic = true
timeout = 900
cleanup_on_fail = true
force_update = true
recreate_pods = true
set {
name = "fullnameOverride"
value = "sample-game-app"
type = "string"
}
depends_on = [helm_release.game_app]
}

deploy kubernetes ingress with terraform

I'm trying to deploy a Kubernetes Ingress with Terraform, as described in this link, with my own variant:
resource "kubernetes_ingress" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/"
backend {
service_name = kubernetes_service.node.metadata.0.name
service_port = 3000
}
}
}
}
}
}
error:
╷
│ Error: Failed to create Ingress 'default/node' because: the server could not find the requested resource (post ingresses.extensions)
│
│ with kubernetes_ingress.node,
│ on node.tf line 86, in resource "kubernetes_ingress" "node":
│ 86: resource "kubernetes_ingress" "node" {
│
╵
it works:
kubectl apply -f file_below.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node
spec:
  ingressClassName: nginx
  rules:
    - host: backend.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: node
                port:
                  number: 3000
I need some ideas on how to deploy a Kubernetes Ingress with Terraform.
The issue here is that the YAML example is using the proper API version, i.e., networking.k8s.io/v1, hence it works: that API is available since Kubernetes 1.19, and you probably have a newer version than that. The extensions/v1beta1 API that Ingress was previously part of was removed in favor of networking.k8s.io/v1 in 1.22, as you can read here. Your current Terraform code is still using the old Kubernetes API version for Ingress; you can see that on the left-hand side of the provider documentation menu.
If you look further down in the documentation, you will see networking/v1 and, in the resource section, kubernetes_ingress_v1. Changing your Terraform code to use the Ingress from networking.k8s.io/v1, it becomes:
resource "kubernetes_ingress_v1" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/*"
path_type = "ImplementationSpecific"
backend {
service {
name = kubernetes_service.node.metadata.0.name
port {
number = 3000
}
}
}
}
}
}
}
}
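Note that the example references a kubernetes_service named node, defined elsewhere in the questioner's configuration; a minimal sketch of such a service (the selector label and ports are assumptions based on the question) could be:
resource "kubernetes_service" "node" {
  metadata {
    name = "node"
  }
  spec {
    # selector label is an assumption; match it to your Deployment's pod labels
    selector = {
      app = "node"
    }
    port {
      port        = 3000
      target_port = 3000
    }
  }
}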

Setting up Kubernetes Ingress proxy-body-size based on request method

I've been trying to set up a max body size in the Ingress controller based on the HTTP method of a given path.
Basically the POST method should allow 3m as max size and all the other methods should allow 1m.
Right now my main idea was to do something like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-custom-service
  namespace: development
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    kubernetes.io/ingress.class: "nginx-dev"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      internal;
      rewrite ^ $original_uri break;
    nginx.ingress.kubernetes.io/server-snippet: |
      location /api/v1/my-endpoint {
        if ( $request_method = POST) {
          set $target_destination '/_post';
          client_max_body_size 3M;
        }
        if ( $request_method != POST) {
          set $target_destination '/_not_post';
          client_max_body_size 1M;
        }
        set $original_uri $uri;
        rewrite ^ $target_destination last;
      }
spec:
  tls:
  rules:
    - host: my-host.com
      http:
        paths:
          - path: /_post
            backend:
              serviceName: my-service
              servicePort: 8080
          - path: /_not_post
            backend:
              serviceName: my-service
              servicePort: 8080
But then I'm getting an error in the pod.
Is there any way I can correctly set up the max body size via the ingress controller?
Try changing your annotations to use the configuration-snippet:
nginx.ingress.kubernetes.io/configuration-snippet: |
  location /upload-path {
    client_max_body_size 8M;
  }
Read more at: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet
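As an aside, if a uniform limit per Ingress is enough (rather than per HTTP method), ingress-nginx also has a dedicated nginx.ingress.kubernetes.io/proxy-body-size annotation. A sketch of attaching it via the kubernetes_ingress_v1 Terraform resource used elsewhere on this page (names, namespace and size are illustrative):
resource "kubernetes_ingress_v1" "custom_service" {
  metadata {
    name      = "ingress-custom-service"
    namespace = "development"
    annotations = {
      "kubernetes.io/ingress.class" = "nginx-dev"
      # dedicated ingress-nginx annotation backing client_max_body_size
      "nginx.ingress.kubernetes.io/proxy-body-size" = "3m"
    }
  }
  spec {
    rule {
      host = "my-host.com"
      http {
        path {
          path      = "/api/v1/my-endpoint"
          path_type = "Prefix"
          backend {
            service {
              name = "my-service"
              port {
                number = 8080
              }
            }
          }
        }
      }
    }
  }
}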

Force kubernetes ingress cname format

With Kubernetes, in a multi-tenant environment controlled by RBAC, when creating a new Ingress CNAME I would like to force the CNAME format to be:
${service}.${namespace}.${cluster}.kube.infra
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${service}
spec:
  tls:
    - hosts:
        - ${service}.${namespace}.${cluster}.kube.infra
      secretName: conso-elasticsearch-ssl
  rules:
    - host: ${service}.${namespace}.${cluster}.kube.infra
      http:
        paths:
          - path: /
            backend:
              serviceName: ${service}
              servicePort: 9200
Is it possible?
You can do it by writing a validating admission webhook which validates the Ingress manifest and rejects it if the CNAME format is not the way you want. A better way is to use Open Policy Agent (OPA) and write a Rego policy. Here is a guide on how to perform policy-driven validation of Ingress using OPA:
package kubernetes.admission

import data.kubernetes.namespaces

operations = {"CREATE", "UPDATE"}

deny[msg] {
    input.request.kind.kind == "Ingress"
    operations[input.request.operation]
    host := input.request.object.spec.rules[_].host
    not fqdn_matches_any(host, valid_ingress_hosts)
    msg := sprintf("invalid ingress host %q", [host])
}

valid_ingress_hosts = {
    # valid hosts
}

fqdn_matches_any(str, patterns) {
    fqdn_matches(str, patterns[_])
}

fqdn_matches(str, pattern) {
    # validation logic
}

fqdn_matches(str, pattern) {
    not contains(pattern, "*")
    str == pattern
}
}