deploy kubernetes ingress with terraform

I'm trying to deploy a Kubernetes Ingress with Terraform,
as described here (link), plus my own variant:
resource "kubernetes_ingress" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/"
backend {
service_name = kubernetes_service.node.metadata.0.name
service_port = 3000
}
}
}
}
}
}
error:
╷
│ Error: Failed to create Ingress 'default/node' because: the server could not find the requested resource (post ingresses.extensions)
│
│ with kubernetes_ingress.node,
│ on node.tf line 86, in resource "kubernetes_ingress" "node":
│ 86: resource "kubernetes_ingress" "node" {
│
╵
This works when applied with kubectl:
kubectl apply -f file_below.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node
spec:
  ingressClassName: nginx
  rules:
    - host: backend.io
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: node
                port:
                  number: 3000
I need some ideas on how to deploy a Kubernetes Ingress with Terraform.

The issue here is that the YAML example uses the proper API version, networking.k8s.io/v1, which has been available since Kubernetes 1.19; it works because you are probably running a version at least that new. The extensions/v1beta1 API that Ingress used to belong to was removed in favor of networking.k8s.io/v1 in 1.22, as you can read here. Your current Terraform code, however, is still using the old Kubernetes API version for Ingress. You can see that on the left-hand side of the provider documentation menu:
If you look further down in the documentation, you will see networking/v1 and, in the resources section, kubernetes_ingress_v1. Changing your Terraform code to use Ingress from networking.k8s.io/v1, it becomes:
resource "kubernetes_ingress_v1" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/*"
path_type = "ImplementationSpecific"
backend {
service {
name = kubernetes_service.node.metadata.0.name
port {
number = 3000
}
}
}
}
}
}
}
}

Related

kubernetes_ingress kubernetes v2.6.1 - Failed to create Ingress

I am trying to create an Ingress resource with Terraform and receive the following error message:
Error: Failed to create Ingress 'jenkins/jenkins-ingress' because: the
server could not find the requested resource (post ingresses.extensions)

  with kubernetes_ingress.jenkins-ingress,
  on main.tf line 160, in resource "kubernetes_ingress" "jenkins-ingress":
  160: resource "kubernetes_ingress" "jenkins-ingress" {
My terraform resource looks like this:
resource "kubernetes_ingress" "jenkins-ingress" {
metadata {
name = "${var.name}-ingress"
namespace = var.namespace
annotations = {
"ingress.kubernetes.io/rewrite-target" = "/"
"kubernetes.io/ingress.class" = "nginx"
}
}
spec {
rule {
host = "domain.com"
http {
path {
path = "/"
backend {
service_name = var.name
service_port = 8080
}
}
}
}
}
}
If I create the Ingress from a YAML manifest, it works:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins
                port:
                  number: 8080
What strikes me is the difference between rule (in the Terraform kubernetes_ingress resource) and rules in the YAML. Any ideas?
I was getting the same error.
Try using kubernetes_ingress_v1 instead of kubernetes_ingress; it targets networking.k8s.io/v1 instead of networking.k8s.io/v1beta1.
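For reference, a rough conversion of the resource above to kubernetes_ingress_v1 could look like the sketch below (untested; it assumes the same var.name/var.namespace variables and an nginx ingress class, and moves the class annotation to ingress_class_name):

resource "kubernetes_ingress_v1" "jenkins-ingress" {
  metadata {
    name      = "${var.name}-ingress"
    namespace = var.namespace
    annotations = {
      "ingress.kubernetes.io/rewrite-target" = "/"
    }
  }

  spec {
    # Replaces the deprecated kubernetes.io/ingress.class annotation.
    ingress_class_name = "nginx"

    rule {
      host = "domain.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = var.name
              port {
                number = 8080
              }
            }
          }
        }
      }
    }
  }
}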

How to create ingress on K8S Cluster machine using Terraform?

I am new to K8s and Terraform. I installed ingress-nginx on a K8s cluster running on bare metal.
[root@control02 ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
docker-hello-world-svc               NodePort    10.xx.xx.121   <none>        8086:30333/TCP               13d
ingress-nginx-controller             NodePort    10.xx.xx.124   <none>        80:31545/TCP,443:30198/TCP   13d
ingress-nginx-controller-admission   ClusterIP   10.xx.xx.85    <none>        443/TCP                      13d
I created a Deployment, Service, and Ingress and am able to access docker-hello-world-svc from a browser successfully. The Ingress.yaml is given below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: ingress-nginx
spec:
  #ingressClassName: nginx
  rules:
    - host: foo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-hello-world-svc
                port:
                  number: 8086
My requirement is to containerize our PHP-based applications on the K8s cluster.
Is creating an Ingress via Terraform's resource "kubernetes_ingress" "web" the same as ingress.yaml with kubernetes.io/ingress.class, or are they different?
How can I create an Ingress on the K8s cluster machine using Terraform?
For example, when I trigger a job from GitLab, Terraform should create a new kubernetes_ingress resource on the K8s cluster or control-plane machine. Is this possible?
Kindly clarify the queries mentioned above and let me know if my understanding is wrong.
The ingress.class annotation is needed to let the nginx ingress controller understand that it needs to handle this resource.
To create an Ingress with Terraform you can use the following:
resource "kubernetes_ingress" "ingress" {
metadata {
name = "ingress-name"
namespace = "ingress-namespace"
labels = {
app = "some-label-app"
}
annotations = {
"kubernetes.io/ingress.class" : "nginx"
}
}
spec {
rule {
host = "foo.com"
http {
path {
backend {
service_name = "svc"
service_port = "http"
}
}
}
}
}
}
I was able to create the service on an existing K8s cluster (bare metal) using the following code.
The cluster was set up on 192.168.xxx.xxx, on which I created the example service. We need to set the 'host' parameter inside the 'kubernetes' provider block:
provider "kubernetes" {
**host = "https://192.168.xxx.xxx:6443"**
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
}
resource "kubernetes_service" "example" {
metadata {
name = "example"
}
spec {
port {
port = 8585
target_port = 80
}
type = "ClusterIP"
}
}
For resource "kubernetes_ingress", this:
metadata {
  annotations = {
    "kubernetes.io/ingress.class" : "nginx"
  }
}
should now be:
spec {
  ingress_class_name = "nginx"
}
See:
https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
https://github.com/hashicorp/terraform-provider-kubernetes/commit/647bf733b333bc0ccbabdbd937a6f759800a253a
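Putting the two changes together (the _v1 resource and ingress_class_name), a minimal sketch could look like the following; the name, namespace, host, service name, and port here are placeholders rather than values from the question:

resource "kubernetes_ingress_v1" "ingress" {
  metadata {
    name      = "ingress-name"
    namespace = "ingress-namespace"
  }

  spec {
    # Replaces the deprecated kubernetes.io/ingress.class annotation.
    ingress_class_name = "nginx"

    rule {
      host = "foo.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "svc"
              port {
                number = 8086
              }
            }
          }
        }
      }
    }
  }
}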

How to configure EKS ALB with Terraform

I'm having a hard time getting EKS to expose an IP address to the public internet. Do I need to set up the ALB myself, or do you get that for free as part of the EKS cluster? If I have to do it myself, do I need to define it in the Terraform template file or in the Kubernetes object YAML?
Here's my EKS cluster defined in Terraform along with what I think are the required permissions.
// eks.tf
resource "aws_iam_role" "eks_cluster_role" {
  name = "${local.env_name}-eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_kms_key" "eks_key" {
  description             = "EKS KMS Key"
  deletion_window_in_days = 7
  enable_key_rotation     = true

  tags = {
    Environment = local.env_name
    Service     = "EKS"
  }
}

resource "aws_kms_alias" "eks_key_alias" {
  target_key_id = aws_kms_key.eks_key.id
  name          = "alias/eks-kms-key-${local.env_name}"
}

resource "aws_eks_cluster" "eks_cluster" {
  name                      = "${local.env_name}-eks-cluster"
  role_arn                  = aws_iam_role.eks_cluster_role.arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  vpc_config {
    subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  }

  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key.eks_key.arn
    }
  }

  tags = {
    Environment = local.env_name
  }
}

resource "aws_iam_role" "eks_node_group_role" {
  name = "${local.env_name}-eks-node-group"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_eks_node_group" "eks_node_group" {
  instance_types  = var.node_group_instance_types
  node_group_name = "${local.env_name}-eks-node-group"
  node_role_arn   = aws_iam_role.eks_node_group_role.arn
  cluster_name    = aws_eks_cluster.eks_cluster.name
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  // Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  // Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
  ]
}
And here's my kubernetes object yaml:
# hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.9
          ports:
            - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80
I've run terraform apply and the cluster is up and running. I've installed eksctl and kubectl and run kubectl apply -f hello-kubernetes.yaml. The pods, service, and ingress appear to be running fine.
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
hello-kubernetes-6cb7cd595b-25bd9   1/1     Running   0          6h13m
hello-kubernetes-6cb7cd595b-lccdj   1/1     Running   0          6h13m
hello-kubernetes-6cb7cd595b-snwvr   1/1     Running   0          6h13m

$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes   LoadBalancer   172.20.102.37   <pending>     80:32086/TCP   6h15m

$ kubectl get ingresses
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
hello-ingress   <none>   *                 80      3h45m
What am I missing and which file does it belong in?
You need to install the AWS Load Balancer Controller by following the installation instructions. First you create the IAM role and permissions; this can be done with Terraform. Then you apply the Kubernetes YAML that installs the controller into your cluster; this can be done with Helm or kubectl.
You also need to be aware of the subnet tagging that is needed, for example, for creating a public or internal-facing load balancer.
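As an illustration only (not part of the original answer), the Helm piece of that installation could be driven from Terraform roughly as below. It assumes the Helm provider is configured against the cluster, that the IAM/IRSA prerequisites from the install guide already exist, and the service account name used here is an assumption:

# Sketch: install the AWS Load Balancer Controller via the Helm provider.
# Assumes IAM/IRSA prerequisites from the official install guide already exist.
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = aws_eks_cluster.eks_cluster.name
  }

  set {
    name  = "serviceAccount.create"
    value = "false" # assumes the IRSA-annotated service account is created separately
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
}

# Subnet tags the controller looks for when choosing subnets for load balancers:
#   public subnets:  "kubernetes.io/role/elb"          = "1"
#   private subnets: "kubernetes.io/role/internal-elb" = "1"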
Usually the way to go is to put an ALB in front and redirect traffic to the EKS cluster, managing it with the ALB Ingress Controller. This ingress controller acts as the link between the cluster and your ALB; here is the AWS documentation, which is pretty straightforward:
EKS w/ALB
Another option is an NGINX ingress controller with an NLB if the ALB doesn't suit your application's needs, as described in the following article:
NGINX w/NLB
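For illustration (not taken from the linked docs), an Ingress that the AWS Load Balancer Controller would reconcile into an ALB could be declared in Terraform roughly as follows; the service name and port are placeholders:

# Sketch: an Ingress handled by the AWS Load Balancer Controller.
# The annotation keys are the controller's documented ones; names and ports are placeholders.
resource "kubernetes_ingress_v1" "hello" {
  metadata {
    name = "hello-ingress"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip"
    }
  }

  spec {
    ingress_class_name = "alb"

    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "hello-kubernetes"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}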
This happened to me too: after all the setup, I was not able to see the ingress address. The best way to debug this is to check the logs of the ingress controller. You can do this by:
Getting the ingress controller pod name: kubectl get po -n kube-system
Checking the logs for that pod: kubectl logs <po_name> -n kube-system
This will point you to the exact reason why you are not seeing the address.
If you do not find any pod with an ingress-related name, you will have to create the ingress controller first.

Force kubernetes ingress cname format

With Kubernetes, in a multi-tenant environment controlled by RBAC, when creating a new Ingress CNAME, I would like to force the CNAME format to be:
${service}.${namespace}.${cluster}.kube.infra
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ${service}
spec:
  tls:
    - hosts:
        - ${service}.${namespace}.${cluster}.kube.infra
      secretName: conso-elasticsearch-ssl
  rules:
    - host: ${service}.${namespace}.${cluster}.kube.infra
      http:
        paths:
          - path: /
            backend:
              serviceName: ${service}
              servicePort: 9200
Is it possible?
You can do it by writing a validating admission webhook that validates the Ingress and rejects it if the CNAME format is not what you want. A better way is to use Open Policy Agent (OPA) and write a Rego policy. Here is a guide on how to perform policy-driven validation of Ingress using OPA.
package kubernetes.admission

import data.kubernetes.namespaces

operations = {"CREATE", "UPDATE"}

deny[msg] {
    input.request.kind.kind == "Ingress"
    operations[input.request.operation]
    host := input.request.object.spec.rules[_].host
    not fqdn_matches_any(host, valid_ingress_hosts)
    msg := sprintf("invalid ingress host %q", [host])
}

valid_ingress_hosts = {
    # valid hosts
}

fqdn_matches_any(str, patterns) {
    fqdn_matches(str, patterns[_])
}

fqdn_matches(str, pattern) {
    # validation logic
}

fqdn_matches(str, pattern) {
    not contains(pattern, "*")
    str == pattern
}

what's the difference between openshift route and k8s ingress?

I'm new to OpenShift and K8s. I'm not sure what the difference is between these two terms: OpenShift Route vs K8s Ingress?
Ultimately they are intended to achieve the same end. Originally Kubernetes had no such concept, so in OpenShift the concept of a Route was developed, along with the bits for providing a load-balancing proxy, etc. In time it was seen as useful to have something like this in Kubernetes, so, using Route from OpenShift as a starting point for what could be done, Ingress was developed for Kubernetes. For Ingress they went with a more generic, rules-based system, so how you specify them looks different, but the intent is effectively to be able to do the same thing.
The following code implementation will create a Route in OCP.
OCP treats the Ingress as a Route in the same way.
// build the ingress/route object
func (r *ReconcileMobileSecurityService) buildAppIngress(m *mobilesecurityservicev1alpha1.MobileSecurityService) *v1beta1.Ingress {
    ls := getAppLabels(m.Name)
    hostName := m.Name + "-" + m.Namespace + "." + m.Spec.ClusterHost + ".nip.io"
    ing := &v1beta1.Ingress{
        TypeMeta: v1.TypeMeta{
            APIVersion: "extensions/v1beta1",
            Kind:       "Ingress",
        },
        ObjectMeta: v1.ObjectMeta{
            Name:      m.Name,
            Namespace: m.Namespace,
            Labels:    ls,
        },
        Spec: v1beta1.IngressSpec{
            Backend: &v1beta1.IngressBackend{
                ServiceName: m.Name,
                ServicePort: intstr.FromInt(int(m.Spec.Port)),
            },
            Rules: []v1beta1.IngressRule{
                {
                    Host: hostName,
                    IngressRuleValue: v1beta1.IngressRuleValue{
                        HTTP: &v1beta1.HTTPIngressRuleValue{
                            Paths: []v1beta1.HTTPIngressPath{
                                {
                                    Backend: v1beta1.IngressBackend{
                                        ServiceName: m.Name,
                                        ServicePort: intstr.FromInt(int(m.Spec.Port)),
                                    },
                                    Path: "/",
                                },
                            },
                        },
                    },
                },
            },
        },
    }
    // Set MobileSecurityService instance as the owner and controller
    controllerutil.SetControllerReference(m, ing, r.scheme)
    return ing
}