How to create ingress on K8S Cluster machine using Terraform?

I am new to K8S and Terraform. I installed ingress-nginx on a K8S cluster running on bare metal.
[root@control02 ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
docker-hello-world-svc               NodePort    10.xx.xx.121   <none>        8086:30333/TCP               13d
ingress-nginx-controller             NodePort    10.xx.xx.124   <none>        80:31545/TCP,443:30198/TCP   13d
ingress-nginx-controller-admission   ClusterIP   10.xx.xx.85    <none>        443/TCP                      13d
I created a Deployment, Service and Ingress and am able to access docker-hello-world-svc from the browser successfully. The Ingress.yaml is given below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: ingress-nginx
spec:
  #ingressClassName : nginx
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8086
My requirement is to containerize our PHP-based applications on the K8S cluster.
Is creating an ingress via Terraform's resource "kubernetes_ingress" "web" the same as the kubernetes.io/ingress.class annotation in ingress.yaml, or are they different?
How can I create an 'ingress' on the K8S cluster machine using Terraform?
For example, when I trigger a job from GitLab, Terraform should create a new resource "kubernetes_ingress" on the K8S cluster or control-plane machine. Is this possible?
Kindly clarify the queries mentioned above and let me know if my understanding is wrong.

The ingress.class annotation is needed to let the nginx ingress controller understand that it needs to handle this resource.
To create an ingress with Terraform you can use the following:
resource "kubernetes_ingress" "ingress" {
metadata {
name = "ingress-name"
namespace = "ingress-namespace"
labels = {
app = "some-label-app"
}
annotations = {
"kubernetes.io/ingress.class" : "nginx"
}
}
spec {
rule {
host = "foo.com"
http {
path {
backend {
service_name = "svc"
service_port = "http"
}
}
}
}
}
}

I was able to create the service on an existing K8S cluster (bare metal) using the following code.
The K8S cluster was set up on 192.168.xxx.xxx, on which I created a service named example. We need to mention the 'host' parameter inside the 'kubernetes' provider block.
provider "kubernetes" {
**host = "https://192.168.xxx.xxx:6443"**
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
}
resource "kubernetes_service" "example" {
metadata {
name = "example"
}
spec {
port {
port = 8585
target_port = 80
}
type = "ClusterIP"
}
}
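For completeness, a minimal sketch of how the three credential variables referenced above might be declared; the variable names match the provider block, and the base64-encoded values can be copied from an existing kubeconfig (e.g. kubectl config view --raw), where they are already stored base64-encoded:
# Hypothetical declarations for the credentials consumed by the provider block above
variable "cluster_ca_certificate" {
  type        = string
  description = "Base64-encoded cluster CA certificate (certificate-authority-data from kubeconfig)"
  sensitive   = true
}
variable "client_certificate" {
  type        = string
  description = "Base64-encoded client certificate (client-certificate-data from kubeconfig)"
  sensitive   = true
}
variable "client_key" {
  type        = string
  description = "Base64-encoded client key (client-key-data from kubeconfig)"
  sensitive   = true
}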

For
resource "kubernetes_ingress"
this:
metadata {
  annotations = {
    "kubernetes.io/ingress.class" : "nginx"
  }
}
should now be:
spec {
  ingress_class_name = "nginx"
}
see,
https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
https://github.com/hashicorp/terraform-provider-kubernetes/commit/647bf733b333bc0ccbabdbd937a6f759800a253a
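Putting both changes together, a minimal sketch of the first answer's example migrated to the kubernetes_ingress_v1 resource with ingress_class_name (the names, namespace and port are placeholders taken from the examples above, not a verified configuration):
resource "kubernetes_ingress_v1" "ingress" {
  metadata {
    name      = "ingress-name"
    namespace = "ingress-namespace"
  }
  spec {
    # Replaces the deprecated kubernetes.io/ingress.class annotation
    ingress_class_name = "nginx"
    rule {
      host = "foo.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "docker-hello-world-svc"
              port {
                number = 8086
              }
            }
          }
        }
      }
    }
  }
}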


Routing external traffic through a load balancer to an ingress or through an ingress only on aks?

I have an AKS cluster with its LoadBalancer configured (following https://learn.microsoft.com/en-us/azure/aks/internal-lb ) so that it gets the IP from a PublicIP (all provisioned with Terraform) and targets the cluster ingress deployed with Helm.
resource "kubernetes_service" "server-loadbalacer" {
metadata {
name = "server-loadbalacer-svc"
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "fixit-resource-group"
}
}
spec {
type = "LoadBalancer"
load_balancer_ip = var.public_ip_address
selector = {
name = "ingress-service"
}
port {
name = "server-port"
protocol = "TCP"
port = 8080
}
}
}
Then with Helm I deploy a Node.js server listening on port 3000, a MongoDB replica set, and a Neo4j cluster.
I set up a service for the server receiving on port 3000 and targeting port 3000.
apiVersion: v1
kind: Service
metadata:
  name: server-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: fixit-server-pod
  ports:
    - name: server-clusterip-service
      protocol: TCP
      port: 3000 # service port
      targetPort: 3000 # port on which the app is listening
Then the Ingress redirects traffic to the correct service, e.g. the server:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: ingress-service
spec:
  rules:
  - host: fixit.westeurope.cloudapp.azure.com # dns from Azure PublicIP
    http:
      paths:
      - path: '/server/*'
        pathType: Prefix
        backend:
          service:
            name: server-clusterip-service
            port:
              number: 3000
      - path: '/neo4j/*'
        pathType: Prefix
        backend:
          service:
            name: fixit-cluster
            port:
              number: 7687
              number: 7474
              number: 7473
      - path: '/neo4j-admin/*'
        pathType: Prefix
        backend:
          service:
            name: fixit-cluster-admin
            port:
              number: 6362
              number: 7687
              number: 7474
              number: 7473
I'm expecting to go to http://fixit.westeurope.cloudapp.azure.com:8080/server/api and see the server's response for the endpoint /api, but it fails with a browser timeout.
Pods and services deployed on the cluster are
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get pod
NAME                                           READY   STATUS    RESTARTS   AGE
fixit-cluster-0                                1/1     Running   0          27m
fixit-server-868f657b64-hvmxq                  1/1     Running   0          27m
mongo-rs-0                                     2/2     Running   0          27m
mongodb-kubernetes-operator-7c5666c957-sscsf   1/1     Running   0          4h35m
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get svc
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                               AGE
fixit-cluster              ClusterIP      10.0.230.247   <none>         7687/TCP,7474/TCP,7473/TCP            27m
fixit-cluster-admin        ClusterIP      10.0.132.24    <none>         6362/TCP,7687/TCP,7474/TCP,7473/TCP   27m
kubernetes                 ClusterIP      10.0.0.1       <none>         443/TCP                               4h44m
mongo-rs-svc               ClusterIP      None           <none>         27017/TCP                             27m
server-clusterip-service   ClusterIP      10.0.242.65    <none>         3000/TCP                              27m
server-loadbalacer-svc     LoadBalancer   10.0.149.160   52.174.18.27   8080:32660/TCP                        4h41m
The ingress is deployed as
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe ingress ingress-service
Name:             ingress-service
Labels:           app.kubernetes.io/managed-by=Helm
                  name=ingress-service
Namespace:        default
Address:
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                                 Path            Backends
  ----                                 ----            --------
  fixit.westeurope.cloudapp.azure.com
                                       /server/*       server-clusterip-service:3000 (<none>)
                                       /neo4j/*        fixit-cluster:7473 (<none>)
                                       /neo4j-admin/*  fixit-cluster-admin:7473 (<none>)
Annotations:      kubernetes.io/ingress.class: nginx
                  meta.helm.sh/release-name: fixit-cluster
                  meta.helm.sh/release-namespace: default
Events:           <none>
and the server service is
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc server-clusterip-service
Name:              server-clusterip-service
Namespace:         default
Labels:            app.kubernetes.io/managed-by=Helm
Annotations:       meta.helm.sh/release-name: fixit-cluster
                   meta.helm.sh/release-namespace: default
Selector:          app=fixit-server-pod
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.0.160.206
IPs:               10.0.160.206
Port:              server-clusterip-service  3000/TCP
TargetPort:        3000/TCP
Endpoints:         10.244.0.15:3000
Session Affinity:  None
Events:            <none>
I tried setting the paths with and without /* but it won't connect in either case.
Is this setup even the right way to route external traffic to the cluster, or should I use just the ingress? I see that this setup has been given as the solution (1st answer) to this question Kubernetes Load balancer without Label Selector, and though it looks like we're in the same situation, I'm on AKS, and the Azure docs https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli are making me have doubts about my current setup.
Can you spot what I'm setting up wrongly, if this setup is not nonsense?
Many thanks for the help.
UPDATE
As mentioned here https://learnk8s.io/terraform-aks, the option http_application_routing_enabled = true at cluster creation installs the add-ons:
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get pods -n kube-system | grep addon
addon-http-application-routing-external-dns-5d48bdffc6-q98nx      1/1     Running   0          26m
addon-http-application-routing-nginx-ingress-controller-5bcrf87   1/1     Running   0          26m
so the Ingress should point to that controller in its annotations and not specify a host. I therefore changed the Ingress to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    # kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.class: addon-http-application-routing
    # nginx.ingress.kubernetes.io/rewrite-target: /
  labels:
    name: ingress-service
spec:
  rules:
    # - host: fixit.westeurope.cloudapp.azure.com #server.com
    - http:
        paths:
          - path: '/server/*' # service
          # - path: '/server' # service doesn't get a IPaddress
          # - path: '/*'
          # - path: '/'
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 3000
          # - path: '/neo4j/*'
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: fixit-cluster
          #       port:
          #         number: 7687
          #         number: 7474
          #         number: 7473
          # - path: '/neo4j-admin/*'
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: fixit-cluster-admin
          #       port:
          #         number: 6362
          #         number: 7687
          #         number: 7474
          #         number: 7473
and its output is now
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get ingress
NAME              CLASS    HOSTS   ADDRESS          PORTS   AGE
ingress-service   <none>   *       108.143.71.248   80      7s
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe ingress ingress-service
Name:             ingress-service
Labels:           app.kubernetes.io/managed-by=Helm
                  name=ingress-service
Namespace:        default
Address:          108.143.71.248
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path       Backends
  ----        ----       --------
  *
              /server/*  server-clusterip-service:3000 (10.244.0.21:3000)
Annotations:  kubernetes.io/ingress.class: addon-http-application-routing
              meta.helm.sh/release-name: fixit-cluster
              meta.helm.sh/release-namespace: default
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    20s (x2 over 27s)  nginx-ingress-controller  Scheduled for sync
Now going to http://108.143.71.248/server/api in the browser shows an Nginx 404 page.
I finally found the problem: it was my setup. I was using the default ingress controller and load balancer that get created when you set the option http_application_routing_enabled = true at cluster creation, which the docs discourage for production (https://learn.microsoft.com/en-us/azure/aks/http-application-routing). The proper implementation is to install an ingress controller (https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli), which hooks up to the internal load balancer, so there is no need to create one yourself. The ingress controller will accept an IP address for the load balancer, but you have to create the PublicIP in the node resource group, because that is where it is going to look for it, not in the cluster's resource group; see the difference between the two here: https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks.
So the working configuration is now:
main
terraform {
  required_version = ">=1.1.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0.2"
    }
  }
}
provider "azurerm" {
  features {
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_subscription_tenant_id
  client_id       = var.service_principal_appid
  client_secret   = var.service_principal_password
}
provider "kubernetes" {
  host                   = "${module.cluster.host}"
  client_certificate     = "${base64decode(module.cluster.client_certificate)}"
  client_key             = "${base64decode(module.cluster.client_key)}"
  cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
}
provider "helm" {
  kubernetes {
    host                   = "${module.cluster.host}"
    client_certificate     = "${base64decode(module.cluster.client_certificate)}"
    client_key             = "${base64decode(module.cluster.client_key)}"
    cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
  }
}
module "cluster" {
  source                   = "./modules/cluster"
  location                 = var.location
  vm_size                  = var.vm_size
  resource_group_name      = var.resource_group_name
  node_resource_group_name = var.node_resource_group_name
  kubernetes_version       = var.kubernetes_version
  ssh_key                  = var.ssh_key
  sp_client_id             = var.service_principal_appid
  sp_client_secret         = var.service_principal_password
}
module "ingress-controller" {
  source            = "./modules/ingress-controller"
  public_ip_address = module.cluster.public_ip_address
  depends_on = [
    module.cluster.public_ip_address
  ]
}
cluster
resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "test"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
name = "server_cluster"
### choose the resource goup to use for the cluster
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
### decide the name of the cluster "node" resource group, if unset will be named automatically
node_resource_group = var.node_resource_group_name
dns_prefix = "fixit"
kubernetes_version = var.kubernetes_version
# sku_tier = "Paid"
default_node_pool {
name = "default"
node_count = 1
min_count = 1
max_count = 3
vm_size = var.vm_size
type = "VirtualMachineScaleSets"
enable_auto_scaling = true
enable_host_encryption = false
# os_disk_size_gb = 30
}
service_principal {
client_id = var.sp_client_id
client_secret = var.sp_client_secret
}
tags = {
Environment = "Production"
}
linux_profile {
admin_username = "azureuser"
ssh_key {
key_data = var.ssh_key
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "basic"
}
http_application_routing_enabled = false
depends_on = [
azurerm_resource_group.resource_group
]
}
resource "azurerm_public_ip" "public-ip" {
name = "fixit-public-ip"
location = var.location
# resource_group_name = var.resource_group_name
resource_group_name = var.node_resource_group_name
allocation_method = "Static"
domain_name_label = "fixit"
# sku = "Standard"
depends_on = [
azurerm_kubernetes_cluster.server_cluster
]
}
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
ingress service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  # namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
spec:
  ingressClassName: nginx
  rules:
    # - host: fixit.westeurope.cloudapp.azure.com #dns from Azure PublicIP
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
...
other services omitted
Hope this can help getting the setup right.
Cheers.

deploy kubernetes ingress with terraform

I'm trying to deploy a Kubernetes ingress with Terraform,
as described here (link), with my own variant:
resource "kubernetes_ingress" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/"
backend {
service_name = kubernetes_service.node.metadata.0.name
service_port = 3000
}
}
}
}
}
}
error:
╷
│ Error: Failed to create Ingress 'default/node' because: the server could not find the requested resource (post ingresses.extensions)
│
│ with kubernetes_ingress.node,
│ on node.tf line 86, in resource "kubernetes_ingress" "node":
│ 86: resource "kubernetes_ingress" "node" {
│
╵
it works:
kubectl apply -f file_below.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node
spec:
  ingressClassName: nginx
  rules:
  - host: backend.io
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: node
            port:
              number: 3000
Need some ideas about how to deploy kubernetes ingress with terraform.
The issue here is that the YAML example uses the proper API version, networking.k8s.io/v1, hence it works, as you probably have a Kubernetes version of 1.19 or higher (networking.k8s.io/v1 Ingress is available since that version, and the older extensions/v1beta1 API that Ingress was part of was removed in favor of networking.k8s.io/v1 in 1.22, as you can read here). Your current Terraform code, on the other hand, is using the old Kubernetes API version for Ingress. You can see that on the left-hand side of the documentation menu:
If you look further down in the documentation, you will see networking/v1 and, in the resource section, kubernetes_ingress_v1. Changing your Terraform code to use the Ingress from networking.k8s.io/v1, it becomes:
resource "kubernetes_ingress_v1" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/*"
path_type = "ImplementationSpecific"
backend {
service {
name = kubernetes_service.node.metadata.0.name
port {
number = 3000
}
}
}
}
}
}
}
}

GKE ingress resource returns 404 although it is connected to a service

I added a default nginx-ingress deployment with a regional IP that I got from GCP.
helm install nginx-ingress \
  nginx-stable/nginx-ingress \
  --set rbac.create=true \
  --set controller.service.loadBalancerIP="<GCP Regional IP>"
I have a dockerized Node app with a single .js file, which I deployed with a basic Helm chart. The service is called node-app-blue-helm-chart.
const http = require('http');
const hostname = '0.0.0.0';
const port = 80;
const server = http.createServer((req, res) => {
  if (req.url == '/another-page') {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end('<h1>another page</h1>');
  } else {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end('<h1>Hello World</h1>');
  }
});
server.listen(port, hostname, () => {
  console.log('Server running at http://%s:%s/', hostname, port);
});
process.on('SIGINT', function() {
  console.log('Caught interrupt signal and will exit');
  process.exit();
});
I deployed the following ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: node-app-blue-helm-chart
            port:
              number: 80
Although the ingress resource acquires an IP and an endpoint, it still returns a 404 error. What can be wrong? Can the host: "*.example.com" section be a problem?
More info:
kubectl describe ing ingress-resource
Name:             ingress-resource
Namespace:        default
Address:          <GCP Regional IP>
Default backend:  default-http-backend:80 (10.0.0.2:8080)
Rules:
  Host           Path  Backends
  ----           ----  --------
  *.example.com
                 /*    node-app-blue-helm-chart:80 (10.0.0.15:80)
Annotations:     kubernetes.io/ingress.class: nginx
                 nginx.ingress.kubernetes.io/ssl-redirect: false
Events:          <none>
kubectl describe svc node-app-blue-helm-chart
Name:              node-app-blue-helm-chart
Namespace:         default
Labels:            app.kubernetes.io/instance=node-app-blue
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=helm-chart
                   app.kubernetes.io/version=1.16.0
                   helm.sh/chart=helm-chart-0.1.0
Annotations:       meta.helm.sh/release-name: node-app-blue
                   meta.helm.sh/release-namespace: default
Selector:          app.kubernetes.io/instance=node-app-blue,app.kubernetes.io/name=helm-chart
Type:              ClusterIP
IP Families:       <none>
IP:                10.3.248.13
IPs:               10.3.248.13
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.0.0.15:80
Session Affinity:  None
Events:            <none>
What I tried:
Removing the * from /* in the ingress resource. It didn't fix the issue.
kubectl describe ing ingress-resource
Name:             ingress-resource
Namespace:        default
Address:          W.X.Y.Z
Default backend:  default-http-backend:80 (10.0.0.2:8080)
Rules:
  Host           Path  Backends
  ----           ----  --------
  *.example.com
                 /     node-app-blue-helm-chart:80 (10.0.0.15:80)
Annotations:     kubernetes.io/ingress.class: nginx
                 nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
  Type    Reason          Age        From                      Message
  ----    ------          ----       ----                      -------
  Normal  AddedOrUpdated  <invalid>  nginx-ingress-controller  Configuration for default/ingress-resource was added or updated
Try to edit your Ingress. You have set a path=/*, which may not be what you meant to do. A / should do:
[...]
spec:
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: node-app-blue-helm-chart
            port:
              number: 80

GCP GKE - Healthcheck for native GKE ingress for the same application with 2 exposed ports

I have an application that is supposed to expose two ports, and it does not have the default healthcheck endpoint of / that returns 200, so at the moment I supply a custom healthcheck endpoint for just one port. I haven't exposed the other port yet, as I don't know how to provide another custom healthcheck endpoint for the same application.
This is what my Terraform configuration looks like:
resource "kubernetes_deployment" "core" {
metadata {
name = "core"
labels = {
app = "core"
}
}
spec {
replicas = 1
selector {
match_labels = {
app = "core"
}
}
template {
metadata {
labels = {
app = "core"
}
}
spec {
container {
name = "core"
image = "asia.gcr.io/admin/core:${var.app_version}"
port {
container_port = 8069
}
readiness_probe {
http_get {
path = "/web/database/selector"
port = "8069"
}
initial_delay_seconds = 15
period_seconds = 30
}
image_pull_policy = "IfNotPresent"
}
}
}
}
}
resource "kubernetes_service" "core_service" {
metadata {
name = "core-service"
}
spec {
type = "NodePort"
selector = {
app = "core"
}
port {
port = 8080
protocol = "TCP"
target_port = "8069"
}
}
}
How do I tell GKE to expose the other port (8072) and use a custom healthcheck endpoint for both ports?
There is a GKE Ingress feature, the FrontendConfig and BackendConfig custom resource definitions (CRDs), that allows you to further customize the load balancer. You can use a unique BackendConfig per Service port to specify a custom BackendConfig for a specific port or ports of a Service or MultiClusterService, using a key that matches the port's name or number. The Ingress controller uses the specific BackendConfig when it creates a load balancer backend service for a referenced Service port.
When using a BackendConfig to provide a custom load balancer health check, the port number you use for the load balancer's health check can differ from the Service's spec.ports[].port number. Here's an example of the Service and the custom health check:
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
      "service-reference-a":"backendconfig-reference-a",
      "service-reference-b":"backendconfig-reference-b"
    }}'
spec:
  ports:
  - name: port-name-1
    port: port-number-1
    protocol: TCP
    targetPort: 50000
  - name: port-name-2
    port: port-number-2
    protocol: TCP
    targetPort: 8080
Custom Health Check:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: interval
    timeoutSec: timeout
    healthyThreshold: health-threshold
    unhealthyThreshold: unhealthy-threshold
    type: protocol
    requestPath: path
    port: port
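Applied to the Terraform service from the question, a sketch could look like the following; the port names, the second port mapping (8081 to target 8072) and the BackendConfig names are assumptions, and the BackendConfig objects themselves still need to be created separately (e.g. with kubernetes_manifest or plain YAML):
resource "kubernetes_service" "core_service" {
  metadata {
    name = "core-service"
    annotations = {
      # Map each named service port to its own BackendConfig (names are hypothetical)
      "cloud.google.com/backend-config" = jsonencode({
        ports = {
          web      = "core-web-backendconfig"
          longpoll = "core-longpoll-backendconfig"
        }
      })
    }
  }
  spec {
    type = "NodePort"
    selector = {
      app = "core"
    }
    port {
      name        = "web"
      port        = 8080
      protocol    = "TCP"
      target_port = "8069"
    }
    port {
      name        = "longpoll"
      port        = 8081
      protocol    = "TCP"
      target_port = "8072"
    }
  }
}
Each BackendConfig can then carry its own healthCheck.requestPath for the corresponding port.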

How to configure EKS ALB with Terraform

I'm having a hard time getting EKS to expose an IP address to the public internet. Do I need to set up the ALB myself or do you get that for free as part of the EKS cluster? If I have to do it myself, do I need to define it in the terraform template file or in the kubernetes object yaml?
Here's my EKS cluster defined in Terraform along with what I think are the required permissions.
// eks.tf
resource "aws_iam_role" "eks_cluster_role" {
  name = "${local.env_name}-eks-cluster-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.eks_cluster_role.name
}
resource "aws_kms_key" "eks_key" {
  description             = "EKS KMS Key"
  deletion_window_in_days = 7
  enable_key_rotation     = true
  tags = {
    Environment = local.env_name
    Service     = "EKS"
  }
}
resource "aws_kms_alias" "eks_key_alias" {
  target_key_id = aws_kms_key.eks_key.id
  name          = "alias/eks-kms-key-${local.env_name}"
}
resource "aws_eks_cluster" "eks_cluster" {
  name                      = "${local.env_name}-eks-cluster"
  role_arn                  = aws_iam_role.eks_cluster_role.arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  vpc_config {
    subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  }
  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key.eks_key.arn
    }
  }
  tags = {
    Environment = local.env_name
  }
}
resource "aws_iam_role" "eks_node_group_role" {
  name = "${local.env_name}-eks-node-group"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group_role.name
}
resource "aws_eks_node_group" "eks_node_group" {
  instance_types  = var.node_group_instance_types
  node_group_name = "${local.env_name}-eks-node-group"
  node_role_arn   = aws_iam_role.eks_node_group_role.arn
  cluster_name    = aws_eks_cluster.eks_cluster.name
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }
  // Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  // Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
  ]
}
And here's my kubernetes object yaml:
# hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.9
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80
I've run terraform apply and the cluster is up and running. I've installed eksctl and kubectl and run kubectl apply -f hello-kubernetes.yaml. The pods, service, and ingress appear to be running fine.
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
hello-kubernetes-6cb7cd595b-25bd9   1/1     Running   0          6h13m
hello-kubernetes-6cb7cd595b-lccdj   1/1     Running   0          6h13m
hello-kubernetes-6cb7cd595b-snwvr   1/1     Running   0          6h13m
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes   LoadBalancer   172.20.102.37   <pending>     80:32086/TCP   6h15m
$ kubectl get ingresses
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
hello-ingress   <none>   *                 80      3h45m
What am I missing and which file does it belong in?
You need to install the AWS Load Balancer Controller by following the installation instructions: first you need to create the IAM role and permissions, which can be done with Terraform; then you need to apply the Kubernetes YAML that installs the controller into your cluster, which can be done with Helm or kubectl.
You also need to be aware of the subnet tagging that is needed for, e.g., creating a public- or private-facing load balancer.
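As a rough sketch, the Helm part can also be driven from Terraform; the chart values shown (clusterName, serviceAccount.*) exist in the aws-load-balancer-controller chart, but the service account name and the IAM/OIDC wiring from the installation instructions are assumptions you still have to satisfy:
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = aws_eks_cluster.eks_cluster.name
  }
  # Assumes a service account bound to the controller's IAM role was created beforehand
  set {
    name  = "serviceAccount.create"
    value = "false"
  }
  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
}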
Usually the way to go is to put an ALB in front and redirect traffic to the EKS cluster, managing it with the ALB Ingress Controller. This ingress controller acts as the communication between the cluster and your ALB; here is the AWS documentation, which is pretty straightforward:
EKS w/ALB
Another solution could be using an NGINX ingress controller with an NLB, if the ALB doesn't suit your application's needs, as described in the following article:
NGINX w/NLB
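If you go the ALB route, the Ingress itself then references the ALB ingress class instead of being left unclassed. A hedged sketch in Terraform, reusing the hello-kubernetes service from the question (the annotations shown are the usual ones for an internet-facing ALB, and the subnet tagging mentioned above still has to be in place):
resource "kubernetes_ingress_v1" "hello_ingress" {
  metadata {
    name = "hello-ingress"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip"
    }
  }
  spec {
    ingress_class_name = "alb"
    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "hello-kubernetes"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}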
This happened to me too: after all the setup, I was not able to see the ingress address. The best way to debug this issue is to check the logs of the ingress controller. You can do this by:
Getting the ingress controller pod name using: kubectl get po -n kube-system
Checking the logs of that pod using: kubectl logs <po_name> -n kube-system
This will point you to the exact issue as to why you are not seeing the address.
If you do not find any pod running with "ingress" in its name, then you will have to create the ingress controller first.