URL of remoteEntry in Kubernetes Cluster

I am trying to build a series of Micro-Frontends using Webpack 5 and the ModuleFederationPlugin.
In the webpack config of my container app I have to configure how the container is going to reach out to the other microfrontends so I can make use of those micro-frontends.
This all works fine when I am serving locally, not using Docker and Kubernetes and my Ingress Controller.
However because I am using Kubernetes and an Ingress Controller, I am unsure what the remote host would be.
Link to Repo
Here is my container webpack.dev.js file
const { merge } = require("webpack-merge");
const HtmlWebpackPlugin = require("html-webpack-plugin");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");
const devConfig = {
mode: "development",
devServer: {
host: "0.0.0.0",
port: 8080,
historyApiFallback: {
index: "index.html",
},
compress: true,
disableHostCheck: true,
},
plugins: [
new ModuleFederationPlugin({
name: "container",
remotes: {
marketing:
"marketing#https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js",
},
shared: packageJson.dependencies,
}),
],
};
module.exports = merge(commonConfig, devConfig);
and here is my Ingress Config
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
          - path: /marketing?(.*)
            pathType: Prefix
            backend:
              service:
                name: marketing-srv
                port:
                  number: 8081
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: container-srv
                port:
                  number: 8080
and here is my marketing webpack.dev.js file
const { merge } = require("webpack-merge");
const ModuleFederationPlugin = require("webpack/lib/container/ModuleFederationPlugin");
const commonConfig = require("./webpack.common");
const packageJson = require("../package.json");
const devConfig = {
mode: "development",
devServer: {
host: "0.0.0.0",
port: 8081,
historyApiFallback: {
index: "index.html",
},
compress: true,
disableHostCheck: true, // That solved it
},
plugins: [
new ModuleFederationPlugin({
name: "marketing",
filename: "remoteEntry.js",
exposes: {
"./core": "./src/bootstrap",
},
shared: packageJson.dependencies,
}),
],
};
module.exports = merge(commonConfig, devConfig);
I am totally stumped as to what the remote host should be to reach my marketing micro-frontend.
When serving it as usual, without running it in a Docker container or Kubernetes cluster, the remote host would be
https://localhost:8081/remoteEntry.js
but that doesn't work in a Kubernetes cluster.
I tried using the ingress controller service and namespace, but that does not work either:
https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js
This is the error I get

https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:8081/remoteEntry.js
If your client and the node are on the same network (e.g. they can ping each other), run kubectl get service ingress-nginx --namespace ingress-nginx and take note of the nodePort number (TYPE=NodePort, PORT(S) 443:<nodePort>/TCP). Your remote entry will be https://<any of the worker node IPs>:<nodePort>/remoteEntry.js.
If your client is on the Internet and your worker node has a public IP, your remote entry will be https://<public IP of the worker node>:<nodePort>/remoteEntry.js.
If your client is on the Internet and your worker node doesn't have a public IP, you need to expose your ingress-nginx service with a LoadBalancer. Run kubectl get service ingress-nginx --namespace ingress-nginx and take note of the EXTERNAL-IP. Your remote entry becomes https://<EXTERNAL IP>/remoteEntry.js.
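For that last case, a minimal sketch of what exposing the controller as a LoadBalancer could look like (assumptions: the controller Service is named ingress-nginx-controller in the ingress-nginx namespace and uses the standard community labels; adjust to your install):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx        # assumed labels
    app.kubernetes.io/component: controller
  ports:
    - name: https
      port: 443
      targetPort: 443
Once an EXTERNAL-IP is assigned, the container's remotes entry would point at https://<EXTERNAL IP>/remoteEntry.js (or at the host defined in your Ingress rules) rather than at the cluster-internal service name.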

Routing external traffic through a load balancer to an ingress or through an ingress only on aks?

I have an AKS cluster with its LoadBalancer configured (following https://learn.microsoft.com/en-us/azure/aks/internal-lb ) so that it gets the IP from a PublicIP (all provisioned with Terraform) and targets the cluster ingress deployed with Helm.
resource "kubernetes_service" "server-loadbalacer" {
metadata {
name = "server-loadbalacer-svc"
annotations = {
"service.beta.kubernetes.io/azure-load-balancer-resource-group" = "fixit-resource-group"
}
}
spec {
type = "LoadBalancer"
load_balancer_ip = var.public_ip_address
selector = {
name = "ingress-service"
}
port {
name = "server-port"
protocol = "TCP"
port = 8080
}
}
}
Then with Helm I deploy a Node.js server listening on port 3000, a MongoDB replica set, and a Neo4j cluster.
I set up a service for the server receiving on port 3000 and targeting port 3000.
apiVersion: v1
kind: Service
metadata:
  name: server-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: fixit-server-pod
  ports:
    - name: server-clusterip-service
      protocol: TCP
      port: 3000 # service port
      targetPort: 3000 # port on which the app is listening
Then the Ingress redirects traffic to the correct service, e.g. the server:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    name: ingress-service
spec:
  rules:
    - host: fixit.westeurope.cloudapp.azure.com #dns from Azure PublicIP
      http:
        paths:
          - path: '/server/*'
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 3000
          - path: '/neo4j/*'
            pathType: Prefix
            backend:
              service:
                name: fixit-cluster
                port:
                  number: 7687
                  number: 7474
                  number: 7473
          - path: '/neo4j-admin/*'
            pathType: Prefix
            backend:
              service:
                name: fixit-cluster-admin
                port:
                  number: 6362
                  number: 7687
                  number: 7474
                  number: 7473
I'm expecting to go to http://fixit.westeurope.cloudapp.azure.com:8080/server/api and see the server's response for the /api endpoint, but the request fails with a browser timeout.
Pods and services deployed on the cluster are
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get pod
NAME READY STATUS RESTARTS AGE
fixit-cluster-0 1/1 Running 0 27m
fixit-server-868f657b64-hvmxq 1/1 Running 0 27m
mongo-rs-0 2/2 Running 0 27m
mongodb-kubernetes-operator-7c5666c957-sscsf 1/1 Running 0 4h35m
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
fixit-cluster ClusterIP 10.0.230.247 <none> 7687/TCP,7474/TCP,7473/TCP 27m
fixit-cluster-admin ClusterIP 10.0.132.24 <none> 6362/TCP,7687/TCP,7474/TCP,7473/TCP 27m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4h44m
mongo-rs-svc ClusterIP None <none> 27017/TCP 27m
server-clusterip-service ClusterIP 10.0.242.65 <none> 3000/TCP 27m
server-loadbalacer-svc LoadBalancer 10.0.149.160 52.174.18.27 8080:32660/TCP 4h41m
The ingress is deployed as
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe ingress ingress-service
Name: ingress-service
Labels: app.kubernetes.io/managed-by=Helm
name=ingress-service
Namespace: default
Address:
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
fixit.westeurope.cloudapp.azure.com
/server/* server-clusterip-service:3000 (<none>)
/neo4j/* fixit-cluster:7473 (<none>)
/neo4j-admin/* fixit-cluster-admin:7473 (<none>)
Annotations: kubernetes.io/ingress.class: nginx
meta.helm.sh/release-name: fixit-cluster
meta.helm.sh/release-namespace: default
Events: <none>
and the server service is
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe svc server-clusterip-service
Name: server-clusterip-service
Namespace: default
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: fixit-cluster
meta.helm.sh/release-namespace: default
Selector: app=fixit-server-pod
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.160.206
IPs: 10.0.160.206
Port: server-clusterip-service 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.244.0.15:3000
Session Affinity: None
Events: <none>
I tried setting the paths with and without /* but it won't connect in either case.
Is this setup even the right way to route external traffic into the cluster, or should I use just the ingress? I see that this setup has been given as the solution (1st answer) to the question Kubernetes Load balancer without Label Selector, and though it looks like we're in the same situation, I'm on AKS, and the Azure docs https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli are making me doubt my current setup.
Can you spot what I'm setting up wrongly, if this setup isn't complete nonsense?
Many many thanks for the help.
UPDATE
As mentioned here https://learnk8s.io/terraform-aks, the option http_application_routing_enabled = true at cluster creation installs these add-ons:
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get pods -n kube-system | grep addon
addon-http-application-routing-external-dns-5d48bdffc6-q98nx 1/1 Running 0 26m
addon-http-application-routing-nginx-ingress-controller-5bcrf87 1/1 Running 0 26m
so the Ingress should point to that controller in its annotations and not specify a host. I changed the Ingress to
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    # kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.class: addon-http-application-routing
    # nginx.ingress.kubernetes.io/rewrite-target: /
  labels:
    name: ingress-service
spec:
  rules:
    # - host: fixit.westeurope.cloudapp.azure.com #server.com
    - http:
        paths:
          - path: '/server/*' # service
          # - path: '/server' # service doesn't get a IPaddress
          # - path: '/*'
          # - path: '/'
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 3000
          # - path: '/neo4j/*'
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: fixit-cluster
          #       port:
          #         number: 7687
          #         number: 7474
          #         number: 7473
          # - path: '/neo4j-admin/*'
          #   pathType: Prefix
          #   backend:
          #     service:
          #       name: fixit-cluster-admin
          #       port:
          #         number: 6362
          #         number: 7687
          #         number: 7474
          #         number: 7473
and its output is now
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-service <none> * 108.143.71.248 80 7s
vincenzocalia@vincenzos-MacBook-Air helm_charts % kubectl describe ingress ingress-service
Name: ingress-service
Labels: app.kubernetes.io/managed-by=Helm
name=ingress-service
Namespace: default
Address: 108.143.71.248
Ingress Class: <none>
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
*
/server/* server-clusterip-service:3000 (10.244.0.21:3000)
Annotations: kubernetes.io/ingress.class: addon-http-application-routing
meta.helm.sh/release-name: fixit-cluster
meta.helm.sh/release-namespace: default
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 20s (x2 over 27s) nginx-ingress-controller Scheduled for sync
now going to http://108.143.71.248/server/api in the browser shows an Nginx 404 page.
I finally found the problem. It was my setup. I was using the default ingress controller and load balancer that get created when you set http_application_routing_enabled = true at cluster creation, which the docs discourage for production (https://learn.microsoft.com/en-us/azure/aks/http-application-routing). The proper approach is to install an ingress controller yourself (https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli), which hooks up to the internal load balancer, so there is no need to create one. The ingress controller will accept an IP address for the load balancer, but you have to create the PublicIP in the node resource group, because that is where it looks for it, not in the cluster resource group; see the difference between the two resource groups here: https://learn.microsoft.com/en-us/azure/aks/faq#why-are-two-resource-groups-created-with-aks.
So the working configuration is now:
main
terraform {
required_version = ">=1.1.0"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.0.2"
}
}
}
provider "azurerm" {
features {
resource_group {
prevent_deletion_if_contains_resources = false
}
}
subscription_id = var.azure_subscription_id
tenant_id = var.azure_subscription_tenant_id
client_id = var.service_principal_appid
client_secret = var.service_principal_password
}
provider "kubernetes" {
host = "${module.cluster.host}"
client_certificate = "${base64decode(module.cluster.client_certificate)}"
client_key = "${base64decode(module.cluster.client_key)}"
cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
}
provider "helm" {
kubernetes {
host = "${module.cluster.host}"
client_certificate = "${base64decode(module.cluster.client_certificate)}"
client_key = "${base64decode(module.cluster.client_key)}"
cluster_ca_certificate = "${base64decode(module.cluster.cluster_ca_certificate)}"
}
}
module "cluster" {
source = "./modules/cluster"
location = var.location
vm_size = var.vm_size
resource_group_name = var.resource_group_name
node_resource_group_name = var.node_resource_group_name
kubernetes_version = var.kubernetes_version
ssh_key = var.ssh_key
sp_client_id = var.service_principal_appid
sp_client_secret = var.service_principal_password
}
module "ingress-controller" {
source = "./modules/ingress-controller"
public_ip_address = module.cluster.public_ip_address
depends_on = [
module.cluster.public_ip_address
]
}
cluster
resource "azurerm_resource_group" "resource_group" {
name = var.resource_group_name
location = var.location
tags = {
Environment = "test"
Team = "DevOps"
}
}
resource "azurerm_kubernetes_cluster" "server_cluster" {
name = "server_cluster"
### choose the resource goup to use for the cluster
location = azurerm_resource_group.resource_group.location
resource_group_name = azurerm_resource_group.resource_group.name
### decide the name of the cluster "node" resource group, if unset will be named automatically
node_resource_group = var.node_resource_group_name
dns_prefix = "fixit"
kubernetes_version = var.kubernetes_version
# sku_tier = "Paid"
default_node_pool {
name = "default"
node_count = 1
min_count = 1
max_count = 3
vm_size = var.vm_size
type = "VirtualMachineScaleSets"
enable_auto_scaling = true
enable_host_encryption = false
# os_disk_size_gb = 30
}
service_principal {
client_id = var.sp_client_id
client_secret = var.sp_client_secret
}
tags = {
Environment = "Production"
}
linux_profile {
admin_username = "azureuser"
ssh_key {
key_data = var.ssh_key
}
}
network_profile {
network_plugin = "kubenet"
load_balancer_sku = "basic"
}
http_application_routing_enabled = false
depends_on = [
azurerm_resource_group.resource_group
]
}
resource "azurerm_public_ip" "public-ip" {
name = "fixit-public-ip"
location = var.location
# resource_group_name = var.resource_group_name
resource_group_name = var.node_resource_group_name
allocation_method = "Static"
domain_name_label = "fixit"
# sku = "Standard"
depends_on = [
azurerm_kubernetes_cluster.server_cluster
]
}
ingress controller
resource "helm_release" "nginx" {
name = "ingress-nginx"
repository = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "default"
set {
name = "controller.service.externalTrafficPolicy"
value = "Local"
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-internal"
value = "true"
}
set {
name = "controller.service.loadBalancerIP"
value = var.public_ip_address
}
set {
name = "controller.service.annotations.service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"
value = "/healthz"
}
}
ingress service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  # namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2$3$4
spec:
  ingressClassName: nginx
  rules:
    # - host: fixit.westeurope.cloudapp.azure.com #dns from Azure PublicIP
    ### Node.js server
    - http:
        paths:
          - path: /(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
    - http:
        paths:
          - path: /server(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: server-clusterip-service
                port:
                  number: 80
...
other services omitted
Hope this can help getting the setup right.
Cheers.

In Grafana I am getting a "400 Bad Request Client sent an HTTP request to an HTTPS server" when trying to update datasource configmaps

In Grafana I notice that when I deploy a configmap that should add a datasource it makes no change and does not add the new datasource - note that the configmap is in the cluster and in the correct namespace.
If I make a change to the configmap I get the following error if I look at the logs for the grafana-sc-datasources container:
POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 400 Bad Request Client sent an HTTP request to an HTTPS server.
I assume I do not see any changes because it can not make the post request.
I played around a bit and at one point I did see changes being made/updated in the datasources:
I changed the protocol to http under grafana.ini / server / protocol. I was then NOT able to open the Grafana website, but I did notice that if I made a change to a datasource configmap in the cluster, I would see a successful 200 message in the logs of the grafana-sc-datasources container: POST request sent to http://localhost:3000/api/admin/provisioning/datasources/reload. Response: 200 OK {"message":"Datasources config reloaded"}.
So I assume I just need to know how to get the POST request sent as https instead of http.
Can someone point me to what might be wrong and how to fix it?
Note that I am pretty new to K8s, Grafana, and Helm charts.
Here is a configmap that I am trying to get to work:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jaeger-${NACKLE_ENV}-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  jaeger-datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Jaeger-${NACKLE_ENV}
        type: jaeger
        access: browser
        url: http://jaeger-${NACKLE_ENV}-query.${NACKLE_ENV}.svc.cluster.local:16690
        version: 1
        basicAuth: false
Here is the current Grafana values file:
# use 1 replica when using a StatefulSet
# If we need more than 1 replica, then we'll have to:
# - remove the `persistence` section below
# - use an external database for all replicas to connect to (refer to Grafana Helm chart docs)
replicas: 1
image:
  pullSecrets:
    - docker-hub
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: eks.amazonaws.com/capacityType
              operator: In
              values:
                - ON_DEMAND
persistence:
  enabled: true
  type: statefulset
  storageClassName: biw-durable-gp2
podDisruptionBudget:
  maxUnavailable: 1
admin:
  existingSecret: grafana
sidecar:
  datasources:
    enabled: true
    label: grafana_datasource
  dashboards:
    enabled: true
    label: grafana_dashboard
    labelValue: 1
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    node-exporter:
      gnetId: 1860
      revision: 23
      datasource: Prometheus
    core-dns:
      gnetId: 12539
      revision: 5
      datasource: Prometheus
    fluentd:
      gnetId: 7752
      revision: 6
      datasource: Prometheus
ingress:
  apiVersion: networking.k8s.io/v1
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/api/health'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    # Redirect to HTTPS at the ALB
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  spec:
    rules:
      - http:
          paths:
            - path: /*
              pathType: ImplementationSpecific
              backend:
                service:
                  name: ssl-redirect
                  port:
                    name: use-annotation
    defaultBackend:
      service:
        name: grafana
        port:
          number: 80
livenessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" }, "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }
readinessProbe: { "httpGet": { "path": "/api/health", "port": 3000, "scheme": "HTTPS" } }
service:
  type: NodePort
  name: grafana
rolePrefix: app-role
env: eks-test
serviceAccount:
  name: grafana
  annotations:
    eks.amazonaws.com/role-arn: ""
pod:
  spec:
    serviceAccountName: grafana
grafana.ini:
  server:
    # don't use enforce_domain - it causes an infinite redirect in our setup
    # enforce_domain: true
    enable_gzip: true
    # NOTE - if I set the protocol to http I do see it make changes to datasources but I can not see the website
    protocol: https
    cert_file: /biw-cert/domain.crt
    cert_key: /biw-cert/domain.key
  users:
    auto_assign_org_role: Editor
  # https://grafana.com/docs/grafana/v6.5/auth/gitlab/
  auth.gitlab:
    enabled: true
    allow_sign_up: true
    org_role: Editor
    scopes: read_api
    auth_url: https://gitlab.biw-services.com/oauth/authorize
    token_url: https://gitlab.biw-services.com/oauth/token
    api_url: https://gitlab.biw-services.com/api/v4
    allowed_groups: nackle-teams/devops
securityContext:
  fsGroup: 472
  runAsUser: 472
  runAsGroup: 472
extraConfigmapMounts:
  - name: "cert-configmap"
    mountPath: "/biw-cert"
    subPath: ""
    configMap: biw-grafana-cert
    readOnly: true
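One possible direction (not verified on this setup): newer versions of the Grafana Helm chart let you override the sidecar's reload URL and TLS verification, so the reload POST can be sent over https to match the server protocol. A sketch, assuming your chart version exposes sidecar.datasources.reloadURL and sidecar.skipTlsVerify (check your chart's values.yaml before relying on these keys):
sidecar:
  skipTlsVerify: true   # assumption: the cert on localhost:3000 is self-signed
  datasources:
    enabled: true
    label: grafana_datasource
    # hypothetical override: send the reload call over https to match grafana.ini server.protocol
    reloadURL: "https://localhost:3000/api/admin/provisioning/datasources/reload"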

Websocket connection fails for microfrontend (using Kubernetes ingress-nginx and port forwarding)

I am building a microfrontend-microservices app where the microfrontend is written in React and the backend utilizes a Kubernetes cluster, in particular an ingress-nginx service. There are many services (auth, company, companies, event bus, permissions, ingress, and query services) with a few corresponding micro (client) apps that are built using Webpack Module Federation and webpack dev server. I am using skaffold as a dev tool.
When I run skaffold dev --port-forwarding, I am able to load the app in the browser using the host domain of mavata.dev but with errors (which eventually make certain pages which rely on module federation fail to load entirely). I get the following errors in chrome dev tools:
WebSocket connection to 'wss://mavata.dev:9000/ws' failed
WebSocket connection to 'wss://mavata.dev:8081/ws' failed
WebSocket connection to 'wss://mavata.dev:8083/ws' failed
How do I fix the websocket connection? Below are the relevant code snippets.
(Client) Company App Deployment: client-company-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-company-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-company
  template:
    metadata:
      labels:
        app: client-company
    spec:
      containers:
        - name: client-company
          image: bridgetmelvin/client-company
---
apiVersion: v1
kind: Service
metadata:
  name: client-company-srv
spec:
  selector:
    app: client-company
  ports:
    - name: client-company
      protocol: TCP
      port: 8083
      targetPort: 8083
(a similar deployment for the (client) container app but with a port and targetPort of 9000)
(another similar deployment for the (client) marketing app but with a port and targetPort of 8081)
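For context, the container app's Deployment and Service would look roughly like this, following the pattern above (a sketch: the Deployment name and image are assumed, while the Service name client-container-srv matches the one referenced by the ingress below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-container-depl   # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-container
  template:
    metadata:
      labels:
        app: client-container
    spec:
      containers:
        - name: client-container
          image: bridgetmelvin/client-container   # assumed image
---
apiVersion: v1
kind: Service
metadata:
  name: client-container-srv
spec:
  selector:
    app: client-container
  ports:
    - name: client-container
      protocol: TCP
      port: 9000
      targetPort: 9000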
Ingress Service Config: ingress-srv.yaml:
# RUN: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    certmanager.k8s.io/cluster-issuer: core-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: core-service
    nginx.org/websocket-services: core-service
spec:
  rules:
    - host: mavata.dev # need to update 'hosts' file on local machine (in VS Code) C:\Windows\System32\drivers\etc
      http:
        paths:
          - path: /company/create
            pathType: Prefix
            backend:
              service:
                name: company-clusterip-srv
                port:
                  number: 4000
          - path: /company
            pathType: Prefix
            backend:
              service:
                name: query-srv
                port:
                  number: 4002
          - path: /company/?(.*)
            pathType: Prefix
            backend:
              service:
                name: company-srv
                port:
                  number: 4000
          - path: /companies
            pathType: Prefix
            backend:
              service:
                name: companies-srv
                port:
                  number: 4002
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-server-srv
                port:
                  number: 4004
          - path: /permissions/?(.*)
            pathType: Prefix
            backend:
              service:
                name: permissions-srv
                port:
                  number: 4001
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-container-srv
                port:
                  number: 9000
(Client) Container webpack.dev.js:
const { merge } = require('webpack-merge');
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');
const commonConfig = require('./webpack.common');
const packageJson = require('../package.json');
const globals = require('../src/data-variables/global');
const port = globals.port;
const devConfig = {
mode: 'development',
output: {
publicPath: `http://localhost:${port}/` // don't forget the slash at the end
},
devServer: {
host: '0.0.0.0',
port: port,
allowedHosts: ['mavata.dev'],
historyApiFallback: {
index: 'index.html',
},
},
plugins: [
new ModuleFederationPlugin({
name: 'container',
filename: 'remoteEntry.js',
remotes: {
marketingMfe: 'marketingMod@http://localhost:8081/remoteEntry.js',
authMfe: 'authMod@http://localhost:8082/remoteEntry.js',
companyMfe: 'companyMod@http://localhost:8083/remoteEntry.js',
dataMfe: 'dataMod@http://localhost:8084/remoteEntry.js'
},
exposes: {
'./Functions': './src/functions/Functions',
'./Variables': './src/data-variables/Variables'
},
shared: {...packageJson.dependencies, ...packageJson.peerDependencies},
}),
],
};
module.exports = merge(commonConfig, devConfig);
(Client) Company webpack.dev.js:
const { merge } = require('webpack-merge');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const ModuleFederationPlugin = require('webpack/lib/container/ModuleFederationPlugin');
const commonConfig = require('./webpack.common');
const packageJson = require('../package.json');
const path = require('path');
const globals = require('../src/variables/global')
const port = globals.port
const devConfig = {
mode: 'development',
output: {
publicPath: `http://localhost:${port}/`, // don't forget the slash at the end
},
devServer: {
port: port,
historyApiFallback: {
index: 'index.html',
},
},
plugins: [
new ModuleFederationPlugin({
name: 'companyMod',
filename: 'remoteEntry.js',
exposes: {
'./CompanyApp': './src/bootstrap',
},
remotes: {
containerMfe: 'container@http://localhost:9000/remoteEntry.js',
dataMfe: 'dataMod@http://localhost:8084/remoteEntry.js'
},
shared: {...packageJson.dependencies, ...packageJson.peerDependencies},
}),
new HtmlWebpackPlugin({
template: './public/index.html',
}),
],
};
module.exports = merge(commonConfig, devConfig);

GKE ingress resource returns 404 although it is connected to a service

I added a default nginx-ingress deployment with a regional IP that I got from GCP.
helm install nginx-ingress \
nginx-stable/nginx-ingress \
--set rbac.create=true \
--set controller.service.loadBalancerIP="<GCP Regional IP>"
I have a dockerized Node.js app with a single .js file, which I deployed with a basic Helm chart. The service is called node-app-blue-helm-chart.
const http = require('http');
const hostname = '0.0.0.0';
const port = 80;
const server = http.createServer((req, res) => {
if (req.url == '/another-page'){
res.statusCode = 200;
res.setHeader('Content-Type', 'text/html');
res.end('<h1>another page</h1>');
} else {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/html');
res.end('<h1>Hello World</h1>');
}
});
server.listen(port, hostname, () => {
console.log('Server running at http://%s:%s/', hostname, port);
});
process.on('SIGINT', function() {
console.log('Caught interrupt signal and will exit');
process.exit();
});
I deployed the following ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: "*.example.com"
      http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: node-app-blue-helm-chart
                port:
                  number: 80
Although the ingress resource acquires an IP and an endpoint, it still returns a 404 error. What can be wrong? Could the host: "*.example.com" section be a problem?
More info:
kubectl describe ing ingress-resource
Name: ingress-resource
Namespace: default
Address: <GCP Regional IP>
Default backend: default-http-backend:80 (10.0.0.2:8080)
Rules:
Host Path Backends
---- ---- --------
*.example.com
/* node-app-blue-helm-chart:80 (10.0.0.15:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
kubectl describe svc node-app-blue-helm-chart
Name: node-app-blue-helm-chart
Namespace: default
Labels: app.kubernetes.io/instance=node-app-blue
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=helm-chart
app.kubernetes.io/version=1.16.0
helm.sh/chart=helm-chart-0.1.0
Annotations: meta.helm.sh/release-name: node-app-blue
meta.helm.sh/release-namespace: default
Selector: app.kubernetes.io/instance=node-app-blue,app.kubernetes.io/name=helm-chart
Type: ClusterIP
IP Families: <none>
IP: 10.3.248.13
IPs: 10.3.248.13
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.0.0.15:80
Session Affinity: None
Events: <none>
What I tried:
Removing * from /* in ingress resource. Didn't fix the issue.
kubectl describe ing ingress-resource
Name: ingress-resource
Namespace: default
Address: W.X.Y.Z
Default backend: default-http-backend:80 (10.0.0.2:8080)
Rules:
Host Path Backends
---- ---- --------
*.example.com
/ node-app-blue-helm-chart:80 (10.0.0.15:80)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated <invalid> nginx-ingress-controller Configuration for default/ingress-resource was added or updated
Try to edit your Ingress. You have set a path=/*, which may not be what you meant to do. A / should do:
[...]
spec:
  rules:
    - host: "*.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: node-app-blue-helm-chart
                port:
                  number: 80

Kubernetes Websockets using Socket.io, ExpressJS and Nginx Ingress

I want to connect a React Native application using Socket.io to a server that is inside a Kubernetes Cluster hosted on Google Cloud Platform (GKE).
There seems to be an issue with the Nginx Ingress Controller declaration but I cannot find it.
I have tried adding nginx.org/websocket-services; rewriting my backend code so that it uses a separate NodeJS server (a simple HTTP server) on port 3004, then exposing it via the Ingress Controller under a different path than the one on port 3003; and multiple other suggestions from other SO questions and Github issues.
Information that might be useful:
Cluster master version: 1.15.11-gke.15
I use a Load Balancer managed with Helm (stable/nginx-ingress) with RBAC enabled
All deployments and services are within the namespace gitlab-managed-apps
The error I receive when trying to connect to socket.io is: Error: websocket error
For the front-end part, the code is as follows:
App.js
const socket = io('https://example.com/app-sockets/socketns', {
reconnect: true,
secure: true,
transports: ['websocket', 'polling']
});
I expect the above to connect me to a socket.io namespace called socketns.
The backend code is:
app.js
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);
const redis = require('socket.io-redis');
io.set('transports', ['websocket', 'polling']);
io.adapter(redis({
host: process.env.NODE_ENV === 'development' ? 'localhost' : 'redis-cluster-ip-service.gitlab-managed-apps.svc.cluster.local',
port: 6379
}));
io.of('/').adapter.on('error', function(err) { console.log('Redis Adapter error! ', err); });
const nsp = io.of('/socketns');
nsp.on('connection', function(socket) {
console.log('connected!');
});
server.listen(3003, () => {
console.log('App listening to 3003');
});
The ingress service is:
ingress-service.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
    nginx.org/websocket-services: "app-sockets-cluster-ip-service"
  name: ingress-service
  namespace: gitlab-managed-apps
spec:
  tls:
    - hosts:
        - example.com
      secretName: letsencrypt-prod
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: app-cms-cluster-ip-service
              servicePort: 3000
            path: /?(.*)
          - backend:
              serviceName: app-users-cluster-ip-service
              servicePort: 3001
            path: /app-users/?(.*)
          - backend:
              serviceName: app-sockets-cluster-ip-service
              servicePort: 3003
            path: /app-sockets/?(.*)
          - backend:
              serviceName: app-sockets-cluster-ip-service
              servicePort: 3003
            path: /app-sockets/socketns/?(.*)
The solution is to remove the nginx.ingress.kubernetes.io/rewrite-target: /$1 annotation.
Here is a working configuration: (please note that apiVersion has changed since the question has been asked)
Ingress configuration
ingress-service.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  name: ingress-service
  namespace: default
spec:
  tls:
    - hosts:
        - example.com
      secretName: letsencrypt-prod
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              service:
                name: app-sockets-cluster-ip-service
                port:
                  number: 3003
            path: /app-sockets/?(.*)
            pathType: Prefix
On the service (Express.js):
app.js
const redisAdapter = require('socket.io-redis');
const io = require('socket.io')(server, {
path: `${ global.NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
cors: {
origin: '*',
methods: ['GET', 'POST'],
},
});
io.adapter(redisAdapter({
host: global.REDIS_HOST,
port: 6379,
}));
io.of('/').adapter.on('error', err => console.log('Redis Adapter error! ', err));
io.on('connection', () => {
//...
});
The global.NODE_ENV === 'development' ? '' : '/app-sockets' bit is related to an issue in development. If you change it here, you must also change it in the snippet below.
In development the service is under http://localhost:3003 (sockets endpoint is http://localhost:3003/sockets).
In production the service is under https://example.com/app-sockets (sockets endpoint is https://example.com/app-sockets/sockets).
On frontend
connectToWebsocketsService.js
/**
* Connect to a websockets service
* @param tokens {Object}
* @param successCallback {Function}
* @param failureCallback {Function}
*/
export const connectToWebsocketsService = (tokens, successCallback, failureCallback) => {
//SOCKETS_URL = NODE_ENV === 'development' ? 'http://localhost:3003' : 'https://example.com/app-sockets'
const socket = io(`${ SOCKETS_URL.replace('/app-sockets', '') }`, {
path: `${ NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
reconnect: true,
secure: true,
transports: ['polling', 'websocket'], //required
query: {
// optional
},
auth: {
...generateAuthorizationHeaders(tokens), //optional
},
});
socket.on('connect', successCallback(socket));
socket.on('reconnect', successCallback(socket));
socket.on('connect_error', failureCallback);
};
Note: I wasn't able to do this on the project mentioned in the question, but I have done it on another project, which is hosted on EKS, not GKE. Feel free to confirm whether this works for you on GKE as well.
Just change the annotation to
nginx.ingress.kubernetes.io/websocket-services: "app-sockets-cluster-ip-service"
instead of
nginx.org/websocket-services: "app-sockets-cluster-ip-service"
Most likely this will resolve your issue.
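For clarity, the annotation swap this answer suggests would look like this on the Ingress metadata from the question (other annotations left as they are):
metadata:
  name: ingress-service
  namespace: gitlab-managed-apps
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.org/websocket-services: "app-sockets-cluster-ip-service"                    # before
    nginx.ingress.kubernetes.io/websocket-services: "app-sockets-cluster-ip-service"    # after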