How to enable subdomains with GKE - kubernetes

I have different Kubernetes deployments in GKE and I would like to access them from different external subdomains.
I tried to create 2 deployments with subdomains "sub1" and "sub2" and hostname "app", plus another deployment with hostname "app" and a service that exposes it on the IP XXX.XXX.XXX.XXX configured in the DNS record for app.mydomain.com.
I would like to access the 2 child deployments from sub1.app.mydomain.com and sub2.app.mydomain.com.
This should be automatic: when adding a new deployment I cannot change the DNS records every time.
Maybe I'm approaching the problem in the wrong way; I'm new to GKE. Any suggestions?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-host
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-host
        type: proxy
    spec:
      hostname: app
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - name: nginx
          containerPort: 80
          hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: app
      subdomain: sub1
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - name: nginx
          containerPort: 80
          hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: app
      subdomain: sub2
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - name: nginx
          containerPort: 80
          hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-expose-dns
spec:
  ports:
  - port: 80
  selector:
    name: my-host
  type: LoadBalancer

You want Ingress. There are several options available (Istio, nginx, traefik, etc). I like using nginx and it's really easy to install and work with. Installation steps can be found at kubernetes.github.io.
Once the Ingress Controller is installed, you want to make sure you've exposed it with a Service with type=LoadBalancer. Next, if you are using Google Cloud DNS, set up a wildcard entry for your domain with an A record pointing to the external IP address of your Ingress Controller's Service. In your case, it would be *.app.mydomain.com.
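If you installed the controller manually rather than via the official manifests, a minimal sketch of such a Service could look like the following (the namespace and the app.kubernetes.io/* selector labels are assumptions based on a standard ingress-nginx installation; the official install usually creates an equivalent Service for you):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
The external IP that GCP assigns to this Service is the one the *.app.mydomain.com wildcard A record should point at.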
Now all of your traffic to app.mydomain.com goes to that load balancer and is handled by your Ingress Controller, so you just need to add a Service and an Ingress entry for each service you want to expose:
apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  selector:
    app: my-app-1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: my-service2
spec:
  selector:
    app: my-app2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: sub1.app.mydomain.com
    http:
      paths:
      - backend:
          serviceName: my-service1
          servicePort: 80
  - host: sub2.app.mydomain.com
    http:
      paths:
      - backend:
          serviceName: my-service2
          servicePort: 80
The routing shown is host based, but you could just as easily handle those services as path based, so that all traffic to app.mydomain.com/service1 goes to one of your deployments; a sketch of that variant is below.
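For illustration, a path-based variant reusing the same two Services might look roughly like this (the paths and any rewrite behaviour depend on your apps, so treat it as a sketch rather than a drop-in manifest):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.mydomain.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: my-service1
          servicePort: 80
      - path: /service2
        backend:
          serviceName: my-service2
          servicePort: 80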

SOLVED!
This is the correct nginx configuration:
server {
    listen 80;
    server_name ~^(?<subdomain>.*?)\.;
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;

    location / {
        proxy_pass http://$subdomain.my-internal-host.default.svc.cluster.local;
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
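For completeness, this is roughly how that file can be wired into the proxy deployment (a sketch assuming the same ConfigMap approach as in the attempt below; the mountPath here is an assumption chosen so nginx actually includes the file):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: my-proxy
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        volumeMounts:
        - name: nginx-config-dns-file
          mountPath: /etc/nginx/conf.d/default.conf  # must end in .conf so the default include picks it up
          subPath: nginx.conf
        ports:
        - containerPort: 80
      volumes:
      - name: nginx-config-dns-file
        configMap:
          name: nginx-config-dns-file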

It could be a solution, but in my case I need something more dynamic: I don't want to update the Ingress every time I add a subdomain.
I've almost solved it using an nginx proxy like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: sub1
      subdomain: my-internal-host
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - name: nginx
          containerPort: 80
          hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: sub2
      subdomain: my-internal-host
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - name: nginx
          containerPort: 80
          hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
        listen 80;
        server_name ~^(?<subdomain>.*?)\.;
        location / {
            proxy_pass http://$subdomain.my-internal-host;
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-proxy
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-proxy
        type: app
    spec:
      subdomain: my-internal-host
      containers:
      - image: nginx:alpine
        name: nginx
        volumeMounts:
        - name: nginx-config-dns-file
          mountPath: /etc/nginx/conf.d/default.conf.test
          subPath: nginx.conf
        ports:
        - name: nginx
          containerPort: 80
          hostPort: 80
      volumes:
      - name: nginx-config-dns-file
        configMap:
          name: nginx-config-dns-file
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-internal-host
spec:
  selector:
    type: app
  clusterIP: None
  ports:
  - name: sk-port
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sk-expose-dns
spec:
  ports:
  - port: 80
  selector:
    name: my-proxy
  type: LoadBalancer
I understood that I need the 'my-internal-host' service so that all the deployments can see each other internally.
The only remaining problem is the nginx proxy_pass: if I change it to 'proxy_pass http://sub1.my-internal-host;' it works, but not with the regexp variable.
The problem is related to the nginx resolver.

Related

Ingress connection refused

I want to deploy a software application with Docker and Kubernetes and I have a big issue.
I have a master node and a worker node; inside, I have a Python application running on port 5000 with its service.
I want to expose my app externally and I'm using Ingress. When I curl the nginx deployment and the nginx service I get a response, but when I curl the Ingress I get connection refused.
Thank you so much.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
  namespace: lazy-trading
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: nginx
    spec:
      containers:
      - image: nginx:1.17-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          readOnly: true
          name: nginx-conf
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
  namespace: lazy-trading
spec:
  ports:
  - name: "8094"
    port: 8094
    targetPort: 80
  selector:
    io.kompose.service: nginx
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: lazy-trading
spec:
  rules:
  - host: lazytrading.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 8094
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx-conf
  namespace: lazy-trading
data:
  nginx.conf: |
    server {
        # Lazy Trading configuration ---
        location = /api/v1/lazytrading {
            return 302 /api/v1/lazytrading/;
        }
        location /api/v1/lazytrading/ {
            proxy_pass http://{{ .Values.deployment.name }}:{{ .Values.service.ports.port }}/;
        }
    }

Ingress creating health check on HTTP instead of TCP

I am trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using Ingress so I can reach my services from different domains with SSL certificates on them.
Here is the complete manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-east4-docker.pkg.dev/web:e856485 # docker image
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
      - name: cms
        image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
        ports:
        - containerPort: 8055
        env:
        - name: DB
          value: "postgres"
        - name: DB_HOST
          value: 10.142.0.3
        - name: DB_PORT
          value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
        ports:
        - containerPort: 8080
        env:
        - name: HOST
          value: "0.0.0.0"
        - name: PORT
          value: "8080"
        - name: NODE_ENV
          value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
  - port: 8055
    protocol: TCP
    targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: api-cert
  - secretName: cms-cert
  - secretName: web-cert
  rules:
  - host: web-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: web-lb
            port:
              number: 3000
  - host: cms-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: cms-lb
            port:
              number: 8055
  - host: api-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: api-lb
            port:
              number: 8080
The containers are accessible through the (network) load balancer, but from the Ingress (L7 load balancer) the health check is failing.
I tried editing the health checks manually from HTTP:80 to TCP:8080/8055/3000 for the three services and it works.
But eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you need to recheck your implementation: from what I see, you are creating an Ingress which will create a load balancer, and this Ingress is using three Services of type LoadBalancer, each of which will also create its own load balancer (I'm assuming the default behaviour, unless you applied the famous workaround of manually deleting the Service's load balancer after it is created).
I don't think this is correct unless you need that design for some reason, so my suggestion is that you might want to change your Service types to NodePort.
As for answering your question, what you are missing is:
You need to implement a BackendConfig with custom HealthCheck configurations.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
2- Use this config in your Service(s):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {"PORT_NAME_1":"api-lb-backendconfig"}}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: TARGET_PORT
Once you apply such configurations, your Ingress's load balancer will be created with the BackendConfig "api-lb-backendconfig".
Consider this documentation page as your reference.
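For example, a filled-in sketch for the api-lb Service might look like this (the /healthz path, thresholds, and port name are assumptions; point requestPath at whatever endpoint your API actually answers 200 on):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz   # assumed health endpoint on the api container
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"api-port":"api-lb-backendconfig"}}'
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - name: api-port
    port: 8080
    protocol: TCP
    targetPort: 8080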

Unable to access app running on pod inside cluster using nginx Ingress controller

I'm using this nginx ingress controller on a Hetzner server. After installing the ingress controller, I'm able to access the worker node by its IP, but not the app running on a pod inside the cluster. Am I missing something?
Are Ingress and Traefik different? I'm a bit confused by the terminology.
service file -
apiVersion: v1
kind: Service
metadata:
  name: service-name-xxx
spec:
  selector:
    app: app-name
  ports:
  - protocol: 'TCP'
    port: 80
    targetPort: 4200
  type: LoadBalancer
deployment file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: app-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      imagePullSecrets:
      - name: my-registry-key
      containers:
      - name: container-name
        image: my-private-docker-img
        imagePullPolicy: Always
        ports:
        - containerPort: 4200
ingress file -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
spec:
  rules:
  - host:
    http:
      paths:
      - pathType: Prefix
        path: "/app"
        backend:
          service:
            name: service-name-xxx
            port:
              number: 4200
I think you have to add the kubernetes.io/ingress.class: "nginx" annotation to your Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
You have set port to 80 and targetPort to 4200 in your Service, so you should reference port 80 in your Ingress yaml:
backend:
  service:
    name: service-name-xxx
    port:
      number: 80

How to create an HTTPS route to a Service that is listening on HTTPS with Traefik and Kubernetes

I'm a newbie in Kubernetes and Traefik.
I followed this tutorial:
https://docs.traefik.io/user-guides/crd-acme/
and changed it to use my Scala service, which listens over HTTPS on port 9463.
I'm trying to deploy this Scala service with Kubernetes and Traefik.
When I forward directly to the service:
kubectl port-forward service/core-service 8001:9463
and perform a curl -k 'https://localhost:8001/health':
I get the "{Message:Ok}"
But when I port-forward to Traefik:
kubectl port-forward service/traefik 9463:9463 -n default
and perform a curl -k 'https://ejemplo.com:9463/tls/health':
I get an "Internal server error"
I guess the problem is that my "core-service" is listening over HTTPS; that's why I added scheme: https.
I tried to find the solution over the documentation but it is confusing.
Those are my yml files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v2.0
        args:
        - --api.insecure
        - --accesslog
        - --entrypoints.websecure.Address=:9463
        - --providers.kubernetescrd
        - --certificatesresolvers.default.acme.tlschallenge
        - --certificatesresolvers.default.acme.email=foo#you.com
        - --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        # Once you get things working, you should remove that whole line altogether.
        - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
        ports:
        - name: websecure
          containerPort: 9463
        - name: admin
          containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
      - name: core-service
        image: core-service:0.1.4-SNAPSHOT
        ports:
        - name: websecure
          containerPort: 9463
        livenessProbe:
          httpGet:
            port: 9463
            scheme: HTTPS
            path: /health
          initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
From the docs
A TLS router will terminate the TLS connection by default. However,
the passthrough option can be specified to set whether the requests
should be forwarded "as is", keeping all data encrypted.
In your case, SSL passthrough needs to be enabled because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
    passthrough: true

Cannot access sidecar container in Kubernetes

I have the following hello world deployment.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: hello:v0.0.1
        imagePullPolicy: Always
        args:
        - /hello
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: hello
  type: NodePort
And I have the ingress controller deployment with a sidecar container:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alb-ingress-controller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alb-ingress-controller
    spec:
      containers:
      - name: server
        image: alb-ingress-controller:v0.0.1
        imagePullPolicy: Always
        args:
        - /server
        - --ingress-class=alb
        - --cluster-name=AAA
        - --aws-max-retries=20
        - --healthz-port=10254
        ports:
        - containerPort: 10254
          protocol: TCP
      - name: alb-sidecar
        image: sidecar:v0.0.1
        imagePullPolicy: Always
        args:
        - /sidecar
        - --port=5000
        ports:
        - containerPort: 5000
          protocol: TCP
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: alb-ingress
      serviceAccount: alb-ingress
---
apiVersion: v1
kind: Service
metadata:
  name: alb-ingress-controller-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: alb-ingress-controller
  type: NodePort
And here is my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: test-alb
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-service
          servicePort: 80
      - path: /alb-sidecar
        backend:
          serviceName: alb-ingress-controller-service
          servicePort: 80
I would expect to access /alb-sidecar the same way that I access /hello, but only the /hello endpoint works for me; I keep getting 502 Bad Gateway for the /alb-sidecar endpoint. The sidecar container is just a simple web app listening on /alb-sidecar.
Do I need to do anything different when the sidecar container runs in a different namespace, or how would you run a sidecar next to the ALB ingress controller?
If you created the deployment alb-ingress-controller and the service alb-ingress-controller-service in another namespace, you need to create another Ingress resource in that same namespace:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  namespace: alb-namespace
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: alb-service
spec:
  rules:
  - http:
      paths:
      - path: /alb-sidecar
        backend:
          serviceName: alb-ingress-controller-service
          servicePort: 80