Adding websockets/port 6001 to Kubernetes Ingress deployed via Helm - Connection Refused - kubernetes

We currently have a multi-tenant backend Laravel application, with Pusher websockets enabled on the same app. The application is built into a Docker image, hosted on the DigitalOcean container registry, and deployed via Helm to our Kubernetes cluster.
We also have a frontend application built in Angular that connects to the backend over port 80 on the /ws/ path to establish a websocket connection.
When we try to access tenant1.example.com/ws/ we get a 502 gateway error, which suggests the ports aren't mapping correctly, yet tenant1.example.com on port 80 works just fine.
Our Helm chart YAML is as follows:
NAME: tenant1
LAST DEPLOYED: Fri Dec 11 14:34:00 2020
NAMESPACE: tenants
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
subdomain: tenant1
COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: true
  maxReplicas: 1
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  enabled: true
  hosts:
  - host: example.com
    pathType: Prefix
  tls: []
migrate:
  enabled: true
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources:
  requests:
    cpu: 10m
rootDB: public
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
setup:
  enabled: true
subdomain: tenant1
tolerations: []
---
# Source: backend-api/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant1-backend-api
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
    name: 'http'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-ws-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
  - port: 6001
    targetPort: 6001
    name: 'websocket'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant1-backend-api-deployment
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app: tenant1-backend-api-deployment
  template:
    metadata:
      labels:
        app: tenant1-backend-api-deployment
      namespace: tenants
    spec:
      containers:
      - name: backend-api
        image: "registry.digitalocean.com/rock/backend-api:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 6001
        resources:
          requests:
            cpu: 10m
        env:
        - name: CONTAINER_ROLE
          value: "backend-api"
        - name: DB_CONNECTION
          value: "pgsql"
        - name: DB_DATABASE
          value: tenant1
        - name: DB_HOST
          valueFrom:
            secretKeyRef:
              name: postgresql-database-creds
              key: DB_HOST
        - name: DB_PORT
          valueFrom:
            secretKeyRef:
              name: postgresql-database-creds
              key: DB_PORT
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: postgresql-database-creds
              key: DB_USERNAME
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgresql-database-creds
              key: DB_PASSWORD
---
# Source: backend-api/templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: tenant1-backend-api-hpa
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tenant1-backend-api-deployment
  minReplicas: 1
  maxReplicas: 1
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
---
# Source: backend-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant1-backend-api-ingress
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: tenant1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant1-backend-api-service
            port:
              number: 80
      - path: /ws/
        pathType: Prefix
        backend:
          service:
            name: tenant1-backend-api-ws-service
            port:
              number: 6001
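
For reference, the nginx ingress controller proxies WebSocket upgrades by default, but two things in the manifest above are worth a second look: the nginx.ingress.kubernetes.io/rewrite-target: / annotation rewrites /ws/... to / before the request is proxied to port 6001, and long-lived websocket connections need generous proxy timeouts. Below is a minimal sketch of a separate Ingress for the websocket path only; the timeout values are examples, and dropping the rewrite annotation is an assumption about what the Pusher-compatible server on 6001 expects:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant1-backend-api-ws-ingress   # hypothetical name
  namespace: tenants
  annotations:
    kubernetes.io/ingress.class: nginx
    # no rewrite-target here, so the /ws/ prefix reaches the backend unchanged
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - host: tenant1.example.com
    http:
      paths:
      - path: /ws/
        pathType: Prefix
        backend:
          service:
            name: tenant1-backend-api-ws-service
            port:
              number: 6001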

Related

Ingress connection refused

I want to deploy a software application with Docker and Kubernetes and I have a big issue.
I have a master node and a worker node; on them, I have a Python application running on port 5000 with its Service.
I want to expose my app externally and I'm using an Ingress. When I curl the nginx Deployment or the nginx Service I get a response, but when I curl the Ingress I get connection refused.
Thank you so much.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
  namespace: lazy-trading
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: nginx
    spec:
      containers:
      - image: nginx:1.17-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          readOnly: true
          name: nginx-conf
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
          items:
          - key: nginx.conf
            path: nginx.conf
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
  namespace: lazy-trading
spec:
  ports:
  - name: "8094"
    port: 8094
    targetPort: 80
  selector:
    io.kompose.service: nginx
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: lazy-trading
spec:
  rules:
  - host: lazytrading.local
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 8094
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx-conf
  namespace: lazy-trading
data:
  nginx.conf: |
    server {
      # Lazy Trading configuration ---
      location = /api/v1/lazytrading {
        return 302 /api/v1/lazytrading/;
      }
      location /api/v1/lazytrading/ {
        proxy_pass http://{{ .Values.deployment.name }}:{{ .Values.service.ports.port }}/;
      }
    }
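
Worth checking first (an assumption, since the controller deployment isn't shown): connection refused at the Ingress address usually means no ingress controller is actually listening, i.e. no controller has claimed this Ingress. A sketch of the same Ingress on the current networking.k8s.io/v1 API with an explicit class, assuming an installed ingress-nginx controller registered under the class name nginx:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: lazy-trading
spec:
  ingressClassName: nginx   # assumes a controller registered under this class
  rules:
  - host: lazytrading.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 8094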

Can't access StatefulSet DBs; assumption: haproxy-ingress YAML is incorrect

So I'm trying to raise three instances of RavenDB and I can't access them, although, checking the logs, they are running fine in a secured manner with the certificate.
The deployment is done within GKE.
The certificate was generated using LetsEncrypt and is bound to the external IP of HAProxy.
I have no idea what the issue is.
kubectl describe ingress command result:
kubectl describe ingress
Name: ravendb
Labels: app=ravendb
Namespace: default
Address: 34.111.56.107
Default backend: default-http-backend:80 (10.80.1.5:8080)
Rules:
Host Path Backends
---- ---- --------
a.example.development.run
/ ravendb-0:443 (10.80.0.14:443)
tcp-a.example.development.run
/ ravendb-0:38888 (10.80.0.14:38888)
b.example.development.run
/ ravendb-1:443 (10.80.0.12:443)
tcp-b.example.development.run
/ ravendb-1:38888 (10.80.0.12:38888)
c.example.development.run
/ ravendb-2:443 (10.80.0.13:443)
tcp-c.example.development.run
/ ravendb-2:38888 (10.80.0.13:38888)
Annotations: ingress.kubernetes.io/backends:
{"k8s-be-32116--bad4c61c2f1d097c":"HEALTHY","k8s1-bad4c61c-default-ravendb-0-38888-31a8aae1":"UNHEALTHY","k8s1-bad4c61c-default-ravendb-0-...
ingress.kubernetes.io/forwarding-rule: k8s2-fr-pocrmcsc-default-ravendb-gtrvt7cq
ingress.kubernetes.io/ssl-passthrough: true
ingress.kubernetes.io/target-proxy: k8s2-tp-pocrmcsc-default-ravendb-gtrvt7cq
ingress.kubernetes.io/url-map: k8s2-um-pocrmcsc-default-ravendb-gtrvt7cq
these are the yaml files:
HAPROXY
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --reload-strategy=reusesocket
        ports:
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: ingress-controller
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: stat
    port: 1936
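
One mismatch stands out in the manifests above: the haproxy-ingress Service selects pods with app: ingress-controller, while the haproxy-ingress Deployment labels its pods run: haproxy-ingress, so the LoadBalancer ends up with no endpoints. A sketch of the Service with a matching selector (only the selector changes):

apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  selector:
    run: haproxy-ingress   # must match the Deployment's pod labels
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: stat
    port: 1936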
RAVENDB
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ravendb-settings
  namespace: default
  labels:
    app: ravendb
data:
  ravendb-0: >
    {
    "Setup.Mode": "None",
    "DataDir": "/data/RavenData",
    "Security.Certificate.Path": "/ssl/ssl",
    "ServerUrl": "https://0.0.0.0",
    "ServerUrl.Tcp": "tcp://0.0.0.0:38888",
    "PublicServerUrl": "https://a.example.development.run",
    "PublicServerUrl.Tcp": "tcp://tcp-a.example.development.run:38888",
    "License.Path": "/license/license.json",
    "License.Eula.Accepted": "true"
    }
  ravendb-1: >
    {
    "Setup.Mode": "None",
    "DataDir": "/data/RavenData",
    "Security.Certificate.Path": "/ssl/ssl",
    "ServerUrl": "https://0.0.0.0",
    "ServerUrl.Tcp": "tcp://0.0.0.0:38888",
    "PublicServerUrl": "https://b.example.development.run",
    "PublicServerUrl.Tcp": "tcp://tcp-b.example.development.run:38888",
    "License.Path": "/license/license.json",
    "License.Eula.Accepted": "true"
    }
  ravendb-2: >
    {
    "Setup.Mode": "None",
    "DataDir": "/data/RavenData",
    "Security.Certificate.Path": "/ssl/ssl",
    "ServerUrl": "https://0.0.0.0",
    "ServerUrl.Tcp": "tcp://0.0.0.0:38888",
    "PublicServerUrl": "https://c.example.development.run",
    "PublicServerUrl.Tcp": "tcp://tcp-c.example.development.run:38888",
    "License.Path": "/license/license.json",
    "License.Eula.Accepted": "true"
    }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ravendb
  namespace: default
  labels:
    app: ravendb
spec:
  serviceName: ravendb
  template:
    metadata:
      labels:
        app: ravendb
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - /opt/RavenDB/Server/Raven.Server --config-path /config/$HOSTNAME
        image: ravendb/ravendb:latest
        imagePullPolicy: Always
        name: ravendb
        ports:
        - containerPort: 443
          name: http-api
          protocol: TCP
        - containerPort: 38888
          name: tcp-server
          protocol: TCP
        - containerPort: 161
          name: snmp
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /ssl
          name: ssl
        - mountPath: /license
          name: license
        - mountPath: /config
          name: config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
      - name: ssl
        secret:
          defaultMode: 420
          secretName: ravendb-ssl
      - configMap:
          defaultMode: 420
          name: ravendb-settings
        name: config
      - name: license
        secret:
          defaultMode: 420
          secretName: ravendb-license
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  replicas: 3
  selector:
    matchLabels:
      app: ravendb
  volumeClaimTemplates:
  - metadata:
      labels:
        app: ravendb
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ravendb
  namespace: default
  labels:
    app: ravendb
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: a.example.development.run
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ravendb-0
            port:
              number: 443
  - host: tcp-a.example.development.run
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ravendb-0
            port:
              number: 38888
  - host: b.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-1
            port:
              number: 443
  - host: tcp-b.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-1
            port:
              number: 38888
  - host: c.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-2
            port:
              number: 443
  - host: tcp-c.example.development.run
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ravendb-2
            port:
              number: 38888
---
apiVersion: v1
kind: Service
metadata:
  name: ravendb-0
  namespace: default
  labels:
    app: ravendb
    node: "0"
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  - name: http-api
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-server
    port: 38888
    protocol: TCP
    targetPort: 38888
  - name: snmp
    port: 161
    protocol: TCP
    targetPort: 161
  selector:
    app: ravendb
    statefulset.kubernetes.io/pod-name: ravendb-0
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ravendb-1
  namespace: default
  labels:
    app: ravendb
    node: "1"
spec:
  ports:
  - name: http-api
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-server
    port: 38888
    protocol: TCP
    targetPort: 38888
  - name: snmp
    port: 161
    protocol: TCP
    targetPort: 161
  selector:
    app: ravendb
    statefulset.kubernetes.io/pod-name: ravendb-1
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: ravendb-2
  namespace: default
  labels:
    app: ravendb
    node: "2"
spec:
  ports:
  - name: http-api
    port: 443
    protocol: TCP
    targetPort: 443
  - name: tcp-server
    port: 38888
    protocol: TCP
    targetPort: 38888
  - name: snmp
    port: 161
    protocol: TCP
    targetPort: 161
  selector:
    app: ravendb
    statefulset.kubernetes.io/pod-name: ravendb-2
  type: ClusterIP
SECRETS
apiVersion: v1
kind: Secret
metadata:
  name: ravendb-license
  namespace: default
  labels:
    app: ravendb
type: Opaque
data:
  license.json: >
---
apiVersion: v1
kind: Secret
metadata:
  name: ravendb-ssl
  namespace: default
  labels:
    app: ravendb
type: Opaque
data:
  ssl: >
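
Also note that the kubectl describe output above shows GCE ingress annotations (ingress.kubernetes.io/url-map, target-proxy, forwarding-rule), which suggests GKE's built-in ingress controller claimed this Ingress rather than HAProxy. A sketch of pinning it to the HAProxy controller; the class name haproxy is the usual default for jcmoraisjr/haproxy-ingress, but verify it against your controller's configuration. Only the first rule is repeated here:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ravendb
  namespace: default
  labels:
    app: ravendb
  annotations:
    kubernetes.io/ingress.class: haproxy   # assumed class name; keeps the GCE controller from claiming it
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: a.example.development.run
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ravendb-0
            port:
              number: 443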

Ingress endpoint displays a blank page with response 200 on GKE

Being completely new to Google Cloud, and almost new to Kubernetes, I struggled my whole weekend trying to deploy my app in GKE.
My app consists of a React frontend, a Node.js backend, a PostgreSQL database (connected to the backend with a cloudsql-proxy) and Redis.
I serve the frontend and backend with an Ingress. Everything seems to be working: all my pods are running, and the ingress-nginx exposes the endpoint of my app. But when I open it, instead of seeing my app, I see a blank page with a 200 response. And when I do kubectl logs MY_POD, I can see that my React app is running.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: superflix-ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: superflix-ui-node-service
          servicePort: 3000
      - path: /graphql/*
        backend:
          serviceName: superflix-backend-node-service
          servicePort: 4000
Here is my backend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-backend-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 4000
    targetPort: 4000
    # protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-backend-deployment
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: superflix-backend
        image: gcr.io/superflix-project/superflix-server:v6
        ports:
        - containerPort: 4000
        # The following environment variables will contain the database host,
        # user and password to connect to the PostgreSQL instance.
        env:
        - name: REDIS_HOST
          value: superflix-redis.default.svc.cluster.local
        - name: IN_PRODUCTION
          value: "true"
        - name: POSTGRES_DB_HOST
          value: "127.0.0.1"
        - name: POSTGRES_DB_PORT
          value: "5432"
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-env-secrets
              key: REDIS_PASS
        # [START cloudsql_secrets]
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # [END cloudsql_secrets]
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=superflix-project:europe-west3:superflix-db=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      # [END volumes]
And here is my frontend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-ui-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 3000
    targetPort: 3000
    # protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-ui-deployment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: superflix-ui
        image: gcr.io/superflix-project/superflix-ui:v4
        ports:
        - containerPort: 3000
        env:
        - name: IN_PRODUCTION
          value: 'true'
        - name: BACKEND_HOST
          value: superflix-backend-node-service
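
A side observation on the manifests above: both Services use selector app: app, and both Deployments label their pods app: app, so each Service matches pods from both Deployments. A sketch with a disambiguated label (the label value is hypothetical and would have to be mirrored in the UI Deployment's pod template):

kind: Service
apiVersion: v1
metadata:
  name: superflix-ui-node-service
spec:
  type: NodePort
  selector:
    app: superflix-ui   # hypothetical; set the same label in the UI Deployment's pod template
  ports:
  - port: 3000
    targetPort: 3000
    name: http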
EDIT:
When I look at the stackdriver logs of my nginx-ingress-controller I have warnings:
Service "default/superflix-ui" does not have any active Endpoint.
Service "default/superflix-backend" does not have any active Endpoint.
I actually found the issue: I changed the Ingress path from /* to /, and now it is working perfectly.
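In other words, the fix described was roughly this (a sketch; whether /graphql/* was shortened the same way is an assumption):

spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: superflix-ui-node-service
          servicePort: 3000
      - path: /graphql
        backend:
          serviceName: superflix-backend-node-service
          servicePort: 4000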

Setting up LetsEncrypt HTTPS Traefik Ingress for Kubernetes Cluster

I've set up Kubernetes to use the Traefik Ingress to provide name-based routing. I am a little lost in terms of how to configure automatic LetsEncrypt SSL certs. How do I reference the TOML files and configure for HTTPS? I am using a simple container below with the NGINX image to test this.
The below is my YAML for the deployment/service/ingress.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nginx:latest
        envFrom:
        - configMapRef:
            name: config
        ports:
        - containerPort: 80
I have also included my ingress.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
You could build a custom image and include the toml file that way; however, that would NOT be best practice. Here's how I did it:
1) Deploy your toml configuration to Kubernetes as a ConfigMap, like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-traefik
  labels:
    app: traefik
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
    [acme]
    email = "you@email.com"
    storage = "/storage/acme.json"
    entryPoint = "https"
    acmeLogging = true
    onHostRule = true
      [acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dpl-traefik
  labels:
    k8s-app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik
  template:
    metadata:
      labels:
        k8s-app: traefik
        name: traefik
    spec:
      serviceAccountName: svc-traefik
      terminationGracePeriodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: cfg-traefik
      - name: cert-storage
        persistentVolumeClaim:
          claimName: pvc-traefik
      containers:
      - image: traefik:alpine
        name: traefik
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        - mountPath: "/storage"
          name: cert-storage
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --configFile=/config/traefik.toml
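
One follow-up on the deployment above: with the https entrypoint on ":443", the Service in front of Traefik also has to expose port 443 (the traefik-ingress-service in the question only exposes 80 and 8080). A sketch, assuming the pod labels from the dpl-traefik deployment:

kind: Service
apiVersion: v1
metadata:
  name: svc-traefik-lb   # hypothetical name
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik
  ports:
  - name: http
    protocol: TCP
    port: 80
  - name: https
    protocol: TCP
    port: 443
  - name: admin
    protocol: TCP
    port: 8080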

I can't connect my ingress with my service

I have a problem with my ingress and my service: when I connect to the IP of my server, I am not redirected to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_PASS
          value: pass
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: user
        image: mysql:5.7
        name: mysql
        ports:
        - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
      - env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: DB_DATABASE
          value: bookstack
        - name: DB_HOST
          value: mysql
        - name: DB_PASSWORD
          value: root
        - name: DB_USERNAME
          value: root
        image: solidnerd/bookstack:latest
        name: bookstack
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name: http
Namespace: bookstack
Address:
Default backend: bookstack:http-port (10.36.0.22:80)
Rules:
Host Path Backends
---- ---- --------
* * bookstack:http-port (10.36.0.22:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
It doesn't return any external IP to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I hadn't activated the load balancer that Google Kubernetes Engine offers by default; without it active, I couldn't get an external IP because there was no balancer. There are two solutions: either activate GKE's default load balancer or create a Service of type LoadBalancer.
It is also important to configure the readinessProbe and livenessProbe within the deployment.
An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, specifically because a NodePort Service is exposed on that port on every node in your cluster. So, essentially, you would have to point an external load balancer or the traffic source at each of the nodes on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that have pods for your service will reply.
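
For completeness, a minimal sketch of the LoadBalancer alternative mentioned above (switching the bookstack Service's type so GKE provisions an external IP):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: LoadBalancer   # GKE provisions an external IP for this
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack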