I can't connect my ingress with my service - kubernetes

I have a problem with my ingress and my service: when I connect to the IP of my server, I don't get redirected to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_PASS
          value: pass
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: user
        image: mysql:5.7
        name: mysql
        ports:
        - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
      - env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: DB_DATABASE
          value: bookstack
        - name: DB_HOST
          value: mysql
        - name: DB_PASSWORD
          value: root
        - name: DB_USERNAME
          value: root
        image: solidnerd/bookstack:latest
        name: bookstack
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears on my ingress:
Name:             http
Namespace:        bookstack
Address:
Default backend:  bookstack:http-port (10.36.0.22:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     bookstack:http-port (10.36.0.22:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events:  <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as a service type.

The main problem was that I hadn't enabled the load balancer that Google Kubernetes Engine offers by default; without it active, no external IP could be assigned because there was no balancer to provide one. There are two solutions: either enable GKE's default load balancer or create a Service of type LoadBalancer.
It is also important to define a readinessProbe and a livenessProbe inside the deployment. An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
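As for the second solution, a minimal sketch of what the bookstack Service could look like with type LoadBalancer (the port name and labels are taken from the manifests above; whether an external IP actually gets provisioned depends on your cloud environment):
apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: LoadBalancer   # asks the cloud provider for an external IP
  ports:
  - name: http-port
    port: 80
    targetPort: 80
  selector:
    app: bookstack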

There wouldn't be an external IP, because a NodePort service is exposed on that specific port on every node of your cluster. So, essentially, you would have to point an external load balancer or your traffic source at each of the nodes in your cluster on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that have pods for your service will reply.
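If you do stay with NodePort, a hedged sketch of what that can look like on the Service (the explicit nodePort value is just an example, and externalTrafficPolicy: Local is optional, included only to illustrate the note above):
apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only nodes running bookstack pods will answer
  ports:
  - name: http-port
    port: 80
    targetPort: 80
    nodePort: 30080              # example value from the default 30000-32767 range
  selector:
    app: bookstack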


Ingress creating health check on HTTP instead of TCP

I am actually trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using an ingress so I can reach my services from different domains, with SSL certs on them.
Here is the complete manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-east4-docker.pkg.dev/web:e856485 # docker image
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
      - name: cms
        image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
        ports:
        - containerPort: 8055
        env:
        - name: DB
          value: "postgres"
        - name: DB_HOST
          value: 10.142.0.3
        - name: DB_PORT
          value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
        ports:
        - containerPort: 8080
        env:
        - name: HOST
          value: "0.0.0.0"
        - name: PORT
          value: "8080"
        - name: NODE_ENV
          value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
  - port: 8055
    protocol: TCP
    targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: api-cert
  - secretName: cms-cert
  - secretName: web-cert
  rules:
  - host: web-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: web-lb
            port:
              number: 3000
  - host: cms-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: cms-lb
            port:
              number: 8055
  - host: api-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: api-lb
            port:
              number: 8080
The containers are accessible through the (network) load balancer, but from the ingress (L7 LB) the health check is failing.
I tried editing the health checks manually from HTTP:80 to TCP:8080/8055/3000 for the 3 services, and it works.
But eventually the ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you need to recheck your implementation, because from what I see you are creating an Ingress (which will create a load balancer), and this Ingress is using three services of type LoadBalancer, each of which will also create its own load balancer (I'm assuming the default behaviour, unless you applied the famous workaround of deleting the service's load balancer manually after it is created).
I don't think this is correct unless you need that design for some reason, so my suggestion is that you might want to change your service types to NodePort.
As for answering your question, what you are missing is: you need to implement a BackendConfig with custom health-check configurations.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
2- Use this config in your service(s):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {"PORT_NAME_1":"api-lb-backendconfig"}}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: TARGET_PORT
Once you apply such configurations, your Ingress's load balancer will be created with the BackendConfig "api-lb-backendconfig".
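For instance, a filled-in version for the api-lb service might look like the sketch below; the /healthz request path, the port name, and the interval values are assumptions you would adapt to whatever your API container actually serves:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15        # assumed values, tune for your app
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz       # hypothetical health endpoint on the api container
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"api-port":"api-lb-backendconfig"}}'
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - name: api-port              # the port name referenced in the backend-config annotation
    port: 8080
    protocol: TCP
    targetPort: 8080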
Consider this documentation page as your reference.

Cannot connect to my MiniKube external service ip/port?

I have a mongo YAML and a web-app (NodeJS) YAML set up like this:
mongo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
mongo-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  # blueprint for pods, creates pods with mongo:5.0 image
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
spec:
  selector:
    app: mongo
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 27017
and the webapp.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  # blueprint for pods, creates pods with the webapp image
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app:v1.0
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-user
        - name: USER_PWD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: mongo-password
        - name: DB_URL
          valueFrom:
            configMapKeyRef:
              name: mongo-config
              key: mongo-url
---
# kind: service
# name: any
# selector: select pods to forward the requests to
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  # default ClusterIP
  # nodeport = external service
  type: NodePort
  selector:
    app: webapp
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30100
I ran the command for each file:
kubectl apply -f
I checked the status of the webapp, which returned:
app listening on port 3000!
I got the IP address with
minikube ip
and the port was 30100.
Why can I not access this web app? I get a "site can't be reached" error.
If you are on Mac, check your minikube driver. I had to stop and delete minikube, then restart it while specifying the hyperkit driver, like so:
minikube stop
minikube delete
minikube start --vm-driver=hyperkit
The information listed here is pretty useful too.

Connect to Postgresql from inside kubernetes cluster

I set up a series of VMs at 192.168.2.(100, 101, 104, 105), where the Kubernetes master is on .100 and the two workers are on .101 and .104. I also set up Postgres on 192.168.2.105 and followed this tutorial, but it is still unreachable from inside the cluster. I tried it with minikube inside a test VM, where minikube and Postgres were installed on the same VM, and it worked just fine.
I changed the Postgres config file to listen on * instead of localhost, and allowed 0.0.0.0/0 in pg_hba.conf.
I installed postgresql-12 and postgresql-client-12 on the VM 192.168.2.105 (port 5432), and then added a headless service to Kubernetes, as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 192.168.2.105
  ports:
  - port: 5432
In my deployment I am defining this to access the database:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:11.0.0
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: 'my-service:5432'
        - name: DB_DATABASE
          value: postgres
        - name: DB_PASSWORD
          value: admin
        - name: DB_SCHEMA
          value: public
        - name: DB_USER
          value: postgres
        - name: DB_VENDOR
          value: POSTGRES
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
What i am doing wrong here ?
The first thing we have to do is create the headless service with custom endpoint. The IP in my solution is only specific for my machine.
Endpoint with service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service
subsets:
- addresses:
  - ip: 192.168.2.105
  ports:
  - port: 5432
As for my particular setup, I haven't defined any ingress or load balancer, so I changed the service type from LoadBalancer to NodePort after it was deployed.
Then I deployed Keycloak with the following .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: https
    port: 8443
    targetPort: 8443
  selector:
    app: keycloak
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:11.0.0
        env:
        - name: KEYCLOAK_USER
          value: "admin" # TODO give username for master realm
        - name: KEYCLOAK_PASSWORD
          value: "admin" # TODO give password for master realm
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: # <Node-IP>:<LoadBalancer-Port/ NodePort>
        - name: DB_DATABASE
          value: "keycloak" # Database to use
        - name: DB_PASSWORD
          value: "admin" # Database password
        - name: DB_SCHEMA
          value: public
        - name: DB_USER
          value: "postgres" # Database user
        - name: DB_VENDOR
          value: POSTGRES
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
After filling in all the required values, Keycloak connects successfully to the Postgres server that is hosted on another machine, away from the Kubernetes master and worker nodes!

How can I use ingress with HAProxy on a Kubernetes cluster?

I want to use ingress with HAProxy in my Kubernetes cluster; how should I use it?
I have tried it on my local system: I deployed the HAProxy ingress controller in a different namespace, but I randomly get 503 errors from the HAProxy pod that was created.
Try this.
Default backend:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
spec:
  ports:
  - name: port-1
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: ingress-default-backend
HAProxy ingress controller:
apiVersion: v1
data:
  dynamic-scaling: "true"
  backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
  name: haproxy-configmap
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=default/ingress-default-backend
        - --default-ssl-certificate=default/tls-secret
        - --configmap=$(POD_NAMESPACE)/haproxy-configmap
        - --reload-strategy=native
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  externalIPs:
  - 172.17.0.50
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 80
  - name: port-2
    port: 443
    protocol: TCP
    targetPort: 443
  - name: port-3
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    run: haproxy-ingress
Update externalIPs as per your environment.
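Once the controller is running, you point an Ingress at it via the ingress class annotation. A minimal sketch, assuming a Service named my-app that exposes your application on port 80 (the name, host, and backend service are placeholders to replace with your own):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress                       # hypothetical name
  annotations:
    kubernetes.io/ingress.class: haproxy     # route this Ingress through the HAProxy controller
spec:
  rules:
  - host: app.example.com                    # replace with your domain
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app                # hypothetical backend service
          servicePort: 80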

Ingress endpoint displays a blank page with response 200 on GKE

Being completely new to Google Cloud, and almost new to Kubernetes, I struggled my whole weekend trying to deploy my app to GKE.
My app consists of a React frontend, a Node.js backend, a PostgreSQL database (connected to the backend with a cloudsql-proxy) and Redis.
I serve the frontend and backend with an Ingress; everything seems to be working, and all my pods are running. The ingress-nginx exposes the endpoint of my app, but when I open it, instead of seeing my app I see a blank page with a 200 response. And when I do kubectl logs MY_POD, I can see that my React app is running.
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: superflix-ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: superflix-ui-node-service
          servicePort: 3000
      - path: /graphql/*
        backend:
          serviceName: superflix-backend-node-service
          servicePort: 4000
Here is my backend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-backend-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 4000
    targetPort: 4000
    # protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-backend-deployment
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: superflix-backend
        image: gcr.io/superflix-project/superflix-server:v6
        ports:
        - containerPort: 4000
        # The following environment variables will contain the database host,
        # user and password to connect to the PostgreSQL instance.
        env:
        - name: REDIS_HOST
          value: superflix-redis.default.svc.cluster.local
        - name: IN_PRODUCTION
          value: "true"
        - name: POSTGRES_DB_HOST
          value: "127.0.0.1"
        - name: POSTGRES_DB_PORT
          value: "5432"
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis-env-secrets
              key: REDIS_PASS
        # [START cloudsql_secrets]
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # [END cloudsql_secrets]
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=superflix-project:europe-west3:superflix-db=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2 # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      # [END volumes]
And here is my frontend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-ui-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
  - port: 3000
    targetPort: 3000
    # protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-ui-deployment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: superflix-ui
        image: gcr.io/superflix-project/superflix-ui:v4
        ports:
        - containerPort: 3000
        env:
        - name: IN_PRODUCTION
          value: 'true'
        - name: BACKEND_HOST
          value: superflix-backend-node-service
EDIT:
When I look at the Stackdriver logs of my nginx-ingress-controller, I see these warnings:
Service "default/superflix-ui" does not have any active Endpoint.
Service "default/superflix-backend" does not have any active Endpoint.
I actually found what the issue was: I changed the ingress path from /* to /, and now it is working perfectly.
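For reference, the rules section with that change applied would look roughly like this (only the UI path is changed, since that is the path the fix refers to; the /graphql path is left as in the original manifest):
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: superflix-ui-node-service
          servicePort: 3000
      - path: /graphql/*
        backend:
          serviceName: superflix-backend-node-service
          servicePort: 4000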