502 Bad Gateway when deploying Apollo Server application to GKE

I'm trying to deploy my Apollo Server application to my GKE cluster. However, when I visit the static IP for my site I receive a 502 Bad Gateway error. I was able to get my client to deploy properly in a similar fashion, so I'm not sure what I'm doing wrong. My deployment logs show that the server started properly, but my ingress reports the service as unhealthy, since it seems to be failing the health check.
Here is my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT_NAME>
  labels:
    app: <DEPLOYMENT_NAME>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <POD_NAME>
  template:
    metadata:
      name: <POD_NAME>
      labels:
        app: <POD_NAME>
    spec:
      serviceAccountName: <SERVICE_ACCOUNT_NAME>
      containers:
        - name: <CONTAINER_NAME>
          image: <MY_IMAGE>
          imagePullPolicy: Always
          ports:
            - containerPort: <CONTAINER_PORT>
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - '/cloud_sql_proxy'
            - '-instances=<MY_PROJECT>:<MY_DB_INSTANCE>=tcp:<MY_DB_PORT>'
          securityContext:
            runAsNonRoot: true
My service.yml
apiVersion: v1
kind: Service
metadata:
  name: <MY_SERVICE_NAME>
  labels:
    app: <MY_SERVICE_NAME>
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: <CONTAINER_PORT>
  selector:
    app: <POD_NAME>
And my ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: <INGRESS_NAME>
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <CLUSTER_NAME>
    networking.gke.io/managed-certificates: <CLUSTER_NAME>
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: <SERVICE_NAME>
              servicePort: 80
Any ideas what is causing this failure?

With Apollo Server you need the health check to look at the correct endpoint, so add the following to your deployment.yml under the container spec:
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
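To confirm that endpoint responds before relying on the probes, one option is to port-forward to the pod and curl the health check path (a sketch using this question's placeholders; substitute real values):

kubectl port-forward <POD_NAME> <CONTAINER_PORT>:<CONTAINER_PORT>
curl -i http://localhost:<CONTAINER_PORT>/.well-known/apollo/server-health
# a healthy Apollo Server responds with HTTP 200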

Related

Ingress controller gives "Service does not have any active Endpoint" only when app deployed in different namespace from ingress controller

I have an API that I am building and linking to my nginx ingress controller to be used as a load balancer. My YAML below works when I change the namespace to default, and I'm able to access the API no problem. When I change all of the below to any other namespace and install it there, I get this error from the ingress controller:
Service does not have any active Endpoint
I thought that the nginx ingress controller (deployed in the default namespace) was supposed to be able to see ingresses in all namespaces. So is there something wrong with my deployment steps below?
---
apiVersion: v1
kind: Service
metadata:
  name: demoapi
  namespace: alex
spec:
  type: ClusterIP
  selector:
    app: demoapi
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapi
  namespace: alex
  labels:
    app: demoapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demoapi
  template:
    metadata:
      labels:
        app: demoapi
    spec:
      containers:
        - name: demoapi
          image: myrepo/demoapi:latest
          ports:
            - containerPort: 8080
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /actuator/health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 3
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-buffer-size: 512k
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: demoapi
  namespace: alex
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - backend:
              serviceName: demoapi
              servicePort: 80
            path: /api/v1/hello
  tls:
    - hosts:
        - mydomain.com
      secretName: mydomain-ssl-secret
I also checked the nginx controller's config, and it's set with the default to watch all namespaces.
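As a first check, it may be worth confirming that the Service in the alex namespace actually has endpoints, since that is exactly what the error complains about (a sketch):

kubectl get endpoints demoapi -n alex
# "<none>" under ENDPOINTS means the selector matches no ready pods
kubectl get pods -n alex -l app=demoapi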

Can't configure ingress on gcloud properly

I am trying to deploy a simple app on Google Cloud. I am testing the GitLab cluster integration.
Here is my k8s YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: "my-service"
  labels:
    run: service
spec:
  type: NodePort
  selector:
    run: "service"
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-api
  namespace: "my-service"
  labels:
    run: service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
        - name: service
          image: "gcr.io/test/service:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: test
              mountPath: /usr/test
      volumes:
        - name: test
          emptyDir: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "service-ingress"
  namespace: "my-service"
  labels:
    run: service
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service
              servicePort: 9000
If I log into the pod I can curl the service on the NodePort-assigned IP, but if I try to hit the ingress address I just get an error.
I am not sure why there are two backend services on the load balancer that is created automatically; the one that points to my app shows as unhealthy.
(screenshot: load balancer backends)
You need to define a readiness probe in your pod spec, because the GKE ingress controller picks up its health check from the readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
        - name: service
          image: "gcr.io/test/service:latest"
          imagePullPolicy: IfNotPresent
          readinessProbe:
            httpGet:
              path: /healthz
              port: 9000   # the probe must target the port the container actually serves
            initialDelaySeconds: 3
            periodSeconds: 3
          ports:
            - containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: test
              mountPath: /usr/test
      volumes:
        - name: test
          emptyDir: {}
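If the application cannot serve a 200 on a path through its serving port, GKE alternatively allows attaching a custom health check to the Service via a BackendConfig; a minimal sketch, assuming a /healthz endpoint on port 9000 (the BackendConfig name is illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: service-backendconfig
  namespace: "my-service"
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz
    port: 9000

The Service then references it with the annotation cloud.google.com/backend-config: '{"default": "service-backendconfig"}'.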

Google Cloud Platform (GCP) Ingress unhealthy backend

I have the following deployment running in Google Cloud Platform (GCP):
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mybackend
  labels:
    app: backendweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backendweb
      tier: web
  template:
    metadata:
      labels:
        app: backendweb
        tier: web
    spec:
      containers:
        - name: mybackend
          image: eu.gcr.io/teststuff/backend:latest
          ports:
            - containerPort: 8081
This uses the following service:
apiVersion: v1
kind: Service
metadata:
  name: mybackend
  labels:
    app: backendweb
spec:
  type: NodePort
  selector:
    app: backendweb
    tier: web
  ports:
    - port: 8081
      targetPort: 8081
Which uses this ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: backend-ip
  labels:
    app: backendweb
spec:
  backend:
    serviceName: mybackend
    servicePort: 8081
When I spin all this up in Google Cloud Platform, however, I get the following error message on my ingress:
All backend services are in UNHEALTHY state
I've looked through my pod logs and found no indication of the problem. Grateful for any advice!
Most likely this problem is caused by your pod not returning 200 on the route '/'. Please check your pod configuration. If you don't want to return 200 at the route '/', you can add a readiness probe for the health check like this:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8081   # the probe should target the port the container actually serves
  initialDelaySeconds: 5
  periodSeconds: 5
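To see how the load balancer judges the backend, inspecting the ingress can help; on GKE, the ingress.kubernetes.io/backends annotation reports each backend's health state (a sketch; the ingress in this question is named ingress):

kubectl describe ingress ingress
# look for the ingress.kubernetes.io/backends annotation,
# which maps each backend service to HEALTHY or UNHEALTHY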

How to setup HTTPS load balancer in kubernetes

I have a requirement to make my application support requests over HTTPS and block the HTTP port. I want to use a certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to set up HTTPS in GKE. I have seen a couple of pieces of documentation, but they are not clear. This is my current Kubernetes deployment file. Please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:8080",
            "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
            "--rollout_strategy=managed",
          ]
        - name: integeration-container
          image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 500M
          env:
            - name: LOGGING_FILE
              value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
You have to create a Secret that contains your SSL certificate and then reference that Secret in your ingress spec, as explained in the Kubernetes Ingress TLS documentation.
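A minimal sketch of that approach, assuming the company certificate and key have been exported as PEM files (a JKS keystore would first need converting, e.g. via keytool to PKCS12 and then openssl to PEM). Create the secret with kubectl create secret tls oms-tls-secret --cert=tls.crt --key=tls.key, then reference it in the ingress (the secret name is illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - secretName: oms-tls-secret
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80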

Configure Ingress Kubernetes - accessible only on single node

I set up ingress on my Kubernetes cluster running on VMware virtual machines, following the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After creating the ingress and controllers, I need to get the IP where the nginx-controller runs:
nginx-ingress-rc-kgfmd 1/1 Running 0 21h 172.16.5.5 x.x.x.12
So it usually runs on either x.x.x.12 or x.x.x.13, and when I do this it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 https://master.federated.fds/coffee
where master.federated.fds is the DNS resolvable name of Master.
I need to make it work without the help of the IP address, with only the DNS-resolvable name, or at least with any of the node IPs.
E.g. http://node2.federated.fds/coffee; when I curl this I get a Connection refused error.
Updating with specifications
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
    - host: jciamaster.federated.fds
      http:
        paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
nginx ingress controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:0.8.1
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - containerPort: 80
              hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs and not on any other node. Could someone please let me know how to access the application through all node IPs, or through a URL like jciamaster.federated.fds?
Thanks,
Update:
I tried to run the nginx controller with a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:0.8.1
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - containerPort: 80
          # Uncomment the lines below to enable extensive logging and/or customization of
          # NGINX configuration with configmaps
          #args:
          #- -v=3
          #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.:30000/coffee it just hangs and does nothing. Anything I am doing wrong?
You can expose the nginx controller Pod with a NodePort Service, then you can access it on all nodes.
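For reference, a minimal sketch of such a Service; note the selector has to match the pods' label, which in the ReplicationController above is app: nginx-ingress (the Service in the update selects name: nginx-ingress, which matches no pods):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ing-svc
spec:
  type: NodePort
  selector:
    app: nginx-ingress   # must match the pod template's labels
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30000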