Being completely new to Google Cloud, and almost new to Kubernetes, I struggled all weekend trying to deploy my app in GKE.
My app consists of a React frontend, a Node.js backend, a PostgreSQL database (connected to the backend with a cloudsql-proxy) and Redis.
I serve the frontend and backend with an Ingress; everything seems to be working and all my pods are running. The ingress-nginx exposes the endpoint of my app, but when I open it, instead of seeing my app I see a blank page with a 200 response. And when I do kubectl logs MY_POD, I can see that my React app is running.
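One way to rule out the app itself is to bypass the ingress with a port-forward (a sketch; the deployment name is taken from the manifests below):
# Forward local port 3000 straight to the UI deployment, skipping the ingress
kubectl port-forward deployment/superflix-ui-deployment 3000:3000
# In another shell, fetch the page directly
curl -i http://localhost:3000/
If this returns the app while the ingress URL returns a blank page, the problem is in the ingress routing rather than the pod.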
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: superflix-ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: superflix-ui-node-service
              servicePort: 3000
          - path: /graphql/*
            backend:
              serviceName: superflix-backend-node-service
              servicePort: 4000
Here is my backend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-backend-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
    - port: 4000
      targetPort: 4000
      # protocol: TCP
      name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-backend-deployment
  namespace: default
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: superflix-backend
          image: gcr.io/superflix-project/superflix-server:v6
          ports:
            - containerPort: 4000
          # The following environment variables will contain the database host,
          # user and password to connect to the PostgreSQL instance.
          env:
            - name: REDIS_HOST
              value: superflix-redis.default.svc.cluster.local
            - name: IN_PRODUCTION
              value: "true"
            - name: POSTGRES_DB_HOST
              value: "127.0.0.1"
            - name: POSTGRES_DB_PORT
              value: "5432"
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis-env-secrets
                  key: REDIS_PASS
            # [START cloudsql_secrets]
            - name: POSTGRES_DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: POSTGRES_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            # [END cloudsql_secrets]
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=superflix-project:europe-west3:superflix-db=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
And here is my frontend:
kind: Service
apiVersion: v1
metadata:
  name: superflix-ui-node-service
spec:
  type: NodePort
  selector:
    app: app
  ports:
    - port: 3000
      targetPort: 3000
      # protocol: TCP
      name: http
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: superflix-ui-deployment
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: superflix-ui
          image: gcr.io/superflix-project/superflix-ui:v4
          ports:
            - containerPort: 3000
          env:
            - name: IN_PRODUCTION
              value: 'true'
            - name: BACKEND_HOST
              value: superflix-backend-node-service
EDIT:
When I look at the Stackdriver logs of my nginx-ingress-controller, I see these warnings:
Service "default/superflix-ui" does not have any active Endpoint.
Service "default/superflix-backend" does not have any active Endpoint.
I actually found the issue: I changed the ingress paths from /* to /, and now it is working perfectly.
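For reference, a minimal sketch of the corrected rules (same services and ports as before, with the more specific path listed first):
spec:
  rules:
    - http:
        paths:
          - path: /graphql
            backend:
              serviceName: superflix-backend-node-service
              servicePort: 4000
          - path: /
            backend:
              serviceName: superflix-ui-node-service
              servicePort: 3000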
My file directory looks like this:
.
├── deployment.yaml
├── config.yaml
└── import
    └── realm.json
This is the deployment.yaml file that I used based on the suggestion from Harsh Manvar:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
      nodePort: 32488
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:17.0.1
          args:
            - "start-dev"
            - "--import-realm"
          env:
            - name: KEYCLOAK_ADMIN
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: KEYCLOAK_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
            - name: KC_PROXY
              value: "edge"
          volumeMounts:
            - name: "keycloak-volume"
              mountPath: "/import/realm.json"
              readOnly: true
              subPath: "realm.json"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 120
      volumes:
        - name: keycloak-volume
          configMap:
            name: keycloak-configmap
And my config.yaml looks like this (where {json_content} is where I copy-paste the content of the realm JSON file to import):
apiVersion: v1
data:
  realm.json: |
    {json_content}
kind: ConfigMap
metadata:
  name: keycloak-configmap
But when I accessed the Keycloak dashboard's web GUI, the imported realm did not show up.
Try once with:
volumeMounts:
  - mountPath: "/import/realm.json"
    name: "keycloak-volume"
    readOnly: true
    subPath: "realm.json"
On older versions (I think the WildFly-based ones) it was supported to import the Keycloak realm using environment variables, however that is not supported any more: https://github.com/keycloak/keycloak/issues/10216
Also, it's supported in version 18, but you are using 17.
Still, with 17 you can give it a try by passing an argument to the deployment config (see the official import doc):
args:
  - "start-dev"
  - "--import-realm"
Also, if you check this thread, some are suggesting using the variable KEYCLOAK_REALM_IMPORT.
I also came across this blog, which points to a legacy option for importing the realm; do check it out once: http://www.mastertheboss.com/keycloak/keycloak-with-docker/
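If the realm still doesn't show up, a quick way to verify the ConfigMap content actually landed in the container (a sketch; kubectl exec accepts the deployment name used above):
# List the mounted import directory inside the running pod
kubectl exec deploy/keycloak -- ls -l /import
# Dump the start of the mounted realm file to confirm its content
kubectl exec deploy/keycloak -- head /import/realm.json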
Inside my ingress config I changed the default backend.
spec:
  defaultBackend:
    service:
      name: navigation-service
      port:
        number: 80
When I describe the ingress I get:
Name:             ingress-nginx
Namespace:        default
Address:          127.0.0.1
Default backend:  navigation-service:80 (10.1.173.59:80)
I am trying to access it via localhost and I get a 404. However, when I curl 10.1.173.59 I get my static page, so my navigation-service is fine; is something wrong with the default backend? Even if I try
- pathType: Prefix
  path: /
  backend:
    service:
      name: navigation-service
      port:
        number: 80
I get a 500 error.
What am I doing wrong?
Edit: it works via NodePort, but I need to access it via the ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navigation-deployment
spec:
  selector:
    matchLabels:
      app: navigation-deployment
  template:
    metadata:
      labels:
        app: navigation-deployment
    spec:
      containers:
        - name: nginx
          image: nginx:1.13.3-alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html/index.html
              name: nginx-html
            - mountPath: /etc/nginx/conf.d/default.conf
              name: nginx-default
      volumes:
        - name: nginx-html
          hostPath:
            path: /home/x/navigation/index.html
        - name: nginx-default
          hostPath:
            path: /home/x/navigation/default.conf
---
apiVersion: v1
kind: Service
metadata:
  name: navigation-service
spec:
  type: ClusterIP
  selector:
    app: navigation-deployment
  ports:
    - name: "http"
      port: 80
      targetPort: 80
If someone has this problem: you need to run the ingress controller with the argument --default-backend-service=namespace/service_name.
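For example, a minimal sketch of where that flag goes in the ingress-nginx controller Deployment (the deployment and container names depend on your install, so treat them as assumptions):
# Excerpt from the ingress-nginx controller Deployment
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            - --default-backend-service=default/navigation-service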
We currently have a multi-tenant backend Laravel application set up, with Pusher websockets enabled on the same app. This application is built into a Docker image, hosted on the DigitalOcean container registry, and deployed via Helm to our Kubernetes cluster.
We also have a frontend application built in Angular that tries to connect to the backend app via port 80 on the /ws/ path to establish a websocket connection.
When we try to access tenant1.example.com/ws/ we get a 502 gateway error, which suggests the ports aren't mapping correctly? But tenant1.example.com on port 80 works just fine.
Our Helm chart YAML is as follows:
NAME: tenant1
LAST DEPLOYED: Fri Dec 11 14:34:00 2020
NAMESPACE: tenants
STATUS: pending-install
REVISION: 1
USER-SUPPLIED VALUES:
subdomain: tenant1

COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: true
  maxReplicas: 1
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: nginx
  tag: ""
imagePullSecrets: []
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  enabled: true
  hosts:
    - host: example.com
      pathType: Prefix
  tls: []
migrate:
  enabled: true
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources:
  requests:
    cpu: 10m
rootDB: public
securityContext: {}
service:
  port: 80
  type: ClusterIP
serviceAccount:
  annotations: {}
  create: true
  name: ""
setup:
  enabled: true
subdomain: tenant1
tolerations: []
---
# Source: backend-api/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tenant1-backend-api
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      name: 'http'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tenant1-backend-api-ws-service
  namespace: tenants
spec:
  type: ClusterIP
  ports:
    - port: 6001
      targetPort: 6001
      name: 'websocket'
  selector:
    app: tenant1-backend-api-deployment
---
# Source: backend-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant1-backend-api-deployment
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  selector:
    matchLabels:
      app: tenant1-backend-api-deployment
  template:
    metadata:
      labels:
        app: tenant1-backend-api-deployment
      namespace: tenants
    spec:
      containers:
        - name: backend-api
          image: "registry.digitalocean.com/rock/backend-api:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 80
            - containerPort: 6001
          resources:
            requests:
              cpu: 10m
          env:
            - name: CONTAINER_ROLE
              value: "backend-api"
            - name: DB_CONNECTION
              value: "pgsql"
            - name: DB_DATABASE
              value: tenant1
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_HOST
            - name: DB_PORT
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_PORT
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_USERNAME
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-database-creds
                  key: DB_PASSWORD
---
# Source: backend-api/templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: tenant1-backend-api-hpa
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tenant1-backend-api-deployment
  minReplicas: 1
  maxReplicas: 1
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
---
# Source: backend-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant1-backend-api-ingress
  namespace: tenants
  labels:
    helm.sh/chart: backend-api-0.1.0
    app: backend-api
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: tenant1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant1-backend-api-service
                port:
                  number: 80
          - path: /ws/
            pathType: Prefix
            backend:
              service:
                name: tenant1-backend-api-ws-service
                port:
                  number: 6001
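One way to narrow down the 502 is to check whether the websocket Service has any endpoints at all, and then talk to port 6001 directly, bypassing the ingress (a sketch; names are taken from the manifests above):
# An empty ENDPOINTS column means the selector or container port doesn't match
kubectl get endpoints -n tenants tenant1-backend-api-ws-service
# Bypass the ingress and hit the websocket port directly
kubectl port-forward -n tenants deploy/tenant1-backend-api-deployment 6001:6001
curl -i http://localhost:6001/
If the direct connection works, the problem is in the ingress path or rewrite rather than the pod.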
I have a problem with my ingress and my service: when I connect to the IP of my server, I cannot get it to redirect to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
        - env:
            - name: MYSQL_DATABASE
              value: bookstack
            - name: MYSQL_PASS
              value: pass
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_USER
              value: user
          image: mysql:5.7
          name: mysql
          ports:
            - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
    - name: "3306"
      port: 3306
      targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
        - env:
            - name: namespace
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: podname
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: nodename
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: DB_DATABASE
              value: bookstack
            - name: DB_HOST
              value: mysql
            - name: DB_PASSWORD
              value: root
            - name: DB_USERNAME
              value: root
          image: solidnerd/bookstack:latest
          name: bookstack
          ports:
            - name: http
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
    - name: http-port
      port: 80
      protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
This is what appears when I describe my ingress:
Name:             http
Namespace:        bookstack
Address:
Default backend:  bookstack:http-port (10.36.0.22:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     bookstack:http-port (10.36.0.22:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.
The main problem was that I hadn't activated the load balancer that Google Kubernetes Engine offers by default; without it active, no external IP could be generated because there was no balancer. There are two solutions: either activate GKE's default load balancer, or create a Service of type LoadBalancer.
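A minimal sketch of the second option, switching the bookstack Service to type LoadBalancer (same names and ports as above):
apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: LoadBalancer   # GKE provisions an external IP for this Service
  ports:
    - name: http-port
      port: 80
      protocol: TCP
  selector:
    app: bookstack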
It's also important to configure the readinessProbe and livenessProbe within the deployment. An example:
readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
There wouldn't be an external IP, because a NodePort service is exposed on every node of your cluster on that specific port. So essentially you would have to point an external load balancer, or whatever your traffic source is, at each of the nodes of your cluster on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that have pods for your service will reply.
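For reference, a short sketch of where that field lives in a Service spec:
spec:
  type: NodePort
  externalTrafficPolicy: Local   # only nodes running pods for this Service reply
  ports:
    - port: 80
      targetPort: 80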
I'm trying to configure Kubernetes, and in my project I've separated the UI and the API.
I created one Pod and exposed both as services.
How can I set API_URL inside the pod.yaml configuration in order to send requests from the user's browser?
I can't use localhost, because the communication isn't between containers.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
    - image: 'ui:v1'
      name: ui
      ports:
        - name: ui
          containerPort: 5003
          hostPort: 5003
      env:
        - name: API_URL
          value: <how can I set the API address here?>
    - image: 'api:v1'
      name: api
      ports:
        - name: api
          containerPort: 5000
          hostPort: 5000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: url
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    name: api
spec:
  type: NodePort
  ports:
    - name: 'http'
      protocol: 'TCP'
      port: 5000
      targetPort: 5000
      nodePort: 30001
  selector:
    name: project
---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    name: ui
spec:
  type: NodePort
  ports:
    - name: 'http'
      protocol: 'TCP'
      port: 80
      targetPort: 5003
      nodePort: 30003
  selector:
    name: project
The service IP is already available in an environment variable inside the pod, because Kubernetes initializes a set of environment variables for each service that exists at the moment the pod is created.
To list all the environment variables of a pod:
kubectl exec <pod-name> -- env
If the pod was created before the service, you must delete the pod and create it again.
Since you named your service api, one of the variables the command above should list is API_SERVICE_HOST.
But you don't really need to look up the service IP address in environment variables. You can simply use the service name as the hostname: any pod can connect to the service api simply by calling api.default.svc.cluster.local (assuming your service is in the default namespace).
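For example, from any container in the default namespace (a sketch; the api Service from services.yaml listens on port 5000):
# Short name, resolvable within the same namespace
curl http://api:5000/
# Fully qualified name, usable from any namespace
curl http://api.default.svc.cluster.local:5000/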
I created an Ingress to solve this issue and point to DNS names instead of IPs.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project
spec:
  tls:
    - secretName: tls
  backend:
    serviceName: ui
    servicePort: 5003
  rules:
    - host: www.project.com
      http:
        paths:
          - backend:
              serviceName: ui
              servicePort: 5003
    - host: api.project.com
      http:
        paths:
          - backend:
              serviceName: api
              servicePort: 5000
deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
    - image: 'ui:v1'
      name: ui
      ports:
        - name: ui
          containerPort: 5003
          hostPort: 5003
      env:
        - name: API_URL
          value: https://api.project.com
    - image: 'api:v1'
      name: api
      ports:
        - name: api
          containerPort: 5000
          hostPort: 5000
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: postgres-url
              key: url