Kubernetes service as env var for frontend usage

I'm trying to configure Kubernetes, and in my project I've separated the UI and the API.
I created one Pod and exposed both containers as services.
How can I set API_URL inside the pod.yaml configuration so that requests can be sent from the user's browser?
I can't use localhost, because the communication isn't between containers.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
  - image: 'ui:v1'
    name: ui
    ports:
    - name: ui
      containerPort: 5003
      hostPort: 5003
    env:
    - name: API_URL
      value: <how can I set the API address here?>
  - image: 'api:v1'
    name: api
    ports:
    - name: api
      containerPort: 5000
      hostPort: 5000
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: postgres-url
          key: url
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    name: api
spec:
  type: NodePort
  ports:
  - name: 'http'
    protocol: 'TCP'
    port: 5000
    targetPort: 5000
    nodePort: 30001
  selector:
    name: project
---
apiVersion: v1
kind: Service
metadata:
  name: ui
  labels:
    name: ui
spec:
  type: NodePort
  ports:
  - name: 'http'
    protocol: 'TCP'
    port: 80
    targetPort: 5003
    nodePort: 30003
  selector:
    name: project

The service IP is already available in an environment variable inside the pod, because Kubernetes initializes a set of environment variables for each service that exists at the moment the pod is created.
To list all the environment variables of a pod:
kubectl exec <pod-name> -- env
If the pod was created before the service, you must delete it and create it again.
Since you named your service api, one of the variables the command above should list is API_SERVICE_HOST.
But you don't really need to look up the service IP address in environment variables. You can simply use the service name as the hostname: any pod can connect to the service api by calling api.default.svc.cluster.local (assuming your service is in the default namespace), or just api from within the same namespace.
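For pod-to-pod traffic, a minimal sketch of that env entry (assuming the api Service is in the default namespace and listens on port 5000, as above) would be:
env:
- name: API_URL
  value: http://api.default.svc.cluster.local:5000
Note that this name only resolves inside the cluster; a browser outside the cluster still needs an externally reachable address, such as the NodePort or an Ingress host.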

I created an Ingress to solve this issue and point to DNS names instead of IPs.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: project
spec:
  tls:
  - secretName: tls
  backend:
    serviceName: ui
    servicePort: 5003
  rules:
  - host: www.project.com
    http:
      paths:
      - backend:
          serviceName: ui
          servicePort: 5003
  - host: api.project.com
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 5000
deployment.yaml
apiVersion: v1
kind: Pod
metadata:
  name: project
  labels:
    name: project
spec:
  containers:
  - image: 'ui:v1'
    name: ui
    ports:
    - name: ui
      containerPort: 5003
      hostPort: 5003
    env:
    - name: API_URL
      value: https://api.project.com
  - image: 'api:v1'
    name: api
    ports:
    - name: api
      containerPort: 5000
      hostPort: 5000
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: postgres-url
          key: url
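The Ingress above references a TLS secret named tls; a sketch of that Secret with placeholder data would look like this (kubectl create secret tls can generate the same object from certificate and key files):
apiVersion: v1
kind: Secret
metadata:
  name: tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>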

Related

Microk8s ingress - defaultBackend

Inside my ingress config I changed the default backend.
spec:
  defaultBackend:
    service:
      name: navigation-service
      port:
        number: 80
When I describe the ingress I get:
Name: ingress-nginx
Namespace: default
Address: 127.0.0.1
Default backend: navigation-service:80 (10.1.173.59:80)
I try to access it via localhost and I get a 404. However, when I curl 10.1.173.59, I get my static page. So my navigation-service is OK, and something is wrong with the default backend? Even if I try
- pathType: Prefix
  path: /
  backend:
    service:
      name: navigation-service
      port:
        number: 80
I get a 500 error.
What am I doing wrong?
Edit: it works via NodePort, but I need to access it via the ingress.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: navigation-deployment
spec:
  selector:
    matchLabels:
      app: navigation-deployment
  template:
    metadata:
      labels:
        app: navigation-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.3-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/index.html
          name: nginx-html
        - mountPath: /etc/nginx/conf.d/default.conf
          name: nginx-default
      volumes:
      - name: nginx-html
        hostPath:
          path: /home/x/navigation/index.html
      - name: nginx-default
        hostPath:
          path: /home/x/navigation/default.conf

apiVersion: v1
kind: Service
metadata:
  name: navigation-service
spec:
  type: ClusterIP
  selector:
    app: navigation-deployment
  ports:
  - name: "http"
    port: 80
    targetPort: 80
If someone has this problem: you need to run the ingress controller with the argument --default-backend-service=<namespace>/<service_name>.
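On microk8s that means editing the ingress controller's DaemonSet and adding the flag to its existing args; a rough sketch (the container name, image, and other args depend on your installation and are only illustrative here):
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-microk8s
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/navigation-service
        # ...keep the rest of the existing args unchanged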

Connect to Postgresql from inside kubernetes cluster

I set up a series of VMs, 192.168.2.(100, 101, 104, 105), where the Kubernetes master is on 100 and the two workers are on 101 and 104. I also set up Postgres on 192.168.2.105 and followed this tutorial, but it is still unreachable from inside the cluster. I tried it with minikube in a test VM, where minikube and Postgres were installed on the same VM, and it worked just fine.
I changed listen_addresses in the Postgres config file from localhost to *, and allowed connections from 0.0.0.0/0 in pg_hba.conf.
I installed postgresql-12 and postgresql-client-12 on the VM 192.168.2.105 (port 5432). Now I've added a headless service to Kubernetes, which is as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 192.168.2.105
  ports:
  - port: 5432
In my deployment I am defining this to access the database:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:11.0.0
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: 'my-service:5432'
        - name: DB_DATABASE
          value: postgres
        - name: DB_PASSWORD
          value: admin
        - name: DB_SCHEMA
          value: public
        - name: DB_USER
          value: postgres
        - name: DB_VENDOR
          value: POSTGRES
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
Also, the VMs are bridged, not on NAT.
What am I doing wrong here?
The first thing we have to do is create the headless service with a custom endpoint. The IP in my solution is specific to my machine.
Endpoints with service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-service
subsets:
- addresses:
  - ip: 192.168.2.105
  ports:
  - port: 5432
As for my particular setup, I haven't defined any ingress or load balancer, so I changed the service type from LoadBalancer to NodePort after it was deployed.
Then I deployed Keycloak with the following .yaml file:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: https
    port: 8443
    targetPort: 8443
  selector:
    app: keycloak
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:11.0.0
        env:
        - name: KEYCLOAK_USER
          value: "admin" # TODO give username for master realm
        - name: KEYCLOAK_PASSWORD
          value: "admin" # TODO give password for master realm
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: # <Node-IP>:<LoadBalancer-Port/ NodePort>
        - name: DB_DATABASE
          value: "keycloak" # Database to use
        - name: DB_PASSWORD
          value: "admin" # Database password
        - name: DB_SCHEMA
          value: public
        - name: DB_USER
          value: "postgres" # Database user
        - name: DB_VENDOR
          value: POSTGRES
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
After filling in all the required values, it connects successfully to the Postgres server that is hosted on another machine, away from the Kubernetes master and worker nodes!
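A quick way to confirm the wiring is to check that the Endpoints object points at the external IP and that Postgres is reachable from inside the cluster; for example (the throwaway pod name and image below are just for illustration):
kubectl get endpoints postgres-service
kubectl run pg-check --rm -it --image=postgres:12 -- psql -h postgres-service -U postgres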

gRPC socket closed on kubernetes with ingress

I have a gRPC server that works fine on my local machine. I can send grpc requests from a python app and get back the right responses.
I put the server into a GKE cluster (with only one node). I had a normal TCP load balancer in front of the cluster. In this setup my local client was able to get the correct response from some requests, but not others. I think it was the gRPC streaming that didn't work.
I assumed that this is because the streaming requires an HTTP/2 connection which requires SSL.
The standard load balancer I got in GKE didn't seem to support SSL, so I followed the docs to set up an ingress load balancer, which does. I'm using a Let's Encrypt certificate with it.
Now all gRPC requests return
status = StatusCode.UNAVAILABLE
details = "Socket closed"
debug_error_string = "{"created":"#1556172211.931158414","description":"Error received from peer ipv4:ip.of.ingress.service:443","file":"src/core/lib/surface/call.cc","file_line":1041,"grpc_message":"Socket closed","grpc_status":14}"
The IP address is the external IP address of my ingress service.
The ingress yaml looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rev79-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "rev79-ip"
    ingress.gcp.kubernetes.io/pre-shared-cert: "lets-encrypt-rev79"
    kubernetes.io/ingress.allow-http: "false" # disable HTTP
spec:
  rules:
  - host: sub-domain.domain.app
    http:
      paths:
      - path: /*
        backend:
          serviceName: sandbox-nodes
          servicePort: 60000
The subdomain and domain of the request from my python app match the host in the ingress rule.
It connects to a NodePort service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: sandbox-nodes
spec:
  type: NodePort
  selector:
    app: rev79
    environment: sandbox
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 9000
The deployment itself has two containers and looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rev79-sandbox
  labels:
    app: rev79
    environment: sandbox
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: rev79
        environment: sandbox
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1.31
        args: [
          "--http2_port=9000",
          "--service=rev79.endpoints.rev79-232812.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://0.0.0.0:3011"
        ]
        ports:
        - containerPort: 9000
      - name: rev79-uac-sandbox
        image: gcr.io/rev79-232812/uac:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3011
        env:
        - name: RAILS_MASTER_KEY
          valueFrom:
            secretKeyRef:
              name: rev79-secrets
              key: rails-master-key
The target of the NodePort service is the ESP container, which connects to the gRPC service deployed in the cloud and to the backend, a Rails app that implements the API. This Rails app isn't running the Rails server but a specialised gRPC server that comes with the grpc_for_rails gem.
The gRPC server in the Rails app doesn't record any action in the logs, so I don't think the request gets that far.
kubectl get ingress reports this:
NAME            HOSTS                   ADDRESS             PORTS   AGE
rev79-ingress   sub-domain.domain.app   my.static.ip.addr   80      7h
showing port 80, even though it's set up with SSL. That seems to be a bug. When I check with curl -kv https://sub-domain.domain.app, the ingress server handles the request fine and uses HTTP/2. It returns an HTML-formatted server error, but I'm not sure what generates that.
The API requires an API key, which the python client inserts into the metadata of each request.
When I go to the endpoints page of my GCP console, I see that the API has not registered any requests since I put in the ingress load balancer, so it looks like the requests are not reaching the ESP container.
So why am I getting "socket closed" errors with gRPC?
I said I would come back and post an answer here once I got it working; it looks like I never did. Being a man of my word, I'll now post the config files that are working for me.
In my deployment I've added liveness and readiness probes for the ESP container. This made deployments happen smoothly, without downtime:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rev79-sandbox
  labels:
    app: rev79
    environment: sandbox
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: rev79
        environment: sandbox
    spec:
      volumes:
      - name: nginx-ssl
        secret:
          secretName: nginx-ssl
      - name: gcs-creds
        secret:
          secretName: rev79-secrets
          items:
          - key: gcs-credentials
            path: "gcs.json"
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1.45
        args: [
          "--http_port", "8080",
          "--ssl_port", "443",
          "--service", "rev79-sandbox.endpoints.rev79-232812.cloud.goog",
          "--rollout_strategy", "managed",
          "--backend", "grpc://0.0.0.0:3011",
          "--cors_preset", "cors_with_regex",
          "--cors_allow_origin_regex", ".*",
          "-z", " "
        ]
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          periodSeconds: 5
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          timeoutSeconds: 5
          failureThreshold: 1
        volumeMounts:
        - name: nginx-ssl
          mountPath: /etc/nginx/ssl
          readOnly: true
        ports:
        - containerPort: 8080
        - containerPort: 443
          protocol: TCP
      - name: rev79-uac-sandbox
        image: gcr.io/rev79-232812/uac:29eff5e
        imagePullPolicy: Always
        volumeMounts:
        - name: gcs-creds
          mountPath: "/app/creds"
        ports:
        - containerPort: 3011
          name: end-grpc
        - containerPort: 3000
        env:
        - name: RAILS_MASTER_KEY
          valueFrom:
            secretKeyRef:
              name: rev79-secrets
              key: rails-master-key
This is my service config that exposes the deployment to the load balancer:
apiVersion: v1
kind: Service
metadata:
  name: rev79-srv-ingress-sandbox
  labels:
    type: rev79-srv
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"rev79":"HTTP2"}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
  - name: rev79
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: rev79
    environment: sandbox
And this is my ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rev79-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "rev79-global-ip"
spec:
  tls:
  - secretName: sandbox-api-rev79-app-tls
  rules:
  - host: sandbox-api.rev79.app
    http:
      paths:
      - backend:
          serviceName: rev79-srv-ingress-sandbox
          servicePort: 443
I'm using cert-manager to manage the certificates.
It was a long time ago now; I can't remember whether there was anything else I did to solve the issue I was having.
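For reference, a cert-manager Certificate that produces the sandbox-api-rev79-app-tls secret referenced by the Ingress would look roughly like this (the API version and issuer name depend on your cert-manager installation; the issuer below is only illustrative):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sandbox-api-rev79-app-tls
spec:
  secretName: sandbox-api-rev79-app-tls
  dnsNames:
  - sandbox-api.rev79.app
  issuerRef:
    name: letsencrypt-prod   # illustrative issuer name
    kind: ClusterIssuer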

Define a URL for an application inside Kubernetes

Hi folks,
Currently I'm trying to set up a URL in my Kubernetes cluster.
I wrote a service to be able to connect to the DNS to resolve all external URLs.
I also defined an Ingress:
kind: Ingress
metadata:
  name: dnsingressresource
spec:
#  tls:
#  - hosts:
#    - <domain>
#    secretName: <tls_secret_name>
  rules:
  - host: cloud.devlan.xx.xxx
    http:
      paths:
      - path: /mobdev1/auth
        backend:
          serviceName: service-cas-nodeport
          servicePort: 2488
If I want to go to the URL of my application, I have to write this:
https://cloud.devlan.xx.xxx:2488/mobdev1/auth/login
What I'm trying to get is this:
https://cloud.devlan.xx.xxx/mobdev1/auth/login
Do you know how I can achieve that?
You should specify port 80 for your Service, and targetPort should be the port your container listens on (see Defining a Service in the Kubernetes docs).
deployment.yaml
kind: Deployment
...
spec:
  containers:
  - name: my-app
    image: "my-image:my-tag"
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 2488
      protocol: TCP
service.yaml
apiVersion: v1
kind: Service
...
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 2488
    protocol: TCP
    name: http
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
...
spec:
  backend:
    serviceName: my-service
    servicePort: 80

Kubernetes ingress with 2 services does not always find the correct service

I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: solidary-life
  annotations:
    kubernetes.io/ingress.global-static-ip-name: sl-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    app: sl
spec:
  rules:
  - host: app-solidair-vlaanderen.com
    http:
      paths:
      - path: /v0.0.1/*
        backend:
          serviceName: backend-backend
          servicePort: 8080
      - path: /auth/*
        backend:
          serviceName: security-backend
          servicePort: 8080
  tls:
  - secretName: solidary-life-tls
    hosts:
    - app-solidair-vlaanderen.com
The backend service is configured like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: sl
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
      - name: backend-app
        image: gcr.io/solidary-life-218713/sv-backend:0.0.6
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /v0.0.1/api/online
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
  - port: 8080
    targetPort: 8080
and the auth server service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: security
  labels:
    app: sl-security
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
      - name: security-app
        image: gcr.io/solidary-life-218713/sv-security:0.0.1
        ports:
        - name: http
          containerPort: 8080
        - name: management
          containerPort: 9090
        - name: jgroups-tcp
          containerPort: 7600
        - name: jgroups-tcp-fd
          containerPort: 57600
        - name: jgroups-udp
          containerPort: 55200
          protocol: UDP
        - name: jgroups-udp-mc
          containerPort: 45688
          protocol: UDP
        - name: jgroups-udp-fd
          containerPort: 54200
          protocol: UDP
        - name: modcluster
          containerPort: 23364
        - name: modcluster-udp
          containerPort: 23365
          protocol: UDP
        - name: txn-recovery-ev
          containerPort: 4712
        - name: txn-status-mgr
          containerPort: 4713
        readinessProbe:
          httpGet:
            path: /auth/
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
  - port: 8080
    targetPort: 8080
Now I can go to the URLs:
https://app-solidair-vlaanderen.com/v0.0.1/api/online
https://app-solidair-vlaanderen.com/auth/
Sometimes this works, sometimes I get 404s. This is quite annoying, and I am quite new to Kubernetes, so I can't find the error.
Could it have something to do with the "sl" label that's on both the backend and security service definitions?
Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each?
In essence, you have two Services that are randomly selecting pods belonging to both the security Deployment and the backend Deployment. One way to determine where a Service is really sending requests is to look at its endpoints by running:
kubectl -n <your-namespace> <get or describe> ep
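For example, the two Services could select on a distinct tier label (the label values below are only a suggestion), with the matching labels added to the pod templates of the respective Deployments:
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
spec:
  type: NodePort
  selector:
    app: sl
    tier: backend    # only backend pods carry this label
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
spec:
  type: NodePort
  selector:
    app: sl
    tier: security   # only security pods carry this label
  ports:
  - port: 8080
    targetPort: 8080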