How can I expose my Hasura microservice on multiple ports?

My microservice has multiple containers, each of which needs access to a different port. How do I expose this service on multiple ports using the Hasura CLI and project configuration files?
Edit: Adding the microservice's k8s.yaml (as requested by @iamnat)
Let's say I have two containers, containerA and containerB, that I want to expose over HTTP on ports 6379 and 8000 respectively.
apiVersion: v1
items:
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: www
        hasuraService: custom
      name: www
      namespace: '{{ cluster.metadata.namespaces.user }}'
    spec:
      replicas: 1
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: www
        spec:
          containers:
            - name: containerA
              image: imageA
              ports:
                - containerPort: 6379
            - name: containerB
              image: imageB
              ports:
                - containerPort: 8000
          securityContext: {}
          terminationGracePeriodSeconds: 0
    status: {}
  - apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: null
      labels:
        app: www
        hasuraService: custom
      name: www
      namespace: '{{ cluster.metadata.namespaces.user }}'
    spec:
      ports:
        - port: 6379
          name: containerA
          protocol: HTTP
          targetPort: 6379
        - port: 8000
          name: containerB
          protocol: HTTP
          targetPort: 8000
      selector:
        app: www
      type: ClusterIP
    status:
      loadBalancer: {}
kind: List
metadata: {}

TL;DR:
- Add an API gateway route for each HTTP endpoint you want to expose [docs]
Inside the Kubernetes cluster, given your k8s spec, this is what your setup looks like:
http://www.default:6379 -> containerA
http://www.default:8000 -> containerB
So you need to create a route for each of those HTTP endpoints in conf/routes.yaml.
www-a:
  /:
    upstreamService:
      name: www
      namespace: {{ cluster.metadata.namespaces.user }}
    upstreamServicePath: /
    upstreamServicePort: 6379
    corsPolicy: allow_all
www-b:
  /:
    upstreamService:
      name: www
      namespace: {{ cluster.metadata.namespaces.user }}
    upstreamServicePath: /
    upstreamServicePort: 8000
    corsPolicy: allow_all
This means you'll get the following:
https://www-a.domain.com -> containerA
https://www-b.domain.com -> containerB
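One thing worth double-checking in the k8s.yaml above: a Kubernetes Service only accepts TCP, UDP, or SCTP in the protocol field, and port names are validated as lowercase DNS-style labels, so protocol: HTTP and the mixed-case names are likely to be rejected by the API server. A corrected ports section might look like this (a sketch; everything else in the Service stays the same):
ports:
  - port: 6379
    name: container-a    # port names must be lowercase alphanumerics and '-'
    protocol: TCP        # HTTP is not a valid Service protocol
    targetPort: 6379
  - port: 8000
    name: container-b
    protocol: TCP
    targetPort: 8000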

Related

How to create an HTTPS route to a Service that is listening on HTTPS with Traefik and Kubernetes

I'm a newbie with Kubernetes and Traefik.
I followed this tutorial:
https://docs.traefik.io/user-guides/crd-acme/
and adapted it to my Scala service, which listens over HTTPS on port 9463.
I'm trying to deploy that Scala service with Kubernetes and Traefik.
When I forward directly to the service:
kubectl port-forward service/core-service 8001:9463
and perform curl -k 'https://localhost:8001/health',
I get "{Message:Ok}".
But when I port-forward to Traefik:
kubectl port-forward service/traefik 9463:9463 -n default
and perform curl -k 'https://ejemplo.com:9463/tls/health',
I get an "Internal server error".
I guess the problem is that my "core-service" is listening over HTTPS; that's why I added scheme: https.
I tried to find the solution in the documentation but it is confusing.
These are my yml files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
    - protocol: TCP
      name: websecure
      port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.0
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.websecure.Address=:9463
            - --providers.kubernetescrd
            - --certificatesresolvers.default.acme.tlschallenge
            - --certificatesresolvers.default.acme.email=foo@you.com
            - --certificatesresolvers.default.acme.storage=acme.json
            # Please note that this is the staging Let's Encrypt server.
            # Once you get things working, you should remove that whole line altogether.
            - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
          ports:
            - name: websecure
              containerPort: 9463
            - name: admin
              containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
        - name: core-service
          image: core-service:0.1.4-SNAPSHOT
          ports:
            - name: websecure
              containerPort: 9463
          livenessProbe:
            httpGet:
              port: 9463
              scheme: HTTPS
              path: /health
            initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: core-service
          port: 9463
          scheme: https
  tls:
    certResolver: default
From the docs
A TLS router will terminate the TLS connection by default. However,
the passthrough option can be specified to set whether the requests
should be forwarded "as is", keeping all data encrypted.
In your case SSL passthrough needs to be enabled because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: core-service
          port: 9463
          scheme: https
  tls:
    certResolver: default
    passthrough: true
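One caveat, depending on your Traefik version: in the Traefik CRDs the passthrough option is defined on TLS for TCP routers, so if Traefik ignores or rejects it on the HTTP IngressRoute above, the closest equivalent is an SNI-based IngressRouteTCP that hands the encrypted stream straight to the pod. A sketch, assuming the same entrypoint and service; note that TCP routers can only match the TLS SNI host, so the /tls path prefix is lost:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressroutetls-tcp
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    # TCP routers match on the TLS SNI host only, not on a path
    - match: HostSNI(`ejemplo.com`)
      services:
        - name: core-service
          port: 9463
  tls:
    passthrough: true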

Could two cluster IP services be connected in Kubernetes?

The situation is that I want to connect two ClusterIP services that live inside a tenant which already exposes Traefik as a NodePort, so neither of these two services can be a LoadBalancer (the NodePort is already taken by Traefik).
The two services I am trying to connect work as follows: the first one, which I called "master", receives a POST from the client with some text and calls the other service, called "slave", which appends some text ("Hola Patri") to the text sent by the client. Both are Flask services defined by the app.py in each Docker image. You can see the app.py for both images below:
master/app.py
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def put():
    if request.method == 'POST':
        text = request.get_data()
        r = requests.post("http://slave:5001", data=text)
        result = r.text
        return result

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000, debug=True)
slave/app.py
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def put():
    if request.method == 'POST':
        text = request.get_data()
        # text = request.data
        texto_final = str(text) + 'Hola Patri'
        return texto_final

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5001, debug=True)
The configuration of the deployments and the services is defined in two yamls: master_src.yaml and slave_src.yaml.
master_src.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: innovation
  labels:
    name: innovation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: master
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: master
          imagePullPolicy: Always
          securityContext:
            runAsUser: 1000
            runAsNonRoot: true
          image: reg-dhc.app.corpintra.net/galiani/innovation:mastertest
          ports:
            - protocol: TCP
              containerPort: 5000
      imagePullSecrets:
        - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: master
  namespace: innovation
spec:
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
  selector:
    app: myapp
slave_src.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: innovation
  labels:
    name: innovation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slave
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: slave
          imagePullPolicy: Always
          securityContext:
            runAsUser: 1000
            runAsNonRoot: true
          image: reg-dhc.app.corpintra.net/galiani/innovation:slavetest
          ports:
            - protocol: TCP
              containerPort: 5001
      imagePullSecrets:
        - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: slave
  namespace: innovation
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 5001
      targetPort: 5001
I have also created a network policy to allow the traffic between the two services. The yaml used to define the network policy is the following.
networkpolicy_src.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: ingress-to-all
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: myapp
      ports:
        - port: 5000
          protocol: TCP
        - port: 5001
          protocol: TCP
  policyTypes:
    - Ingress
The connection between the master service and the slave service is not working. I can access the master and the slave independently. Nevertheless, when I make a POST to the master (using curl), which should then call the slave, I get the following error:
curl: (52) Empty reply from server
Thank you for your help in advance!
Regarding the new question about the connection using Traefik, here is the yaml of the Traefik ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-innovation
  namespace: innovation
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
    - http:
        paths:
          - path: /master
            backend:
              serviceName: master
              servicePort: 5000
          - path: /slave
            backend:
              serviceName: slave
              servicePort: 5001
I have also corrected the networkpolicy yaml and now it is:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: master-to-slave
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: app-slave
  ingress:
    - ports:
        - port: 5000
          protocol: TCP
        - port: 5001
          protocol: TCP
    - from:
        - namespaceSelector:
            matchLabels:
              app: app-master
Thanks again for your help!
The problem could be having the same label app: myapp for both master and slave. Change the label to app: master for the master deployment and service, and app: slave for the slave deployment and service, as sketched below.
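A minimal sketch of that relabeling for the master half (same image, namespace and ports as in the question; the slave Deployment and Service would get app: slave in the same places):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: master
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master          # was app: myapp
  template:
    metadata:
      labels:
        app: master
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: master
          imagePullPolicy: Always
          securityContext:
            runAsUser: 1000
            runAsNonRoot: true
          image: reg-dhc.app.corpintra.net/galiani/innovation:mastertest
          ports:
            - protocol: TCP
              containerPort: 5000
      imagePullSecrets:
        - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: master
  namespace: innovation
spec:
  selector:
    app: master            # now selects only master pods
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Note that spec.selector of an existing Deployment is immutable, so the old Deployments have to be deleted and recreated with the new labels.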

How can my services communicate with each other in a kubernetes deployment?

Part of my deployment looks like this:
client -- main service __ service 1
                       |__ service 2
NOTE: Each of these 4 services is a container, and I'm trying to run each one in its own Pod (without using a multi-container pod).
The main service must call service 1, get the results, send those results to service 2, get that result, and send it back to the web client.
The main service operates in this order:
1. receive request from web client on port :80
2. make request to http://localhost:8000 (service 1)
3. make request to http://localhost:8001 (service 2)
4. merge results
5. respond to web client with result
My deployments for service 1 and 2 look like this
SERVICE 1
apiVersion: v1
kind: Service
metadata:
  name: serviceone
spec:
  selector:
    run: serviceone
  ports:
    - port: 80
      targetPort: 5050
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceone-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: serviceone
  template:
    metadata:
      labels:
        run: serviceone
    spec:
      containers:
        - name: serviceone
          image: test.azurecr.io/serviceone:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5050
SERVICE 2
apiVersion: v1
kind: Service
metadata:
  name: servicetwo
spec:
  selector:
    run: servicetwo
  ports:
    - port: 80
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: servicetwo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: servicetwo
  template:
    metadata:
      labels:
        run: servicetwo
    spec:
      containers:
        - name: servicetwo
          image: test.azurecr.io/servicetwo:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
But I don't know what the Service and Deployment would look like for the main service that has to make requests to the two other services.
EDIT: This is my attempt at the service/deployment for main service
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
    - port: 80 # incoming traffic from web client pod
      targetPort: 80 # traffic goes to container port 80
  selector:
    run: serviceone
  ports:
    - port: ?
      targetPort: 8000 # the port the container is hardcoded to send traffic to service one
  selector:
    run: servicetwo
  ports:
    - port: ?
      targetPort: 8001 # the port the container is hardcoded to send traffic to service two
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mainservice-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: mainservice
  template:
    metadata:
      labels:
        run: mainservice
    spec:
      containers:
        - name: mainservice
          image: test.azurecr.io/mainservice:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
EDIT 2: alternate attempt at the service after finding this https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
apiVersion: v1
kind: Service
metadata:
  name: mainservice
spec:
  selector:
    run: mainservice
  ports:
    - name: incoming
      port: 80 # incoming traffic from web client pod
      targetPort: 80 # traffic goes to container port 80
    - name: s1
      port: 8080
      targetPort: 8000 # the port the container is hardcoded to send traffic to service one
    - name: s2
      port: 8081
      targetPort: 8001 # the port the container is hardcoded to send traffic to service two
The main service doesn't need to know anything about the services it calls other than their names. Simply access them by Service name, i.e. serviceone and servicetwo (http://serviceone:80), and the requests will be forwarded to the correct pod; see the sketch after the reference link.
Reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
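As a sketch of how the main service could pick up those names instead of localhost (the environment variable names here are made up, not from the question), assuming the app reads its upstream URLs from the environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mainservice-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: mainservice
  template:
    metadata:
      labels:
        run: mainservice
    spec:
      containers:
        - name: mainservice
          image: test.azurecr.io/mainservice:v1
          ports:
            - containerPort: 80
          env:
            # hypothetical variables: the app would read these instead of
            # hard-coding http://localhost:8000 / http://localhost:8001
            - name: SERVICE_ONE_URL
              value: http://serviceone   # Service "serviceone", port 80
            - name: SERVICE_TWO_URL
              value: http://servicetwo   # Service "servicetwo", port 80
With this approach the mainservice Service itself only needs to expose port 80; the extra s1/s2 ports from EDIT 2 are not needed, because outbound calls do not go through the caller's own Service.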

How to enable subdomain with GKE

I have several Kubernetes deployments in GKE and I would like to reach them from different external subdomains.
I tried to create 2 deployments with subdomain "sub1" and "sub2" and hostname "app", plus another deployment with hostname "app" and a service that exposes it on the IP XXX.XXX.XXX.XXX configured in the DNS for app.mydomain.com.
I would like to access the 2 child deployments at sub1.app.mydomain.com and sub2.app.mydomain.com.
This should be automatic: when adding a new deployment I don't want to change the DNS records every time.
Maybe I'm approaching the problem the wrong way; I'm new to GKE. Any suggestions?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-host
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-host
        type: proxy
    spec:
      hostname: app
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: app
      subdomain: sub1
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: app
      subdomain: sub2
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-expose-dns
spec:
  ports:
    - port: 80
  selector:
    name: my-host
  type: LoadBalancer
You want Ingress. There are several options available (Istio, nginx, traefik, etc). I like using nginx and it's really easy to install and work with. Installation steps can be found at kubernetes.github.io.
Once the Ingress Controller is installed, you want to make sure you've exposed it with a Service with type=LoadBalancer. Next, if you are using Google Cloud DNS, set up a wildcard entry for your domain with an A record pointing to the external IP address of your Ingress Controller's Service. In your case, it would be *.app.mydomain.com.
Now all of your traffic to *.app.mydomain.com goes to that load balancer and is handled by your Ingress Controller, so all that's left is to add a Service and an Ingress entry for each service you want to expose.
apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  selector:
    app: my-app-1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: my-service2
spec:
  selector:
    app: my-app2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
    - host: sub1.app.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-service1
              servicePort: 80
    - host: sub2.app.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-service2
              servicePort: 80
The routing shown is host based, but you could just as easily handle those services as path based, so that all traffic to app.mydomain.com/service1 goes to one of your deployments; a sketch of that variant follows below.
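A minimal path-based sketch of the same Ingress (same hypothetical service names as above, networking.k8s.io/v1beta1 as in the host-based example):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
  annotations:
    # with the nginx ingress controller mentioned above, this strips
    # /service1 or /service2 before forwarding to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /service1
            backend:
              serviceName: my-service1
              servicePort: 80
          - path: /service2
            backend:
              serviceName: my-service2
              servicePort: 80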
SOLVED!
This is the correct nginx configuration:
server {
    listen 80;
    server_name ~^(?<subdomain>.*?)\.;
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;
    location / {
        proxy_pass http://$subdomain.my-internal-host.default.svc.cluster.local;
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
That could be a solution, but in my case I need something more dynamic: I don't want to update the Ingress every time I add a subdomain.
I've almost solved it using an nginx proxy like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: sub1
      subdomain: my-internal-host
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: sub2
      subdomain: my-internal-host
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
        listen 80;
        server_name ~^(?<subdomain>.*?)\.;
        location / {
            proxy_pass http://$subdomain.my-internal-host;
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-proxy
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-proxy
        type: app
    spec:
      subdomain: my-internal-host
      containers:
        - image: nginx:alpine
          name: nginx
          volumeMounts:
            - name: nginx-config-dns-file
              mountPath: /etc/nginx/conf.d/default.conf.test
              subPath: nginx.conf
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      volumes:
        - name: nginx-config-dns-file
          configMap:
            name: nginx-config-dns-file
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-internal-host
spec:
  selector:
    type: app
  clusterIP: None
  ports:
    - name: sk-port
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sk-expose-dns
spec:
  ports:
    - port: 80
  selector:
    name: my-proxy
  type: LoadBalancer
I understood that I need the 'my-internal-host' headless service to allow all the deployments to see each other internally.
The only remaining problem is the nginx proxy_pass: if I change it to 'proxy_pass http://sub1.my-internal-host;' it works, but not with the regexp variable.
The problem is related to the nginx resolver.
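For completeness, a sketch of the ConfigMap updated with the working server block from the SOLVED configuration above: when proxy_pass contains a variable, nginx resolves the hostname at request time, so it needs an explicit resolver (kube-dns here) and a fully qualified service name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
        listen 80;
        server_name ~^(?<subdomain>.*?)\.;
        # required so nginx can resolve the variable-built hostname at runtime
        resolver kube-dns.kube-system.svc.cluster.local valid=5s;
        location / {
            proxy_pass http://$subdomain.my-internal-host.default.svc.cluster.local;
        }
    }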

Kubernetes ingress with 2 services does not always find the correct service

I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: solidary-life
  annotations:
    kubernetes.io/ingress.global-static-ip-name: sl-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    app: sl
spec:
  rules:
    - host: app-solidair-vlaanderen.com
      http:
        paths:
          - path: /v0.0.1/*
            backend:
              serviceName: backend-backend
              servicePort: 8080
          - path: /auth/*
            backend:
              serviceName: security-backend
              servicePort: 8080
  tls:
    - secretName: solidary-life-tls
      hosts:
        - app-solidair-vlaanderen.com
The backend service is configured like:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: sl
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
        - name: backend-app
          image: gcr.io/solidary-life-218713/sv-backend:0.0.6
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /v0.0.1/api/online
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
and the auth server service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: security
  labels:
    app: sl-security
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
        - name: security-app
          image: gcr.io/solidary-life-218713/sv-security:0.0.1
          ports:
            - name: http
              containerPort: 8080
            - name: management
              containerPort: 9090
            - name: jgroups-tcp
              containerPort: 7600
            - name: jgroups-tcp-fd
              containerPort: 57600
            - name: jgroups-udp
              containerPort: 55200
              protocol: UDP
            - name: jgroups-udp-mc
              containerPort: 45688
              protocol: UDP
            - name: jgroups-udp-fd
              containerPort: 54200
              protocol: UDP
            - name: modcluster
              containerPort: 23364
            - name: modcluster-udp
              containerPort: 23365
              protocol: UDP
            - name: txn-recovery-ev
              containerPort: 4712
            - name: txn-status-mgr
              containerPort: 4713
          readinessProbe:
            httpGet:
              path: /auth/
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
Now I can go to the URLs:
https://app-solidair-vlaanderen.com/v0.0.1/api/online
https://app-solidair-vlaanderen.com/auth/
Sometimes this works, sometimes I get 404s. This is quite annoying, and I am quite new to Kubernetes; I can't find the error.
Could it have something to do with the "sl" label that's on both the backend and the security service definitions?
Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each?
So, in essence, you have two Services that are randomly selecting pods belonging to either the security Deployment or the backend Deployment. One way to determine where a Service is really sending requests is to look at its endpoints by running:
kubectl -n <your-namespace> <get or describe> ep
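A sketch of how the two Services could be disambiguated (the tier values below are hypothetical, not from the original manifests; the matching Deployment pod templates would need the same labels):
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: backend-web      # hypothetical value used only by backend pods
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: security-web     # hypothetical value used only by security pods
  ports:
    - port: 8080
      targetPort: 8080
After relabelling, the endpoints listing for backend-backend should show only the backend pod IP, and the intermittent 404s from the Ingress should stop.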