Botpress behind an Nginx reverse proxy - Kubernetes

I am thinking of setting up multiple chatbots on a containerized platform, let's say Docker or Kubernetes, and I would want to be able to access these chatbots through a reverse proxy such as Nginx. Any help is appreciated.
My example scenario
I have multiple chatbots; let's call them Bravo, Charlie, and Delta.
Bravo's IP address and port: 10.0.0.2:8080
Charlie's: 10.0.0.3:8080
Delta's: 10.0.0.4:8080
All of these bots live in containers behind an Nginx proxy.
Right now I can reach each chatbot in the browser at its address, e.g. 10.0.0.2:8080, and use it.
If I set up a domain (alpha.com), how would I access these chatbots as alpha.com/bravo, alpha.com/charlie, and alpha.com/delta?
The proxy_pass directive works only for the index.html, and the chatbot application seems to have some kind of base URL path that I am unable to figure out.
Nginx returns a blank page when I inspect the traffic. Help me debug this.

You can use the nginx-ingress controller with the Ingress definition below (but first you need to deploy the nginx-ingress controller on your cluster; you can use this link):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alpha-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: alpha.com
    http:
      paths:
      - path: /bravo
        backend:
          serviceName: bravo-service   # Service names must be lowercase DNS-1123 labels
          servicePort: 80
      - path: /charlie
        backend:
          serviceName: charlie-service
          servicePort: 80
      - path: /delta
        backend:
          serviceName: delta-service
          servicePort: 80   # You could also use a named port if you named it in the Service, e.g. bravo-http-port
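A hedged caveat: in newer versions of ingress-nginx (roughly 0.22 onward), rewrite-target: / no longer strips an arbitrary prefix by itself; the rewrite has to reference a capture group in the path. A minimal sketch of the equivalent rule for the /bravo path (repeat the pattern for the other bots):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alpha-ingress
  annotations:
    # $2 expands to whatever the second capture group in the path matched
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: alpha.com
    http:
      paths:
      - path: /bravo(/|$)(.*)
        backend:
          serviceName: bravo-service
          servicePort: 80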
This assumes you have already defined and deployed the Services referenced above, along with their Deployments. For example:
apiVersion: v1
kind: Service
metadata:
  name: bravo-service
  labels:
    app: bravo
spec:
  type: NodePort
  selector:
    app: bravo
  ports:
  - name: bravo-http-port
    protocol: TCP
    port: 80
    targetPort: bravo-port
    nodePort: 30080   # nodePort must fall within the cluster's NodePort range (default 30000-32767)
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: bravo-deployment
  labels:
    app: bravo
spec:
  replicas: 1   # start with a single replica; scale up as needed
  selector:
    matchLabels:
      app: bravo
  template:
    metadata:
      labels:
        app: bravo
    spec:
      containers:
      - name: bravo-container
        image: my-docker-repo/project:1.0
        ports:
        - name: bravo-port
          containerPort: 8080
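One more note on the blank page: chat platforms often need to be told the public base URL they are served under, otherwise their assets resolve against the wrong path behind a proxy. Botpress, for example, is usually configured through an EXTERNAL_URL environment variable (verify against the docs for your version; treat the variable name and value here as assumptions). A sketch of wiring it into the container spec above:
      containers:
      - name: bravo-container
        image: my-docker-repo/project:1.0
        env:
        # Hypothetical public URL this bot is reachable under via the ingress
        - name: EXTERNAL_URL
          value: "https://alpha.com/bravo"
        ports:
        - name: bravo-port
          containerPort: 8080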
If you have more questions on this, please don't hesitate to ask.

Related

GKE Ingress with Multiple Backend Services returns 404

I'm trying to create a GKE Ingress that points to two different backend services based on path. I've seen a few posts explaining this is only possible with an nginx Ingress because the GKE Ingress doesn't support rewrite-target. However, this Google documentation, GKE Ingress - Multiple backend services, seems to imply otherwise. I've followed the steps in the docs but haven't had any success: only the service available on the path prefix / is returned, and any other path prefix, like /v2, returns a 404 Not Found.
Details of my setup are below. Is there an obvious error here? Is the Google documentation incorrect, and is this only possible using the nginx ingress?
-- Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app-static-ip
    networking.gke.io/managed-certificates: app-managed-cert
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 8080
-- Service 1
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 5000
-- Service 2
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 8080
    targetPort: 5000
GCP Ingress supports multiple paths. This is also well described in Setting up HTTP(S) Load Balancing with Ingress. For my test I've used both Hello-world v1 and v2.
There are three possible issues:
1. The container ports are not open. You can check which ports the container is listening on using netstat:
$ kubectl exec -ti first-55bb869fb8-76nvq -c container -- bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
tcp   0      0      :::8080         :::*              LISTEN   1/hello-app
2. The firewall configuration. Make sure you have the proper settings. (In general, on a new cluster I didn't need to add anything, but if you have a more involved setup with specific firewall rules, they might block traffic.)
3. Misconfiguration between port, containerPort, and targetPort (see the schematic sketch right after this list).
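Schematically, the three values have to chain together; the names below are hypothetical:
# Ingress backend -> Service port -> Pod targetPort (= containerPort)
apiVersion: v1
kind: Service
metadata:
  name: example-service   # hypothetical
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 5000        # what the Ingress backend references
    targetPort: 8080  # must equal the containerPort declared in the Pod spec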
Below is my example.
1st deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first
  labels:
    app: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 5000
    targetPort: 8080
2nd deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second
  labels:
    app: api-2
spec:
  selector:
    matchLabels:
      app: api-2
  template:
    metadata:
      labels:
        app: api-2
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 6000
    targetPort: 8080
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 5000
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 6000
Outputs:
$ curl 35.190.XX.249
Hello, world!
Version: 1.0.0
Hostname: first-55bb869fb8-76nvq
$ curl 35.190.XX.249/v2
Hello, world!
Version: 2.0.0
Hostname: second-d7d87c6d8-zv9jr
Please keep in mind that you can also use the Nginx Ingress on GKE by adding a specific annotation:
kubernetes.io/ingress.class: "nginx"
The main reason people use the nginx ingress on GKE is the rewrite annotation and the possibility to use ClusterIP or NodePort as the Service type, whereas the GCP ingress allows only the NodePort Service type.
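For illustration, a minimal sketch of the nginx variant with a ClusterIP Service (same hypothetical names as above; assumes an ingress-nginx controller is already installed in the cluster):
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP   # fine with nginx ingress; GCP ingress would insist on NodePort
  selector:
    app: api
  ports:
  - port: 5000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 5000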
Additional information can be found in GKE Ingress for HTTP(S) Load Balancing.

How to manage 'link forwarding' for a website inside a kubernetes cluster

I have a website that uses relative href links (e.g. a link points to "/login" rather than "http://somesite.com/login"). This works fine on a normal server, but I'd like to run the website inside a Docker container as part of a Kubernetes cluster. My goal is to have multiple replicas of the website to handle periods of high load, defaulting to two nodes with dynamic scaling.
I set up the service as a LoadBalancer as follows:
apiVersion: v1
kind: Service
metadata:
  name: websiteservice
spec:
  selector:
    app: websiteapp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The issue here is that if I navigate to the URL of the load balancer (which is created automatically at my host, DigitalOcean), the homepage of the website loads, but any other page gives me a 404: rather than loading the "/login" page of the container, it loads the "/login" page of the load balancer, which doesn't exist. How can I configure my cluster or load balancer to forward all subdirectories (anything after the .com) to the webserver?
EDIT 1
In the comments I was advised to set up an Ingress. I think I've done so with this change to my YAML:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: somesite.co.uk
    http:
      paths:
      - backend:
          serviceName: websiteservice
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: websiteservice
spec:
  selector:
    app: websiteapp
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: websiteapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: websiteapp
  template:
    metadata:
      labels:
        app: websiteapp
    spec:
      containers:
      - name: websiteapp
        image: mydocker.co.uk/websiteimg
      imagePullSecrets:
      - name: regcred
      nodeSelector:
        beta.kubernetes.io/instance-type: s-2vcpu-4gb
But I'm still not able to navigate beyond the home page of my website.
Your Ingress file should look like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: somesite.co.uk
    http:
      paths:
      - path: /*
        backend:
          serviceName: websiteservice
          servicePort: 80
You simply have to add a path so the traffic gets routed.
Take a look: host-path-routing, ingress-path-matching.
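As a hedged aside: ingress-nginx already treats a plain / path as a prefix that matches every sub-path, so the wildcard is not strictly required there; the same Ingress could be written as:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: somesite.co.uk
    http:
      paths:
      - path: /
        backend:
          serviceName: websiteservice
          servicePort: 80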

kubernetes ingress-nginx gives 502 error and the address field is empty

I am setting up Kubernetes in an AWS environment using kubeadm. I have set up ingress-nginx to access the service on port 443. I have checked the service configuration, which looks good, but I am receiving a 502 Bad Gateway and the Address field in the Ingress is empty.
Front end service
apiVersion: v1
kind: Service
metadata:
  labels:
    name: voyager-configurator-webapp
  name: voyager-configurator-webapp
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    component: app
    name: voyager-configurator-webapp
  type: ClusterIP
Ingress YAML
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-resource
spec:
  tls:
  - hosts:
    - kubernetes-test.xyz.com
    secretName: default-server-secret
  rules:
  - host: kubernetes-test.xyz.com
    http:
      paths:
      - backend:
          serviceName: voyager-configurator-webapp
          servicePort: 443
NAME                     CLASS    HOSTS                     ADDRESS   PORTS     AGE
nginx-ingress-resource   <none>   kubernetes-test.xyz.com             80, 443   45m
What could be the issue here ? Any help will be appreciated.
Make sure that your Service is created in the proper namespace; if it is not, add a namespace field to the Service definition. It is also not a good approach to use a label called name with the same value as your Service name; use a different label instead, to avoid mistakes and configuration problems.
Read more about selectors and labels: labels-selectors.
Your frontend Service should look like this:
apiVersion: v1
kind: Service
metadata:
  name: voyager-configurator-webapp
  labels:
    component: app
    appservice: your-example-app
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
  selector:
    component: app
    app: your-example-app
  type: ClusterIP
Your ingress should look like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - kubernetes-test.xyz.com
    secretName: default-server-secret
  rules:
  - host: kubernetes-test.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: voyager-configurator-webapp
          servicePort: 443
You have to define the path to the backend to which the Ingress should send traffic.
Remember that it is good to follow examples and instructions during setup, to avoid problems and wasted time during debugging.
Take a look: nginx-ingress-502-bad-gateway, aws-kubernetes-ingress-nginx.
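One more hedged pointer, since the backend Service listens on 443: ingress-nginx proxies to backends over plain HTTP by default, which can itself produce a 502 when the pod only speaks TLS. If that is the case here, the backend-protocol annotation switches the upstream protocol:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress-resource
  annotations:
    kubernetes.io/ingress.class: nginx
    # Proxy to the pods over TLS instead of plain HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - kubernetes-test.xyz.com
    secretName: default-server-secret
  rules:
  - host: kubernetes-test.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: voyager-configurator-webapp
          servicePort: 443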

Exposing UDP and TCP ports for sftp server using Ingress in GKE

I am trying to set up a multi-cluster deployment in which there are multiple clusters and one ingress load-balances the requests between them.
HTTP services work well with this set-up; the problem is the sftp server.
Referring to this answer and this documentation, I am trying to access port 22 of the sftp service.
The sftp Deployment is exposed on port 22. Below is the manifest:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: sftp
  labels:
    environment: production
    app: sftp
spec:
  replicas: 1
  minReadySeconds: 10
  template:
    metadata:
      labels:
        environment: production
        app: sftp
      annotations:
        container.apparmor.security.beta.kubernetes.io/sftp: runtime/default
    spec:
      containers:
      - name: sftp
        image: atmoz/sftp:alpine
        imagePullPolicy: Always
        args: ["user:1001:100:upload"]
        ports:
        - containerPort: 22
        securityContext:
          capabilities:
            add: ["SYS_ADMIN"]
        resources: {}
Here is the simple manifest for sftp-service, using a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    environment: production
  name: sftp-service
spec:
  type: NodePort
  ports:
  - name: sftp-port
    targetPort: 9000
    port: 9000
    nodePort: 30063
    protocol: TCP
  selector:
    app: sftp
The ConfigMap, created by referring to the above-mentioned documentation and answer, looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sftp-service
data:
  9000: "default/sftp-service:22"
And finally, the Ingress manifest is something like below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-foo
  annotations:
    kubernetes.io/ingress.global-static-ip-name: static-ip
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: http-service-zone-printer
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /sftp
        backend:
          serviceName: sftp-service
          servicePort: 22
template:
  spec:
    containers:
    - name: proxy-port
      args:
      - "--tcp-services-configmap=default/sftp-service"
I feel I have not understood the way to expose a TCP/UDP port for an sftp server on Kubernetes using Ingress. What am I doing wrong here?
Is there any other way to simply set up sftp using Ingress and a NodePort Service in a multi-cluster deployment?
Here is the official document I am referring to for the set-up.
It looks like this isn't supported with Ingress, which is the reason this issue exists.
A possible solution could be to use a NodePort Service for sftp, as described in this document; a sketch follows.
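A minimal sketch of that NodePort route, assuming the pod's sshd listens on 22 (the nodePort value is an arbitrary pick from the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: sftp-service
spec:
  type: NodePort
  selector:
    app: sftp
  ports:
  - name: sftp-port
    protocol: TCP
    port: 22
    targetPort: 22    # the container's sshd port
    nodePort: 30022   # clients connect to any node's IP on this port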
If you want to route the traffic through the Ingress itself, you need to run an HTTP server. You could run an HTTP server that exposes the same files, for example as a sidecar container in the same pod; a sketch follows.
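A sketch of that sidecar idea, assuming the uploaded files sit on a shared volume (names, images, and mount paths are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: sftp-with-http
  labels:
    app: sftp
spec:
  volumes:
  - name: upload
    emptyDir: {}
  containers:
  - name: sftp
    image: atmoz/sftp:alpine
    args: ["user:1001:100:upload"]
    ports:
    - containerPort: 22
    volumeMounts:
    - name: upload
      mountPath: /home/user/upload
  - name: http
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: upload
      mountPath: /usr/share/nginx/html   # serve the same files over HTTP for the Ingress
      readOnly: true
The HTTP container is what the Ingress can route to; sftp itself would still go over the NodePort.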

Ingress resource does not give access to exposed services

Hi, I am currently trying to deploy my application using Google Kubernetes Engine. I exposed my front and back services as NodePort, created a global static IP address named "ip", and created an Ingress resource.
The Ingress resource was working fine until I added the path rules.
Here is my Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ip
  labels:
    app: myapp
    part: ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: backapp
          servicePort: 9000
      - path: /front/*
        backend:
          serviceName: frontapp
          servicePort: 3000
And here are my services.
Back:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
    part: back
  name: backapp
  namespace: default
spec:
  clusterIP: 10.*.*.*
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30646
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: myapp
    part: back
  sessionAffinity: None
  type: NodePort
Front:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
    part: front
  name: frontapp
  namespace: default
spec:
  clusterIP: 10.*.*.*
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31609
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: myapp
    part: front
  sessionAffinity: None
  type: NodePort
Every time I try to go to
http://external-ingress-ip/front
http://external-ingress-ip/front/home
http://external-ingress-ip/users
http://external-ingress-ip/...
All I get is default backend - 404
So my question is: what is wrong with my configuration, and what changed when I added the paths?
A Kubernetes NodePort service is the most basic way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
Back to your issue.
Try the configuration below. It is a bit clearer and contains only the needed options. Please keep in mind that you need to set ingress.global-static-ip-name and the targetPort of both Services to match your reserved IP address and your Pods' ports.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: $IP # Reserved IP address
  labels:
    app: myapp
    part: ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: backapp
          servicePort: 9000
      - path: /front/*
        backend:
          serviceName: frontapp
          servicePort: 3000
Also, you need to define separate Services to process the incoming traffic:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
    part: back
  name: backapp
  namespace: default
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000 # Port on the pod with the 'back' application
  selector:
    app: myapp
    part: back
  type: NodePort
And the second configuration, for the frontend service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
    part: front
  name: frontapp
  namespace: default
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000 # Port on the pod with the 'front' application
  selector:
    app: myapp
    part: front
  type: NodePort
If it still does not work with the new configuration, please write a comment with the details.
(I would like to say thank you to Anton Kostenko for the helping hand and for getting the configuration files working.)