CockroachDB Kubernetes Cluster Ingress Setup

My goal is to be able to access my CockroachDB cluster from a domain like db.test.com, with a certificate. I want to use cert-manager with Let's Encrypt to issue the certificate, and it should work with Cloudflare (in non-proxied mode, as I believe they do not support arbitrary TCP when proxying).
At first, to test everything, I used a normal kubectl port-forward, which worked, but now I need to expose it permanently.
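For reference, the temporary port-forward looked roughly like this (the service name matches the cluster-cockroachdb-public service used in the manifests below; the local port choice is arbitrary):
kubectl port-forward svc/cluster-cockroachdb-public 26257:26257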
I have tried using an Ingress (via ingress-nginx). I know that Ingress is mostly for HTTP/HTTPS, but I saw it can supposedly be used for what I need, and in Cloudflare I cannot point to the port I needed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tcp-example-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/tcp-services: "cluster-cockroachdb-public"
    nginx.ingress.kubernetes.io/tcp-service-port: "26257"
    nginx.ingress.kubernetes.io/backend-protocol: "TCP"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - db.test.com
    secretName: db-access-ssl-cert-production
  rules:
  - host: db.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cluster-cockroachdb-public
            port:
              number: 26257
Attempting to connect does not work, and in the logs I can see 400 status codes with strange characters like \x20... (which suggests non-HTTP bytes, i.e. the PostgreSQL wire protocol that CockroachDB speaks, being parsed as an HTTP request).
No matter what I tried, I could not get it to work.
I did manage to get the web UI portion working; that was easy enough.
Another resource that might be helpful is the values.yaml that I used:
conf:
  cache: "2Gi"
  max-sql-memory: "2Gi"
# My web UI, which works
ingress:
  enabled: true
  labels: {}
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-production
  paths: [/]
  hosts:
  - db-ui.test.com
  tls:
  - hosts: [db-ui.test.com]
    secretName: ssl-cert-production
Everything else is left at the defaults.

I solved my issue by following the tutorial below:
https://mailazy.com/blog/exposing-tcp-udp-services-ingress/
It is also covered here:
https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
Ingress itself does not support TCP or UDP services, so we use the ingress-nginx configuration for this: we patch the ingress-nginx chart values and add a custom entry (copy the default values.yaml from the ingress-nginx Helm chart on GitHub).
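For reference, outside of Helm the same mapping is expressed as a ConfigMap consumed by the controller's --tcp-services-configmap flag; a sketch following the names used in the official ingress-nginx docs linked above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: "<namespace>/<service>:<service port>"
  "26257": "default/cluster-cockroachdb-public:26257"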
In the chart's values.yaml, I just edited this portion:
# -- TCP service key-value pairs
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp:
  "26257": "default/cluster-cockroachdb-public:26257"
After that, we run a helm upgrade to replace the values of ingress-nginx, and then it should work for anyone else as well.
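A minimal sketch of that upgrade, assuming the chart was installed as a release named ingress-nginx in the ingress-nginx namespace (adjust the release name, namespace, and values file to your install):
# re-apply the chart with the patched values.yaml
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml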
If you are using Cloudflare, make sure to disable the proxy (DNS-only mode)!
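Once the TCP passthrough is in place, a quick end-to-end test with the cockroach CLI might look like this (user, database, and TLS options are placeholders for whatever your cluster uses):
cockroach sql --url "postgresql://root@db.test.com:26257/defaultdb?sslmode=verify-full"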

Related

Deploying Jaeger in a url different than root

I am trying to deploy the Jaeger all-in-one image in a Kubernetes cluster.
Jaeger is not at the root of the URL; it is accessible through https://somedomain.com/xyz/jaeger.
I have an ingress rule which seems to be pointing correctly to a Service, which is in turn referencing the pod in a deployment fine (I can see all this in the Rancher UI).
But somehow when I try to access it, nginx throws a 502 Bad Gateway error.
This is what the ingress rule looks like:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: somedomain.com
    http:
      paths:
      # Jaeger
      - path: /xyz/jaeger(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: jaeger
            port:
              number: 16868
Then in the pod definition I tried using the QUERY_BASE_PATH env var, setting it to /xyz/jaeger, but that made no difference at all.
The problem was an incorrect port being specified:
16868 instead of 16686
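For clarity, the corrected backend portion of the ingress (16686 is Jaeger's default query/UI port):
backend:
  service:
    name: jaeger
    port:
      number: 16686  # was mistyped as 16868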

Ingress on self-hosted Kubernetes on custom interface

I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
I am not even sure whether that is easily possible, or whether I should just switch to an external solution such as HAProxy or nginx.
Required behavior:
192.168.0.1-H"domain.com":443/frontend -> 192.168.0.1 (eth0) -> ingress -> service-frontend
192.168.0.1-H"domain.com":443/backend -> 192.168.0.1 (eth0) -> ingress -> service-backend
88.88.88.88-H"domain.com":443/frontend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
88.88.88.88-H"domain.com":443/backend -> 88.88.88.88 (eth1) -> ? -> [403 or timeout]
And then later the eth1 interface should be able to be switched on, so that requests on that interface behave the same as on eth0.
I would like to be able to deploy multiple instances of services for load-balancing. I would like to keep the configuration in my namespace (if possible) so I can always delete and apply everything at once.
I'm using this guide as a reference: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
I was able to get something working with minikube, but obviously I could not expose any external IPs, and performance was quite bad. For that, I just configured a "kind: Ingress" and that was it.
So far, the ingress controller that is enabled by default on microk8s seems to listen on all interfaces, and I can only configure it in its own namespace. Defining my own ingress seems to have no effect.
I would like to deploy an nginx-ingress controller on my self-hosted Kubernetes (microk8s) that is configurable to listen on one or more interfaces (external IPs).
For the above scenario, you have to deploy multiple Nginx ingress controllers and give each one a different class name.
Official documentation: https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
So in this scenario, you have to create a Kubernetes Service of type LoadBalancer for each controller; each will point to the respective deployment, and the class name will be referenced in the ingress object.
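A sketch of installing a second controller with its own class, using the ingress-nginx Helm chart (release, namespace, and class names here are examples; value names can differ between chart versions):
helm install ingress-nginx-internal ingress-nginx/ingress-nginx \
  --namespace ingress-internal --create-namespace \
  --set controller.ingressClassResource.name=internal-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/internal-nginx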
If you are looking to use multiple domains with a single ingress controller, you can easily do that by specifying the host in the ingress.
Example for two domains:
bar.foo.dev
foo.bar.dev
YAML example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-bar
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - bar.foo.dev
    secretName: tls-secret-bar
  rules:
  - host: bar.foo.dev
    http:
      paths:
      - backend:
          serviceName: barfoo
          servicePort: 80
        path: /(.*)
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontdoor-foo
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - foo.bar.dev
    secretName: tls-secret-foo
  rules:
  - host: foo.bar.dev
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: 9000
        path: /(.*)
One potential fix was much simpler than anticipated: no messing with MetalLB or anything else was needed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "public"
    nginx.ingress.kubernetes.io/whitelist-source-range: 192.168.0.0/24
...
This does not answer the question of splitting an Ingress across multiple interfaces, but it does solve the problem of restricting public access.
By default, a bare-metal ingress will listen on all interfaces, which might be a security issue.
This solution works without enabling the ingress addon on microk8s:
Install the ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/baremetal/deploy.yaml
Create your deployment and service, and add this Ingress resource (all in the one namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
  name: ingress-resource
  namespace: namespace-name
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: service-name
            port:
              number: service-port
        path: /namespace-name/service-name(/|$)(.*)
        pathType: Prefix
kubectl get svc -n ingress-nginx
Now take either the CLUSTER-IP or the EXTERNAL-IP and:
curl <ip>/namespace-here/service-here

Ingress hostname won't change

I have an Ingress set up, and initially I used a placeholder hostname, "thesis.info". Now I would like to change this hostname, but whenever I change it I just end up getting 404 errors. So far I have:
- Changed the spec.tls.rules.host value in the YAML to the new hostname
- Changed the CN value which openssl uses for the crt and key that are generated for TLS
- Edited the /etc/hosts entry on my local machine
Is there a step I am missing that could be causing the problem? I am baffled by why it works with one value but not the other.
Below is the ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - thesis
    secretName: ingress-tls
  rules:
  - host: pod1out.ie
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
---
Most likely, you can find a hint on what is going on in the nginx logs. If you have access, you can read the logs with something like this:
kubectl -n <ingress-namespace> get pods
# there should be one or more nginx pods
kubectl -n <ingress-namespace> logs <nginx-pod>
Not sure if this is the only issue, but according to the documentation, the host in tls has to explicitly match the host in the rules:
spec:
  tls:
  - hosts:
    - pod1out.ie
    secretName: ingress-tls
  rules:
  - host: pod1out.ie
Before struggling with TLS, I would recommend making the plain-HTTP route itself work (e.g. by creating another ingress resource, as sketched below), and once this works with the host you want, go for TLS.
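A minimal HTTP-only variant of the question's ingress for that test might look like this (same names as in the question; the tls block is simply left out):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress-http  # hypothetical name for the test resource
  namespace: thesis
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: pod1out.ie
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000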

Prometheus dashboard exposed over ingress controller

I am trying to set up Prometheus in a k8s cluster and am able to run it using Helm. I can access the dashboard when I expose prometheus-server as a LoadBalancer service using an external IP.
The same does not work when I configure this service as a ClusterIP and make it the backend behind an ingress controller. I receive a 404 error; any thoughts on how to troubleshoot this?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ops-ingress
  annotations:
    #nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /prometheus(/|$)(.*)
        backend:
          serviceName: prometheus-server
          servicePort: 80
With the above ingress definition in place, the URL http://<>/prometheus/ gets redirected to http://<>/graph/ and then a 404 error page is rendered. When the URL is adjusted to http://<>/prometheus/graph, some of the web controls render, with lots of errors in the browser console.
Prometheus might be expecting to have control over the root path (/).
Please change the Ingress host to prometheus.example.com (i.e. move it to a subdomain) and it should work fine.
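Alternatively, if you want to keep serving Prometheus under a path prefix rather than a subdomain, Prometheus itself has to be told about the prefix. A sketch using the Prometheus Helm chart values (value names are assumptions; check your chart version):
server:
  # external URL under which Prometheus is reachable (hypothetical host)
  baseURL: http://somedomain.com/prometheus
  # route prefix Prometheus should serve its UI and API under
  prefixURL: /prometheus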
Please change your Ingress configuration file and add a host field:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ops-ingress
  annotations:
    #nginx.org/server-snippet: "proxy_ssl_verify off;"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: prometheus.example.com
    http:
      paths:
      - path: /prometheus(/|$)(.*)
        backend:
          serviceName: prometheus-server
          servicePort: 80
then apply the changes by executing this command:
$ kubectl apply -f your_ingress_configuration_file.yaml
The host header field in a request provides the host and port information from the target URI, enabling the origin server to distinguish among resources while servicing requests for multiple host names on a single IP address.
Please take a look here: hosts-header.
Ingress definition: ingress.
Useful information: helm-prometheus.
Useful documentation: ingress-path-matching.

How to disable http traffic and force https with kubernetes ingress on gcloud

Hi, I tried the new annotation for ingress explained here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ssl-iagree-ingress
  annotations:
    kubernetes.io/ingress.allowHTTP: "false"
spec:
  tls:
  - secretName: secret-cert-myown
  backend:
    serviceName: modcluster
    servicePort: 80
but I can still access it through http. This is my setup on gcloud: ingress -> apache:80.
Well, I was able to resolve the issue, thanks to Mr Danny, from this pull request here: there was a typo in
kubernetes.io/ingress.allowHTTP: "false"
Change it to
kubernetes.io/ingress.allow-http: "false"
and it works fine now.
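For clarity, the corrected manifest from the question with only the annotation key changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ssl-iagree-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"  # note the hyphenated key
spec:
  tls:
  - secretName: secret-cert-myown
  backend:
    serviceName: modcluster
    servicePort: 80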
P.S.: only for master version 1.3.5.