I am new to Trino. I have installed Trino using the Helm chart from https://trinodb.github.io/charts, but I am not able to get it working over HTTPS.
Details of my cluster:
I have a kubeadm cluster deployed on plain EC2 instances, and an HAProxy instance that manages the ingress of the other services; I have added an ingress entry for Trino as well.
Is there some config I need to pass to it? I tried to read the documentation but had a tough time understanding it.
Tried:
- added entries in HAProxy
- generated the certs
- created an ingress for Trino
Expecting:
- Trino to open over a secure HTTPS connection
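For reference, this is roughly the shape of ingress I expect to need: TLS terminated at the ingress, plain HTTP from there to the Trino service. The names here are assumptions (the chart's default service trino on port 8080, an haproxy ingress class, a secret trino-tls created from the generated certs), so adjust them to your release and domain:

    # trino-ingress.yaml -- apply with: kubectl apply -f trino-ingress.yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: trino
    spec:
      ingressClassName: haproxy        # assumed ingress controller class
      tls:
        - hosts:
            - trino.example.com        # placeholder hostname
          secretName: trino-tls        # e.g. kubectl create secret tls trino-tls --cert=... --key=...
      rules:
        - host: trino.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: trino        # the chart's default service name
                    port:
                      number: 8080     # Trino's default HTTP port

With this, HTTPS terminates at the ingress while Trino itself keeps serving plain HTTP inside the cluster, so no Trino-side TLS configuration should be needed.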
Related
I set up two clusters with Rancher 2.5.x: one single-node management cluster for running the Rancher server and one "production" cluster which handles the application stacks.
This all worked fine; then, during an update of the Rancher server to 2.6, something apparently failed, and the Rancher server has been down ever since. The management cluster itself is still up; only the Rancher server is not. However, since access is routed through the Rancher server, I cannot connect to either cluster via kubectl or helm.
I do see that all required containers on the management cluster are still up and running:
Also, I can ssh to this server, so I do have access to all resources; but since I cannot connect to the cluster itself, I cannot fix the issue. I guess it would be quite easy to just fix the Rancher Helm release to make it work again, but I have no idea how I could do that. I thought about running kubectl or helm locally on the node in the management cluster, but I don't know how to get a kubeconfig for that. The kubeconfig I used before connects to the Rancher server, which happens to be the problem now.
Is there any way to connect to the cluster without using the Rancher-generated kubeconfig?
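For what it's worth, one workaround sometimes suggested for RKE-provisioned clusters is to borrow the kubeconfig that RKE leaves on the node itself. The path below is an assumption for RKE and will differ for other installers, and the node credential may not have enough RBAC for everything:

    # On the management-cluster node, via ssh:
    sudo kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get nodes

    # If that credential is accepted, the broken release can be inspected
    # (and potentially rolled back) with helm the same way:
    helm --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml \
      -n cattle-system history rancher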
How do I set up basic auth for Prometheus deployed on a K8s cluster using YAMLs?
I was able to achieve this easily when Prometheus was deployed locally on a host from the tar file. But when it is deployed as a pod in a K8s cluster, I have tried almost everything on the internet with no luck.
Any kind of help would be really appreciated!
Thanks!
I'm not sure why the official documentation would work only on a VM and not in a container, but if it truly does not work, you can put a web server in front of the web interface and set up authentication on that.
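That said, Prometheus also supports basic auth natively (since v2.24) through a web config file, and the same mechanism works in a pod. A sketch, where the names prometheus-web-config, web.yml, and the mount path are all placeholders for your deployment:

    # Generate a bcrypt hash for the admin user (htpasswd ships with
    # apache2-utils); keep the hash after the "admin:" prefix.
    htpasswd -nbB admin 's3cret'

    # web.yml -- Prometheus web config enabling basic auth:
    #   basic_auth_users:
    #     admin: <bcrypt-hash-from-above>

    # Ship the file to the cluster as a Secret:
    kubectl create secret generic prometheus-web-config --from-file=web.yml

    # Then, in the Deployment, mount that secret at /etc/prometheus/web
    # and add this flag to the container args:
    #   --web.config.file=/etc/prometheus/web/web.yml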
My setup (running locally in two minikube instances) consists of two k8s clusters:
- the frontend cluster is running a golang api-server,
- the backend cluster is running an HA bitnami postgres cluster (using the bitnami postgresql-ha chart)
If I set the pgpool service to type NodePort and take the IP + port of the node the pgpool pod is running on, I can hardwire that host + port into the database connector in my api-server (in the other cluster), and that works.
However, what I haven't been able to figure out is how to connect to the other cluster (e.g. to pgpool) generically, without using the node's IP address.
I also tried Skupper, which has an example of connecting to a backend cluster with postgres running on it, but their example doesn't use the bitnami HA postgres helm chart, just a simple postgres install, so it is not at all the same.
Any ideas?
For those times when you have to, or purposely want to, connect pods/deployments across multiple clusters, Nethopper (https://www.nethopper.io/) is a simple and secure solution. The postgresql-ha scenario above is covered under their free tier. There is a two-cluster minikube how-to tutorial at https://www.nethopper.io/connect2clusters which is very similar to your frontend/backend use case. Nethopper is based on skupper.io, but the configuration is much easier and more user-friendly, and it is centralized, so it scales to many clusters if you need it to.
To solve your specific use case, you would:
First install your api server in the frontend and your bitnami postgresql-ha chart in the backend, as you normally would.
Go to https://mynethopper.com/ and
Register
Clouds -> define both clusters (clouds), frontend and backend
Application Network -> create an application network
Application Network -> attach both clusters to the network
Application Network -> install nethopper-agent in each cluster with copy paste instructions.
Objects -> import and expose pgpool (call the service 'pgpool') in your backend.
Objects -> distribute the service 'pgpool' to frontend, using a distribution rule.
Now you should see the 'pgpool' service in the frontend cluster:
kubectl get service
When the API server pods in the frontend request service from pgpool, they will connect to pgpool in the backend, magically. It's like the 'pgpool' pod is now running in the frontend.
The Nethopper part should only take 5-10 minutes, and you do NOT need IP addresses, TLS certs, K8s ingresses or load balancers, a VPN, or an Istio service mesh with sidecars.
After moving to the one-cluster architecture, it became easier to see how to connect to the bitnami postgres-ha cluster; after trying a few different things, this finally worked:

    postgresql-ha-postgresql-headless:5432

(that's the host and port I'm using to connect from my golang server)
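A quick in-cluster sanity check of that endpoint, following the pattern the bitnami chart prints in its NOTES (image, service name, and the exported $PASSWORD are assumptions for a release named postgresql-ha):

    kubectl run pg-check --rm -it --restart=Never \
      --image=docker.io/bitnami/postgresql-repmgr \
      --env="PGPASSWORD=$PASSWORD" --command -- \
      psql -h postgresql-ha-postgresql-headless -p 5432 -U postgres -c 'SELECT 1'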
Now I believe it should be fairly straightforward to run the two-cluster case as well, using Skupper to bind to the headless service, as sketched below.
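A rough sketch of that Skupper flow, untested for the headless case (CLI syntax from Skupper 1.x; the service name matches the chart's headless service above and may need adjusting):

    # In the backend cluster/namespace:
    skupper init
    skupper expose service postgresql-ha-postgresql-headless \
      --address postgresql-ha --port 5432
    skupper token create backend.token

    # In the frontend cluster/namespace (after copying backend.token over):
    skupper init
    skupper link create backend.token
    # The golang server can then dial postgresql-ha:5432 as a local service.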
So I have a Spring Boot application, which is deployed as a pod in Kubernetes. I also have a Keycloak server running in Kubernetes (same namespace). I am facing an issue with logging into my application through a browser on my local machine.
I am specifying auth-server-url=http://keycloak-service-name:8080/auth so that my pod can access it, and it can. The problem arises when I try to log in to my application, as it redirects to http://keycloak-service-name:8080/auth, and this cannot be resolved locally because it is the Kubernetes service name.
I also have an ingress set up, so I tried specifying the auth URL as the ingress, http://keycloak-ingress/auth, but then my pod cannot access it and gets a "Failed to load URLs from ..." error, as it cannot resolve the ingress domain. However, I can access the ingress from my browser.
I feel like I am missing something really obvious here. I need some kind of URL that is accessible both to the pod within the cluster and from outside the cluster. Or maybe there is some way to specify a separate URL for the lookup my application is doing to "load the URLs"?
The only way I have managed to get this to work is by exposing the service externally and using the external IP and port, but that is not an acceptable solution.
I found out that there is a frontend URL parameter on the Keycloak server. I set this to point to my ingress and set auth-server-url to point to my Keycloak service name. This solved my problem: when my application does a lookup internally it uses the service, but when I access the frontend it uses the ingress.
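For anyone who lands here: on the WildFly-based Keycloak distribution, that parameter can be set via the KEYCLOAK_FRONTEND_URL environment variable (the deployment name and ingress host below are assumptions for this setup):

    # Point Keycloak's frontend URL at the externally reachable ingress:
    kubectl set env deployment/keycloak \
      KEYCLOAK_FRONTEND_URL=http://keycloak-ingress/auth

    # The Spring Boot side keeps the in-cluster service for back-channel
    # calls, e.g. in application.properties:
    #   keycloak.auth-server-url=http://keycloak-service-name:8080/auth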
We have configured a Kubernetes cluster on a bare-metal server with v1.15.1 and Istio 1.4.0 (demo profile) with mTLS enabled.
Our MySQL server is outside the K8s cluster, on Azure VMs.
When we inject the istio-proxy while deploying the application, we are unable to connect to the MySQL server via JDBC; we also tried the mysql client. But when we remove the istio-proxy by re-deploying, we can connect instantly without any issue.
We went through many blogs regarding Istio and MySQL and tried removing the default mesh policy, but that didn't work. The case in the Istio FAQs is for when MySQL is inside the K8s cluster with Istio injected.
You can configure auto mTLS for Istio by setting values.global.mtls.auto=true (i.e. it uses mTLS when possible and falls back to plain text for other connections):
https://istio.io/docs/tasks/security/authentication/auto-mtls/
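On Istio 1.4 that could be applied roughly like this (assuming an istioctl-based installation):

    istioctl manifest apply --set values.global.mtls.auto=true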
A ServiceEntry and a DestinationRule did the trick in my case.
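Roughly like this sketch (Istio 1.4-era v1alpha3 API; mysql.example.com is a placeholder for the Azure VM's DNS name):

    # mysql-external.yaml -- apply with: kubectl apply -f mysql-external.yaml
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: external-mysql
    spec:
      hosts:
        - mysql.example.com          # placeholder for the Azure VM
      location: MESH_EXTERNAL
      ports:
        - number: 3306
          name: tcp-mysql
          protocol: TCP
      resolution: DNS
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: external-mysql
    spec:
      host: mysql.example.com
      trafficPolicy:
        tls:
          mode: DISABLE              # don't originate mTLS to the external VM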