We have configured a Kubernetes v1.15.1 cluster on a bare-metal server, with Istio 1.4.0 (demo profile) and mTLS enabled.
Our MySQL server is outside the K8s cluster, on Azure VMs.
Now, when we inject the istio-proxy sidecar while deploying the application, we are unable to connect to the MySQL server via JDBC; we also tried the mysql client with the same result. But when we remove the istio-proxy by redeploying, we can connect instantly without any issue.
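For example, a check like this fails from a sidecar-injected pod but succeeds once the sidecar is removed (pod name, container name, and VM address are placeholders):
kubectl exec -it my-app-pod -c my-app -- mysql -h 10.0.0.5 -P 3306 -u appuser -p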
We went through many blogs about Istio and MySQL and tried removing the default mesh policy, but that didn't work. The case covered in the Istio FAQs is when MySQL is inside the K8s cluster with Istio injected.
You can configure auto-mTLS for Istio by setting values.global.mtls.auto=true (i.e., it uses mTLS when possible and falls back to plain text for other connections):
https://istio.io/docs/tasks/security/authentication/auto-mtls/
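A minimal sketch of enabling it at install time, assuming you install with istioctl (the 1.4-era installer):
istioctl manifest apply --set values.global.mtls.auto=true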
A ServiceEntry and DestinationRule did the work in my case.
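A minimal sketch of that pair for an external MySQL on a VM (the hostname is a placeholder for your Azure VM):
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-mysql
spec:
  hosts:
  - mysql.example.com        # placeholder for the Azure VM's hostname
  ports:
  - number: 3306
    name: tcp-mysql
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-mysql
spec:
  host: mysql.example.com
  trafficPolicy:
    tls:
      mode: DISABLE          # plain TCP to the external DB, no Istio mTLS
EOF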
I am new to Trino. I have installed Trino using the Helm chart from https://trinodb.github.io/charts, but I am not able to get it working over HTTPS.
Details of my cluster
I have a kubeadm cluster deployed on bare-bones EC2. I also have HAProxy installed, which manages ingress for the other services, and I have added an ingress for Trino as well.
Is there some config I need to pass to it? I tried to read the documentation but had a tough time understanding it.
Tried
Added entries in HAProxy
Generated the certs
Created an ingress for Trino (sketch below)
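The ingress looks roughly like this (the hostname and secret name are placeholders; the backend service name and port are what I believe the Helm chart creates):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: trino
spec:
  tls:
  - hosts:
    - trino.example.com      # placeholder hostname
    secretName: trino-tls    # secret created from the generated certs
  rules:
  - host: trino.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: trino      # service name assumed from the helm release
            port:
              number: 8080   # Trino's default HTTP port
EOF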
Expecting
Trino to open over a secure HTTPS connection
I set up two clusters with Rancher 2.5.x: one single-node management cluster for running the Rancher server, and one "production" cluster which handles the application stacks.
This all worked fine. Then, while updating the Rancher server to 2.6, something apparently failed, and the Rancher server has been down ever since. The management cluster itself is still up; only the Rancher server is not. However, since access is passed through the Rancher server, I cannot connect to any of the clusters via kubectl or helm.
I do see that all required containers on the management cluster are still up and running:
Also, I can SSH to this server, so I do have access to all resources. But since I cannot connect to the cluster itself, I cannot fix this issue. I guess it would be quite easy to just fix the Rancher Helm release to make it work again, but I have no idea how I could do that. I thought about running kubectl or helm locally on the node in the management cluster, but I don't know how to get the kubeconfig for that. The kubeconfig I used before connects to the Rancher server, which happens to be the problem now.
Is there any way to connect to the cluster without using the Rancher-generated kubeconfig?
My setup (running locally in two minikube instances) is two k8s clusters:
a frontend cluster running a Golang api-server,
a backend cluster running an HA Bitnami Postgres cluster (using the Bitnami postgresql-ha chart)
If I set the pgpool service to use NodePort and get the IP + port of the node the pgpool pod is running on, I can hardwire this (host + port) into my database connector in the api-server (in the other cluster), and it works (see the sketch below).
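For reference, the working hardwired setup is roughly this (the release and minikube profile names are placeholders for my local setup):
# Expose pgpool via NodePort in the backend cluster
kubectl patch svc my-release-postgresql-ha-pgpool -p '{"spec": {"type": "NodePort"}}'
# Get the node IP + port to hardwire into the api-server's database connector
minikube -p backend service my-release-postgresql-ha-pgpool --url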
However, what I haven't been able to figure out is how to connect to the other cluster (e.g., to pgpool) generically, without using the IP address.
I also tried Skupper, which also has an example of connecting to a backend cluster with Postgres running on it, but their example doesn't use the Bitnami HA Postgres Helm chart, just a simple Postgres install, so it is not the same setup.
Any ideas?
For those times when you have to, or purposely want to, connect pods/deployments across multiple clusters, Nethopper (https://www.nethopper.io/) is a simple and secure solution. The postgresql-ha scenario above is covered under their free tier. There is a two-cluster minikube 'how to' tutorial at https://www.nethopper.io/connect2clusters which is very similar to your frontend/backend use case. Nethopper is based on skupper.io, but the configuration is much easier and more user-friendly, and it is centralized, so it scales to many clusters if you need it to.
To solve your specific use case, you would:
First install your api server in the frontend and your bitnami postgresql-ha chart in the backend, as you normally would.
Go to https://mynethopper.com/ and
Register
Clouds -> define both clusters (clouds), frontend and backend
Application Network -> create an application network
Application Network -> attach both clusters to the network
Application Network -> install nethopper-agent in each cluster with the copy-paste instructions.
Objects -> import and expose pgpool (call the service 'pgpool') in your backend.
Objects -> distribute the service 'pgpool' to frontend, using a distribution rule.
Now you should see the 'pgpool' service in the frontend cluster:
kubectl get service
When the API server pods in the frontend request service from pgpool, they will connect to pgpool in the backend, magically. It's like the 'pgpool' pod is now running in the frontend.
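A quick way to verify the cross-cluster path from the frontend (the image, user, and pod name here are illustrative, not part of the tutorial):
kubectl run pg-check -it --rm --restart=Never --image=bitnami/postgresql --command -- psql -h pgpool -p 5432 -U postgres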
The nethopper part should only take 5-10 minutes, and you do NOT need IP addresses, TLS certs, K8s ingresses or load balancers, a VPN, or an Istio service mesh or sidecars.
After moving to the one-cluster architecture, it became easier to see how to connect to the Bitnami postgresql-ha cluster. After trying a few different things, this finally worked:
-postgresql-ha-postgresql-headless:5432
(that's the host and port I'm using to connect from my Golang server)
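As a sketch, the DSN my server consumes looks like this (the release-name prefix, credentials, and database are placeholders):
export DATABASE_URL='postgres://postgres:PASSWORD@my-release-postgresql-ha-postgresql-headless:5432/postgres?sslmode=disable'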
Now I believe it should be fairly straightforward to also run the two-cluster case, using Skupper to bind to the headless service.
I am new to Google Cloud Platform, and here is my context:
I have one Compute Engine VM running as a MongoDB server and another Compute Engine VM running as a NodeJS server, already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application Docker image to the cluster.
All services, both GCE and GKE, are in the same region (us-east1).
As a hard test, I accessed a Kubernetes cluster node via SSH, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
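The test looked roughly like this (the IP is a placeholder for the MongoDB VM's internal address):
docker run -it --rm mongo:4 mongo --host 10.142.0.2 --eval 'db.runCommand({ ping: 1 })'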
I have also checked the firewall settings on GCP, as well as the bindIp setting on the MongoDB server, and nothing is blocking there.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall, even though both are on the same network (default).
I had to whitelist the cluster pod network listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
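The equivalent gcloud command would be something like this (the rule name is arbitrary, and tcp:27017 assumes MongoDB's default port):
gcloud compute firewall-rules create allow-gke-pods-to-vms \
  --network=default \
  --source-ranges=10.8.0.0/14 \
  --allow=tcp:27017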
By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with other configurations (the firewall or your application).
You can do a test: start a simple HTTP server on the GCE VM (say its internal IP is 10.138.0.5):
python -m SimpleHTTPServer 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080
I'm trying to use Istio in a K8s 1.6 cluster on AWS.
I have a Kafka pod/service running the old-fashioned way, with a "kafka-zk-broker-kafka.dev" service without a cluster IP (headless), so the kafka-zk-broker-kafka.dev service (I'm in the dev namespace) resolves to the internal names of my 3 Kafka pods. This works great.
~ # nslookup kafka-zk-broker-kafka.dev
Name: kafka-zk-broker-kafka.dev
Address 1: 10.33.0.11 kafka-zk-kafka-0.kafka-zk-broker-kafka.dev.svc.cluster.local
Address 2: 10.38.96.16 kafka-zk-kafka-2.kafka-zk-broker-kafka.dev.svc.cluster.local
Address 3: 10.40.128.13 kafka-zk-kafka-1.kafka-zk-broker-kafka.dev.svc.cluster.local
I deployed a Kafka producer application with the Istio sidecar, since it also exposes a gRPC port for internal use.
Deployment went fine, but my application can't connect to the "kafka-broker" service. DNS resolution is OK, but I can't reach the service port (TCP 9092) using either the Kafka client or telnet.
What I understand is that, when the Istio (Envoy) sidecar is deployed, everything leaving the pod goes through the Envoy proxy...
So does the Envoy proxy not know how to reach regular services?
Am I missing something? Is there a way to mix Istio/Envoy with regular k8s services?
What you are doing should work, but I think you're running into this known bug: https://github.com/istio/issues/issues/37