Accessing a public PostgreSQL server from a Kubernetes cluster - postgresql

I have a PostgreSQL server running in Azure on port 5432, which is publicly accessible. There is a Kubernetes cluster running several pods. I am trying to access the PostgreSQL server from a pod, but it says the port is not open, although it is accessible from my personal machine.
Does anyone have any idea what is going on?

So Postgres is accessible from your machine but not from a pod? Does the pod have internet access? Do you get an error?
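A quick way to check is to run psql from a throwaway pod inside the cluster (a minimal sketch; the server hostname and user below are placeholders for your own values):

$ kubectl run -it --rm pg-test --image=postgres:13 --restart=Never -- \
    psql -h myserver.postgres.database.azure.com -p 5432 -U myuser -d postgres

If this times out while the same command works from your machine, the cluster's egress traffic is likely being blocked, for example by Azure Postgres firewall rules that don't allow the cluster's outbound IP.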
You can access Postgres locally within the Kubernetes cluster:
If the pod is in the same namespace, you can use psql -h <svc-name>, where <svc-name> is the name of the Service in front of the Postgres pod.
If the pod is in a different namespace, you need to use the full DNS name.
In this example, the Service named psql is in the prod namespace:
psql.prod.svc.cluster.local
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
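Putting both together (a minimal sketch; the Service name and namespace come from the example above, and the postgres user is an assumption):

# Same namespace: the bare Service name resolves.
$ psql -h psql -U postgres

# Different namespace: use the fully qualified Service name.
$ psql -h psql.prod.svc.cluster.local -U postgres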

Related

Access Kubernetes Dashboard using kubectl proxy remotely for multiple users

I have set up a Kubernetes cluster in EKS. API server access is in private mode. I have a bastion host from which I can run kubectl commands. I want to access the Kubernetes dashboard remotely.
One thing I can do is ssh -L localhost:8001:127.0.0.1:8001 # kubectl proxy. This will provide me remote access.
If someone else then executes ssh -L localhost:8001:127.0.0.1:8001 # kubectl proxy, they will get an error: "error: listen tcp 127.0.0.1:8001: bind: address already in use", because somebody else is already using kubectl proxy on that port.
How do I solve this issue? I want to access the Kubernetes dashboard on multiple machines at the same time.
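One way around the bind conflict (a minimal sketch, assuming each user is free to pick their own port on the bastion): kubectl proxy binds 127.0.0.1:8001 by default, so give each user a distinct port pair.

# User A, on their machine and then on the bastion:
$ ssh -L 8001:127.0.0.1:8001 bastion
$ kubectl proxy --port=8001

# User B, likewise with a different port:
$ ssh -L 8002:127.0.0.1:8002 bastion
$ kubectl proxy --port=8002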

How to locally access InfluxDB deployed on GCP Kubernetes

InfluxDB 1.8 is deployed on Kubernetes using Helm charts. InfluxDB is deployed as a StatefulSet that exposes a Service with one running pod. I am able to get into the running pod using kubectl exec, and it is running fine. I can also see the databases using the influx CLI after logging into the pod.
But I need to access this InfluxDB on my local system to execute queries directly, using curl commands. The deployed InfluxDB has no external IP/DNS; it has an internal endpoint that starts with 10.*.
Can anybody guide me on how I can access InfluxDB from my local system using curl?
You can use the kubectl port-forward command to map either a Pod's or a Service's TCP port to a port on your local machine:
> kubectl port-forward service/your-influxdb-service 8086:8086
                                                      ^    ^
                                                      |    |
                                             local port    remote/service port
While that command is running, kubectl will forward all connections to your local port 8086 to the same port of your InfluxDB service. All traffic will be funneled through kubectl and your API server, so this is not exactly suited for high-throughput scenarios, but should be sufficient for occasional debugging and testing.
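With the port-forward running, queries from the local machine can go through curl against InfluxDB 1.x's HTTP query endpoint (a minimal sketch; add authentication parameters if your instance requires them):

$ curl -G http://localhost:8086/query --data-urlencode "q=SHOW DATABASES"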

Communication between pods on different namespaces

I have an application pod running in a namespace named frontend and a database pod running in a different namespace named backend. I need the two pods, residing in different namespaces, to communicate. The database container is up and running, but the application container is in CrashLoopBackOff.
Looking at the logs of the application pod, there is an error resolving the database hostname, which was provided through the environment variable PGHOST and was equal to the name of the database container. It seems the application container is unable to resolve the database host.
How should I connect them? I suppose the problem is due to the different namespaces. How do I connect them and make them communicate?
error:
> The Gemfile's dependencies are satisfied
> rake aborted!
> PG::ConnectionBad: could not translate host name "postgres" to
> address: Name or service not known
I am assuming you have a ClusterIP-type Service named postgres in the xyz namespace. Then you can access it from another namespace by specifying postgres.xyz.
Kubernetes has a DNS system, CoreDNS, which will resolve the hostname postgres.xyz to the Service's ClusterIP.
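In this question's setup, that means pointing PGHOST at the database's Service in the backend namespace (a minimal sketch; the deployment and Service names are assumptions):

$ kubectl -n frontend set env deployment/my-app \
    PGHOST=postgres.backend.svc.cluster.local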

Access Postgres in Kubernetes from an application outside the cluster

I am trying to access a Postgres DB deployed in Kubernetes (kubeadm) on CentOS VMs from another application running on another CentOS VM. I have deployed the Postgres Service as the NodePort type. My understanding is that we can deploy it as the LoadBalancer type only on cloud providers like AWS/Azure, not on bare-metal VMs. So now I am trying to configure an Ingress with the NodePort-type Service. But I am still unable to access my DB other than by using kubectl exec $Pod-Name on the Kubernetes master.
My ingress.yaml is
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: postgres-ingress
spec:
  backend:
    serviceName: postgres
    servicePort: 5432
which does not show any address, as below:
NAME               HOSTS   ADDRESS   PORTS   AGE
postgres-ingress   *                 80      4m19s
I am not even able to access it from pgAdmin on my local Mac. Am I missing something?
Any help is highly appreciated.
Ingress won't work: it's designed only for HTTP traffic, and the Postgres protocol is not HTTP. You want solutions that deal with raw TCP traffic:
A NodePort service alone should be enough, and it's probably the simplest solution. Find out the port with kubectl describe on the service, and then point your Postgres client at the IP of the node VM (not the pod or the service) on that port (see the sketch after this list).
You can use port-forwarding: kubectl port-forward pod/your-postgres-pod 5432:5432, and then connect your Postgres client to localhost:5432. This is my preferred way of accessing the database from a local machine (it's very handy and secure), but I wouldn't use it for production workloads (kubectl must always be running, so it's somewhat fragile, and you don't get the best performance).
With special networking configuration, it is possible to directly access the service or pod IPs from outside the cluster. You have to route traffic for the pod and service CIDR ranges to the k8s nodes; this will probably involve configuring your VM hypervisors, routers, and firewalls, and is highly dependent on which networking (CNI) plugin your Kubernetes cluster is using.
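For the NodePort option, the lookup and connection look roughly like this (a minimal sketch; the Service name and the resulting port are assumptions):

$ kubectl get service postgres -o jsonpath='{.spec.ports[0].nodePort}'
31432
$ psql -h <node-vm-ip> -p 31432 -U postgres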

SSH to Kubernetes pod using Bastion

I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.
In order to access it, I created a virtual machine bastion-1 which has external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
The connection to the bastion, which acts as the proxy:
$ ssh bastion -D 1080
Now using kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to open a shell in a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How do I allow a shell connection to a pod via the bastion? What am I doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
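For example, instead of the SOCKS tunnel you could forward a local port to an HTTP proxy running on the bastion (a minimal sketch, assuming an HTTP proxy such as Squid is listening there on port 3128):

$ ssh bastion -L 3128:127.0.0.1:3128
$ HTTPS_PROXY=http://127.0.0.1:3128 kubectl exec -it my-pod -- /bin/bash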