How can I connect to CockroachDB from outside the Kubernetes cluster?

I've set up and deployed a Kubernetes StatefulSet containing three CockroachDB pods, as per the docs. My ultimate objective is to query the database without requiring the use of kubectl. My intermediate objective is to query the database without actually shelling into the database pod.
I forwarded a port from a pod to my local machine, and attempted to connect:
$ kubectl port-forward cockroachdb-0 26257
Forwarding from 127.0.0.1:26257 -> 26257
Forwarding from [::1]:26257 -> 26257
# later, after attempting to connect:
Handling connection for 26257
E0607 16:32:20.047098 80112 portforward.go:329] an error occurred forwarding 26257 -> 26257: error forwarding port 26257 to pod cockroachdb-0_mc-red, uid : exit status 1: 2017/06/07 04:32:19 socat[40115] E connect(5, AF=2 127.0.0.1:26257, 16): Connection refused
$ cockroach node ls --insecure --host localhost --port 26257
Error: unable to connect or connection lost.
Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).
rpc error: code = Internal desc = transport is closing
Failed running "node"
Has anyone managed to accomplish this?

From inside the Kubernetes cluster, you can talk to the database by connecting to the cockroachdb-public DNS name. In the docs, that corresponds to the example command:
kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never -- sql --insecure --host=cockroachdb-public
While that command is using the CockroachDB image, any Postgres client driver you use should be able to connect to cockroachdb-public when running within the Kubernetes cluster.
Connecting to the database from outside of the Kubernetes cluster will require exposing the cockroachdb-public service. The details will depend somewhat on how your Kubernetes cluster was deployed, so I'd recommend checking out their docs on that:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service
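For example, a minimal sketch of one option, assuming your environment supports NodePort services (a LoadBalancer or Ingress may fit better depending on how the cluster was deployed; the service name comes from the CockroachDB config above):
$ kubectl patch service cockroachdb-public -p '{"spec": {"type": "NodePort"}}'
# check which port in the 30000-32767 range was assigned:
$ kubectl get service cockroachdb-public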
And in case you're curious, the reason forwarding port 26257 isn't working for you is that port forwarding from a pod only works if the process in the pod is listening on localhost, but the CockroachDB process in the StatefulSet configuration is set up to listen on the pod's hostname (as configured via the --host flag).

Related

I installed Airflow on microk8s using Helm, but access to localhost:8080 is refused

I am just getting started with Kubernetes. I'm using microk8s, and I'm at my wits' end; I've spent five days on this.
I have Airflow installed on microk8s via Helm, running on an AWS EC2 instance. I forwarded port 8080, but the connection is being refused.
helm chart
https://github.com/airflow-helm/charts/tree/main/charts/airflow
(screenshots: pod status, webserver log, describe output, ports)
I allowed port 8080 in the security group of the AWS EC2 instance.
Port-forward 8080
I ran kubectl port-forward svc/airflow-web 8080:8080
and checked the ports with netstat -ntlp
and kubectl get cs
...but when I connected to 127.0.0.1:8080, the connection was refused:
Here is the log for postgresql. There is an error; is it related to this?
service:
  type: NodePort
  externalPort: 8080
I solved the problem by changing the webserver's service type to NodePort. In the case of ClusterIP, I don't know why I couldn't connect.
In the instance's security group, I allowed the whole NodePort range, ports 30000-32767.
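For reference, a sketch of how the NodePort ends up being used (the cluster IP and assigned node port shown are hypothetical; the service name comes from the chart above):
$ kubectl get svc airflow-web
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
airflow-web   NodePort   10.152.183.42   <none>        8080:31234/TCP   5m
# the webserver is then reachable on the node's address at the assigned port:
$ curl http://<ec2-public-ip>:31234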
I am very happy.

Kubectl port-forward has no effect

I'm trying to forward my redis pod's port to my local redis client. I checked the pods, services, replica sets, and deployments.
The command that I'm running on my cluster:
$ kubectl port-forward svc/redis 7000:6379
The response of it:
Forwarding from 127.0.0.1:7000 -> 6379
Forwarding from [::1]:7000 -> 6379
Even so, I cannot connect from my terminal with $ redis-cli -p 7000.
It returns an error, and nothing happens on the Kubernetes side:
Could not connect to Redis at 127.0.0.1:7000: Connection refused
not connected>
Note: I've also tried the --address option, sudo, different ports, and forwarding with pod/<pod_name>.
Is there anything that I missed?
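Two checks that often narrow this down, offered as a sketch (the pod name is a placeholder; note, as the CockroachDB answer above explains, that a forwarded connection lands on localhost inside the pod's network namespace):
# confirm the Service actually selects a pod and targets the expected port:
$ kubectl get endpoints redis
# confirm redis answers on localhost inside that pod:
$ kubectl exec -it <redis-pod-name> -- redis-cli -h 127.0.0.1 -p 6379 ping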

Why could a pod not access a service from another node in Kubernetes?

Today my pod could not start and showed this error:
2021-04-22 12:41:26.325 WARN 1 --- [ngPollService-1] c.c.f.a.i.RemoteConfigLongPollService : Long polling failed, will retry in 64 seconds. appId: 0010010006, cluster: default, namespaces: TEST1.RABBITMQ_CONFIG_REPORT+TEST1.RABBITMQ-CONFIG+application+TEST1.EUREKA+TEST1.DATASOURCE-DRUID+TEST1.COMMON_CONFIG+TEST1.REDIS-CONFIG, long polling url: null, reason: Get config services failed from http://service-apollo-config-server-test-alpha.sre.svc.cluster.local:8080/services/config?appId=0010010006&ip=172.30.184.11 [Cause: Could not complete get operation [Cause: Connection refused (Connection refused)]]
This error tells me the pod could not reach the config service and failed to fetch its configuration from the config center, so it could not start. I then logged into a pod on another node (one that works fine) and curled the config service like this:
curl http://service-apollo-config-server-test-alpha.sre.svc.cluster.local:8080
It works fine, so the config service is OK. Now I run the same command in a pod on the problem node:
bash-4.4# curl http://service-apollo-config-server-test-alpha.sre.svc.cluster.local:8080
curl: (7) Failed to connect to service-apollo-config-server-test-alpha.sre.svc.cluster.local port 8080: Connection refused
Then I ping the config service from the problem node like this, and it works fine:
ping service-apollo-config-server-test-alpha.sre.svc.cluster.local
Then I scan the config service with nmap from the problem node:
bash-4.4# nmap service-apollo-config-server-test-alpha.sre.svc.cluster.local
Starting Nmap 7.70 ( https://nmap.org ) at 2021-04-22 12:45 CST
Nmap scan report for service-apollo-config-server-test-alpha.sre.svc.cluster.local (10.254.82.131)
Host is up (0.000010s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
3306/tcp open mysql
8443/tcp open https-alt
Port 8080 does not show up. The network seems fine, but the service cannot be reached from this node. Why can the pod on the problem node not access the config service? What should I do to find and fix the problem? I also found that on the problem node, access by pod IP works, while access by service IP or service name fails, for example:
# pod ip access works
curl 172.30.112.2:11025
# service ip failed
curl 10.254.94.209:11025
# service name failed
curl soa-illidan-superhub.dabai-fat.svc.cluster.local:11025
Finally, I found that the kube-proxy process had exited. On CentOS 7.6, starting it with this command:
systemctl start kube-proxy
fixed it.
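For anyone hitting the same symptom (pod IP works, service IP fails), a sketch of how to confirm kube-proxy's state; the systemd unit matches the CentOS setup above, and the DaemonSet label is the usual kubeadm convention:
# if kube-proxy runs under systemd, as on this CentOS 7.6 node:
$ systemctl status kube-proxy
$ journalctl -u kube-proxy --no-pager | tail -n 50
# if it runs as a DaemonSet instead (the kubeadm default):
$ kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide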

SSH to Kubernetes pod using Bastion

I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.
In order to access it, I created a virtual machine, bastion-1, which has an external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
Connecting to the bastion and opening a SOCKS proxy:
$ ssh bastion -D 1080
Now using kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to open a shell in a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How can I get a shell connection to a pod via the bastion? What am I doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
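A sketch of the HTTP-proxy route, assuming a CONNECT-capable proxy such as squid (default port 3128) is running on the bastion; the port and pod name are placeholders:
# tunnel the bastion's HTTP proxy port to your machine:
$ ssh bastion -L 3128:127.0.0.1:3128
# point kubectl at the HTTP proxy instead of the SOCKS one; the SPDY
# upgrade used by exec can pass through an HTTP CONNECT tunnel:
$ HTTPS_PROXY=http://127.0.0.1:3128 kubectl exec -it my-pod -- /bin/bash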

Openshift: Expose postgresql remotely

I've created a postgresql instance in my OpenShift Origin v3 cluster. It's running correctly; however, I can't quite figure out why I am not able to reach it remotely.
I've exposed a route:
$ oc get routes
postgresql postgresql-ra-sec.192.168.99.100.nip.io postgresql postgresql None
$ oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgresql ClusterIP 172.30.59.113 <none> 5432/TCP 57m
This is my route:
I'm trying to access this instance from an Ubuntu machine using psql:
$ psql --host=postgresql-ra-sec.192.168.99.100.nip.io --dbname=tdevhub
psql: could not connect to server: Connection refused
Is the server running on host "postgresql-ra-sec.192.168.99.100.nip.io" (192.168.99.100) and accepting
TCP/IP connections on port 5432?
Alternatively:
$ psql --host=postgresql-ra-sec.192.168.99.100.nip.io --port=80 --dbname=tdevhub
psql: received invalid response to SSL negotiation: H
I've checked dns resolution, and it seems to work correctly:
$ nslookup postgresql-ra-sec.192.168.99.100.nip.io
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: postgresql-ra-sec.192.168.99.100.nip.io
Address: 192.168.99.100
EDIT
What about this?
Why is there this redirection? Could I try to change it before port-forwarding?
Exposing a service via a route means that you're enabling external HTTP traffic. For a service like postgresql, this is not going to work, as your example shows.
An alternative is to port-forward to your local machine and connect that way. For example, run oc get pods and then oc port-forward <postgresql-pod-name> 5432; this will allow you to create the TCP connection:
Run psql --host=localhost --dbname=tdevhub on the host machine to verify this.
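Concretely, something like this (the pod name shown is hypothetical; use whatever oc get pods reports for your postgresql pod):
$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-abcde   1/1       Running   0          57m
$ oc port-forward postgresql-1-abcde 5432
# in a second terminal on the same machine:
$ psql --host=localhost --dbname=tdevhub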
There is also the option, in some instances at least, to assign external IPs to allow ingress traffic. See the OpenShift docs. This will be more complicated to achieve, but it is a permanent solution, as opposed to port forwarding. It looks like you are running oc cluster up or minishift, however, so I'm not sure how viable this is.
In theory, while the port-forwarding answer is correct and the only way I made it work, I would say that in OpenShift 3.x you could use a TCP route for this: https://documentation.its.umich.edu/node/2126
However, it does not seem to work (at least for me) in OpenShift 4.x.
Also, I personally don't like port forwarding, because it requires connecting as a user who can reach the cluster and who has the namespace permissions to do whatever needs to be done.
I would much rather suggest the ingress solution
https://docs.openshift.com/container-platform/4.6/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
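As a sketch of that externalIP approach (the selector label and the address are assumptions; the IP must be one your cluster's externalIP policy permits):
apiVersion: v1
kind: Service
metadata:
  name: postgresql-external
spec:
  selector:
    name: postgresql        # assumed pod label; match your postgresql pods
  ports:
  - port: 5432
    targetPort: 5432
  externalIPs:
  - 192.168.99.100          # placeholder; must be allowed by the cluster's externalIP policy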