Kubernetes Connectivity

I have a pod running a Java application. This Java application talks to a MySQL database which is on-prem. The MySQL server accepts connections from 192.* IPs.
I have the pod running on EKS worker nodes with 192.* IPs. I am able to telnet to MySQL from the worker nodes. When the pod starts, the Java application tries to connect to MySQL with the pod IP (which is some random 172.* IP) and fails with a MySQL connection error.
How can I solve this?

Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with the usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you resolve your issue.
My guess is that you just need to add a route to your cluster's pod network on your MySQL host or on its default network router.
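For illustration, a minimal sketch of such a route, assuming a hypothetical pod CIDR of 172.16.0.0/16 and a worker node reachable at 192.168.1.10 (substitute your actual values):
# On the MySQL host or its default router: send pod-CIDR traffic back via a worker node
sudo ip route add 172.16.0.0/16 via 192.168.1.10
Note that many CNI setups can instead SNAT pod traffic to the node IP, in which case MySQL would only ever see the 192.* worker address and no route is needed.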

Not able to connect to the internet via HTTPS from Minikube pods

Cluster info:
Minikube installation steps on a CentOS VM:
curl -LO https://storage.googleapis.com/minikube/releases/v1.21.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --addons=ingress --vm=true --memory=8192 --driver=none
Pods in this Minikube cluster are not able to connect to the internet.
However, my host VM has an internet connection, with no firewall or iptables rules set up.
Can anybody help me debug this connection refused error?
UPDATE:
I have just noticed that I am able to connect to non-HTTPS URLs, but not HTTPS URLs.
How did you start Minikube on the VM? Which command did you use?
If you are using minikube --driver=docker, it might not work.
To start Minikube directly on a VM you have to change the driver:
minikube start --driver=none
With the Docker driver, Minikube creates a container, installs Kubernetes inside it, and spawns the pods in there.
Check more at: https://minikube.sigs.k8s.io/docs/drivers/none/
If you are on Docker you can try:
pkill docker
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
docker -d
"It will force docker to recreate the bridge and reinit all the network rules"
reference : https://github.com/moby/moby/issues/866#issuecomment-19218300
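Note that docker -d is how the daemon was started in the very old Docker release the linked issue refers to; on a modern systemd-based host, the equivalent way to have the bridge recreated is to restart the daemon:
sudo systemctl restart docker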
Try with:
docker run --net=host -it ubuntu
Or else add DNS servers in the config file /etc/default/docker:
DOCKER_OPTS="--dns 208.67.222.222 --dns 208.67.220.220"
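Note that /etc/default/docker is only honored by legacy init systems; on current installs the supported place for this setting is /etc/docker/daemon.json, for example:
{
  "dns": ["208.67.222.222", "208.67.220.220"]
}
Then restart Docker for the change to take effect.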
Please check your container port and target port. This is my pod setup:
spec:
  containers:
  - name: hello
    image: "nginx"
    ports:
    - containerPort: 5000
This is my service setup:
ports:
- protocol: TCP
  port: 60000
  targetPort: 5000
If your target port and container port don't match, you will get a curl: (7) connection refused error.
Check out similar Stack Overflow questions for more information.
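As a quick check that the two ports line up, assuming the pod name hello from the manifest above and a hypothetical service name my-service, you can compare them directly and then probe the service port the same way as earlier with a busybox pod:
kubectl get pod hello -o jsonpath='{.spec.containers[0].ports[0].containerPort}'
kubectl get svc my-service -o jsonpath='{.spec.ports[0].targetPort}'
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> telnet my-service 60000
Both jsonpath queries should print 5000; if the telnet probe to the service port is refused, the mismatch described above is the likely cause.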

Double port forwarding kubernetes + docker

Summary:
I have a docker container which is running kubectl port-forward, forwarding the port (5432) of a postgres service running as a k8s service to a local port (2223).
In the Dockerfile, I have exposed the relevant port, 2223. Then I ran the container, publishing the said port (-p 2223:2223).
Now when I am trying to access the postgres through psql -h localhost -p 2223, I am getting the following error:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
However, when I docker exec -ti into the said container and run the above psql command, I am able to connect to Postgres.
Dockerfile CMD:
EXPOSE 2223
CMD ["bash", "-c", "kubectl -n namespace_test port-forward service/postgres-11-2 2223:5432"]
Docker Run command:
docker run -it --name=k8s-conn-12 -p 2223:2223 my_image_name:latest
Output of the docker run command:
Forwarding from 127.0.0.1:2223 -> 5432
So the port forwarding is successful, and I am able to connect to the Postgres instance from inside the Docker container. What I am not able to do is connect from outside the container through the exposed and published port.
You are missing the following parameter in your $ kubectl port-forward ... command:
--address 0.0.0.0
I've reproduced the setup that you've tried to achieve and this was the reason the connection wasn't possible. I've included more explanation below.
Explanation
$ kubectl port-forward --help
Listen on port 8888 on all addresses, forwarding to 5000 in the pod
kubectl port-forward --address 0.0.0.0 pod/mypod 8888:5000
Options:
--address=[localhost]: Addresses to listen on (comma separated). Only accepts IP addresses or
localhost as a value. When localhost is supplied, kubectl will try to bind on both 127.0.0.1 and ::1
and will fail if neither of these addresses are available to bind.
By default, $ kubectl port-forward will bind to localhost, i.e. 127.0.0.1. In this setup, localhost is internal to the container and will not be accessible from your host even with the --publish (-p) parameter.
To allow connections that do not originate from localhost, you will need to pass the earlier mentioned --address 0.0.0.0. This will make kubectl listen on all IP addresses and respond to the traffic accordingly.
Your Dockerfile CMD should look similar to:
CMD ["bash", "-c", "kubectl -n namespace_test port-forward --address 0.0.0.0 service/postgres-11-2 2223:5432"]
Additional reference:
Kubernetes.io: Docs: Reference: Generated: Kubectl commands

kubectl port-forward to another endpoint

Is there a corresponding command with kubectl to:
ssh -L8888:rds.aws.com:5432 example.com
kubectl has port-forward; you can also specify --address, but that strictly accepts only IP addresses or localhost.
The older answer is still valid.
Still, a workaround would be to use something like
https://hub.docker.com/r/marcnuri/port-forward
kubectl run --env REMOTE_HOST=your.service.com --env REMOTE_PORT=8080 --env LOCAL_PORT=8080 --port 8080 --image marcnuri/port-forward test-port-forward
Run it on the cluster and then port forward to it.
kubectl port-forward test-port-forward 8080:8080
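Mapped onto the exact ssh -L8888:rds.aws.com:5432 example from the question, and assuming the image behaves as described above, that would look something like:
kubectl run --env REMOTE_HOST=rds.aws.com --env REMOTE_PORT=5432 --env LOCAL_PORT=5432 --port 5432 --image marcnuri/port-forward test-port-forward
kubectl port-forward test-port-forward 8888:5432
psql -h localhost -p 8888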
Short answer, No.
In OpenSSH, local port forwarding is configured using the -L option:
ssh -L 80:intra.example.com:80 gw.example.com
This example opens a connection to the gw.example.com jump server, and forwards any connection to port 80 on the local machine to port 80 on intra.example.com.
By default, anyone (even on different machines) can connect to the specified port on the SSH client machine. However, this can be restricted to programs on the same host by supplying a bind address:
ssh -L 127.0.0.1:80:intra.example.com:80 gw.example.com
You can read the docs here.
The port-forward in Kubernetes works only within the cluster; you can forward traffic that hits a specified local port to a Deployment, Service, or Pod:
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
The --address flag specifies what to listen on: 0.0.0.0 means all interfaces, localhost is accepted as a name, and you can also set a specific IP address for it to listen on.
Documentation is available here, you can also read Use Port Forwarding to Access Applications in a Cluster.
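For example, a minimal sketch assuming a Service named my-service exposing port 80:
kubectl port-forward service/my-service --address 0.0.0.0 8080:80
This makes the Service reachable on port 8080 of every interface of the machine running kubectl, but the remote end must still be a resource inside the cluster, not an arbitrary external endpoint as with ssh -L.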
One workaround you can use if you have an SSH server somewhere on the Internet is to SSH to your server from your pod, port-forwarding in reverse:
# Suppose a web console is being served at
# http://my-service-8f6717ab-e.default:8888/
# inside your cluster:
kubectl exec -it my-job-f523b248-7htj6 -- ssh -R8888:my-service-8f6717ab-e.default:8888 user@34.23.1.2
Then you can connect to the service inside Kubernetes from outside of it. If the SSH server is not local to you, you can SSH to it from your local machine with a normal port forward:
me@my-macbook-pro:$ ssh -L8888:localhost:8888 user@34.23.1.2
Then point your browser to http://localhost:8888/

When I start to prepare Kubernetes in AWS, the errors shown below appear

Below are the commands and their outputs:
root@k8s-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
root@k8s-master:~# kubectl get services -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It looks like you are not running EKS; otherwise you would not be able to access the masters. With EKS, the masters are managed by AWS and you can't SSH to them.
Your kubectl command makes a call to the Kubernetes API server, so you have to check whether it is actually running on localhost on port 8080.
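If this is meant to be a self-managed cluster bootstrapped with kubeadm rather than EKS, note that /etc/kubernetes/admin.conf only exists after a successful init; the standard sequence from the kubeadm documentation is:
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config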

how to connect to postgresql on Kubernetes cluster?

I have deployed Crunchy Data PostgreSQL on my Kubernetes cluster.
However, I am not sure how to connect to the database remotely.
What command can I use to connect remotely so I can create a new database?
Is there a kubectl command to go with psql?
I was able to look at another forum and found what I needed. Executing kubectl exec with the pod name gets me to a bash prompt:
kubectl get pods
kubectl exec -it <POD_NAME> -- bash
su postgres
psql
In the above, postgres is the user name.
You will get:
postgres=#
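From there, creating the new database mentioned in the question is a single SQL statement (the name mydb is a placeholder):
postgres=# CREATE DATABASE mydb;
Or, as a non-interactive one-liner from outside the pod, assuming psql and the postgres user exist in the container as above:
kubectl exec -it <POD_NAME> -- psql -U postgres -c 'CREATE DATABASE mydb;'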