I've created a PostgreSQL instance in my OpenShift Origin v3 cluster. It's running correctly; however, I can't figure out why I'm not able to reach it remotely.
I've exposed a route:
$oc get routes
postgresql postgresql-ra-sec.192.168.99.100.nip.io postgresql postgresql None
$ oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgresql ClusterIP 172.30.59.113 <none> 5432/TCP 57m
I'm trying to access this instance from an Ubuntu machine using psql:
$ psql --host=postgresql-ra-sec.192.168.99.100.nip.io --dbname=tdevhub
psql: could not connect to server: Connection refused
Is the server running on host "postgresql-ra-sec.192.168.99.100.nip.io" (192.168.99.100) and accepting
TCP/IP connections on port 5432?
And if I try port 80 instead:
$ psql --host=postgresql-ra-sec.192.168.99.100.nip.io --port=80 --dbname=tdevhub
psql: received invalid response to SSL negotiation: H
I've checked DNS resolution, and it seems to work correctly:
$ nslookup postgresql-ra-sec.192.168.99.100.nip.io
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: postgresql-ra-sec.192.168.99.100.nip.io
Address: 192.168.99.100
EDIT
What about this?
Why is there this redirection? Could I try to change it before port-forwarding?
Exposing a service via a route means that you're enabling external HTTP traffic. For a service like PostgreSQL, this is not going to work, as your example shows: the stray "H" in the SSL negotiation error is the start of an HTTP response coming back from the router.
An alternative is to port-forward to your local machine and connect that way. For example, run oc get pods and then oc port-forward <postgresql-pod-name> 5432; this will allow you to create the TCP connection.
Run psql --host=localhost --dbname=tdevhub on the host machine to verify this.
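Putting that together, a minimal sketch (substitute the pod name reported by oc get pods):
$ oc get pods
$ oc port-forward <postgresql-pod-name> 5432:5432
# in a second terminal, while the port-forward is running:
$ psql --host=localhost --port=5432 --dbname=tdevhub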
There is also the option, in some instances at least, to assign external IPs to allow ingress traffic. See the OpenShift docs. This is more complicated to achieve, but it's a permanent solution as opposed to port forwarding. It looks like you are running oc cluster up or minishift, however, so I'm not sure how viable this is.
In theory, while the port-forwarding answer is correct (and the only way I made it work), in OpenShift 3.x you could use a TCP route for this: https://documentation.its.umich.edu/node/2126
However, it does not seem to work (at least for me) in OpenShift 4.x.
Also, I personally don't like port forwarding, because it assumes you can connect as a user that can reach the cluster and has the necessary permissions in the namespace.
I would much rather suggest the ingress solution
https://docs.openshift.com/container-platform/4.6/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
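A minimal sketch of that externalIP approach (the service name, selector, and IP below are assumptions; the IP must be one that routes to your cluster's nodes):
apiVersion: v1
kind: Service
metadata:
  name: postgresql-external
spec:
  selector:
    name: postgresql
  ports:
  - port: 5432
    targetPort: 5432
  externalIPs:
  - 192.168.99.100
With that in place, a client outside the cluster should be able to reach the database with psql --host=192.168.99.100 --port=5432.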
Related
I am trying to follow this tutorial to set up Minikube on my MacBook.
Minikube with ingress
I have also referred to Stack Overflow question and Stack Overflow question 2; neither of these is working for me.
When I run minikube tunnel, it asks me to enter my password and then gets stuck after I enter it.
sidharth@Sidharths-MacBook-Air helm % minikube tunnel
✅ Tunnel successfully started
📌 NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
❗ The service/ingress example-ingress requires privileged ports to be exposed: [80 443]
🔑 sudo permission will be asked for it.
❗ The service/ingress minimal-ingress requires privileged ports to be exposed [80 443]
🏃 Starting tunnel for service example-ingress.
🔑 sudo permission will be asked for it.
🏃 Starting tunnel for service minimal-ingress.
Password:
I am getting the below response when I run kubectl get ingress:
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx hello-world.info localhost 80 34m
This is an issue specifically with the docker driver, and it's only an output issue. If you use a VM driver (like hyperkit for macOS), you'll get the expected output in the documentation.
This stems from the fact that we need to do two discrete things to tunnel: one for a container driver (since it needs to route to 127.0.0.1) and one for a VM driver.
We can potentially look into fixing this so that the output for both versions are similar, but the tunnel itself is working fine.
Refer to this GitHub link for more information.
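To convince yourself the tunnel is working despite the output, you can curl the ingress host pinned to 127.0.0.1 (hello-world.info is the host from the ingress above):
$ curl --resolve hello-world.info:80:127.0.0.1 http://hello-world.info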
On Macs with M1 chips, I can use minikube with podman (you can brew install both). minikube tunnel will ask for the password and then appear to hang, because it's essentially establishing a tunnel, similar to kubectl port-forward, between 127.0.0.1:443 and your ingress; that tunnel disappears once you Ctrl-C out of the minikube tunnel process.
Note that this is different from the experience/behavior you'll get with minikube on VirtualBox on an Intel-based Mac, which returns the list of mapped ports and always keeps the ingress reachable on the VirtualBox VM.
I want to use the DNS system of Google Kubernetes Engine to make one pod (a web backend) connect to another pod through a service (in this case Redis).
When I check the DNS in the cluster, I get this:
[ root@curl:/ ]$ nslookup redis-service
Server: 10.40.0.10
Address 1: 10.40.0.10 kube-dns.kube-system.svc.cluster.local
Name: redis-service
Address 1: 10.40.2.59 redis-service.default.svc.cluster.local
[ root@curl:/ ]$
In my application, I would set the REDIS_HOST URL to redis-service.default.svc.cluster.local.
Unfortunately, the logs say it cannot connect (also with http:// in front).
Am I missing a setting to make these pods able to communicate using this address? I want to use this address precisely because it is predictable.
I found two things that work:
Change the service to ClusterIP instead of NodePort.
As @harshmanvar mentioned in the comments, just using the service name also works; the short name resolves because the pod's DNS search path includes its own namespace.
I am not exactly sure why the service type matters here; I would like to understand this behaviour.
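For reference, a minimal sketch of the ClusterIP variant that worked (the app: redis selector is an assumption; it must match your Redis pod's labels):
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
With this, both redis-service (from the default namespace) and redis-service.default.svc.cluster.local should resolve to the service's cluster IP.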
Suppose we build two VMs on a bare-metal server, connected through a network: one is the master and the other is a worker. I SSH to the master and construct a cluster using kubeadm, with three pods and a service of type: ClusterIP. When I want to access the cluster, I run kubectl proxy on the master. Then we can explore the API with curl and wget from the VM we SSHed into, like this:
$ curl http://localhost:8080/api/
So far, so good! But what if I want to access the services from my laptop? The localhost above refers to the bare-metal server! How can I access the services through the proxy from my laptop when the cluster sits on another machine?
When I run $ curl http://localhost:8080/api/ on my laptop, it says:
127.0.0.1 refused to connect
which makes sense! But what is the solution to this?
If you forward port 8080 when SSHing to the master, you can use localhost on your laptop to access the APIs on the cluster.
You can try adding the -L flag to your ssh command:
$ ssh -L 8080:localhost:8080 your.master.host.com
Then the curl to localhost will work.
You can also pass extra arguments to the kubectl proxy command to make the proxy server listen on a non-default IP address (instead of only 127.0.0.1), exposing it outside:
kubectl proxy --port=8001 --address='<MASTER_IP_ADDRESS>' --accept-hosts="^.*$"
You can get your master IP address by issuing the following command: kubectl cluster-info
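For example, if kubectl cluster-info reports the master at 10.0.0.5 (a placeholder address), then from your laptop:
$ curl http://10.0.0.5:8001/api/
Keep in mind that --accept-hosts="^.*$" disables the proxy's host filtering, so don't leave this exposed on an untrusted network.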
I've set up and deployed a Kubernetes stateful set containing three CockroachDB pods, as per the docs. My ultimate objective is to query the database without requiring the use of kubectl. My intermediate objective is to query the database without actually shelling into the database pod.
I forwarded a port from a pod to my local machine, and attempted to connect:
$ kubectl port-forward cockroachdb-0 26257
Forwarding from 127.0.0.1:26257 -> 26257
Forwarding from [::1]:26257 -> 26257
# later, after attempting to connect:
Handling connection for 26257
E0607 16:32:20.047098 80112 portforward.go:329] an error occurred forwarding 26257 -> 26257: error forwarding port 26257 to pod cockroachdb-0_mc-red, uid : exit status 1: 2017/06/07 04:32:19 socat[40115] E connect(5, AF=2 127.0.0.1:26257, 16): Connection refused
$ cockroach node ls --insecure --host localhost --port 26257
Error: unable to connect or connection lost.
Please check the address and credentials such as certificates (if attempting to
communicate with a secure cluster).
rpc error: code = Internal desc = transport is closing
Failed running "node"
Has anyone managed to accomplish this?
From inside the Kubernetes cluster, you can talk to the database by connecting to the cockroachdb-public DNS name. In the docs, that corresponds to the example command:
kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never -- sql --insecure --host=cockroachdb-public
While that command uses the CockroachDB image, any Postgres client driver should be able to connect to cockroachdb-public when running within the Kubernetes cluster.
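For instance, a throwaway pod with a stock Postgres client should also work (a sketch assuming an insecure-mode cluster, as in the question; the postgres image ships with psql):
$ kubectl run psql-client -it --rm --restart=Never --image=postgres -- psql -h cockroachdb-public -p 26257 -U root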
Connecting to the database from outside of the Kubernetes cluster will require exposing the cockroachdb-public service. The details will depend somewhat on how your Kubernetes cluster was deployed, so I'd recommend checking out their docs on that:
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-the-service
And in case you're curious, the reason forwarding port 26257 isn't working for you is that port forwarding from a pod only works if the process in the pod is listening on localhost, while the CockroachDB process in the statefulset configuration is set up to listen on the pod's hostname (as configured via the --host flag).
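To illustrate (a paraphrase, not the exact config): if the statefulset's start command is along the lines of
cockroach start --insecure --host $(hostname -f) ...
then the server only accepts connections addressed to the pod's FQDN, so the port-forward's connection to 127.0.0.1 inside the pod is refused. Making the process listen on all interfaces instead would let the port-forward connect, at the cost of whatever isolation the --host setting was providing.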
How can I expose a service of type NodePort to the internet without using type LoadBalancer? Every resource I have found does it with a load balancer, but I don't want load balancing; it's expensive and unnecessary for my use case, because I am running one instance of a Postgres image mounted on a persistent disk, and I would like to be able to connect to my database from my PC using pgAdmin. If possible, could you please provide a somewhat detailed answer, as I am new to Kubernetes, GCE, and networking.
For the record and a bit more context: I have a deployment running 3 replicas of my API server, which I connect to through a load balancer with a set loadBalancerIP, and another deployment running one instance of Postgres with a NodePort service through which my API servers communicate with the db. My problem is that maintaining the db without public access is hard.
Using NodePort as the Service type works straight away, e.g. like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  - port: 443
    nodePort: 30443
    name: https
  selector:
    name: nginx
More details can be found in the documentation.
The drawback of using NodePort is that you have to take care of integrating with your provider's firewall yourself. A starting point for that can be found in the Configuring Your Cloud Provider's Firewalls section of the official documentation.
For GCE, opening up the above publicly on all nodes could look like this:
gcloud compute firewall-rules create myservice --allow tcp:30080,tcp:30443
Once this is in place, your services should be accessible through any of the public IPs of your nodes. You'll find them with:
gcloud compute instances list
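With the firewall rule in place, you can point a client at any node's public IP and the nodePort; for the Postgres use case from the question that might look like this (30432 is a hypothetical nodePort, and mydb a placeholder database name):
$ psql --host=<NODE_PUBLIC_IP> --port=30432 --dbname=mydb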
You can run kubectl in a terminal window (Command Prompt or PowerShell on Windows) to port-forward the PostgreSQL deployment to your localhost:
kubectl port-forward deployment/my-pg-deployment 5432:5432
While this command is running (it runs in the foreground), you can point pgAdmin at localhost:5432 to access your pod on GKE. Simply close the terminal once you are done using pgAdmin.
For the sake of improved security: if in doubt about exposing a service like a database to the public internet, you might like the idea of hiding it behind a simple Linux VM called a jump host (called a bastion host in the official GCP documentation, where this setup is recommended). This way your database instance remains open only towards the internal network; you can then remove the external IP address so that the database stops being exposed to the internet.
The high level concept:
public internet <- SSH:22 -> bastion host <- db:5432 -> database service
After setting up and establishing your SSH connection, you can reach the database by forwarding the database port (see the example below).
The Procedure Overview
Create the GCE VM
Specific requirements:
Pick the image of a Linux distribution you are familiar with
VM connectivity to the internet: attach a public IP to the VM (you can do this during or after the installation)
Security: go to Firewall rules and add a new rule opening port 22 towards the VM, restricting incoming connections to your home public IP (see the gcloud sketch just below)
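A gcloud sketch for that firewall rule (the rule name and home IP are placeholders):
$ gcloud compute firewall-rules create allow-bastion-ssh --allow tcp:22 --source-ranges=<YOUR_HOME_PUBLIC_IP>/32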
Go to your local machine, from which you would connect, and set up the connection as in the following example.
SSH Connect to the bastion host VM
An example setup for your SSH connection, located at $HOME/.ssh/config (if this file doesn't exist, just create it):
Host bastion-host-vm
Hostname external-vm-ip
User root
IdentityFile ~/.ssh/id_ed25519
LocalForward 5432 internal-vm-ip:5432
Now you are ready to connect from your local machine's terminal with this command:
ssh bastion-host-vm
Once connected, you can pick your favorite database client and connect to localhost:5432, which is the remote database port forwarded through the SSH connection from the instance behind the SSH host.
CAUTION: The port forwarding only works while the SSH connection is established. If you disconnect or close the terminal window, the SSH connection closes, and so does the database port forwarding. So keep the terminal open and the connection to your bastion host established for as long as you are using the database connection.
Pro tip for cost saving on the GCE VM
You could use the free tier offer for creating the bastion host VM, which means this increased protection comes for free.
Search for "Compute Engine" in the official table.
You can check this thread for more details on GCE free tier limits.