pgAdmin remotely access PostgreSQL service in Kubernetes - postgresql

I launch a pod from Rancher and my PostgreSQL daemon runs fine.
Then an ingress is set up with a target (the pod name) and port 5432.
Then I use kubectl to start port forwarding.
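The port-forward step is along these lines (same placeholder names as below):
kubectl port-forward -n <ns_name> pod/<pod_name> 5432:5432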
After these steps are completed, I can access the db from within the kubernetes cluster using
kubectl exec -it pod/<pod_name> -n <ns_name> -- psql -U postgres
This ran fine.
Then I tried to connect to the database using pgAdmin on my laptop. It always failed with:
Unable to connect to server:
Could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "pgsql.kube.xx.yy.com" (###.##.###.##) and accepting
TCP/IP connections on port 5432?
I can connect to the database from another pod in the k8s cluster; this works for me there:
./psql --host <pod.ip> -U postgres -d metastore -p 5432
Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: u-abcdefg
    field.cattle.io/ingressState: '{"c######M=":"p#####q:sslcerts","c#####g==":"statefulset:pgsql-###:pgsql-####"}'
    field.cattle.io/publicEndpoints: '[{"addresses":["##.##.##.##"],"port":443,"protocol":"HTTPS","serviceName":"pgsql-###:ingress-5###","ingressName":"pgsql-###:posgres-ingres","hostname":"pgsql-##.kube.##.###.###s","allNodes":true}]'
  creationTimestamp: "2021-11-10T19:01:30Z"
  generation: 5
  labels:
    cattle.io/creator: norman
  name: posgres-ingres
  namespace: pgsql-###
  resourceVersion: "343048085"
  selfLink: /apis/extensions/v1beta1/namespaces/pgsql-###/ingresses/posgres-ingres
  uid: 10###-###-########-######
spec:
  rules:
  - host: pgsql-##.kube.##.##.##
    http:
      paths:
      - backend:
          serviceName: ingress-########
          servicePort: 5432
  tls:
  - hosts:
    - pgsql-###.kube.##.##.###
    secretName: sslcerts
status:
  loadBalancer:
    ingress:
    - ip: ##.##.##.##
    - ip: ##.##.##.##
    - ip: ###.###.###.###
    - ip: ###.###.###.###
Your suggestions would be greatly appreciated.

As stated in the documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
PostgreSQL is a SQL database and doesn't use HTTP as its protocol. From the PostgreSQL documentation:
PostgreSQL uses a message-based protocol for communication between frontends and backends (clients and servers). The protocol is supported over TCP/IP and also over Unix-domain sockets.
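So instead of an Ingress, expose the database with a Service of type LoadBalancer (or NodePort). A minimal sketch, assuming your StatefulSet's pods carry a label such as app: pgsql (the label and service name here are placeholders; adjust to your setup):
apiVersion: v1
kind: Service
metadata:
  name: pgsql-external
  namespace: pgsql-###
spec:
  type: LoadBalancer   # or NodePort if no external load balancer is available
  selector:
    app: pgsql         # hypothetical label; must match your PostgreSQL pods
  ports:
  - port: 5432
    targetPort: 5432
pgAdmin can then connect to the load balancer's address (or, for NodePort, any node IP plus the allocated port) on 5432.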

Related

How to use an ExternalName service to access an internal service that is exposed with ingress

I am trying out a possible Kubernetes scenario in a local minikube cluster: accessing an internal service that is exposed with an ingress in one cluster from another cluster, using an ExternalName service. I understand that with an ingress the service is already accessible within the cluster. As I am trying this out locally with minikube, I am unable to run two clusters simultaneously; I just wanted to verify whether it is possible to access an ingress-exposed service using an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal from within a running pod, the error that I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
  - port: 3000
    targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
  - host: k8s-yaml-hello.info
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: k8s-yaml-hello
            port:
              number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
  - name: ''
    appProtocol: http
    protocol: TCP
    port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
As you are getting the error curl: (7) Failed to connect:
This error message means that no web server is running on the specified (or implied) IP and port.
Check /etc/hosts to verify whether the IP is pointing to the correct domain; if it's not, provide the correct IP.
Refer to this SO for more information.
Also, in ingress.yaml use port 80, and in service.yaml the port should be 80 as well. The service port and target port should be different; as per your yaml they are the same. Change it to 80 and try again; if you get any errors, post them here.
The problem is that minikube tunnel by default binds to the localhost address 127.0.0.1. Every node, machine, VM, container etc. has its own localhost address, which exists to reach local services without having to know the IP address of a network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (always just "myself").
To make it work like you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this, your service should be reachable from within the container. Check first with the IP address:
curl http://192.168.1.10
Then make sure name resolution via /etc/hosts works in your container, using dig, nslookup, getent hosts, or whatever similar tool is available there.
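For example (a sketch; replace <pod> with one of your pod names, and note that getent and curl must exist in the image):
kubectl exec -it <pod> -- getent hosts k8s-yaml-hello.info
kubectl exec -it <pod> -- curl http://k8s-yaml-hello.info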

How to access a database that is only accessible from a Kubernetes cluster locally?

I have a situation where I have a Kubernetes cluster that has access to a Postgres instance (which is not running in the Kubernetes cluster). The Postgres instance is not accessible from anywhere else.
What I would like to do is connect with my database tools locally. What I have found is kubectl port-forward, but I think this would only be a solution if the Postgres instance ran as a pod. What I basically need is a pod that forwards everything sent on port 8432 to the Postgres instance; then I could use the port forward.
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
What is the right way to do this?
You can create a Service (with a manual Endpoints object) for your PostgreSQL instance:
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: ipAddressOfYourPGInstance
  ports:
  - port: 5432
And then use:
kubectl port-forward service/postgresql 5432:5432
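With the forward running, local tools connect via 127.0.0.1; for example, assuming a postgres user:
psql -h 127.0.0.1 -p 5432 -U postgres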
Alternatively, you can run a Postgres client inside the cluster to connect to the Postgres instance, expose that pod using an ingress, and access its UI over the URL.
As a Postgres client you can use pgAdmin: https://hub.docker.com/r/dpage/pgadmin4/
You can set this up as your pg client and use it.
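A minimal sketch of running that image in-cluster (names and credentials are placeholders; the two PGADMIN_* variables are the image's documented required settings):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin
        image: dpage/pgadmin4
        ports:
        - containerPort: 80           # pgAdmin's web UI
        env:
        - name: PGADMIN_DEFAULT_EMAIL
          value: admin@example.com    # placeholder login
        - name: PGADMIN_DEFAULT_PASSWORD
          value: change-me            # placeholder password
Expose it with a Service and an ingress (pgAdmin speaks HTTP, so an ingress is appropriate here), then register the postgresql service above as a server inside pgAdmin.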

How to Configure Kubernetes in Hairpin Mode

I'm trying to enable hairpin connections on my Kubernetes service, on GKE.
I've tried to follow the instructions here: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/ to configure my kubelet config to enable hairpin mode, but it looks like my configs are never saved, even though the edit command returns without error.
Here is what I try to set when I edit the node:
spec:
  podCIDR: 10.4.1.0/24
  providerID: gce://staging/us-east4-b/gke-cluster-staging-highmem-f36fb529-cfnv
  configSource:
    configMap:
      name: my-node-config-4kbd7d944d
      namespace: kube-system
      kubeletConfigKey: kubelet
Here is my node config when I describe it
Name:         my-node-config-4kbd7d944d
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
Data
====
kubelet_config:
----
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "hairpinMode": "hairpin-veth"
}
I've tried both using "edit node" and "patch". Same result in that nothing is saved. Patch returns "no changes made."
Here is the patch command from the tutorial:
kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
I also can't find any resource on where the "hairpinMode" attribute is supposed to be set.
Any help is appreciated!
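(For reference, the configSource field can be read back directly to check whether the change was stored; this is plain kubectl, with the same ${NODE_NAME} as in the tutorial:
kubectl get node ${NODE_NAME} -o jsonpath='{.spec.configSource}'
An empty result means the API server never kept the change.)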
------------------- edit ----------------
Here is why I think hairpinning isn't working.
root@668cb9686f-dzcx8:/app# nslookup tasks-staging.[my-domain].com
Server: 10.0.32.10
Address: 10.0.32.10#53
Non-authoritative answer:
Name: tasks-staging.[my-domain].com
Address: 34.102.170.43
root@668cb9686f-dzcx8:/app# curl https://[my-domain].com/python/healthz
hello
root@668cb9686f-dzcx8:/app# nslookup my-service.default
Server: 10.0.32.10
Address: 10.0.32.10#53
Name: my-service.default.svc.cluster.local
Address: 10.0.38.76
root@668cb9686f-dzcx8:/app# curl https://my-service.default.svc.cluster.local/python/healthz
curl: (7) Failed to connect to my-service.default.svc.cluster.local port 443: Connection timed out
Also, if I issue a request to localhost from my service (not curl), I get a "connection refused." Issuing requests to the external domain, which should get routed to the same pod, works fine though.
I only have one service, one node, one pod, and two listening ports at the moment.
--------------------- including deployment yaml -----------------
Deployment
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: my-app
        ports:
        - containerPort: 8080
        - containerPort: 50001
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTPS
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  backend:
    serviceName: my-service
    servicePort: 60000
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 60000
      - path: /python/*
        backend:
          serviceName: my-service
          servicePort: 60001
service
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - name: port
    port: 60000
    targetPort: 8080
  - name: python-port
    port: 60001
    targetPort: 50001
  type: NodePort
I'm trying to set up a multi-port application where the main program triggers a script to run by issuing a request on the local machine on a different port. (I need to run something in Python, but the main app is in Golang.)
It's a simple script, and I'd like to avoid exposing the Python endpoints on the external domain so that I don't have to worry about authentication, etc.
-------------- requests sent from my-service in golang -------------
https://[my-domain]/health: success
https://[my-domain]/python/healthz: success
http://my-service.default:60000/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://my-service.default/python/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://my-service.default:60001/python/healthz: dial tcp: lookup my-service.default on 169.254.169.254:53: no such host
http://localhost:50001/healthz: dial tcp 127.0.0.1:50001: connect: connection refused
http://localhost:50001/python/healthz: dial tcp 127.0.0.1:50001: connect: connection refused
Kubelet reconfiguration in GKE
You should not reconfigure the kubelet in cloud-managed Kubernetes clusters like GKE. It's not supported and can lead to errors and failures.
Hairpinning in GKE
Hairpinning is enabled by default in GKE-provided clusters. You can check whether it's enabled by running the command below on one of the GKE nodes:
ifconfig cbr0 | grep PROMISC
The output should look like this:
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1
where PROMISC indicates that hairpinning is enabled.
Please refer to official documentation about debugging services: Kubernetes.io: Debug service: a pod fails to reach itself via the service ip
Workload
Based only on the service definition you provided, you should have access to your Python application on port 50001 from the pod hosting it via (examples after this list):
localhost:50001
ClusterIP:60001
my-service:60001
NodeIP:nodeport-port (check $ kubectl get svc my-service for this port)
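For example, from inside the pod (a sketch; the paths follow your probes):
curl http://localhost:50001/healthz
curl http://my-service.default.svc.cluster.local:60001/python/healthz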
I tried to run your Ingress resource and it failed to create. Please check how an Ingress definition should look.
Please take a look at the official documentation, where the whole deployment process is explained with examples:
Kubernetes.io: Connect applications service
Cloud.google.com: Kubernetes engine: Ingress
Cloud.google.com: Kubernetes engine: Load balance ingress
Additionally please check other StackOverflow answers like:
Stackoverflow.com: Kubernetes how to access service if nodeport is random - it describes how you can access application in your pod
Stackoverflow.com: What is the purpose of kubectl proxy - it describes what happen when you create your service object.
Please let me know if you have any questions about that.

Access SQL Server database from Kubernetes Pod

My deployed Spring Boot application is trying to connect to an external SQL Server database from a Kubernetes pod, but every time it fails with the error:
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have exec'd into the pod and successfully pinged the DB server without any issues.
Below are the solutions I have tried:
Created a Service and Endpoints object, provided the DB IP in the configuration file, and tried to bring up the application in the pod
Tried using the internal IP from the Endpoints object instead of the DB IP in the configuration, to see whether the internal IP resolves to the DB IP
Both cases gave the same result. Below is the yaml I am using to create the Service and Endpoints.
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
- addresses:
  - ip: <<DB IP>>
  ports:
  - port: 1433
Please let me know if I am wrong or missing something in this setup.
Additional information on the k8s setup:
It is a clustered master with external etcd cluster topology
The OS on the nodes is CentOS
I am able to ping the server from all nodes and from the pods that are created
For this scenario an ExternalName service is very useful. You redirect traffic to this address without defining an Endpoints object.
kind: "Service"
apiVersion: "v1"
metadata:
namespace: "your-namespace"
name: "ftp"
spec:
type: ExternalName
externalName: your-ip
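To check what the name resolves to from inside the cluster, something like this works (busybox as a throwaway pod; adjust the namespace):
kubectl run -it --rm dnscheck --image=busybox --namespace=your-namespace -- nslookup ftp
One caveat: externalName is meant to hold a DNS name (it becomes a CNAME record), so with only a raw IP the Service-plus-Endpoints approach from the question is the better fit.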
The issue was resolved by updating the deployment yaml with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints object to access the DB. Thank you for all the input on the post.

Kubernetes service is reachable from node but not from my machine

I have a timeout problem with my site hosted on Kubernetes cluster provided by DigitalOcean.
u@macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u@pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from node:
u@node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of the "Does the Service work by IP?" part of that page, I tried the following command and got the expected output.
u@node$ curl 10.245.16.96
*correct response*
This should mean that everything is fine with DNS and the service. I confirmed that kube-proxy is running with the following command:
u@node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But I have something wrong with iptables rules:
u@node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from the node)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do that with a DigitalOcean-provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have a NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do it without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For DigitalOcean provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing an existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean's managed Kubernetes reconciles the firewall it manages and reverts manual edits.
ClusterIP services are only accessible from within the cluster. If you want to access it from outside the cluster, it needs to be configured as NodePort or LoadBalancer.
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service on a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, then accessing it via kubectl port-forward:
$ kubectl create deployment hello-world --image=rancher/hello-world --replicas=2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080