Kubernetes Pod can't connect to local postgres server with Hasura image - postgresql

I'm following this tutorial to connect a Hasura Kubernetes pod to my local Postgres server.
When I create the deployment, the pod's container fails to connect to Postgres (it goes into CrashLoopBackOff and keeps retrying), but doesn't give any reason why. Here are the logs:
{"type":"pg-client","timestamp":"2020-05-03T06:22:21.648+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(0)."}}
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hasura
    hasuraService: custom
  name: hasura
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura
    spec:
      containers:
      - image: hasura/graphql-engine:v1.2.0
        imagePullPolicy: IfNotPresent
        name: hasura
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://USER:@localhost:5432/my_db
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
I'm using postgres://USER:@localhost:5432/my_db as the postgres URL - is "localhost" the correct address here?
I verified that the above postgres URL works when I try it directly (no password):
> psql postgres://USER:@localhost:5432/my_db
psql (12.2)
Type "help" for help.
my_db=#
How else can I troubleshoot it? The logs aren't very helpful...

If I understood you correctly, the issue is that the Pod (from "inside" Minikube) cannot access Postgres installed on the host machine (the one that runs Minikube itself) via localhost.
If that is the case, please check this thread.
... the Minikube VM can access your host machine's localhost on 192.168.99.1 (127.0.0.1 from Minikube would still be Minikube's own localhost).
Technically, for the Pod, localhost is the Pod itself. The host machine and Minikube are connected via a bridge. You can find the exact IP addresses and routes with ifconfig and route -n on your Minikube host.
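As a quick check, you can probe whether the host's Postgres port is even reachable from inside the Minikube VM. A minimal sketch, assuming the VirtualBox driver (where the host is typically 192.168.99.1) and that nc is available inside the VM:

# open a shell inside the Minikube VM
minikube ssh
# probe the host machine's Postgres port
nc -vz 192.168.99.1 5432

If the probe succeeds, point HASURA_GRAPHQL_DATABASE_URL at that address instead of localhost, and make sure Postgres on the host listens on that interface (listen_addresses in postgresql.conf plus a matching pg_hba.conf entry), not only on 127.0.0.1.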

Related

Access NodePort Service Outside Kubeadm K8S Cluster

I have two Ubuntu VMs created using Oracle VirtualBox on my Windows 11 laptop. I set up a k8s cluster using kubeadm with these two Ubuntu VMs; one of them is the master node and the other is a worker node. Both nodes are running Ubuntu 20.04.3 LTS and docker://20.10.7. I deployed my Spring Boot app into the k8s cluster and exposed a NodePort service for it on port 30000, but I am not really sure how to access the NodePort service from outside my cluster. Could you please help me with this issue?
Following are the IP addresses of my nodes in the k8s cluster - master [192.168.254.94] and worker [192.168.254.95]. I tried the following URLs, but none of them worked:
http://192.168.254.94:30000/swagger-ui.html
http://192.168.254.95:30000/swagger-ui.html
The above URLs throw a message saying the connection was refused.
http://192.168.9.13:30000/swagger-ui.html
http://192.168.9.14:30000/swagger-ui.html
The above URLs say that the site cannot be reached.
Below is the content of my application.yaml, which I used to deploy the Spring Boot app and its corresponding service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dealer-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dealer-engine
  template:
    metadata:
      labels:
        app: dealer-engine
    spec:
      containers:
      - name: dealer-engine
        image: moviepopcorn/dealer_engine:0.0.1
        ports:
        - containerPort: 9090
        env:
        - name: MONGO_URL
          value: mongodb://mongo-service:27017/mazda
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: dealer-engine
spec:
  type: NodePort
  selector:
    app: dealer-engine
  ports:
  - port: 9091
    targetPort: 9090
    nodePort: 30000
  externalIPs:
  - 10.0.0.12
I am a beginner in k8s, so please help me with accessing my NodePort service from outside the k8s cluster.
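Before changing anything, it may help to confirm that the Service actually has endpoints and that the node port answers locally. A minimal sketch using the names from the manifest above (run on one of the cluster nodes):

# confirm the assigned node port and the service selector
kubectl get svc dealer-engine
# empty ENDPOINTS output means the selector matches no ready pods
kubectl get endpoints dealer-engine
# test from the node itself before testing over the LAN
curl http://localhost:30000/swagger-ui.html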
I created a new, simple Spring Boot application that returns "Hello world!!!" when the "/helloWorld" endpoint is invoked. I deployed this app into my k8s cluster using the YAML configuration below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: moviepopcorn/hello_world:0.0.1
        ports:
        - containerPort: 9091
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 9091
    targetPort: 9091
    nodePort: 30001
After successful deployment, I am able to access the helloWorld endpoint at <K8S_MASTER_NODE_IP>:<NODE_PORT> (30001 in this case).
Thank you all for your answers and inputs. Very much appreciated.
Have you installed any CNI plugin, like flannel?
If yes, check your CIDR settings:
kubectl get node k8s-master -o yaml | grep podCIDR:
kubectl get configmap -n kube-system kube-flannel-cfg -o yaml | grep '"Network":'
Basically yes, a CNI plugin is a must; flannel is the simplest one.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
Download the CNI plugins on every server:
# download cni plugin
wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin
Reset your cluster:
kubeadm reset
Init your cluster; the CIDR setting must be the same as in the flannel config (default 10.244.0.0/16):
kubeadm init --pod-network-cidr=10.244.0.0/16
Apply the flannel CNI manifest:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
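Afterwards, you can sanity-check that the network came up; the flannel DaemonSet's namespace varies by release, hence the cluster-wide grep:

# flannel pods should be Running on every node
kubectl get pods -A | grep flannel
# nodes should report Ready once the pod network is installed
kubectl get nodes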

NodePort type service not accessible outside cluster

I am trying to set up a local cluster using minikube on a Windows machine. Following some tutorials on kubernetes.io, I got the following manifest for the cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-nginx-deployment
  labels:
    app: external-nginx
spec:
  selector:
    matchLabels:
      app: external-nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: external-nginx
    spec:
      containers:
      - name: external-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: expose-nginx
  labels:
    service: expose-nginx
spec:
  type: NodePort
  selector:
    app: external-nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32000
If I got things right, this should create pods with an nginx instance and expose them to the host machine on port 32000.
However, when I run curl http://$(minikube ip):32000, I get a connection refused error.
I ran bash inside the service expose-nginx via kubectl exec svc/expose-nginx -it bash, and from there I was able to access the external-nginx pods normally, which led me to believe it is not a problem within the cluster.
I also tried to change the type of the service to LoadBalancer and enable the minikube tunnel, but got the same result.
Is there something I am missing?
By default, minikube almost always uses the docker driver to create the minikube "VM". On the host system it looks like one big docker container, inside which the other Kubernetes components run as containers as well. In tests with this driver, NodePort services often don't work the way they are supposed to: accessing a service exposed via NodePort at minikube_IP:NodePort fails.
Solutions are:
for local testing, use kubectl port-forward to expose the service to the local machine (which the OP did) - see the sketch after this list
use the minikube service command, which exposes the service to the host machine and works in a very similar way to kubectl port-forward
instead of the docker driver, use a proper virtual machine which will get its own IP address (the VirtualBox or hyperv driver, depending on the system). Reference.
(Not related to minikube) Use the built-in Kubernetes feature of Docker Desktop for Windows. I've already tested it; the service type should be LoadBalancer - it will be exposed to the host machine on localhost.
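For reference, the first two options look like this, using the service name from the manifest above:

# option 1: forward the service to the local machine, then curl http://localhost:8080
kubectl port-forward svc/expose-nginx 8080:80
# option 2: let minikube expose the service and print a reachable URL
minikube service expose-nginx --url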

Kubernetes pod can't access other pods exposed by a service

New to Kubernetes.
To build our testing environment, I'm trying to set up a PostgreSQL instance in Kubernetes that's accessible to other pods in the testing cluster.
The pod and service are both syntactically valid and running; both show up in the output of kubectl get [svc/pods]. But when another pod tries to access the database, it times out.
Here's the specification of the pod:
# this defines the postgres server
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  hostname: postgres
  restartPolicy: OnFailure
  containers:
  - name: postgres
    image: postgres:9.6.6
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 5432
      protocol: TCP
And here is the definition of the service:
# this defines a "service" that makes the postgres server publicly visible
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  type: ClusterIP
  ports:
  - port: 5432
    protocol: TCP
I'm certain that something is wrong with at least one of those, but I'm not sufficiently familiar with Kubernetes to know which.
If it's relevant, we're running on Google Kubernetes Engine.
Help appreciated!
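A couple of checks usually narrow this down. A minimal sketch using the names from the manifests above (pg-test is a hypothetical throwaway pod name):

# empty output here means the Service selector matched no pods
kubectl get endpoints postgres
# try connecting from a temporary pod via the service's DNS name
kubectl run pg-test --rm -it --restart=Never --image=postgres:9.6.6 -- psql -h postgres -U postgres

If the endpoints list is populated but the connection still times out, the usual suspects are a NetworkPolicy blocking traffic or the client pod sitting in a different namespace (use postgres.<namespace>.svc.cluster.local in that case).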

Connection refused to GCP LoadBalancer in Kubernetes

When I create a deployment and a service in Kubernetes Engine on GCP, I get connection refused for no apparent reason.
The service creates a load balancer in GCP, and all corresponding firewall rules are in place (traffic to port 80 is allowed from 0.0.0.0/0). The underlying service is running fine: when I kubectl exec into the pod and curl localhost:8000/, I get the correct response.
This deployment setting used to work just fine for other images, but yesterday and today I keep getting
curl: (7) Failed to connect to 35.x.x.x port 80: Connection refused
What could be the issue? I tried deleting and recreating the service multiple times, with no luck.
kind: Service
apiVersion: v1
metadata:
  name: my-app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: my-app
        image: gcr.io/myproject/my-app:0.0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
This turned out to be a dumb mistake on my part. The gunicorn server was binding to 127.0.0.1 instead of 0.0.0.0, so it wasn't accessible from outside the pod, but worked when I exec-ed into the pod.
The fix in my case was changing the CMD in the Dockerfile to
CMD [ "gunicorn", "server:app", "-b", "0.0.0.0:8000", "-w", "3" ]
then rebuilding the image and updating the deployment.
Is the service binding to your pod? What does kubectl describe svc my-app say?
Make sure it forwards through to your pod on the correct port. You can also try, assuming you're using an instance on GCP, to curl the IP and port of the pod directly and make sure it responds as it should.
I.e., kubectl get pods -o wide will tell you the IP of the pod;
does curl ipofpod:8000 work?
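Spelled out, those checks look like this (my-app as in the manifests above; <POD_IP> comes from the get pods output):

# the Endpoints line should list the pod IP and port 8000
kubectl describe svc my-app
# note the pod's IP address
kubectl get pods -o wide
# run from a node or another pod in the cluster
curl http://<POD_IP>:8000/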

Cannot access a Kubernetes postgresql service from the outside

I'm using Docker 1.10.1 and Kubernetes 1.0.1 on a local bare-metal computer with Ubuntu 14.04, and I'm still new to this.
That computer is the Kubernetes master as well as the node I'm using to make tests (I'm currently making a proof of concept).
I created a couple of containers (1 nginx and 1 postgresql 9.2) to play with them.
I created the service and pod definitions for the postgresql container as follows:
----- app-postgres-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: AppPostgres
  name: app-postgres
spec:
  ports:
  - port: 5432
  selector:
    app: AppPostgresPod
  type: NodePort
  clusterIP: 192.168.3.105
----- app-postgres-pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-postgres-pod
spec:
  replicas: 1
  selector:
    app: AppPostgresPod
  template:
    metadata:
      name: app-postgres-pod
      labels:
        app: AppPostgresPod
    spec:
      containers:
      - name: app-postgres-pod
        image: localhost:5000/app_postgres
        ports:
        - containerPort: 5432
I also installed HAProxy in order to redirect requests arriving at ports 80 and 5432 to their corresponding Kubernetes services.
The following is the HAProxy definition of the frontend and backend for the postgresql redirection:
frontend postgres
    bind *:5432
    mode tcp
    option tcplog
    default_backend postgres-kubernetes

backend postgres-kubernetes
    mode tcp
    balance roundrobin
    option pgsql-check
    server postgres-01 192.168.3.105:5432 check
After I start the Kubernetes cluster, services, and pods, hitting the nginx site from the outside works like a charm. But when I try to connect to the postgresql service from the outside, I get the following error:
Error connecting to the server: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
And if I look at the log, there are lots of these messages:
LOG: incomplete startup packet
LOG: incomplete startup packet
...
This is the command I use to try to connect from the outside (from another computer on the same LAN):
psql -h 192.168.0.100 -p 5432 -U myuser mydb
I also tried using pgAdmin and the same error was displayed.
Note: if I try to connect to the postgres service from the docker/kubernetes host itself, I can connect without any problems:
psql -h 192.168.3.105 -p 5432 -U myuser mydb
So it seems like the Kubernetes portion is working fine.
What might be wrong?
EDIT: including the details of the LAN computers involved in the proof of concept, as well as the Kubernetes definitions for the nginx stuff and the entries in the HAProxy configuration.
So the issue happens when I try to connect from my laptop to the postgres service on the docker/kubernetes box (I thought HAProxy would redirect a request arriving at the docker/kubernetes box on port 5432 to the Kubernetes postgres service).
----- LAN Network
My Laptop: 192.168.0.5
HAProxy/Docker/Kubernetes box: 192.168.0.100
  |
  +---> Kubernetes Nginx Service: 192.168.3.104
  |       |
  |       +---> Nginx Pod: 172.16.4.5
  |
  +---> Kubernetes Postgres Service: 192.168.3.105
          |
          +---> Postgres Pod: 172.16.4.9
----- app-nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: AppNginx
  name: app-nginx
spec:
  ports:
  - port: 80
  selector:
    app: AppNginxPod
  type: NodePort
  clusterIP: 192.168.3.104
----- app-nginx-pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-nginx-pod
spec:
  replicas: 2
  selector:
    app: AppNginxPod
  template:
    metadata:
      name: app-nginx-pod
      labels:
        app: AppNginxPod
    spec:
      containers:
      - name: app-nginx-pod
        image: localhost:5000/app_nginx
        ports:
        - containerPort: 80
----- haproxy entries in /etc/haproxy/haproxy.cfg
frontend nginx
    bind *:80
    mode http
    default_backend nginx-kubernetes

backend nginx-kubernetes
    mode http
    balance roundrobin
    option forwardfor
    option http-server-close
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server nginx-01 192.168.3.104:80 check
SOLUTION: I had to specify the user parameter on the pgsql-check line of the HAProxy configuration (shown below). After that, I was able to connect without any problems. I went back and removed the user parameter just to verify that the error came back, and it did. So the problem was definitely caused by the absence of the user parameter on the pgsql-check line.
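For reference, the corrected backend looks like this; HAProxy's pgsql-check directive accepts an optional user argument, and myuser matches the psql commands above. The repeated "incomplete startup packet" log entries were most likely those malformed health checks:

backend postgres-kubernetes
    mode tcp
    balance roundrobin
    # send a proper PostgreSQL startup packet for the health check
    option pgsql-check user myuser
    server postgres-01 192.168.3.105:5432 check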