I'm using Docker 1.10.1 and Kubernetes 1.0.1 on a local bare-metal computer with Ubuntu 14.04, and I'm still new to this.
That computer is the Kubernetes master as well as the node I'm using to make tests (I'm currently making a proof of concept).
I created a couple of containers (1 nginx and 1 postgresql 9.2) to play with them.
I created the service and pod definitions for the postgresql container as follows:
----- app-postgres-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: AppPostgres
  name: app-postgres
spec:
  ports:
  - port: 5432
  selector:
    app: AppPostgresPod
  type: NodePort
  clusterIP: 192.168.3.105
----- app-postgres-pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-postgres-pod
spec:
  replicas: 1
  selector:
    app: AppPostgresPod
  template:
    metadata:
      name: app-postgres-pod
      labels:
        app: AppPostgresPod
    spec:
      containers:
      - name: app-postgres-pod
        image: localhost:5000/app_postgres
        ports:
        - containerPort: 5432
I also installed HAProxy in order to redirect the requests to ports 80 and 5432 to their corresponding Kubernetes service.
The following is the HAProxy definition of the frontend and backend for the postgresql redirection:
frontend postgres
    bind *:5432
    mode tcp
    option tcplog
    default_backend postgres-kubernetes

backend postgres-kubernetes
    mode tcp
    balance roundrobin
    option pgsql-check
    server postgres-01 192.168.3.105:5432 check
After I start the Kubernetes cluster, services, and pods, I first tried hitting the nginx site from the outside and it worked like a charm. But when I try to connect to the PostgreSQL service from the outside, I get the following error:
Error connecting to the server: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
And if I look at the log, it is full of these messages:
LOG: incomplete startup packet
LOG: incomplete startup packet
...
This is the command I use to try to connect from the outside (from another computer on the same LAN):
psql -h 192.168.0.100 -p 5432 -U myuser mydb
I also tried using pgAdmin and the same error was displayed.
Note: if I try to connect to the postgres service from the docker/kubernetes host itself, I can connect without any problems:
psql -h 192.168.3.105 -p 5432 -U myuser mydb
So it seems like the Kubernetes portion is working fine.
What might be wrong?
EDIT: adding the details of the LAN computers involved in the proof of concept, as well as the Kubernetes definitions for the nginx setup and the corresponding HAProxy configuration entries.
The issue happens when I try to connect from my laptop to the Postgres service on the Docker/Kubernetes box (I expected HAProxy to redirect a request that arrives at the Docker/Kubernetes box on port 5432 to the Kubernetes Postgres service).
----- LAN Network
My Laptop: 192.168.0.5
HAProxy/Docker/Kubernetes box: 192.168.0.100
  |
  ---> Kubernetes Nginx Service: 192.168.3.104
         |
         ---> Nginx Pod: 172.16.4.5
  |
  ---> Kubernetes Postgres Service: 192.168.3.105
         |
         ---> Postgres Pod: 172.16.4.9
----- app-nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: AppNginx
  name: app-nginx
spec:
  ports:
  - port: 80
  selector:
    app: AppNginxPod
  type: NodePort
  clusterIP: 192.168.3.104
----- app-nginx-pod.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-nginx-pod
spec:
  replicas: 2
  selector:
    app: AppNginxPod
  template:
    metadata:
      name: app-nginx-pod
      labels:
        app: AppNginxPod
    spec:
      containers:
      - name: app-nginx-pod
        image: localhost:5000/app_nginx
        ports:
        - containerPort: 80
----- haproxy entries on /etc/haproxy/haproxy.cfg
frontend nginx
    bind *:80
    mode http
    default_backend nginx-kubernetes

backend nginx-kubernetes
    mode http
    balance roundrobin
    option forwardfor
    option http-server-close
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server nginx-01 192.168.3.104:80 check
SOLUTION: I had to specify the user parameter on the pgsql-check line of the HAProxy configuration. After that, I was able to connect without any problems. I went back and removed the user parameter just to verify that the error would come back, and it did. So the problem was definitely caused by the absence of the user parameter on the pgsql-check line.
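For reference, a sketch of the corrected backend; haproxycheck is just a placeholder for whichever database role you let HAProxy use for its health check:

backend postgres-kubernetes
    mode tcp
    balance roundrobin
    # with a user, the check performs a real PostgreSQL login instead of leaving an incomplete startup packet
    option pgsql-check user haproxycheck
    server postgres-01 192.168.3.105:5432 check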
Related
I have local and dockerized apps which are working excellently on localhost: a Java backend at 8080, Angular at 4200, ActiveMQ at 8161, and Postgres on 5432.
Now I am also trying to kubernetize the apps to make them work on localhost.
As far as I know, Kubernetes assigns random IPs inside the cluster. What should I do to make the apps reachable on localhost so they can listen to each other? Is there any way to make them automatically start on those localhost ports instead of doing port forwarding for each service?
Every service and deployment has a similar structure:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image:
        ports:
        - containerPort: 8080
I tried port-forwarding and it works, but it requires a lot of manual work (opening a few new PowerShell windows and then doing the port forwarding manually in each).
In the Kubernetes ecosystem, apps talk to each other through their Services.
If they are in the same namespace, they can reach the Service by its name directly; if not, they need to specify the full name, which includes the namespace:
my-svc.my-namespace.svc.cluster-domain.example
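For example, with the backend Service defined above deployed in the default namespace, another pod could reach it with something like this (assuming the default cluster.local cluster domain):

curl http://backend.default.svc.cluster.local:8080/

or simply http://backend:8080/ from a pod in the same namespace.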
Never mind, I found a way to do it automatically with port forwarding by simply running one script.
I wrote a .bat script with these steps (a sketch follows below):
run kubectl against all the deployment files
run kubectl against all the service files
wait 15 seconds to give the pods time to change state from Pending to Running
do the port forwarding for each service, each one in a new PowerShell window that does not exit
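A minimal sketch of what such a .bat script can look like; the folder layout and the service names other than backend are assumptions, so adjust them to your setup:

REM apply every deployment and service manifest (folder names are assumed)
kubectl apply -f deployments
kubectl apply -f services
REM give the pods ~15 seconds to go from Pending to Running
timeout /t 15 /nobreak
REM one port-forward per service, each in its own PowerShell window that stays open
start powershell -NoExit -Command "kubectl port-forward service/backend 8080:8080"
start powershell -NoExit -Command "kubectl port-forward service/frontend 4200:4200"
start powershell -NoExit -Command "kubectl port-forward service/activemq 8161:8161"
start powershell -NoExit -Command "kubectl port-forward service/postgres 5432:5432"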
I have deployed a Cloud SQL proxy and a service attached to it in Kubernetes. Both the proxy and the service are listening on port 5432. Other pods are not able to establish a connection to the Cloud SQL database through the proxy, but when I port forward to the pod of the proxy or to the service, I am able to connect to the database without any issue from my localhost.
The Kubernetes cluster and the Cloud SQL instance are both private. I have checked the service and deployment labels, the service and pod network configuration, the firewall rules, and the overall network setup, but I am still unable to resolve the issue.
All pods are in the same namespace and the proxy logs show no errors. When I run nc -v $service_name $port from other pods, it yields no error and shows no sign of malfunctioning, but it doesn't even print that the connection was successful. The problem is that these pods are not able to establish a TCP connection to the service.
Here is an example of an error message:
Caused by: com.zaxxer.hikari.pool.HikariPool$PoolInitializationException: Failed to initialize pool: Connection to airbyte-db-svc:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
If needed, here is the manifest that deploys the service and the proxy:
apiVersion: v1
kind: Service
metadata:
  name: airbyte-db-svc
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    airbyte: db
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airbyte-db-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      airbyte: db
  template:
    metadata:
      labels:
        airbyte: db
    spec:
      serviceAccountName: airbyte-admin
      containers:
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:latest
        command:
        - "/cloud_sql_proxy"
        - "-enable_iam_login"
        - "-ip_address_types=PRIVATE"
        - "-instances=PROJECT_ID_HERE:REGION_HERE:INSTANCE_CONNECTION_HERE=tcp:5432"
        ports:
        - containerPort: 5432
        securityContext:
          runAsNonRoot: true
The serviceAccount airbyte-admin has the proper 'Cloud SQL Client' role and Workload Identity configured in the GCP project.
What could be the problem and how can I fix it?
CloudSQL Proxy listens on localhost by default. If you want to expose it via a service, you'll want to add --address 0.0.0.0 to your cloud_sql_proxy command options.
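With the legacy gce-proxy image used in the manifest above, the equivalent appears to be binding the TCP listener to 0.0.0.0 inside the -instances flag; a sketch of just the container args:

        command:
        - "/cloud_sql_proxy"
        - "-enable_iam_login"
        - "-ip_address_types=PRIVATE"
        # bind to all interfaces so traffic arriving via the Service can reach the proxy (the default is 127.0.0.1)
        - "-instances=PROJECT_ID_HERE:REGION_HERE:INSTANCE_CONNECTION_HERE=tcp:0.0.0.0:5432"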
I'm following this tutorial to connect a hasura kubernetes pod to my local postgres server.
When I create the deployment, the pod's container fails to connect to postgres (CrashLoopBackOff and keeps retrying), but doesn't give any reason why. Here are the logs:
{"type":"pg-client","timestamp":"2020-05-03T06:22:21.648+0000","level":"warn","detail":{"message":"postgres connection failed, retrying(0)."}}
My deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hasura
    hasuraService: custom
  name: hasura
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura
    spec:
      containers:
      - image: hasura/graphql-engine:v1.2.0
        imagePullPolicy: IfNotPresent
        name: hasura
        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://USER:@localhost:5432/my_db
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
I'm using postgres://USER:#localhost:5432/MY_DB as the postgres url - is "localhost" the correct address here?
I verified that the above postgres url works when I try (no password):
> psql postgres://USER:#localhost:5432/my_db
psql (12.2)
Type "help" for help.
> my_db=#
How else can I troubleshoot it? The logs aren't very helpful...
If I understood you correctly, the issue is that the Pod (from "inside" Minikube) cannot access Postgres installed on the host machine (the one that runs Minikube itself) via localhost.
If that is the case, please check this thread.
... the Minikube VM can access your host machine's localhost on 192.168.99.1 (127.0.0.1 from inside Minikube would still be Minikube's own localhost).
Technically, for the Pod, localhost is the Pod itself. The host machine and Minikube are connected via a bridge. You can find out the exact IP addresses and routes with ifconfig and route -n on your Minikube host.
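Applied to the Deployment from the question, that would mean pointing the database URL at the host's bridge address instead of localhost; a sketch, assuming the default Minikube/VirtualBox network:

        env:
        - name: HASURA_GRAPHQL_DATABASE_URL
          # 192.168.99.1 is the host machine as seen from inside the default Minikube VM
          value: postgres://USER:@192.168.99.1:5432/my_db

Postgres on the host also has to listen on that interface (listen_addresses) and allow the Minikube subnet in pg_hba.conf.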
I'm trying to set up a ZooKeeper cluster (3 replicas), but the hosts can't connect to one another and I really don't know where the problem is.
It's creating 3 pods successfully with names like
zookeeper-0.zookeeper-internal.default.svc.cluster.local
zookeeper-1.zookeeper-internal.default.svc.cluster.local
zookeeper-2.zookeeper-internal.default.svc.cluster.local
but when I connect to one of them and try to reach the open port of another, it returns an "Unknown host" message:
zookeeper#zookeeper-0:/opt$ nc -z zookeeper-1.zookeeper-internal.default.svc.cluster.local 2181
zookeeper-1.zookeeper-internal.default.svc.cluster.local: forward host lookup failed: Unknown host
My YAML file is here
I really appreciate any help.
Did you create the headless Service you referenced in your YAML (serviceName: zookeeper-internal)?
You need to create this Service (updating the port as needed) to be able to resolve zookeeper-0.zookeeper-internal.default.svc.cluster.local:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-dev
    name: zookeeper
  name: zookeeper-internal
spec:
  ports:
  - name: zookeeper-port
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: zookeeper
  clusterIP: None
  type: ClusterIP
The Service is required, but it does not expose anything outside the cluster; it is only reachable within the cluster, where any pod can access it. So you cannot access it from your browser unless you expose it via NodePort / LoadBalancer / Ingress!
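Once the headless Service is in place, a quick way to confirm the per-pod DNS records resolve is something like this from one of the pods (assuming the image ships nslookup; the question shows nc is already available):

kubectl exec zookeeper-0 -- nslookup zookeeper-1.zookeeper-internal.default.svc.cluster.local
kubectl exec zookeeper-0 -- nc -z zookeeper-1.zookeeper-internal.default.svc.cluster.local 2181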
When I create a Deployment and a Service in Kubernetes Engine on GCP, I get connection refused for no apparent reason.
The service creates a Load Balancer in GCP and all corresponding firewall rules are in place (allows traffic to port 80 from 0.0.0.0/0). The underlying service is running fine, when I kubectl exec into the pod and curl localhost:8000/ I get the correct response.
This deployment setting used to work just fine for other images, but yesterday and today I keep getting
curl: (7) Failed to connect to 35.x.x.x port 80: Connection refused
What could be the issue? I tried deleting and recreating the service multiple times, with no luck.
kind: Service
apiVersion: v1
metadata:
  name: my-app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: my-app
        image: gcr.io/myproject/my-app:0.0.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
This turned out to be a dumb mistake on my part. The gunicorn server was binding to 127.0.0.1 instead of 0.0.0.0, so it wasn't accessible from outside of the pod, but it worked when I exec-ed into the pod.
The fix in my case was changing the entrypoint of the Dockerfile to
CMD [ "gunicorn", "server:app", "-b", "0.0.0.0:8000", "-w", "3" ]
then rebuilding the image and updating the deployment.
Is the service binding to your pod? What does kubectl describe svc my-app say?
Make sure it forwards through to your pod on the correct port. You can also try, assuming you're using an instance on GCP, to curl the IP and port of the pod and make sure it's responding as it should.
That is, kubectl get pods -o wide will tell you the IP of the pod;
does curl ipofpod:8000 work?