I have a problem. In my cluster I have a Ruby on Rails application that I want to connect to a Postgres database on the host machine (not containerized). The database listens on the following port when I run:
sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1109/postgres
Then I created a Service that maps DB_HOST to the local machine, like this:
apiVersion: v1
kind: Service
metadata:
  name: external-postgres-svc
  namespace: myapp-nm
spec:
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
I also added an endpoint:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres-svc
  namespace: myapp-nm
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - port: 5432
And in my configmap I have the following config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-nm
data:
  db_host: "external-postgres-svc.myapp-nm.svc"
  db_port: "5432"
  db_username: "myuser"
  db_password: "mypass"
But when all resources are created and the migration Job runs, it never completes. After about 2-3 minutes it crashes with the error:
connection to server at port 5432 failed: Operation timed out
I have added:
listen_addresses = '*'
to /etc/postgresql/14/main/postgresql.conf, and I added:
host all all 0.0.0.0/0 md5
to /etc/postgresql/14/main/pg_hba.conf, so I think it should accept incoming connections. I also ran
sudo ufw allow 5432/TCP
to open the port in the firewall on my machine, and I checked that the user is correct and it is, so what can the problem be?
I can connect to the database if I am not in the cluster using the
ip
port
username
password
What am I doing wrong?
The error "Operation timed out" indicates that the server failed to produce a response within the allowed time period.
Check the solutions below:
1) Check whether the /etc directory is owned by another user; it needs to be owned by root, and port 5432 must be open. Add the hostname and IP details of the database host to /etc/hosts. In the case of a multi-node cluster, /etc/hosts on each machine has to be updated with the details of all cluster nodes.
2) Check again whether the route to the database server is being blocked by a firewall. Make sure there is a firewall rule that allows the Ruby on Rails application to connect.
3) If you are not connecting locally only, check the connection settings in postgresql.conf: the line #listen_addresses = 'localhost' should be uncommented and changed to listen_addresses = '*'.
Save the file and restart the service (see the quick connectivity check after this list).
4) Check for an SSH tunnel issue (this applies if you connect through an SSH tunnel, e.g. a Looker-managed tunnel):
Verify that the attributes of the ssh process match what was provided. Look at the output of ps aux | grep ssh; the relevant part is:
-L number_1:string_or_number_1:number_2 ... KnownHostsFile=/dev/null string_or_number_2
number_1: Looker-side port number
string_or_number_1: database host name or IP address
number_2: database-side port number
string_or_number_2: tunnel server host or IP address
How to manually set the port (not recommended, since Looker may set the port back again; it is better to open the needed ports on the database):
Create a tunnel through the Looker database connections UI.
Update the tunnel via the API (PATCH /api/4.0/connections/:connection_name) after it has been created.
Set the desired local_host_port.
Make sure db_connection and the following fields are set, using the API (GET /api/4.0/ssh_tunnel/:ssh_tunnel_id) or by checking go-ssh-sidecar:
tunnel_id: the id of the tunnel
port: the new local_host_port
host: localhost
Also check for an improperly migrated tunnel. The following types of issues may be related to SSH tunnels on a newly migrated Kubernetes cluster:
1. The tunnel is not migrated, or tunnels are only partially migrated.
2. The tunnel is migrated with incorrect information.
3. IPs have changed and are not documented or announced to customers.
4. Public keys have changed (specific to migrated tunnels).
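As a quick sanity check for points 2 and 3, here is a minimal sketch of how to verify the Postgres listener on the host and the Service/Endpoints wiring from inside the cluster (the test pod name and image are just examples):
# On the database host: restart Postgres after editing postgresql.conf / pg_hba.conf,
# then confirm it listens on all interfaces and not only on 127.0.0.1.
sudo systemctl restart postgresql
sudo ss -tlnp | grep 5432        # expect 0.0.0.0:5432 or *:5432, not 127.0.0.1:5432

# From inside the cluster: confirm the Endpoints object points at the host IP,
# then test the connection through the Service with a throwaway pod.
kubectl -n myapp-nm get endpoints external-postgres-svc
kubectl -n myapp-nm run pgtest --rm -it --image=postgres:14 -- \
  psql -h external-postgres-svc.myapp-nm.svc -p 5432 -U myuser -d postgres -c 'SELECT 1'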
Related
I am trying out a possible Kubernetes scenario on a local minikube cluster: accessing a service that is exposed with an Ingress in one cluster from another cluster, using an ExternalName service. I understand that with an Ingress the service is already accessible within its own cluster. Since I am trying this out locally with minikube, I am unable to run two clusters simultaneously; I just wanted to verify whether it is possible to access an Ingress-exposed service using an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal from within a running pod, the error I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 3000
      targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
    - host: k8s-yaml-hello.info
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: k8s-yaml-hello
                port:
                  number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
    - name: ''
      appProtocol: http
      protocol: TCP
      port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
Since you are getting the error curl: (7) Failed to connect:
The above error message means that no web server is running on the specified IP and the specified (or implied) port.
Check /etc/hosts (e.g. with nano /etc/hosts) to see whether the IP is pointing to the correct domain; if it is not, provide the correct IP.
Refer to this SO answer for more information.
In ingress.yaml use port 80, and in service.yaml the port should also be 80. The service port and target port should be different; as per your YAML they are the same. Change it to 80 and give it a try; if you get any errors, post them here.
The problem is that minikube tunnel by default binds to the localhost address 127.0.0.1. Every node, machine, VM, container, etc. has its own (and the same) localhost address; it exists so you can reach local services without having to know the IP address of the network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (just to "myself").
To make it work the way you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this, your service should be reachable from within the container. Please check first with the IP address:
curl http://192.168.1.10
Then make sure that name resolution via /etc/hosts works in your container, using dig, nslookup, getent hosts, or whatever similar tool is available in your container.
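For example, from inside the pod (the pod name below is a placeholder; any pod with a shell and curl works):
# Resolve the name inside the pod; once the tunnel is bound to the host's
# LAN address and the name points there, this should not return 127.0.0.1.
kubectl exec -it <your-pod> -- getent hosts k8s-yaml-hello.info

# Then test the HTTP path by host name from inside the pod.
kubectl exec -it <your-pod> -- curl -v http://k8s-yaml-hello.info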
I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local   # Cluster or Local
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully (curl 10.0.0.20:30080) and I get "Welcome to nginx!...". However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and the HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
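A sketch of how to verify the change and the related SELinux state on the HAProxy host (the haproxy_connect_any boolean is a standard, broader alternative if you prefer it over adding individual ports):
# Confirm 30080 is now part of the http_port_t port type.
sudo semanage port -l | grep http_port_t

# Check recent SELinux denials to confirm haproxy was the blocked process.
sudo ausearch -m avc -ts recent | grep haproxy

# Broader (less strict) alternative: allow haproxy to connect to any port.
sudo setsebool -P haproxy_connect_any 1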
My deployed Spring Boot application is trying to connect to an external SQL Server database from a Kubernetes pod, but every time it fails with the error:
Failed to initialize pool: The TCP/IP connection to the host <>, port 1443 has failed.
Error: "Connection timed out: no further information.
Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.
I have tried to exec into the pod and can successfully ping the DB server without any issues.
Below are the solutions I have tried:
Created a Service and Endpoints, provided the DB IP in the configuration file, and tried to bring up the application in the pod.
Tried using the internal IP from the Endpoints instead of the DB IP in the configuration, to see whether the internal IP is resolved to the DB IP.
But both of these cases gave the same result. Below is the YAML I am using to create the Service and Endpoints.
---
apiVersion: v1
kind: Service
metadata:
  name: mssql
  namespace: cattle
spec:
  type: ClusterIP
  ports:
    - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql
  namespace: cattle
subsets:
  - addresses:
      - ip: <<DB IP>>
    ports:
      - port: 1433
Please let me know if I am doing something wrong or missing something in this setup.
Additional information on the K8s setup:
It is a clustered master with an external etcd cluster topology.
The OS on the nodes is CentOS.
I am able to ping the server from all nodes and from the pods that are created.
For this scenario an ExternalName service is very useful: it redirects traffic to the external IP without you having to define an Endpoints object.
kind: "Service"
apiVersion: "v1"
metadata:
namespace: "your-namespace"
name: "ftp"
spec:
type: ExternalName
externalName: your-ip
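A quick way to check what the cluster DNS returns for such a service (the pod name and image below are just examples); for an ExternalName service it should be a CNAME pointing at the configured external name:
# Run a throwaway pod and resolve the service name through cluster DNS.
kubectl -n your-namespace run dnstest --rm -it --image=busybox:1.36 -- \
  nslookup ftp.your-namespace.svc.cluster.local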
The issue was resolved by updating the deployment YAML with the IP address. Since all the servers were in the same subnet, I did not need to create a Service or Endpoints to access the DB. Thank you for all the input on the post.
I'm trying to set up a Cloud SQL Proxy Docker image for PostgreSQL, as mentioned here.
I can get my app to connect to the proxy Docker image, but the proxy times out. I suspect it's my credentials or the port, so how do I debug this and find out whether it works?
This is what I have on my project
kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=my-account-credentials.json
My deploy spec snippet:
spec:
  containers:
    - name: mara ...
    - name: cloudsql-proxy
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/cloud_sql_proxy",
                "-instances=<MY INSTANCE NAME>=tcp:5432",
                "-credential_file=/secrets/cloudsql/credentials.json"]
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
  volumes:
    - name: cloudsql-instance-credentials
      secret:
        secretName: cloudsql-instance-credentials
The logs of my cloudsql-proxy show a timeout:
2019/05/13 15:08:25 using credential file for authentication; email=646092572393-compute@developer.gserviceaccount.com
2019/05/13 15:08:25 Listening on 127.0.0.1:5432 for <MY INSTANCE NAME>
2019/05/13 15:08:25 Ready for new connections
2019/05/13 15:10:48 New connection for "<MY INSTANCE NAME>"
2019/05/13 15:10:58 couldn't connect to <MY INSTANCE NAME>: dial tcp <MY PRIVATE IP>:3307: getsockopt: connection timed out
Questions:
I specify 5432 as my port, but as you can see in the logs above, it's hitting 3307. Is that normal, and if not, how do I specify 5432?
How do I check whether it is a problem with my credentials? My credentials file is from my service account 123-compute@developer.gserviceaccount.com,
and the service account shown when I go to my Cloud SQL console is p123-<somenumber>@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem to be the same. Does that make a difference?
If I make the Cloud SQL instance available on a public IP, it works.
I specify 5432 as my port, but as you can see in the logs above, it's hitting 3307
The proxy listens locally on the port you specified (in this case 5432), and connects to your Cloud SQL instance via port 3307. This is expected and normal.
How do I check if it is a problem with my credentials?
The proxy returns an authorization error if the Cloud SQL instance doesn't exist, or if the service account doesn't have access. The connection timeout error means it failed to reach the Cloud SQL instance.
My credentials file is from my service account 123-compute@developer.gserviceaccount.com and the service account shown when I go to my Cloud SQL console is p123-<somenumber>@gcp-sa-cloud-sql.iam.gserviceaccount.com. They don't seem the same?
One is just the name of the file, the other is the name of the service account itself. The name of the file doesn't have to match the name of the service account. You can check the name and IAM roles of a service account on the Service Account page.
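If you prefer the CLI, a sketch of the equivalent checks (the project ID below is a placeholder):
# List the service accounts in the project (names vs. email addresses).
gcloud iam service-accounts list --project my-project

# Show which IAM roles are bound to the account the credentials file belongs to.
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:123-compute@developer.gserviceaccount.com" \
  --format="table(bindings.role)"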
2019/05/13 15:10:58 couldn't connect to <MY INSTANCE NAME>: dial tcp <MY PRIVATE IP>:3307: getsockopt: connection timed out
This error means that the proxy failed to establish a network connection to the instance (usually because a path from the current location doesn't exist). There are two common causes for this:
First, make sure there isn't a firewall or something blocking outbound connections on port 3307.
Second, since you are using Private IP, you need to make sure the resource you are running the proxy on meets the networking requirements.
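One way to sanity-check those networking requirements (cluster and instance names below are placeholders) is to compare the VPC network of the workload with the network the instance's private IP is attached to:
# VPC network used by the GKE cluster nodes.
gcloud container clusters describe my-cluster --region us-central1 \
  --format='value(network)'

# VPC network peered with the Cloud SQL instance's private IP.
gcloud sql instances describe my-instance \
  --format='value(settings.ipConfiguration.privateNetwork)'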
The proxy connects out on port 3307. This is mentioned in the documentation:
Port 3307 is used by the Cloud SQL Auth proxy to connect to the Cloud SQL Auth proxy server. -- https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#troubleshooting
You may need to create a firewall rule like the following (a gcloud sketch follows the list):
Direction: Egress
Action on match: Allow
Destination filters : IP ranges 0.0.0.0/0
Protocols and ports : tcp:3307 & tcp:5432
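A sketch of that rule with gcloud (the rule name is a placeholder; narrow the destination range if you can):
# Egress rule allowing outbound connections to Cloud SQL on 3307 (and 5432).
gcloud compute firewall-rules create allow-cloudsql-egress \
  --direction=EGRESS \
  --action=ALLOW \
  --rules=tcp:3307,tcp:5432 \
  --destination-ranges=0.0.0.0/0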
Using a standard Istio deployment in a Kubernetes cluster, I am trying to add an initContainer to my pod deployment which does additional database setup.
Using the cluster IP of the database doesn't work either. But I can connect to the database from my computer using port-forwarding.
This container is fairly simple:
spec:
  initContainers:
    - name: create-database
      image: tmaier/postgresql-client
      args:
        - sh
        - -c
        - |
          psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"
          psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE ROLE user WITH LOGIN PASSWORD 'password';"
          psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "GRANT ALL PRIVILEGES ON DATABASE fusionauth TO user; ALTER DATABASE fusionauth OWNER TO user;"
As far as I can see, this Kubernetes initContainer runs before the "istio-init" container. Is that the reason why it cannot resolve db-host:5432 to the IP of the pod running the Postgres service?
The error message in the init-container is:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
The same command from a fully initialized pod works just fine.
You can't access services inside the mesh without the Envoy sidecar, and your init container runs alone with no sidecar. In order to reach the DB service from an init container, you need to expose the DB with a ClusterIP service that has a different name from the Istio VirtualService of that DB.
You could create a service named db-direct like:
apiVersion: v1
kind: Service
metadata:
  name: db-direct
  labels:
    app: db
spec:
  type: ClusterIP
  selector:
    app: db
  ports:
    - name: db
      port: 5432
      protocol: TCP
      targetPort: 5432
And in your init container use db-direct:5432.
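For example, the first psql call in the initContainer from the question would then look like this (same placeholder credentials and database as in the question, only the host name changes):
# Same command as in the question's initContainer, pointed at the plain
# ClusterIP service instead of the mesh-routed db-host.
psql "postgresql://$DB_USER:$DB_PASSWORD@db-direct:5432" \
     -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"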