Connection to Google Cloud SQL via proxy works in all scenarios except via socket in Docker container - google-cloud-sql

Hopefully I'm doing something wrong. I've read all the documentation and scoured the forums, but I can't seem to get to the bottom of an issue I'm experiencing. I'm on OSX, by the way.
Things that are working:
Connect to Cloud SQL from the local OS using the proxy via either TCP or socket
Connect to Cloud SQL from the local OS using the proxy in a container via TCP
Connect to Cloud SQL from GKE using the proxy in the same pod via TCP
Things that are not working:
Connect to Cloud SQL from the local OS using the proxy in a container via socket
Connect to Cloud SQL from GKE using the proxy in the same pod via socket
I suspect both of these problems are actually the same problem. I'm using this command to run the proxy inside of the container:
docker run -v [PATH]:/cloudsql \
gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy -dir=/cloudsql \
-instances=[INSTANCE_CONNECTION_NAME] -credential_file=/cloudsql/[FILE].json
And the associated socket is being generated within the directory. However, when I attempt to connect, I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/cloudsql/node-sql:us-central1:nodedb' (61)
The proxy doesn't log a new line when I try to connect, which makes me think it isn't receiving the request; it simply says "Ready for new connections" and waits.
Any idea what's going wrong, or how I could troubleshoot this further?

For "Connect to cloud SQL from GKE using proxy in the same pod via socket" can you please follow the tutorial at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine? We have a working WordPress example there that has the cloudsql-proxy as a sidecar container (i.e. in the same Pod, but over TCP).
I don't think you can do "in the same pod via socket" unless you’re running multiple processes in a single container (which you shouldn’t as a best practice). If you do a sidecar container, you can use TCP, so you don’t need a unix socket (moreover, I'm not sure how you’d share files between containers of a Pod).
Also, docker run -v /local.sock:/remote.sock (I think) will create a file/directory locally as /local.sock and make it available inside the container as /remote.sock. This might not work because the Docker engine doesn't know that /local.sock is meant to be a Unix socket, so it creates a regular file.
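If the goal is just to reach Cloud SQL from the local OS through the containerized proxy, the TCP form of the -instances flag sidesteps the socket-sharing problem entirely. A sketch, with the same placeholders as the original command:

docker run -p 127.0.0.1:3306:3306 -v [PATH]:/cloudsql \
gcr.io/cloudsql-docker/gce-proxy /cloud_sql_proxy -dir=/cloudsql \
-instances=[INSTANCE_CONNECTION_NAME]=tcp:0.0.0.0:3306 \
-credential_file=/cloudsql/[FILE].json

mysql -h 127.0.0.1 -P 3306 -u [USER] -p

The =tcp:0.0.0.0:3306 suffix tells the proxy to listen on a TCP port on all container interfaces instead of creating a Unix socket, and -p publishes that port to the host.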

Related

GCP Can't Connect to MongoDB

This is my first attempt at deploying a Node.js application on a Google VM instance while connecting to MongoDB.
In MongoDB, I have whitelisted my IP address and the VM instance's IP address. When I start my server using Google Cloud Shell, I receive the following error:
op.cb(new error_1.MongoNetworkError(`connection ${this.id} to ${this.address} closed`));
^
MongoNetworkError: connection 1 to 34.71.95.215:27017 closed
I'm connecting on port 8080. The external IP is listed on my GCP instance page and when I ping it, it is up. IP: 34.68.254.120
When I whitelist 0.0.0.0/0 in MongoDB, the code runs successfully, and I can preview my app through GCP.
I created a new instance from scratch, and it also crashes with the same error.
ETA: In looking at the source code around the error message at:
...\node_modules\mongoose\node_modules\mongodb\lib\cmap\connection.js
it looks like a closed connection. The error message above spits out the IP address of the Iowa Google data center where my VM is housed.
I don't know what this means, but if you do, please let me know.
ETA2: I have 2 problems, and they may be connected. The first is that my VM server cannot connect to MongoDB. This should be simple -- whitelist the external IP address of my VM server. It does not work (I have to open MongoDB to 0.0.0.0/0 for it to connect).
The second is that I cannot connect to my server via the external IP address, regardless of whether MongoDB is connected or not. It "refuses to connect." I can do a web preview of my running server, though.
It seems the two may be connected somehow. I've rebooted my VM, but it did not fix anything. I whitelisted the error message IP address in MongoDB, but it did not help.
ETA3: Okay, it appears I have solved the whitelist to MongoDB issues. Through Cloud Shell, I asked my VM what the IP is. It is different than the one GCP tells me is the external IP. By adding this IP to the whitelist, I can connect between GCP VM and MongoDB. Whew. No idea why.
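For reference, one way to double-check which external IP a VM actually has is to query the GCE metadata server from inside the VM. The metadata endpoint below is standard on GCE; ifconfig.me is just one example of a third-party echo service that shows which address your outbound traffic egresses from, which is what MongoDB's whitelist sees:

curl -H "Metadata-Flavor: Google" \
"http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip"

curl -s ifconfig.me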
The VM's external IP address through my browser still gives me a cannot connect message, and when I use the new VM IP address I found through Cloud Shell, it gives me a "took too long to respond" message.
So I feel I have made progress. The remaining problem is accessing my server through Chrome.
Any suggestions on how I can investigate the issue further? I'm at a dead end. I believe the problem is likely simple given my inexperience.
Thanks!
Problem solved by a friend, for anyone in the future with this issue.
I had set up my GCP VM using Cloud Shell and had housed my code by copying my repository through Cloud Shell. It turns out Cloud Shell is more of a virtual interface to the VM, and the files were not physically on the VM. I needed to go through SSH, clone my repository there, and run my server through SSH. Cloud Shell was causing the problem.
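A rough sketch of that fix, with a hypothetical instance name, zone, and repository URL (yours will differ):

gcloud compute ssh my-instance --zone=us-central1-a

# then, on the VM itself:
git clone https://github.com/your-user/your-repo.git
cd your-repo
npm install
node server.js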

connecting wget to vpn

I'm trying to download some files using wget, but the problem is that the files will only download from specific servers. How can I use wget over a VPN?
P.S.: I tried use_proxy=yes -e http_proxy=[server]:[port] but it didn't work; I need to connect to a VPN server, not a proxy.
Install a VPN on your machine first, then run the command
Proxies and VPNs are entirely different things. The proxy functionality won't be of any use to you here.
To use a VPN you have to set up a connection at the OS level (I assume Linux? but I could be wrong). The wget tool itself won't be involved; you'll just run it after your connection is replaced with the VPN connection (no need for any special flags).
As for how you set up the VPN connection, that differs a lot based on the particular details of your situation. It could involve running openvpn yourinfo.ovpn or something like that, or your VPN provider may offer a separate application that sets up the tunnel connection and then adjusts your OS's routing table so traffic flows through the tunnel instead of the normal gateway.
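For example, with OpenVPN the whole flow might look like the following, where yourinfo.ovpn stands in for whatever config file your provider gives you and the download URL is a placeholder:

# bring the tunnel up (stays in the foreground until stopped)
sudo openvpn --config yourinfo.ovpn

# in a second terminal, once the tunnel is up, wget works unchanged
wget https://example.com/files/archive.tar.gz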

How to configure PostgreSQL database over the tunnel in jmeter

I am using JMeter to test an application which uses PostgreSQL. I can connect to the database by using the SSH tunnel provided by the database application.
Can someone please tell me how I can do this using JMeter? I do not see any SSH tunnel option in JMeter's database connection config element.
You could use port forwarding, as explained in this answer:
https://stackoverflow.com/a/1968446/460802
I don't think you should be load testing the database directly; your load test should simulate real-life usage of the application under test. So instead of testing the database, you should focus on the application itself and treat it like a black box. My general recommendation is to reconsider the approach.
If you have already performed normal load testing, identified that the database is the bottleneck, and would like to load test the database separately, performance testing it over an SSH tunnel is not the best idea in itself: the SSH tunnel traffic might become the next bottleneck due to the nature of the TCP protocol and the immense CPU footprint required for encrypting/decrypting the data sent over SSH. So I would recommend talking to the network administrators and asking them to temporarily open the Postgres network port to the machine(s) you're running JMeter from, or to provide access to machine(s) where you can install JMeter with direct access to the database (preferably in the same subnet/physical location, otherwise you might suffer from high latencies).
If for any reason the above instructions are not applicable for you, you can use SSH local forwarding to map the remote Postgres port to a local port; the relevant command would be:
ssh -L 2345:localhost:5432 username@your_postgresql_server
Once done, you should be able to connect to the Postgres instance as if it were installed locally on port 2345, like:
postgres://localhost:2345/your_database
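In JMeter itself there is nothing tunnel-specific to configure: keep the forwarded port open and point the JDBC Connection Configuration element at localhost. A sketch, assuming the database is named your_database:

# keep the tunnel open while the test runs (-N: no remote command)
ssh -N -L 2345:localhost:5432 username@your_postgresql_server

# JDBC Connection Configuration element:
#   Database URL:      jdbc:postgresql://localhost:2345/your_database
#   JDBC Driver class: org.postgresql.Driver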

Consul.io - how to run multiple servers on same machine

This is probably a very basic question for you, but I'm just getting into Consul, and for testing purposes I want to run multiple servers on my PC. For example, I run the first server with
consul agent -server -bootstrap-expect=1 -dc=dev -data-dir=/tmp/consul -ui-dir="c:/consul 0.5.2/dist"
and then I try to run the second server with
consul agent -server -data-dir=/tmp/consul2 -dc=dc2
but it returns
==> Error starting agent: Failed to start Consul server: Failed to start RPC layer: listen tcp 0.0.0.0:8300: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
What am I missing from my command?
You are launching two Consul servers using mostly default values. In this case the problem is that you are using the default ports.
When you read the error message you will notice that your second Consul server tries to bind to port 8300. But your first server is already using this port, causing the second server to fail at startup. (Note: Consul binds to a variety of ports, each with its own purpose and default setting. Take a look at the documentation.)
As suggested by LenW, you can use Vagrant to set up your environment. You could follow the Consul tutorial.
If you do not want to use Vagrant or set up any virtual machines on your own, you can change the defaults of the second server.
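For example, you could give the second server its own ports via a config file. A sketch against the 0.5.x port set (the numbers are arbitrary alternatives to the defaults, so check the documentation for your version):

consul2-ports.json:

{
  "ports": {
    "dns": 8610,
    "http": 8510,
    "rpc": 8410,
    "serf_lan": 8311,
    "serf_wan": 8312,
    "server": 8310
  }
}

consul agent -server -data-dir=/tmp/consul2 -dc=dc2 -node=server2 -config-file=consul2-ports.json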
If you are trying to simulate a production topology on your dev machine, I would look at using Vagrant in combination with VirtualBox to simulate a couple of machines for testing.

Google Cloud SQL VM refusing connection

I have been stuck trying to figure out why my Cloud SQL VM is refusing my connection from my machine (whose IP address I have added as a subnet). I can SSH into the VM, but I cannot access the VM from a browser to run SQL queries. I have scoured the internet for days trying to find a fix but cannot seem to get past this point. My Apache listens on port 80. Also, I'd like to add that I have been connecting to my MySQL db for months through PHP and running queries, so I do not believe the problem is with Apache. However, if it is, please point me to where I should be looking.
It sounds like you have MySQL running on a GCE VM, not an actual CloudSQL instance (that is a different service from GCE). Is that right?
If so, then if you are trying to connect from your local machine directly to the MySQL instance, you are probably getting blocked by the firewall. Go to the networks tab (under Compute Engine) in the Cloud Console and see what firewall rules you have enabled. You might need to add one for 3306 or whatever port you are using.
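If that is the case, a firewall rule along these lines would open MySQL's default port; the rule name here is arbitrary and YOUR_IP is the address you are connecting from:

gcloud compute firewall-rules create allow-mysql-from-home \
--allow=tcp:3306 --source-ranges=YOUR_IP/32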