I have two servers on Linode that can connect to each other through the local Linode regional network. The problem is that any other Linode in the region can also connect using that IP. One server hosts the Python application and the other hosts the MongoDB instance.
Would it be a good idea to connect to the database using an SSH tunnel? What happens if the tunnel fails? Are SSH tunnels known to fail at all?
Or am I approaching the problem the wrong way? Another alternative I can think of is setting up iptables to only accept connections from a particular source IP.
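Something like the following is what I have in mind for the iptables alternative (the app server's private IP here is just a placeholder):

# Accept MongoDB connections only from the app server's private IP (placeholder)
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 27017 -j ACCEPT
# Drop MongoDB connections from everyone else on the regional network
iptables -A INPUT -p tcp --dport 27017 -j DROP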
Thinking about it in a more hypothetical sense, perhaps a DB password is all I need. I've been taking a computer security course, and it makes everything seem more vulnerable than it really is.
I'm trying to download some files using wget, but the problem is that the files will only download from specific servers. How can I use wget over a VPN?
P.S.: I tried use_proxy=yes -e http_proxy=[server]:[port], but it didn't work. I need to connect to a VPN server, not a proxy.
Install a VPN on your machine first, then run the command.
Proxies and VPNs are entirely different things. The proxy functionality won't be of any use to you here.
To use a VPN you have to set up a connection at the OS level (I assume Linux, but I could be wrong). The wget tool itself won't be involved; you'll just run it after your connection is replaced with the VPN connection (no need for any special flags).
As for how you set up the VPN connection, that differs a lot based on the particular details of your situation. It could involve running openvpn yourinfo.ovpn or something like that, or your VPN provider may offer a separate application to set up the tunnel connection and then adjust your OS's routing table so traffic flows through the tunnel instead of to the normal gateway.
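For example, with an OpenVPN-based provider it might look something like this (the .ovpn file name and download URL are placeholders):

# Bring up the tunnel; once connected, the default route goes through the VPN
sudo openvpn --config yourinfo.ovpn
# Then, in another shell, run wget normally -- no special flags needed
wget http://example.com/file.tar.gz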
I am using Grafana Cloud for a PoC project, and the long and short of it is that I cannot find a way to securely connect Grafana and the PostgreSQL data source.
For obvious reasons we do not allow any direct connections to our database and instead use jump hosts with individual SSH keys for access.
I have looked for a "Connect to PostgreSQL via SSH" option and found nothing. I am curious if anyone else has faced this, as it seems like it would be a common issue.
Thank you
You might be able to set up an SSH tunnel on the machine/instance that is hosting Grafana, tunneling through the jump host directly to PostgreSQL. There are some pretty good docker-compose setups to do that.
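A minimal sketch of that tunnel (the jump host and internal database hostnames are placeholders):

# On the Grafana host: keep a local port forwarded through the jump host
# -N: no remote command, just the tunnel
ssh -N -L 15432:db.internal:5432 youruser@jump.example.com

Then point Grafana's PostgreSQL data source at localhost:15432, as if the database were local.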
I am using JMeter to test an application which uses PostgreSQL. I can connect to the database using the SSH tunnel provided by the database application.
Can someone please tell me how to do this using JMeter? I do not see any SSH tunnel option in JMeter's Database Connection Configuration element.
You could use port forwarding, as explained in this answer:
https://stackoverflow.com/a/1968446/460802
I don't think you should be load testing the database directly; your load test should simulate real-life usage of the application under test. So instead of testing the database you should focus on the application itself and treat it like a black box. My general recommendation is to reconsider the approach.
If you have already performed normal load testing, identified that the database is the bottleneck, and would like to load test the database separately, performance testing it over the SSH tunnel is not the best idea in itself: the SSH tunnel traffic might become the next bottleneck, due to the nature of the TCP protocol and the significant CPU footprint required for encrypting/decrypting the data sent over SSH. So I would recommend talking to your network administrators and asking them either to temporarily open the Postgres network port to the machine(s) you're running JMeter from, or to provide access to machine(s) where you can install JMeter with direct access to the database (preferably in the same subnet/physical location, otherwise you might suffer from high latencies).
If for any reason the above suggestions are not applicable to you, you can use SSH local forwarding to map the remote Postgres port to a local port. The relevant command would be:
ssh -L 2345:localhost:5432 username@your_postgresql_server
Once done, you should be able to connect to the Postgres instance as if it were installed locally on port 2345, like:
postgres://localhost:2345/your_database
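In JMeter's JDBC Connection Configuration element, the equivalent Database URL (assuming the standard PostgreSQL JDBC driver, org.postgresql.Driver) would be:

jdbc:postgresql://localhost:2345/your_database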
I am currently developing a service application that pulls data from Mongo and returns it to consumers. There is a layer of authentication involved, and I am using Heroku to host the service. Mongo was being hosted on MongoLabs, but there were some significant performance concerns, so we have moved to hosting Mongo on one of our cloud servers. We want to be able to secure access to Mongo with a firewall, whitelisting the IP address of the service app on Heroku.
There are a couple of issues with this.
Issues
Well, at least these are the main ones...
Heroku, while providing some nice features like easily managing cluster settings, s/w upgrades, etc., draws IP addresses from a pool. While the DNS value of an application's URL may not change, the underlying IP address can and will change.
To be better secured, mongo-server01 is placed behind a firewall that requires rules to be added using static IP addresses to allow access.
Since Heroku can't provide static IP addresses, we need to consider options for how Heroku can access mongo-server01 while still protecting the data it hosts.
Static IP addresses for outbound requests
There are a couple of options specifically for Heroku. Fixie and QuotaGuard Static both appear to serve that function, but they seem to be geared toward HTTP and HTTPS communication only (perhaps not even HTTPS).
Mongo doesn't use HTTP; it uses its own network protocol, over port 27017 by default:
https://groups.google.com/forum/embed/#!topic/mongodb-user/eX_RIv2cZVw
Does this mean these proxies won't work for calls to Mongo? In theory, there doesn't seem to be any reason a proxy should only handle HTTP or HTTPS requests. That being said, there doesn't seem to be any way to get into these Heroku plugins and configure the proxy to use a different port or to handle Mongo's particular protocol.
If we could get into the proxy, perhaps we could put an additional set of SSH keys in place so the SSH tunnel chain could continue on to mongo-server01. But there doesn't seem to be any way to SSH to these proxies or access their configuration through the plugin dashboards.
The question (finally!)
How can one connect from Heroku to a firewalled host to get data from MongoDb? Are there proxies that can be used to achieve this?
The simple approach: won't work because Heroku applications don't use static IP addresses.
Using a proxy: the Heroku proxy plugins don't know how to proxy the MongoDB protocol, and we can't install SSH keys on the proxy for SSH tunneling.
What can be done to get a connection without opening up the Mongo server to the world?
I spoke with the folks at QuotaGuard and they do have something that does the trick.
"We offer a SOCKS proxy which should do the trick, as it proxies at the TCP layer."
https://devcenter.heroku.com/articles/quotaguardstatic#socks-proxy-setup
I did need to make a simple change to bin/qgsocksify:
#SOCKS_DIR="$(dirname $(dirname $(readlink -f ${BASH_SOURCE[0]})))/vendor/dante"
SOCKS_DIR="${HOME}/vendor/dante"
After that, the proxy worked like a charm.
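For anyone else trying this: the wrapper is used by prefixing your dyno's start command, so all outbound TCP (including Mongo's 27017) flows through the static IPs. In the Procfile it looks something like this (the Node start command is just an example; use your app's actual one):

web: bin/qgsocksify node server.js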
I have been stuck trying to figure out why my Cloud SQL VM is refusing my connection from my machine (whose IP address I have added as a subnet). I can SSH into the VM, but I cannot access the VM from a browser to run SQL queries. I have scoured the internet for days trying to find a fix, but I cannot seem to get past this point. My Apache listens on port 80. I'd also like to add that I have been connecting to my MySQL DB for months through PHP and running SQL queries, so I do not believe the problem is with Apache. However, if it is, please point me to where I should be looking.
It sounds like you have MySQL running on a GCE VM, not an actual Cloud SQL instance (that is a different service from GCE). Is that right?
If so, then if you are trying to connect from your local machine directly to the MySQL instance, you are probably being blocked by the firewall. Go to the Networks tab (under Compute Engine) in the Cloud Console and see what firewall rules you have enabled. You might need to add one for 3306 or whatever port you are using.
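If you use the gcloud CLI, a sketch of such a rule (the rule name and source range are placeholders; substitute your machine's public IP):

# Allow MySQL (3306) only from your machine's public IP
gcloud compute firewall-rules create allow-mysql-from-my-ip --allow tcp:3306 --source-ranges 203.0.113.45/32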