How to configure a PostgreSQL database over an SSH tunnel in JMeter - postgresql

I am using JMeter to test an application which uses PostgreSQL. I can connect to the database through an SSH tunnel provided by the database application.
Can someone please tell me how to do this in JMeter? I do not see any SSH tunnel option in JMeter's database connection configuration element.

You could use port forwarding, as explained in this answer:
https://stackoverflow.com/a/1968446/460802

I don't think you should be load testing the database directly; your load test should simulate real-life usage of the application under test. So instead of testing the database, focus on the application itself and treat it like a black box. My general recommendation is to reconsider the approach.
If you have already performed normal load testing, identified the database as the bottleneck, and would like to load test it separately, then performance testing it over an SSH tunnel is not the best idea in itself: the tunnel traffic might become the next bottleneck due to the nature of the TCP protocol and the considerable CPU footprint required for encrypting/decrypting the data sent over SSH. So I would recommend talking to your network administrators and asking them either to temporarily open the Postgres network port to the machine(s) you're running JMeter from, or to give you access to machine(s) where you can install JMeter with direct access to the database (preferably in the same subnet / physical location, otherwise you might suffer from high latencies).
If for any reason the above is not applicable for you, you can use SSH local port forwarding to map the remote Postgres port to a local port. The relevant command would be:
ssh -L 2345:localhost:5432 username@your_postgresql_server
Once done, you should be able to connect to the Postgres instance as if it were installed locally on port 2345, e.g.:
postgres://localhost:2345/your_database
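For reference, in JMeter's JDBC Connection Configuration element the tunnelled instance would then be addressed with a JDBC URL along these lines (the database name is a placeholder):
jdbc:postgresql://localhost:2345/your_database
with org.postgresql.Driver as the JDBC Driver class, and the PostgreSQL JDBC driver jar available in JMeter's lib folder.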

Related

Connecting wget to VPN

I'm trying to download some files using wget, but the problem is that the files will only download from specific servers. How can I use wget over a VPN?
P.S.: I tried use_proxy=yes -e http_proxy=[server]:[port], but it didn't work. I need to connect to a VPN server, not a proxy.
Install a VPN on your machine first, then run the command
Proxies and VPNs are entirely different things. The proxy functionality won't be of any use to you here.
To use a VPN you have to set up the connection at the OS level (I assume Linux, but I could be wrong). The wget tool itself won't be involved; you'll just run it after your connection is replaced with the VPN connection (no need for any special flags).
As for how you set up the VPN connection, that differs a lot based on the particular details of your situation. It could involve running openvpn yourinfo.ovpn or something like that, or your VPN provider may offer a separate application that sets up the tunnel connection and then adjusts your OS's routing table so traffic flows through the tunnel instead of to the normal gateway.
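If it is an OpenVPN-based provider, a minimal sketch might look like this (the .ovpn file name and download URL are just placeholders):
sudo openvpn --config yourinfo.ovpn --daemon
# once the VPN route is active, wget works as usual, with no extra flags:
wget https://restricted.example.com/some_file.tar.gz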

Incremental data from PostgreSQL

I have a number of identical local PostgreSQL databases (identical in structure, not data) on several laptops that have intermittent access to the internet. Records are being added to each DB daily. So branches A, B, and C each have a local PostgreSQL database. I would like all records from A, B, and C in each table in a cloud database. The A, B, and C data is separate: there is no overlap, A doesn't change B or C, etc. There are no duplicated unique keys.
NEED: I would like to collect all this data in a cloud-based database by adding daily incremental data to a single cloud database, so I can query the whole consolidated dataset using SQL and pull reports as needed.
Please can anyone point me in the right direction?
Thanks
It sounds like you want logical replication from each laptop to the cloud server. The problem there might be that contact must be made by the replica to each of the masters, so when your laptops are online, they would need to have predictable IP addresses so that they can be reached.
Maybe the best way around this is with a reverse SSH tunnel. On the central replica, you would tell it to subscribe to a publication hosted on some non-standard port on localhost, with a different port reserved for each laptop: for example, 9997, 9998, and 9999.
Then when each laptop has connectivity, it could run something like:
ssh rajb@1centralserver.example.com -R9999:localhost:5432 -f -N -T
This establishes an SSH connection to the central server (requiring a password, or a private key, or however you have SSH set up) and instructs the central server that whenever someone connects to port 9999 on the central server, it should really send that connection back over the SSH tunnel and hook it up to port 5432 (the default Postgres server port) of the laptop.
For initially setting things up and debugging, you might want to omit the -f -N -T. That way, in addition to setting up the tunnel, you also get an interactive ssh session you can use for monitoring things.
Once the central server notices the connection is available, it will start downloading changes since the last time it could connect. When there is no connection, you will get a lot of nuisance messages in the log file as it checks each server every ~5 seconds to see if it is available.
From each laptop's perspective, the connection is coming from within, so the replication connection will use whatever authentication is set up for 127.0.0.1 or ::1, not the authentication set up for the actual remote IP.
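As a rough sketch of the replication side (database, publication, and user names are placeholders, the matching tables must already exist in the central database, and wal_level = logical must be set on each laptop), you would create a publication on each laptop:
psql -d branchdb -c "CREATE PUBLICATION branch_pub FOR ALL TABLES;"
and a subscription on the central server, pointing at that laptop's reverse-tunnelled port:
psql -d consolidated -c "CREATE SUBSCRIPTION branch_a_sub CONNECTION 'host=localhost port=9999 dbname=branchdb user=replicator password=secret' PUBLICATION branch_pub;"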

Connect to PostgreSQL Data Source in Grafana via SSH Bastion

I am using Grafana Cloud for a PoC project, and the long and short of it is that I cannot find a way to securely connect Grafana and the PostgreSQL data source.
For obvious reasons we do not allow any direct connections to our database and instead use jump hosts with individual SSH keys for access.
I have looked for a "Connect to PostgreSQL via SSH" option and found nothing. I am curious if anyone else has faced this, as it seems like it would be a common issue.
Thank you
You might be able to set up an SSH tunnel on the machine/instance that is hosting Grafana and tunnel through the jump host directly to PostgreSQL. There are some pretty good docker-compose setups for doing that.
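For a self-hosted Grafana instance, a minimal sketch of such a tunnel might be (hostnames and the user are placeholders):
ssh -f -N -L 5432:db.internal.example.com:5432 grafana@bastion.example.com
after which the PostgreSQL data source in Grafana can be pointed at localhost:5432.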

Should I secure my MongoDB Database?

I am setting up two computers to run a web application. web-host hosts a MongoDB database and NodeJS web server, while worker runs some more demanding processes and populates the database. Using an SSH tunnel from worker, web-host:27017 is accessible using localhost:9999 from worker. web-host:80 has been set up to be accessible on http://our.corporate.site/my_site/.
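(For illustration, a tunnel like the one described above could be created from worker with something like the following, where the user name is a placeholder:
ssh -f -N -L 9999:localhost:27017 deploy@web-host
)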
At the moment MongoDB has no authentication on it - anything that can contact web-host:27017 can read or write anything to the database.
With this setup, how paranoid should I be about authenticating requests to MongoDB? The answers to this question seemed to suggest not very. Considering access is only possible from localhost, it seems about as secure as the local file system. In MySQL I usually have a special 'web' user with limited privileges to limit the damage of an injection attack in case I make a mistake sanitizing input; however, MongoDB seems less vulnerable to injection (or at least easier to sanitize) compared with MySQL.
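(For illustration, the MongoDB equivalent of that limited 'web' user would be created with something like the following, where the database name and credentials are placeholders:
mongo my_database --eval 'db.createUser({user: "web", pwd: "secret", roles: [{role: "readWrite", db: "my_database"}]})'
)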
Here's the issue: if you do set up Mongo authentication, you are going to need to store the keys on the machine that accesses it.
So assuming that web-host:80 is compromised, the keys are also vulnerable.
There are some mitigation processes you can use to secure your environment, but there is no silver bullet if an attacker gains root access to your environment.
First, I would consider putting MongoDB on a separate machine on a private internal network that can only be accessed by machines in a DMZ (the part of the network where machines can communicate with both your internal network and the outside world).
Next, assuming you are running a Linux-based system, you should be able to use AppArmor or SELinux to limit which processes are allowed to make outbound network requests. In this case only your webapp process should be able to initiate network requests such as connecting to your Mongo database.
If an attacker was able to get non-root access on your machine, the SELinux/AppArmor system policy would prevent them from initiating a connection to your database from their own script.
Using this architecture, you should be more secure than simply augmenting your current architecture with authentication. In a choice between SELinux and AppArmor, I would use SELinux, since it was much more mature and had much more granular control the last time I checked.
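As a very rough illustration of the SELinux idea only (this assumes the web application runs in a confined domain such as httpd_t, which a plain Node process will not do without a custom policy module, so treat these booleans purely as an example of the mechanism):
setsebool -P httpd_can_network_connect off
setsebool -P httpd_can_network_connect_db on
The first denies general outbound connections from the web server domain; the second re-allows connections to ports labelled as database ports.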

MongoDB connection over SSH Tunnel

I have two servers on Linode that can connect to each other through the local Linode regional network. The problem is that any other Linode in the region can also connect using that IP. One server hosts the python application and the other hosts the MongoDB.
Would it be a good idea to connect to the database using an SSH Tunnel? What happens if the tunnel fails? Are SSH tunnels known to fail at all?
Or am I approaching the problem the wrong way? Another alternative I can think of is setting up iptables to only accept connections from a particular source IP.
I'm thinking of a more hypothetical situation; perhaps a DB password is all I need. I've been taking a computer security course and it makes everything seem more vulnerable than it really is.
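For reference, the iptables alternative might look roughly like this (the source address is a placeholder for the application server's IP):
iptables -A INPUT -p tcp --dport 27017 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 27017 -j DROP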