PostgreSQL still able to make connection after updating UFW - postgresql

I am trying to test how an application handles network instability. The client application makes connections and runs queries against a database server. To simulate network instability, I am adding ufw rules that deny outgoing traffic while the client application is connected to the database server. I start up the application and it is able to run queries on the database. I then update the UFW rules; the following are the top two rules.
[ 1] 5432/udp DENY OUT Anywhere (out)
[ 2] 5432/tcp DENY OUT Anywhere (out)
After the ufw rules have been updated, the client is still able to make calls to the database server. However, if I restart the client application, it is then unable to make a connection to the database server.
Does anyone know why this is occurring? Is there a better way to do what I am trying to do? Any help would be much appreciated.
More Details: The client application is using postgresql-9.4-1207.jdbc4 to connect to the database. The database is running postgresql 9.4.5.

UFW comes with some default configuration options in place. On my Ubuntu server they are located in /etc/ufw. In the before.rules file there are two rules ...
-A ufw-before-input -m state --state RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-output -m state --state RELATED,ESTABLISHED -j ACCEPT
... that allow packets belonging to established connections straight through. Since these rules are read in before the user-specified rules, they take precedence. I commented out these two lines in the configuration file and my issue was resolved.
However, the comment above these two lines reads "# quickly process packets for which we already have a connection". I'm not sure what performance impact removing them has; I am not particularly concerned about that in my case, but it might be a concern for someone else.
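If you'd rather script the change than edit the file by hand, here is a minimal sketch, assuming the stock rule text on Ubuntu (back the file up first):
sudo sed -i 's|^-A ufw-before-input -m state --state RELATED,ESTABLISHED -j ACCEPT|#&|' /etc/ufw/before.rules
sudo sed -i 's|^-A ufw-before-output -m state --state RELATED,ESTABLISHED -j ACCEPT|#&|' /etc/ufw/before.rules
sudo ufw reload
An alternative that leaves before.rules untouched is to flush the existing conntrack entries for port 5432 after adding the DENY rules, e.g. sudo conntrack -D -p tcp --dport 5432 (requires the conntrack package); that drops the established flows so the new rules apply to subsequent packets.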


gcloud beta sql connect "server closed the connection unexpectedly"

When trying to get a psql shell (not using an IAM user) I am receiving:
> gcloud alpha sql connect pg-instance --database mydb --user myuser --project my-project
Starting Cloud SQL Proxy: [/Users/me/google-cloud-sdk/bin/cloud_sql_proxy -instances my-project:us-central1:pg-instance=tcp:9470 -credential_file /Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json]
2022/03/15 14:47:59 Rlimits for file descriptors set to {Current = 8500, Max = 9223372036854775807}
2022/03/15 14:47:59 using credential file for authentication; path="/Users/me/.config/gcloud/legacy_credentials/me@me.com/adc.json"
2022/03/15 14:48:00 Listening on 127.0.0.1:9470 for my-project:us-central1:pg-instance
2022/03/15 14:48:00 Ready for new connections
Connecting to database with SQL user [myuser].Password:
psql: error: connection to server at "127.0.0.1", port 9470 failed: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I had the same error message when connecting to Postgres (Cloud SQL) using a service account.
In my setup I ran cloud_sql_proxy inside a Docker container.
In order to make it work I had to add the extra configuration defined in step #9 of https://cloud.google.com/sql/docs/sqlserver/connect-docker#connect-client:
docker run -d \
-v <PATH_TO_KEY_FILE>:/config \
-p 127.0.0.1:5432:5432 \
gcr.io/cloudsql-docker/gce-proxy:1.33.1 /cloud_sql_proxy \
-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:5432 -credential_file=/config
The missing bits were the host IP (127.0.0.1) in the port mapping and the 0.0.0.0 bind address in the cloud_sql_proxy -instances flag.
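With the proxy mapped to 127.0.0.1:5432 as above, a quick connectivity check from the host might look like this (user and database names are placeholders):
psql "host=127.0.0.1 port=5432 user=myuser dbname=mydb"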
There are a few things I would like to point out. The best starting point is the About connection options page; both the Overview and the Before you begin sections are very helpful for getting the full picture of the process and for configuring the user properly. The most important part, though, is the Connection Options section: given the message connection to server at "127.0.0.1", I'm guessing you are using a private IP, so please make sure that section is covered before you start debugging.
In your case, the logs say there was an error in the connection to the server.
I used the Troubleshoot guide, which includes the Diagnose issues link, to get to the Debug connection issues page; it has a lot of useful information on how to debug any connectivity issue.
Generally, connection issues fall into one of the following three areas:
Connecting - are you able to reach your instance over the network?
Authorizing - are you authorized to connect to the instance?
Authenticating - does the database accept your database credentials?
Each of those can be further broken down into different paths for investigation.
Once you have determined the connection method, there are different questions that will help guide you through the possible troubleshooting paths.
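As a rough illustration, each of the three areas can be probed separately from the client (the host, port, and names below are placeholders based on this question's setup):
nc -vz 127.0.0.1 9470                      # connecting: is the proxy port reachable at all?
gcloud sql instances describe pg-instance  # authorizing: inspect the instance's connectivity settings
psql "host=127.0.0.1 port=9470 user=myuser dbname=mydb"   # authenticating: are the credentials accepted?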
If using these guides doesn’t get you a solution, please make sure to update your answer with the results, steps, and information followed to provide further help. This would be a good example, as it has the same log error, and this other question shows that there are a few different troubleshooting paths for this specific log message, plus they have useful information for you.

Obtain ssh version externally using nmap

I would like to know if I can obtain the SSH version of my external VPS using nmap.
nmap -p 22 -sV <domainname>
result:
22/tcp filtered ssh
Is there another nmap syntax so I can obtain ssh service version?
Just want to obtain the ssh service version of my external vps.
I tried a lot of nmap commands, but there is probably something in between, like a firewall, that causes the filtered state. My own network is behind a DrayTek device. Maybe that is a possible cause?
Thanks in advance!
The nmap option --badsum can provide insight into the existence of a firewall. A non-firewall device running a full network stack will silently drop a packet with a bad checksum, so if your scan reaches the end device you would expect to see the same result as with your -sV scan. A firewall, however, may reply differently to --badsum.
As for the version question: -sV is ideal, but -A may run some scripts that return useful information. You can also run --script=sshv1 or another ssh-related script. More script options are listed in the nmap scripts documentation.
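For example (hedged; what you get back depends on what the firewall in front of the VPS does):
nmap -p 22 -sV <domainname>                                    # standard version probe
nmap -p 22 --badsum <domainname>                               # any reply hints at a middlebox, since real hosts drop bad checksums
nmap -p 22 --script ssh2-enum-algos,ssh-hostkey <domainname>   # ssh-specific NSE scripts
If port 22 keeps showing as filtered, scanning from a different network (outside the DrayTek) is the quickest way to tell whether the filtering happens on your side or in front of the VPS.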

MongoDB cannot remote access

I'm new to Linux servers. I installed MongoDB on CentOS 6.3 and I run the MongoDB server with this command:
mongod -config /etc/mongodb.conf &
And I'm sure that I have set bind_ip to listen on all IPs:
# mongodb.conf
# Where to store the data.
dbpath=/var/lib/mongodb
#where to log
logpath=/var/log/mongodb/mongodb.log
logappend=true
rest = true
bind_ip = 0.0.0.0
port = 27017
But I still cannot access MongoDB remotely. My server IP is 192.168.2.24, and when I run mongo on my local PC to access this MongoDB, it shows me this error:
Error: couldn't connect to server 192.168.2.24:27017 (192.168.2.24), connection attempt failed at src/mongo/shell/mongo.js:148
exception: connect failed
But I can access this MongoDB on the server where it is installed using this command:
mongo --host 192.168.2.24
So I think MongoDB itself is set up for remote access, but maybe something is wrong on the Linux server, maybe the firewall? So I tried this command to check whether the port is open for remote access:
iptables -L -n | grep 27017
nothing is returned, so I add the port to iptables using these commands:
iptables -A INPUT -p tcp --dport 27017 -j ACCEPT
iptables -A OUTPUT -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPT
and save the iptables & restart it:
iptables-save | sudo tee /etc/sysconfig/iptables
service iptables restart
I can see that port 27017 has been added to the iptables list, but it still does not work at all. I think the port may not actually have been opened. What should I do? I'm new to Linux servers; by the way, my Linux server is offline, so I can't use yum. Please give me a detailed solution. Thanks so much.
It seems like the firewall is not configured correctly.
Disclaimer: Fiddling with firewall settings has security implications. DO NOT USE THE FOLLOWING PROCEDURE ON PRODUCTION SYSTEMS UNLESS YOU KNOW WHAT YOU ARE DOING!!! If in the slightest doubt, get back to a sysadmin or DBA.
The problem
Put simply, a firewall limits the access to services like MongoDB running on the protected machine by unauthorized parties.
CentOS only allows access to ssh by default. We need to configure the firewall so that you can access the MongoDB service.
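As a side note, this is also the likely reason the iptables -A commands in the question had no effect: on a stock CentOS 6 box the INPUT chain usually ends with a REJECT rule, and -A appends after it, so the appended ACCEPT rule is never reached. You can check with:
iptables -L INPUT -n --line-numbers   # look for a REJECT line above your appended ACCEPT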
The solution
We will install a small tool provided by CentOS before version 7 (version 7 provides different means) which simplifies the use of iptables. iptables in turn configures netfilter, the Linux kernel framework for manipulating network packets, which (amongst other cool things) provides the firewall functionality.
Then, we will use said tool to configure the firewall functionality so that MongoDB is accessible from everywhere. I can't give you a more secure configuration, since I do not know your network setup. Again, use this procedure on production systems at your own risk. You have been warned!
Installation of system-config-firewall-tui
First, you have to log into your CentOS box as root, which allows you to install and remove packages and to change system-wide configuration.
Then, you need to issue (the dollar sign denotes the shell prompt)
$ yum -y install system-config-firewall-tui
The installation should complete without errors.
Configuration of the firewall
Next, you need to start the tool we just installed
$ system-config-firewall-tui
which will open a small command-line GUI.
Do not simply disable the firewall!
Press Tab (or →|) until the "Customize" button is highlighted, then press ↵. In the next screen, highlight "Forward" and press ↵. You should now be in a screen called "Other Ports",
in which you highlight "Add" and press ↵. This brings you to a screen called "Port and Protocol", which you fill in with port 27017 and protocol tcp.
The configuration explained: MongoDB uses TCP for communicating with the clients and it listens on port 27017 by default for a standalone instance. Note that you might need to change the port according to the referenced list in case you do not run a standalone instance or replica set.
The next step is to highlight "OK" and press ↵, which will seemingly clear the inputs. However, the configuration we just made has been saved. So we press "Cancel" and return to the "Other Ports" screen, which should now list 27017/tcp.
Now, we press "Close" and return to the main screen of "system-config-firewall-tui". Here, we press "Ok" and the tool asks you if you really want to apply those the changes you just made. Take the time to really think about that. ;)
Pressing "Yes" will now modify the firewall rules executed by the Linux kernel.
We can verify that by issuing
$ iptables -L -n | grep 27017
which should result in the output below:
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:27017
Now you should be able to connect to your MongoDB server.
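To verify end to end (adjust the IP to your setup): on the server, confirm mongod is listening on all interfaces, then connect from the remote PC:
netstat -tlnp | grep 27017               # on the server: expect mongod bound to 0.0.0.0:27017
mongo --host 192.168.2.24 --port 27017   # from the remote PC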

iptables / cherrypy redirection changes request mid-processing

Sorry for the vague title, but my issue is a bit complicated to explain.
I have written a "captive portal" for a WLAN access point in cherrypy, which is just a server that blocks MAC addresses from accessing the internet before they have registered at at certain page. For this purpose, I wrote some iptables rules that redirect all HTTP traffic to me
sudo iptables -t mangle -N internet
sudo iptables -t mangle -A PREROUTING -i $DEV_IN -p tcp -m tcp --dport 80 -j internet
sudo iptables -t mangle -A internet -j MARK --set-mark 99
sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp -m mark --mark 99 -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1
(the specifics of this setup are not really important for my question, just note that an "internet" chain is created which redirects HTTP to port 80 on the access point)
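To confirm the chain is in place before testing with a client, listing the mangle table should show the mark rule (output format may differ slightly between iptables versions):
sudo iptables -t mangle -L internet -n -v --line-numbers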
At port 80 on the AP, a cherrypy server serves a static landing page with a "register" button that issues a POST request to http://10.0.0.1/agree . To process this request, I have created a method like this:
@cherrypy.expose
def agree(self, **kwargs):
    # retrieve the client's MAC address by checking the ARP table
    ip = cherrypy.request.remote.ip
    mac = str(os.popen("arp -a " + str(ip) + " | awk '{ print $4 }'").read())
    mac = mac.rstrip('\r\n')
    # add an iptables rule to whitelist the client; rmtrack removes previous connection-tracking entries
    os.popen("sudo iptables -t mangle -I internet 1 -m mac --mac-source %s -j RETURN" % mac)
    os.popen("sudo rmtrack %s" % ip)
    return open('welcome.html')
So this method retrieves the client's MAC address from the arp table, then adds an iptables exception to remove that specific MAC from the "internet" chain that redirects traffic to the portal.
Now when I test this setup, something interesting happens. Adding the exception in iptables works - i.e. the client can now access web pages without getting redirected to me. The problem is that the initial request never comes through to my server, i.e. the page welcome.html is never opened - instead, right after the iptables and rmtrack calls are executed, the client tries to open the "agree" path on the page it requested before being redirected to my portal.
For example, if they hit "google.com" in the address bar, then got sent to my portal and agreed, they would now try to open http://google.com/agree. As a result, they get an error after a while. It appears that the iptables or rmtrack call changes the request to go to the original destination while it is still being processed by my server, which doesn't make any sense to me. Consequently, it doesn't matter which static page I return or which redirects I make after those commands have been issued - the return value of my function isn't used by the client.
How could I fix this problem? Every piece of useful information is appreciated.
Today I managed to solve my problem, so I'm going to put the solution here, although I doubt many people will run into the same problem.
Basically, all that was needed was an absolute-URL form action somewhere during the request processing on the captive-portal server. In my case, the form on the index page where you agreed to my T&C used the action /agree. This meant the client was left believing it was accessing that path on its original destination server (e.g. google.com/agree).
Using the absolute form http://10.0.0.1/agree instead, the client follows the correct redirect after the iptables call.
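A quick way to sanity-check the endpoint from a client on the WLAN, assuming curl is available there (the form field name is a placeholder):
curl -i -d 'agree=1' http://10.0.0.1/agree
With the absolute action in the form, the POST above goes to the portal itself rather than to whatever host the client originally requested.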

How to get MONGO_URL from command line Meteor Up deployment?

I am currently deploying to DigitalOcean using Meteor Up. If I don't specify a MONGO_URL in mup.json, can I get the value from the command line while the website is running? I.e. I don't want to shut down the site.
If I go to the app directory and run meteor mongo --url, I get the following error:
mongo: Meteor isn't running a local MongoDB server.
This command only works while Meteor is running your application
locally. Start your application first. (This error will also occur if
you asked Meteor to use a different MongoDB server with $MONGO_URL when
you ran your application.)
If you're trying to connect to the database of an app you deployed
with 'meteor deploy', specify your site's name with this command.
Even if I run the app from the app directory, it will only give the localhost MONGO_URL. I need the MONGO_URL for the deployed app.
I have also taken a look at a similar question as suggested by some of the answers. I disagree that it is "impossible" to get the MONGO_URL without some other program running on the server. It's not as if we are defying the laws of physics here, folks. Fundamentally, there should be a way to access it. Just because no one has yet figured it out doesn't mean it is impossible.
meteor mongo --url should return the URL.
Try opening another shell in the app directory and running that command.
Meteor Up packages your app in production mode with meteor build so that it runs via node rather than the meteor command line interface. Among other things, this means meteor foo won't work on the remote server (at least not by default). So what you're really looking for is a way to access mongo itself remotely.
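Before going the remote-mongo route below, one generic (non-Meteor-specific) trick is to read the value straight out of the running node process's environment on the droplet; this assumes mup exported MONGO_URL into that environment, and the pgrep pattern is a guess you may need to adjust:
PID=$(pgrep -f 'node .*main.js' | head -1)
sudo cat /proc/$PID/environ | tr '\0' '\n' | grep '^MONGO_URL='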
I recently set up mongo on an AWS EC2 instance and listed some lessons learned here: https://stackoverflow.com/a/28846703/2669596. Some details of how you do it are going to be different on Digital Ocean, but these are the main things you have to take care of once mongo itself is installed:
Public IP/DNS Address: This is probably fine already since you can deploy to the server.
Port Security Rules: You need to make sure port 27017 is open for TCP access, at least from your IP address. MongoDB also has an http interface you can set up; if you want to use that you'll need to open 28017 as well.
/etc/mongod.conf (file location may differ depending on Linux flavor; a scripted version of these edits is sketched after this list):
Uncomment port=27017 to make sure you have the default port (I don't think this is actually necessary, but it made me feel better and it's good to know where to change the default port...).
Comment out bind_ip=127.0.0.1 in order to listen to external interfaces (e.g. remote connections).
Uncomment httpinterface=true if you want to use the http interface.
You may have to restart the mongod host via sudo service mongod restart. That's a problem if you can't have downtime, but I don't know of a way around that if you change the config file.
Create User: You need to create an admin and/or user to access the database remotely.
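A minimal scripted version of the mongod.conf edits above, assuming the stock commented/uncommented lines shown (back the file up first):
sudo sed -i 's/^bind_ip=127.0.0.1/#bind_ip=127.0.0.1/' /etc/mongod.conf   # comment out: listen on all interfaces
sudo sed -i 's/^#port=27017/port=27017/' /etc/mongod.conf                 # uncomment the default port
sudo service mongod restart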
Once you've done all of that, you should be able to access the database from your local machine (assuming you have the mongo client installed locally) by running
mongo server.url.com:27017/mup-app-name -u username -p
where server.url.com is the URL or IP address of your remote server, mup-app-name is the appName parameter from your mup.json file, username is the user you created to access the database, and you'll be prompted for that user's password after you run the command (or you could put it after -p on the same line, depending on the password).
There may also be a way to do this by setting up nginx to reverse-proxy 127.0.0.1:27017 on your remote server, but I've never done it and that's just me speculating.