Armitage 'Connection refused' error in new install of Kali Linux after full upgrade - postgresql

I installed Kali Linux via VMware and did a full system upgrade:
apt-get update
apt-get upgrade
apt-get full-upgrade
As part of the upgrade postgresql upgraded from v11 to v12. I followed the instructions to finish this part of the upgrade:
pg_dropcluster 12 main --stop
pg_upgradecluster 11 main
pg_dropcluster 11 main
I start postgresql, initialize metasploit, and start Armitage:
/etc/init.d/postgresql start
msfdb init
armitage
The only console output appears unrelated:
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on
-Dswing.aatext=true
I do get the popup box with the connection information. I found that I get an "Unexpected end of file from server" error if I use 'localhost' as the host, so - per their instructions - I changed it to the external IP (in this case 192.168.9.134). I checked metasploit-framework/config/database.yml for the port and login credentials.
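For reference, a database.yml generated by msfdb typically looks something like this (the values below are placeholders, not my actual credentials):
production:
  adapter: postgresql
  database: msf
  username: msf
  password: <generated-password>
  host: localhost
  port: 5432
  pool: 200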
After clicking 'Connect' with this information I get a connection window stating:
Connecting to 192.168.9.134:5432 Connection refused (Connection
refused)
There's also a progress bar that will eventually fill up completely (unless I click 'Cancel'), after which nothing happens. Since I run the command from the terminal I can see that the process is still running (I don't get my prompt back), but the window disappears and Armitage doesn't actually start. The log file, as verified by pg_lsclusters (/var/log/postgresql/postgresql-12-main.log), is actually empty.
The link I mentioned before suggests that the problem could either be not enough RAM (I set the VM to have 4 GB, and free -m shows the following):
              total        used        free      shared  buff/cache   available
Mem:           3964         803        2677          29         483        2787
Swap:          4093           0        4093
Or that the Metasploit RPC daemon never started (that window does come up the first time, but not subsequent times). I verified that it's running via msfdb status:
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
   Active: active (exited) since Fri 2020-02-07 16:06:52 EST; 19min ago
  Process: 1753 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1753 (code=exited, status=0/SUCCESS)

Feb 07 16:06:52 kali systemd[1]: Starting PostgreSQL RDBMS...
Feb 07 16:06:52 kali systemd[1]: Started PostgreSQL RDBMS.

COMMAND   PID     USER FD  TYPE DEVICE SIZE/OFF NODE NAME
postgres 1735 postgres 3u  IPv6  32516      0t0  TCP localhost:5432 (LISTEN)
postgres 1735 postgres 4u  IPv4  32517      0t0  TCP localhost:5432 (LISTEN)

UID       PID PPID C STIME TTY STAT TIME CMD
postgres 1735    1 0 16:06 ?   Ss   0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main -c config_file=/etc/postgresql/12/main/postgresql.conf

[+] Detected configuration file (/usr/share/metasploit-framework/config/database.yml)
Also, regular Metasploit (msfconsole) appears to work fine and loads without error (not sure if there's any output that would be helpful here). I don't use PostgreSQL directly, so I haven't touched its configuration, nor do I have any other applications (that I'm aware of) that use it, so it should be a pretty clean setup (not to mention this is a fresh install of Kali Linux). I'm out of ideas for what to check next, and an online search didn't turn up anything that matches this problem well. Any thoughts?

Armitage has been deprecated for some time now; it has not been updated since 2015 and is (to some extent) incompatible with current versions of Metasploit.
Although this may not fix your problem, I suggest not using software this far out of date.

Related

How can I upgrade PostgreSQL from version 11 to version 13?

I'm trying to upgrade PostgreSQL from 11 to 13 on a Debian system, but it fails. I have a single cluster that needs to be upgraded:
$ sudo -u postgres pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
11 main 5432 online postgres /var/lib/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
Here's what I've tried to upgrade it:
$ sudo -u postgres pg_upgradecluster 11 main
Stopping old cluster...
Warning: stopping the cluster using pg_ctlcluster will mark the systemd unit as failed. Consider using systemctl:
sudo systemctl stop postgresql@11-main
Restarting old cluster with restricted connections...
Notice: extra pg_ctl/postgres options given, bypassing systemctl for start operation
Error: cluster configuration already exists
Error: Could not create target cluster
After this, the system is left in an unusable state:
$ sudo systemctl status postgresql@11-main.service
● postgresql@11-main.service - PostgreSQL Cluster 11-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; enabled-runtime; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2022-06-14 06:48:20 CEST; 19s ago
Process: 597 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 11-main start (code=exited, status=0/SUCCE>
Process: 4508 ExecStop=/usr/bin/pg_ctlcluster --skip-systemctl-redirect -m fast 11-main stop (code=exited, status=>
Main PID: 684 (code=exited, status=0/SUCCESS)
CPU: 1.862s
Jun 14 06:47:23 argos systemd[1]: Starting PostgreSQL Cluster 11-main...
Jun 14 06:47:27 argos systemd[1]: Started PostgreSQL Cluster 11-main.
Jun 14 06:48:20 argos postgresql@11-main[4508]: Cluster is not running.
Jun 14 06:48:20 argos systemd[1]: postgresql@11-main.service: Control process exited, code=exited, status=2/INVALIDARG>
Jun 14 06:48:20 argos systemd[1]: postgresql@11-main.service: Failed with result 'exit-code'.
Jun 14 06:48:20 argos systemd[1]: postgresql@11-main.service: Consumed 1.862s CPU time.
$ sudo systemctl start postgresql@11-main.service
Job for postgresql@11-main.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status postgresql@11-main.service" and "journalctl -xe" for details.
Luckily, rebooting the system brought the old cluster back online, but nothing has been upgraded. Why does the upgrade fail? What are "the steps required by its unit configuration"? How can I upgrade PostgreSQL with minimal downtime?
I found the source of my problem: a configuration file owned by the wrong user (root instead of postgres) that could not be removed by the pg_dropcluster command because I ran it as the user postgres.
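If you hit the same "Error: cluster configuration already exists", it may be worth looking for such leftovers before retrying; a rough sketch, assuming the new cluster's configuration lives under /etc/postgresql/13/main:
sudo find /etc/postgresql -not -user postgres -ls         # list config files not owned by postgres
sudo chown -R postgres:postgres /etc/postgresql/13/main   # hand them back so pg_dropcluster (run as postgres) can remove them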
For future reference, here are the correct steps to upgrade a PostgreSQL cluster from 11 to 13:
Verify that the current cluster is still the old version:
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
11 main 5432 online postgres /var/lib/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
13 main 5434 down postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log
Run pg_dropcluster 13 main as user postgres:
$ sudo -u postgres pg_dropcluster 13 main
Warning: systemd was not informed about the removed cluster yet.
Operations like "service postgresql start" might fail. To fix, run:
sudo systemctl daemon-reload
Run the pg_upgradecluster command as user postgres:
$ sudo -u postgres pg_upgradecluster 11 main
Verify that everything works, and that the only online cluster is now 13:
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
11 main 5434 down postgres /var/lib/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
13 main 5432 online postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log
Drop the old cluster:
$ sudo -u postgres pg_dropcluster 11 main
Uninstall the previous version of PostgreSQL:
$ sudo apt remove 'postgresql*11'
The Debian packages create a cluster automatically when you install the server package, so get rid of that:
pg_dropcluster 13 main
Then stop the v11 server and try again.
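With the default Debian unit names, that would be roughly:
sudo systemctl stop postgresql@11-main       # stop the old v11 server
sudo -u postgres pg_upgradecluster 11 main   # try the upgrade again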

could not bind IPv4 socket: Permission denied

I am trying to set up a new instance of PostgreSQL 9.6 on a machine. I have tested the same process on another machine and it works fine there, but it is not working on the new machine. Below are the steps I am using.
Created a new data directory with the command below:
/opt/rh/rh-postgresql96/root/bin/initdb -D /var/lib/pgsql/9.6/data/
Created a service file /etc/systemd/system/rh-postgresql96-inst2.service with the content below:
.include /lib/systemd/system/rh-postgresql96-postgresql.service
[Service]
Environment=PGDATA=/var/lib/pgsql/9.6/data/
Environment=PGPORT=5433
User=postgres
Group=root
Registered the service using systemctl enable rh-postgresql96-inst2.
Started the service with systemctl start rh-postgresql96-inst2.
All these steps work fine on one machine but not on the second one.
I get the error below when starting the service on the second machine:
rh-postgresql96-inst2.service - PostgreSQL database server
Loaded: loaded (/etc/systemd/system/rh-postgresql96-inst2.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-06-18 09:59:01 UTC; 10s ago
Process: 7552 ExecStart=/opt/rh/rh-postgresql96/root/usr/libexec/postgresql-ctl start -D ${PGDATA} -s -w -t ${PGSTARTTIMEOUT} (code=exited, status=1/FAILURE)
Process: 7550 ExecStartPre=/opt/rh/rh-postgresql96/root/usr/libexec/postgresql-check-db-dir %N (code=exited, status=0/SUCCESS)
HINT: Is another postmaster already running on port 5433? If not, wait a few seconds and retry.
LOG: could not bind IPv4 socket: Permission denied
HINT: Is another postmaster already running on port 5433? If not, wait a few seconds and retry.
WARNING: could not create listen socket for "localhost"
FATAL: could not create any TCP/IP sockets
LOG: database system is shut down
systemd[1]: rh-postgresql96-inst2.service: control process exited, code=exited status=1
systemd[1]: Failed to start PostgreSQL database server.
systemd[1]: Unit rh-postgresql96-inst2.service entered failed state.
systemd[1]: rh-postgresql96-inst2.service failed.
However, I am able to start the service using pg_ctl.
Also, I have checked with netstat and lsof whether any other PostgreSQL instance is running on port 5433, and that is not the case.
In fact, I also tried ports 5431 and 5434, but the server still does not start.
Instead of turning off SELinux you should allow Postgres to bind to port 5433 in SELinux.
There is a port type postgresql_port_t which by default includes ports 5432 and 9898.
semanage port -l | grep post
postgresql_port_t tcp 5432, 9898
What you could do is simply add port 5433 to this list.
semanage port -a -t postgresql_port_t 5433 -p tcp
semanage port -l | grep post
postgresql_port_t tcp 5433, 5432, 9898
After that you can start your postgres server listening on port 5433
systemctl enable rh-postgresql96-postgresql
systemctl start rh-postgresql96-postgresql
netstat -tulpn
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2847/postgres
tcp 0 0 127.0.0.1:5433 0.0.0.0:* LISTEN 2775/postgres
There is also a handy tool called audit2allow to help debug SELinux problems.
audit2allow -m whatiswrong < /var/log/audit/audit.log > /root/showme.te
The file showme.te shows you why SELinux is not allowing the service to do what you need.
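If the generated .te file confirms the denial, another option (a sketch, assuming you would rather load a local policy module than change the port mapping above) is to have audit2allow compile and install it:
audit2allow -M whatiswrong < /var/log/audit/audit.log   # writes whatiswrong.te and whatiswrong.pp
semodule -i whatiswrong.pp                              # load the compiled module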
You should not turn off SELinux just because it's hard to understand or if you don't know how it works. Instead you should study it :)
I recommend this talk from the Red Hat Summit: https://www.redhat.com/en/about/videos/summit-2018-security-enhanced-linux-mere-mortals
This issue was related to SELinux.
When I ran sestatus on both machines, the output was slightly different: one server had Current mode: permissive and the second one had Current mode: enforcing.
So I changed the current mode to permissive on the second machine with setenforce 0, and that resolved the permission issue. Now I am able to start the second instance.
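Note that setenforce 0 only lasts until the next reboot; to keep permissive mode permanently you would set the following in /etc/selinux/config (though the semanage port change above is the cleaner fix):
SELINUX=permissive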

PG::ConnectionBad Postgres Cluster down

DigitalOcean disabled my droplet's internet access. After I fixed the error (a rollback to an older backup) they restored the internet access, but afterwards I constantly get an error when deploying, and I can't seem to get my Postgres database up and running.
I'm getting an error each time I try to deploy my application.
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
So I used SSH to login to my server and check if my Postgres was actually running with:
pg_lsclusters
Results into:
Ver Cluster Port Status Owner Data directory Log file
9.5 main 5432 down postgres /var/lib/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
Postgres server status
So my Postgres server seems to be down. I tried putting it 'up' again with:
pg_ctlcluster 9.5 main start
After doing so I got the error: Insecure directory in $ENV{PATH} while running with -T switch at /usr/bin/pg_ctlcluster line 403.
And /usr/bin/pg_ctlcluster on line 403 says:
system 'systemctl', 'is-active', '-q', "postgresql\@$version-$cluster";
But I'm not to sure what the problem could be here and how I could fix this.
Update
I also tried updating the permissions on /bin to 755 as mentioned here. Sadly that did not fix my problem.
Update 2
I changed the /usr/bin to 755. Now when I try pg_ctlcluster 9.5 main start, I get this:
Job for postgresql@9.5-main.service failed because the control process exited with error code. See "systemctl status postgresql@9.5-main.service" and "journalctl -xe" for details.
And inside the systemctl status postgresql@9.5-main.service:
postgresql@9.5-main.service - PostgreSQL Cluster 9.5-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-01-28 17:32:38 EST; 45s ago
Process: 22473 ExecStart=postgresql@%i --skip-systemctl-redirect %i start (code=exited, status=1/FAILURE)
Jan 28 17:32:08 *url* systemd[1]: Starting PostgreSQL Cluster 9.5-main...
Jan 28 17:32:38 *url* postgresql@9.5-main[22473]: The PostgreSQL server failed to start.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Control process exited, code=exited status=1
Jan 28 17:32:38 *url* systemd[1]: Failed to start PostgreSQL Cluster 9.5-main.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Unit entered failed state.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Failed with result 'exit-code'.
Thanks!
You'd better not mix systemctl and pg_ctlcluster; let systemctl make the calls to pg_ctlcluster with the right user and permissions. You should start your PostgreSQL instance with:
sudo systemctl start postgresql@9.5-main.service
Also, check the errors in the startup log. You can post them too, to help figure out what's going on.
Your systemctl status output also shows that the service is disabled, so when the server reboots you will have to start the service manually. To enable it, run:
sudo systemctl enable postgresql@9.5-main.service
I hope this helps.
In my case it was because the /etc/hosts file had somehow been changed. I removed the extra spaces inside the /etc/hosts file; use cat /etc/hosts to check it.
Add these lines to the file:
127.0.0.1 localhost
127.0.1.1 your-host-name
::1 ip6-localhost ip6-loopback
I also gave the /etc/hosts file permission 644. It keeps working for me even after a reboot of the system.
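For reference, setting that permission is just:
sudo chmod 644 /etc/hosts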

Can't access Postgres, getting an error about “/var/run/postgresql/.s.PGSQL.5432”

I was running a Django application with Postgres as the backend database. It was working fine, but all of a sudden today I saw that my database connection is being refused on the production server. So I logged into my server and tried:
psql
and then it was showing this error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The file listed above does not seem to exist.
I checked if my Postgres is running or not with:
/etc/init.d/postgresql status
and it returned a SUCCESS message:
postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2017-11-16 10:03:41 UTC; 55min ago
Process: 4701 ExecReload=/bin/true (code=exited, status=0/SUCCESS)
Process: 4899 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 4899 (code=exited, status=0/SUCCESS)
Nov 16 10:03:41 median systemd[1]: Starting PostgreSQL RDBMS...
Nov 16 10:03:41 median systemd[1]: Started PostgreSQL RDBMS.
Still, I stopped and started postgres again.
I checked whether any other Postgres task was running with:
ps -ef | grep postgres
And it returned:
root 5210 5117 0 10:21 pts/0 00:00:00 grep --color=auto postgres
So there is no other Postgres.
I even checked whether my Postgres had accidentally been deleted with:
dpkg -l | grep postgres
and it returned
ii postgresql 9.5+173ubuntu0.1 all object-relational SQL database (supported version)
ii postgresql-9.5 9.5.10-0ubuntu0.16.04 amd64 object-relational SQL database, version 9.5 server
ii postgresql-client-9.5 9.5.10-0ubuntu0.16.04 amd64 front-end programs for PostgreSQL 9.5
ii postgresql-client-common 173ubuntu0.1 all manager for multiple PostgreSQL client versions
ii postgresql-common 173ubuntu0.1 all PostgreSQL database-cluster manager
ii postgresql-contrib-9.5 9.5.10-0ubuntu0.16.04 amd64 additional facilities for PostgreSQL
So there is no rc instead of ii, which means it has not been uninstalled.
I have tried every other solution on the internet short of uninstalling Postgres; none of them worked for me. I might be doing something very silly, but I don't want to uninstall and lose my data.
Run the following command to check which port your PostgreSQL server is actually running on:
[root@server ~]# netstat -a | grep .s.PGSQL
unix 2 [ ACC ] STREAM LISTENING 2534417 /var/run/postgresql/.s.PGSQL.5432
I can see from this output that my server is listening on port 5432 and also that the file is created in /var/run/postgresql/.s.PGSQL.5432
You can then update /etc/init.d/postgresql to include the lines
PGPORT=5432
export PGPORT
and update your postgresql.conf file, which is located in your data directory, to set the port key/value to port=5432,
and then restart your PostgreSQL server:
[root@server] service postgresql restart
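For reference, the port setting mentioned above is a single line in postgresql.conf (its location varies by installation; 5432 is just the default):
port = 5432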

Failed to start puppetserver Service

While trying to run a Puppet update from a node:
sudo /opt/puppetlabs/bin/puppet agent -t
I get an error:
Error: Could not retrieve catalog; skipping run
Error: Could not send report: Connection refused - connect(2) for "puppet" port 8140
Other sources indicate this is likely a problem with the puppetserver service and suggest rebooting the server. Restarting didn't help, and when I try to restart the service I get a failure:
~$ sudo service puppetserver restart
Job for puppetserver.service failed because the control process exited with error code. See "systemctl status puppetserver.service" and "journalctl -xe" for details.
I've looked at these logs, and as a puppet/linux noob, I'm not sure what to do next.
systemctl status puppetserver.service
● puppetserver.service - puppetserver Service
Loaded: loaded (/lib/systemd/system/puppetserver.service; enabled; vendor preset: enabled)
Active: activating (start-post) since Fri 2016-09-02 15:54:26 PDT; 2s ago
Process: 22301 ExecStartPre=/usr/bin/install --directory --owner=puppet --group=puppet --mode=775 /var/run/puppetlabs/puppetserver (code=exited
Main PID: 22306 (java); Control PID: 22307 (bash)
Tasks: 17
Memory: 335.7M
CPU: 5.535s
CGroup: /system.slice/puppetserver.service
├─22306 /usr/bin/java -Xms6g -Xmx6g -XX:MaxPermSize=256m -XX:OnOutOfMemoryError=kill -9 %p -Djava.security.egd=/dev/urandom -cp /opt/p
└─control
├─22307 /bin/bash /opt/puppetlabs/server/apps/puppetserver/ezbake-functions.sh wait_for_app
└─22331 sleep 1
Sep 02 15:54:26 puppet systemd[1]: Starting puppetserver Service...
Sep 02 15:54:26 puppet java[22306]: OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
puppet version 4.6.1
The puppet master communicates with the other node using port number 8140.
I don't think a restart will help, since this looks like a connection issue between the server and the node.
Please try the following:
First, make sure that the puppet master is actually listening on port 8140. Run the following command on the puppetmaster:
netstat -ntlp | grep 8140
This command should return something like this:
tcp 0 0 0.0.0.0:8140 0.0.0.0:* LISTEN 1783/puppetmaster
If you don't get the same output, your puppetmaster is not listening, and therefore cannot compile catalogs for the node.
Try checking the puppet master log at /var/log/puppetmaster.log.
Check that the node can communicate with the puppetmaster on the relevant port. You can check this quickly with the telnet command. Run this on your node:
telnet <puppetmaster IP address / DNS name> 8140
You should get something like:
Connected to <puppet-master-IP/DNS-name>
Escape character is '^]'.
If you don't get this output, it means that something is blocking you from accessing the puppetmaster; try opening the port in your firewall.
If you're still stuck, try using the --debug flag for verbose output and edit your question.
It could be one of two things: (1) in puppet.conf you have configured more memory than you have on your machine, or (2) you installed both apt-get install puppetserver and apt-get install puppet.
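One quick way to check for case (2), assuming a Debian/Ubuntu system:
dpkg -l puppet puppetserver   # shows whether both packages are installed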
If you get a "Failed to start puppet.service: Unit not found" error on the slave machine while connecting to Puppet, close PuTTY, then open it again and reconnect. The issue won't come up when starting PuTTY on the slave again.
The error occurs because there is not enough RAM. To fix it, open the Puppet Server configuration file:
sudo nano /etc/sysconfig/puppetserver
And reduce the amount of allocated RAM for the Puppet server (for example, I specified 512m instead of 2g):
JAVA_ARGS="-Xms512m -Xmx512m"
Now let’s start the Puppet server:
sudo systemctl start puppetserver