Mac OS Sierra log show missing SSH source IP - macos-sierra

Previously, on OS X El Capitan, we could track all SSH logins, whether successful or failed. Since moving to macOS Sierra, it seems all the logs have moved to the unified logging system and can be viewed with log show, log stream, and syslog.
We can't find the source IP of an SSH session by looking at those logs, e.g.:
Jun 27 15:38:47 MAC sshd: administrator [priv][240] <Notice>: USER_PROCESS: 243 ttys000
Jun 27 15:39:34 MAC sshd: administrator [priv][249] <Notice>: USER_PROCESS: 257 ttys001
Jun 27 15:42:50 MAC sshd: administrator [priv][249] <Notice>: DEAD_PROCESS: 257 ttys001
Screen sharing logging works perfectly, just like before:
screensharingd: Authentication: SUCCEEDED :: User Name: administrator :: Viewer Address: 10.X.X.X :: Type: DH
We can, however, see sshd log entries when an attempt fails:
sshd: error: PAM: authentication error for administrator from 10.10.5.73
Any help will be greatly appreciated.
Thank you very much.

Try this command:
log stream --info --predicate 'processImagePath contains[c] "sshd"'
It will log the successful and failed attempts.
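If you need to search entries that have already been written rather than streaming live ones, the same predicate should also work with log show; the one-hour window below is only an example value:
log show --info --last 1h --predicate 'processImagePath contains[c] "sshd"'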

I've found out that SSH logs can be shown using this command:
log show --style JSON | grep "ssh"


Armitage 'Connection refused' error in new install of Kali Linux after full upgrade

I installed Kali Linux via VMware and did a full system upgrade:
apt-get update
apt-get upgrade
apt-get full-upgrade
As part of the upgrade, postgresql was upgraded from v11 to v12. I followed the instructions to finish this part of the upgrade:
pg_dropcluster 12 main --stop
pg_upgradecluster 11 main
pg_dropcluster 11 main
I start postgresql, initialize metasploit, and start Armitage:
/etc/init.d/postgresql start
msfdb init
armitage
The only console output appears unrelated:
Picked up _JAVA_OPTIONS: -Dawt.useSystemAAFontSettings=on -Dswing.aatext=true
I do get the popup box with the connection information. I found that I get the "Unexpected end of file from server" error if I use 'localhost' as the host, so, per their instructions, I changed it to the external IP (in this case 192.168.9.134). I checked metasploit-framework/config/database.yml for the port and login credentials.
After clicking 'Connect' with this information I get a connection window stating:
Connecting to 192.168.9.134:5432 Connection refused (Connection refused)
There's also a progress bar that over time fills up completely (unless I click 'Cancel'), after which nothing happens. Since I run the command from a terminal, I can see that the process is still running (I don't get my prompt back), but the window disappears and Armitage doesn't actually start. The log file reported by pg_lsclusters (/var/log/postgresql/postgresql-12-main.log) is actually empty.
The link I mentioned before suggests that the problem could be either not enough RAM (I set the VM to have 4 GB, and free -m shows):
total used free shared buff/cache available
Mem: 3964 803 2677 29 483 2787
Swap: 4093 0 4093
Or that the Metasploit RPC daemon never started (that window does come up the first time, but not subsequent times). I verified that it's running via msfdb status:
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
   Active: active (exited) since Fri 2020-02-07 16:06:52 EST; 19min ago
  Process: 1753 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 1753 (code=exited, status=0/SUCCESS)

Feb 07 16:06:52 kali systemd[1]: Starting PostgreSQL RDBMS...
Feb 07 16:06:52 kali systemd[1]: Started PostgreSQL RDBMS.

COMMAND    PID     USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
postgres  1735 postgres  3u  IPv6  32516      0t0  TCP localhost:5432 (LISTEN)
postgres  1735 postgres  4u  IPv4  32517      0t0  TCP localhost:5432 (LISTEN)

UID       PID PPID C STIME TTY STAT TIME CMD
postgres 1735    1 0 16:06 ?   Ss   0:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main -c config_file=/etc/postgresql/12/main/postgresql.conf

[+] Detected configuration file (/usr/share/metasploit-framework/config/database.yml)
Also, running regular Metasploit appears to work fine (msfconsole) and loads without error (not sure if there's any output that would be helpful here). I don't use postgresql directly, so I haven't messed with any configuration nor do I have any other applications (that I'm aware of) that use it, so it should be a pretty clean setup (not to mention this is a fresh install of Kali Linux). I'm out of ideas for what to check next. An online search didn't seem to match this problem well. Any thoughts?
Armitage has been deprecated for some time now, as it has not been updated since 2015, and is (to some extent) incompatible with current versions of metasploit.
Although this may not fix your problem, I suggest not using software this much out of date.
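If you want to confirm that the Metasploit database connection itself is healthy independently of Armitage (a quick check, not a fix), msfconsole's db_status command reports whether the framework is connected to postgres:
msfconsole -q -x "db_status; exit"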

PG::ConnectionBad Postgres Cluster down

DigitalOcean disabled my droplet's internet access. After I fixed the issue (a rollback to an older backup), they restored the internet access. But since then I constantly get an error when deploying, and I can't seem to get my Postgres database up and running.
I'm getting an error each time I try to deploy my application.
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
So I used SSH to log in to my server and checked whether Postgres was actually running with:
pg_lsclusters
Which results in:
Ver Cluster Port Status Owner Data directory Log file
9.5 main 5432 down postgres /var/lib/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
Postgres server status
So my Postgres server seems to be down. I tried to bring it up again with:
pg_ctlcluster 9.5 main start
After doing so I got this error:
Insecure directory in $ENV{PATH} while running with -T switch at /usr/bin/pg_ctlcluster line 403.
And /usr/bin/pg_ctlcluster on line 403 says:
system 'systemctl', 'is-active', '-q', "postgresql\@$version-$cluster";
But I'm not too sure what the problem could be here or how I could fix it.
Update
I also tried updating the permissions on /bin to 755 as mentioned here. Sadly that did not fix my problem.
Update 2
I changed /usr/bin to 755. Now when I try pg_ctlcluster 9.5 main start, I get this:
Job for postgresql@9.5-main.service failed because the control process exited with error code. See "systemctl status postgresql@9.5-main.service" and "journalctl -xe" for details.
And inside the systemctl status postgresql@9.5-main.service:
postgresql@9.5-main.service - PostgreSQL Cluster 9.5-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-01-28 17:32:38 EST; 45s ago
Process: 22473 ExecStart=postgresql@%i --skip-systemctl-redirect %i start (code=exited, status=1/FAILURE)
Jan 28 17:32:08 *url* systemd[1]: Starting PostgreSQL Cluster 9.5-main...
Jan 28 17:32:38 *url* postgresql@9.5-main[22473]: The PostgreSQL server failed to start.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Control process exited, code=exited status=1
Jan 28 17:32:38 *url* systemd[1]: Failed to start PostgreSQL Cluster 9.5-main.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Unit entered failed state.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Failed with result 'exit-code'.
Thanks!
You had better not mix systemctl and pg_ctlcluster. Let systemctl make the calls to pg_ctlcluster with the right user and permissions. You should start your postgresql instance with:
sudo systemctl start postgresql@9.5-main.service
Also, check the errors in the startup log. You can post them here too, to help figure out what's going on.
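For example, assuming the unit name above and that the output ends up in systemd's journal, the recent startup errors can be pulled with journalctl, or read from the cluster log file reported by pg_lsclusters:
sudo journalctl -u postgresql@9.5-main.service -n 50 --no-pager
sudo tail -n 50 /var/log/postgresql/postgresql-9.5-main.log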
Your systemctl status output also shows that the service is disabled, so when the server reboots you will have to start the service manually. To enable it, run:
sudo systemctl enable postgresql@9.5-main.service
I hope this helps.
In my case it was because the /etc/hosts file had somehow been changed. I removed an extra space inside /etc/hosts. Check it with cat /etc/hosts and add these lines to the file:
127.0.0.1 localhost
127.0.1.1 your-host-name
::1 ip6-localhost ip6-loopback
I also gave the /etc/hosts file permission 644. It keeps working for me even after a reboot of the system.
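For reference, assuming the file is owned by root, setting and verifying that permission is just:
chmod 644 /etc/hosts
ls -l /etc/hosts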

Perl exit code 17920 on sendmail

I have a problem with my software on my server with Plesk.
I send an email from a Perl script, but the system returns the following error:
error closing /usr/lib/sendmail: (exit 17920)
Why do I receive this error?
In the maillog I see these errors:
Jun 29 11:23:57 ip-xxx-xx-xx-xxx journal: plesk sendmail[7708]: handlers_stderr: PASS
Jun 29 11:23:57 ip-xxx-xx-xx-xxx journal: plesk sendmail[7708]: PASS during call 'limit-out' handler
Jun 29 11:23:57 ip-xxx-xx-xx-xxx journal: plesk sendmail[7708]: Unable to rename '/usr/local/psa/handlers/spool/messageEFbeQO' file: Permission denied
Jun 29 11:23:57 ip-xxx-xx-xx-xxx journal: plesk sendmail[7708]: System error (/usr/local/psa/handlers/spool/messageEFbeQO): No such file or directory
Jun 29 11:23:57 ip-xxx-xx-xx-xxx journal: plesk sendmail[7710]: Unable to open temporary file `/usr/local/psa/handlers/spool/messageEFbeQO' (2): No such file or directory
I have no idea where the problem is. I hope I have provided all the relevant information.
This might be caused by SELinux (if it's enabled on your server). You can check that by issuing the command below:
sestatus
If the output says permissive or disabled, then SELinux is not the issue.
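If it says enforcing, one quick (and deliberately temporary) test is to switch SELinux to permissive mode, retry the script, and then switch back:
setenforce 0
setenforce 1
If the mail goes through while in permissive mode, the proper fix is an SELinux policy or file-context adjustment rather than leaving enforcement off.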
Also, this link might help you fix the mail problem on your Plesk server:
https://support.plesk.com/hc/en-us/articles/213947085-Mail-server-does-not-work-How-to-repair-the-mail-server-configuration
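As a side note on the number itself (assuming the 17920 is the raw wait status that Perl stores in $? when closing a piped filehandle): the child's real exit code is that status shifted right by 8 bits, i.e. 17920 >> 8 = 70, which corresponds to EX_SOFTWARE (internal software error) in sysexits.h. You can check the arithmetic in a shell:
echo $((17920 >> 8))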

centos 7 crond expired password

I am a newbie to CentOS. Whenever I try to restart the Puppet services (pe-puppetdb, pe-puppetserver, etc.) I get the following errors:
Jun 23 04:03:01 abc.xyz.com crond[12117]: pam_unix(crond:account): expired password for user root (root enforced)
Jun 23 04:03:01 abc.xyz.com crond[12117]: (root) PAM ERROR (Authentication token is no longer valid; new one required)
Jun 23 04:03:01 abc.xyz.com crond[12117]: (root) FAILED to authorize user with PAM (Authentication token is no longer valid; new one required)
Following are the entries in /etc/pam.d/crond:
account required pam_access.so
account include password-auth
session required pam_loginuid.so
session include password-auth
auth include password-auth
I assume there are two things that need to be done here:
Reset the password for crond user (by using passwd command)
Make sure that the password never expires
I found one solution here: https://www.centos.org/forums/viewtopic.php?t=17634 but since the post is 6 years old, I am wondering whether there is another way the issue can be resolved.
Please advise.
Edit - I even tried changing the password for the crond user but got the following error:
[root@abc ~]# chage -l crond
chage: user 'crond' does not exist in /etc/passwd
[root@abc ~]# chage -M 99999 -m 99999 crond
chage: user 'crond' does not exist in /etc/passwd
Edit2 - I added the following line to /etc/pam.d/crond and started the puppetdb service:
account sufficient pam_succeed_if.so uid = 0
Still the service did not start, and I got the following output (journalctl -xe):
-- Unit session-11.scope has begun starting up.
Jun 23 10:28:01 abc.xyz.com CROND[30598]: (root) CMD (/var/awslogs/bin/awslogs-nanny.sh > /dev/null 2>&1)
Jun 23 10:28:02 abc.xyz.com systemd[1]: Removed slice user-0.slice.
-- Subject: Unit user-0.slice has finished shutting down
-- Defined-By: systemd
--
-- Unit user-0.slice has finished shutting down.
Jun 23 10:28:02 abc.xyz.com systemd[1]: Stopping user-0.slice.
-- Subject: Unit user-0.slice has begun shutting down
-- Defined-By: systemd
--
-- Unit user-0.slice has begun shutting down.
Jun 23 10:28:05 abc.xyz.com amazon-ssm-agent[845]: 2017-06-23 10:28:05 ERROR [instanceID=i-0a9865085e27f6862] [MessageProcessor] [Association] error when calling AWS APIs. error details - AccessDeniedException: User: arn:aws:sts::045981373300:assumed-role/ServerLabServer/i-0a9865085e27f6862 is not authorized to perform: ssm:ListInstanceAssociations on resource: arn:aws:ec2:ap-southeast-1:045981373300:instance/i-0a9865085e27f6862
The problem is well described in the initial error: the password has expired for the user root, which crond uses.
Check the status of the password with sudo chage -l root. If the password is expired, use sudo passwd to change it. You can also change the expiration settings with sudo chage root.
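To cover the second point in the question (making sure the password never expires), one option, which you should weigh against your security policy, is to remove the maximum password age for root and then verify the result:
sudo chage -M -1 root
sudo chage -l root
Passing -1 as the maximum number of days disables the expiration check.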

Where are the Kubernetes kubelet logs located?

I installed Kubernetes on my Ubuntu machine. For some debugging purposes I need to look at the kubelet log file (if there is any such file).
I have looked in /var/logs but I couldn't find such a file. Where could it be?
If you run kubelet using systemd, then you could use the following method to see kubelet's logs:
# journalctl -u kubelet
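A couple of common variations on that command (standard journalctl options) are following the log live or restricting it to a time window:
# journalctl -u kubelet -f
# journalctl -u kubelet --since "1 hour ago"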
If you want to go directly to the file, you can find kubelet log messages in /var/log/syslog. This applies to Ubuntu 16.04 and above.
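For example, assuming the default rsyslog setup on Ubuntu, you can pull just the kubelet entries out of that file with:
grep kubelet /var/log/syslog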
It depends how it was installed. I installed Kubernetes on some Ubuntu machines following the Docker-MultiNode instructions.
With this install, I find the logs using the logs command like this.
Find your container ID.
$ docker ps | egrep kubelet
Use that container ID to view the logs
$ docker logs <container-id>
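Useful variations, using standard docker logs flags, are limiting the output and following it live:
$ docker logs --tail 100 -f <container-id>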
Finally, I found it in the /var/log/upstart directory. Kubernetes on my machine is started using upstart, which is why those log files are in the upstart directory.
I installed Kubernetes by kind (Kubernetes in docker).
Find the kind Docker container and enter it:
$ docker container ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62588e4d284b kindest/node:v1.17.0 "/usr/local/bin/entr…" 2 weeks ago Up 2 weeks 127.0.0.1:32769->6443/tcp kind2-control-plane
$ docker container exec -it kind2-control-plane bash
root@kind2-control-plane:/#
Inside the container kind2-control-plane, you can find log files in two places:
/var/log/containers/
/var/log/pods/
You will then find that they are the same, as you can see in the example below:
root@kind2-control-plane:/# cat /var/log/containers/redis-master-7db7f6579f-scw95_default_master-f6374281c2c6afcfcd0ee1214d9bd51c1684c0b6c0ba1056295246ecd055563c.log | tail -n 5
2020-04-08T12:09:29.824252114Z stdout F
2020-04-08T12:09:29.824372278Z stdout F [1] 08 Apr 12:09:29.822 # Server started, Redis version 2.8.19
2020-04-08T12:09:29.824440661Z stdout F [1] 08 Apr 12:09:29.823 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2020-04-08T12:09:29.824459317Z stdout F [1] 08 Apr 12:09:29.823 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2020-04-08T12:09:29.82446451Z stdout F [1] 08 Apr 12:09:29.824 * The server is now ready to accept connections on port 6379
root@kind2-control-plane:/# cat /var/log/pods/default_redis-master-7db7f6579f-scw95_094824e1-25aa-4e1e-ab23-d4bae861988a/master/0.log | tail -n 5
2020-04-08T12:09:29.824252114Z stdout F
2020-04-08T12:09:29.824372278Z stdout F [1] 08 Apr 12:09:29.822 # Server started, Redis version 2.8.19
2020-04-08T12:09:29.824440661Z stdout F [1] 08 Apr 12:09:29.823 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2020-04-08T12:09:29.824459317Z stdout F [1] 08 Apr 12:09:29.823 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2020-04-08T12:09:29.82446451Z stdout F [1] 08 Apr 12:09:29.824 * The server is now ready to accept connections on port 6379
root@kind2-control-plane:/# ls -l /var/log/containers/ | grep redis
lrwxrwxrwx 1 root root 101 Apr 8 12:09 redis-master-7db7f6579f-scw95_default_master-f6374281c2c6afcfcd0ee1214d9bd51c1684c0b6c0ba1056295246ecd055563c.log -> /var/log/pods/default_redis-master-7db7f6579f-scw95_094824e1-25aa-4e1e-ab23-d4bae861988a/master/0.log
If you want to know more about these directories in detail, you can see the 2019-2-merge-request on GitHub.