CentOS 7 crond expired password

I am a CentOS newbie. Whenever I try to restart the Puppet services (pe-puppetdb, pe-puppetserver, etc.), I get the following errors:
Jun 23 04:03:01 abc.xyz.com crond[12117]: pam_unix(crond:account): expired password for user root (root enforced)
Jun 23 04:03:01 abc.xyz.com crond[12117]: (root) PAM ERROR (Authentication token is no longer valid; new one required)
Jun 23 04:03:01 abc.xyz.com crond[12117]: (root) FAILED to authorize user with PAM (Authentication token is no longer valid; new one required)
Following are the entries in /etc/pam.d/crond:
account required pam_access.so
account include password-auth
session required pam_loginuid.so
session include password-auth
auth include password-auth
I assume there are two things that need to be done here:
Reset the password for the crond user (using the passwd command)
Make sure that the password never expires
I found one solution at https://www.centos.org/forums/viewtopic.php?t=17634, but since that post is 6 years old, I am wondering whether there is a more current way to resolve the issue.
Please advise.
Edit - I even tried changing the password for the crond user but got the following error:
[root@abc ~]# chage -l crond
chage: user 'crond' does not exist in /etc/passwd
[root@abc ~]# chage -M 99999 -m 99999 crond
chage: user 'crond' does not exist in /etc/passwd
Edit 2 - Added the following line to /etc/pam.d/crond and started the puppetdb service:
account sufficient pam_succeed_if.so uid = 0
The service still did not start, and I got the following error (journalctl -xe):
-- Unit session-11.scope has begun starting up.
Jun 23 10:28:01 abc.xyz.com CROND[30598]: (root) CMD (/var/awslogs/bin/awslogs-nanny.sh > /dev/null 2>&1)
Jun 23 10:28:02 abc.xyz.com systemd[1]: Removed slice user-0.slice.
-- Subject: Unit user-0.slice has finished shutting down
-- Defined-By: systemd
--
-- Unit user-0.slice has finished shutting down.
Jun 23 10:28:02 abc.xyz.com systemd[1]: Stopping user-0.slice.
-- Subject: Unit user-0.slice has begun shutting down
-- Defined-By: systemd
--
-- Unit user-0.slice has begun shutting down.
Jun 23 10:28:05 abc.xyz.com amazon-ssm-agent[845]: 2017-06-23 10:28:05 ERROR [instanceID=i-0a9865085e27f6862] [MessageProcessor] [Association] error when calling AWS APIs. error details - AccessDeniedException: User: arn:aws:sts::045981373300:assumed-role/ServerLabServer/i-0a9865085e27f6862 is not authorized to perform: ssm:ListInstanceAssociations on resource: arn:aws:ec2:ap-southeast-1:045981373300:instance/i-0a9865085e27f6862

The problem is well described in the initial error: the password has expired for the root user, which crond runs its jobs as.
Check the status of the password with sudo chage -l root. If the password has expired, use sudo passwd to change it. You can also change the expiration settings with sudo chage root.
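For example (a minimal sequence, assuming the stock shadow-utils chage; the -1 value disables the maximum-age check so the password never expires again):
# Show the current aging settings for root
sudo chage -l root
# Set a new password, which also resets the last-changed date
sudo passwd root
# Optionally turn off password aging for root entirely
sudo chage -M -1 root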

Related

Unable to start PostgreSQL 12 Server

I want to set up PostgreSQL 12 with PostGIS 3 on Ubuntu 20.04 for the purpose of creating an OSM tile server. I want two different clusters, one for a regular PostgreSQL database and another for OSM data. I can't seem to get the one for the OSM data up and running:
When I run pg_lsclusters, I get the following:
Ver Cluster Port Status Owner Data directory Log file
12 main 5433 online postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
12 osm_psql_db 5432 down postgres /var/lib/postgresql/12/2TB1/osm_psql_db /var/log/postgresql/postgresql-12-osm_psql_db.log
When I run journalctl -xe, I get the following:
Mar 13 11:47:37 cdil-MS-7B92 systemd[1]: Dependency failed for PostgreSQL Cluster 12-osm_psql_db.
-- Subject: A start job for unit postgresql@12-osm_psql_db.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit postgresql@12-osm_psql_db.service has finished with a failure.
--
-- The job identifier is 9566 and the job result is dependency.
Mar 13 11:47:37 cdil-MS-7B92 systemd[1]: postgresql@12-osm_psql_db.service: Job postgresql@12-osm_psql_db.service/start failed with result 'dependency'.
Mar 13 11:47:37 cdil-MS-7B92 systemd[1]: var-lib-postgresql-12-osm_psql_db.mount: Job var-lib-postgresql-12-osm_psql_db.mount/start failed with result 'dependency'.
Mar 13 11:47:37 cdil-MS-7B92 systemd[1]: dev-disk-by\x2dlabel-osm_psql_db.device: Job dev-disk-by\x2dlabel-osm_psql_db.device/start failed with result 'timeout'.
Mar 13 11:47:43 cdil-MS-7B92 PackageKit[27900]: daemon quit
Mar 13 11:47:43 cdil-MS-7B92 systemd[1]: packagekit.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit packagekit.service has successfully entered the 'dead' state.
Any idea what could be holding me up?
*** EXTRA INFO JUST IN CASE ***
In terms of how I set up everything, I installed the following packages:
sudo apt install postgresql-12 postgresql-contrib postgis postgresql-12-postgis-3
Because the OSM data is quite large, I want to store that particular cluster on another hard disk. It's called "2TB1" and it's been mounted to /var/lib/postgresql/12/2TB1 because I realized that the postgres user needed access to the data_directory folder and all parent folders leading up to it.
To do so I modified the permissions of the new hard drive:
sudo chown -R postgres:postgres /var/lib/postgresql/12/2TB1
Next, I created the new db cluster instance:
sudo pg_createcluster 12 osm_psql_db -d /var/lib/postgresql/12/2TB1/osm_psql_db -p 5432
I start the new instance:
sudo pg_ctlcluster 12 osm_psql_db start
I get the following error:
A dependency job for postgresql@12-osm_psql_db.service failed. See 'journalctl -xe' for details.
For anyone who stumbles upon the same issue... I tracked the problem down to the *.service file referencing the wrong mount point for the database cluster location. Here's what I did:
Enable the new service (not sure if this is needed, but what the heck...)
sudo systemctl enable postgresql@12-osm_psql_db
Edit the postgresql@12-osm_psql_db.service:
sudo systemctl edit --full postgresql@12-osm_psql_db.service
Change
RequiresMountsFor=/etc/postgresql/%I /var/lib/postgresql/%I
To
RequiresMountsFor=/etc/postgresql/%I /var/lib/postgresql/12/2TB1/osm_psql_db
As part of the service template, %I expands to VERSION/CLUSTER, which in my case would have been 12/osm_psql_db. Since I chose to place the DB on another SSD, and a database can't reside in the root directory of a disk, the mount location in the *.service file needed to be updated to 12/2TB1/osm_psql_db. This would not be necessary if you were storing all your databases on a single hard disk.
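To double-check the fix, systemd can report which mounts the unit now waits for (a quick sketch; the unit name and path match this question's cluster):
# Confirm the unit's mount dependencies picked up the new path
systemctl show -p RequiresMountsFor postgresql@12-osm_psql_db.service
# Verify the disk is actually mounted where the cluster expects it
findmnt /var/lib/postgresql/12/2TB1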

PG::ConnectionBad Postgres Cluster down

DigitalOcean disabled my droplet's internet access. After I fixed the error (rolled back to an older backup), they restored the internet access. But afterwards I constantly get an error when deploying, and I can't get my Postgres database up and running.
I'm getting an error each time I try to deploy my application.
PG::ConnectionBad: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
So I used SSH to log in to my server and check whether Postgres was actually running with:
pg_lsclusters
Results into:
Ver Cluster Port Status Owner Data directory Log file
9.5 main 5432 down postgres /var/lib/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
So my Postgres server seems to be down. I tried bringing it up again with:
pg_ctlcluster 9.5 main start
After doing so I got the error:
Insecure directory in $ENV{PATH} while running with -T switch at /usr/bin/pg_ctlcluster line 403.
And /usr/bin/pg_ctlcluster on line 403 says:
system 'systemctl', 'is-active', '-q', "postgresql\@$version-$cluster";
But I'm not to sure what the problem could be here and how I could fix this.
Update
I also tried updating the permissions on /bin to 755 as mentioned here. Sadly that did not fix my problem.
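(For context: the "Insecure directory in $ENV{PATH}" message comes from Perl's taint mode, which refuses to run while a directory on PATH is group- or world-writable. A quick way to spot the offender, assuming a fairly standard PATH:)
# Any world- or group-writable directory listed here will trip Perl's taint check
ls -ld /bin /sbin /usr/bin /usr/sbin /usr/local/bin /usr/local/sbin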
Update 2
I changed /usr/bin to 755. Now when I try pg_ctlcluster 9.5 main start, I get this:
Job for postgresql@9.5-main.service failed because the control process exited with error code. See "systemctl status postgresql@9.5-main.service" and "journalctl -xe" for details.
And inside the systemctl status postgresql@9.5-main.service:
postgresql@9.5-main.service - PostgreSQL Cluster 9.5-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-01-28 17:32:38 EST; 45s ago
Process: 22473 ExecStart=postgresql@%i --skip-systemctl-redirect %i start (code=exited, status=1/FAILURE)
Jan 28 17:32:08 *url* systemd[1]: Starting PostgreSQL Cluster 9.5-main...
Jan 28 17:32:38 *url* postgresql@9.5-main[22473]: The PostgreSQL server failed to start.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Control process exited, code=exited status=1
Jan 28 17:32:38 *url* systemd[1]: Failed to start PostgreSQL Cluster 9.5-main.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Unit entered failed state.
Jan 28 17:32:38 *url* systemd[1]: postgresql@9.5-main.service: Failed with result 'exit-code'.
Thanks!
Better not to mix systemctl and pg_ctlcluster. Let systemctl make the calls to pg_ctlcluster with the right user and permissions. You should start your PostgreSQL instance with:
sudo systemctl start postgresql@9.5-main.service
Also, check the errors in the startup log. You can post them too, to help figure out what's going on.
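For example (assuming the standard Debian/Ubuntu log location, which matches the path pg_lsclusters printed above):
# Show the most recent startup errors for the 9.5/main cluster
sudo tail -n 50 /var/log/postgresql/postgresql-9.5-main.log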
Your systemctl status output also shows that the service is disabled, so when the server reboots you will have to start the service manually. To enable it, run:
sudo systemctl enable postgresql@9.5-main.service
I hope it helps.
In my case it was because the /etc/hosts file had somehow been changed. I removed an extra space inside /etc/hosts. Use cat /etc/hosts to check yours.
Add these lines into the file
127.0.0.1 localhost
127.0.1.1 your-host-name
::1 ip6-localhost ip6-loopback
I also gave the /etc/hosts file permission 644. It keeps working for me even after a system reboot.
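For completeness, a sketch of that permission fix:
# /etc/hosts should be owned by root and world-readable, but writable only by root
sudo chown root:root /etc/hosts
sudo chmod 644 /etc/hosts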

HAProxy not running stats socket

I installed haproxy from the AUR on Arch Linux and modified the config file a bit:
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    stats socket /run/haproxy/haproxy.sock mode 660 level admin
    stats timeout 30s
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

defaults
    mode http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics

frontend www-http
    bind 127.0.0.1:80
    default_backend www-backend

backend www-backend
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    timeout queue 30s
    server app1 127.0.0.1:5001 check
    server app2 127.0.0.1:5002 check
I have made sure that the directory /run/haproxy exists and has permissions for the user haproxy to write to it:
ツ ls -al /run/haproxy
total 0
drwxr-xr-x 2 haproxy root 40 May 13 21:37 .
drwxr-xr-x 27 root root 720 May 13 22:00 ..
When I launch haproxy using systemctl start haproxy.service, it loads fine. I can even go to the /stats page and view stats; however, socat reports the following error:
ツ sudo socat unix-connect:/run/haproxy/haproxy.sock stdio
2016/05/13 22:04:11 socat[24202] E connect(5, AF=1 "/run/haproxy/haproxy.sock", 27): No such file or directory
I am at wits end and not able to understand what is happening. This is what I get from journalctl -xe:
May 13 21:56:31 rohanarch.local systemd[1]: Starting HAProxy Load Balancer...
May 13 21:56:31 rohanarch.local systemd[1]: Started HAProxy Load Balancer.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: haproxy-systemd-wrapper: executing /usr/bin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: [WARNING] 133/215631 (20456) : config : missing timeouts for frontend 'www-http'.
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | While not properly invalid, you will certainly encounter various problems
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | with such a configuration. To fix this, please ensure that all following
May 13 21:56:31 rohanarch.local haproxy-systemd-wrapper[20454]: | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
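(Aside: the timeout warning above is unrelated to the socket, but easy to silence; a minimal sketch, adding the three timeouts HAProxy asks for to the defaults section:)
defaults
    mode http
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics
    timeout client 30s
    timeout connect 5s
    timeout server 30s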
Basically, there are no errors or warnings about the stats socket, not even so much as a mention of it. Others who have faced a problem with the stats socket fail to get haproxy started at all. In my case, it starts up fine, but the socket just isn't created.
You need to create the directory yourself. Ensure /run/haproxy exists. If it doesn't, first create it with:
sudo mkdir /run/haproxy
This should resolve your issue.
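Note that /run is normally a tmpfs, so a directory created there by hand disappears on every reboot. A sketch of making it persistent with systemd-tmpfiles (assuming the haproxy user and group from the config above):
# /etc/tmpfiles.d/haproxy.conf
d /run/haproxy 0755 haproxy haproxy -
Then apply it immediately, without waiting for a reboot:
sudo systemd-tmpfiles --create /etc/tmpfiles.d/haproxy.conf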
Try making SELinux permissive with the command below and restart the HAProxy service.
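(The original answer omits the command itself; presumably it is setenforce, which switches SELinux to permissive mode until the next reboot:)
# Assumption: this is the command the answer meant
sudo setenforce 0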

Changing the port of OpenLDAP on CentOS installed with yum

I am trying to change the default port of OpenLDAP (I am not so experienced with OpenLDAP, so I might be doing something incorrectly).
Currently I am installing it through the yum package manager on CentOS 7.1.1503 as follows:
yum install openldap-servers
After installing 'openldap-servers' I can start the OpenLDAP server by invoking service slapd start.
However, when I try to change the port by editing /etc/sysconfig/slapd, for instance by changing SLAPD_URLS to the following:
# OpenLDAP server configuration
# see 'man slapd' for additional information
# Where the server will run (-h option)
# - ldapi:/// is required for on-the-fly configuration using client tools
# (use SASL with EXTERNAL mechanism for authentication)
# - default: ldapi:/// ldap:///
# - example: ldapi:/// ldap://127.0.0.1/ ldap://10.0.0.1:1389/ ldaps:///
SLAPD_URLS="ldapi:/// ldap://127.0.0.1:3421/"
# Any custom options
#SLAPD_OPTIONS=""
# Keytab location for GSSAPI Kerberos authentication
#KRB5_KTNAME="FILE:/etc/openldap/ldap.keytab"
(see SLAPD_URLS="ldapi:/// ldap://127.0.0.1:3421/")
it fails to start:
service slapd start
Redirecting to /bin/systemctl start slapd.service
Job for slapd.service failed. See 'systemctl status slapd.service' and 'journalctl -xn' for details.
service slapd status
Redirecting to /bin/systemctl status slapd.service
slapd.service - OpenLDAP Server Daemon
Loaded: loaded (/usr/lib/systemd/system/slapd.service; disabled)
Active: failed (Result: exit-code) since Fri 2015-07-31 07:49:06 EDT; 10s ago
Docs: man:slapd
man:slapd-config
man:slapd-hdb
man:slapd-mdb
file:///usr/share/doc/openldap-servers/guide.html
Process: 41704 ExecStart=/usr/sbin/slapd -u ldap -h ${SLAPD_URLS} $SLAPD_OPTIONS (code=exited, status=1/FAILURE)
Process: 41675 ExecStartPre=/usr/libexec/openldap/check-config.sh (code=exited, status=0/SUCCESS)
Main PID: 34363 (code=exited, status=0/SUCCESS)
Jul 31 07:49:06 osboxes runuser[41691]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41693]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41695]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41697]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41699]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes runuser[41701]: pam_unix(runuser:session): session opened for user ldap by (uid=0)
Jul 31 07:49:06 osboxes slapd[41704]: @(#) $OpenLDAP: slapd 2.4.39 (Mar 6 2015 04:35:49) $
mockbuild@worker1.bsys.centos.org:/builddir/build/BUILD/openldap-2.4.39/openldap-2.4.39/servers/slapd
Jul 31 07:49:06 osboxes systemd[1]: slapd.service: control process exited, code=exited status=1
Jul 31 07:49:06 osboxes systemd[1]: Failed to start OpenLDAP Server Daemon.
Jul 31 07:49:06 osboxes systemd[1]: Unit slapd.service entered failed state.
P.S. I also disabled firewalld.
The solution was provided when I ran journalctl -xn, which basically says:
SELinux is preventing /usr/sbin/slapd from name_bind access on the tcp_socket port 9312.
***** Plugin bind_ports (92.2 confidence) suggests ************************
If you want to allow /usr/sbin/slapd to bind to network port 9312
Then you need to modify the port type.
Do
# semanage port -a -t ldap_port_t -p tcp 9312
***** Plugin catchall_boolean (7.83 confidence) suggests ******************
If you want to allow nis to enabled
Then you must tell SELinux about this by enabling the 'nis_enabled' boolean.
You can read 'None' man page for more details.
Do
setsebool -P nis_enabled 1
***** Plugin catchall (1.41 confidence) suggests **************************
If you believe that slapd should be allowed name_bind access on the port 9312 tcp_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep slapd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
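(Note that the alert above references port 9312, presumably from a later attempt; for the port 3421 configured in the question, the analogous first suggestion would be:)
# Label the custom port so slapd may bind to it (port taken from the question)
sudo semanage port -a -t ldap_port_t -p tcp 3421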

'service postgresql start' fails to start postgres service on Fedora

Newcomer to postgres here!
I edited pg_hba.conf as mentioned here, but when I try to restart the postgresql service, the attempt fails. Below is the command-line output with all the information I could gather.
[root@arunpc modules]# service postgresql restart
Redirecting to /bin/systemctl restart postgresql.service
Job failed. See system logs and 'systemctl status' for details.
[root@arunpc modules]# systemctl status postgresql.service
postgresql.service - PostgreSQL database server
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled)
Active: failed since Sun, 08 Apr 2012 21:29:06 +0530; 14s ago
Process: 12228 ExecStop=/usr/bin/pg_ctl stop -D ${PGDATA} -s -m fast (code=exited, status=0/SUCCESS)
Process: 12677 ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o -p ${PGPORT} -w -t 300 (code=exited, status=1/FAILURE)
Process: 12672 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 12184 (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/postgresql.service
[root@arunpc modules]# tail /var/log/messages
....
Apr 8 21:29:06 arunpc systemd[1]: postgresql.service: control process exited, code=exited status=1
Apr 8 21:29:06 arunpc systemd[1]: Unit postgresql.service entered failed state.
Apr 8 21:29:06 arunpc pg_ctl[12677]: pg_ctl: could not start server
Apr 8 21:29:06 arunpc pg_ctl[12677]: Examine the log output.
FWIW, here is the configuration file (pg_hba.conf) used:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all postgres ident sameuser
local all all ident sameuser
# IPv4 local connections:
host all all 127.0.0.1 password
# IPv6 local connections:
host all all ::1 password
What could be the error here? It used to work fine before I made the edit (and since this was a development machine, I brilliantly didn't make any backup).
I would also like to get more detailed log output. The message in /var/log/messages does ask me to "Examine the log output" - which log output would this be? What other troubleshooting steps can I take?
Many thanks in advance!
Depending on your startup script, it might redirect the postmaster's output to a file. This is usually server.log in the PGDATA directory. Things I'd try:
Comment out everything in pg_hba.conf and retry. If the problem is a syntax error in that file, then commenting out the offending line will allow the server to start and then you'll be able to uncomment one at a time until you find the error.
Start postmaster directly from the shell without sending it to the background. Just run postmaster -D <pgdata dir> and it should spew some more helpful logs.
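A minimal sketch of that second step, assuming Fedora's default data directory /var/lib/pgsql/data (substitute your own PGDATA):
# Run the server in the foreground as the postgres user so errors print to the terminal
sudo -u postgres postmaster -D /var/lib/pgsql/data
One more thing worth checking: on PostgreSQL 8.4 and later, the ident sameuser method is spelled just ident, so the pg_hba.conf lines quoted above can themselves be the syntax error that keeps the server from starting.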