Postgres 9.2 on CentOS 7.
After "su - postgres" I installed using
pg-ctl initdb -D /var/lib/pgsql/data
which ran fine.
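For reference, the CentOS 7 postgresql-server package also ships a wrapper that initializes the cluster the way the bundled unit file expects. A minimal alternative, assuming the stock package, run as root:
# Distro-provided initialization wrapper (reads PGDATA from the systemd unit):
postgresql-setup initdb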
[root@server ~]# systemctl start postgresql
Job for postgresql.service failed. See 'systemctl status postgresql.service' and 'journalctl -xn' for details.
[root@server ~]# systemctl status postgresql.service
postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled)
Active: failed (Result: exit-code) since Fri 2015-11-27 13:48:57 EST; 9s ago
Process: 3262 ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o -p ${PGPORT} -w -t 300 (code=exited, status=1/FAILURE)
Process: 3256 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Nov 27 13:48:57 server.company.network systemd[1]: Starting PostgreSQL database server...
Nov 27 13:48:57 server.company.network pg_ctl[3262]: pg_ctl: could not open PID file "/var/lib/pgsql/data/postmaster.pid": Permission denied
Nov 27 13:48:57 server.company.network systemd[1]: postgresql.service: control process exited, code=exited status=1
Nov 27 13:48:57 server.company.network systemd[1]: Failed to start PostgreSQL database server.
Nov 27 13:48:57 server.company.network systemd[1]: Unit postgresql.service entered failed state.
[root@server ~]# journalctl -xn
-- Logs begin at Fri 2015-11-27 13:29:37 EST, end at Fri 2015-11-27 13:48:57 EST. --
Nov 27 13:48:35 server.company.network sudo[3228]: pam_unix(sudo:auth): conversation failed
Nov 27 13:48:35 server.company.network sudo[3228]: pam_unix(sudo:auth): auth could not identify password for [myuserid]
Nov 27 13:48:46 server.company.network sudo[3230]: myuserid : TTY=pts/0 ; PWD=/home/myuserid ; USER=root ; COMMAND=/bin/su -
Nov 27 13:48:46 server.company.network su[3234]: (to root) myuserid on pts/0
Nov 27 13:48:46 server.company.network su[3234]: pam_unix(su-l:session): session opened for user root by myuserid(uid=0)
Nov 27 13:48:57 server.company.network systemd[1]: Starting PostgreSQL database server...
-- Subject: Unit postgresql.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit postgresql.service has begun starting up.
Nov 27 13:48:57 server.company.network pg_ctl[3262]: pg_ctl: could not open PID file "/var/lib/pgsql/data/postmaster.pid": Permission denied
Nov 27 13:48:57 server.company.network systemd[1]: postgresql.service: control process exited, code=exited status=1
Nov 27 13:48:57 server.company.network systemd[1]: Failed to start PostgreSQL database server.
-- Subject: Unit postgresql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit postgresql.service has failed.
--
-- The result is failed.
Nov 27 13:48:57 server.company.network systemd[1]: Unit postgresql.service entered failed state.
When I "su - postgres" I can "touch" the file, "ls" the file, "rm" /var/lib/pgsql/data/postmaster.pid. Permissions on data are 700 postgres:postgres. pgsql is a symlink to /data0/postgres and postgres is 700 postgres:postgres.
ADDITIONS:
I forgot to mention that after having this problem, I replaced the ExecStartPre and ExecStart commands with shell scripts that wrote the user, primary group, PGDATA, and PGPORT values to a file. They were all correct, and the start still died on postmaster.pid.
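A minimal sketch of that kind of wrapper (the output path is illustrative, not the exact script used):
#!/bin/sh
# Stand-in for ExecStart: record the runtime identity and environment,
# then start Postgres exactly as the unit file would.
{
  echo "user:   $(id -un)"
  echo "group:  $(id -gn)"
  echo "PGDATA: $PGDATA"
  echo "PGPORT: $PGPORT"
} > /tmp/pg-start-debug.txt
exec /usr/bin/pg_ctl start -D "$PGDATA" -s -o "-p $PGPORT" -w -t 300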
The postgresql.service file:
[root@server /]# cat /usr/lib/systemd/system/postgresql.service
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades. If you want to customize, the
# best way is to create a file "/etc/systemd/system/postgresql.service",
# containing
# .include /lib/systemd/system/postgresql.service
# ...make your changes here...
# For more info about custom unit files, see
# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F
# For example, if you want to change the server's port number to 5433,
# create a file named "/etc/systemd/system/postgresql.service" containing:
# .include /lib/systemd/system/postgresql.service
# [Service]
# Environment=PGPORT=5433
# This will override the setting appearing below.
# Note: changing PGPORT or PGDATA will typically require adjusting SELinux
# configuration as well; see /usr/share/doc/postgresql-*/README.rpm-dist.
# Note: do not use a PGDATA pathname containing spaces, or you will
# break postgresql-setup.
# Note: in F-17 and beyond, /usr/lib/... is recommended in the .include line
# though /lib/... will still work.
[Unit]
Description=PostgreSQL database server
After=network.target
[Service]
Type=forking
User=postgres
Group=postgres
# Port number for server to listen on
Environment=PGPORT=5432
# Location of database directory
Environment=PGDATA=/var/lib/pgsql/data
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA}
ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o "-p ${PGPORT}" -w -t 300
ExecStop=/usr/bin/pg_ctl stop -D ${PGDATA} -s -m fast
ExecReload=/usr/bin/pg_ctl reload -D ${PGDATA} -s
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300
[Install]
WantedBy=multi-user.target
I figured it out. After running initdb, I had copied the data directory to the other drive. Under SELinux, a copied file takes on the file type (SELinux context) of the target's parent directory. I tried to semanage the directory, but that wasn't working, so I started over and moved the data directory instead, which preserved its file type.
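The underlying rule: cp creates new inodes, which are labeled from the destination's parent directory, while mv keeps the original inodes and their labels. A sketch of how the mismatch shows up, plus the persistent labeling route (which reportedly didn't work in this case; included for completeness, assuming the postgresql_db_t type):
# A copied tree under /data0 typically inherits default_t:
ls -dZ /var/lib/pgsql /data0/postgres
# The usual persistent fix:
semanage fcontext -a -t postgresql_db_t "/data0/postgres(/.*)?"
restorecon -Rv /data0/postgres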
Related
I've tried this on CentOS 7 and 8, and with different versions of MongoDB, and I get the same result. I'm trying to run a Pritunl VPN on a Hyper-V VM and have followed this tutorial for installing MongoDB and this tutorial for setting everything up, with the exception that I'm using a VM rather than a VPS.
When I run "systemctl start mongod" I get the error "Job for mongod.service failed because a fatal signal was delivered causing the control process to dump core.
See "systemctl status mongod.service" and "journalctl -xe" for details."
Running journalctl -xe yields something along the lines of the output included below. I tried disabling core dumps and the "Process" number changed from 3397 to 4143. I've also included the output of systemctl status mongod.service.
This is my first time working with something like this, so there's a high possibility I'm missing something simple. I keep seeing mention of directory files in some of the solution posts, but according to the MongoDB install instructions it should create its own directories. Any help is appreciated because I am beyond lost.
journalctl -xe:
-- Unit mongod.service has begun starting up.
Dec 22 14:54:12 localhost.localdomain kernel: traps: mongod[33894] trap invalid opcode ip:558aaaedaeda sp:7ffd3f5a6560 error>
Dec 22 14:54:12 localhost.localdomain systemd[1]: Started Process Core Dump (PID 33895/UID 0).
-- Subject: Unit systemd-coredump@8-33895-0.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit systemd-coredump@8-33895-0.service has finished starting up.
--
-- The start-up result is done.
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Resource limits disable core dumping for process 33894 (mongo>
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Process 33894 (mongod) of user 974 dumped core.
-- Subject: Process 33894 (mongod) dumped core
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: man:core(5)
--
-- Process 33894 (mongod) crashed and dumped core.
--
-- This usually indicates a programming error in the crashing program and
-- should be reported to its vendor as a bug.
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Control process exited, code=dumped status=4
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Failed with result 'core-dump'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit mongod.service has entered the 'failed' state with result 'core-dump'.
Dec 22 14:54:12 localhost.localdomain systemd[1]: Failed to start MongoDB Database Server.
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Dec 22 14:54:12 localhost.localdomain systemd[1]: systemd-coredump@8-33895-0.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit systemd-coredump@8-33895-0.service has successfully entered the 'dead' state.
status of mongod.service:
● mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: core-dump) since Wed 2021-12-22 14:54:12 EST; 5min ago
Docs: https://docs.mongodb.org/manual
Process: 33894 ExecStart=/usr/bin/mongod $OPTIONS (code=dumped, signal=ILL)
Process: 33892 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 33890 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 33888 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Dec 22 14:54:12 localhost.localdomain systemd[1]: Starting MongoDB Database Server...
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Process 33894 (mongod) of user 974 dumped core.
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Control process exited, code=dumped status=4
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Failed with result 'core-dump'.
Dec 22 14:54:12 localhost.localdomain systemd[1]: Failed to start MongoDB Database Server.
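One detail in that output stands out: signal=ILL and the earlier "trap invalid opcode" mean mongod executed an instruction the virtual CPU does not support. Assuming a MongoDB 5.0+ package is installed (an assumption; the post doesn't say), those x86-64 builds require AVX, which a Hyper-V VM may not expose:
# Check whether the VM's CPU advertises AVX (required by MongoDB 5.0+):
grep -m1 -ow avx /proc/cpuinfo \
  || echo "no AVX: try a 4.x build or enable the feature in the hypervisor"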
mongod.conf:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:
# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options
#auditLog:
"/etc/mongod.conf" 44L, 830C
mongod.service:
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[Install]
WantedBy=multi-user.target
Trying to start a service to run gunicorn as the backend server for Flask: not working. Running nginx as the frontend server for React: working.
Server:
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8.4:GA
Kernel: Linux 4.18.0-305.3.1.el8_4.x86_64
Architecture: x86-64
Service file in /etc/systemd/system/myservice.service:
[Unit]
Description="Description"
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/home/project/app/api
ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app
Restart=always
[Install]
WantedBy=multi-user.target
/app/api:
-rwxr-xr-x. 1 root root 2018 Jun 9 20:06 api.py
drwxrwxr-x+ 5 root root 100 Jun 7 10:11 venv
Error message:
● myservice.service - "Description"
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-06-10 19:01:01 CEST; 5s ago
Process: 18307 ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app (code=exited, status=203/EXEC)
Main PID: 18307 (code=exited, status=203/EXEC)
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Service RestartSec=100ms expired, scheduling restart.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Scheduled restart job, restart counter is at 5.
Jun 10 19:01:01 xxxx systemd[1]: Stopped "Description".
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Start request repeated too quickly.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Failed with result 'exit-code'.
Jun 10 19:01:01 xxxx systemd[1]: Failed to start "Description".
Tried, not working:
Adding Environment="PATH=/home/project/app/api/venv/bin" under [Service]
$ systemctl reset-failed myservice.service
$ systemctl daemon-reload
Rebooting, of course.
Tried, working:
Running (as root) /home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app while in /app/api directory
Does anyone know how to fix this problem?
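For anyone hitting the same symptom: status=203/EXEC means systemd's exec() of the binary failed even though running it by hand works, which often points at SELinux. A diagnostic sketch (assumes the audit daemon is logging), run as root:
# Look for recent AVC denials mentioning gunicorn:
ausearch -m avc -ts recent | grep -i gunicorn
# Show the SELinux context on the binary systemd tries to exec:
ls -Z /home/project/app/api/venv/bin/gunicorn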
Typically enough, I figured it out shortly after posting this issue.
SELinux is messing with permissions for files and directories, so for anyone experiencing the same issue, make sure to test the following alterations (as root):
$ setsebool -P httpd_can_network_connect on
$ chcon -Rt httpd_sys_content_t /path/to/your/Flask/dir
In my case: $ chcon -Rt httpd_sys_content_t /home/project/app/api
While this is NOT a permanent fix, it's worth a try. Check out the SELinux docs for more permanent solutions.
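A minimal sketch of one such permanent route, assuming httpd_sys_content_t really is the type you want and that semanage (policycoreutils) is installed:
# Record the label in policy so a relabel doesn't undo chcon:
semanage fcontext -a -t httpd_sys_content_t "/home/project/app/api(/.*)?"
restorecon -Rv /home/project/app/api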
One day, my PostgreSQL server stopped working. I checked the log; it had been shut down somehow.
root@ip_address:/# tail /var/log/postgresql/postgresql-10-main.log
2020-02-19 06:47:49.215 CET [23497] LOG: received smart shutdown request
2020-02-19 06:47:49.477 CET [23497] LOG: worker process: logical replication launcher (PID 23512) exited with exit code 1
2020-02-19 06:47:49.482 CET [23507] LOG: shutting down
2020-02-19 06:47:49.546 CET [23497] LOG: database system is shut down
When I run,
root@ip_address:/# psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
It complained about a missing file or directory, so I checked whether PostgreSQL was running.
root@ip_address:/# systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Sun 2020-03-08 16:19:24 CET; 26min ago
Process: 30136 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 30136 (code=exited, status=0/SUCCESS)
Mar 08 16:19:24 vps584959 systemd[1]: Starting PostgreSQL RDBMS...
Mar 08 16:19:24 vps584959 systemd[1]: Started PostgreSQL RDBMS.
It was running. But when I checked the PostgreSQL cluster:
root@ip_address:/# pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
10 main 5432 down postgres /var/lib/postgresql/10/main /var/log/postgresql/postgresql-10-main.log
It was down, so I tried:
root@ip_address:/# pg_ctlcluster 10 main start
Error: Config owner (deploy:1003) and data owner (postgres:114) do not match, and config owner is not root
I wasn't able to make it work, so then I tried:
sudo chown -R deploy:postgres /var/lib/postgresql/10/ && sudo chmod -R u=rwX,go= /var/lib/postgresql/10/
And tried again:
root@ip_address:/# pg_ctlcluster 10 main start
Job for postgresql@10-main.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status postgresql@10-main.service" and "journalctl -xe" for details.
root@ip_address:/# systemctl status postgresql@10-main.service
● postgresql@10-main.service - PostgreSQL Cluster 10-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
Active: failed (Result: protocol) since Sun 2020-03-08 16:59:53 CET; 2min 52s ago
Process: 31635 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 10-main start (code=exited, status=1/FAILURE)
Main PID: 23497 (code=exited, status=0/SUCCESS)
Mar 08 16:59:53 vps584959 systemd[1]: Starting PostgreSQL Cluster 10-main...
Mar 08 16:59:53 vps584959 postgresql@10-main[31635]: Error: /usr/lib/postgresql/10/bin/pg_ctl /usr/lib/postgresql/10/bin/pg_ctl start -D /var/lib/postgresql/10/main -l /var/log/postgre
Mar 08 16:59:53 vps584959 systemd[1]: postgresql@10-main.service: Can't open PID file /var/run/postgresql/10-main.pid (yet?) after start: No such file or directory
Mar 08 16:59:53 vps584959 systemd[1]: postgresql@10-main.service: Failed with result 'protocol'.
Mar 08 16:59:53 vps584959 systemd[1]: Failed to start PostgreSQL Cluster 10-main.
I don't know what more to do. Has anybody had the same problem?
More info:
root@ip_address:/var/run/postgresql# ls -la
total 0
drwxrwsr-x 3 postgres postgres 60 Feb 19 06:47 .
drwxr-xr-x 28 root root 1060 Mar 8 13:58 ..
drwxr-s--- 2 postgres postgres 40 Feb 19 06:47 10-main.pg_stat_tmp
pg_ctlcluster 10 main start
Error: Config owner (deploy:1003) and data owner (postgres:114) do not match, and config owner is not root
That's pretty clear, isn't it?
The Ubuntu PostgreSQL startup script wants postgresql.conf and/or pg_hba.conf to be owned by user postgres; otherwise it refuses to proceed.
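The fix that follows from that message, assuming the stock Ubuntu layout (and undoing the earlier recursive chown, which also handed the data directory to deploy):
# Give the config files back to postgres:
chown postgres /etc/postgresql/10/main/postgresql.conf /etc/postgresql/10/main/pg_hba.conf
# The data directory must be owned by postgres as well:
chown -R postgres:postgres /var/lib/postgresql/10/main
pg_ctlcluster 10 main start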
We are trying to set up a Kubernetes cluster on 3 nodes with CoreOS, following the official step-by-step documentation - https://coreos.com/kubernetes/docs/latest/deploy-master.html
The servers are behind a company proxy and have the proxy configuration defined in both
/etc/systemd/system/docker.service.d
/etc/systemd/system/flanneld.service.d
The following is picked up in
systemctl cat flanneld
# /usr/lib/systemd/system/flanneld.service
[Unit]
Description=flannel - Network fabric for containers (System Application Container)
Documentation=https://github.com/coreos/flannel
After=etcd.service etcd2.service etcd-member.service
Before=docker.service flannel-docker-opts.service
Requires=flannel-docker-opts.service
[Service]
Type=notify
Restart=always
RestartSec=10s
LimitNOFILE=40000
LimitNPROC=1048576
Environment="FLANNEL_IMAGE_TAG=v0.6.2"
Environment="FLANNEL_OPTS=--ip-masq=true"
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/lib/coreos/flannel-wrapper.uuid"
EnvironmentFile=-/run/flannel/options.env
ExecStartPre=/sbin/modprobe ip_tables
ExecStartPre=/usr/bin/mkdir --parents /var/lib/coreos /run/flannel
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/lib/coreos/flannel-wrapper.uuid
ExecStart=/usr/lib/coreos/flannel-wrapper $FLANNEL_OPTS
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/lib/coreos/flannel-wrapper.uuid
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
# /etc/systemd/system/flanneld.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.140.65.114:8080/"
Environment="HTTPS_PROXY=http://10.140.65.114:8080/"
and
systemctl cat docker
# /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=containerd.service docker.socket early-docker.target network.target
Requires=containerd.service docker.socket early-docker.target
[Service]
Type=notify
EnvironmentFile=-/run/flannel/flannel_docker_opts.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/lib/coreos/dockerd --host=fd:// --containerd=/var/run/docker/libcontainerd/docker-containerd.sock $DOCKER_OPTS $DOCKER_CGROUPS $DOCKER_OPT_BIP $DOCKER_OP
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/docker.service.d/40-flannel.conf
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.140.65.114:8080/"
Environment="HTTPS_PROXY=http://10.140.65.114:8080/"
# /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env
# /etc/systemd/system/flanneld.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.140.65.114:8080/"
Environment="HTTPS_PROXY=http://10.140.65.114:8080/"
After running systemctl daemon-reload and systemctl start flanneld, we get the following error:
Feb 16 19:50:40 localhost systemd[1]: Starting flannel - Network fabric for containers (System Application Container)...
-- Subject: Unit flanneld.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has begun starting up.
Feb 16 19:50:40 localhost rkt[52933]: rm: cannot get pod: no matches found for "26778eb4-9d8a-4d3c-9bb7-6ffb13a55d6a"
Feb 16 19:50:40 localhost rkt[52933]: rm: failed to remove one or more pods
Feb 16 19:50:40 localhost flannel-wrapper[52947]: + exec /usr/bin/rkt run --uuid-file-save=/var/lib/coreos/flannel-wrapper.uuid --trust-keys-from-https --mount volume=notify,target=/run/systemd/notify --volume notify,kind=host,source=/run/systemd/notify --set-env=NOTIFY_SOCKET=/run/systemd/notify --net=host --volume run-flannel,kind=host,source=/run/flannel,readOnly=false --volume etc-ssl-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume etc-hosts,kind=host,source=/etc/hosts,readOnly=true --volume etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true --mount volume=run-flannel,target=/run/flannel --mount volume=etc-ssl-certs,target=/etc/ssl/certs --mount volume=usr-share-certs,target=/usr/share/ca-certificates --mount volume=etc-hosts,target=/etc/hosts --mount volume=etc-resolv,target=/etc/resolv.conf --inherit-env --stage1-from-dir=stage1-fly.aci quay.io/coreos/flannel:v0.6.2 -- --ip-masq=true
Feb 16 19:50:41 localhost sudo[52978]: admin : TTY=pts/1 ; PWD=/home/admin ; USER=root ; COMMAND=/bin/journalctl -e -u kubelet
Feb 16 19:50:41 localhost sudo[52978]: pam_unix(sudo:session): session opened for user root by admin(uid=0)
Feb 16 19:50:41 localhost sudo[52978]: pam_systemd(sudo:session): Cannot create session: Already running in a session
Feb 16 19:50:41 localhost sudo[52978]: pam_unix(sudo:session): session closed for user root
Feb 16 19:50:42 localhost flannel-wrapper[52947]: image: keys already exist for prefix "quay.io/coreos/flannel", not fetching again
Feb 16 19:50:43 localhost sudo[52990]: admin : TTY=pts/1 ; PWD=/home/admin ; USER=root ; COMMAND=/bin/journalctl -e -u kubelet
Feb 16 19:50:43 localhost sudo[52990]: pam_unix(sudo:session): session opened for user root by admin(uid=0)
Feb 16 19:50:43 localhost sudo[52990]: pam_systemd(sudo:session): Cannot create session: Already running in a session
Feb 16 19:50:43 localhost sudo[52990]: pam_unix(sudo:session): session closed for user root
Feb 16 19:50:44 localhost flannel-wrapper[52947]: Downloading signature: 0 B/473 B
Feb 16 19:50:44 localhost flannel-wrapper[52947]: Downloading signature: 473 B/473 B
Feb 16 19:50:45 localhost flannel-wrapper[52947]: Downloading signature: 473 B/473 B
Feb 16 19:50:45 localhost flannel-wrapper[52947]: run: Get https://quay-registry.s3.amazonaws.com/sharedimages/36acf4f7-a5bd-470b-9a44-13cbd244b571/layer?Signature=v8rQghQZR0k%2B1UxDG8oGw89vTqY%3D&Expires=1487255465&AWSAccessKeyId=AKIAJWZWUIS24TWSMWRA: Blocked site:
Feb 16 19:50:45 localhost systemd[1]: flanneld.service: Main process exited, code=exited, status=254/n/a
Feb 16 19:50:45 localhost rkt[52993]: stop: cannot get pod: no matches found for "26778eb4-9d8a-4d3c-9bb7-6ffb13a55d6a"
Feb 16 19:50:45 localhost rkt[52993]: stop: failed to stop 1 pod(s)
Feb 16 19:50:45 localhost systemd[1]: Failed to start flannel - Network fabric for containers (System Application Container).
-- Subject: Unit flanneld.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flanneld.service has failed.
--
-- The result is failed.
Feb 16 19:50:45 localhost systemd[1]: flanneld.service: Unit entered failed state.
Feb 16 19:50:45 localhost systemd[1]: flanneld.service: Failed with result 'exit-code'.
Feb 16 19:50:45 localhost systemd[1]: Starting flannel docker export service - Network fabric for containers (System Application Container)...
-- Subject: Unit flannel-docker-opts.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flannel-docker-opts.service has begun starting up.
Feb 16 19:50:45 localhost sudo[53003]: admin : TTY=pts/1 ; PWD=/home/admin ; USER=root ; COMMAND=/bin/journalctl -e -u kubelet
Feb 16 19:50:45 localhost sudo[53003]: pam_unix(sudo:session): session opened for user root by admin(uid=0)
Feb 16 19:50:45 localhost sudo[53003]: pam_systemd(sudo:session): Cannot create session: Already running in a session
Feb 16 19:50:45 localhost sudo[53003]: pam_unix(sudo:session): session closed for user root
Feb 16 19:50:45 localhost rkt[53000]: rm: cannot get pod: UUID cannot be empty
Feb 16 19:50:45 localhost rkt[53000]: rm: failed to remove one or more pods
Feb 16 19:50:45 localhost flannel-wrapper[53019]: + exec /usr/bin/rkt run --uuid-file-save=/var/lib/coreos/flannel-wrapper2.uuid --trust-keys-from-https --net=host --volume run-flannel,kind=host,source=/run/flannel,readOnly=false --volume etc-ssl-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume usr-share-certs,kind=host,source=/usr/share/ca-certificates,readOnly=true --volume etc-hosts,kind=host,source=/etc/hosts,readOnly=true --volume etc-resolv,kind=host,source=/etc/resolv.conf,readOnly=true --mount volume=run-flannel,target=/run/flannel --mount volume=etc-ssl-certs,target=/etc/ssl/certs --mount volume=usr-share-certs,target=/usr/share/ca-certificates --mount volume=etc-hosts,target=/etc/hosts --mount volume=etc-resolv,target=/etc/resolv.conf --inherit-env --stage1-from-dir=stage1-fly.aci quay.io/coreos/flannel:v0.6.2 --exec=/opt/bin/mk-docker-opts.sh -- -d /run/flannel/flannel_docker_opts.env -i
Feb 16 19:50:46 localhost flannel-wrapper[53019]: run: discovery failed
Feb 16 19:50:46 localhost systemd[1]: flannel-docker-opts.service: Main process exited, code=exited, status=254/n/a
Feb 16 19:50:46 localhost systemd[1]: Failed to start flannel docker export service - Network fabric for containers (System Application Container).
-- Subject: Unit flannel-docker-opts.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit flannel-docker-opts.service has failed.
--
-- The result is failed.
Feb 16 19:50:46 localhost systemd[1]: flannel-docker-opts.service: Unit entered failed state.
Feb 16 19:50:46 localhost systemd[1]: flannel-docker-opts.service: Failed with result 'exit-code'.
We tried a different document, https://www.upcloud.com/support/deploy-kubernetes-coreos/; following it, we get the same type of error while starting the kubelet.
It seems to be a problem with rkt and the quay registry behind the company proxy.
Let us know if we missed something or configured something wrong.
Can you please try
$ sudo rkt fetch quay.io/coreos/flannel:v0.6.2
first in the shell?
I believe the issue is due to either the HTTPS proxy being served over HTTP, or rkt fetch running as an unprivileged user and not inheriting the system environment variables.
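If the second guess is right, a sketch worth trying: sudo strips most environment variables by default, so pass the proxy settings explicitly on the command line (address taken from the drop-ins above):
sudo HTTP_PROXY=http://10.140.65.114:8080/ \
     HTTPS_PROXY=http://10.140.65.114:8080/ \
     rkt fetch --trust-keys-from-https quay.io/coreos/flannel:v0.6.2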
I recently installed MongoDB on Amazon Linux and I am able to start mongod using the service command.
sudo service mongod start
Above works as expected.
Today I installed MongoDB on CentOS 7 following the instructions on the MongoDB site.
Now when I start the service using the same command as mentioned above, the service is not able to start.
I have done the following checks and they look correct, so I'm not sure what is going on here:
The path to the data folder, i.e. /data/db, is owned by mongod:mongod.
/etc/mongod.conf has dbpath set to /data/db.
The user in the /etc/init.d/mongod script is set as mongod:mongod.
Journal entry looks like this:
[centos@ip-172-31-16-240 init.d]$ sudo journalctl -xn
-- Logs begin at Thu 2015-03-26 11:45:57 UTC, end at Thu 2015-03-26 12:33:34 UTC. --
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal mongod[1645]: ******>>>> mongod user is mongod
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal runuser[1654]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal runuser[1654]: pam_unix(runuser:session): session closed for user mongod
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal mongod[1645]: Starting mongod: [FAILED]
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=1
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Mar 26 12:26:44 ip-172-31-16-240.ap-southeast-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Mar 26 12:26:49 ip-172-31-16-240.ap-southeast-1.compute.internal sudo[1660]: centos : TTY=pts/0 ; PWD=/etc/rc.d/init.d ; USER=root ; COMMAND=/bin/journalctl -xn
Mar 26 12:28:00 ip-172-31-16-240.ap-southeast-1.compute.internal sudo[1664]: centos : TTY=pts/1 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/less /var/log/mongodb/mongod.log
Mar 26 12:33:34 ip-172-31-16-240.ap-southeast-1.compute.internal sudo[1668]: centos : TTY=pts/0 ; PWD=/etc/rc.d/init.d ; USER=root ; COMMAND=/bin/journalctl -xn
[centos@ip-172-31-16-240 init.d]$
However, if I start using sudo mongod, the mongod process starts up.
Any ideas why the service command is not working?
Just in case anyone encounters this problem, this is how I fixed it.
After all, it was permission related: the SELinux security context, which is set to enforcing by default.
So, after you attempt to start the mongod service and it fails, run this command; it should show you the reason if it is anything permission related.
sudo ausearch -m avc -ts today | audit2allow
You would see something like the below for mongod-related audits:
allow mongod_t default_t:file getattr;
To fix the above error, you do the following:
sudo chcon -Rv --type=mongod_var_lib_t /data
Note /data/db is where my mongod data files are located.
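To confirm the relabel took effect before retrying, something like:
# Verify the new type on the data directory, then start the service again:
ls -dZ /data /data/db
sudo service mongod start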