I'm trying to start a freshly installed MongoDB
(packages mongodb-org=5.0.2 mongodb-org-database=5.0.2 mongodb-org-server=5.0.2 mongodb-org-shell=5.0.2 mongodb-org-mongos=5.0.2 mongodb-org-tools=5.0.2)
OS: Ubuntu 20.04 (clean, fresh install)
VMware ESXi
default config
After the install I try to start the service and get errors like this:
sudo systemctl status mongod.service
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed (Result: core-dump) since Mon 2021-11-22 13:02:15 UTC; 1s ago
Docs: https://docs.mongodb.org/manual
Process: 1769 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)
Main PID: 1769 (code=dumped, signal=ILL)
Nov 22 13:02:14 rocket systemd[1]: Started MongoDB Database Server.
Nov 22 13:02:15 rocket systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL
Nov 22 13:02:15 rocket systemd[1]: mongod.service: Failed with result 'core-dump'.
journalctl -xe:
-- A start job for unit mongod.service has finished successfully.
--
-- The job identifier is 456.
Nov 22 12:59:33 rocket sudo[1687]: pam_unix(sudo:session): session closed for user root
Nov 22 12:59:33 rocket kernel: show_signal: 18 callbacks suppressed
Nov 22 12:59:33 rocket kernel: traps: mongod[1693] trap invalid opcode ip:562c4fb0708a sp:7ffe3d3abcb0 error:0 in mongod[562c4bbc8000+5055000]
Nov 22 12:59:34 rocket systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit mongod.service has exited.
--
-- The process' exit code is 'dumped' and its exit status is 4.
Nov 22 12:59:34 rocket systemd[1]: mongod.service: Failed with result 'core-dump'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit mongod.service has entered the 'failed' state with result 'core-dump'.
rocket kernel: traps: mongod[1693] trap invalid opcode ip:562c4fb0708a sp:7ffe3d3abcb0 error:0 in mongod[562c4bbc8000+5055000]
limits:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47570
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 65000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 47570
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
a. The issue looks to be a matter of missing dependencies.
Run:
apt-get -u dist-upgrade
and then reinstall MongoDB.
b. Don't use 5.0.2 - there is a severe alert from MongoDB for that version.
Use 5.0.3 instead.
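If you want to pull the whole set of packages at the newer point release in one go, something like this should do it (a sketch; it assumes 5.0.3 is actually published in the apt repository you configured, and the version pins mirror the 5.0.2 install above):
sudo apt-get install -y mongodb-org=5.0.3 mongodb-org-database=5.0.3 mongodb-org-server=5.0.3 mongodb-org-shell=5.0.3 mongodb-org-mongos=5.0.3 mongodb-org-tools=5.0.3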
It looks like MongoDB is sensitive to instruction set extensions on the underlying chipset (AVX). If you are running this under KVM, you have to experiment with the CPU model presented to the guest. I had the same problem with MongoDB 6.0.4.
I am running KVM on an older Xeon chip, so instead of QEMU's emulated CPU I pass execution straight through to the underlying processor. All is well then.
https://www.mongodb.com/community/forums/t/mongodb-5-0-cpu-intel-g4650-compatibility/116610
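To confirm that this is what is happening, check the CPU flags from inside the guest; no output means the virtual CPU does not expose AVX, which is exactly what makes MongoDB 5.0+ die with an illegal instruction (signal=ILL):
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
On KVM/libvirt the usual cure, assuming the physical host has AVX, is to stop presenting an emulated generic CPU and pass the host CPU model through (for example <cpu mode='host-passthrough'/> in the domain XML via virsh edit). On VMware ESXi, check that the host CPU, and the EVC baseline if EVC is enabled, actually exposes AVX to the guest.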
Related
I've tried this on CentOS 7 and 8 and I'm getting the same result. Also tried with different versions of MongoDB. I'm trying to run a Pritunl VPN on a hyper-v VM and have followed this tutorial for installing mongodb and this tutorial for setting everything up with the exception that I'm using a VM rather than a VPS.
When I run "systemctl start mongod" I get the error "Job for mongod.service failed because a fatal signal was delivered causing the control process to dump core.
See "systemctl status mongod.service" and "journalctl -xe" for details."
Running journalctl -xe yields something along the lines of the code included below. I tried disabling core dump and the "Process" number changed from 3397 to 4143. Also included the output of systemctl status mongod.service.
This is my first time working with something like this, so there's a high possibility I'm missing something simple. I keep seeing mention of directory files in some of the solution posts, but according to the MongoDB install instructions it should create its own directories. Any help is appreciated because I am beyond lost.
journalctl -xe:
-- Unit mongod.service has begun starting up.
Dec 22 14:54:12 localhost.localdomain kernel: traps: mongod[33894] trap invalid opcode ip:558aaaedaeda sp:7ffd3f5a6560 error>
Dec 22 14:54:12 localhost.localdomain systemd[1]: Started Process Core Dump (PID 33895/UID 0).
-- Subject: Unit systemd-coredump#8-33895-0.service has finished start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit systemd-coredump#8-33895-0.service has finished starting up.
--
-- The start-up result is done.
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Resource limits disable core dumping for process 33894 (mongo>
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Process 33894 (mongod) of user 974 dumped core.
-- Subject: Process 33894 (mongod) dumped core
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
-- Documentation: man:core(5)
--
-- Process 33894 (mongod) crashed and dumped core.
--
-- This usually indicates a programming error in the crashing program and
-- should be reported to its vendor as a bug.
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Control process exited, code=dumped status=4
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Failed with result 'core-dump'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit mongod.service has entered the 'failed' state with result 'core-dump'.
Dec 22 14:54:12 localhost.localdomain systemd[1]: Failed to start MongoDB Database Server.
-- Subject: Unit mongod.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit mongod.service has failed.
--
-- The result is failed.
Dec 22 14:54:12 localhost.localdomain systemd[1]: systemd-coredump#8-33895-0.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit systemd-coredump#8-33895-0.service has successfully entered the 'dead' state.
status of mongod.service:
● mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: core-dump) since Wed 2021-12-22 14:54:12 EST; 5min ago
Docs: https://docs.mongodb.org/manual
Process: 33894 ExecStart=/usr/bin/mongod $OPTIONS (code=dumped, signal=ILL)
Process: 33892 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 33890 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 33888 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Dec 22 14:54:12 localhost.localdomain systemd[1]: Starting MongoDB Database Server...
Dec 22 14:54:12 localhost.localdomain systemd-coredump[33896]: Process 33894 (mongod) of user 974 dumped core.
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Control process exited, code=dumped status=4
Dec 22 14:54:12 localhost.localdomain systemd[1]: mongod.service: Failed with result 'core-dump'.
Dec 22 14:54:12 localhost.localdomain systemd[1]: Failed to start MongoDB Database Server.
mongod.conf:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options
#auditLog:
"/etc/mongod.conf" 44L, 830C
mongod.service:
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings
[Install]
WantedBy=multi-user.target
I have had PostgreSQL installed for a long time, but it suddenly stopped working when I added an IP address to the pg_hba.conf file.
I'm trying to restart it using sudo service postgresql restart, but this doesn't work.
Also
root@ubuntu-dev-server:/# systemctl start postgresql@9.6-main
Job for postgresql@9.6-main.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status postgresql@9.6-main.service" and "journalctl -xe" for details.
And then,
root@ubuntu-dev-server:/# systemctl status postgresql@9.6-main.service
● postgresql@9.6-main.service - PostgreSQL Cluster 9.6-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
Active: failed (Result: protocol) since Fri 2020-07-24 16:56:06 UTC; 1min 54s ago
Process: 31877 ExecStop=/usr/bin/pg_ctlcluster --skip-systemctl-redirect -m fast 9.6-main stop (code=exited, status=0/SUCCESS)
Process: 951 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 9.6-main start (code=exited, status=1/FAILURE)
Main PID: 24944 (code=exited, status=0/SUCCESS)
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: Starting PostgreSQL Cluster 9.6-main...
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: The PostgreSQL server failed to start. Please check the log output:
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: 2020-07-24 16:56:06.236 UTC [956] FATAL: could not map anonymous shared memory: Cannot allocate memory
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: 2020-07-24 16:56:06.236 UTC [956] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 148471808 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
Jul 24 16:56:06 ubuntu-dev-server postgresql@9.6-main[951]: 2020-07-24 16:56:06.236 UTC [956] LOG: database system is shut down
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: postgresql@9.6-main.service: Can't open PID file /run/postgresql/9.6-main.pid (yet?) after start: No such file or directory
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: postgresql@9.6-main.service: Failed with result 'protocol'.
Jul 24 16:56:06 ubuntu-dev-server systemd[1]: Failed to start PostgreSQL Cluster 9.6-main.
I have tried several things, but it still does not start. Any recommendation is welcome.
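Going by the HINT in the journal output, the server's shared memory request (about 148 MB here) is more than the VM can hand out, so shrinking shared_buffers and/or max_connections and restarting is the usual way forward. A rough sketch, assuming the stock Debian/Ubuntu config path for a 9.6 cluster; adapt the values to the VM's RAM:
free -m                                          # see how much memory is actually available
sudo nano /etc/postgresql/9.6/main/postgresql.conf
#   shared_buffers = 128MB   -> try something smaller, e.g. 64MB
#   max_connections = 100    -> lower it if you don't need that many
sudo systemctl restart postgresql@9.6-main
Adding swap to the VM is the other common fix when memory is simply too tight.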
I have made this service file to start a python script when my raspberry pi (4) boots up:
/etc/systemd/system/plants.service
[Unit]
Description=plant-sender
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/home/theo/Repos/plants-monitor/remote
ExecStart=/usr/bin/python main.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
However, once the pi is on, I run sudo systemctl status plants, and get:
* plants.service - plant-sender
Loaded: loaded (/etc/systemd/system/plants.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-03-30 20:22:43 EDT; 1min 45s ago
Process: 323 ExecStart=/usr/bin/python main.py (code=exited, status=1/FAILURE)
Main PID: 323 (code=exited, status=1/FAILURE)
Mar 30 20:22:43 arpi systemd[1]: plants.service: Scheduled restart job, restart counter is at 5.
Mar 30 20:22:43 arpi systemd[1]: Stopped plant-sender.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Start request repeated too quickly.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Failed with result 'exit-code'.
Mar 30 20:22:43 arpi systemd[1]: Failed to start plant-sender.
But, after running sudo systemctl restart plants, the service starts up and everything is fine.
If it doesn't start on boot but does on systemctl restart, I'd be looking at whether /home/theo/Repos/plants-monitor/remote is mounted at that point.
There may be an automount (or something similar) mounting your home directory only when you log in.
If so, you could change the working directory to something that exists always, even if only a test.
Additionally, using journalctl -n 9999 -u plants will get you more log messages, so you can see why it's failing, rather than just seeing the "tried too many times, giving up" messages.
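For example, if /home really is a separate or automounted filesystem, a small drop-in should make systemd wait for it before starting the script (the path is taken from your unit, so adjust it if yours differs):
sudo systemctl edit plants
# then, in the editor that opens, add:
[Unit]
RequiresMountsFor=/home/theo/Repos/plants-monitor/remote
Reboot afterwards to check whether the service now comes up on its own.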
I have been trying to activate the bcm2835_wdt watchdog module of the Raspberry Pi 3 for 6 hours, but I couldn't get it working.
modprobe bcm2835_wdt returns no error, but the lsmod command doesn't show the bcm2835_wdt module in the list.
I have installed watchdog and chkconfig,
then:
sudo chkconfig watchdog on
When I try to start the service with
sudo /etc/init.d/watchdog start
I get an error:
[....] Starting watchdog (via systemctl): watchdog.service Job for watchdog.service failed because the control process exited with error code.
See "systemctl status watchdog.service" and "journalctl -xe" for details.
failed!
journalctl -xe returns;
-- Kernel start-up required 2093448 microseconds.
--
-- Initial RAM disk start-up required INITRD_USEC microseconds.
--
-- Userspace start-up required 5579375635 microseconds.
Jan 11 16:03:45 al sudo[935]: root : TTY=pts/1 ; PWD=/ ; USER=root ; COMMAND=/etc/init.d/watchdog start
Jan 11 16:03:45 al sudo[935]: pam_unix(sudo:session): session opened for user root by root(uid=0)
Jan 11 16:03:46 al systemd[1]: Starting watchdog daemon...
-- Subject: Unit watchdog.service has begun start-up
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit watchdog.service has begun starting up.
Jan 11 16:03:46 al sh[949]: modprobe: **FATAL: Module dcm2835_wdt not found in directory /lib/modules/4.9.59-v7+**
Jan 11 16:03:46 al systemd[1]: watchdog.service: Control process exited, code=exited status=1
Jan 11 16:03:46 al systemd[1]: Failed to start watchdog daemon.
-- Subject: Unit watchdog.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Unit watchdog.service has failed.
My question is: how do I enable the bcm2835_wdt watchdog kernel module for the Raspberry Pi 3?
Thank you in advance.
Maybe bcm2835_wdt has been compiled into the kernel on your system, so you don't see it with lsmod. Just try:
# cat /lib/modules/$(uname -r)/modules.builtin | grep wdt
kernel/drivers/watchdog/bcm2835_wdt.ko
If you can see it in the list, it has been compiled into the kernel. You can also check whether it has been enabled with this:
journalctl --no-pager | grep -i watchdog
Regarding your watchdog configuration, see this error:
modprobe: **FATAL: Module dcm2835_wdt not found in directory /lib/modules/4.9.59-v7+**
The module is referred to as dcm2835_wdt there, not bcm2835_wdt, so your configuration is trying to load a module that doesn't exist; fix the typo.
Also, keep in mind that your watchdog may already be used by systemd itself, so you should refer to that for using it.
If you don't mind, you may also try a fork bomb to see if the watchdog is able to restart your system when a problem is detected:
python -c "import os, itertools; [os.fork() for i in itertools.count()]"
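As an aside, if the end goal is just "reboot the Pi when it hangs", systemd can drive /dev/watchdog by itself and the separate watchdog daemon is not needed; uncommenting a line like the following in /etc/systemd/system.conf (the 15-second value is only an example) and rebooting is usually enough:
RuntimeWatchdogSec=15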
Postgres 9.2 on CentOS 7.
After "su - postgres" I installed using
pg-ctl initdb -D /var/lib/pgsql/data
which ran fine.
[root@server ~]# systemctl start postgresql
Job for postgresql.service failed. See 'systemctl status postgresql.service' and 'journalctl -xn' for details.
[root@server ~]# systemctl status postgresql.service
postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled)
Active: failed (Result: exit-code) since Fri 2015-11-27 13:48:57 EST; 9s ago
Process: 3262 ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o -p ${PGPORT} -w -t 300 (code=exited, status=1/FAILURE)
Process: 3256 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Nov 27 13:48:57 server.company.network systemd[1]: Starting PostgreSQL database server...
Nov 27 13:48:57 server.company.network pg_ctl[3262]: pg_ctl: could not open PID file "/var/lib/pgsql/data/postmaster.pid": Permission denied
Nov 27 13:48:57 server.company.network systemd[1]: postgresql.service: control process exited, code=exited status=1
Nov 27 13:48:57 server.company.network systemd[1]: Failed to start PostgreSQL database server.
Nov 27 13:48:57 server.company.network systemd[1]: Unit postgresql.service entered failed state.
[root@server ~]# journalctl -xn
-- Logs begin at Fri 2015-11-27 13:29:37 EST, end at Fri 2015-11-27 13:48:57 EST. --
Nov 27 13:48:35 server.company.network sudo[3228]: pam_unix(sudo:auth): conversation failed
Nov 27 13:48:35 server.company.network sudo[3228]: pam_unix(sudo:auth): auth could not identify password for [myuserid]
Nov 27 13:48:46 server.company.network sudo[3230]: myuserid : TTY=pts/0 ; PWD=/home/myuserid ; USER=root ; COMMAND=/bin/su -
Nov 27 13:48:46 server.company.network su[3234]: (to root) myuserid on pts/0
Nov 27 13:48:46 server.company.network su[3234]: pam_unix(su-l:session): session opened for user root by myuserid(uid=0)
Nov 27 13:48:57 server.company.network systemd[1]: Starting PostgreSQL database server...
-- Subject: Unit postgresql.service has begun with start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit postgresql.service has begun starting up.
Nov 27 13:48:57 server.company.network pg_ctl[3262]: pg_ctl: could not open PID file "/var/lib/pgsql/data/postmaster.pid": Permission denied
Nov 27 13:48:57 server.company.network systemd[1]: postgresql.service: control process exited, code=exited status=1
Nov 27 13:48:57 server.company.network systemd[1]: Failed to start PostgreSQL database server.
-- Subject: Unit postgresql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit postgresql.service has failed.
--
-- The result is failed.
Nov 27 13:48:57 server.company.network systemd[1]: Unit postgresql.service entered failed state.
When I "su - postgres" I can "touch" the file, "ls" the file, "rm" /var/lib/pgsql/data/postmaster.pid. Permissions on data are 700 postgres:postgres. pgsql is a symlink to /data0/postgres and postgres is 700 postgres:postgres.
ADDITIONS:
I forgot to mention that after hitting this problem, I replaced the commands for ExecStartPre and ExecStart with shell scripts that wrote the user, primary group, PGDATA, and PGPORT values to a file. They were all correct. The start still died on postmaster.pid.
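A sketch of what such a debug wrapper can look like (a reconstruction based on the description above, not the exact script that was used), pointed at from ExecStart in place of pg_ctl:
#!/bin/sh
# debug wrapper: record the identity and environment systemd hands the service, then start normally
{ id -un; id -gn; echo "PGDATA=$PGDATA"; echo "PGPORT=$PGPORT"; } > /tmp/pg-start-debug.txt
exec /usr/bin/pg_ctl start -D "$PGDATA" -s -o "-p $PGPORT" -w -t 300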
The postgresql.service file:
[root@server /]# cat /usr/lib/systemd/system/postgresql.service
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades. If you want to customize, the
# best way is to create a file "/etc/systemd/system/postgresql.service",
# containing
# .include /lib/systemd/system/postgresql.service
# ...make your changes here...
# For more info about custom unit files, see
# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F
# For example, if you want to change the server's port number to 5433,
# create a file named "/etc/systemd/system/postgresql.service" containing:
# .include /lib/systemd/system/postgresql.service
# [Service]
# Environment=PGPORT=5433
# This will override the setting appearing below.
# Note: changing PGPORT or PGDATA will typically require adjusting SELinux
# configuration as well; see /usr/share/doc/postgresql-*/README.rpm-dist.
# Note: do not use a PGDATA pathname containing spaces, or you will
# break postgresql-setup.
# Note: in F-17 and beyond, /usr/lib/... is recommended in the .include line
# though /lib/... will still work.
[Unit]
Description=PostgreSQL database server
After=network.target
[Service]
Type=forking
User=postgres
Group=postgres
# Port number for server to listen on
Environment=PGPORT=5432
# Location of database directory
Environment=PGDATA=/var/lib/pgsql/data
# Where to send early-startup messages from the server (before the logging
# options of postgresql.conf take effect)
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Disable OOM kill on the postmaster
OOMScoreAdjust=-1000
ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGDATA}
ExecStart=/usr/bin/pg_ctl start -D ${PGDATA} -s -o "-p ${PGPORT}" -w -t 300
ExecStop=/usr/bin/pg_ctl stop -D ${PGDATA} -s -m fast
ExecReload=/usr/bin/pg_ctl reload -D ${PGDATA} -s
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300
[Install]
WantedBy=multi-user.target
I figured it out. After running initdb, I had copied the data directory to the other drive. With SELinux, a copied file's type switches to that of the target parent directory. I tried to fix the directory with semanage, but that wasn't working. So I started over and moved the data directory instead, which preserved the original file type.
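For reference, the labels can also be repaired in place instead of starting over: a semanage fcontext rule for the new location plus restorecon resets them to something PostgreSQL is allowed to read. A sketch, assuming the data lives under /data0/postgres as described and that postgresql_db_t (the type carried by the stock data directory) is the right target type:
ls -Z /data0/postgres                                        # inspect the current SELinux labels
sudo semanage fcontext -a -t postgresql_db_t "/data0/postgres(/.*)?"
sudo restorecon -Rv /data0/postgres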