I'm running a sharded MongoDB instance and, as per the instructions, the config servers are a replica set. I'm unable to upgrade from v4.2.9 to v4.4.0. Per the upgrade instructions, I need to upgrade the config servers first, starting with a secondary, and that is already where it failed. I shut down the secondary's instance, replaced the binaries, and restarted it, but it didn't start up again. The logs say the following (I removed the timestamps for clarity):
"msg":"The size storer reports that the oplog contains","attr":{"numRecords":53890848,"dataSize":13618131721}}
"msg":"Sampling the oplog to determine where to place markers for truncation"}
"msg":"Sampling from the oplog to determine where to place markers for truncation","attr":{"from":{"$timestamp":{"t":1494750837,"i":1}},"to":{"$timestamp":{"t":1598687615,"i":1}}}}
"msg":"Taking samples and assuming each oplog section contains","attr":{"numSamples":253,"containsNumRecords":2124552,"containsNumBytes":536870917}}
"msg":"User assertion","attr":{"error":"Location13111: field not found, expected type date","file":"src/mongo/bson/bsonelement.h","line":810}}
"msg":"WiredTiger record store oplog processing finished","attr":{"durationMillis":21}}
"msg":"~WiredTigerRecordStore for: {ns}","attr":{"ns":"local.oplog.rs"}}
"msg":"Invariant failure","attr":{"expr":"_oplogManagerCount > 0","file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":2467}}
"msg":"\n\n***aborting after invariant() failure\n\n"}
"msg":"Writing fatal message","attr":{"message":"Got signal: 6 (Aborted).\n"}}
"msg":"BACKTRACE: {bt}","attr":{"bt":{"backtrace":[{"a":"55C91A79E621","b":"55C917AE3000","o":"2CBB621","s":"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606","s+":"1E1"},{"a":"55C91A79FCC9","b":"55C917AE3000","o":"2CBCCC9","s":"_ZN5mongo15printStackTraceEv","s+":"29"},{"a":"55C91A79D4B6","b":"55C917AE3000","o":"2CBA4B6","s":"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9siginfo_tPv","s+":"66"},{"a":"7FAF200070E0","b":"7FAF1FFF6000","o":"110E0","s":"funlockfile","s+":"50"},{"a":"7FAF1FC89FFF","b":"7FAF1FC57000","o":"32FFF","s":"gsignal","s+":"CF"},{"a":"7FAF1FC8B42A","b":"7FAF1FC57000","o":"3442A","s":"abort","s+":"16A"},{"a":"55C9189E6C5F","b":"55C917AE3000","o":"F03C5F","s":"_ZN5mongo15invariantFailedEPKcS1_j","s+":"12C"},{"a":"55C9186CE4B6","b":"55C917AE3000","o":"BEB4B6","s":"_ZN5mongo18WiredTigerKVEngine16haltOplogManagerEv.cold.1904","s+":"18"},{"a":"55C918B0711C","b":"55C917AE3000","o":"102411C","s":"_ZN5mongo21WiredTigerRecordStoreD1Ev","s+":"2FC"},{"a":"55C918B0D68B","b":"55C917AE3000","o":"102A68B","s":"_ZN5mongo29StandardWiredTigerRecordStoreD0Ev","s+":"1B"},{"a":"55C9186CEC5B","b":"55C917AE3000","o":"BEBC5B","s":"_ZN5mongo18WiredTigerKVEngine21getGroupedRecordStoreEPNS_16OperationContextENS_10StringDataES3_RKNS_17CollectionOptionsENS_8KVPrefixE.cold.1921","s+":"57"},{"a":"55C919378A76","b":"55C917AE3000","o":"1895A76","s":"_ZN5mongo17StorageEngineImpl15_initCollectionEPNS_16OperationContextENS_8RecordIdERKNS_15NamespaceStringEb","s+":"316"},{"a":"55C91937A7BD","b":"55C917AE3000","o":"18977BD","s":"_ZN5mongo17StorageEngineImpl11loadCatalogEPNS_16OperationContextE","s+":"90D"},{"a":"55C91937E3D0","b":"55C917AE3000","o":"189B3D0","s":"_ZN5mongo17StorageEngineImplC1EPNS_8KVEngineENS_20StorageEngineOptionsE","s+":"270"},{"a":"55C918AC8005","b":"55C917AE3000","o":"FE5005","s":"_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE","s+":"1A5"},{"a":"55C9193889EE","b":"55C917AE3000","o":"18A59EE","s":"_ZN5mongo23initializeStorageEngineEPNS_14ServiceContextENS_22StorageEngineInitFlagsE","s+":"4CE"},{"a":"55C918A84587","b":"55C917AE3000","o":"FA1587","s":"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1409","s+":"3F7"},{"a":"55C918A88610","b":"55C917AE3000","o":"FA5610","s":"_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPcS2_","s+":"650"},{"a":"55C9189F7849","b":"55C917AE3000","o":"F14849","s":"main","s+":"9"},{"a":"7FAF1FC772E1","b":"7FAF1FC57000","o":"202E1","s":"__libc_start_main","s+":"F1"},{"a":"55C918A83A3A","b":"55C917AE3000","o":"FA0A3A","s":"_start","s+":"2A"}],"processInfo":{"mongodbVersion":"4.4.0","gitVersion":"563487e100c4215e2dce98d0af2a6a5a2d67c5cf","compiledModules":[],"uname":{"sysname":"Linux","release":"4.9.0-7-amd64","version":"#1 SMP Debian 4.9.110-3+deb9u2 (2018-08-13)","machine":"x86_64"},"somap":[{"b":"55C917AE3000","elfType":3,"buildId":"D7866CAA7FFAC402345915854064CD98A5B60C27"},{"b":"7FAF1FFF6000","path":"/lib/x86_64-linux-gnu/libpthread.so.0","elfType":3,"buildId":"16D609487BCC4ACBAC29A4EAA2DDA0D2F56211EC"},{"b":"7FAF1FC57000","path":"/lib/x86_64-linux-gnu/libc.so.6","elfType":3,"buildId":"775143E680FF0CD4CD51CCE1CE8CA216E635A1D6"}]}}}}
It appears to boil down to the following error message:
Location13111: field not found, expected type date.
src/mongo/bson/bsonelement.h:810
Googling didn't turn up anything useful. I didn't proceed any further and reverted to v4.2.9. (I wanted to limit the damage to the config secondary and not run into the same issue on the shards.)
I'm on Debian 9.13, and I tried both installing MongoDB 4.4.0 via apt and installing the Debian 9.2 binaries directly. The error was the same both times.
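For reference, the 4.4 upgrade prerequisites say the featureCompatibilityVersion of the 4.2 cluster must already be "4.2" before starting; a quick way to double-check that (a rough sketch, run from a shell that can reach a mongos or the config server primary):
mongo --eval 'db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })'
# should report { "featureCompatibilityVersion" : { "version" : "4.2" }, ... }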
Any ideas what to do about this one?
I recently reinstalled the NetBeans IDE on my Windows 10 PC in order to restore some unrelated configurations. When I tried checking for new plugins so I could download the Sakila sample database,
I got this error.
I've tested the connection with both No Proxy and Use Proxy Settings, and both connection tests seem to finish successfully.
I have allowed NetBeans through my firewall, but this has changed nothing either.
I haven't touched my proxy configuration, so it's on the default (autodetect). Switching autodetect off doesn't change anything either, no matter what proxy config I have in NetBeans.
Here's part of my log file that might be helpful:
Compiler: HotSpot 64-Bit Tiered Compilers
Heap memory usage: initial 32,0MB maximum 910,5MB
Non heap memory usage: initial 2,4MB maximum -1b
Garbage collector: PS Scavenge (Collections=12 Total time spent=0s)
Garbage collector: PS MarkSweep (Collections=3 Total time spent=0s)
Classes: loaded=6377 total loaded=6377 unloaded 0
INFO [org.netbeans.core.ui.warmup.DiagnosticTask]: Total memory 17.130.041.344
INFO [org.netbeans.modules.autoupdate.updateprovider.DownloadListener]: Connection content length was 0 bytes (read 0bytes), expected file size can`t be that size - likely server with file at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b is temporary down
INFO [org.netbeans.modules.autoupdate.ui.Utilities]: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.doCopy(DownloadListener.java:155)
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.streamOpened(DownloadListener.java:78)
at org.netbeans.modules.autoupdate.updateprovider.NetworkAccess$Task$1.run(NetworkAccess.java:111)
Caused: java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.notifyException(DownloadListener.java:103)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.copy(AutoupdateCatalogCache.java:246)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.writeCatalogToCache(AutoupdateCatalogCache.java:99)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogProvider.refresh(AutoupdateCatalogProvider.java:154)
at org.netbeans.modules.autoupdate.services.UpdateUnitProviderImpl.refresh(UpdateUnitProviderImpl.java:180)
at org.netbeans.api.autoupdate.UpdateUnitProvider.refresh(UpdateUnitProvider.java:196)
[catch] at org.netbeans.modules.autoupdate.ui.Utilities.tryRefreshProviders(Utilities.java:433)
at org.netbeans.modules.autoupdate.ui.Utilities.doRefreshProviders(Utilities.java:411)
at org.netbeans.modules.autoupdate.ui.Utilities.presentRefreshProviders(Utilities.java:405)
at org.netbeans.modules.autoupdate.ui.UnitTab$14.run(UnitTab.java:806)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:1423)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:2033)
It might be that the update server is just down right now; I haven't been able to verify that either. But it might also be something wrong with my configuration. I'm going crazy!
Something that worked for me was changing "http:" to "https:" in the update URLs.
I.e., change "http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
to "https://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz".
No idea why that makes it work on my end. I'm running Linux Mint 19.1.
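If you want to sanity-check from outside NetBeans that the catalog is actually served over https (just a sketch, assuming curl is available):
curl -I "https://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
# a 200 response with a non-zero Content-Length suggests the https endpoint is serving the file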
Yesterday we had a crash of PostgreSQL 9.5.14 running on Debian 8 (Linux xxxxxx 3.16.0-7-amd64 #1 SMP Debian 3.16.59-1 (2018-10-03) x86_64 GNU/Linux): a segmentation fault. The database closed all connections and reinitialized itself, staying in recovery mode for about a minute.
PostgreSQL log:
2018-10-xx xx:xx:xx UTC [580-2] LOG: server process (PID 16461) was terminated by signal 11: Segmentation fault
kern.log:
Oct xx xx:xx:xx xxxxxxxx kernel: [117977.301353] postgres[16461]: segfault at 7efd3237db90 ip 00007efd3237db90 sp 00007ffd26826678 error 15 in libc-2.19.so[7efd322a2000+1a1000]
According to the libc documentation I found (https://support.novell.com/docs/Tids/Solutions/10100304.html), error code 15 means:
NX_EDEADLK 15 (resource deadlock would occur), which does not tell me much.
Could you please tell me if we can do something to avoid this problem in the future? This server is, of course, a production one.
All packages are currently up to date. Upgrading PG is unfortunately not an option. The server runs on Google Compute Engine.
error code 15 means: NX_EDEADLK 15
No, it doesn't mean that. This answer explains how to interpret 15 here.
It's bits 0, 1, 2, 3 set => protection fault, write access, user mode, use of reserved bit. Most likely your postgres process attempted to write through some wild pointer.
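Spelled out, 15 decimal is 8 + 4 + 2 + 1, i.e. the four low bits of the page-fault error code:
bit 0 (1): the page was present, so this is a protection violation rather than a missing page
bit 1 (2): the faulting access was a write
bit 2 (4): the fault happened in user mode
bit 3 (8): a reserved bit was set in a page-table entry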
if we can do something to avoid this problem in the future?
The only thing you can do is find the bug and fix it, or upgrade to a release of postgres where that bug is already fixed (and hope that no new ones were introduced).
To understand where the bug might be, you should check whether a core dump was produced (if not, enable core dumps). If you have the core, run gdb /path/to/postgres /path/to/core, and then the where GDB command. That will give you the crash stack trace, which may allow you to find similar bug reports.
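A rough sketch of those steps on a Debian-packaged 9.5 install (the binary path is an assumption; adjust it to wherever your postgres executable actually lives, and note that ulimit -c only affects processes started from that shell, so a daemon started by init needs the limit raised in its service configuration):
ulimit -c unlimited   # allow core dumps before the next crash
gdb --batch -ex 'where' /usr/lib/postgresql/9.5/bin/postgres /path/to/core   # print the crash stack trace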
I'm new to web development and I wanted to get started with some RoR (using Locomotive CMS).
One of the things Locomotive asks for is to have MongoDB. I installed it using Homebrew by following this link: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/
It installs fine, but then I'm not able to run it!
When I type 'mongo' in the terminal I get the following output:
"MongoDB shell version: 2.4.3
connecting to: test
Mon May 6 11:12:28.927
JavaScript execution failed:
Error: couldn't connect to server
127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed"
BACKGROUND TO HELP DEBUGGING (in Terminal):
1. When I type mongod I get the following:
"all output going to: /usr/local/var/log/mongodb/mongo.log"
Ownership of mongo.log :
-rw-r--r-- 1 username admin 22133 May 6 11:13 mongo.log
2. When I run mongod --fork I get the following:
about to fork child process, waiting until server is ready for connections.
forked process: 77566
all output going to: /usr/local/var/log/mongodb/mongo.log
ERROR: child process failed, exited with error number 100
3. Typing mongod --help gives the following warning:
* WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
4. I have a folder called data (which acts as a mongodb database; is this where it should be?) in root (PATH: /data). Ownership of the data folder:
"drwxr-xr-x 3 username wheel 102 Apr 23 21:38 data"
5. Checking if the port is free: lsof -i :27017. I've also tried to check for a running mongo process using Activity Monitor and found zilch!
No output
6. I've also tried mongo --repair. Didn't help!
I've been stuck on this for a while; I've looked at most responses on Stack Overflow and searched around to find a solution, but nothing has helped so far!
UPDATE:
When I tried to start the mongo shell, I was getting the following log message from mongo.log:
5/6/13 1:33:27.616 PM com.apple.launchd:
(org.mongodb.mongod[79133])
open("/private/var/log/mongodb/output.log", ...): Permission denied
So I did a chmod 777 on that folder and the shell launches!
Although I still get a warning when it launches:
Server has startup warnings:
Mon May 6 13:33:27.693 [initandlisten]
Mon May 6 13:33:27.693 [initandlisten]
** WARNING: soft rlimits too low.
Number of files is 256, should be at least 1000
Any idea how I can silence these warnings?
To get the information you need to determine the cause of the failure, you need to look in (and post for us) the output from /usr/local/var/log/mongodb/mongo.log from when it is trying to start.
However, the most common reason for the failure is the lack of the default database path - at /data/db. Either create that folder (and don't forget to make sure your user has permission to read/write to it) or specify a different path with the --dbpath option.
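A minimal sketch of that first option (assuming you run mongod as your own user):
sudo mkdir -p /data/db
sudo chown $(whoami) /data/db
mongod   # or: mongod --dbpath /some/other/path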
UPDATE: as you have since found, bad permissions on the log file can cause the issue, in a similar way to bad permissions on the data path.
In terms of the warning, the information you need is here:
https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
It is just that though, a warning - you can run MongoDB without an issue with those limits as long as it is not under heavy load. So, if this is a development environment, unless you plan on load testing, you should be fine.
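If you do want to quiet the warning, one option is to raise the per-shell open-files limit before starting mongod (a sketch; on OS X the hard limit may also need raising, as the linked question discusses):
ulimit -n 1024   # only affects processes started from this shell session
mongod --dbpath /data/db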
Got a system with Postgres broken (a RAID failure is the reason), without any backups.
Trying to move the data to another computer with Postgres (and finally make a backup).
But every time I set up the data directory and run postgres I get this message:
GET FATAL: database files are incompatible with server
2012-08-15 19:58:38 GET DETAIL: The database cluster was initialized with BLCKSZ 16777216, but the server was compiled with BLCKSZ 8192.
2012-08-15 19:58:38 GET HINT: It looks like you need to recompile or initdb.
16777216 is a very strange number (2 to the power of 24 - too big).
However, I can't change the default value of 8192 when compiling (playing with --with-blocksize= has no effect, and I can't find BLCKSZ in the header files).
Any way to extract the data?
This is the environment and the circumstances:
harddrive: RAID 1 with 3 SAS disks in array
OS: ubuntu 10.04.04 amd64
Postgres: 9.1 (installed via apt-get; we changed the repository links to a higher Ubuntu version)
the system became broken; after some time we got
AAC: Host Adapter BLINK LED 0x56
AACO: Adapter kernel panic'd 56
(filesystem or hardware error)
Somehow we got hold of the data directory. pg_controldata showed:
pg_control version number: 903
Catalog version number: 201105231
Database system identifier: 5714530593695276911
Database cluster state: shut down
pg_control last modified: Tue 15 Aug 2012 11:50:50
Latest checkpoint location: 1B595668/2000020
Prior checkpoint location: 0/0
Latest checkpoint's REDO location: 1B595668/2000020
Latest checkpoint's TimeLineID: 1
Latest checkpoint's NextXID: 0/4057946
Latest checkpoint's NextOID: 40960
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Latest checkpoint's oldestXID: 670
Latest checkpoint's oldestXID's DB: 1344846103
Latest checkpoint's oldestActiveXID: 0
Time of latest checkpoint: Tue 15 Aug 2012 11:50:50
Minimum recovery ending location: 0/0
Backup start location: 0/0
Current wal_level setting: minimal
Current max_connections setting: 100
Current max_prepared_xacts setting: 0
Current max_locks_per_xact setting: 64
Maximum data alignment: 8
Database block size: 16777216
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 2387576020
Maximum size of a TOAST chunk: 0
Date/time type storage: floating-point numbers
Float4 argument passing: by reference
Float8 argument passing: by reference
First I tried to bring the DB up on an Ubuntu server (hard disk: simple serial 2, Ubuntu 10.04 i386, Postgres 9.1) and got the same exception as above (with BLCKSZ).
That's why I then deployed Ubuntu 10.04 amd64 with an English-locale Postgres 9.1 in a virtual machine (because I got '?' instead of Russian characters in the error logs in the previous step).
Got the same exception (with BLCKSZ).
After that I removed the apt-get postgres version and compiled it myself as described in the docs: http://www.postgresql.org/docs/9.1/static/installation.html.
Playing with configure --with-blocksize=BLOCKSIZE took no effect; I got the same error.
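(Note for readers: as far as I can tell, --with-blocksize takes the size in kilobytes and only accepts powers of two up to 32, so a 16MB block size cannot be built this way at all; the invocation looks roughly like:)
./configure --with-blocksize=16   # size in kB; 8 is the default
make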
Sorry for the post.
The pg_control was broken by some manipulations of ours.
So, the cluster was successfully restored by pg_resetxlog with the initial data.
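For anyone hitting the same thing, the invocation is roughly the following (the data directory path is just an example; -f forces the reset even when pg_control looks damaged, and it can lose recent transactions, so run it only on a copy of the data directory and dump the data out immediately afterwards):
pg_resetxlog -f /var/lib/postgresql/9.1/main
pg_dumpall > backup.sql   # after starting the server against that data directory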
A block size of 16MB would be really weird, and since these two values also look completely bogus:
Maximum columns in an index: 2387576020
Maximum size of a TOAST chunk: 0
...you might want to question the integrity of this data before spending time on compiling postgres with a non-standard block size.
If you look at the sizes of the files corresponding to relations, are they multiples of 16MB or of 8kB?
If the database has some multi-gigabyte tables, what appears to be the cut-off size on disk (the size above which postgres splits the data into several files)? This should be equal to the data block size multiplied by the blocks per segment of large relation. On a default install, it's 1GB.
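A hedged way to check, assuming the copied data directory sits at /path/to/data (on a default build the largest segment files should top out at exactly 8192 bytes * 131072 blocks = 1073741824 bytes):
find /path/to/data/base -type f -name '[0-9]*' -printf '%s\t%p\n' | sort -rn | head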
See here for details on configuring kernel resources. Perhaps the default/current settings for this new OS won't allow the postmaster to start.
Here are details on the meaning and context of the BLCKSZ parameter. Was the system that failed running a 64-bit build of PostgreSQL while the new system is a 32-bit build? If possible, obtaining version information from the failed system's PostgreSQL could shed light on the problem. Let us know what version, build, and OS were used. Was it a custom build?