I'm using MongoDB v2.2.2 on a single server (Ubuntu 12.04).
It crashed with nothing in /var/log/mongodb/mongodb.log.
It seems to have crashed in the middle of writing a log entry (a character is cut off, and the entry is just a normal query log).
I also checked syslog for memory issues (for example, a killed process), but couldn't find anything.
Then I found the following error in the mongo shell (db.printCollectionStats() command):
DLLConnectionResultData
{
"ns" : "UserData.DLLConnectionResultData",
"count" : 8215398,
"size" : 4831306500,
"avgObjSize" : 588.0794211065611,
"errmsg" : "exception: assertion src/mongo/db/database.cpp:300",
"code" : 0,
"ok" : 0
}
How can I figure out what the problem is?
Thank you.
I checked that line in the source code for 2.2.2 (see here for reference). That error is specifically related to enforcing quotas on MongoDB. You haven't mentioned whether you are enforcing quotas or what you have set the files limit to (the default is 8), but you could be running into that limit here.
First, I would recommend getting onto a more recent version of 2.2 (and upgrading to 2.4 eventually, but definitely 2.2.7+ initially). If you are using quotas, this fix which went into 2.2.5 would log quota exceeded messages (previously logged only at log level 1, default is log level 0). Hence if a quota violation is the culprit here, you may get an early warning.
If that is the root cause, then you have a couple of options:
After upgrading to the latest version of 2.2, if the issue happens repeatedly, file a bug report for the crash on 2.2
Upgrade to 2.4, verify that the issue still occurs, and file a bug (or add to the above report for 2.2)
In either case, turning off quotas in the interim would be the obvious way to prevent the crash.
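If you are using quotas, the relevant settings live in the mongod configuration; a sketch of the two obvious options, using the 2.2-era INI-style option names quota and quotaFiles (the values shown are examples, and the path is the usual Ubuntu package location):

```ini
# /etc/mongodb.conf (2.2-era INI-style options; values are examples)

# Option 1: turn quotas off entirely (the default behaviour),
# which avoids the quota-enforcement code path altogether:
quota = false

# Option 2: keep quotas but raise the per-database data file limit
# from its default of 8:
# quota = true
# quotaFiles = 16
```

Remember to restart mongod after changing the config file.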
I have a new laptop, and I am trying to render the changelogs of TYPO3 locally based on the steps on https://docs.typo3.org/m/typo3/docs-how-to-document/master/en-us/RenderingDocs/Quickstart.html#render-documenation-with-docker. It runs to the end but shows some non-zero exit codes at the end.
project : 0.0.0 : Makedir
makedir /ALL/Makedir
2021-02-16 10:32:50 654198, took: 173.34 seconds, toolchain: RenderDocumentation
REBUILD_NEEDED because of change, age 448186.6 of 168.0 hours, 18674.4 of 7.0 days
OK:
------------------------------------------------
FINAL STATUS is: FAILURE (exitcode 255)
because HTML builder failed
------------------------------------------------
exitcode: 0 39 ms
When I run the command in another documentation project, it renders just fine.
I found the issue with this. The Docker container did not have enough memory allocated. I raised the available memory from 2 GB to 4 GB in Docker Desktop, and that solved the issue.
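To check how much memory the Docker VM actually has before raising it, a quick sketch (assuming a Docker version whose `docker info` supports Go-template formatting):

```shell
# Print the total memory available to the Docker VM, in bytes
# (Docker Desktop defaults to 2 GB on macOS/Windows); fall back
# to a message if the daemon is not reachable:
docker info --format '{{.MemTotal}}' 2>/dev/null || echo "docker daemon not available"
```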
You already solved the problem. But in case of similar errors: To get more information on a failure, you can also use this trick:
Create a directory tmp-GENERATED-temp before rendering. Usually, this is automatically created and then removed after rendering. If you create it before rendering, you will find log files with more details in this directory.
See the Troubleshooting page.
I had some errors where I found the output in the console insufficient and this helped me to narrow down the problem.
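A minimal sketch of the trick, assuming you run the rendering from the project root:

```shell
# Create the directory before rendering; because it already exists,
# the toolchain leaves its log files in it instead of cleaning up:
mkdir -p tmp-GENERATED-temp

# ...run the rendering as usual, then inspect what was left behind:
ls -la tmp-GENERATED-temp
```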
In case of other problems, I would file an issue in the GitHub repo: https://github.com/t3docs/docker-render-documentation
Note: This is specific to TYPO3 docs rendering and may change in the future.
I have a simple flow setup in Nifi:
GetFile picks up CSV files from a directory
PutMongoRecord stores them in a MongoDB collection (using a CSVReader)
I want to put the records into a collection whose name is derived from the filename: ${filename:substringBefore('.csv')}. My problem is that I can't seem to get the PutMongoRecord processor to read the filename. Every time, I get the same error:
com.mongodb.MongoCommandException: Command failed with error 73: 'Invalid namespace specified 'xxx.'' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "Invalid namespace specified 'xxx.'", "code" : 73, "codeName" : "InvalidNamespace" }
If I try hard-coding a collection name, it works. It also works with ${hostname()}. Since the processor is connected to the "success" output of GetFile, why isn't it reading the filename?
NOTE: I have tested this with a LogAttribute processor: a filename attribute is indeed present. I have tried various other attributes, but none seem to produce anything.
This was a bug up to NiFi 1.6.0, and it has recently been fixed; take a look at NIFI-5197. The fix will be released in NiFi 1.7.0 which, I believe, will be available in a couple of weeks.
If it is an urgent need, write to dev@nifi.apache.org and it may be possible to get the patch for this.
I'm using :
TYPO3 6.2
ke_search 2.2
Everything works fine except the indexing process. I mean:
If I index manually (with the backend module), it's OK; no error messages.
If I run the scheduler indexing task manually, it's OK; no error messages.
If I run the scheduler with the php typo3/cli_dispatch.phpsh scheduler command, then I get this error:
Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to
allocate 87 bytes) in
/path_to_my_website/typo3/sysext/core/Classes/Cache/Frontend/VariableFrontend.php on line 99
For your information:
my PHP memory_limit setting is 128M.
Other tasks are OK.
After this error appears in my console, the scheduler task is locked.
I can't figure out what's wrong.
EDIT: I flushed the frontend caches + general caches + system caches. If I run the scheduler via the console one more time, this is the new error I get:
Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to
allocate 12288 bytes) in
/path_to_my_website/typo3/sysext/core/Classes/Database/QueryGenerator.php
on line 1265
EDIT 2: if I disable all my indexer configurations, everything goes well. But if I enable even one configuration -> PHP error.
Here is one of the indexer files:
I know how to RESOLVE the problem, but I do not have any idea how to find the cause/source (e.g. which statement) of the problem, or where (tables, tools, commands) to look.
Can I see something in this excerpt from db2diag.log?
2015-06-24-09.23.29.190320+120 ExxxxxxxxxE530 LEVEL: Error
PID : 15972 TID : 1 PROC : db2agent (XXX) 0
INSTANCE: db2inst2 NODE : 000 DB : XXX
APPHDL : 0-4078 APPID: xxxxxxxx.xxxx.xxxxxxxxxxxx
AUTHID : XXX
FUNCTION: DB2 UDB, data protection services, sqlpgResSpace, probe:2860
MESSAGE : ADM1823E The active log is full and is held by application handle
"3308". Terminate this application by COMMIT, ROLLBACK or FORCE
APPLICATION.
The db2diag.log shows you the agent ID (application handle) of the application causing the problem (3308).
Provided you are seeing this in real time (as opposed to looking at db2diag.log after the fact), you can:
Use db2top to view information about this connection
Query sysibmadm.snapstmt (looking at stmt_text and agent_id)
Use db2pd -activestatements and db2pd -dynamic (keying on AnchID and StmtUID)
Use good old get snapshot for application
There are also many third-party tools that can give you the information you are looking for.
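For instance, the sysibmadm.snapstmt route can be scripted from the CLP; a sketch, where the view and column names are the standard ones but the handle value 3308 is just the one from this diag excerpt:

```shell
# Query the statement snapshot for the application handle named in
# the diag message (3308 here); needs a db2 CLP session that is
# connected to the database. Falls back to a message when no CLP
# is available on this machine:
db2 "SELECT AGENT_ID, SUBSTR(STMT_TEXT, 1, 200) AS STMT_TEXT
     FROM SYSIBMADM.SNAPSTMT
     WHERE AGENT_ID = 3308" 2>/dev/null || echo "db2 CLP not available"
```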
I'm new to web development and I wanted to get started with some RoR (using Locomotive CMS).
One of the things Locomotive asks for is to have Mongodb. I installed using homebrew by following this link http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/
It installs fine, but then I'm not able to run it!
When I type mongo in the terminal I get the following output:
"MongoDB shell version: 2.4.3
connecting to: test
Mon May 6 11:12:28.927
JavaScript execution failed:
Error: couldn't connect to server
127.0.0.1:27017 at src/mongo/shell/mongo.js:L112
exception: connect failed"
BACKGROUND TO HELP DEBUGGING (in Terminal):
1. When I type in mongod I get the following:
"all output going to: /usr/local/var/log/mongodb/mongo.log"
Ownership of mongo.log :
-rw-r--r-- 1 username admin 22133 May 6 11:13 mongo.log
2. When I input mongod --fork I get the following:
about to fork child process, waiting until server is ready for connections.
forked process: 77566
all output going to: /usr/local/var/log/mongodb/mongo.log
ERROR: child process failed, exited with error number 100
3. Typing mongod --help gives the following warning:
* WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
4. I have a folder called data (which acts as a MongoDB database; is this where it should be?) in root (PATH: /data). Ownership of the data folder:
"drwxr-xr-x 3 username wheel 102 Apr 23 21:38 data"
5. Checking if the port is free: lsof -i :27017 produces no output. I've also tried to check for a running mongo process using Activity Monitor and found zilch!
6. I've also tried mongo --repair. Didn't help!
I've been stuck on this for a while. I've looked at most responses on Stack Overflow and searched around to find a solution, but nothing has helped so far!
UPDATE:
When I tried to start the mongo shell, I was getting the following log message in mongo.log:
5/6/13 1:33:27.616 PM com.apple.launchd:
(org.mongodb.mongod[79133])
open("/private/var/log/mongodb/output.log", ...): Permission denied
So I did a chmod 777 on that folder, and the shell launches!
Although I still get a warning when it launches:
Server has startup warnings:
Mon May 6 13:33:27.693 [initandlisten]
Mon May 6 13:33:27.693 [initandlisten]
** WARNING: soft rlimits too low.
Number of files is 256, should be at least 1000
Any idea how I can silence these warnings?
To get the information you need to determine the cause of the failure, look in (and post for us) the output from /usr/local/var/log/mongodb/mongo.log from when it is trying to start.
However, the most common reason for the failure is the lack of the default database path - at /data/db. Either create that folder (and don't forget to make sure your user has permission to read/write to it) or specify a different path with the --dbpath option.
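A minimal sketch of the second option (the directory name here is an assumption; creating the default /data/db at the filesystem root usually needs sudo):

```shell
# Create a data directory the current user can read and write:
mkdir -p "$HOME/mongodb-data"
chmod 755 "$HOME/mongodb-data"
```

Then point the server at it with mongod --dbpath "$HOME/mongodb-data".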
UPDATE: as you have since found, bad permissions on the log file can cause the issue, in a similar way to bad permissions on the data path.
In terms of the warning, the information you need is here:
https://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
It is just that though, a warning - you can run MongoDB without an issue with those limits as long as it is not under heavy load. So, if this is a development environment, unless you plan on load testing, you should be fine.
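If you do want the warning gone in a given session, raising the soft limit in the shell that starts mongod is enough (1024 is an assumed target, comfortably above the ~1000 the warning asks for):

```shell
# Show the current soft limit on open files for this shell:
ulimit -Sn

# Raise it for this session; a mongod started from the same shell
# inherits the higher limit:
ulimit -Sn 1024
```

This only lasts for the current session; the superuser.com link above covers making the change permanent.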