Log file keeps crashing the MongoDB portion of my project

So currently I am hosting a MEAN stack project on Linode cloud. Every few days the MongoDB side of my project crashes, and the only way I can fix it is by logging into the server via WinSCP, deleting the mongod.log file, and then rebooting. This is, however, only a short-term solution.
So far I've changed the rotation of the log file from weekly to daily to reduce its size, but that hasn't worked.
The last crash gave this message:
FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /var/lib/mongodb/diagnostic.data/metrics.interim.temp
Actual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)39, mongo::AssertionException>
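That FileStreamFailed error often indicates the disk (or the filesystem holding /var) is out of space, and an unrotated mongod.log is a common culprit. Rather than deleting the log by hand, logrotate can rename it and then signal mongod to reopen it. A minimal sketch of a rule, assuming the log lives at /var/log/mongodb/mongod.log and that mongod.conf sets systemLog.logRotate to reopen (both the path and that setting are assumptions, not from the question):

# /etc/logrotate.d/mongod -- hypothetical rule; adjust paths to your install
/var/log/mongodb/mongod.log {
    daily
    rotate 7            # keep a week of compressed history
    compress
    missingok
    notifempty
    postrotate
        # SIGUSR1 tells mongod to rotate/reopen its log file
        /bin/kill -SIGUSR1 $(pidof mongod) 2>/dev/null || true
    endscript
}

You can dry-run the rule with logrotate -d /etc/logrotate.d/mongod before relying on it.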

Related

CS50x 2022 VS Code/Codespaces/GitHub remote setup does not work

I have recently enrolled with edX for the CS50x course. I have successfully completed my Week 0 problem set. I am now struggling to set up VS Code/Codespaces/GitHub for the next psets. I have followed all the steps in the CS50 procedure, together with the provided links, but I keep getting different error messages, such as "failed to save 'settings.json': unable to write file 'vscode-remote://codespaces....." and another: "unable to save 'settings.json': The content of the file is newer. Please compare your version with the file contents or overwrite the content of the file with your changes." The terminal has no cursor, and I cannot type or even paste anything into it. Also, the CLI panel normally shows only the Problems, Terminal, and Output tabs; the other tabs (Jupyter, Ports, and Debug) that I have seen in videos I have watched are never there.

MongoDB log file returning continuous lines of flushed journal instances

Why is it that when I type 'mongod -f \pluralsight\mongod.conf' (the path of my conf file) in the terminal, I get continuous "flushed journal" lines spamming my log file? Is this normal?
Here is my configuration file in case you need it.
I recently installed MongoDB, and I just don't want a log file recording every time my storage is flushed; that seems like poor data management. I'm not sure what options are available to address this, whether this behavior is normal and acceptable, or whether I did something wrong when I started this project.
I figured it out: in my mongod.conf file I had the verbosity set to "verbose=vvvvvv". I set it to "verbose=vvv" so that less information gets logged.
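For reference, a minimal sketch of the relevant settings in the legacy INI-style config, mirroring the asker's verbose= syntax; the dbpath and logpath values below are placeholders, not the paths from the question (on modern MongoDB the YAML equivalent is systemLog.verbosity, 0-5):

# mongod.conf (legacy INI format); paths are placeholders
dbpath = /data/db               # where the data files live
logpath = /var/log/mongod.log   # where mongod writes its log
logappend = true                # append rather than overwrite on restart
verbose = vvv                   # each extra v increases log verbosity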

What are the size limits of file *.agg.flex.data?

What are the size limits of *.agg.flex.data files? These files are typically located in the SSAS data directory.
While processing the cubes with "Process Index", I am getting the error message below:
File system error: The following file is corrupted: Physical file: \?\F:\OLAP\.0.db\.0.cub\.0.det\.0.prt\33.agg.flex.data. Logical file .
However, if we navigate to the location mentioned in the error message, the specified file is not present (at the given location).
If anyone has faced such an issue before, please help.
Any help would be highly appreciated.
I don't believe agg.flex.data files have a hard upper limit. I suspect that error means either that you had a disk failure or that the database is corrupt. I would either unprocess (ProcessClear) and reprocess the database, or delete the database, redeploy, and process. Hopefully you can work around the error that way.
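For reference, ProcessClear can be issued as an XMLA command (e.g. from an XMLA query window in SSMS); a minimal sketch, with a placeholder DatabaseID:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- ProcessClear drops all processed data; follow it with a ProcessFull to rebuild -->
  <Type>ProcessClear</Type>
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID> <!-- placeholder; use your database's ID -->
  </Object>
</Process>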

Aggregation of IIS logs

We have an IIS .Net application deployed across several machines. We use IIS log information to do reporting of performance of the web application and navigation by the user. Currently the reporting is only required infrequently (once a day, for the previous day), so we just roll the logs every 24 hours, and move the old logs to our reporting server.
We have a new requirement that means we need much faster turnaround on the IIS log information, say every minute for the sake of the discussion.
In the Apache world there are tools, like Facebook's Scribe, for scalably moving web server logs across a network of servers.
Are there any similar tools available for IIS?
Is this the right question to ask?
Should we be doing something different, if the timing requirements have changed so much?
I've looked at this question and the answers, and the only one that seems to come close is this one.
Pointers appreciated!
Snare is a little old but worth mentioning.
Snare Agent for IIS Servers
http://www.intersectalliance.com/projects/SnareIIS/index.html
I used this old version a long time ago, and it worked well, forwarding IIS logs over the network via syslog.
Today, they have a newer version called Snare Epilog:
http://www.intersectalliance.com/projects/EpilogWindows/index.html
The code is also open source; perhaps you might find it useful.
You might also want to try ...
http://nxlog.org
http://www.syslogserver.com/syslogagent.html
I tend to write a .bat file in conjunction with Log Parser 2.2. The .bat file determines the appropriate file dates and pulls the corresponding logs from multiple IIS servers' log locations into a single local directory. Once the files are copied over, I run a Log Parser command to query the content of all the log files and produce a single output file in .csv format. Finally, I run an SSIS job to import the new .csv file into a running log table, which I can then query on an ongoing basis.
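As a rough sketch of that pattern (the server names, share paths, and output location below are made up for illustration):

:: pull the logs from each web server into one local folder (paths are hypothetical)
copy \\web01\c$\inetpub\logs\LogFiles\W3SVC1\u_ex*.log C:\logs\
copy \\web02\c$\inetpub\logs\LogFiles\W3SVC1\u_ex*.log C:\logs\

:: aggregate hits and average time-taken per URL across all pulled logs into one CSV
LogParser.exe "SELECT cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgMs INTO C:\logs\daily.csv FROM C:\logs\u_ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC" -i:IISW3C -o:CSV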

DB2 Transaction log is full. How to flush / clear it?

I'm working on an experiment for a course I'm taking about tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment.
My problem, however, is that I have to test no compression against row compression in DB2, and to do that I've created a bash script that runs those experiments. But when I reach the compression part I get the error "Transaction log is full", and no matter how low I set the number of inserts, it complains about my transaction log.
I've scoured Google for a day now trying to find some way to flush or clear the log, or just get rid of it; I don't need it. I've tried to increase its size, but nothing has helped.
I hope someone has an answer to this frustrating problem.
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction commits or is rolled back, DB2 releases the log space used by that transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
No need to restart. Just force the applications off using db2 force applications all.
Increase the active log file size (LOGFILSIZ), force the application connections off and terminate them, then try to run the job again:
db2 force applications all                         # disconnect all applications
db2 update db cfg for sample using logfilsiz 5125  # raise the log file size (in 4 KB pages)
db2 force applications all
db2 terminate                                      # stop the CLP back-end process
db2 connect to sample                              # reconnect to the database
Run your job and monitor.
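If the job still fills the log, the number of log files can be raised too, and since one huge uncommitted insert run is usually what exhausts the log, committing in batches inside your script also helps. A sketch, again assuming a database named sample:

db2 update db cfg for sample using LOGPRIMARY 20   # primary log files; takes effect once all connections drop
db2 update db cfg for sample using LOGSECOND 40    # secondary log files, allocated on demand (changeable online)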
Just restart the instance; that would release the pending logs, and you should be fine.