What are the size limits of file *.agg.flex.data? - sql-server-2008-r2

What are the size limits of the *.agg.flex.data files? These files are typically located in the SSAS data directory.
While processing the cubes with "Process Index", I am getting the error message below:
File system error: The following file is corrupted: Physical file: \?\F:\OLAP\.0.db\.0.cub\.0.det\.0.prt\33.agg.flex.data. Logical file .
However, when we navigate to the location mentioned in the error message, the specified file is not present there.
If anyone has faced this issue before, please help.
Any help would be highly appreciated.

I don't believe .agg.flex.data files have a hard upper size limit. That error more likely means you had a disk failure or that the database is corrupt. I would either unprocess (ProcessClear) and reprocess the database, or delete the database, redeploy it, and process it again. Hopefully one of those works around the error.
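For reference, the ProcessClear step can be issued as an XMLA command (for example from an XMLA query window in SSMS); a minimal sketch, where the database ID is illustrative:

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyOlapDatabase</DatabaseID>
  </Object>
  <Type>ProcessClear</Type>
</Process>

Running the same command afterwards with <Type>ProcessFull</Type> reprocesses the database.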

Related

Log file keeps crashing the MongoDB portion of my project

Currently I am hosting a MEAN stack project on Linode. Every few days the MongoDB side of my project seems to crash, and the only way I can fix it is by logging into the server via WinSCP, deleting the mongod.log file, and then rebooting. This is, however, only a short-term solution.
So far I've changed the log rotation from weekly to daily to reduce the size of the log file, but that hasn't worked.
The last crash gave this message:
FileStreamFailed: Failed to write to interim file buffer for full-time diagnostic data capture: /var/lib/mongodb/diagnostic.data/metrics.interim.temp
Actual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)39, mongo::AssertionException>
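As an aside on the rotation tweak mentioned above: mongod can also be asked to rotate its log on demand via the logRotate admin command; a minimal pymongo sketch, assuming a default local connection (the URI is illustrative):

import pymongo

# Connect to the local mongod (adjust the URI for your deployment).
client = pymongo.MongoClient("mongodb://localhost:27017")

# Ask the server to rotate its log right away; with the default
# systemLog.logRotate = rename, the current mongod.log is renamed with a
# timestamp suffix and a fresh log file is started.
client.admin.command("logRotate")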

Unable to read a file from a partitioned directory

I am unable to read a file from a partitioned directory in DBFS, but other files read fine in normal scenarios.
Am I missing something? Is there an alternative?
(Screengrabs of the failed and the successful run were attached to the original question.)
Please change the path in the failed scenario to /dbfs/<path> instead of dbfs:/<path>. Spark APIs understand the dbfs:/ scheme, but local file APIs (pandas, open(), and so on) need the /dbfs FUSE mount path.
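A minimal sketch of the two path styles, assuming a Databricks notebook where spark is the pre-defined SparkSession (paths and file names are illustrative):

import pandas as pd

# Spark APIs resolve the dbfs:/ scheme directly.
spark_df = spark.read.parquet("dbfs:/mnt/raw/events/")

# Local file APIs such as pandas or open() go through the /dbfs FUSE mount.
pdf = pd.read_parquet("/dbfs/mnt/raw/events/date=2020-01-01/part-00000.parquet")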

Quicklisp: Archive is the wrong size

I'm trying to use the cl-heap library, but when I run
(quicklisp:quickload 'cl-heap)
it returns:
The archive file "cl-heap-0.1.6.tgz" for "cl-heap" is the wrong size: expected 26,979, got 12,288
What can I do to be able to run cl-heap?
I am quite sure this means that your downloaded file is broken. Maybe the download was interrupted, or your disk is full.
Call ql:uninstall on the system first, make sure you have enough disk space and a working network connection, then ql:quickload it again.
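A minimal sketch of that retry at the REPL, using the system name from the question:

(ql:uninstall "cl-heap")   ; discard the broken, partially downloaded release
(ql:quickload "cl-heap")   ; download the archive again and load the system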

could not open session as Root

I came across this error, which is apparently pretty common on Linux systems:
"Too many open files"
In my code I tried to raise the Python open-file limit to unlimited, and it threw an error saying that I could not exceed the system limit.
import resource

try:
    # Attempt to set the soft limit to 500 and the hard limit to -1
    # (RLIM_INFINITY); raising the hard limit above the current ceiling
    # fails without the right privileges.
    resource.setrlimit(resource.RLIMIT_NOFILE, (500, -1))
except Exception as err:
    print err
    pass
So I Googled around a bit and followed a tutorial.
However, I set everything to 9999999, which I thought would be as close to unlimited as I could get. Now I cannot open a session as root on that machine. I can't log in as root at all and am pretty much stuck. What can I do to get this machine working again? I need to be able to log in as root! I am running CentOS 6 and it's as up to date as possible.
Did you try turning it off and on?
If this doesn't help, you can supply init=/bin/bash as a kernel boot parameter to enter a root shell, or boot from a live CD, and then revert your changes.
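Once you have a shell, the likely culprit is the nofile value itself: anything above the kernel's fs.nr_open ceiling (1048576 by default) makes pam_limits fail, which is what produces the "could not open session" message. An illustrative /etc/security/limits.conf fragment with values back under that ceiling:

*       soft    nofile    65536
*       hard    nofile    65536
root    soft    nofile    65536
root    hard    nofile    65536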
After running an strace of 'su -', I looked for the 'No such file or directory' errors. When comparing the output, I found that some of those errors are harmless; however, other files were missing on my problem system that existed on a comparison system. Ultimately, it led me to a faulty line in /etc/pam.d/system-auth-ac referencing an invalid shared object.
So my recommendation is to go through your /etc/pam.d config files and verify that the referenced shared-object libraries exist, or look in /var/log/secure, which should also give some clues about missing shared objects.

Is it possible to recover a DB file in Sybase? I have lost my db file

I have lost my "Trak.db". The log file is still available; is it possible to recover the database through the log file? What is the use of log files here?
The best you can do is to run dbtran against the log file; it will generate a SQL file containing the statements that were executed while that log was active. Of course, whether the log contains all your data depends on how you were logging, truncating, and backing up. Technically it is possible if you have all your logs and the schema.
For reference: http://www.ianywhere.com/developer/product_manuals/sqlanywhere/0901/en/html/dbdaen9/00000590.htm
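As a rough sketch of that step (dbtran is the SQL Anywhere log translation utility; the log file name below is an assumption, since only Trak.db is named in the question):

dbtran Trak.log recovered.sql

The generated recovered.sql can then be applied to a database rebuilt from your schema.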