I have a .nfy dump file on my client system which is taking up too much space. It was last updated on 4 August. Will it be OK if I delete it? Will it be permanently deleted, or will DB2 create a new one?
Yes, it is perfectly safe to delete the administration notification log file, and yes, it will be re-created as necessary by the instance.
You can also enable automatic rotation of this file and the diagnostic log file (db2diag.log) by setting the diagsize instance configuration parameter, for example:
db2 update dbm cfg using diagsize 1024
The command above instructs the instance to maintain 10 rotating log files with a combined maximum size of 1024 MB (roughly 102 MB each). Once the newest file reaches its maximum size, the oldest of the 10 files is deleted and a new file is created.
Note that you will need to restart the DB2 instance for the new parameter value to take effect.
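For completeness, a typical sequence to restart the instance and confirm the new value might look like this (assuming a Linux/UNIX shell; the grep is just a convenience filter):

db2stop
db2start
db2 get dbm cfg | grep -i diagsize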
Recently I had to wipe and reinstall my InfluxDB database, which manages data from about 100 smart home devices, from plugs and switches to a lot of sensors (/var/lib/influxdb was about 2 GB). Since the sensors were continuing to collect data, I took the corrupt database offline and set up a new Influx instance, which then continued collecting data. My idea was to inspect the broken DB and copy the intact parts of the data over to the new one later.
I actually managed to export most of the data using influx_inspect export -database foo ... to a file. I also managed to initiate a re-import using influx -import -path ..., which churned along happily for about two days until my data was copied.
But when I issue queries against the new database now, the imported data is not found. It is nowhere to be seen. Queries that worked before the crash only return data collected since the reinstall.
The filesystem size is similar, so the data must be in there somewhere:
pi@raspberrypi:~ $ sudo du -ks /var/lib/influxdb*
1910720 /var/lib/influxdb
1902236 /var/lib/influxdb2
1910284 /var/lib/influxdb-old
influxdb is the old DB, influxdb2 is the new current one, influxdb-old is a previous backup copy.
But a query like SELECT value FROM my_measurement, which would return hundreds of thousands of values from the old database, now returns just a few hundred (collected since two days ago). Also, all my frontend tools (like Grafana) that used to show two years' worth of data for visualization now just show the last two days.
So: where has the re-imported data gone?
I am using a Raspi 4b, Raspbian Linux, with ioBroker and InfluxDB 1.8.6.
Solved it.
My old database had a different retention policy configured as the default, and the new database did not use the same default retention policy. Since none of my queries specified an explicit retention policy, only the data in the default retention policy was found.
I re-imported all the data with the retention policy set to the previous value, and everything seems to be OK.
It is possible to move data between policies, like this:
SELECT * INTO "db"."newrp"."newmeasurement" FROM "db"."oldrp"."oldmeasurement" GROUP BY *
but I ended up re-importing which also worked fine.
See https://community.influxdata.com/t/applying-retention-policies-to-existing-measurments/802 for the above command.
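For anyone hitting the same issue, a minimal InfluxQL sketch of the check and fix (the database name foo and the policy name oldrp are placeholders):

SHOW RETENTION POLICIES ON "foo"
CREATE RETENTION POLICY "oldrp" ON "foo" DURATION INF REPLICATION 1 DEFAULT

Alternatively, an existing policy can be made the default with ALTER RETENTION POLICY "oldrp" ON "foo" DEFAULT.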
Why is it that when I run 'mongod -f \pluralsight\mongod.conf' (the path of my conf file) in the terminal, I get flush spam in my log file?
Is this normal?
Here is my configuration file in case you need it.
I recently installed MongoDB, and I just don't want a file logging every time my storage is flushed; that seems like poor data management. I'm not sure what options are available to address this, whether this is normal and acceptable, or whether there's something I'm doing wrong in how I started this project.
I figured it out: in my mongod.conf file I had the verbose property set to "verbose=vvvvvv". I set it to "verbose=vvv" so that less information is logged.
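For context, the relevant part of a legacy-style (pre-YAML) mongod.conf might look roughly like this; the dbpath and logpath values are placeholders, and the verbose line is the only one that actually changed:

dbpath = \pluralsight\db
logpath = \pluralsight\mongod.log
verbose = vvv

In newer MongoDB releases that use the YAML configuration format, the equivalent setting is systemLog.verbosity.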
I have two drives, A and B. A Python script creates some files on drive A, and a PowerShell script copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process. Waiting 30 seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix it, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line. You will be fine.
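For example, once Process Explorer shows the PID of the offending process (1234 below is just a placeholder), you can kill it from a command prompt:

taskkill /PID 1234 /F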
If you want to skip such files, you can use /R:n, where n is the number of retries.
For example, /W:3 /R:5 will retry 5 times, waiting 3 seconds between attempts.
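A complete command might look like this (the source and destination paths are placeholders based on the error message in the question):

robocopy D:\source \\x.x.x.x\share1\source /E /R:5 /W:3

With these switches, robocopy gives up on a file that stays locked after 5 retries (3 seconds apart) and moves on to the rest of the copy.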
How can I overcome this error?
If backup is what you have in mind, and you encounter in-use files frequently, you should look into the Volume Shadow Copy Service (VSS), which allows files to be copied even while they are ‘in use’. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
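A rough sketch of how ShadowSpawn is typically combined with robocopy (the drive letter and paths are placeholders; check the project's README for the exact options):

shadowspawn D:\source Q: robocopy Q:\ \\x.x.x.x\share1\source /E /R:5 /W:3

ShadowSpawn mounts a shadow copy of the source as a temporary drive (Q: here) and runs the given command against it, so files that are in use can be copied from the snapshot.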
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem, because robocopy was apparently trying to write to a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked. I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves that were locked, by the SQL Server Agent. As @Oseack said, there may have been a need to use another tool while the backup files were still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.
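In other words, something along these lines (the paths are placeholders):

robocopy \\sourceserver\backups \\destserver\backups /E /W:5 /R:10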
I've written code that creates full backups of my ESENT database, using the JetBeginExternalBackup API.
Following the MSDN guidelines, I backed up every file returned by JetGetAttachInfo and JetGetLogInfo.
I made the backup, erased the old database, and copied the backup data to the database folder.
The DB engine was unable to start; the JetInit error code was JET_errMissingLogFile.
I've checked the backup, it only contains the database file, and "<inst>XXXXX.log" log files. It lacks the current log file (I'm using circular logging, BTW).
Is there any way to restore such a backup?
I don't want to use the JetExternalRestore API because it's too complex: I don't need to restore to another location, I don't understand why there are 3 input folders rather than 2, and I don't know the values to supply in the genLow and genHigh arguments.
I do need external backups: the ESENT database is used by ASP.NET on a remote server, and I'm backing it up over the Internet.
Or, maybe there's a way to retrieve the name of the current log file, and I should just add it to the backup?
Thanks in advance!
P.S. I've got no permission to spawn processes on my web server, so using eseutil.exe is not an option.
Unpack all backed-up files into a single folder.
Take the name of your main database file and change its extension to .pat. Create a zero-length file with that name, e.g. database.pat.
After this simple step, call the JetRestoreInstance API; it will restore the backup from that folder.
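For what it's worth, a minimal C sketch of that restore step (the folder path and instance name are placeholders, error handling is reduced to a single check, and you should verify the exact calling pattern against the ESENT documentation; link against esent.lib):

#define JET_VERSION 0x0601   /* make the newer Jet* APIs visible in esent.h */
#include <windows.h>
#include <esent.h>
#include <stdio.h>

int main(void)
{
    /* Folder containing the unpacked backup: the .edb file, the
       <inst>XXXXX.log files and the zero-length database.pat file. */
    const char *backupDir = "C:\\restore\\backup";

    JET_INSTANCE inst = JET_instanceNil;
    JetCreateInstance(&inst, "restore");

    /* Passing NULL as the destination restores the files to the
       paths recorded in the backup. */
    JET_ERR err = JetRestoreInstance(inst, backupDir, NULL, NULL);
    if (err != JET_errSuccess)
    {
        printf("JetRestoreInstance failed with error %d\n", (int)err);
        return 1;
    }
    printf("Restore completed.\n");
    return 0;
}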
I have lost my "Trak.db", but the log file is still available. Is it possible to recover the database from the log file? What are log files used for?
The best you can do is run dbtran against the log file; it will generate a SQL file containing the statements that were executed while that log was active. Of course, whether or not the log contains all your data depends on how you were logging/truncating/backing up. Technically it is possible, if you have all your logs and the schema.
For reference: http://www.ianywhere.com/developer/product_manuals/sqlanywhere/0901/en/html/dbdaen9/00000590.htm
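For example (assuming the transaction log for Trak.db is named Trak.log; adjust names and paths to your setup), translating the log into SQL looks roughly like this:

dbtran Trak.log Trak.sql

You can then review Trak.sql and replay it with dbisql against a database rebuilt from your schema.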