I've written code that creates full backups of my ESENT database using the JetBeginExternalBackup API.
Following the MSDN guidelines, I backed up every file returned by JetGetAttachInfo and JetGetLogInfo.
I made the backup, erased the old database, and copied the backup data into the database folder.
The DB engine was then unable to start; JetInit fails with the error code JET_errMissingLogFile.
I've checked the backup: it contains only the database file and the "<inst>XXXXX.log" log files. It lacks the current log file (I'm using circular logging, by the way).
Is there any way to restore such backup?
I don't want to use the JetExternalRestore API because it's too complex: I don't need to restore to another location, I don't understand why there are three input folders instead of two, and I don't know what values to supply in the genLow and genHigh arguments.
I do need external backups: the ESENT database is used by ASP.NET on a remote server, and I'm backing it up over the Internet.
Or maybe there's a way to retrieve the name of the current log file, so that I can just add it to the backup?
Thanks in advance!
P.S. I've got no permission to spawn processes on my web server, so using eseutil.exe is not an option.
Unpack all backed-up files to a single folder.
Take the name of your main database file and replace its extension with .pat. Create a zero-length file with that name, e.g. database.pat.
After this simple step, call the JetRestoreInstance API; it will restore the backup from that folder.
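In case it helps, here is a minimal sketch of those steps in Python via ctypes, assuming the backup was unpacked to C:\backup and the main database file is named database.edb (both hypothetical); JetCreateInstance and JetRestoreInstance are the ANSI exports of esent.dll:

    import ctypes
    import os

    backup_dir = r"C:\backup"   # hypothetical: folder with the unpacked backup files
    db_name = "database.edb"    # hypothetical: your main database file name

    # Create the zero-length .pat file next to the database file.
    pat_path = os.path.join(backup_dir, os.path.splitext(db_name)[0] + ".pat")
    open(pat_path, "wb").close()

    # Restore via esent.dll. Passing NULL as the destination restores the
    # files to the location recorded in the backup.
    esent = ctypes.WinDLL("esent")
    instance = ctypes.c_void_p()

    err = esent.JetCreateInstance(ctypes.byref(instance), b"restore")
    if err != 0:
        raise OSError(f"JetCreateInstance failed with JET_ERR {err}")

    err = esent.JetRestoreInstance(instance, backup_dir.encode("mbcs"), None, None)
    if err != 0:
        raise OSError(f"JetRestoreInstance failed with JET_ERR {err}")

Since your database is used from ASP.NET, the equivalent managed call would be Api.JetRestoreInstance in the ManagedEsent library.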
I am currently using the Community version of DB2 on a Linux server and have configured the db2audit process, which generates audit files in the configured location. A user then has to manually run the db2audit archive command to archive the log files, run the db2audit extract command to extract the archived files into flat ASCII files, and finally load those files into the corresponding tables. Only then can we analyze the logs by querying the tables. This whole process requires a lot of manual intervention.

Question: is there any configuration setting or utility that can generate log files, including the SQL statement event, host, session id, timestamp and so on, instantly and automatically? I need an instance-level logging mechanism that produces flat files for any SQL execution or any database-level event in DB2 on a Linux server.
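As an aside, until an automatic mechanism is found, the manual cycle described above can at least be scripted and scheduled (e.g. via cron). Below is a minimal sketch in Python; the archive directory and file-name pattern are hypothetical, and the db2audit options should be verified against the documentation for your DB2 version:

    import subprocess
    from pathlib import Path

    # Hypothetical: the directory where db2audit places archived logs.
    AUDIT_ARCHIVE = Path("/db2/audit/archive")

    def archive_and_extract():
        # Flush buffered audit records, then archive the active audit log.
        subprocess.run(["db2audit", "flush"], check=True)
        subprocess.run(["db2audit", "archive"], check=True)

        # Extract each archived log into delimited ASCII files ready for LOAD.
        for log in AUDIT_ARCHIVE.glob("db2audit.*.log.*"):
            subprocess.run(
                ["db2audit", "extract", "delasc", "from", "files", str(log)],
                check=True,
            )

    if __name__ == "__main__":
        archive_and_extract()   # e.g. schedule with cron: */15 * * * *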
I have a large Mongo database (5M documents). I edit the database from an offline application, so I store the database on my local computer. However, I want to be able to maintain an online copy of the database, so that my website can access it.
How can I update the online copy regularly, without having to upload multiple GBs of data every time?
Is there some way to "track changes" and upload only the diff, like in Git?
Following up on my comment:
Can't you store the commands you used on your offline db, and then apply them on the online db through a script running over SSH, for instance? Or, even better, upload a file with all the commands you ran on your offline base to your server, and then execute them with a cron job or a bash script? (The only requirement would be for your bases to have the same starting point and the same state when you execute the script.)
I would recommend storing all the queries you execute on your offline base. To do this you have many options; I can think of the following: you can set the profiling level to log all your queries.
(Here is a more detailed thread on the matter: MongoDB logging all queries)
Then you would have to extract them somehow (grep?), or store them directly in another file on the fly as they are executed.
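For illustration, here is a minimal pymongo sketch of the profiling approach (the connection URI and database name are placeholders):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # your offline instance
    db = client["mydb"]                                # placeholder database name

    # Profiling level 2 makes mongod record every operation in db.system.profile.
    db.command("profile", 2)

    # ... perform your offline edits ...

    # Later, pull the recorded write operations so they can be replayed online.
    # Note: system.profile is a small capped collection by default, so old
    # entries roll over; extract on the fly or raise its size for safety.
    for op in db["system.profile"].find({"op": {"$in": ["insert", "update", "remove"]}}):
        print(op["op"], op["ns"], op.get("command"))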
For uploading the script, it depends on what you would like to use, but I suppose you would need to do it during low-usage hours, and you could automate the task with a cron job and an SSH tunnel.
I guess it all depends on your constraints (security, downtime, etc.).
I have a simple Python script that is hosted on Heroku, and I'm using the Heroku Scheduler to run the script every hour/day. The script may update a simple .txt file (it could also be a config var, if that's possible) when it runs. When it does run and conditions are met, I need that value stored and used the next time the scheduled script runs. The value changed is simply a date.
However, since the app is containerized based on the most recent code I have on GitHub, it doesn't store those changes anywhere to be used again. Is there any way I can update the file and use it every time the script runs? Are there any simple add-ons or other solutions I can use?
Heroku dynos have an ephemeral local file system that does not survive an application restart or redeployment, so it cannot be used to persist data.
Typically you have 2 options:
use a database. On Heroku you can use Postgres (there is also a free tier); see the sketch after this list
save the file on external storage (S3, Dropbox, even GitHub). See Files on Heroku for details and examples
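For the database option, here is a minimal sketch with psycopg2, assuming the DATABASE_URL config var that the Heroku Postgres add-on sets (the table and key names are hypothetical):

    import os
    from datetime import date

    import psycopg2  # pip install psycopg2-binary

    # Heroku Postgres exposes its connection string via the DATABASE_URL config var.
    conn = psycopg2.connect(os.environ["DATABASE_URL"])

    with conn, conn.cursor() as cur:
        # A one-row key/value table persists the date between scheduler runs.
        cur.execute("""CREATE TABLE IF NOT EXISTS app_state (
                           key   TEXT PRIMARY KEY,
                           value DATE)""")
        cur.execute("""INSERT INTO app_state (key, value)
                       VALUES ('last_run', %s)
                       ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value""",
                    (date.today(),))
        cur.execute("SELECT value FROM app_state WHERE key = 'last_run'")
        print(cur.fetchone()[0])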
I have my Crystal Reports server configured against an Oracle data source, but now I need to reconfigure it against a SQL Server data source on a different server. The new server has no way to communicate with the original server, so I cannot use the import tool.
I was looking into how to import the data myself. Looking at the tables, I see that binary fields are stored in the database. Are the reports stored as binary data inside the data source, or does each RPT file have a copy somewhere on disk? If it is on disk, where? If it's inside a binary field in the data source, will importing the data to a new server still work?
The CCM (Central Configuration Manager) – not to be confused with the CMS – can migrate your database repository from one database to another, even if the database vendors differ (e.g. Oracle vs. Microsoft SQL Server).
Have a look at the administrator guide; the procedure is described in chapter 11, Managing Central Management Server (CMS) Databases, section 4, Copying data from one CMS system database to another.
A small excerpt:
You can use the Central Configuration Manager (CCM) or cmsdbsetup.sh to copy system data from one database server into another database server. For example, if you want to replace the database with another database because you are upgrading the database or are moving from one database type to another, you can copy the contents of the existing database into the new database before decommissioning the existing database.
Regarding the reports themselves: they are not stored in the database but in the file repository. You do, however, need to view the database and the file repository as one unit: the former contains the metadata while the latter contains the actual files, and they cannot function separately.
In other words, you cannot just copy the RPT files from the file repository to another server and expect them to work, as the information contained in the metadata (such as authorisations) will be missing. On the other hand, if you copy only the database repository and omit the file repository, you'll end up with all the information regarding your reports, but you won't be able to open or run them.
I have lost my "Trak.db", but the log file is still available. Is it possible to recover the database through the log file? What are log files used for?
The best you can do is to run dbtran against the log file; it will generate a SQL file containing the statements that were executed while that log was active. Of course, whether or not the log contains all your data depends on how you were logging/truncating/backing up. Technically a full recovery is possible if you have all your logs and the schema.
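For example, assuming the surviving log is named Trak.log (hypothetical), running dbtran Trak.log Trak.sql would translate the log into a Trak.sql script that you can replay against an empty database rebuilt from your schema.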
For reference: http://www.ianywhere.com/developer/product_manuals/sqlanywhere/0901/en/html/dbdaen9/00000590.htm