ms-access 2003 scheduled backup - ms-access-2003

I have been researching the possibility of scheduling an automatic backup of a database, but every link on the subject just talks about the manual backup process. Can anyone either show how to set up a scheduled backup, or point me to a good web-based training on the subject?

Microsoft Access is a file-based system, so you can use a script or a batch file run by Task Scheduler at any time that you are sure the database will be closed. For example: http://www.overclock.net/t/114345/how-to-automatically-backup-files
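As an illustration, a minimal Python sketch of the kind of copy script Task Scheduler could run nightly; the database and backup paths are placeholders, and the .ldb check is a simple guard against copying while the database is open:

    import shutil
    from datetime import datetime
    from pathlib import Path

    # Placeholder paths - adjust to your environment.
    DB_FILE = Path(r"C:\Data\mydb.mdb")
    BACKUP_DIR = Path(r"C:\Backups")

    def backup_database():
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        # Skip the backup if Access still holds the lock file for the database.
        if DB_FILE.with_suffix(".ldb").exists():
            raise RuntimeError("Database appears to be open; skipping backup.")
        # Date-stamped copy, e.g. mydb_2024-01-31_0200.mdb
        stamp = datetime.now().strftime("%Y-%m-%d_%H%M")
        target = BACKUP_DIR / f"{DB_FILE.stem}_{stamp}{DB_FILE.suffix}"
        shutil.copy2(DB_FILE, target)
        print(f"Backed up to {target}")

    if __name__ == "__main__":
        backup_database()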

We ran an MS Access system for several years, and this is how we implemented a backup system.
Our system was split into multiple databases: import, back-end, and front-end.
We had a dedicated desktop PC to run the process. This machine ran the import process and always had the import database open.
A form was kept open in the import database with a timer on it.
The timer ran code for the scheduled processes: the import itself, backups, and even compacting the database.
There are other ways to perform this type of task, but this was the system that we had.
There are a few drawbacks, including:
If the desktop machine reboots, then the database is closed and nothing will run.
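For illustration only: the actual implementation was an always-open Access form whose timer event fired the jobs, but the general pattern is simply a long-running process that wakes up periodically and runs the import, backup, and compact jobs at their scheduled hours. A rough Python sketch of that pattern, with placeholder job functions (not the original Access VBA):

    import time
    from datetime import datetime

    # Placeholder jobs - in the original setup these were Access processes.
    def run_import():
        print("running import...")

    def run_backup():
        print("running backup...")

    def run_compact():
        print("compacting database...")

    # Hour of day (24h clock) at which each job fires - placeholder schedule.
    SCHEDULE = {1: run_backup, 2: run_compact, 5: run_import}

    def timer_loop(poll_seconds=60):
        last_hour_run = None
        while True:
            now = datetime.now()
            job = SCHEDULE.get(now.hour)
            if job and last_hour_run != now.hour:
                job()
                last_hour_run = now.hour
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        timer_loop()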

Related

Windows Services - How can I find the darktable instance in windows services

I accidentally screwed up my darktable configuration, so I reloaded it from scratch. To avoid losing all the edits I have made to my pictures, I wrote a PowerShell backup script for the darktable database. I want to launch this script from Windows Task Scheduler whenever I launch darktable. I have found the event ID in the Security log that indicates a new process has been created, which I should be able to use to launch my backup script automatically from Task Scheduler. I want to add code to the script to check the services to see whether darktable is actually running, and only perform the backup if it is. Does anyone know how I can identify this?
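For what it's worth, darktable normally runs as a regular desktop process rather than a Windows service, so one way to check is to query the process list instead. A minimal Python sketch using the standard tasklist command; the executable name darktable.exe is an assumption:

    import subprocess

    def is_process_running(image_name: str) -> bool:
        """Return True if a process with the given executable name is running."""
        # tasklist /FI filters by image name; the name appears in the output if found.
        result = subprocess.run(
            ["tasklist", "/FI", f"IMAGENAME eq {image_name}"],
            capture_output=True, text=True,
        )
        return image_name.lower() in result.stdout.lower()

    if __name__ == "__main__":
        if is_process_running("darktable.exe"):  # assumed executable name
            print("darktable is running - performing backup")
        else:
            print("darktable is not running - skipping backup")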

Updating online Mongo Database from offline copy

I have a large Mongo database (5M documents). I edit the database from an offline application, so I store the database on my local computer. However, I want to be able to maintain an online copy of the database, so that my website can access it.
How can I update the online copy regularly, without having to upload multiple GBs of data every time?
Is there some way to "track changes" and upload only the diff, like in Git?
Following up on my comment:
Can't you store the commands you used on your offline DB, and then apply them on the online DB through a script running over SSH, for instance? Or even better, upload a file with all the commands you ran on your offline base to your server, and then execute them with a cron job or a bash script? (The only requirement would be for your bases to have the same starting point and the same state when you execute the script.)
I would recommend storing all the queries you execute on your offline base. There are several ways to do this; the one I can think of is to set the profiling level to log all your queries.
(Here is a more detailed thread on the matter: MongoDB logging all queries)
Then you would have to extract them somehow (grep?), or store them directly in another file on the fly as they are executed.
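As a rough sketch of that idea, assuming pymongo and local access to the offline database (the connection URI, database name, output file, and time window are all placeholders):

    import json
    from datetime import datetime, timedelta

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # offline instance (placeholder URI)
    db = client["mydb"]                                # placeholder database name

    # Profiling level 2 records every operation in db.system.profile.
    db.command("profile", 2)

    # ... run your offline edits here ...

    # Later: dump the write operations recorded in the last 24 hours to a file.
    since = datetime.utcnow() - timedelta(hours=24)
    ops = db["system.profile"].find({
        "ts": {"$gte": since},
        "op": {"$in": ["insert", "update", "remove"]},  # field values vary by MongoDB version
    })
    with open("offline_ops.json", "w") as f:
        for op in ops:
            f.write(json.dumps(op, default=str) + "\n")

Note that system.profile is a capped collection (1 MB by default), so for a long editing session you would need to enlarge it or copy entries out as you go.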
For uploading the script, it depends on what you would like to use, but I suppose you would need to do it during low-usage hours, and you could automate the task with a cron job and an SSH tunnel.
I guess it all depends on your constraints (security, downtime, etc.).

How to update files whenever script is scheduled to run in Heroku app

I have a simple python script that is hosted on Heroku and I'm using the Heroku Scheduler to run the script every hour/day. The script will possibly update a simple .txt file (could also be a config var if possible) when it runs. When it does run and conditions are met, I need that value stored and used when the next scheduled script runs. The value changed is simply a date.
However, since the app is containerized based on the most recent code I have on GitHub, it doesn't store those changes anywhere to be used again. Is there any way I can update the file and use it every time the script runs? Any simple add-ons or other solutions I can use?
Heroku dynos have an ephemeral local file system that does not survive an application restart or redeployment, so it cannot be used to persist data.
Typically you have two options:
use a database: on Heroku you can provision Postgres (there is also a free tier)
save the file on external storage (S3, Dropbox, even GitHub); see Files on Heroku for details and examples
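As a minimal sketch of the database option, assuming a Heroku Postgres add-on is attached (so DATABASE_URL is set) and the psycopg2 driver is available; the table and key names are made up for illustration:

    import os
    from datetime import date

    import psycopg2

    def get_and_update_last_run():
        # Heroku Postgres exposes its connection string as DATABASE_URL.
        conn = psycopg2.connect(os.environ["DATABASE_URL"], sslmode="require")
        with conn, conn.cursor() as cur:
            # Single-row key/value table holding state between scheduler runs
            # (hypothetical table and key names).
            cur.execute("""
                CREATE TABLE IF NOT EXISTS script_state (
                    key   text PRIMARY KEY,
                    value text
                )
            """)
            cur.execute("SELECT value FROM script_state WHERE key = 'last_run'")
            row = cur.fetchone()
            last_run = row[0] if row else None
            cur.execute("""
                INSERT INTO script_state (key, value)
                VALUES ('last_run', %s)
                ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value
            """, (date.today().isoformat(),))
        conn.close()
        return last_run

    if __name__ == "__main__":
        print("Previous run date:", get_and_update_last_run())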

Two master instances on same database

I want to use PostgreSQL on Windows Server 2012 R2 for one of our projects, which needs 24/7 uptime.
I would like to ask the community whether I can have two master instances on two different servers, A and B, that 'work' on the same DB located on shared file storage on the LAN. Normally only the master instance on server A will be online; when it goes offline for some reason, a PowerShell script will (I suppose) recognize that the PostgreSQL service has stopped and will start the service on server B. The same script will continuously check that only one service on servers A and B is running, to avoid conflicts.
I'd like to ask whether this is possible, or whether there is a better approach for my configuration.
(I can't use replication, because when server A shuts down, server B is in read-only mode, which I don't want.)
If you manage to start two instances of PostgreSQL on the same data directory, serious data corruption will happen.
Normally there is a postmaster.pid file that prevents that, but a PostgreSQL server process on a different machine that accesses the same file system will happily unlink that after spewing some log messages, thinking it was left behind from a crash.
So you are really walking on thin ice with a solution like that.
One other issue that you haven't considered is the script that is supposed to check whether the server is still running. What if that script fails because, for example, the network connection between the two servers is down, but the server is still up and running happily? Such a “split brain” scenario will cause data corruption with your setup.
Another word of caution: since you seem to be using Windows (PowerShell?), you probably envision a CIFS file system when you talk about shared storage. A Windows “network share” is not a reliable file system; last time I checked, it did not honor _commit.
Creating a reliable failover cluster is harder than you think, and I'd recommend that you check existing solutions before you try to roll your own.

FileMaker Task Automation

I'd like to automate several FileMaker tasks using Windows Task Scheduler. It looks like step scripts are the way to go, but I'm not sure. I'd like to run tasks, say exporting for example, several times per day, but WITHOUT opening the FileMaker GUI. Is that possible? Any tips you have would be great. Thanks.
It's possible to initiate a FileMaker script using a scheduled server script with FileMaker Server. However, if the database is not hosted using FileMaker Server, or is not open in FileMaker Pro (which sounds like your situation), then there is no active engine able to actually perform the calculations (script steps, etc.). The database has to be running somewhere to initiate and perform any scripts.
If the database is hosted using FileMaker Server, then it is pretty easy to set up a scheduled script that will run at a designated time. If you don't have a FileMaker Server license, some FileMaker cloud hosting providers have monthly plans that are relatively cheap ($20/month with unlimited connections), and they'll work with you to set up a scheduled script (for free).
The best way to automate FileMaker tasks is to use FileMaker Server, which has scheduled scripts. Of course, it is more expensive than the standalone version of FileMaker Pro.
If you automate tasks on a local FileMaker file, you cannot avoid starting FileMaker and opening the file.
FileMaker has limited support for VBScript: you can launch FileMaker, open a file, and run a FileMaker script from VBScript, and then add that script to Windows Task Scheduler.
This is not the preferred way, but if you have no other option, it may be handy.
In Task Scheduler, create a task.
On the Actions tab, choose "Start a program".
On the next screen, point to the FileMaker Pro exe file; typically it is C:\Program Files\FileMaker Pro\FileMaker.exe.
Add the argument:
"fmp://hostName/fileName.fmp12?script=scriptName&param=optionalScriptParameters"
Please read more about the URL scheme here: http://www.filemaker.com/help/12/fmp/en/html/sharing_data.16.7.html. The URL will vary depending on whether you are hosting your file on FileMaker Server or opening it locally.
Note: avoid spaces or special characters in the script name.
Save the task. Reopen the task properties and save your Windows account credentials, so that the task can run without you having to log in.
Either save the FileMaker login credentials on login (if your FM version allows it), pass the credentials through the fmp URL (as described in the link above), or go to the FileMaker file options and set "Log in using:" (which is not secure and not recommended).
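If you prefer Task Scheduler to call a small wrapper script rather than FileMaker.exe directly (for example, to add logging), a rough Python sketch of the same launch; the path, host, file, and script names below are placeholders:

    import subprocess

    # Placeholder path and URL - adjust to your installation and solution.
    FILEMAKER_EXE = r"C:\Program Files\FileMaker Pro\FileMaker.exe"
    FMP_URL = "fmp://hostName/fileName.fmp12?script=scriptName&param=optionalScriptParameters"

    # Equivalent to the "Start a program" action with the fmp:// URL as the argument.
    subprocess.run([FILEMAKER_EXE, FMP_URL], check=False)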
I am using this method to automatically send emails with PDF attachments, since FileMaker Server did not let you use Export Records as PDF in server scripts until version 16.