Job Scheduling with Yesod - PostgreSQL

Here is my problem:
I have a Yesod web app that is connected to a Postgres database (everything is hosted on AWS Elastic Beanstalk).
My customer wants to define a schedule (day, hour, ...) on which things happen automatically (e.g. sending out a message). For example: "every Wednesday message A will be sent at 02:00 PM, but it does not get sent after 03:00 PM if the server was down in that period". The definition could be saved to a text file on S3, for example.
One library I found was cron (https://hackage.haskell.org/package/cron), which could be used as a basis for my needs, with a caveat: if the server is shut down during the particular minute in which a job would be triggered, the message is not sent once the server is back.
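For reference, the cron package's scheduler looks roughly like this (a minimal sketch; the job body is a placeholder). Note that the schedule lives only in the running process, which is exactly the caveat:

{-# LANGUAGE OverloadedStrings #-}

import System.Cron.Schedule (addJob, execSchedule)

main :: IO ()
main = do
  -- "0 14 * * 3" = 02:00 PM every Wednesday (minute hour dom month dow).
  _tids <- execSchedule $ addJob sendMessageA "0 14 * * 3"
  -- execSchedule forks scheduler threads; keep the main thread alive.
  _ <- getLine
  pure ()

-- Placeholder for the real action (e.g. sending message A).
sendMessageA :: IO ()
sendMessageA = putStrLn "sending message A"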
I used DelayedJob (Ruby) in the past, where scheduled jobs are stored in a database precisely to circumvent this issue. But for Haskell I could only find solutions without database persistence.
Is there anything to look into for Haskell, or do I have to build that on my own (e.g. using something like http://jdabbs.com/resquing-yesod/ as a starting point)?
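One way to get DelayedJob-style behavior by hand is to persist due jobs in a Postgres table and poll it: on boot, the poller picks up anything that came due while the server was down, provided it is still inside its grace window. A minimal sketch using postgresql-simple follows; the table layout, connection string, and payload handling are illustrative assumptions, not an existing API:

{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent (threadDelay)
import Control.Monad (forM_, forever)
import Data.Text (Text)
import qualified Data.Text.IO as T
import Database.PostgreSQL.Simple

-- Hypothetical table:
--   CREATE TABLE scheduled_job ( id serial PRIMARY KEY
--                              , payload text
--                              , next_run_at timestamptz
--                              , grace_secs int
--                              , done bool DEFAULT false );

main :: IO ()
main = do
  conn <- connectPostgreSQL "host=localhost dbname=myapp"  -- placeholder connection string
  forever $ do
    -- Fetch everything that is due; the boolean column says whether the job
    -- is still inside its grace window (3600s for the 02:00-03:00 PM rule).
    due <- query_ conn
      "SELECT id, payload, (now() <= next_run_at + grace_secs * interval '1 second') \
      \FROM scheduled_job WHERE done = false AND next_run_at <= now()"
    forM_ (due :: [(Int, Text, Bool)]) $ \(jid, payload, inGrace) -> do
      if inGrace
        then T.putStrLn ("sending: " <> payload)              -- stand-in for the real action
        else T.putStrLn ("grace expired, skipping: " <> payload)
      _ <- execute conn "UPDATE scheduled_job SET done = true WHERE id = ?" (Only jid)
      pure ()
    threadDelay (60 * 1000000)  -- poll once a minute

For the recurring part ("every Wednesday at 02:00 PM"), instead of only marking the row done you would insert the next occurrence; the cron package can help here, since it exposes the parsed schedule and a nextMatch function for computing that next occurrence.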

Related

Irregular Trigger BizTalk Scheduled Task Adapter

I have several receive locations of type schedule on a BizTalk 2016 server. All except one work fine. This one had been triggering as defined in the schedule, daily at 04:00 AM; however, it suddenly began to start at 05:00 PM, and one day it didn't run at all.
There are no errors in the Application log or the SQL logs. The Receive Location is enabled. The server time is correct.
Does anyone have a hint as to what might be causing this behavior?
BTS 2016
Scheduled Task Adapter 6.0.0.6
The current version is 7.0.2, and it includes some fixes, e.g.:
In certain cases the task won't trigger at the set time with Timespan on BizTalk 2016
Timely Schedule Start Time (and Date) does not work correctly
So I would suggest downloading and installing 7.0.2.
I experienced similar behavior when the Host Instance was shared and sometimes overloaded. Try dedicating a Host Instance to scheduling only. And, as suggested by @Dijkgraaf, you can use the latest version of this adapter.

Google SQL instance stuck after operation "restore from backup"

After I started a restore of the database from an automated backup file generated on Mar 13, 2019, the SQL instance has been stuck in this state forever: "Restoring from backup. This may take a few minutes. While this operation is running, you may continue to view information about the instance."
The database size is very small, less than 1MB.
For future users who experience problems like this, here is how you can handle it:
If you have a Google Cloud support package, file a support ticket directly with support for the quickest response.
Otherwise please file a private GCP issue describing the problem, remembering to include the project id and instance name.
However, Cloud SQL instances are monitored for stuck states like this, so the issue will often resolve itself within a few hours.

Wait for system to sync time before performing another task

I'm using a Raspberry Pi, and upon startup it sends an e-mail with the time and an IP address. The problem is that the time is not correct; it is the time from when the system was last shut down. When I log in through ssh and run the date command, I get the correct time. In other words, the e-mail is sent before the system has updated its time.
I was thinking of automatically running ntpdate on boot, but after reading up on it, that seems like a bad idea due to the many risks of error.
So, can I somehow wait until the time has been updated before continuing in a script?
There is a tool included in the ntp reference implementation for this very purpose. The utility has a rather cryptic name: ntp-wait. It simply blocks until ntpd reports that the clock is synchronized, so your script can run ntp-wait first and only send the e-mail once it exits successfully. Five minutes with the man page and you will be all set.

MS Access 2003 scheduled backup

I have been researching the possibility of scheduling an automatic backup of a database, but every link on the subject just talks about the manual backup process. Can anyone either show how to set up a scheduled backup or provide a link to good web-based training on the subject?
Microsoft Access is a file-based system, so you can use a script or a batch file, run from Task Scheduler at any time when you are sure the database will be closed. For example: http://www.overclock.net/t/114345/how-to-automatically-backup-files
We were running an MS Access system for several years and this is how we implemented a backup system.
Our system was split into multiple databases - import, backend and front-end
We had a dedicated desktop PC to run the process. This machine ran the import process and always had the import database open.
A form with a timer on it was always open in the import database.
The timer's code ran the scheduled processes, including the import process, backups, and even compacting of the database.
There are other ways to perform this type of task, but this was the system that we had.
There are a few drawbacks, including:
If the desktop machine reboots, then the database is closed and nothing will run.

Aggregation of IIS logs

We have an IIS .Net application deployed across several machines. We use IIS log information to do reporting of performance of the web application and navigation by the user. Currently the reporting is only required infrequently (once a day, for the previous day), so we just roll the logs every 24 hours, and move the old logs to our reporting server.
We have a new requirement that means we need much faster turnaround on the IIS log information, say every minute for the sake of the discussion.
For Apache there are tools like Facebook's Scribe to scalably move web server logs across a network of servers.
Are there any similar tools available for IIS?
Is this the right question to ask?
Should we be doing something different, if the timing requirements have changed so much?
I've looked at this question and the answers, and the only one that seems to come close is this one.
Pointers appreciated!
Snare is a little old but worth mentioning.
Snare Agent for IIS Servers
http://www.intersectalliance.com/projects/SnareIIS/index.html
I used this old version a long time ago, and it worked well, forwarding IIS logs over the network via syslog.
Today, they have a newer version called Snare Epilog
http://www.intersectalliance.com/projects/EpilogWindows/index.html
The code is also open source; perhaps you might find it useful.
You might also want to try ...
http://nxlog.org
http://www.syslogserver.com/syslogagent.html
I tend to write a .bat file in conjunction with Log Parser 2.2. The .bat file determines the appropriate file dates and pulls the corresponding logs from the log locations of the various IIS servers into a single local directory. Once the files are copied across, I run a Log Parser command to query the contents of all the log files and produce a single output file in .csv format (something along the lines of LogParser -i:IISW3C -o:CSV "SELECT * INTO combined.csv FROM *.log"). Finally, an SSIS job imports the new .csv file into a running log table, which I can then query on an ongoing basis.