How to periodically update a table in PostgreSQL with data retrieved from a PHP API using a cron job?

I have a PostgreSQL database in which a few tables need to be updated regularly. The data is retrieved from an external API written in PHP.
Basically, the idea is to update a table of meteo data every day with the readings collected from a weather station. My plan is to do this with cron, which will update the data automatically. In that case I probably need to write the cron job as a script and then run it on the server.
Being a newbie, I find this a little difficult to deal with. Please suggest the best approach.

This works pretty much as you described, and it does not get much simpler than that.
You have to:
Write a client script (possibly in PHP) that will pull data from the remote API. You can use the cURL extension or whatever you like. A minimal sketch follows below.
Make the client script update the tables. Consider saving history, not just overwriting current values.
Make the client script log its operations properly. You will need to know how it is doing once it is deployed to production.
Test that the script runs successfully on the server.
Add (or ask your server admin to add) a line to the crontab that will execute your script.
PROFIT! :)
Good luck!
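
For illustration only, here is a minimal sketch of what such a client script could look like. The question mentions PHP, and the same structure works there with cURL and PDO/pg_*; this sketch happens to use Java with the PostgreSQL JDBC driver and Gson. The API URL, JSON field names, table name and the unique constraint on (station_id, measured_at) are all assumptions, not anything from the original question.

// MeteoSync.java - hypothetical daily import job. Everything here (URL, credentials,
// table and column names) is a placeholder; adapt it to your API and schema.
// Dependencies: PostgreSQL JDBC driver and Gson.
import com.google.gson.Gson;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MeteoSync {
    // Shape of one reading as the (hypothetical) API returns it.
    static class Reading { String stationId; String measuredAt; double temperature; double humidity; }

    public static void main(String[] args) throws Exception {
        // 1. Pull the data from the remote API.
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://api.example.com/meteo/daily")).GET().build();
        String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        Reading[] readings = new Gson().fromJson(body, Reading[].class);

        // 2. Update the table, keeping history: one row per station per timestamp.
        //    Assumes a unique constraint on (station_id, measured_at).
        try (Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/meteo", "meteo_user", "secret");
             PreparedStatement insert = db.prepareStatement(
                 "INSERT INTO meteo_readings (station_id, measured_at, temperature, humidity) " +
                 "VALUES (?, ?::timestamptz, ?, ?) ON CONFLICT (station_id, measured_at) DO NOTHING")) {
            for (Reading r : readings) {
                insert.setString(1, r.stationId);
                insert.setString(2, r.measuredAt);
                insert.setDouble(3, r.temperature);
                insert.setDouble(4, r.humidity);
                insert.addBatch();
            }
            insert.executeBatch();
        }

        // 3. Log what happened so you can verify each cron run later.
        System.out.println("Imported " + readings.length + " readings");
    }
}

The crontab entry is then a single line, for example running the job every night at 03:00 and appending its output to a log file (the paths are again made up):

0 3 * * * /usr/local/bin/meteo-sync >> /var/log/meteo-sync.log 2>&1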

Related

Schedule a script to attach a CSV file report to a data source in ServiceNow

I need to schedule a script that automatically attaches a CSV file report to a data source in ServiceNow.
How can we achieve this scenario?
Well, this can be achieved in multiple ways. It's a bit of a vague description you have there, so I'll just drop a few general ideas for you:
If you don't mind turning things around, you could have an external program push the file directly to ServiceNow and then run the associated TransformMap:
https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/import-sets/task/t_PostCSVOrExcelFilesToImportSet.html
If you have an FTP server, you can have a scheduled script that fetches the file from the FTP and runs the transform:
https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/import-sets/task/t_ScheduleADataImport.html
You could use the MID Server application to implement your own logic for retrieving the file data. This is probably the most complex option to set up, but it also gives you the biggest advantages, like having the file transferred encrypted, etc. Basically, the MID Server checks every couple of seconds for pieces of code to execute (called probes); for example, you could use it to trigger a PowerShell script sitting on your server.
I'm sure there are other options as well. A rough sketch of the first option follows below. Good luck!
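
Purely as an illustration of the first option, here is roughly what an external push to the import set endpoint described in the linked documentation could look like. The instance name, staging table, credentials and file name are placeholders, and you should verify the exact sys_import.do parameters for your release against that doc; this sketch uses Java's built-in HTTP client.

// Hypothetical external push of a CSV file to a ServiceNow import set (option 1).
// All names below are placeholders; check the linked doc for the exact parameters.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class ServiceNowCsvPush {
    public static void main(String[] args) throws Exception {
        String url = "https://myinstance.service-now.com/sys_import.do"
                + "?sysparm_import_set_tablename=u_my_staging_table"   // import set staging table
                + "&sysparm_transform_after_load=true";                // run the transform map immediately
        String auth = Base64.getEncoder().encodeToString("import.user:password".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "text/csv")
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("report.csv")))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("ServiceNow responded with HTTP " + response.statusCode());
    }
}

A cron entry (or Windows scheduled task) on the machine that produces the report can then run this push right after the report is generated.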

MongoDB - Safeguard against .remove() entire database?

I'm using MongoDB as my database, and as a first-time back-end developer the ease with which I can delete an entire database/collection really bothers me.
Simply typing db.collection.remove() removes all records from that collection!
I know that an effective backup strategy should render this a non-issue, but I occasionally do run .remove() on some collections, and I'd hate to type in the wrong collection name by accident and (a) have to go through a backup restore, and (b) lose whatever data I had gathered between the backup and the restore, especially as my app gathers a lot of user data.
Is there any 'safeguard' I can set up my database to use, even if it's just a warning/confirmation that says
"Yo, are you sure you want to remove everything from <collectionname>? Choose: Yes/No"
User roles won't fix your problem. If your account has permissions to delete one user, you could accidentally delete them all. If your account has permissions to update an attribute for one user, you could accidentally update all of your users.
There's a simple fix for this however.
Step 0: Back up your database. And test your backups regularly. And make sure you get alerted if the backup did not run or errored. Replica sets are not backups. I know this is obvious, but evidently it's not obvious to everybody.
Step 1: Write a web admin GUI for your database. This will only take a day or two, and it should be simple enough that a secretary or intern could use it without fear for your data. (If you think this will take a long time, find a framework with more bells and whistles. Your admin console doesn't even need to be written in the same language as your app.)
Step 2: Data migrations (maintenance transformations of your database) should always be run from scripts checked into source control and tested on non-prod beforehand. The script could be as simple as mongo -e "foo.update(blah)", but you should run it as a script to avoid cut-and-paste errors; a sketch of such a migration follows at the end of this answer. Ideally, you would even have a checklist for all migrations. (Check that you have a recent backup. Check the database log and system load beforehand. Write a before and after query that will tell you if the migration was successful...)
Step 3: You now no longer need to use the production Mongo console. So don't. It's a useful tool for development, but that's only needed on local development databases.
The above-mentioned roles might be useful for read-only queries. But you can already do that against a non-master replica set member.
tl;dr: You can go pretty far using cowboy admin techniques, but eventually you're going to figure out that it's better (and not much more work) to automate everything.
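
As a sketch of Step 2, here is what a checked-in migration with its before/after checks could look like. The database, collection and field names and the change itself are invented for illustration; the same thing can be written as a .js file run through the mongo shell, but this version uses the MongoDB Java driver.

// Hypothetical one-off migration, kept in source control and run once per environment
// (non-prod first). Database, collection and field names are illustrative only.
// Dependency: the MongoDB Java driver (mongodb-driver-sync).
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class FlagLegacyUsersMigration {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users = client.getDatabase("app").getCollection("users");

            // "Before" check: how many documents should this migration touch?
            long toMigrate = users.countDocuments(Filters.exists("plan", false));
            System.out.println("Documents missing 'plan': " + toMigrate);

            // The actual change: always scoped by a filter, never an unconditional remove().
            long modified = users.updateMany(
                    Filters.exists("plan", false),
                    Updates.set("plan", "legacy")).getModifiedCount();

            // "After" check: did we change exactly what we expected?
            long remaining = users.countDocuments(Filters.exists("plan", false));
            System.out.println("Modified " + modified + ", still missing 'plan': " + remaining);
        }
    }
}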
There is nothing you can do in the current version to provide this functionality.
In a future version, when user-defined roles are available, you could define a role which allows insert() and update() but not remove() or drop(), etc., and therefore make yourself log in as a different, higher-privileged user for destructive operations. But that's not available in the current (2.4) version.

Pattern for Google Alerts-style service

I'm building an application that is constantly collecting data. I want to provide a customizable alerts system for users where they can specify parameters for the types of information they want to be notified about. On top of that, I'd like the user to be able to specify the frequency of alerts (as they come in, daily digest, weekly digest).
Are there any best practices or guides on this topic?
My instincts tell me queues and workers will be involved, but I'm not exactly sure how.
I'm using Parse.com as my database and will also likely index everything with a Lucene-style search. That opens up the possibility of letting a user provide a query string to define which alerts they want.
If you're using Rails and Heroku and Parse, we've done something similar. We actually created a second Heroku app that did not have a web dyno -- it just has a worker dyno. That one can still access the same Parse.com account and runs all of its tasks in a rake task like they specify here:
https://devcenter.heroku.com/articles/scheduler#defining-tasks
We have a few classes that can handle the heavy lifting:
class EmailWorker
  def self.send_daily_emails
    # queries Parse for what it needs, loops through, sends emails
  end
end
We also have the scheduler.rake in lib/tasks:
require 'parse-ruby-client'

task :send_daily_emails => :environment do
  EmailWorker.send_daily_emails
end
Our scheduler panel in Heroku is something like this:
rake send_daily_emails
We set it to run every night. Note that the public-facing Heroku web app doesn't do this work; the "scheduler" version does. You just need to make sure you push to both every time you update your code. This way it's free, but if you ever wanted to combine them it's simple, as they share the same code base.
You can also test it by running heroku run rake send_daily_emails from your dev machine.

Configuring Quartz.net Tasks

I want to be able to set up one or more Jobs/Triggers when my app runs. The list of Jobs and Triggers will come from a DB table. I DO NOT care about persisting the jobs to the DB for restarting or tracking purposes; basically, I just want to use the DB table as an init device. Obviously I can do this by writing the code myself, but I am wondering if there is some way to use the SQLJobStore to get this functionality without the overhead of keeping the DB updated throughout the life of the app that is using the scheduler.
Thanks for your help!
Eric
The job store's main purpose is to store the scheduler's state, so there is no built-in way to do what you want. You could always write directly to the tables, and that would give you the results you're after, but it isn't really the best way to do it.
The recommended way to do this would be to write some code that reads the data from your table and then connects to the scheduler using remoting to schedule the jobs.
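
As a sketch of that recommended approach (and only a sketch: the table name, columns, connection string and job class are invented), here is the general shape using the Quartz API. The example is written against the Java version of Quartz, which Quartz.NET mirrors almost method-for-method, and it schedules against the default in-memory RAMJobStore so nothing is persisted back to the database. If your scheduler runs in a separate process, you would obtain the scheduler instance over remoting instead of creating it locally.

// Hypothetical startup routine: read job definitions from a DB table and schedule
// them in an in-memory scheduler, so the table acts purely as an init device.
// Table/column names and MyJob are placeholders. Dependency: Quartz.
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ScheduleFromDb {
    public static void main(String[] args) throws Exception {
        // Default configuration uses the in-memory RAMJobStore: nothing is written back.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        try (Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=App;integratedSecurity=true");
             Statement stmt = db.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT job_name, cron_expression FROM job_definitions")) {
            while (rs.next()) {
                JobDetail job = JobBuilder.newJob(MyJob.class)
                        .withIdentity(rs.getString("job_name"))
                        .build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .withSchedule(CronScheduleBuilder.cronSchedule(rs.getString("cron_expression")))
                        .build();
                scheduler.scheduleJob(job, trigger);
            }
        }
        scheduler.start();
    }

    // Placeholder job; the real work goes here.
    public static class MyJob implements Job {
        public void execute(JobExecutionContext context) {
            System.out.println("Running " + context.getJobDetail().getKey());
        }
    }
}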

How to apply database updates after deployment?

I know this is an often-asked question on these boards, and usually the question has been about how to manage the changes being made to the database before you even get around to deploying them. Mostly the answer has been to script the database and save it under source control, and then any additional updates are saved as scripts under version control too (e.g. Tool to upgrade SQL Express database after deployment).
My question is: when is it best to apply the database updates, in the installer or when the new version first runs and connects to the database? Note this is a WinApp that is deployed to customers, each of whom has their own database.
One thing to add to the script: Back up the database (or at least the tables you're changing!) before applying the changes.
As a user I think I'd prefer it to happen during the install, and, going a little further, that the installer can roll itself back in the event of a failure. My thinking here is that if I am installing an update, I'd like to know when the update is done that it actually is done and has succeeded. I don't want a message coming up the next time I run the app informing me that something failed and I've potentially lost all my data. I would assume that a system admin would probably also appreciate install-time feedback (of course, that doesn't matter if your app isn't something that will be installed on a network). Also, as ראובן said, backing up the database would be a nice convenience.
You haven't said much about the architecture of the application, but since an installer is involved I assume it's a client/server application.
If you have a server installer, that's where you want to put it, since the database structure is only going to change once. Since the client installers are going to need to know about the change, it would be nice to have a way to detect the database version change, and for the old client to be able to download the client update from the server automatically and apply it.
If you only have a client installer, I still think it's better to put it there (maybe as a custom action that fires off the executable for updating the database). But it really isn't going to matter, because conceptually one installer or first-time user of the new version is going to have to fire off the changes to the database anyway. The database changes are going to put structural locks on the database so, in practical terms, everyone is going to have to be kicked off the system at that time for the database update to be applied.
Of course, this is all BS if it's not client-server.
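
To make the earlier points concrete (the backup first, and detecting which updates a given customer's database still needs), here is one possible shape for the upgrade step that the installer, or a custom action it launches, could run. Everything in it is an assumption for illustration: the schema_version table, the numbered .sql scripts, the connection string, and the SQL Server / SQL Express style BACKUP command.

// Hypothetical upgrade routine: back up, read the current schema version, then apply
// any newer scripts in order and record them. All names here are placeholders.
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SchemaUpgrader {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=MyApp;integratedSecurity=true");
             Statement stmt = db.createStatement()) {

            // 0. Back up before touching anything (per the comment above).
            stmt.execute("BACKUP DATABASE MyApp TO DISK = 'C:\\backups\\MyApp_preupgrade.bak'");

            // 1. What version is this customer's database at?
            int current;
            try (ResultSet rs = stmt.executeQuery("SELECT MAX(version) FROM schema_version")) {
                rs.next();
                current = rs.getInt(1);
            }

            // 2. Apply every script newer than that, in order (001.sql, 002.sql, ...),
            //    and record each one so the next upgrade knows where to start.
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> dir = Files.newDirectoryStream(Paths.get("migrations"), "*.sql")) {
                dir.forEach(scripts::add);
            }
            Collections.sort(scripts);
            for (Path script : scripts) {
                int version = Integer.parseInt(script.getFileName().toString().replace(".sql", ""));
                if (version <= current) continue;
                stmt.execute(Files.readString(script));
                stmt.execute("INSERT INTO schema_version (version) VALUES (" + version + ")");
            }
        }
    }
}

Whether this runs from the server installer or from a client-launched custom action, the flow is the same; the version check makes the upgrade idempotent, so only the first installer (or client) to connect actually applies the changes.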