I need to build a few tasks that import various data from outside APIs in my SailsJS project. I was thinking of using laterJS (unless there's a better option).
Is there a best practice for the location and loading of these "task" files?
The place where I think it should go is config/bootstrap.js.
Description of this file from sails-docs:
This is an asynchronous bootstrap function that runs before your Sails app gets lifted (i.e. starts up). This gives you an opportunity to set up your data model, run jobs, or perform some special logic.
I'd like to determine the context from which my process is running. I'd like to distinguish between the following cases:
It runs as a persistent scheduled task (launchDaemon/launchAgent)
It was called on-demand and created by launchd, via the open command line or a double-click.
It was called directly from a command-line terminal (e.g. running /bin/myProg from a terminal).
Is there any indication of the process context available via an Objective-C/Swift framework, or any other way? I'd like to avoid reinventing the wheel here :-)
thanks
There is definitely no simple public API or framework for doing this, and doing it is hard.
Some of this info could possibly be retrieved by your process itself through workarounds that work on some system versions:
There is a launchctl C-based API, which you can try to use to enumerate all
launch daemon/agent tasks and search for your app path/pid. You may
require root rights for your process to do this.
Launching via the open command line can sometimes be traced through the environment
variables it sets for your process.
Running directly from the command line could leave responsible_pid filled correctly (which is private API from libquarantine, unless you are observing it with Endpoint Security starting from macOS 11.something).
All of these things, except the launchctl API, are not public, not reliable, could be broken at any time by Apple, and may not be sufficient for your needs.
But they are worth a try, because there is nothing better :)
You could potentially distinguish all the cases you want by monitoring system events from some other (root-permitted) process you control, possibly adopting the Endpoint Security framework (which requires an entitlement from Apple and can't be distributed via the App Store), calling a lot of private APIs, and doing a bunch of reversing tricks.
The one open resource I can suggest on this topic is here
I have a project which lets the user select from the ADFs existing on a Tango-enabled device, to allow them to correctly localize in a number of different spaces.
My code (Unity 5.5, C#, Farandole SDK) essentially performs manual Tango startup with a null AreaDescription as the entry flow. If the user then selects an ADF, I'm calling TangoApplication.Shutdown() then TangoApplication.Startup(newArea).
In Eisa, this works. In Farandole, I get a permissions failure.
If, using Farandole, I explicitly request permissions (after the Shutdown) and wait for the permissions response to come back before calling Startup, the system appears to re-localise against the new ADF. However, the Tango system re-registers callbacks each time through Startup without unregistering them, meaning my callbacks get called multiple times for each ADF that I switch to.
What is the correct process to switch between ADFs? Should a shutdown be required before calling Startup, and if so, what is the correct way to shutdown the TangoApplication to avoid multiple callbacks?
I am interested in this answer too. The way I would do it is to reload the scene with the new ADF, just as it is done in the AreaLearning example, so TangoManager and TangoPoseController are reset.
I have several perl scripts for data download, validation, database upload etc. I need to write a job controller that can run these scripts in a specified order.
Is there any job controller module in perl?
There are a bunch of options and elements to what you're looking for.
Here for instance is a "job persistence engine"
http://metacpan.org/pod/Garivini
What I think you want might be more comprehensive. You could go big with something like "bamboo" which is a continuous integration/build system. There are several of those if you want to go down that route:
http://en.wikipedia.org/wiki/Continuous_integration
Or you could start with something like RabbitMQ, which bills itself as a message queuing system but has the ability to restart failed jobs and execute things in order, so it has some resilience built in. However, the actual job-control software (the part that watches the queue and executes jobs) might need to be written by you, using the Net::RabbitMQ module. I'm not sure.
http://metacpan.org/pod/Net::RabbitMQ
Here is a (Ruby) example of using RabbitMQ to manage job queuing.
How do I trigger a job when another completes?
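If the ordering requirements are simple, a queue may be overkill: a small wrapper that runs each script in sequence and stops on failure covers the download → validate → upload case. A minimal POSIX-sh sketch (the perl script names are placeholders for your own steps):

```shell
# Minimal sequential job-controller sketch. Each step's command is passed
# as arguments; a non-zero exit from any step is reported and propagated.
run_step() {
    echo "running: $*"
    "$@" || { echo "step failed: $*" >&2; return 1; }
}

# Usage -- each step runs only while the previous one succeeded:
#   run_step perl download.pl &&
#   run_step perl validate.pl &&
#   run_step perl upload.pl
```

Retries, scheduling, and persistence are exactly what the heavier options above (Garivini, RabbitMQ, a CI server) add on top of this.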
I want to create a background worker for a ZF application I'm working on, but I'm baffled not so much by the software architecture as by the filesystem architecture.
The worker would be triggered by a controller to perform some tasks, and then the controller would check up on the status of the worker, so that part is covered.
From the folder-structure point of view, where should the code for the worker sit?
application/
    models/
    services/
worker/
    application/ --> code for the worker (standard ZF structure)
    worker.php   --> entry-point to the worker
Or
application/
    controllers/
        WorkerController.php
    models/
        Worker/
            Class.php
            Class2.php
    services/
worker.php --> entry-point to the worker
Bear in mind the configuration of the main application and the worker are almost identical (especially same db connection credentials, autoloading settings) and the worker would need to access the main application's models.
Any advice or opinions would be appreciated.
Many thanks,
Angel
If the worker is triggered via a cronjob, then you could make the worker component a module, so it has its own controllers, views, etc. Then - as @MonkeyMonkey notes - your command-line script could make MVC requests to that module.
But it seems to me that this worker component might function more naturally as a service, a class containing functionality that gets invoked by your cron-triggered cli script. ZF-based cli scripts - optionally using Zend_Console_Getopt, which is pretty cool - can use the same config and Bootstrap class, selectively bootstrap resources (some might not be required for the cli-based task), and use the same autoloaders as the standard MVC app.
As you note, these workers will update a status table that would be accessible to the web-facing portion of the app, so those pages can read/render the status on request.
As for the filesystem structure of that, you could name these service classes something like Application_Service_MyWorker stored in the file application/services/MyWorker.php. You could perhaps even push down further, using something like Application_Service_Worker_MyWorker stored in application/services/Worker/MyWorker.php, though the latter might require adding another resource-type entry to the resource autoloader, similar to the way that mappers and DbTable-based models are defined in Zend_Application_Module_Autoloader.
MVC is not only helpful in web environments (Apache); you can use it for "background workers" as well (the view is your console). You just need to add a cli.php or something, handle the cli arguments (module, controller, action), create the request object, and pass it to the dispatcher.
So however you trigger the background worker (exec?), call your newly created cli.php and enjoy the features of your ZF application (configuration, autoloading, ...).
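For the cron-triggered case mentioned earlier, the trigger might be a crontab entry pointing at that entry-point script. A sketch only — the path, script name, and arguments are all placeholders for whatever your cli.php actually accepts:

```
# m  h  dom mon dow  command
*/5  *  *   *   *    /usr/bin/php /path/to/app/worker/cli.php worker index run
```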
In short, I can't tell you exactly what a cli.php would look like, but I found this tutorial:
Using Zend Framework from the Command Line
Important object for you: Zend_Controller_Request_Simple
I'm building an app that deals with customer queries, where I want to route the query through a decision tree showing appropriate views before taking some automated action against their query. Kind of like the game "20 questions"! Based on the answers at each stage, the path through the app will change.
I was thinking of using MVC, because there are only a few "types" of route and outcome - so I could build fewer pages that way, one to handle each type rather than one for each step. I was also thinking of using Workflow 4 to manage the page flow, because the flowchart model maps pretty nicely to what I'm trying to do.
Does anyone know any good reference apps that use Workflow for this kind of thing?
Thanks
Richard
There were a number of examples doing this sort of thing with WF3, but I haven't seen any for WF4. I suppose it is possible, but it means running the workflow synchronously and checking the bookmarks as soon as it becomes idle to see which operations are enabled at the moment. That should be possible using a custom SynchronizationContext that executes things synchronously, and using the Idle callback on the WorkflowApplication to check the current bookmarks.
I actually went with a different option in the end - I wrote a "GetNextAction" function that returned an ActionResult object based on my flowchart logic and state of objects. The controller processes whatever form inputs it's received, updates the object, then calls GetNextAction and returns the result of that function. Seems to be working out ok!
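The "GetNextAction" idea above boils down to a table-driven decision tree: map the current state and the user's answer to the next step, and return a terminal action when a leaf is reached. A language-agnostic sketch (in Python here, since the original is ASP.NET MVC; all names are illustrative, not from the real app):

```python
# Each node maps a possible answer to the next node; leaves are outcomes.
DECISION_TREE = {
    "start":      {"billing": "billing_q1", "technical": "tech_q1"},
    "billing_q1": {"yes": "refund", "no": "escalate"},
    "tech_q1":    {"yes": "reset_password", "no": "escalate"},
}

# States with no outgoing edges: the automated actions taken at the end.
TERMINAL_ACTIONS = {"refund", "escalate", "reset_password"}

def get_next_action(state: str, answer: str) -> str:
    """Return the next view/action for the current state and user answer."""
    return DECISION_TREE[state][answer]

# Walking the tree: a billing query answered "yes" ends in a refund.
step1 = get_next_action("start", "billing")   # -> "billing_q1"
step2 = get_next_action(step1, "yes")         # -> "refund"
assert step2 in TERMINAL_ACTIONS
```

The appeal of the table-driven form is the same as in the answer above: one controller/handler per *type* of step, with the flowchart logic kept as data rather than as a page per question.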