Why are CQ/AEM workflow steps not executing sequentially?

I created a workflow where I need the process steps to run one after another.
For example, the first step creates thumbnails in the different sizes I need. Once those thumbnails exist, I use them to create a folio, package the contents, and publish it. This is why the thumbnails must be created first.
I also need to start another workflow and wait for it to finish before moving on to the next step. How can this be done?
Right now it seems all the steps start at the same time instead of being executed one after another.

Related

Best Practice approach to schedule playlists using Liquidsoap (equeue alternative)

I'm in search of a best practice approach to schedule playlists using Liquidsoap. My current approach creates plenty of delays, hence not meeting the requirements for seamless playback.
Requirements:
When a new playlist is due to be scheduled, all previously scheduled playlist items should be removed.
Avoid any delays when clearing previously queued playlist-items.
My current implementation:
1. Schedule a bunch of files (representing a playlist) by pushing them to an equeue.
2. The queue starts playing.
3. When the next timeslot is due, the new playlist cannot simply be queued, because it would only start after all tracks queued by the previous playlist finish playing. Because of this, I first remove all tracks of the previous playlist using a Liquidsoap server script. This process is time-consuming and delays the timely execution of step 4.
4. Schedule the new files by pushing them to an equeue.
How can I do this more elegantly?
Is it possible to clear an equeue w/o creating delays?
If there are "more correct" Liquidsoap features to achieve this, like a playlist (can I control when it is actually played?) or request.dynamic (which is deprecated) instead of an equeue, please let me know.
Update: I'm currently using two queues, A and B. One minute before queue A should play, I populate it with tracks (the playlist). When it should actually be playing, I turn up the volume. Then, one minute before queue B should play, I populate that one, and when its time comes I transition the volume from queue A to B. In theory this solution would be fine, but the issue is that I'm not aware of any way to make the queues pause until I turn up the volume: the tracks seem to start playing the very moment the queue/playlist is filled.
It's hard to tell without reading the complete script, but I'm sure it's not possible to pause a queue. At best you can remove an item via the server interface: if it's the currently playing item and it's alone in the queue, removing it will stop that queue. You might be interested in the Beets examples, which discuss how an external program can populate sources.
To switch from playlist A to B, the Liquidsoap way is to populate B exactly when it's time; an operator like fallback will make the transition. See also fallback.skip.

How to wait until the DAM Update Asset workflow is completed, instead of Thread.sleep

In AEM CQ, I am using the AssetManager API to write content (uploaded images) into the DAM. This triggers the out-of-the-box DAM Update Asset workflow. I need to read renditions and asset properties that only become available once that workflow has completed.
My question is how to wait until the workflow is completed before reading the asset properties, instead of using Thread.sleep.
I tried a recursive function that iterates until the asset property is present. This gave a NullPointerException, but when I put a Thread.sleep of 50 ms inside the loop it works for me.
Another approach I tried was to get the workflow object inside the service to read the workflow status, but I found that it takes a few milliseconds for the OOTB workflow to start after the content is written, so here too I had to use Thread.sleep.
One more attempt was to use an event handler to listen for workflow events; we are able to match the event type "workflow completed". But how do we notify the service or JSP that the workflow is complete so we can read the asset properties and renditions?
It would be great if someone could share their suggestions or feedback on these approaches. Thank you.
You have the wrong approach to solving this problem. In my eyes you have exactly two reasonable solutions:
Create a workflow process/step and extend the DAM Update Asset workflow with your custom step.
OR
Create a JCR observation listener and listen for Event.PROPERTY_ADDED, for example, or use the higher-level Sling APIs and create an event handler with the appropriate topic, then execute your business logic as soon as the property you are looking for is added or changed.
Why not to use Thread.sleep() or a similar solution:
you don't know exactly when the workflow is executed; it may be delayed if many assets are uploaded, or it may simply get stuck
you cannot guarantee that your thread will be able to execute its logic; the instance may be stopped, for example
creating a new thread for every uploaded resource can be an expensive task; you also waste resources when you create an infinite loop, put those threads to sleep, wake them, and check again and again, and so on, until the thread is eventually able to do its job
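In AEM the listener approach above would be written in Java against the Sling/JCR APIs, but the difference between sleep-polling and event-driven notification can be sketched language-agnostically. A minimal Python sketch (the class, property name, and timings are made up for illustration): the consumer blocks on a completion event instead of looping with Thread.sleep.

```python
import threading
import time

class AssetStore:
    """Toy stand-in for the repository: a workflow eventually adds a property."""
    def __init__(self):
        self.properties = {}
        self._done = threading.Event()   # analogue of a "workflow completed" event

    def workflow(self):
        time.sleep(0.05)                 # simulated DAM Update Asset work
        self.properties["dam:status"] = "processed"
        self._done.set()                 # notify waiters instead of being polled

    def wait_for_property(self, timeout=None):
        # Blocks until the event fires -- no sleep/check loop burning a thread.
        if not self._done.wait(timeout):
            raise TimeoutError("workflow did not complete in time")
        return self.properties["dam:status"]

store = AssetStore()
threading.Thread(target=store.workflow).start()
status = store.wait_for_property(timeout=2)
print(status)  # -> processed
```

The point is the shape of the control flow: the workflow signals completion once, and the waiting code is woken immediately, rather than each waiter repeatedly sleeping and re-checking.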

Microservice download queue

I am making a website where I provide a button which can:
POST a JSON payload to a route (nothing hard so far),
but the request should also start multiple system commands and, at the end, provide a zip file the user can download.
To do that, I think I need a queue, because two users connected at the same time CANNOT both start the process.
Is a queue OK? I also don't know how to retain the session and send back the zip file...
PS: I am using Angular 2 and a Python web service.
There are three parts to your question.
First, allow only one execution of your system commands per user at a time.
This could be as simple as maintaining a synchronized flag bit per user, which stores 1 if a request can be processed and 0 otherwise. Whenever a POST request comes in, first check this flag: if it is not 1, return some non-200 status code; otherwise, set it to 0 and trigger the commands.
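Since you're on a Python web service, here is a minimal sketch of that per-user flag as an atomic test-and-set (the class and method names are illustrative, not from any framework):

```python
import threading

class PerUserGate:
    """One in-flight request per user: atomic test-and-set on a per-user flag."""
    def __init__(self):
        self._lock = threading.Lock()
        self._busy = set()            # users with a request currently running

    def try_acquire(self, user):
        with self._lock:              # the "synchronized" part
            if user in self._busy:
                return False          # caller should answer with e.g. HTTP 429
            self._busy.add(user)
            return True

    def release(self, user):          # call when the commands finish (or fail)
        with self._lock:
            self._busy.discard(user)

gate = PerUserGate()
print(gate.try_acquire("alice"))   # -> True  (request accepted)
print(gate.try_acquire("alice"))   # -> False (one already in flight)
gate.release("alice")
print(gate.try_acquire("alice"))   # -> True  (free again)
```

The check and the set must happen under the same lock; checking the flag and then setting it in two separate steps would let two simultaneous requests both pass.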
Second, handle multiple POST requests that trigger the system commands.
You should use a queue only if your system commands take significant time and run in the background.
Third, how to retain the session.
Retaining a session is not a good idea. You have two options: either the client continuously polls another endpoint to check whether the zip creation is complete, or (better) you use WebSockets to push a notification to the client once the zip is ready.
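The queue-plus-polling variant of the above can be sketched in Python like this. It is a sketch under assumptions, not a full service: `submit` stands in for your POST handler, `status` for the endpoint the client polls, and the zip-building commands are elided.

```python
import queue
import threading
import uuid

jobs = {}                      # job_id -> "pending" | "done"
work_queue = queue.Queue()

def worker():
    """Single background worker: jobs run one at a time, in arrival order."""
    while True:
        job_id = work_queue.get()
        if job_id is None:     # sentinel used to stop the worker
            break
        # ... run the system commands and build the zip file here ...
        jobs[job_id] = "done"  # the client's poll endpoint reads this
        work_queue.task_done()

def submit():
    """POST handler: enqueue the job, return an id the client polls with."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = "pending"
    work_queue.put(job_id)
    return job_id

def status(job_id):
    """GET /status/<job_id>: client polls until this returns 'done'."""
    return jobs.get(job_id, "unknown")

t = threading.Thread(target=worker)
t.start()
job = submit()
work_queue.join()              # in a real service the client polls instead
print(status(job))             # -> done
work_queue.put(None)           # shut the worker down
t.join()
```

Once `status` returns "done", the client issues a normal GET for the zip file, so no session state needs to survive between the POST and the download, only the job id.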

How should I stop a long-running WF4 workflow?

I'm developing some Workflow 4 activities that continuously loop and do some work. For example, one may watch an RSS feed and execute some steps as new items are added. I would like to be able to stop and restart this process cleanly (i.e., in a Windows service or Azure Worker Role). Currently, I have a While loop with a condition that always evaluates to true, and I just let the instance die when the app closes. But this does not seem like a very clean way to stop the workflow.
How should I stop and restart the workflow?
The exact approach depends a bit on how you host your workflow, but I am assuming you are using WorkflowApplication. In that case, the simplest option is WorkflowApplication's Cancel method, which cancels execution of the workflow. You could also create a bookmarked activity and resume a "stop" bookmark, or something similar, but that might be overkill.
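WorkflowApplication.Cancel is .NET-specific, but the underlying idea, a cooperative stop signal that the loop checks instead of running `while (true)`, is language-agnostic. A minimal Python sketch of that pattern (names and timings are illustrative):

```python
import threading
import time

stop = threading.Event()       # analogue of a cancellation request

def poll_feed():
    """Long-running loop: checks the stop signal instead of looping forever."""
    while not stop.is_set():
        # ... fetch the RSS feed and process any new items here ...
        stop.wait(0.01)        # sleep between polls, but wake early on cancel

t = threading.Thread(target=poll_feed)
t.start()
time.sleep(0.05)               # let it run a few cycles
stop.set()                     # clean shutdown request (service stop, etc.)
t.join()                       # loop exits at the next check
print("worker stopped cleanly")
```

The host (Windows service, worker role) sets the signal on shutdown, and the loop finishes its current iteration before exiting, instead of being killed mid-step.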

SAP Background Job: How's it Running?

I have to move an SAP background job (an ABAP report for A/P) into Cronacle, and I can't figure out how to stop the job in SAP so I can start running it on the Cronacle schedule.
The job runs in SAP under user SAPSYS every morning at 7:15 am, but if you look it up with SM37 there is no time scheduled for it and it is not triggered by an event; also, its status is SCHEDULED.
I had our Cronacle team search by job number, but they couldn't find any scripts pointing to that job. If you look at the finished job, it shows that it is scheduled daily for 7:15 am. Also, there are no predecessor or successor jobs listed. Is it possible it's being started from another job? How do I find out without deleting this one?
Some suggestions:
If you don't want to delete the scheduled job, try renaming it and see if it still runs.
Make sure that the user you are using for SM37 has full authorization for background administration.
A previous job can schedule, release, and create a new job. Look at what is running before the problematic job.
Look closely at the dev traces. They sometimes hint at what is going on in the system.
In addition to a previous job creating the new job explicitly, it is also possible that the job is created by an ABAP program that is scheduled in another job. Doing a where-used search on the function module JOB_OPEN and looking for Z* or Y* programs may give you a hint.
Another thing: is this scheduled job ever actually executed (i.e., are there any previous FINISHED jobs with the same name)? A scheduled job will not run unless it is first released, so if it never runs it may be obsolete.
Thanks for the responses! It turned out to be a case of "newbie ignorance": when using SM37 to view the job, I had neglected to extend the search date to the next day. I don't know why it doesn't show the released job for the current day, but extending the range to the next day showed it. That's a lesson I won't forget!