Is there a way to get the estimated completion time of a currently running Informatica workflow from the Infa metadata tables?

I am currently working with the metadata table REP_WFLOW_RUN from the Infa repository DB to get the status of workflows. The column RUN_STATUS_CODE shows whether a workflow is running, succeeded, stopped, aborted, etc.
But for my business use case I also need to report the estimated completion time of a particular workflow to the business.
Example: if a workflow started at 6:15, then along with the information that it has started, I also want to convey the time at which it is estimated to complete.
Could you please guide me on how to get this information from the Informatica repository database.
Many thanks in advance.

This is a very good question, but no one can answer it exactly :)
That said, you can apply the same kind of logic other scheduling tools use.
First, calculate the average time the workflow takes for a successful run. The output is a decimal number of hours.
SELECT AVG(end_time - start_time) * 24 AS avg_time_in_hr,
       workflow_name
FROM   rep_wflow_run
WHERE  run_status_code = 'succeeded'
GROUP  BY workflow_name;
You can then use the above value as the estimated time to completion for that workflow; dividing by 24 converts the hours back into fractional days for Oracle date arithmetic. The output is a datetime:
SELECT sysdate + avg_time_in_hr/24 AS est_time_to_complete FROM dual;
Now, this value is an estimate, not an exact figure. So on a bad day, when a run takes hours longer than usual, the average will be skewed, but we can't do much about that here.
I assumed your Infa metadata repository is on Oracle.
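The same arithmetic outside the database, as a minimal Java sketch (the average value is an example figure; like the SQL above, it adds the average duration to the current time):

import java.time.LocalDateTime;

public class EtaExample {
    public static void main(String[] args) {
        double avgTimeInHr = 1.5; // example output of the first query
        // equivalent of SYSDATE + avg_time_in_hr/24
        LocalDateTime estTimeToComplete = LocalDateTime.now().plusMinutes(Math.round(avgTimeInHr * 60));
        System.out.println(estTimeToComplete);
    }
}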

Related

Measuring the system time of a specific agent in AnyLogic

I've got 3 different product types of agents, each of which follows its own path through the factory. How can I measure the average time each product type spends in the system?
My logic looks like this (screenshots omitted): I wanted to start the measurement in the first Service block and complete it in the last Service block.
Now I get some really high numbers, which are absolutely wrong. The process itself works fine; if you run the measurement with the code "//agent.enteredSystemP1 = time()", you get a mean of 24 minutes per product. But how can I get the mean per product type?
Just use the same if-elseif-else distinction in the 2nd Service block as well; see the sketch below.
Currently, any agent leaving the system adds its time to every systemTimeDistribution, regardless of its product type.
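As a rough illustration, assuming each agent carries a productType parameter and there is one statistics/histogram object per type (systemTimeP1 etc. are hypothetical names modeled on the snippet in the question), the model code in the two Service blocks could look like this:

// "On enter" of the first Service block: stamp the entry time on the agent
agent.enteredSystem = time();

// "On exit" of the last Service block: route the measurement by product type
double systemTime = time() - agent.enteredSystem;
if (agent.productType == 1) {
    systemTimeP1.add(systemTime);
} else if (agent.productType == 2) {
    systemTimeP2.add(systemTime);
} else {
    systemTimeP3.add(systemTime);
}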

How to stop timeout in service block

I am modeling a ticket system with various SLAs. The model must contain several Service blocks with different reaction times (from 2 to 32 hours). In the Service blocks, only working hours should be taken into account, so the timeout should pause during non-working hours and on weekends. Could you please tell me how I can realize this?
Thank you very much in advance!
I can think of two answers: one simplified but workable in many cases, the other more advanced and probably more accurate.
Simplified approach: I would set the model's time units to hours and keep everything running as is, without any stopping. Then, at the end of the simulation, if the total time is 100 hours and you know you work 8 hours/day, 5 days/week, you know the total duration is 2.5 calendar weeks (see the small arithmetic sketch below). Of course, this has limitations and might become more complex later on if you want day-specific behaviour (e.g. you want to differentiate between Monday, Tuesday, etc.)
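A minimal sketch of that conversion in Java, using the numbers from the example above:

// convert raw model hours into calendar weeks, assuming an 8 h/day, 5 day/week working week
double totalModelHours = 100.0;
double hoursPerDay = 8.0;
double daysPerWeek = 5.0;
double calendarWeeks = totalModelHours / (hoursPerDay * daysPerWeek); // = 2.5 weeks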
Advanced, more accurate approach: create resources whose capacities are defined by a schedule and assign them to your Service blocks. Create a schedule and specify the working hours in it. Check the link below to learn more about schedules. I call this the more advanced approach because you need to make sure the schedule is defined correctly and that all elements in the model are properly controlled (e.g. non-Service blocks such as sources, delays, etc.).
https://help.anylogic.com/topic/com.anylogic.help/html/data/schedule.html?resultof=%22%73%63%68%65%64%75%6c%65%73%22%20%22%73%63%68%65%64%75%6c%22%20
I personally would use the first approach if the model is rather simple and modeling working hours is enough for analysis. Otherwise, I'd go for option 2.
Finally, another option I'd like to highlight is the "suspend/resume" functions. I am only adding this because you asked "how to stop the timeout": these functions specifically stop and resume the timeout, but you'll need to define the times at which they are executed (through an event, for example).

UiPath Orchestrator Triggers - Cron Expression For specific day of month or next working day if not a working day

I've currently got this Cron expression that I'm using to trigger a process in UiPath Orchestrator:
0 0 15 21W * ? *
It runs on the working day closest to the 21st of each month at 3pm.
However, I need it to run on the next working day at 3pm if the 21st is a non-working day.
I tried searching for an answer, but nothing quite fit the brief.
I used this website to build my expression (which is a great tool), but it only has an option for 'nearest day', not for the next working day after a specific day of the month: https://www.freeformatter.com/cron-expression-generator-quartz.html
As you don't want the nearest working day, you can't use the Orchestrator cron functionality for this. I would recommend creating a wrapper process as follows:
Create a new process; let's call it StartJobByCheckingDate
Now create a trigger that starts StartJobByCheckingDate each day at 3pm
That process is now the manager of your desired process
Now we need to check whether it is the 21st day
Here you have different ways to solve it:
You could create a DataTable or even a file in the StartJobByCheckingDate process that contains all the different days on which your desired process should be fired (but this is very manual; you probably don't want to update it every year, so it might not be the smartest solution, just the easiest)
The other idea is to check whether the current day is the 21st. If so, check whether it is a Saturday/Sunday (non-working day).
If true: create an empty dummy file somewhere to record that the 21st was a non-working day. On each following day, check whether that file exists; if it does and the current day is a working day, delete the file again and start your desired process (see the sketch below)
If false: just start your desired process directly
I think the second idea would be the best. Sure, you get 365 job runs/year, but if you keep that helper process lean, each run takes just seconds.
Another idea, instead of the dummy file, would be to use Entities. Smarter, but it needs some more time to get familiar with.
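For illustration, here is that gatekeeper logic as a minimal Java sketch. UiPath workflows aren't written in Java, and the marker-file path, the startDesiredProcess() stub, and treating only Saturday/Sunday as non-working days are all assumptions made to keep the example short:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.DayOfWeek;
import java.time.LocalDate;

public class StartJobByCheckingDate {
    // hypothetical marker file recording "the 21st fell on a non-working day"
    static final Path MARKER = Paths.get("C:/rpa/run-pending.marker");

    public static void main(String[] args) throws Exception {
        LocalDate today = LocalDate.now();
        boolean weekend = today.getDayOfWeek() == DayOfWeek.SATURDAY
                || today.getDayOfWeek() == DayOfWeek.SUNDAY;

        if (today.getDayOfMonth() == 21) {
            if (weekend) {
                Files.createFile(MARKER);  // remember that the 21st was a non-working day
            } else {
                startDesiredProcess();     // the 21st is a working day: run directly
            }
        } else if (Files.exists(MARKER) && !weekend) {
            Files.delete(MARKER);          // first working day after the 21st
            startDesiredProcess();
        }
    }

    static void startDesiredProcess() {
        // stub: start the real job here, e.g. via the Orchestrator API
    }
}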
We have (had) the exact same issue. Since UiPath doesn't offer a feasible solution out of the box, we work around the restriction using the following strategy: we trigger the actual job daily, with a custom-built, static NonWorkingDay list that simply suppresses the execution of the robot on every day we don't want it to run.
These steps are needed:
Get a list of all known bank holidays, Saturdays and Sundays until 2053 or so...
Build the static exclusion list using a script that does something like this (pseudocode; I will update the answer once we have actually implemented the solution, but see the Java sketch after these steps):
1. Get all valid execution dates
   loop through every 28th of the month until end of 2053
       if the date is in the bankHolidayList then
           loop until the next bank day is found
           add it to the list of valid ExecutionDates
       else
           add the date to the validExecutionDate-list
2. Build the exclusion-list
   loop through every day until end of 2053
       if the date is not in the validExecutionDate-list
           add it to the exclusionDate-list
Format the CSV accordingly and upload it to the Orchestrator tenant as a NonWorkingDay list
Update your trigger to run daily at your desired time, using the uploaded NonWorkingDay calendar
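A hedged Java sketch of that pseudocode (the date range, the collection names, and how the non-working-day list is loaded are assumptions; note the original targets the 28th, as above):

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ExclusionListBuilder {
    public static void main(String[] args) {
        // assumption: already filled with all bank holidays, Saturdays and Sundays until 2053
        Set<LocalDate> nonWorkingDays = new HashSet<>();
        LocalDate end = LocalDate.of(2054, 1, 1);

        // 1. get all valid execution dates: every 28th, pushed to the next working day if needed
        Set<LocalDate> validExecutionDates = new HashSet<>();
        for (LocalDate d = LocalDate.of(2024, 1, 28); d.isBefore(end); d = d.plusMonths(1)) {
            LocalDate run = d;
            while (nonWorkingDays.contains(run)) {
                run = run.plusDays(1); // loop until the next bank day is found
            }
            validExecutionDates.add(run);
        }

        // 2. build the exclusion list: every day that is not a valid execution date
        List<LocalDate> exclusionDates = new ArrayList<>();
        for (LocalDate d = LocalDate.of(2024, 1, 1); d.isBefore(end); d = d.plusDays(1)) {
            if (!validExecutionDates.contains(d)) {
                exclusionDates.add(d);
            }
        }
        // format exclusionDates as CSV and upload it as the NonWorkingDay calendar
    }
}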
While the accepted answer will surely work as well, we preferred this approach because having a separate robot that does nothing but execute a UiPath trigger just doesn't seem right to me. With this approach we have no additional code that we potentially need to maintain.
In my opinion, not having an out-of-the-box solution for this concern is a missing feature that UiPath will (hopefully) fix by the end of 2053 ;-)
Cheers
You can configure your trigger to launch more often and then manage the dates at the start of your process, but you must set up a list of holidays or check them in some way.
You can also use the calendar option of Orchestrator (+info).

JMeter to record results on hourly basis

I have a JMeter project with multiple GET and POST requests and assertions for them. I use the Aggregate Results and View Results Tree listeners, but neither can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data instead of hourly data. I would like to get the number of successful and failed requests/assertions per hour; maybe a bar chart would be the most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or date and hour), so that each hour the name changes and the results fall into a different category, i.e.:
The code for such name is:
${__time(dd:hh,)} the rest of sampler name
Such a sampler will appear as follows in the Aggregate Report (screenshot omitted; I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale).
Pros and cons of this approach:
Simple: you can aggregate by hour, minute, or any other time slice while the test is running, not only by analysis after execution.
Not listener-dependent: it can be used with pretty much any listener or visualizer.
If you also want overall stats, you will have to sum up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler will not go completely unnoticed from a performance perspective, but I don't think it will add visible overhead to a script.
You could get the same data by properly aggregating the JTL or CSV (whichever you use) after execution, so it doesn't give you anything that is impossible to achieve with standard methods.
The script needs altering to make this happen; if you have hundreds of samplers, it's going to take a while. And the same again if you want to change back...
You might want to use the Filter Results Tool, which has --start-offset and --end-offset parameters; you can "cut" your results file into the "interesting" pieces and plot them according to your requirements.
You can install the Filter Results Tool using the JMeter Plugins Manager.
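For example, extracting the second hour of a run might look something like this (the file names are placeholders, the offsets are seconds from the start of the test, and the --input-file/--output-file parameter names are assumed from the tool's documentation):

FilterResults.bat --input-file results.jtl --output-file hour2.jtl --start-offset 3600 --end-offset 7200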
Also be aware that according to JMeter Best Practices you should
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file; you can specify the results file location via the -l command-line argument.
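For instance, a non-GUI run that writes all results to a JTL file (the file names are placeholders):

jmeter -n -t test_plan.jmx -l results.jtl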
To get summarized results per hour, add a Generate Summary Results listener to your test plan:
Generates a summary of the test run so far to the log file and/or standard output
Update the interval in jmeter.properties to your needs; for 1 hour, that is 3600 seconds:
summariser.interval=3600
You will then get a summary of your requests per hour.
You can try the JMeter Backend Listener. It has integrations with Graphite and InfluxDB. After storing the results in one of these time-series databases, you can display them in a Grafana dashboard. Grafana has its own filtering for showing results on an hourly, daily, monthly basis and so on.

Dynamics CRM workflow failing with infinite loop detection - but why?

I want to run a plug-in every 30 minutes, to poll an external system for changes. I am on CRM Online, so I don't have ready access to a scheduling engine.
To run the plug-in, I have a 'trigger' entity with a timezone-independent date field.
Updating the field also triggers a workflow, which in pseudocode has this logic:
If (Trigger_WaitUntil >= [Process-Execution Time])
{
    Timeout until Trigger_WaitUntil
    {
        Set Trigger_WaitUntil to [Process-Execution Time] + 30 minutes
        Stop Workflow with status of: Succeeded
    }
}
If (Trigger_WaitUntil < [Process-Execution Time])
{
    Send email // tell an admin that the recurring task has self-terminated
    Stop Workflow with status of: Canceled
}
So, the behaviour I expect is that every 30 minutes the WaitUntil field gets updated (and the plug-in and workflow get triggered again), unless the WaitUntil date is before the execution time, in which case the workflow stops.
However, 4 hours or so later (probably 8 executions, although I haven't verified that yet) I get an infinite-loop warning: "This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow".
My question is: why? Do workflows have a correlation ID like plug-ins, which is carried through to the child workflow? If so, is there any way I can prevent this while maintaining the current basic mechanism of using a single trigger record to manage the schedule? (I've seen other solutions in which workflows create new records, but then you have to go round tidying up the old trigger records as well.)
Yes, this behavior is well known. The only way to implement recurring workflows without infinite-loop issues in Dynamics CRM using only out-of-the-box features is the Bulk Deletion functionality. This article describes how to implement it: http://www.crmsoftwareblog.com/2012/08/using-the-bulk-deletion-process-to-schedule-recurring-workflows/
UPD: if you want to run your code every 30 minutes, you will have to create 48 bulk-delete jobs with corresponding start datetimes like 12:00, 12:30, 1:00, and so on.
The currently supported method for CRM is to use the Azure Scheduler.
Excerpt:
create a Web API application to communicate with CRM and our external provider running on a shared (free) Azure web site and also utilize the Azure Scheduler to manage the recurrence pattern.
The free version of the Azure Scheduler limits us to execution no more than once an hour and a maximum of 5 jobs. If you have a lot going on $20 a month will get you executions every minute and up to 50 jobs - which sounds like a pretty good deal.
So if you wanted it every 30 minutes, you could create two jobs: one on the half hour and one on the hour.
The Bulk Deletion approach is an interesting workaround and something we've used before. It creates extra work and maintenance, though, so I try to avoid it if possible.
I would generally recommend building a Windows application and using the Windows scheduling feature (I know you said you don't have a scheduler available, but this is often forgotten). This approach works really well and is very easy to troubleshoot. Writing to logs and sending error email alerts is pretty easy, which makes it robust. The server doesn't need to be accessible externally; it only needs to reach CRM. If you had CRM on-prem, you could just use the same server.
Azure Scheduler is a great suggestion. This keeps you in the cloud which is nice.
SSIS is another option if you already have KingswaySoft or Cozy Roc in place.
You could build a workflow that creates another record and cleans up after itself; however, that is really using the wrong tool for the job. It's also very easy for it to fail and then not initiate the next record.
There is a solution called "Scheduled Workflow Runner": you create a FetchXML query to define a record set to run against and point it at an on-demand workflow that you want to run on each record.
http://alexanderdevelopment.net/post/2013/05/18/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/