Lead Time and Cycle Time - Connect to Analytics with Power BI Data Connector - azure-devops

I'm trying to validate the data between the Lead Time and Cycle Time widgets and the report imported from Azure DevOps (https://learn.microsoft.com/en-us/azure/devops/report/powerbi/data-connector-connect?view=azure-devops), but when I take the average the data doesn't match. Is there some place where I can find information about how the calculation is done or which filters to apply? Isn't it a simple average?
Lead Time example

As far as I know, lead time and cycle time also include weekends, so this could affect the average of the data.
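To illustrate the difference weekends make, here is a small Python sketch (the dates are invented for the example):
import numpy as np
import pandas as pd

start = pd.Timestamp('2022-01-07')  # a Friday: item first enters In Progress
end = pd.Timestamp('2022-01-11')    # the next Tuesday: item enters Completed

calendar_days = (end - start).days                         # 4, weekend included
business_days = np.busday_count(start.date(), end.date())  # 2, weekend excluded
print(calendar_days, business_days)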
From this doc:
Lead time is calculated from work item creation to entering a completed state. Cycle time is calculated from first entering an In Progress state to entering a Completed state.
In Power BI, you could check the CompletedDate and CycleTimeDays columns.
For example, in a chart with the settings and Date field shown in the screenshots:
The average CycleTimeDays: (147.183 + 133.340 + 133.340) / 5 = 82.77
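A quick check of that arithmetic: the three non-empty CycleTimeDays values are averaged over all five work items in the chart (the assumption here is that the two remaining items contribute zero):
vals = [147.183, 133.340, 133.340]    # the non-empty CycleTimeDays values above
n_items = 5                           # total completed work items in the chart
print(round(sum(vals) / n_items, 2))  # 82.77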
Here is a doc about creating a Power BI lead time/cycle time chart.

Related

Calculate daily overtime and daily mean overtime of a resourcePool - anylogic

I'm trying to calculate the overtime of each resourcePool. What I have so far is time-consuming and almost entirely manual, so I'm looking for the correct way to do this, please.
What I'm doing is as follows. For example, I have the pharmacists' resource pool (a regular-hours pool and an overtime pool; I split them to make it easier to get statistics for each shift and for optimization later on):
I added the following traceln in the On exit of the pharmacists' release block:
traceln(date());
traceln(agent.OrderID);
So it returns the date and time of every agent that passed. Then I run the model, copy the traceln output from the console, and process it in Excel:
the overtime hours of a day = last recorded time on that date - the start of the overtime shift
I repeat the above for each day, which takes too much time, especially since there are other resources whose overtime I also need.
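For reference, the manual Excel step above can be scripted. A minimal pandas sketch, assuming the trace output has been cleaned into a trace.csv with a timestamp column and that the overtime shift starts at 17:00 (the file name, column name, and shift start are all assumptions):
import pandas as pd

df = pd.read_csv('trace.csv', parse_dates=['timestamp'])

# Last recorded release time per calendar day
last_per_day = df.groupby(df['timestamp'].dt.date)['timestamp'].max()

# Overtime = last recorded time minus the start of the overtime shift
time_of_day = last_per_day - last_per_day.dt.normalize()
overtime_hours = (time_of_day - pd.Timedelta(hours=17)).dt.total_seconds() / 3600
overtime_hours = overtime_hours.clip(lower=0)  # days that never reached overtime

print(overtime_hours)
print('daily mean overtime:', overtime_hours.mean())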
Is there any simplified and quick method to get the daily overtime (and the daily mean overtime) of a resourcePool?
Thank you.

AnyLogic: Dataset resource_unit_states_log

I have created a simple model (see first attachment) in AnyLogic. Resource unit W1 is seized in service and resource unit W2 is seized in service 1. The delay time of service and service 1 is 5 minutes each. The interarrival time of source is 10 minutes and the interarrival time of source 1 is 6 minutes.
Now I would like to analyse the usage state of both resource units, but in the dataset resource_unit_states_log only the state "usage_busy" is logged. Is there any possibility to also log the usage state "idle" in this dataset? Later in my evaluation I want to know the exact date and time when the resource was in the "idle" state. Currently I can only read the exact date and time for the "busy" state from the dataset (see screenshot in first attachment). Theoretically, I could manually calculate the date and time of the "idle" state from the existing values, but that would take a long time with thousands of dates.
Another attempt was to track the "idle" state using a time plot. If I use W1.time() as the x-axis value, I get the model time (e.g. 0, 1, 2 ...) in the dataset. But instead I want the exact date, as in the resource_unit_states_log dataset, e.g. 27-12-2021 00:06:00.
Does anyone have an idea how I can solve either of these problems?
AnyLogic internal tables/logs are not modifiable; they are as they are. If you want the data in any other format, you need to collect it yourself with your own data-collection functions/code. In your case, the second approach is pretty good: you are collecting information every minute and you can export it. I usually do the post-processing in Python. I work with millions of rows and it takes a few minutes; in your case thousands of rows should take seconds. Here is how I would do it:
Export the data (in your second plot approach) into Excel. The data should look like this:
Open Jupyter notebook (or any IDE).
Read the data into Python. Let's say you have saved your data as data.xlsx.
Input your start_datetime, i.e. starting date and time of your simulation.
Then just add the minutes from your data to the start_datetime.
Write the modified data in a new Excel file called data_modified.xlsx. It will look like this:
Here is the full code:
import pandas as pd

# Read the exported plot data; column 'x' holds the model time in minutes
df = pd.read_excel('data.xlsx')

# Input your start date and hour below:
start_datetime = pd.Timestamp('2021-12-31 00:00:00')

# Add the model-time minutes to the simulation start to get real datetimes
df['datetime'] = start_datetime + pd.to_timedelta(df['x'], unit='m')

df.to_excel('data_modified.xlsx')
Another approach:
You can use the On seize unit and On exit fields inside the Service block to log the time when the resource is seized and released, using the function time(), and write this information into a dataset. Then do the calculations.
You can also use some of AnyLogic's date conversion functions.

Is there a way to get the estimated time of completion of a currently running Informatica workflow from Infra metadata tables

I am working with the metadata table REP_WFLOW_RUN from the Infra DB to get the status of workflows. The column run_status_code shows whether a workflow is running, succeeded, stopped, aborted, etc.
But for my business use case I also need to report to the business the estimated time of completion of a particular workflow.
Example: suppose a workflow generally starts at 6:15; then, along with the information that the workflow has started, I also want to convey that it is estimated to complete at such-and-such a time.
Could you please guide me if you have any details on how to get this information from the Informatica database.
Many thanks in advance.
This is a very good question, but no one can answer it exactly :)
That said, you can apply the same logic other scheduling tools use.
First calculate the average time the workflow takes to complete over its successful runs. The output is a decimal number of hours.
select avg(end_time - start_time) * 24 as avg_time_in_hr, workflow_name
from REP_WFLOW_RUN
where run_status_code = 'succeeded'
group by workflow_name
You can use the above value as the estimated time to completion for that workflow. The output is a datetime.
select sysdate + avg_time_in_hr/24 as est_time_to_complete from dual
Now, this is an estimated figure, not an exact value. On a bad day, when a run takes hours longer, the average will be off, but we can't do much about that.
I assumed your Infa metadata is on Oracle.
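Putting the two steps together, here is a rough Python sketch that computes the ETA of a running workflow as its start time plus the historical average duration. The python-oracledb connection details and the run_status_code values are assumptions; adjust them to your repository:
import oracledb
from datetime import timedelta

# Placeholder connection details for the Oracle repository DB
conn = oracledb.connect(user='repo_user', password='...', dsn='infahost/repsvc')
cur = conn.cursor()

# Average historical duration per workflow (Oracle date arithmetic yields days)
cur.execute("""
    select workflow_name, avg(end_time - start_time)
    from REP_WFLOW_RUN
    where run_status_code = 'succeeded'
    group by workflow_name
""")
avg_days = dict(cur.fetchall())

# ETA for currently running workflows: start time + historical average duration
cur.execute("""
    select workflow_name, start_time
    from REP_WFLOW_RUN
    where run_status_code = 'running'
""")
for wf, started in cur.fetchall():
    eta = started + timedelta(days=avg_days.get(wf, 0))
    print(f'{wf}: started {started}, estimated completion {eta}')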

How to know the time remaining until an AnyLogic model is executed

Is there any way to know the time remaining until an AnyLogic model completes its execution, in real time, using traceln?
I am triggering an AnyLogic model jar file using VBA and want to show a progress bar displaying the progress of the experiment and the time remaining until the model execution is complete.
I think you want to know the real (wall-clock) time that the model execution will take and use that to drive a progress bar.
Felipe showed how to calculate the simulation time based on the start and stop times set in the simulation experiment and the current simulation time.
I don't think there is an easy way to do exactly what you want. Based on trial and error you could insert some traceln calls inside your model and use them to time the progress bar, though the timing would change with your inputs. Even the progress bars you see in Microsoft apps/installations are not very accurate, so this is never easy.
double remainingTimeSeconds = getEngine().getStopTime(SECOND) - time(SECOND);
traceln(remainingTimeSeconds);
More generally, if your model doesn't start at t=0, you can do the following to get the remaining progress:
double totalSimTimeSeconds = getEngine().getStopTime(SECOND) - getEngine().getStartTime(SECOND);
double timeSinceStartSeconds = time(SECOND) - getEngine().getStartTime(SECOND);
double remainingTimeSeconds = totalSimTimeSeconds - timeSinceStartSeconds;
double fractionRemaining = remainingTimeSeconds / totalSimTimeSeconds;
traceln(fractionRemaining * 100 + "%");

Grafana taking a measurement at a specified time

I have a daily count metric being pushed to Prometheus. It's important to have the measurement every few minutes, but I also want a measurement at a specified time (the end of the day) to see the daily total. Is there a way to specify the time of the measurement?
I have set the min_step (time step) to 24h. Doing so gives me measurements at 20:00:00 each day. Ideally this would be between 23:50:00 and 23:59:59.
The chart type is a Graph, and the PromQL query is:
max(table_row_count) by (table)
with min_step = 24h, format = time series, and min time interval = 24h. Relative time is set to 7d to get a weekly view of the tables.
I am expecting some way to set the timestamp of the query that should be run every 24h.
Prometheus doesn't have any cron features; you would have to handle the scheduling yourself.
This means the first requirement is to get the data you want at the given time. That can easily be done with a GET on the URL of the metric you want (for example, using curl).
Now, the question is how to feed it to prometheus. I see three possibilities:
dump the content into a file and let node exporter expose it to Prometheus (and erase it after a while); a careful metric rewrite can be used in Prometheus to sanitize it (a sketch of this option follows the list)
write your own exporter to expose it (easy to do, especially since you already have the data in the right format)
push it to a Pushgateway, but there is currently no way to make the data expire
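Here is a hedged Python sketch of the first option: run it from cron shortly before midnight, query Prometheus over its HTTP API, and write the snapshot to a .prom file for node exporter's textfile collector. The Prometheus URL, the output path, and the daily_table_row_count metric name are assumptions:
import requests

PROM_URL = 'http://localhost:9090/api/v1/query'  # assumed Prometheus address
QUERY = 'max(table_row_count) by (table)'        # the query from the question

resp = requests.get(PROM_URL, params={'query': QUERY})
resp.raise_for_status()
results = resp.json()['data']['result']

# Write in the exposition format the node exporter textfile collector expects
with open('/var/lib/node_exporter/textfile/daily_row_count.prom', 'w') as f:
    for r in results:
        table = r['metric'].get('table', 'unknown')
        value = r['value'][1]
        f.write(f'daily_table_row_count{{table="{table}"}} {value}\n')
A crontab entry such as 55 23 * * * python3 snapshot.py would then produce the 23:50-23:59 sample the question asks for.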