ADO: Analyzing Sprint performance - azure-devops

I am a PO leading a small development team for enhancements to our PeopleSoft Campus Solutions application for a Medical School.
We are using the Sprint functionality in ADO to assign stories from our backlog to the Sprint, create the relevant tasks for each story (mainly development, testing, deployment) and assign the tasks to resources who, in turn, provide effort values (original estimate, remaining, completed). We also make sure our capacity is properly set, with resources OOO time and school holidays configured to get an accurate team and resource capacity. The team updates their effort numbers daily to ensure we are tracking burndown.
While we always start the Sprint with the remaining work hours under team capacity (and the same at the resource level), we have historically left a lot of remaining work on the table at the end of the Sprint.
My leadership wants to answer the question "Why was the work left on the table?". Of course, there could be MANY reasons: we underestimated the effort; we were blocked on a task (for example, we can't start the testing task until the development is done); the resource didn't actually have the calculated capacity because they were pulled into other meetings or initiatives; or (and I don't think this is the case) people were just plain lazy.
What reports/analytics can I leverage to help answer this question? Even just seeing a list of remaining tasks per resource with remaining task effort and with a total amount of work remaining per resource overall would be helpful, but I can't seem to find anything.
Any suggestions or guidance is appreciated!

You can use Queries to find the remaining tasks (Column options -> add Remaining Work) and save the query into Shared Queries.
There is a Query results widget on the dashboard to display a query from Shared Queries. Do not forget to add the Remaining Work column to the widget.
You could refer to the document: Adjust work to fit sprint capacity
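If the built-in views don't surface a per-resource total, a quick workaround is to export the saved query's results and sum them yourself. Below is a minimal sketch in Python, assuming you've exported the sprint task query (e.g. via "Export to CSV" or the REST API) into a list of dicts; the field names mirror the ADO columns, but the data here is made up for illustration:

```python
# Sketch: total Remaining Work per resource from exported query results.
from collections import defaultdict

tasks = [
    {"Assigned To": "Dana", "Title": "Dev: admissions page",  "Remaining Work": 6.0},
    {"Assigned To": "Dana", "Title": "Test: admissions page", "Remaining Work": 4.0},
    {"Assigned To": "Lee",  "Title": "Deploy: grade sync",    "Remaining Work": 2.0},
]

def remaining_by_resource(tasks):
    """Return {resource: total remaining hours}, treating blanks as zero."""
    totals = defaultdict(float)
    for t in tasks:
        totals[t["Assigned To"]] += t.get("Remaining Work") or 0.0
    return dict(totals)

print(remaining_by_resource(tasks))  # {'Dana': 10.0, 'Lee': 2.0}
```

The same totals could then be tracked day over day to see whose queue isn't shrinking.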

Related

Scheduling Requirements instead of Tasks or Features?

I am trying to understand why Azure DevOps does not allow start/finish dates for Requirements (CMMI process), as opposed to seemingly just Features and Tasks. In addition, it's odd that if I add a Requirement to an Iteration (which has dates), I see it on a Delivery Plan:
I can move it out of the sprint by dragging the start and end dates out in the Delivery Plan,
but I don't see any date information on the ticket itself?
The idea is that since Epics and Features are likely to span sprints, you'd use these dates to build a plan. But Requirements should be small enough in a sprint and would take their dates from there.
The start date and end date of the Requirement (comparable to a User Story or Product Backlog Item) are filled automatically when a Requirement is moved to In Progress and Done. The Delivery Plan has no direct relation to these fields, which is why they are read-only.
Remember that Delivery Plans work the same across the Agile, Scrum, and CMMI templates, so a number of assumptions are made about your delivery process: you work in sprints, and work performed in a sprint is generally finished within that sprint.

Azure DevOps Delivery Plan (Preview) - Not all features are showing

I am discovering the Azure Delivery Plan but I don't understand why I don't see all my Features in there.
This is what I am talking about:
https://learn.microsoft.com/en-us/azure/devops/boards/plans/review-team-plans?view=azure-devops&tabs=plans-preview
I have looked at the tags, owners, start/end dates, and so on, but cannot find any criteria that indicate why I see certain Features and not others.
I am also a member of the projects that I do not see.
Can anyone shed some light for me on this one?
In my test, if the dates of two iterations overlap, the features under those iterations will not be displayed.
For example:
If the date does not overlap, the features under the iteration will be displayed normally.
You can check if this is your case.
This one is actually on me and was kind of logical.
The features were on the backlog and didn't have any iteration assigned; hence, they were not showing below any iteration.
Thank you for the suggestions and feedback! Case closed!

Azure Devops Tracking committed vs actuals

My organization is trying to find an out of the box way with Azure DevOps to see which features were 'committed to' at the start of the release, and which are delivered. The Velocity report would be perfect, except Features are assigned to areas that are configured to run off of sprints that are child-iterations of larger release-iterations, and we want the data at the release-iteration level.
We're able to build queries that can mostly deliver this, but that method doesn't track changes, just shows you a current point in time view of how things are.
The goal is to have data we can use to evaluate if we're making commitments we can keep.
How have other organizations tackled this sort of problem? How do you tie committed vs. actuals at the Feature level?
I understand your requirements. But based on my test, the Velocity report has some limitations:
For example:
If the Iteration Path has child iterations, the Velocity report shows only the child iterations. As you said, the release-iteration will not show in the report.
So it cannot meet all your needs.
I tested some related extensions and existing charts, and it seems there is no tool that can improve on or replace the Velocity report.
For a workaround:
For child iterations, you could still use the Velocity report to track progress.
For the parent iteration, you could create different queries to show progress (Planned, Completed, Completed Late, and so on). You can use a query to get the work item list for the corresponding state.
Here are example queries for each state (Planned, Completed, ...).
Then you could add them to the dashboard using the Query tile widget.
On the other hand, this requirement is valuable.
You could add your request for this feature on our UserVoice site, which is our main forum for product suggestions.
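Since the queries only show a point-in-time view, one lightweight way to close the tracking gap is to snapshot the committed Feature IDs at release start (e.g. save the query results on day one) and diff them against what was delivered at the end. A sketch under that assumption, with made-up IDs:

```python
# Sketch: committed vs delivered at the release-iteration level.
# Assumes you captured the committed set with a query snapshot on day
# one of the release; all IDs below are illustrative.
committed = {101, 102, 103, 104}   # Feature IDs at release start
delivered = {101, 103, 105}        # Features Done at release end

kept   = committed & delivered     # committed and delivered
missed = committed - delivered     # committed but not delivered
added  = delivered - committed     # delivered but never committed

reliability = len(kept) / len(committed)   # "say/do" ratio
print(f"kept={sorted(kept)} missed={sorted(missed)} "
      f"added={sorted(added)} reliability={reliability:.0%}")
```

Tracked release over release, the reliability ratio directly answers "are we making commitments we can keep?".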

Do I need to change the remaining hours on a task to zero before marking it as "Done"?

I'd like for the community to help me resolve a disagreement I have with a teammate about moving tasks on the sprint board from "In Progress" to "Done."
Does one need to reduce the remaining hours to zero before moving the task to "Done"? Or is it recommended to keep the hour estimate on each task as it is, before moving the card over to the right?
A quick demo of both approaches: https://share.getcloudapp.com/nOuN42y8
Does one need to reduce the remaining hours to zero before moving the task to "Done"? Or is it recommended to keep the hour estimate on each task as it is, before moving the card over to the right?
Just my personal opinion, but I think a team member should reduce the remaining hours to zero before moving the task to "Done" if the task is really done!
Please check Update remaining work:
Updating Remaining Work, preferably prior to the daily Scrum meeting, helps the team stay informed of the progress being made. It also ensures a smoother burndown chart.
Each team member can review the tasks they've worked on and estimate the work remaining. If they've discovered that it's taking longer than expected to complete, they should increase the remaining work for the task. Remaining Work should always reflect exactly how much work the team member estimates is remaining to complete the task.
So it is officially recommended to update the remaining work as the task nears completion, reducing it to zero once the work is done. That way the team manager can gauge members' workloads accurately!
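To see why this matters for the chart, here is a small illustration, assuming the burndown simply sums the Remaining Work field across the sprint's tasks regardless of state (the exact behavior can vary by chart and version, so treat this as a sketch):

```python
# Sketch: leftover hours on a "Done" task distort the burndown,
# assuming the chart sums Remaining Work across all sprint tasks.
tasks = [
    {"State": "Done",        "Remaining Work": 3.0},  # not zeroed out
    {"State": "In Progress", "Remaining Work": 5.0},
]

raw_burndown = sum(t["Remaining Work"] for t in tasks)
honest_burndown = sum(
    0.0 if t["State"] == "Done" else t["Remaining Work"] for t in tasks
)
print(raw_burndown, honest_burndown)  # 8.0 5.0
```

The 3-hour gap between the two numbers is exactly the "phantom" work that makes the burndown look worse than reality.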

Scheduling variables sized work items efficiently

(I have also posted this question at math.stackexchange.com because I'm not sure where it should belong.)
I have a system with the following inputs:
Set of work items to be completed. These are variable sized. They do not have to be completed in any particular order.
Historical data as to how long work items have taken to complete in the past. However, past performance is no guarantee of future success! That is, once we come to actually execute a work item, we may find that it takes longer or shorter than it has previously.
There can be work items that I have never seen before and hence have no historical data about.
Work items further have a "classification" of "parallel" or "serial".
Set of "agents" which are capable of picking up a work item and working on it. The number of agents is fixed and known in advance. An agent can only work on one work item at a time.
Set of "servers" against which the agents execute work items. Servers have different capabilities. Specifically, they are capable of handling different numbers of agents simultaneously.
Rules:
If a server is being used to execute a "serial" work item, it cannot simultaneously be used to execute any other work item.
Provided a server isn't being used to execute any "serial" work items, it can simultaneously handle as many agents as it is capable of, all executing "parallel" work items.
There are a handful of work items which must be executed against a specific server (although any agent can do that). These work items are "parallel", if that matters. (It may be easier to ignore this rule for now!)
Requirement:
Given the inputs and rules above, I need to execute the set of work items "as quickly as possible". Since we cannot know how long a work item will take until it is complete, we cannot possibly hope to derive a perfect solution up front (I suppose), so "as quickly as possible" means not manifestly doing something stupid like just using one agent to execute each work item one by one!
Historically, I've had a very simple round-robin algorithm and simply sorted the work items by descending historical duration such that the longest running work items get scheduled sooner and, hopefully, at the end of the cycle I'm able to keep all agents and servers reasonably well loaded with short-duration work items. This has resulted in a pretty good "square" shape to the utilization graph with no long tail of long-duration work items hanging around at the end of the cycle.
This historical algorithm, however, has required me to pre-configure the number of agents and servers and pre-allocate work items to "pools" and assign pools to servers, and lots of other horrible stuff. I now need to support a dynamic number of agents and servers without having to reconfigure things. (Note that the number of servers will be fixed during a cycle - that is, the number will only change between cycles - but the number of agents may increase or decrease in the middle of the cycle.)
Once all work items are complete, we record how long each work item took to feed in to the next cycle and start again from the beginning!
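The longest-estimated-duration-first approach described above can be sketched as a small discrete-event simulation. This is an illustrative sketch, not a reference implementation: the serial/parallel rules are reduced to a per-server slot count where a serial item claims every slot, the special "pinned to a server" rule is ignored, and all names and numbers are made up:

```python
# Sketch: greedy "longest estimated duration first" scheduling of
# variable-sized work items across agents and capacity-limited servers.
import heapq

def schedule(items, n_agents, server_slots):
    """items: list of (name, est_duration, kind), kind 'serial'|'parallel'.
    server_slots: {server: max simultaneous parallel agents}.
    Returns (makespan, {item: finish_time}). Assumes every item can
    eventually be placed on some server."""
    pending = sorted(items, key=lambda it: -it[1])   # longest first
    free_agents = n_agents
    used = {s: 0 for s in server_slots}              # slots in use
    serial_busy = {s: False for s in server_slots}
    events = []                                      # (finish, name, server, slots, kind)
    now, finished = 0.0, {}

    def try_start():
        nonlocal free_agents
        i = 0
        while i < len(pending) and free_agents > 0:
            name, dur, kind = pending[i]
            placed = False
            for s, cap in server_slots.items():
                if serial_busy[s]:
                    continue                         # whole server is taken
                if kind == "serial" and used[s] == 0:
                    serial_busy[s] = True
                    need = cap                       # serial claims every slot
                elif kind == "parallel" and used[s] < cap:
                    need = 1
                else:
                    continue
                used[s] += need
                free_agents -= 1
                heapq.heappush(events, (now + dur, name, s, need, kind))
                pending.pop(i)
                placed = True
                break
            if not placed:
                i += 1                               # try the next item

    try_start()
    while events:
        now, name, s, need, kind = heapq.heappop(events)
        finished[name] = now
        used[s] -= need
        if kind == "serial":
            serial_busy[s] = False
        free_agents += 1
        try_start()                                  # backfill freed capacity
    return now, finished

makespan, finish = schedule(
    [("a", 4, "parallel"), ("b", 3, "parallel"), ("c", 2, "serial")],
    n_agents=2, server_slots={"s1": 2})
print(makespan, finish)  # 6.0 {'b': 3.0, 'a': 4.0, 'c': 6.0}
```

One way to handle the dynamic agent count would be to adjust free_agents when an agent joins or leaves and call try_start again; since the greedy choice is made at each dispatch rather than up front, no pre-allocation of pools is needed.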