How to define Columns and Statuses in JIRA that make sense for all issue types (Tasks, Epics, etc.) - workflow

Background:
JIRA offers a single set of statuses for all types of issues in a project.
Problem:
The problem is that the status set for a Task is To Do, In Progress, and Done, while for a User Story in the same project it might be Designing, Developing, Testing, Releasing, and Done. It can be different again for a Bug or an Epic.
Question:
How do you keep track of the workflow of your product and at the same time manage the status of your tasks using the single set of JIRA statuses?
PS: I know they can be customized for each project, but it doesn't help because you can't customize them for each issue type separately.

I think one of the reasons that JIRA offers To Do, In Progress, and Done is that these can apply to anything. You either haven't done it, you're doing something, or you finished. That set can apply to any type of item.
That being said, I feel your pain in wanting a better view into the true state of an issue. What we have found works for our OnDemand agile boards is to set up something like the following:
To Do
In Progress
Ready for Review
In Review
Done
For most types of issues, this can work. It adds an extra layer that helps identify what is ready for testing.
One of the things that is tricky is dependent tasks. For example, I noticed you mentioned "Designing" as a stage, and I'm not sure this makes sense in an agile context. If the design is emerging from the development, it may be better to let the design/development flow within the development team. However, we all know that sometimes you need to get some details ironed out before you can proceed, or there may be some people who need to become involved before a dev can proceed. We made the mistake of trying to turn this into a stage, but what we found was that this was really either a sub-task for part of the team, or an impediment (blocker). By flagging stories, you can identify that a story requires something to be done before the development team can proceed.
If you are using Kanban, and not a Scrum board, the sub-task approach will not be for you. In those cases, you'll just need to make sure you have stages that make sense for all the issues you create. Stages will have to be fairly 'generic'. This sounds bad.
But it is not!
I believe teams generally use the stages for a few reasons:
Checking the status of an iteration
Informing other team members that they can pick up an item
Getting a visual estimate of how close to Done an issue is
More stages don't necessarily give a better view of an iteration's status; you really just need to see how many points you've closed and how many are in progress. So, at least for that goal, a more generic set of stages should work.
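As a toy illustration of that first goal (all numbers and statuses here are invented), two sums are all the board really needs to answer:

    # Hypothetical issues on a generic board: (story points, status) pairs.
    issues = [
        (5, "Done"),
        (3, "In Progress"),
        (8, "To Do"),
        (2, "Done"),
    ]

    # Iteration status boils down to two numbers, however many stages exist.
    closed = sum(points for points, status in issues if status == "Done")
    in_progress = sum(points for points, status in issues if status == "In Progress")
    print(f"{closed} points closed, {in_progress} points in progress")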
As for informing team members, too often I've seen teams retreat to the digital board to replace communication with each other. The fewer stages you have, the more you can force your team to talk to each other and work together to get a story to done. Things will work better this way, I guarantee it! Having a bit of a break-down helps, especially if you are working on a lot of items at once or have distributed teams working in different time zones, but keeping it simple is usually better.
Tracking "how close to Done" is the hardest to do with generic stages. However, multiple stages can be misleading. An item that is almost all the way across might have a severe bug in it that hasn't been found yet, so no matter how many stages you have, your view of this item isn't any more accurate than a single "In Progress" stage. It isn't Done until it's Done :)
This was a long way for me to recommend keeping your workflow simple and letting your team use communication to keep on top of things. Maybe I should have just started with that!

The statuses that are available to each project are determined by the Workflow to which it is assigned. Not only does a workflow define the statuses, it also defines which statuses you can progress to from a particular status. You can either create your own workflows or download predefined workflows that suit your needs.
In order to have separate workflows for different issue types, we need to define a Workflow Scheme:
1- Go to Jira Administration -> Workflow Schemes
2- Edit the Workflow Scheme that is assigned to your project
3- Click "Add Workflow" to add a workflow for the issue types that need a different one, and assign those issue types to it.
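If you would rather script this than click through the admin screens, newer JIRA Server versions also expose workflow schemes over REST. The following is only a sketch: the server URL, credentials, scheme ID, issue type ID, and workflow name are all placeholders you would need to replace, and the exact resource paths can vary by JIRA version.

    # Sketch: point one issue type at its own workflow via the REST API.
    # Assumes a JIRA version with the /rest/api/2/workflowscheme resource;
    # all IDs, names, and credentials below are placeholders.
    import requests

    BASE = "https://jira.example.com"   # hypothetical server
    AUTH = ("admin", "secret")          # an account with admin rights
    SCHEME_ID = 10100                   # the scheme assigned to your project
    ISSUE_TYPE_ID = "10002"             # e.g. the Story issue type

    # Inspect the current issue-type -> workflow mappings of the scheme.
    scheme = requests.get(
        f"{BASE}/rest/api/2/workflowscheme/{SCHEME_ID}", auth=AUTH).json()
    print(scheme.get("issueTypeMappings", {}))

    # Map the issue type to a dedicated workflow; others keep the default.
    resp = requests.put(
        f"{BASE}/rest/api/2/workflowscheme/{SCHEME_ID}/issuetype/{ISSUE_TYPE_ID}",
        json={"issueType": ISSUE_TYPE_ID, "workflow": "Story Workflow"},
        auth=AUTH,
    )
    resp.raise_for_status()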

Related

CI pipeline in multi stage environments amongst many Scrum teams

I am looking for a solution for 20 Scrum teams for how to push code through different environments:
Dev (where developers can code and run unit tests)
SIT (integration with stubbed services)
QA (where QA testing happens, with real integration points, no stubs, currently maintained by a separate team, so that they keep track of what's going in)
Stage (similar to Live, with sensitive data, maintained by a separate team)
Live (that's the live game)
The sticking point here is that many teams would try to push to SIT at once; things could take time to deploy, and potential bottlenecks could form. Also, we need to ensure that our code works well with real integration points (the QA env).
With respect to Scrum, when should we call a user story Done: when we push to SIT, or to QA?
I'm sure this has been asked before, but I couldn't find the exact terminology, so feel free to point me to it.
EDIT: it's a brand new product, clean slate, no code or pipelines as of yet.
OK, your exact question was: When do you call a User Story done? In Scrum, it is Done when it is potentially shippable, so in your setup: Stage.
Now, I expect that will sound unrealistic to you and your team. That is because you have a number of snags in your process that you'll have to solve to really accomplish CI/CD and to have potentially releasable code each sprint:
Continuous Integration. I don't mean the server and platform/tool you use. I mean actually integrating everyone's code on every checkin. If you've got 20 teams who don't do this today, they aren't going to suddenly start tomorrow. As soon as you try, you're going to run into all kinds of practice, process, and architectural challenges. You'll need to work through those in order to achieve this. My best suggestion is start by having teams in common areas continuously integrate with each other, then start breaking down the barriers between those groups. If even that is too much, maybe just have each individual team integrate with each other multiple times a day as a start. Honestly, the rest of the steps aren't very relevant if you haven't got this down.
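To make "integrate multiple times a day" concrete, the commit-stage gate can start embarrassingly small. A sketch, assuming a Python codebase whose unit tests run under pytest; the two commands are assumptions, so swap in your own build and test steps:

    # Minimal commit-stage gate, run on every check-in: fail fast and loudly.
    import subprocess
    import sys

    STEPS = [
        ["python", "-m", "compileall", "-q", "."],  # cheap sanity "build"
        ["python", "-m", "pytest", "-q"],           # unit tests only: keep it fast
    ]

    for step in STEPS:
        result = subprocess.run(step)
        if result.returncode != 0:
            # A red gate means the trunk is broken; fixing it beats new work.
            sys.exit(result.returncode)

    print("commit stage green -- safe to promote")

Everything slower (stubbed SIT runs, real-integration QA runs) then belongs in later pipeline stages that promote the same build rather than rebuilding it.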
Testing is something that happens elsewhere: at a different stage, in a different environment, and probably with a different team. This is a problem for two reasons. First, if testing happens after the story is called done, it reinforces the idea that the job of the team is to write code, not to create working, usable functionality. Second, those bug reports are going to come back, and then stuff that was done and integrated has to be reworked, redone, and re-integrated. If integration was painful before, this adds a multiplier onto it.
Do you have cross-functional teams working on increments of value? It's a bit of a stretch for me to guess here, but service stubs and difficult integrations are often signs that different teams are working on different components. This creates a lot of opportunity for misalignment that can exacerbate your challenges.
Ok, last one. You have whole teams maintaining environments. That's a big red flag. It means your system is either extremely complicated, or people are leaving a lot of loose ends, or both. If you have to build whole teams around synchronizing other teams, you may be putting a band-aid on your problem. Your environment should be predictable and stable. That means most tasks regarding your environment should be automatable, leaving the other teams to handle the odd task that isn't.
This probably isn't the answer you were hoping for, but these are likely the challenges you'll have to tackle to get to your goal.

Managing volatile changes with TFS

I work in a shop where we maintain numerous .Net projects that require many small changes. We typically get a Service Request from our customer asking for a new feature. We need to ensure that the work we do is checked into TFS and can be related back to the SR in our help desk database, and that the changes to our code can be reviewed in isolation.
There have been a few strategies that we have discussed, but I hope this question isn't considered subjective as I feel there must be a single practice here that we should be employing. TFS has been used primarily as a source control repo for us, but we are looking to leverage more of it.
1) Currently, a developer creates a Task in TFS and gives it the name of the SR work number. Then all changes to the codebase are checked in against that task. I personally am hesitant about this approach, as we are co-opting the Task artifact in a way it wasn't intended for.
2) There has been discussion about branching for each new feature request we receive and tagging the branch with the SR work number. Should we be concerned about the overhead here? My understanding is that branching and merging can lead to complexity.
3) Simply add a comment to the changeset that is prefixed with the SR work item number. This is a simple approach, but when I View History, there doesn't seem to be an easy way to search through the changeset comments for the SR work number.
4) We're not terribly familiar with labelling, but would it be an option? It sounds like we could tag our Team Project with our SR work number once the work has been completed, and that would provide us with the snapshot we would need if we ever needed to refer back to the changes made.
Obviously, if I've missed the boat entirely, I'd be grateful for guidance.
I don't know if you're aware, but you can customize TFS work items. You can create a Service Request work item: make it a kind of Requirement, and make the tasks needed to create the new feature children of the Service Request work item.
You can then use Branches, but only as a method for isolating the work of one feature request from another. As you check in work to the branch, be certain to associate each check-in with a task. You will be able to track the tasks across changesets and across branches.
As you perform builds, they will be associated to the changesets, and therefore, to the service requests. In the same way, test cases, bugs, and the tasks needed to remediate the bugs will also be associated to the service request. You will be able to track everything that happened with respect to that service request.
I assume you have a separate system for entering Service Requests and you want to continue using that. I'm also assuming that you are using the Agile process template in TFS (http://msdn.microsoft.com/en-us/library/dd997897.aspx), but this should also work with the Scrum process template.
I would not suggest creating a custom work item for Service Requests, but rather just adding a new field to your user story/bug and naming it "SR work number". Creating custom work items, and even adding new fields (adding a field is less painful), is not recommended unless you really need it, as it becomes painful when you want to upgrade or migrate your project. You can find out how to customize work items at the link below:
http://tedgustaf.com/blog/2011/1/how-to-customize-tfs-2010-work-items-and-workflows/
Based on the info you provided, I can suggest the following workflow. This might be too much for your needs; if that's the case, you can skip creating the user story or bug and directly create tasks.
Workflow:
1) Your helpdesk team creates a Service Request (in a different system), which generates a Service Request number.
2) The Helpdesk/Product/Dev team decides whether it's a new feature or a bug in existing code. Based on that, they create a User Story (for a new feature) or a Bug work item in TFS.
3) Tasks are child elements of a user story, so if you want to break the user story (feature) down into multiple tasks, create tasks as its children.
4) You enter the Service Request number in the new field you created for it. You can also use the field later for reporting purposes (see the query sketch below).
5) When developers check in code, they link it to the appropriate user story/bug/task.
I wouldn't suggest #2, #3 and #4 for the same reasons you mentioned.
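To show what the reporting in step 4 can look like: recent TFS versions expose work item queries (WIQL) over REST. This is only a sketch; the collection URL, credentials, and especially the field reference name Custom.SRNumber are placeholders for whatever your customized field is actually called.

    # Sketch: find all work items carrying a given SR number via WIQL.
    # Assumes a TFS version with the REST API; the URL, credentials, and
    # the "Custom.SRNumber" field reference name are placeholders.
    import requests

    COLLECTION = "http://tfs.example.com:8080/tfs/DefaultCollection"
    wiql = {
        "query": (
            "SELECT [System.Id], [System.Title], [System.State] "
            "FROM WorkItems "
            "WHERE [Custom.SRNumber] = 'SR-4711'"  # hypothetical SR number
        )
    }

    resp = requests.post(
        f"{COLLECTION}/_apis/wit/wiql?api-version=1.0",
        json=wiql,
        auth=("DOMAIN\\user", "password"),  # or a personal access token
    )
    resp.raise_for_status()
    for item in resp.json()["workItems"]:
        print(item["id"], item["url"])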

How to divide up a complex workflow into a template of Sub Tasks?

Our company is moving over to Jira for all project management and issue resolution.
We have a few major uses that I am trying to build templates for. One is the typical issue that is found and fixed, which can easily be handled as a single issue with basically the included Jira workflow.
A more complex one follows a waterfall workflow: requirements are gathered, including an estimate; then development kicks off, and in parallel test scripts are written; after development is done, the project is tested and handed off to the client; finally, once all is tested, we release the change and re-test. In total I have 30 different steps spread across 5 sub-tasks (however, this is all just mapped out in Visio and not actually in Jira yet).
I hope the splitting across sub-tasks can accomplish two things. First, we want to track open/close times and effort (hours worked and days needed). Second, the workflow should be split across multiple people, so the developer can work while a tester builds the testing plan. That saves a few days, but is not a deal breaker.
So, a few questions that I hope can help make this possible. I am quite new to the various add-ons for Jira, so I have no idea if we will get everything we want.
1) Is there an add-on that builds templates of sub-tasks? Each sub-task needs its own workflow, and currently Jira assigns a workflow based on Project + Issue Type. So I believe I can get the proper piece of the workflow assigned to each sub-task by creating many issue types, like "Custom-Dev-Analysis" for the sub-task called Analysis.
2) Is it possible to have only one (or a few) of the sub-tasks be the "current" one? When the issue starts, the first sub-task should be the only one being worked on, with only one of its steps assigned to someone. After sign-off there should be two active sub-tasks: the development one and building test scripts. But all 5 sub-tasks should not be started from the very beginning, which seems to be what Jira will do. I have looked at the add-on "Structure", and while it has unlimited hierarchy, I do not think it will open the sub-tasks up in order. There might be a simple way to make the workflow open the next task (I am very new to workflows and trying to learn as much as possible before messing with our site).
3) If anyone can think of some way to do what I need differently, I am all ears.
Thanks!
I don't know of any plugin that does everything you asked for, but I had to deal with similar issues and managed to sort most of them out with the Jira Scripting Suite, though it did require some development (using Python).
It's easy to add scripts to your workflow transitions that create or close a new issue or sub-task. I use it to create sub-tasks just by filling in some required fields on one of the issue's screens. After the sub-tasks are created, only the automated scripts can close the parent issue, and that happens by closing all of the sub-tasks.
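The Jira Scripting Suite runs those scripts inside JIRA itself, but to show the shape of the operation, here is the equivalent sub-task creation done from outside through the standard REST API. A sketch only: the server URL, credentials, project key, parent key, and sub-task type name are all placeholders.

    # Sketch: create a sub-task under an existing issue via JIRA's REST API.
    # Server, credentials, keys, and type names are placeholders.
    import requests

    BASE = "https://jira.example.com"
    payload = {
        "fields": {
            "project": {"key": "PROJ"},
            "parent": {"key": "PROJ-123"},      # the issue being broken down
            "summary": "Analysis",              # e.g. one step of the template
            "issuetype": {"name": "Sub-task"},  # or your custom sub-task type
        }
    }

    resp = requests.post(f"{BASE}/rest/api/2/issue",
                         json=payload, auth=("user", "secret"))
    resp.raise_for_status()
    print("created", resp.json()["key"])

Looping over a list of template steps would give you the "template of sub-tasks" from question 1.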
If this kind of solution suits you, I will be happy to help with any further inquiries.
JIRA doesn't support nested workflows, but one useful thing to remember is that if you change the issue type of a JIRA issue, it can get a different workflow. So an issue could start as TEST-123, a Requirement; then, after it reaches the end of its workflow, it could be moved to the Task issue type.
Its sub-tasks should stay as they were.

Release timetables in Agile Environments [closed]

We are currently utilising an agile environment at work. One of my tasks involves setting up a release timetable. Part of this is providing a time frame of how long a project would take to go from a development environment to staging and then live.
I have conflicting thoughts regarding whether such a timetable needs to be done.
For a start, we are quickly moving into a Continuous Integration / Constant Delivery environment where an application is tested amongst all environments when a change is made to the code base. Therefore, there is no time frame, but things should be "just" deployable. (Well, we always need a little bit of contingency as the best laid plans can always go awry)
Can anyone steer me in the right direction on the best way to handle such timetables and timeframes, if they are needed, in release management in an agile product development environment?
Regards,
Steve
Can anyone steer me in the right direction on the best way to handle such timetables and timeframes, if they are needed, in release management in an agile product development environment?
First of all, the Scrum framework never tells you not to have a release plan or timetable. What is leading you to have conflicting thoughts? I would like to know the source of this conflict.
The best way to create a release plan is like this (it may take a week or so, depending on the size of your project):
Get the stakeholders in a room and get an EPIC user story written on the board with their guidance. The EPIC user story should include the end-product vision. (Ignore if already done.)
List out the types of users. (Ignore if already done.)
Break the EPIC user story into smaller and smaller user stories until they are small enough to be doable in sprints. (Ignore if already done.)
Ask the Product Owner(s) of the Scrum Team(s) to prioritize the stories in the uncommitted backlog list(s). Also do some form of effort estimation fairly quickly; do not waste a lot of time estimating.
Get the target end date or go-live date of the project from the stakeholders.
Divide the time frame from now until the end date into releases. Ask the stakeholders which features need to be delivered by when, and include the appropriate user stories in each release. You can also give the releases themes if needed.
The release plan is now conceptualized.
After this, draw it on a whiteboard or put it in a visible, transparent location where everyone can see it, and add user story cards to the appropriate release.
Now your initial release plan should be ready.
Ideas for implementation:
Form a Scrum team specifically for operations activities. They could follow Scrum, though Kanban would be better.
As and when the development teams put "shippable product" on the shelf, the operations Kanban team can do the deployment, release branching, and similar tasks as per the release plan.
This way the development teams don't really focus on the release plan or release work; the operations team does that. The development teams just focus on the sprint work, and it is the Product Owner's headache to make sure the right user stories are in the right release and in the right order, with direction coming from the stakeholders.
To be honest, you really don't have to do much of this yourself; it's all in the stakeholders' and POs' hands, so I don't see what the fuss is about.
I hope you get the picture.
I usually maintain a release plan for management that is based mainly on a combination of the estimated and prioritized user stories (I group them to match the main new features of the product) and velocity.
With a well-maintained product backlog it's pretty easy to do your release plan. I usually plan three to four releases a year.
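As a back-of-the-envelope sketch of that kind of plan (all numbers here are invented):

    import math

    # Rough release projection from backlog size and measured velocity.
    backlog_points = 240   # estimated, prioritized stories for the release
    velocity = 30          # points per sprint, measured over recent sprints
    sprint_weeks = 2

    sprints_needed = math.ceil(backlog_points / velocity)
    print(f"~{sprints_needed} sprints, ~{sprints_needed * sprint_weeks} weeks to release")

Re-run it with the measured velocity after every iteration and the date firms up on its own.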
What I like about Scrum is that I can potentially release after each iteration.
If you want to master your release management, you will need more than a few answers from practitioners; I highly suggest this book.
If you are currently utilising an agile environment, you should check the book Agile Estimating and Planning for suggestions. It also contains a small chapter about release planning.
Some release planning should always be done. A release is a target which usually covers 3-12 months of development, i.e. a set of iterations. It describes the target criteria for the project to succeed, usually as a combination of expected features and a date. The features in this case are usually not individual user stories but epics or whole themes, because you don't know all the user stories several months ahead. Personally, I think a release is something that says when the project, based on the vision, can be delivered. It takes the high-level expectations and constraints from the vision and converts them into an estimate. You can also divide a project into several releases.
But remember that the three forces work in agile as well. There is a direct relation among feature set, release date, and resources (plus a sometimes-mentioned fourth force: quality). Pushing on one of these forces always moves the others. It is usually modelled as an equilateral triangle (or square).
There are different approaches to planning a release. One is mentioned in the book: it is based on user story estimation, iteration length selection, and velocity estimation. I'm a little bit sceptical of this approach, because you don't have simple user stories for a whole release, and estimating epics and themes is inaccurate. On the other hand, a high-level feature definition is exactly what you need for the three forces: if you don't have enough time, you implement only the basic features from all themes; if you have more time, you implement more advanced features. It is the product owner's task to set business priority correctly when dividing epics and themes into small user stories.
The most important part of agile is that you will know more quite soon. After each iteration you will have a better knowledge of your velocity, and you will also re-estimate some planned user stories. For this reason I think the real (accurate) estimate and release date should be planned after a few iterations. As I was told in one training: effort should not be estimated, effort should be measured. If anybody complains about that, show them waterfall and ask when they would get a relatively accurate estimate there. Hint: hardly before the end of analysis, which is, say, 30% of the way into the project.
It is also important what type of projects you want to implement using agile/Scrum and how long each project will be. Some projects are strictly budget- or date-driven; others can be more feature-driven. This can affect your release planning. For short projects you usually have small user stories, so you can provide a much more accurate estimate at the beginning.
This is a very loaded question, and depends on your company to be sure. I first have to ask, why are you using 3 environments and continuous integration (your reason matters)? Are you performing automated tests at all? How are your code branches setup? Do you release for some functionality, or just routine maintenance fixes?
Answering these will give you an idea of why you need a release, and how you should go about it.
For example, if you only have a staging environment for the purpose of integration and you perform automated tests, couldn't a separate code branch in which continuous integration tests run be sufficient?
If staging is to perform some sort of user acceptance, does your company have a dedicated testing team or are they members of the agile teams?
As you correctly stated, if the code is always integrated and tested, then why would you need a timetable and moving from environment to environment unless you were unsure about the actual "done" condition of the features? By that, I mean that it's not that you're unsure that the feature was coded correctly, but are you worried it will introduce other bugs? Will it integrate well with code already in production? Address the concerns at the root of the problem. Don't just do it because you think you're supposed to have X environments or testing should be in another group. Maybe the solutions to those problems may be to adjust the definition of "done" accordingly.
As you can see there are many, many factors that will make your organization unique. There is no one right way to answer this, just tradeoffs that you are willing to accept.
I find that having multiple environments with teams of people working at the various layers tends to be anti-agile and counterproductive. The best bet is to analyze your concerns, and try to find ways to solve them (such as expanding the definition of "done", or breaking up the various organizations and putting them on the teams, eliminating as many environments as possible and simplifying the process, etc). That may not be possible in your organization, so you may have to live with tradeoffs.

Speccing out new features

I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, auto-deployment, and bug logging in Trac, and I am now moving on to dealing with changes.
I have a list of about 20 changes to our product to be done over the next 2 months. Normally I would just spec out each change in detail, but I am curious how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product (whether it's deployed to multiple clients in the same way or to a single client as a bespoke project), the impact and dependencies the changes have on the system and on each other, and the priority with which the changes need to be made.
I would look at priority and dependency first; that will naturally start grouping things.