How does design fit into the Kanban method? Should there be a dedicated column for design, or is the high- and low-level design of the story done before it even moves from the story queue into WIP?
There is no single answer. It all depends on how a team works. What you basically do with a Kanban board is map the way you work onto the board, which means the answer to your question lies within the process you follow when you build software.
To help you a bit, here are the rules you should follow when you work with Kanban:
You should visualize the workflow, which means every stage that is in some way separate in real life should be represented as such on the board.
Your policies should be explicit, which means everyone on the team should understand the process the same way. If a sticky note is in a specific column, that should mean the feature is in a specific, agreed-upon state, and this should be obvious to everyone on the team.
Additionally, a separate column is typically introduced when you deal with handoffs, i.e. when a feature is handed off from one team member to another, e.g. separate columns for development (done by a developer) and testing (done by a quality engineer).
When it comes to design, I've seen different approaches: a single column for all the design work, several columns when design is extensive, or no dedicated column at all when design is merged with development. Ask yourself, and your team, how it works in your case and you will have the answer to how the board should look.
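To make that concrete, here is a minimal sketch in Python (purely illustrative; the column names, policies, and WIP limits are my assumptions, not prescriptions) showing two of those options: a dedicated design column versus design merged into development.

```python
# Purely illustrative: two ways to map design work onto a Kanban board.
# Column names, WIP limits, and policies are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Column:
    name: str
    policy: str               # the explicit policy: what "in this column" means
    wip_limit: int | None = None

# Option 1: a dedicated design column (design handed off before development).
design_column_board = [
    Column("Backlog", "Prioritized, not yet started"),
    Column("Design", "High/low-level design in progress", wip_limit=2),
    Column("Development", "Design agreed; coding in progress", wip_limit=3),
    Column("Testing", "Code complete; QA in progress", wip_limit=2),
    Column("Done", "Deployed and accepted"),
]

# Option 2: no dedicated column; design is merged with development.
merged_board = [
    Column("Backlog", "Prioritized, not yet started"),
    Column("In Progress", "Design and coding happen together here", wip_limit=4),
    Column("Testing", "Code complete; QA in progress", wip_limit=2),
    Column("Done", "Deployed and accepted"),
]

for column in design_column_board:
    print(f"{column.name}: {column.policy} (WIP limit: {column.wip_limit})")
```

The point of writing the policy next to each column is exactly the "explicit policies" rule above: the board design and the meaning of each state are agreed on together.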
And one final piece of advice: don't be afraid of experimenting. If you aren't sure how you work, which is entirely possible, try different things and check which board design works best in your case.
With the usage of DevOps boards/sprints/Kanban, I'm curious what the industry standard is for handling usage by multiple stakeholders.
More specifically, DevOps comes with standard states like "Active/Resolved/etc.", but what if we want to capture more, such as "Deployed to env X", or even testing statuses? I know we can add additional statuses, but is that the right thing to do?
The challenge we have is that if the Kanban board is used, then all those statuses must be represented on the board, because movement on that board actually sets the status; so if we remove a status column from the board, any work item with that status would effectively be hidden.
So, effectively, I've added a bunch of statuses to cater for environments, testing, different reasons for on-hold, and so on, but it's being argued that this makes the Kanban board too complex to get an easy snapshot of what's happening.
It's been suggested that we remove all these statuses and instead use Tags and Swimlanes, which seems much less robust to me: it accomplishes the same thing, but without the single source of truth that the status provides.
Just curious if anyone with more experience with DevOps can shed light on how they've tackled this.
It looks like you want a feature which is not implemented: allow multiple states in the Kanban board columns.
Actually, TFS allows only one state for every column on the Kanban board, which makes the whole development process messy. We had to create several columns for all the states that our dev team has. It would be great if TFS could allow assigning multiple states to a column, along with Doing and Done sub-columns, to reduce the number of columns in the whole process.
There is no direct way to overcome this and get what you need.
As a workaround, you can create a custom field with a pick list and then configure the Kanban board to display this field on the cards.
Maybe this helps you a bit.
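To illustrate the idea behind that workaround (a purely hypothetical Python sketch, not the TFS API; the field values, states, and work items are invented), the board keeps one coarse state per column while the finer-grained detail lives in a custom pick-list field shown on the card, so nothing gets hidden:

```python
# Hypothetical sketch: one coarse board state per work item, plus a
# custom pick-list field carrying the finer-grained detail.
from collections import defaultdict

work_items = [
    {"id": 101, "title": "Login fix", "state": "Resolved", "environment": "Deployed to Staging"},
    {"id": 102, "title": "Search API", "state": "Active",   "environment": "Deployed to Dev"},
    {"id": 103, "title": "Audit log",  "state": "Resolved", "environment": "Deployed to Prod"},
]

# The board column is driven by 'state' alone, so removing an environment
# value never hides a work item; the pick-list value is just card detail.
board = defaultdict(list)
for item in work_items:
    board[item["state"]].append(f"#{item['id']} {item['title']} [{item['environment']}]")

for state, cards in board.items():
    print(state, "->", cards)
```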
Background:
JIRA offers a single set of statuses for all types of issues in a project.
Problem:
The problem is that the status set for a Task is To Do, In Progress, and Done, while for a User Story in the same project it might be Designing, Developing, Testing, Releasing, and Done. It can even be different for a Bug or an Epic.
Question:
How do you keep track of the workflow of your product and at the same time manage the status of your tasks using JIRA's single set of statuses?
PS: I know they can be customized for each project, but it doesn't help because you can't customize them for each issue type separately.
I think one of the reasons that JIRA offers the To Do, In Progress, and Done is that these can apply to anything. You either haven't done it, you're doing something, or you finished. That set can apply to any type of item.
That being said, I feel your pain in wanting a better view into the true state of an issue. What we have found works for our OnDemand agile boards is to set up something like the following:
To Do
In Progress
Ready for Review
In Review
Done
For most types of issues, this can work. It adds that extra bit of layering needed to identify what is ready for testing.
One of the things that is tricky is dependent tasks. For example, I noticed you mentioned "Designing" as a stage, and I'm not sure this makes sense in an agile context. If the design is emerging from the development, it may be better to allow the design/development to flow within the development team. However, we all know that sometimes you need to get some details ironed out before you can proceed, or there may be some people who need to become involved before a dev can proceed. We made the mistake of trying to turn this into a stage, but what we found was that it was really either a sub-task for part of the team, or an impediment (blocker). By flagging stories, you can identify that a story requires something to be done before the development team can proceed.
If you are using Kanban, and not a Scrum board, the sub-task approach will not be for you. In those cases, you'll just need to make sure you have stages that make sense for all the issues you create. Stages will have to be fairly 'generic'. This sounds bad.
But it is not!
I believe teams generally use the stages for a few reasons:
Checking on the status of an iteration
Informing other team members that they can pick up an item
Getting a visual estimate of how close to Done an issue is
More stages don't necessarily give you a better read on the status of an iteration, as you really just need to see how many points you've closed and how many are in progress. So, at least for that goal, a more generic set of stages should work.
As for informing team members, too often I've seen teams retreat to the digital board to replace communication with each other. The fewer stages you have, the more you can force your team to talk to each other and work together to get a story to done. Things will work better this way, I guarantee it! Having a bit of a break-down helps, especially if you are working on a lot of items at once or have distributed teams working in different time zones, but keeping it simple is usually better.
Tracking the "how close to Done" is the hardest to do with generic stages. However, the multiple stages can be misleading. An item that is almost all the way across might have a severe bug in it that hasn't been found yet, so no matter how many stages you have your view on this item isn't any more accurate than a single "In Progress" stage. It isn't Done until it's Done :)
This was a long way for me to recommend keeping your workflow simple and letting your team use communication to keep on top of things. Maybe I should have just started with that!
The statuses that are available to each project are determined by the Workflow to which it is assigned. Not only does a workflow define the statuses, it also defines which statuses you can progress to from a particular status. You can either create your own workflows or download predefined workflows that suit your needs.
In order to have separate workflows for different issue types, we need to define a Workflow Scheme:
1- Go to Jira Administration -> Workflow Schemes
2- Edit the Workflow Scheme that is assigned to your project
3- Click "Add Workflow" to add a new workflow, select the issue types for which you need a different workflow, and assign them to it.
We are currently utilising an agile environment at work. One of my tasks involves setting up a release timetable. A part of this is providing a time frame for how long a project would take to go from a development environment, to staging, and then live.
I have conflicting thoughts regarding whether such a timetable needs to be done.
For a start, we are quickly moving into a Continuous Integration / Continuous Delivery environment where an application is tested across all environments when a change is made to the code base. Therefore, there is no real time frame; things should be "just" deployable. (Well, we always need a little bit of contingency, as the best-laid plans can always go awry.)
Can anyone steer me in the right direction on the best way to handle such timetables and time frames, if they are needed, in release management in an agile product development environment?
Regards,
Steve
Can anyone steer me in the right direction on the best way to handle such timetables and time frames, if they are needed, in release management in an agile product development environment?
First of all, the Scrum framework never guides you not to have a release plan or timetable. What is leading you to have conflicting thoughts? I would like to know the source of this conflict.
The best way to create a release plan is like this (it may take a week or so, depending on the size of your project):
Get the stakeholders in a room and get an epic user story written on the board under their guidance. The epic user story should include the end-product vision. (Ignore if already done.)
List out the types of users. (Ignore if already done.)
Break the epic user story into smaller and smaller chunks of user stories until they are small enough to be doable in sprints. (Ignore if already done.)
Ask the Product Owner(s) of the Scrum Team(s) to prioritize the stories in the uncommitted backlog list(s). Also do some form of effort estimation fairly quickly; do not waste a lot of time estimating.
Get the target end date or go-live date of the project from the stakeholders.
Divide the time frame from now until the end date into releases. Ask the stakeholders which features need to be delivered by when, and include the appropriate user stories in each release. You can also give those releases themes if needed.
The release plan is now conceptualized.
After this, draw it on a whiteboard or put it in a visible, transparent location where everyone can see it, and add user story cards to the appropriate release.
Now your initial release plan should be ready.
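As a purely illustrative sketch (Python; the release names, themes, dates, and stories are invented for the example), the resulting plan is little more than a list of releases, each with a theme, a target date, and the stories the stakeholders assigned to it:

```python
# Illustrative only: a release plan as data. Names, dates, and stories
# are invented for the example.
from dataclasses import dataclass
from datetime import date

@dataclass
class Release:
    name: str
    theme: str
    target_date: date
    stories: list[str]

release_plan = [
    Release("R1", "Core checkout flow", date(2024, 3, 31),
            ["Browse catalogue", "Add to cart", "Pay by card"]),
    Release("R2", "Customer accounts", date(2024, 6, 30),
            ["Register", "Order history", "Saved addresses"]),
]

for r in release_plan:
    print(f"{r.name} ({r.theme}) due {r.target_date}: {len(r.stories)} stories")
```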
Ideas for implementation:
Form a Scrum team specifically for operations activities. They could follow Scrum, though Kanban would be better.
As and when the development teams put "shippable products" on the shelf, the operations Kanban team can do the deployment, release branching, and similar tasks as per the release plan.
This way the development teams don't really focus on the release plan or release work; the operations team does that. The development teams just focus on the sprint work. It is the Product Owner's headache to make sure the right user stories are in the right release and in the right order, with direction given by the stakeholders.
To be honest, you really don't have to do much yourself; it's all in the stakeholders' and Product Owners' hands, so I don't see what the fuss is about.
I hope you get the picture.
I usually maintain a release plan for management that is based mainly on a combination of the estimated and prioritized user stories (I group them to match the main new features of the product) and the team's velocity.
With a well-maintained product backlog it's pretty easy to build your release plan. I usually plan three to four releases a year.
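The arithmetic behind such a plan is simple. A back-of-the-envelope sketch in Python (the numbers are invented): divide the remaining story points by the measured velocity to get the number of sprints until the release.

```python
import math

# Invented numbers for illustration.
remaining_points = 180        # estimated, prioritized backlog for the release
velocity = 25                 # points the team completes per sprint, measured
sprint_length_weeks = 2

sprints_needed = math.ceil(remaining_points / velocity)
print(f"{sprints_needed} sprints (~{sprints_needed * sprint_length_weeks} weeks) to release")
# -> 8 sprints (~16 weeks) to release
```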
What I like about Scrum is that I can potentially release after each iteration.
If you want to master your release management, you will need more information than a few answers from practitioners; I highly suggest this book.
If you are currently utilising an agile environment, you should check the Agile Estimating and Planning book for some suggestions. It also contains a small chapter about release planning.
Some release planning should always be done. A release is a target which usually covers 3-12 months of development, i.e. a set of iterations. It describes the target criteria for the project to succeed, usually as a combination of expected features and some date. The features in this case are usually not user stories directly, but epics or whole themes, because you don't know all the user stories several months ahead. Personally, I think a release is something that says when the project, based on the vision, can be delivered. It takes the high-level expectations and constraints from the vision and converts them into an estimation. You can also divide a project into several releases.
But remember that the three forces work in agile as well. There is a direct relation among feature set, release date, and resources (plus a sometimes-mentioned fourth force: quality). Pushing one of these forces always moves the others. This is usually modelled as an equilateral triangle (or a square).
There are different approaches to planning a release. One is mentioned in the book. It is based on user story estimation, iteration length selection, and velocity estimation, but I'm a little sceptical of this approach, because you don't have simple user stories for the whole release, and estimating epics and themes is inaccurate. On the other hand, a high-level feature definition is exactly what you need for the three forces. If you don't have enough time, you will implement only the basic features from all themes; if you have more time, you will implement more advanced features. It is the product owner's task to set business priority correctly when dividing epics and themes into small user stories.
The most important part of agile is that you will know more quite soon. After each iteration you will have better knowledge of your velocity, and you will also re-estimate some planned user stories. For this reason I think the real (accurate) estimate and release date should be planned after a few iterations. As I was told in one training: effort should not be estimated, effort should be measured. If anybody complains about that, show them Waterfall and ask when they will get a relatively accurate estimate there. Hint: hardly before the end of analysis, which is, say, after 30% of the project.
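Measured velocity can also be turned into a forecast range rather than a single date. A sketch in Python (the per-sprint numbers are invented) using the slowest and fastest observed sprints as pessimistic and optimistic bounds:

```python
import math

# Invented measurements: points completed in the first few sprints.
measured_velocities = [18, 24, 21, 27]
remaining_points = 200

# Fastest sprint gives the optimistic bound, slowest the pessimistic one.
optimistic = math.ceil(remaining_points / max(measured_velocities))
pessimistic = math.ceil(remaining_points / min(measured_velocities))
print(f"Forecast: between {optimistic} and {pessimistic} more sprints")
# -> Forecast: between 8 and 12 more sprints
```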
It also matters what type of projects you want to implement using agile/Scrum and how long a project will be. Some projects are strictly budget- or date-driven; others can be more feature-driven. This can affect your release planning. For short projects you usually have small user stories, so you can provide a much more accurate estimate at the beginning.
This is a very loaded question, and it certainly depends on your company. I first have to ask: why are you using three environments and continuous integration (your reason matters)? Are you performing automated tests at all? How are your code branches set up? Do you release for new functionality, or just for routine maintenance fixes?
Answering these will give you an idea of why you need a release, and how you should go about it.
For example, if you only have a staging environment for the purpose of integration, and you perform automated tests, then couldn't a separate code branch in which continuous integration tests run be sufficient?
If staging is to perform some sort of user acceptance, does your company have a dedicated testing team or are they members of the agile teams?
As you correctly stated, if the code is always integrated and tested, then why would you need a timetable and movement from environment to environment, unless you were unsure about the actual "done" condition of the features? By that I mean: it's not that you're unsure the feature was coded correctly, but are you worried it will introduce other bugs? Will it integrate well with code already in production? Address the concerns at the root of the problem. Don't just do it because you think you're supposed to have X environments, or because testing should sit in another group. Maybe the solution to those problems is to adjust the definition of "done" accordingly.
As you can see there are many, many factors that will make your organization unique. There is no one right way to answer this, just tradeoffs that you are willing to accept.
I find that having multiple environments with teams of people working at the various layers tends to be anti-agile and counterproductive. The best bet is to analyze your concerns, and try to find ways to solve them (such as expanding the definition of "done", or breaking up the various organizations and putting them on the teams, eliminating as many environments as possible and simplifying the process, etc). That may not be possible in your organization, so you may have to live with tradeoffs.
We're currently researching different source control tools, and want to test each one with some light-weight but meaningful scenario to get a feel for the capabilities of each one.
Since terminology and internal logic vary wildly between tools, it would be nice to have the scenario expressed in terms of use cases ("We have to correct a bug on Release 1.3") rather than in potentially tool-specific terms ("Create a branch named Release 1.3").
It's true that different things are important for different teams, but it would be interesting to have some kind of canonical test case from which different scenarios could be picked. Or am I being too optimistic?
Is anyone aware of something like this? Have you used a similar approach when investigating source control tools?
These are the requirements that Mozilla had when they set out to evaluate version control systems for internal use in 2006. You might find a similar approach useful.
If you find scenarios that are specific to your company, perhaps you can translate them to requirements like the ones above.
The Google DVCS analysis offers some general criteria which can give you ideas.
But first you need to decide whether you want to evaluate:
CVCS (Centralized Version Control): update-merge-commit
DVCS (Distributed Version Control): commit-rebase/merge
For more on the different scenario to test (one answer for CVCS, one for DVCS), see the SO question:
" Describe your workflow of using version control (VCS or DVCS) "
You have to question things like: do you have only a single line of release/development, or do you create multiple releases in parallel? Not only is the mentioned scenario needed; you also need to think beyond it, e.g. about merging the changes back into the dev line, or into multiple other lines as well. This can influence the approach. The approach you selected sounds very good, because you try to understand the process rather than using tool terms. I've done this multiple times for customers of mine. Different teams and companies handle things differently, so the problem is to figure out what your process is (sometimes people are not aware of this).
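One way to keep the scenario tool-agnostic while still exercising each candidate is to script it once per tool. Here is a sketch in Python, using Git purely as one example candidate (the release and branch names are invented); to evaluate another tool, you would replace the command lists with its equivalents:

```python
# Sketch: drive the "correct a bug on Release 1.3" scenario against one
# candidate tool (Git here, purely as an example).
import os
import subprocess
import tempfile

def run(*cmd):
    subprocess.run(cmd, check=True)

os.chdir(tempfile.mkdtemp())

run("git", "init", "-b", "main")                 # -b requires Git >= 2.28
run("git", "config", "user.name", "Evaluator")
run("git", "config", "user.email", "eval@example.com")
run("git", "commit", "--allow-empty", "-m", "initial development")
run("git", "tag", "release-1.3")                 # Release 1.3 ships
run("git", "commit", "--allow-empty", "-m", "ongoing development")

# Use case: correct a bug on Release 1.3 ...
run("git", "checkout", "-b", "fix/release-1.3", "release-1.3")
run("git", "commit", "--allow-empty", "-m", "fix bug found in release 1.3")

# ... and merge the fix back into the development line.
run("git", "checkout", "main")
run("git", "merge", "--no-edit", "fix/release-1.3")
```

Running the same script (with swapped commands) against each tool gives you a like-for-like feel for how much ceremony the "fix a released version and merge back" use case costs in each one.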
I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, automated deployment, and bug tracking using Trac, and I am now moving on to deal with changes.
I have a list of about 20 changes to our product to be done over the next two months. Normally I would just spec out each change, going into detail about what should be done, but I am curious how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product: whether it is to be deployed to multiple clients in the same way or to a single client as a bespoke project, the impact and dependencies these changes have on the system and on each other, and the priority with which these changes need to be made.
I would look at priority and dependencies; that will naturally start grouping things.