We are currently utilising an agile environment at work. One of my tasks involves setting up a release timetable. Part of this is providing a time frame for how long a project takes to go from the development environment to staging and then to live.
I have conflicting thoughts regarding whether such a timetable needs to be done.
For a start, we are quickly moving into a Continuous Integration / Continuous Delivery environment where an application is tested across all environments whenever a change is made to the code base. Therefore, there is no time frame; things should "just" be deployable. (Well, we always need a little contingency, as the best laid plans can always go awry.)
Can anyone steer me in the right direction on the best way to handle such timetables and time frames, if they are needed, in Release Management in an Agile Product Development Environment?
Regards,
Steve
Can anyone steer me in the right direction on the best way to handle such timetables and time frames, if they are needed, in Release Management in an Agile Product Development Environment?
First of all, the Scrum framework never tells you not to have a release plan or timetable. What is leading you to have conflicting thoughts? I would like to know the source of this conflict.
The best way to create a Release Plan is something like this (it may take a week or so, depending on the size of your project):
Get the Stakeholders in a room and get an epic user story written on the board using their guidance. The epic user story should capture the end product vision. (Skip if already done.)
List out the types of users. (Skip if already done.)
Break the epic user story into smaller and smaller user stories until they are small enough to be doable in sprints. (Skip if already done.)
Ask the Product Owner(s) of the Scrum Team(s) to prioritize the stories in the uncommitted backlog list(s). Also do some form of effort estimation, fairly quickly; do not waste a lot of time estimating.
Get the target end date or Go Live date of the project from Stakeholders.
Divide the time frame from now until the end date into Releases. Ask the stakeholders which features need to be delivered by when, and include the appropriate user stories in each Release. You can also give the Releases themes if needed.
The Release Plan is now conceptualized.
After this, draw it on a whiteboard or put it in a visible, transparent location where everyone can see it, and add the user story cards to the appropriate Release.
Now your initial release plan should be ready.
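To make the result of these steps concrete, here is a minimal sketch of a release plan as data. It is only an illustration under assumed values: the story names, point estimates, release themes, target dates and capacities are invented, and the greedy fill is just one simple way to match the Product Owner's priority order to dated releases.

```python
# Hypothetical sketch only: story names, points, dates and capacities are
# invented examples, not values from the question.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Story:
    title: str
    points: int                 # rough effort estimate, not a commitment


@dataclass
class Release:
    theme: str
    target: date                # Go Live / delivery date agreed with stakeholders
    capacity: int               # points the team expects to finish by that date
    stories: list[Story] = field(default_factory=list)


def fill_releases(backlog: list[Story], releases: list[Release]) -> list[Story]:
    """Greedily place the highest-priority stories that still fit into each release.

    `backlog` is assumed to be in the Product Owner's priority order.
    Returns the stories that did not fit into any planned release.
    """
    remaining = list(backlog)
    for release in releases:
        used, leftover = 0, []
        for story in remaining:
            if used + story.points <= release.capacity:
                release.stories.append(story)
                used += story.points
            else:
                leftover.append(story)
        remaining = leftover
    return remaining


backlog = [Story("User registration", 8), Story("Checkout", 13), Story("Reporting", 20)]
plan = [Release("MVP", date(2014, 6, 1), 25), Release("Reporting & polish", date(2014, 9, 1), 25)]
unscheduled = fill_releases(backlog, plan)

for release in plan:
    print(release.theme, release.target, [s.title for s in release.stories])
print("unscheduled:", [s.title for s in unscheduled])
```

The point is simply that a release is a dated bucket with a rough capacity, and the priority order decides what goes into each bucket first; the real plan lives on the whiteboard, not in code.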
Ideas for implementation:
Form a team specifically for operations activities. They could follow Scrum, though Kanban would be a better fit.
As and when development teams put "shippable products" on the shelf, the operations Kanban team can do the deployment, release branching and other such tasks as per the Release Plan.
This way the development teams don't really focus on the release plan or release work; only the operations team does that. The development team just focuses on the sprint work, and it is the Product Owner's headache to make sure the right user stories are in the right Release and in the right order. The direction comes from the Stakeholders.
To be honest, you really don't have to do much yourself; it's all in the stakeholders' and Product Owners' hands, so I don't see what the fuss is about.
I hope you get the picture.
I usually maintain a release plan for management that is mainly based on a combination of the estimated and prioritized user stories (I group them to match the main new features of the product) and velocity.
With a well maintained product backlog it's pretty easy to do your release plan. I usually plan three to four releases a year.
What I like about Scrum is that I can potentially release after each iteration.
If you want to master your release management, you will need more information than a few answers from practitioners. I highly suggest this book.
If you are currently working in an agile environment, you should check the book Agile Estimating and Planning for some suggestions. It also contains a small chapter about release planning.
Some release planning should always be done. A release is a target which usually covers 3-12 months of development, i.e. a set of iterations. It describes the success criteria for the project, usually expressed as a combination of expected features and a date. The features in this case are usually not user stories directly but epics or whole themes, because you don't know all the user stories several months ahead. Personally, I think a release is something that says when the project, based on the vision, can be delivered. It takes the high-level expectations and constraints from the vision and converts them into an estimate. You can also divide a project into several releases.
But remember that the three forces work in agile as well. There is a direct relation among feature set, release date and resources (plus a sometimes-mentioned fourth force: quality). Pushing one of these forces always moves the others. This is usually modelled as an equilateral triangle (or a square).
There are different approaches to planning a release. One is mentioned in the book: it is based on user story estimation, iteration length selection and velocity estimation. I'm a little bit skeptical of this approach because you don't have simple user stories for the whole release, and estimating epics and themes is inaccurate. On the other hand, high-level feature definition is exactly what you need for the three forces. If you don't have enough time, you will implement only the basic features from all themes; if you have more time, you will implement more advanced features. It is the product owner's task to correctly set business priority when dividing epics and themes into small user stories.
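For what that velocity-based arithmetic looks like, imperfect as it is, here is a tiny illustration with invented numbers (backlog size, velocity range, sprint length and start date are all assumptions, not figures from the question). The only point is that coarse estimates give you a date range, not a single date, and the range narrows as measured velocity replaces the guess.

```python
# Illustrative numbers only: backlog size, velocity range, sprint length and
# start date are invented, not taken from the question.
import math
from datetime import date, timedelta

backlog_points = 240                      # summed estimates of the epics/themes in scope
velocity_low, velocity_high = 18, 25      # points per sprint, guessed or measured so far
sprint_length = timedelta(weeks=2)
start = date(2014, 1, 6)

# Forecast a range rather than a single date, because epic-level estimates are coarse.
best_case = math.ceil(backlog_points / velocity_high)    # sprints needed if fast
worst_case = math.ceil(backlog_points / velocity_low)    # sprints needed if slow

print("earliest finish:", start + best_case * sprint_length)
print("latest finish:  ", start + worst_case * sprint_length)
```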
The most important part of agile is that you will know more quite soon. After each iteration you will have better knowledge of your velocity, and you will also re-estimate some planned user stories. For this reason I think the real (accurate) estimate and release date should be set after a few iterations. As I was told in one training: effort should not be estimated, effort should be measured. If anybody complains about this, show them waterfall and ask when they would get a relatively accurate estimate there. Hint: hardly before the end of analysis, which comes after, say, 30% of the project.
It is also important what type of projects you want to implement using agile/Scrum and how long the project will be. Some projects are strictly budget- or date-driven; others can be more feature-driven. This can affect your release planning. For short projects you usually have small user stories and can provide a much more accurate estimate at the beginning.
This is a very loaded question, and depends on your company to be sure. I first have to ask, why are you using 3 environments and continuous integration (your reason matters)? Are you performing automated tests at all? How are your code branches setup? Do you release for some functionality, or just routine maintenance fixes?
Answering these will give you an idea of why you need a release, and how you should go about it.
For example, if you only have a staging environment for the purpose of integration and perform automated tests, then can't having a separate code branch in which continuous integration tests run be sufficient?
If staging is to perform some sort of user acceptance, does your company have a dedicated testing team or are they members of the agile teams?
As you correctly stated, if the code is always integrated and tested, then why would you need a timetable and moving from environment to environment unless you were unsure about the actual "done" condition of the features? By that, I mean that it's not that you're unsure that the feature was coded correctly, but are you worried it will introduce other bugs? Will it integrate well with code already in production? Address the concerns at the root of the problem. Don't just do it because you think you're supposed to have X environments or testing should be in another group. Maybe the solutions to those problems may be to adjust the definition of "done" accordingly.
As you can see there are many, many factors that will make your organization unique. There is no one right way to answer this, just tradeoffs that you are willing to accept.
I find that having multiple environments with teams of people working at the various layers tends to be anti-agile and counterproductive. The best bet is to analyze your concerns, and try to find ways to solve them (such as expanding the definition of "done", or breaking up the various organizations and putting them on the teams, eliminating as many environments as possible and simplifying the process, etc). That may not be possible in your organization, so you may have to live with tradeoffs.
I am looking for a solution for 20 Scrum teams on how to push code through the different environments:
Dev (where developers can code and run unit tests)
SIT (integration with stubbed services)
QA (where QA testing happens, with real integration points, no stubs, currently maintained by a separate team, so that they keep track of what's going in)
Stage (similar to Live, with sensitive data, maintained by a separate team)
Live (that's the live game)
The sticky point here is that many teams would be trying to push to SIT at once; deployments could take time and create bottlenecks. Also, we need to ensure that our code works well with real integration points (the QA environment).
Also, with respect to Scrum, when should we call a user story Done: when we push to SIT, or to QA?
I'm sure this has been asked before but I couldn't find the exact terminology, so feel free to point me to it.
EDIT: it's a brand new product, clean slate, no code or pipelines as of yet.
OK, your exact question was: When do you call a User Story done? In Scrum, it is Done when it is potentially shippable, so in your setup: Stage.
Now, I expect that will sound unrealistic to you and your team. And that is because you have a number of snags in your process that you'll have to solve to really accomplish CI/CD and to have potentially releasable code in a sprint:
Continuous Integration. I don't mean the server and platform/tool you use; I mean actually integrating everyone's code on every check-in. If you've got 20 teams who don't do this today, they aren't going to suddenly start tomorrow. As soon as you try, you're going to run into all kinds of practice, process, and architectural challenges, and you'll need to work through those in order to achieve this. My best suggestion is to start by having teams in common areas continuously integrate with each other, then start breaking down the barriers between those groups. If even that is too much, maybe just have each individual team integrate multiple times a day as a start. Honestly, the rest of the steps aren't very relevant if you haven't got this down (a small sketch of what this means for an individual developer follows this list).
Testing is something that happens elsewhere: at a different stage, in a different environment, and probably with a different team. This is a problem for two reasons. First, if testing happens after the story is called done, it reinforces the idea that the job of the team is to write code, not to create working, usable functionality. Second, those bug reports are going to come back, and then stuff that was done and integrated has to be reworked and re-integrated. If integration was painful before, this just adds a multiplier onto it.
Do you have cross-functional teams working on increments of value? It's a bit of a stretch for me to guess here, but service stubs and difficult integrations are often signs that different teams are working on different components. This creates a lot of opportunity for misalignment that can exacerbate your challenges.
Ok, last one. You have whole teams maintaining environments. That's a big red flag. It means your system is either extremely complicated, or people are leaving a lot of loose ends, or both. If you have to build whole teams around synchronizing other teams, you may be putting a band-aid on your problem. Your environment should be predictable and stable. That means most tasks regarding your environment should be automatable, and then the other teams can do the odd task that isn't.
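Picking up the continuous-integration point above: as a rough illustration of what "integrating on every check-in" can mean for a single developer, here is a sketch of a local gate. It assumes a Git repository with an origin/main mainline and a pytest test suite; none of those details come from the question, so substitute whatever your teams actually use.

```python
# Sketch of a per-check-in integration gate, assuming git, an "origin/main"
# mainline and a pytest test suite; adjust names and commands to your setup.
import subprocess
import sys


def run(*cmd: str) -> int:
    """Run a command, echoing it first, and return its exit code."""
    print("+", " ".join(cmd))
    return subprocess.call(cmd)


def integrate_and_test(mainline: str = "origin/main") -> int:
    # Pull in everyone else's latest work before calling your own change done.
    if run("git", "fetch", "origin") != 0:
        return 1
    if run("git", "merge", "--no-edit", mainline) != 0:
        print("Merge failed: resolve the conflict now, while the change is small.")
        return 1
    # The integration only counts if the combined code passes the tests.
    return run("pytest", "-q")


if __name__ == "__main__":
    sys.exit(integrate_and_test())
```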
This probably isn't the answer you were hoping for, but these are likely the challenges you'll have to tackle to get to your goal.
Background:
JIRA offers a single set of statuses for all types of issues in a project.
Problem:
The problem is that the status set for a Task is To Do, In Progress, and Done, while for a User Story in the same project it might be Designing, Developing, Testing, Releasing, and Done. It can even be different for a Bug or an Epic.
Question:
How do you keep track of the workflow of your product and at the same time manage the status of your tasks using a single set of JIRA statuses?
PS: I know they can be customized for each project, but it doesn't help because you can't customize them for each issue type separately.
I think one of the reasons that JIRA offers the To Do, In Progress, and Done is that these can apply to anything. You either haven't done it, you're doing something, or you finished. That set can apply to any type of item.
That being said, I feel your pain in wanting a better view into the true state of an issue. What we have found works for our OnDemand agile boards is to set up something like the following:
To Do
In Progress
Ready for Review
In Review
Done
For most types of issues, this can work. It adds an extra layer that makes it easy to identify what is ready for testing.
One of the things that is tricky is dependent tasks. For example, I noticed you mentioned "Designing" as a stage, and I'm not sure this fits well in an agile approach. If the design emerges from the development, it may be better to let design and development flow together within the development team. However, we all know that sometimes you need to iron out some details, or some people need to become involved, before a dev can proceed. We made the mistake of trying to turn this into a stage, but what we found was that it was really either a sub-task for part of the team or an impediment (blocker). By flagging stories, you can identify that a story requires something to be done before the development team can proceed.
If you are using Kanban, and not a Scrum board, the sub-task approach will not be for you. In those cases, you'll just need to make sure you have stages that make sense for all the issues you create. Stages will have to be fairly 'generic'. This sounds bad.
But it is not!
I believe teams generally use the stages for a few reasons:
Checking the status of an iteration
Informing other team members that they can pick up an item
Getting a visual estimate of how close to Done an issue is
More stages don't necessarily give a better view of an iteration's status, as you really just need to see how many points you've closed and how many are in progress. So, at least for that goal, a more generic set of stages should work.
As for informing team members, too often I've seen teams retreat to the digital board to replace communication with each other. The fewer stages you have, the more you can force your team to talk to each other and work together to get a story to done. Things will work better this way, I guarantee it! Having a bit of a break-down helps, especially if you are working on a lot of items at once or have distributed teams working in different time zones, but keeping it simple is usually better.
Tracking the "how close to Done" is the hardest to do with generic stages. However, the multiple stages can be misleading. An item that is almost all the way across might have a severe bug in it that hasn't been found yet, so no matter how many stages you have your view on this item isn't any more accurate than a single "In Progress" stage. It isn't Done until it's Done :)
This was a long way for me to recommend keeping your workflow simple and letting your team use communication to keep on top of things. Maybe I should have just started with that!
The statuses that are available to each project are determined by the workflow to which it is assigned. Not only does a workflow define the statuses, it also defines which statuses you can progress to from a particular status. You can either create your own workflows or download predefined workflows that suit your needs.
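To make "which statuses you can progress to" concrete, here is a toy sketch of a workflow as a transition map, using the example statuses from the earlier answer. It only illustrates the concept; real JIRA workflows are configured through the administration screens, not in code.

```python
# Toy illustration of a workflow: each status lists the statuses it may move to.
# The statuses are the example ones discussed above; real JIRA workflows are
# configured in the admin UI, not like this.
ALLOWED_TRANSITIONS = {
    "To Do": {"In Progress"},
    "In Progress": {"Ready for Review", "To Do"},
    "Ready for Review": {"In Review"},
    "In Review": {"In Progress", "Done"},   # review can send work back
    "Done": set(),
}


def transition(current: str, target: str) -> str:
    """Return the new status, or raise if the workflow forbids the move."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"workflow does not allow {current!r} -> {target!r}")
    return target


status = "To Do"
status = transition(status, "In Progress")
status = transition(status, "Ready for Review")
print(status)                               # Ready for Review
# transition(status, "Done") would raise: it must go through In Review first.
```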
In order to have separate workflows for different issue types, we need to define a Workflow Scheme:
1- Go to Jira Administration -> Workflow Schemes
2- Edit the Workflow Scheme that is assigned to your project.
3- Click "Add Workflow" to add a new workflow for the issue types that need a different one, and assign those issue types to it.
What kinds of software development projects can Kanban be used for, and what are the requirements to implement it? I have been reading a lot about Kanban and how great it is. But now I have to write a paper about it that focuses on the requirements for Kanban, and especially on what kinds of projects Kanban doesn't fit. I couldn't figure that out yet.
KarlM gave a good overview.
I think Kanban can be used in any project, because it takes your existing process and visualizes it, introduces WIP (work-in-progress) limits to curb multitasking, and uses pull to maximize flow and minimize lead time. My team recently migrated to Scrum and it's been a very smooth transition so far.
Kanban is especially good for situations in which a standard iteration doesn't make sense.
For example, you might not have frequent releases. Maybe you want to decouple one or more of your planning, demo, retrospective, or release schedules.
Good examples:
Maintenance project. Even though you might want to have 2-week (or whatever) meetings to discuss priority, retrospectives, etc., you're probably not going to demo or release every 2 weeks, and it's not likely you'll be able to commit to everything in the next 2 weeks anyway. Situations like this are so dynamic that the priorities shift every day, as new feedback comes from customers. Scrum or other iterative processes don't make sense in this case.
There is a real need for stories that are longer than your iteration length. Kanban, like Scrum, thrives with small stories (better flow, slack, etc.), BUT, unlike Scrum, it DOES at least allow large stories if really necessary.
Extremely fast-paced development. Continuous deployment. Iterations no longer make sense, because maybe you're responding to change lightning-quick, and maybe releasing multiple times a day!
See code.flickr.com:
Flickr was last deployed 4 hours ago, including 8 changes by 2 people.
In the last week there were 85 deploys of 588 changes by 19 people.
Do you think Flickr is doing 2-week iterations, or even 1-day iterations? I doubt it. It looks like they're in super-speed dynamic flow mode... maybe Kanban, but definitely under the Lean umbrella. (Kanban falls under the umbrella of Lean thinking, and continuous deployment was made popular by last year's book by Eric Ries, "The Lean Startup".)
It might not fit in the following environments:
Organizations whose culture can't get away from up-front planning, overwork and over-commitment, push instead of pull, fixing all of schedule, scope, and cost, etc. Kanban will start to provoke continuous improvement in an organization, and many are simply opposed to anything but the traditional overengineered, overdocumented, siloed, non-Lean, non-Agile approach that they know and love, which is waterfall. Some government contracts might also fall under this category, although I believe at least the DoD is trying to advance Agile in its projects now. But some companies, if you tell them they need to do LESS (i.e., limit work in progress, have a clearer vision, get less stuff done faster, but therefore more stuff done overall), will have a heart attack. Many (most?) companies are addicted to overwork and think SLACK (which is a fundamental Lean principle) is a four-letter word. Unfortunately, queuing theory and the theory of constraints are hard to get through some people's heads. :) So Kanban might not fit in those kinds of places. ;)
The requirements are that everyone on the project agrees to use the principles and practices:
Principles
1. Start with what you do now
2. Agree to pursue incremental, evolutionary change
3. Initially, respect current processes, roles, responsibilities and job titles
4. Encourage acts of leadership at all levels
Practices
1. Visualise what you do / knowledge discovery
2. Limit work in progress (see the code sketch below)
3. Measure and manage flow
4. Make policies explicit
5. Develop feedback mechanisms
6. Improve collaboratively using models and the scientific method
If they don't agree to do that, they can't use kanban. Pretty clear.
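To get a feel for how visualising the work, limiting WIP and pulling (rather than pushing) fit together mechanically, here is a toy board model. The column names and limits are invented examples, not a recommendation.

```python
# Toy board: columns visualise the process, WIP limits cap work in progress,
# and items are pulled downstream only when there is free capacity.
# Column names and limits are invented examples.
COLUMNS = ["Backlog", "Development", "Testing", "Done"]
WIP_LIMITS = {"Development": 3, "Testing": 2}        # no limit on Backlog or Done

board = {col: [] for col in COLUMNS}
board["Backlog"] = ["story-1", "story-2", "story-3", "story-4", "story-5"]


def pull(src: str, dst: str) -> bool:
    """Pull the oldest item from src into dst if dst still has capacity."""
    limit = WIP_LIMITS.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        return False           # WIP limit hit: finish something before starting more
    if not board[src]:
        return False           # nothing upstream to pull
    board[dst].append(board[src].pop(0))
    return True


while pull("Backlog", "Development"):    # start work only up to the WIP limit
    pass
print(board)   # Development holds 3 items (its limit); the rest stay queued
```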
Kanban is a tool for visualizing and improving existing processes. If there's a scenario in which it won't work, that scenario would probably have one or both of two properties.
1) There is no existing process or the existing process is such a disaster that it is not working at all and/or is constantly changing in a chaotic manner.
2) There is no will or opportunity to improve.
Lacking the second is not a deal-breaker. Kanban could still help with communication and coordination, and increasing clarity might lead to an increased desire to improve, or help establish the trust necessary to empower improvement experiments. That would be an example of a low-maturity Kanban implementation leading to a higher level of team maturity.
Kanban is a simple process tool. Applied well, it is good for any project, not just software - any.
In my opinion Kanban can be implemented in any kind of organization and in any industry.
But...
the team must be ready for a change
the change must have an evolutionary character
the organization must be focused on cooperation instead of performing routine work.
How does design fit into the Kanban method? Should there be a dedicated column for design, or is the high- and low-level design of the story done before it even moves from the story queue into WIP?
There is no single answer; it all depends on how the team works. What you basically do with a Kanban board is map the way you work onto the board, which means the answer to your question lies in the process you follow when you build software.
To help you a bit, here are the rules you should follow when you work with Kanban:
You should visualize the workflow, which means all stages that are separate in real life should be shown as separate on the board.
Your policies should be explicit, which means everyone on the team should understand the process the same way. If a sticky note is in a specific column, it should mean that the feature is in such and such a state, and that should be obvious to everyone on the team.
Additionally, it is typical to introduce a separate column when you deal with handoffs, meaning a feature is handed off from one team member to another, e.g. separate columns for development (done by a developer) and testing (done by a quality engineer).
When it comes to design, I've seen different approaches: a single separate column for all design, several columns when the design work is extensive, or no dedicated column at all when design is merged with development. Ask yourself, and your team, what it looks like in your case and you will have the answer to how the board should look.
And one final piece of advice: don't be afraid of experimenting. If you aren't sure how you work, which is possible, try different things and check which board design works best in your case.
I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, auto deployment and logging all bugs using Trac and I am now moving on to deal with changes.
I have a list of about 20 changes to our product to have done over the next 2 months. Normally I would just spec out each change going into detail of what should be done but I am curious as to how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product (whether it's to be deployed to multiple clients in the same way or to a single client as a bespoke project), the impact and dependencies these changes have on the system and on each other, and the priority with which these changes need to be made.
I would look at the priority and the dependencies; that will naturally start to group things.