Kanban - What's the data behind a cumulative flow diagram? [closed]

I have a simple question that somehow refuses to get answered: online resources describe contradictory ways of building a cumulative flow diagram. So here is my query:
How do we add stories per day to a cumulative flow diagram in Kanban?
1. Do we count only the stories that were worked on that day in each of the queues? This implies that the numbers can drop over time and fluctuate considerably.
2. Do we count the stories that were ever worked in any of the queues plus the stories worked on today? This means that the number in any queue can never drop; it either grows over time or, at the least, stays the same as yesterday.
I understand that "cumulative" implies approach #2, but I do see examples that take the first approach.

Let's assume you have the following flow of task counts:
Day 1 - 5 tasks in backlog, 2 in WIP
Day 2 - 4 tasks in backlog, 3 in WIP
Day 3 - 6 tasks in backlog, 2 in WIP, 2 in DONE
Day 4 - 2 tasks in backlog, 4 in WIP, 4 in DONE
Day 5 - 0 tasks in backlog, 3 in WIP, 7 in DONE
Day 6 - 0 tasks in backlog, 0 in WIP, 10 in DONE
First of all, whenever a new task is added to the backlog, you increase your total effort value. Thus the cumulative flow chart grows over time rather than burning down like a burndown chart. Here, total effort = backlog + WIP + DONE, partial effort = WIP + DONE, and effort spent = DONE. Your cumulative data should look like this:
Day 1 - Total effort: 7, Partial effort: 2, Effort spent: 0
Day 2 - Total effort: 7, Partial effort: 3, Effort spent: 0
Day 3 - Total effort: 10, Partial effort: 4, Effort spent: 2
Day 4 - Total effort: 10, Partial effort: 8, Effort spent: 4
Day 5 - Total effort: 10, Partial effort: 10, Effort spent: 7
Day 6 - Total effort: 10, Partial effort: 10, Effort spent: 10
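
To make the computation concrete, here is a minimal Python sketch (my illustration, not part of the original answer) that turns the daily queue counts above into the three cumulative series a CFD plots:

# Minimal sketch: derive the cumulative flow bands from daily queue counts.
# The data below is the example from the answer above.

daily_counts = [
    {"backlog": 5, "wip": 2, "done": 0},   # Day 1
    {"backlog": 4, "wip": 3, "done": 0},   # Day 2
    {"backlog": 6, "wip": 2, "done": 2},   # Day 3
    {"backlog": 2, "wip": 4, "done": 4},   # Day 4
    {"backlog": 0, "wip": 3, "done": 7},   # Day 5
    {"backlog": 0, "wip": 0, "done": 10},  # Day 6
]

for day, counts in enumerate(daily_counts, start=1):
    total_effort = counts["backlog"] + counts["wip"] + counts["done"]  # everything ever pulled onto the board
    partial_effort = counts["wip"] + counts["done"]                    # started or finished
    effort_spent = counts["done"]                                      # finished
    print(f"Day {day} - Total effort: {total_effort}, "
          f"Partial effort: {partial_effort}, Effort spent: {effort_spent}")

Because tasks only move downstream (or new ones enter the backlog), each of these series can only grow or stay flat from one day to the next, which is approach #2 from the question.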

Related

Group areas into continuous units

The observations in my dataset were made over 7 years and come from different geographic areas. There are almost 4,000 areas, and the challenge is that each year the grouping of observations was different. I attach the first 30 lines of the table.
For example, in year 1 the observations for areas 3, 4, and 5 were grouped together and share the same reference number 10001, and the observations for areas 6, 7, and 8 have the common reference number 10002. I want to compare observations across years, and to do this I need 'continuous' spatial units which remain fixed over time. In my example, the grouping of areas 3 to 5 and of areas 6 to 8 is a starting point; I could continue with these two groups for five years, but in year 6 area 8 has the same reference number as areas 9 to 11, so I need to dissolve all areas from 6 to 11 into one larger continuous unit. I can do this manually, but given the size of my table this is impractical. I wonder if somebody has already met this problem and could suggest an approach in R to solve it.
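
The asker wants an R approach, but the underlying idea is just connected components: two areas belong to the same fixed unit if they ever share a reference number in any year. Here is a small Python sketch of that merging logic (the toy data and column layout are hypothetical); the same components idea is available in R through graph packages such as igraph.

# Sketch of the merging logic with a toy (year, area, ref) table.
# Areas that ever share a reference number end up in the same fixed unit.
from collections import defaultdict

rows = [
    (1, 3, 10001), (1, 4, 10001), (1, 5, 10001),
    (1, 6, 10002), (1, 7, 10002), (1, 8, 10002),
    (6, 8, 20005), (6, 9, 20005), (6, 10, 20005), (6, 11, 20005),
]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Union all areas that share a (year, ref) pair.
by_ref = defaultdict(list)
for year, area, ref in rows:
    by_ref[(year, ref)].append(area)
for areas in by_ref.values():
    for a in areas[1:]:
        union(areas[0], a)

# Collect the resulting fixed spatial units.
units = defaultdict(set)
for _, area, _ in rows:
    units[find(area)].add(area)
print([sorted(u) for u in units.values()])  # e.g. [[3, 4, 5], [6, 7, 8, 9, 10, 11]]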

Rank dates in Tableau

How can I rank dates in Tableau?
By customer, I have a list of banked codes (every code has an ID and an issue date), and I am interested in calculating the number of days between the first banked code and, for example, the 10th code (the difference is calculated on the issue dates).
Some people may have 1 code, some 2, some 10, some 100, etc. I'm only interested in calculating this metric when the number of codes banked is > 9.
The result will be, by customer, Code 10 Issue Date - Code 1 Issue Date.
So I expect that the most engaged customers will bank 10 codes in 10 days, while less engaged customers will take more days to bank 10.
EDIT: added below an example of the data source (first three columns) and the missing fields to be calculated (last two columns)
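
No Tableau answer is given here, but to pin down the logic being asked for, here is a hedged pandas sketch (the column names customer_id, code_id, and issue_date are made up) that computes the days from the first to the tenth banked code per customer:

# Sketch of the metric outside Tableau, with hypothetical columns and data.
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["A"] * 12 + ["B"] * 3,
    "code_id": range(15),
    "issue_date": pd.to_datetime(
        [f"2020-01-{d:02d}" for d in range(1, 13)]
        + ["2020-02-01", "2020-02-05", "2020-02-09"]
    ),
})

def days_first_to_tenth(dates):
    dates = dates.sort_values()
    if len(dates) < 10:
        return None  # metric only defined when at least 10 codes were banked
    return (dates.iloc[9] - dates.iloc[0]).days

result = df.groupby("customer_id")["issue_date"].apply(days_first_to_tenth)
print(result)  # A -> 9, B -> NaN (fewer than 10 codes)

In Tableau this could likely be expressed with a ranked table calculation or an LOD expression, but the sketch above is only meant to make the target number explicit.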

Rolling Count of Values BETWEEN two dates, 12 to 24 months ago (SPOTFIRE Custom Expression)

I am struggling to create this calculation.
I need a rolling count of all of a column's values BETWEEN two dates: 12 to 24 months ago.
I do not want to do this by limiting the data; I need it in the custom expression because of other work.
Currently I have this expression. I thought it would at least count all the values since two years ago, but it fails to do even that. Does anyone have a simpler way to calculate 12 to 24 months ago?
(((Count(If(((Month([DATE])>=Month(DateAdd("mm",-24,DateTimeNow())))
and (Year([DATE])>=Year(DateAdd("yy",-2,DateTimeNow())))),
[EXTRAPOLATEDPRESSURE],null)))))
Solved. I was making it too complex with the Month and Year aspects.
Count(If(([DATE]>=dateadd("mm",-24,DateTimeNow())) and ([DATE]<=dateadd("mm",-12,DateTimeNow())),
[EXTRAPOLATEDPRESSURE],null))
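
The fix above works because it compares the full [DATE] value against the two cutoffs instead of testing the month and year parts separately. For reference, here is the same 12-to-24-months-ago window test sketched in Python with pandas (the column names follow the expression above; the data is made up):

# Count values whose date falls between 24 and 12 months before now.
import pandas as pd

df = pd.DataFrame({
    "DATE": pd.to_datetime(["2023-01-15", "2023-11-01", "2024-06-10", "2024-12-20"]),
    "EXTRAPOLATEDPRESSURE": [101.2, 99.8, 100.5, 98.7],
})

now = pd.Timestamp.now()
lower = now - pd.DateOffset(months=24)   # 24 months ago
upper = now - pd.DateOffset(months=12)   # 12 months ago

in_window = df["DATE"].between(lower, upper)
print(df.loc[in_window, "EXTRAPOLATEDPRESSURE"].count())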

Tableau_sales calculation 7 days ago

Could you help me with one question about sales: I currently have the amount of sales by month, but I would also like to add a column showing the amount 7 days ago, so I can compare progress and see the difference. How could I do it?
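The question's screenshot is missing, so the data layout is unclear. Assuming a simple daily sales table, a sketch of the comparison in pandas might look like this (column names are hypothetical; in Tableau the equivalent would usually be a LOOKUP-style table calculation):

# Hypothetical daily sales table; add the value from 7 days earlier and the difference.
import pandas as pd

sales = pd.DataFrame({
    "date": pd.date_range("2021-03-01", periods=14, freq="D"),
    "sales": [100, 120, 90, 110, 130, 95, 105, 140, 150, 100, 115, 125, 135, 145],
})

# shift(7) works when every day is present; with gaps, join on date - 7 days instead.
sales["sales_7d_ago"] = sales["sales"].shift(7)
sales["diff"] = sales["sales"] - sales["sales_7d_ago"]
print(sales.tail())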

Round Robin Scheduling : Two different solutions - How is that possible?

Problem :
Five batch jobs A through E arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. Determine the mean process turnaround time. Ignore process switching overhead. For Round Robin scheduling, assume that the system is multiprogramming and that each job gets its fair share of the CPU. All jobs are completely CPU bound.
Solution #1: The following solution comes from this page:
For round robin, during the first 10 minutes, each job gets 1/5 of the
CPU. At the end of the 10 minutes, C finishes. During the next 8
minutes, each job gets 1/4 of the CPU, after which time D finishes.
Then each of the three remaining jobs get 1/3 of the CPU for 6
minutes, until B finishes and so on. The finishing times for the five
jobs are 10, 18, 24, 28, 30, for an average of 22 minutes.
Solution #2: The following solution comes from Cornell University (it can be found here) and is obviously different from the previous one, even though the problem is given in exactly the same form (this solution, by the way, makes more sense to me):
Remember that the turnaround time is the amount of time that elapses
between the job arriving and the job completing. Since we assume that
all jobs arrive at time 0, the turnaround time will simply be the time
that they complete. (a) Round Robin: The table below gives a breakdown
of which jobs will be processed during each time quantum. A *
indicates that the job completes during that quantum.
Quanta 1-10:  A B C D E A B C* D E
Quanta 11-20: A B D E A B D* E A B
Quanta 21-30: E A B* E A E A E* A A*
The results are different: in the first one C finishes after 10 minutes, for example, whereas in the second one C finishes after 8 minutes.
Which one is correct, and why? I'm confused. Thanks in advance!
The problems are different. The first problem does not specify a time quantum, so you have to assume the quantum is very small compared to a minute. The second problem clearly specifies a one minute scheduler quantum.
The mystery with the second solution is why it assumes the tasks run in letter order. I can only assume that this is an assumption made throughout the course, so students would be expected to know to make it here.
In fact, there is no such thing as a 'correct' RR algorithm. RR is merely a family of algorithms, based on the common concept of scheduling several tasks in a circular order. Implementations may vary (for example, you may consider task priorities or you may discard them, or you may manually set the priority as a function of task length or whatever else).
So the answer is - both algorithms seem to be correct, they are just different.
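
To see how the two interpretations diverge, here is a small simulation sketch (my own illustration, not taken from either linked solution). It models solution #1 as processor sharing (the quantum is negligibly small, so all remaining jobs progress at the same rate) and solution #2 as round robin with a one-minute quantum cycling the jobs in letter order:

# Contrast the two readings of the problem.
from collections import deque

burst = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}  # minutes, all arrive at t = 0

def processor_sharing(burst):
    # Quantum -> 0: all remaining jobs share the CPU equally (solution #1).
    remaining = dict(burst)
    finish, t = {}, 0.0
    while remaining:
        n = len(remaining)
        shortest = min(remaining.values())
        t += shortest * n  # elapsed time until the shortest remaining job drains
        for job in [j for j, r in remaining.items() if r == shortest]:
            finish[job] = t
            del remaining[job]
        for job in remaining:
            remaining[job] -= shortest
    return finish

def round_robin(burst, quantum=1):
    # Fixed quantum, jobs cycled in letter order (solution #2).
    remaining = dict(burst)
    queue = deque(sorted(burst))
    finish, t = {}, 0
    while queue:
        job = queue.popleft()
        run = min(quantum, remaining[job])
        t += run
        remaining[job] -= run
        if remaining[job] == 0:
            finish[job] = t
        else:
            queue.append(job)
    return finish

for name, finish in [("processor sharing", processor_sharing(burst)),
                     ("quantum = 1", round_robin(burst))]:
    print(name, finish, "mean turnaround =", sum(finish.values()) / len(finish))
# processor sharing: C=10, D=18, B=24, E=28, A=30 -> mean 22.0
# quantum = 1:       C=8,  D=17, B=23, E=28, A=30 -> mean 21.2

The two averages come out close (22 versus 21.2 minutes), which supports the answer's point: the two solutions simply use different, equally defensible variants of round robin.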