I'm trying to create a column that correctly labels changes in effort by Estimator.
I've been able to get close using the code below with the DENSE_RANK() function, but it's not quite what I'm looking for. I'm having trouble identifying the start and end points to organize by. I've included the current and desired output below.
Current Code:
SELECT *,
       DENSE_RANK() OVER (ORDER BY Estimator, Effort) AS [Group]
FROM #Estimating_with_Breakpoints
ORDER BY Estimator, [Date], DateType
Current output:
Job Name DateType Date Effort Group
Hidden Lakes Apartments Start 3/8/2017 50 6
Hidden Lakes Apartments Breakpoint 4/13/2017 50 6
Hidden Lakes Apartments Finish 4/13/2017 0 4
Dr. Biggs Joint Institute Breakpoint 5/1/2017 0 4
Dr. Biggs Joint Institute Start 5/1/2017 33 5
Bonita Springs Library Breakpoint 5/22/2017 33 5
North Ft. Myers Library Breakpoint 5/22/2017 83 7
Bonita Springs Library Start 5/22/2017 83 7
North Ft. Myers Library Start 5/22/2017 133 9
Dr. Biggs Joint Institute Breakpoint 6/5/2017 133 9
Dr. Biggs Joint Institute Finish 6/5/2017 100 8
Bonita Springs Library Breakpoint 6/19/2017 100 8
North Ft. Myers Library Breakpoint 6/19/2017 50 6
Bonita Springs Library Finish 6/19/2017 50 6
North Ft. Myers Library Finish 6/19/2017 0 4
Desired output:
Job Name DateType Date Effort Group
Hidden Lakes Apartments Start 3/8/2017 50 1
Hidden Lakes Apartments Breakpoint 4/13/2017 50 1
Hidden Lakes Apartments Finish 4/13/2017 0 2
Dr. Biggs Joint Institute Breakpoint 5/1/2017 0 2
Dr. Biggs Joint Institute Start 5/1/2017 33 3
Bonita Springs Library Breakpoint 5/22/2017 33 3
North Ft. Myers Library Breakpoint 5/22/2017 83 3
Bonita Springs Library Start 5/22/2017 83 3
North Ft. Myers Library Start 5/22/2017 133 4
Dr. Biggs Joint Institute Breakpoint 6/5/2017 133 4
Dr. Biggs Joint Institute Finish 6/5/2017 100 5
Bonita Springs Library Breakpoint 6/19/2017 100 5
North Ft. Myers Library Breakpoint 6/19/2017 50 5
Bonita Springs Library Finish 6/19/2017 50 5
North Ft. Myers Library Finish 6/19/2017 0 5
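To show the kind of grouping logic I'm after, here is a rough pandas sketch (not T-SQL, just to illustrate the idea of flagging change points and then taking a running sum of the flags; the "Effort changed" condition below is only a placeholder for whatever the real start/end rule turns out to be, and the rows are a small subset of the sample above):

import pandas as pd

# Subset of the sample rows above
df = pd.DataFrame({
    "JobName":  ["Hidden Lakes Apartments", "Hidden Lakes Apartments",
                 "Hidden Lakes Apartments", "Dr. Biggs Joint Institute"],
    "DateType": ["Start", "Breakpoint", "Finish", "Breakpoint"],
    "Date":     pd.to_datetime(["2017-03-08", "2017-04-13", "2017-04-13", "2017-05-01"]),
    "Effort":   [50, 50, 0, 0],
})

df = df.sort_values(["Date", "DateType"]).reset_index(drop=True)

# Flag rows where Effort differs from the previous row, then number the groups
# by taking a running sum of the flags (a "gaps and islands" style trick).
changed = df["Effort"].ne(df["Effort"].shift())
df["Group"] = changed.cumsum()
print(df)   # Group comes out 1, 1, 2, 2 for these four rows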
I have a data set as follows:
week number  date        item    location    Out of stock %
23           2022-06-05  apple   Seattle     55%
23           2022-06-06  apple   Seattle     60%
23           2022-06-07  apple   Seattle     50%
23           2022-06-08  apple   Seattle     50%
23           2022-06-09  apple   Seattle     50%
23           2022-06-10  apple   Seattle     50%
23           2022-06-11  apple   Seattle     60%
23           2022-06-06  orange  California  10%
23           2022-06-07  orange  California  5%
23           2022-06-08  orange  California  5%
23           2022-06-09  orange  California  30%
23           2022-06-06  orange  California  20%
23           2022-06-07  orange  California  10%
23           2022-06-08  orange  California  2%
My desired output is an Out of stock filter for viewers, so that when they enter a certain value it returns the rows for a week in which the out-of-stock % is never below that value.
For example, if I enter 40% in the filter, only apple at Seattle would show up.
That apple would then be marked as continuously out of stock. Please help me!
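To make the logic concrete, here is a rough pandas sketch of what I mean (the column names are my own shorthand for the sample above, the percentages are written as fractions, and the real filter would live in whatever reporting tool sits on top of the data):

import pandas as pd

def continuously_out_of_stock(df, threshold):
    # df has columns: week_number, item, location, oos_pct (e.g. 0.55 for 55%)
    # Keep only the (week, item, location) groups whose out-of-stock %
    # never drops below the threshold during that week.
    weekly_min = df.groupby(["week_number", "item", "location"])["oos_pct"].min()
    return weekly_min[weekly_min >= threshold].reset_index()

# With threshold = 0.40, only (23, apple, Seattle) survives, because
# orange in California dips as low as 2% during week 23.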
I have to send an email out to all the team managers of my company providing the individual stats for each member of their team. Unfortunately, I am not very well acquainted with mail merge and have been running into multiple knowledge gaps. I was hoping somebody here could help me understand how I can do this. The sample data looks like this:
TM Email              Employee Name      Call Goal  Actual  % Goal Met  # of Audits  Accuracy
email1#fakeemail.com  John Doe           100        50      50%         4            92%
email1#fakeemail.com  Jane Doe           100        50      50%         4            92%
email1#fakeemail.com  Eric Stultz        100        50      50%         4            92%
email1#fakeemail.com  Christian Noname   100        50      50%         4            92%
email1#fakeemail.com  Fakename Mcgee     100        50      50%         4            92%
email1#fakeemail.com  senor chapo        100        50      50%         4            92%
email2#mail.com       Duck Werthington   100        50      50%         4            92%
email2#mail.com       Myster Eeman       100        50      50%         4            92%
email2#mail.com       Ion No             100        50      50%         4            92%
email2#mail.com       No Idea            100        50      50%         4            92%
email2#mail.com       Mail Man           100        50      50%         4            92%
Assume that there are over two dozen team managers with varying team sizes, and that each email will be sent in the same format as listed above. How would I go about this? I don't even know where to begin. Please help.
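In case it helps frame an answer, here is the kind of scripted approach I've been imagining instead of a true mail merge (a sketch only, not something I have working; the file name, SMTP server, and credentials are placeholders, and the column names follow the sample above): group the rows by "TM Email", turn each manager's rows into a small HTML table, and send one message per manager.

import smtplib
from email.message import EmailMessage
import pandas as pd

df = pd.read_excel("team_stats.xlsx")  # placeholder file; columns as in the sample above

with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder SMTP host
    server.starttls()
    server.login("sender@example.com", "app-password")  # placeholder credentials
    # one email per manager, containing only that manager's team rows
    for manager_email, team in df.groupby("TM Email"):
        msg = EmailMessage()
        msg["Subject"] = "Weekly stats for your team"
        msg["From"] = "sender@example.com"
        msg["To"] = manager_email
        html_table = team.drop(columns=["TM Email"]).to_html(index=False)
        msg.set_content("Please view this email in an HTML-capable client to see the stats table.")
        msg.add_alternative("<p>Hi,</p><p>Here are this week's stats for your team:</p>" + html_table,
                            subtype="html")
        server.send_message(msg)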
I have a table like this:
2019.03m Bolts 100
2019.03m Nuts 50
2019.02m Bolts 10
2019.02m Nuts 100
2019.01m Bolts 50
2019.01m Nuts 10
2018.12m Bolts 10
2018.12m Nuts 10
2018.11m Bolts 20
2018.11m Nuts 30
I would like to introduce a new year-to-date column:
2019.03m Bolts 100 160
2019.03m Nuts 50 160
2019.02m Bolts 10 60
2019.02m Nuts 100 110
2019.01m Bolts 50 50
2019.01m Nuts 10 10
2018.12m Bolts 10 30
2018.12m Nuts 10 40
2018.11m Bolts 20 20
2018.11m Nuts 30 30
The new column adds the current quantity to the previous year-to-date value for the same item, and resets when it reaches a new year.
I have an idea of using sums, but how can I reset the running total when I get to a new year?
I believe the below is what you are after. Note that I have reversed the table first to put it in ascending time order.
reverse update YTD:sums Number by tool,date.year from reverse t
date tool Number YTD
------------------------
2019.03 Bolts 100 160
2019.03 Nuts 50 160
2019.02 Bolts 10 60
2019.02 Nuts 100 110
2019.01 Bolts 50 50
2019.01 Nuts 10 10
2018.12 Bolts 10 30
2018.12 Nuts 10 40
2018.11 Bolts 20 20
2018.11 Nuts 30 30
If your table is ordered by date (descending, as in your example) you can use the query below. Otherwise, you can just order it with date xdesc before running the query. (Adjust the column names used here, num and name, to match your table.)
q) update ytd:reverse sums reverse num by date.year,name from t
I am having a hard time understanding how the SCAN and C-SCAN disk-scheduling algorithms work. I understood FCFS and Closest Cylinder Next, but I heard that SCAN resembles an elevator mechanism and got confused.
My book says that for the incoming order [10 22 20 2 40 6 38] (while the disk head is currently at 20 and moving upward at the start), SCAN serves [(20) 20 22 38 40 10 6 2]; this requires moves of [0 2 16 2 30 4 4] cylinders, a total of 58 cylinders.
How does the pattern [(20) 20 22 38 40 10 6 2] come about?
Let's look at what the SCAN (elevator) disk-scheduling algorithm says:
The head sweeps in its current direction of travel, servicing requests as it passes over them. When there is nothing left to service in that direction it reverses and services the requests it did not get on the first pass. A request that arrives just behind the head will not be serviced until the head comes back on its return sweep.
So, in your case, the current head position is 20 and the head is moving toward the higher-numbered cylinders. It services 22, 38 and 40 on the way up, then reverses and services 10, 6 and 2 on the way back down.
The order is:
[Fig: Demonstration of the SCAN algorithm]
So, as per the given data, the order will be [(20) 20 22 38 40 10 6 2];
EDIT:
The only difference between SCAN and C-SCAN is that with C-SCAN, once the head finishes its sweep in one direction it jumps back to the other end and continues servicing requests in the same direction, unlike SCAN, which turns around and services requests on the way back along the same path.
So with C-SCAN the direction of movement stays the same for the whole pass: after servicing the highest pending requests, the head wraps around to the lowest pending request and keeps moving upward.
So, as per the given data, the order will be [(20) 20 22 38 40 2 6 10]. Notice the change in the last three disk positions.
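If it helps, here is a small Python sketch (my own, not from the book) that reproduces both service orders. Note that, like the book's totals, it turns the head around at the last pending request rather than at the physical end of the disk:

def scan_order(head, requests):
    # SCAN: service everything at or above the head going up,
    # then reverse and service the rest going down.
    going_up = sorted(r for r in requests if r >= head)
    going_down = sorted((r for r in requests if r < head), reverse=True)
    return going_up + going_down

def cscan_order(head, requests):
    # C-SCAN: service everything at or above the head going up, then jump
    # to the lowest pending request and keep servicing in the same direction.
    going_up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    return going_up + wrapped

requests = [10, 22, 20, 2, 40, 6, 38]
print(scan_order(20, requests))    # [20, 22, 38, 40, 10, 6, 2]
print(cscan_order(20, requests))   # [20, 22, 38, 40, 2, 6, 10]

# Head movement for SCAN: differences between consecutive positions,
# starting from the initial head position of 20.
order = scan_order(20, requests)
moves = [abs(b - a) for a, b in zip([20] + order, order)]
print(moves, sum(moves))           # [0, 2, 16, 2, 30, 4, 4] 58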
I hope this is clear. Feel free to ask if anything is still confusing.
I'm using Apple's Large Image Downsizing example code in my project to load large images that can be zoomed.
The sample project can be downloaded here: Apple Large Image Downsizing
The UIScrollView source can be viewed directly here:
ImageScrollView.m
It works well, apart from the fact that the user can zoom in indefinitely. It seems that while Apple uses UIScrollView's zoom functionality, the actual zooming is performed by rescaling the source image rather than by transforming a UIView (though my understanding of how it works is a bit flaky!).
I'm looking for the maximum zoom to be limited to the full resolution of the image.
I was unaware of that project, but it would seem to not do what you really want. It lets you take a very large image file and downsize it.
There is a GitHub project, PhotoScrollerNetwork, that lets you download huge JPEG images (one is a NASA 18,000 x 18,000) and decodes them incrementally as they arrive. It then uses CATiledLayers to display the image at a reduction small enough to fit in the window, but lets you zoom in to the full image resolution. This might be more in line with your objective.
The project is based on Apple's PhotoScroller project, which only works with pre-tiled images.
EDIT: I downloaded the Large Image Downsizing project. It has much in common with Apple's PhotoScroller, and if you poke around the latter project you can probably figure out how to limit the zooming. I suspect it has to do with these lines:
self.maximumZoomScale = 5.0f;
self.minimumZoomScale = 0.25f;
That said, I took the leaves image and stuck it into PhotoScrollerNetwork's bundle and did a comparison on an iPhone 4. The Large Image Downsizing project took one minute to decode the image, and you get to see an incremental view of the image while it renders, but (I believe) it requires the whole image on disk before you can proceed.
PhotoScrollerNetwork was able to decode the image in 32 seconds, just about half the time. If you download from the network, it will decode the image as it receives the data, so the delay between the last chunk of data and when you see the image is small.
PhotoScrollerNetwork Offers:
concurrent image downloads and rendering
levels of detail automatically set to optimize showing image at full size and all-on-one-screen
very smooth zooming and panning (due to pre-rendered tiles)
no files are saved on disk (but it uses the disk cache): this means that if the app crashes, no cleanup is required.
The log messages from the two runs are below:
2012-09-05 11:46:11.784 LargeImage[2242:3107] beginning downsize. iterations: 14, tile height: 754.000000, remainder height: 425
2012-09-05 11:46:11.788 LargeImage[2242:3107] iteration 1 of 14
2012-09-05 11:46:13.132 LargeImage[2242:3107] iteration 2 of 14
2012-09-05 11:46:15.148 LargeImage[2242:3107] iteration 3 of 14
2012-09-05 11:46:17.526 LargeImage[2242:3107] iteration 4 of 14
2012-09-05 11:46:20.627 LargeImage[2242:3107] iteration 5 of 14
2012-09-05 11:46:24.017 LargeImage[2242:3107] iteration 6 of 14
2012-09-05 11:46:27.696 LargeImage[2242:3107] iteration 7 of 14
2012-09-05 11:46:31.823 LargeImage[2242:3107] iteration 8 of 14
2012-09-05 11:46:36.638 LargeImage[2242:3107] iteration 9 of 14
2012-09-05 11:46:41.791 LargeImage[2242:3107] iteration 10 of 14
2012-09-05 11:46:47.309 LargeImage[2242:3107] iteration 11 of 14
2012-09-05 11:46:53.299 LargeImage[2242:3107] iteration 12 of 14
2012-09-05 11:46:59.832 LargeImage[2242:3107] iteration 13 of 14
2012-09-05 11:47:06.800 LargeImage[2242:3107] iteration 14 of 14
2012-09-05 11:47:13.666 LargeImage[2242:3107] downsize complete.
2012-09-05 11:57:24.465 PhotoScrollerNetworkTurbo[2262:1c03] Initialize: total: 270237696 used: 163041280 FREE: 107196416 [resident=6574080 virtual=346882048]
2012-09-05 11:57:24.532 PhotoScrollerNetworkTurbo[2262:1c03] ORIENTATION=1 string=1
2012-09-05 11:57:24.535 PhotoScrollerNetworkTurbo[2262:1c03] ZLEVELS=5
2012-09-05 11:57:57.463 PhotoScrollerNetworkTurbo[2262:1c03] FINISH-I: 32974 milliseconds
2012-09-05 11:57:57.946 PhotoScrollerNetworkTurbo[2262:1c03] FINISHED: total: 260521984 used: 219987968 FREE: 40534016 [resident=3469312 virtual=349683712]