I have a SQL table with two columns, seq and sub_seq, as seen below. I would like to add a third column, id, which increases by 1 every time sub_seq starts again at 1, as shown in the table below.
seq  sub_seq  id
1    1        1
2    2        1
3    3        1
4    4        1
5    5        1
6    1        2
7    2        2
8    3        2
9    1        3
10   2        3
11   3        3
12   4        3
13   5        3
14   6        3
15   7        3
I could write a solution using PL/pgSQL; however, I would like to know whether there is a way of doing this in standard SQL. Any help would be greatly appreciated.
If sub_seq is always a running sequence, then you can use the DENSE_RANK function and order over the difference of the two columns, assuming that difference stays constant within each group.
SELECT seq, sub_Seq, DENSE_RANK() OVER (ORDER BY seq-sub_Seq) AS id
FROM tableDemo
This solution is based on the sample data you have provided; I think more sample data would be helpful to check the whole scenario.
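If seq - sub_seq may not stay constant within a group, a sketch of an alternative (assuming PostgreSQL, since the question mentions PL/pgSQL, and reusing the placeholder table name tableDemo) is to count how many times sub_seq has restarted at 1 up to each row:

-- Each restart of sub_seq at 1 starts a new id; the running count of
-- restarts up to the current row is therefore the id itself.
SELECT seq,
       sub_seq,
       COUNT(*) FILTER (WHERE sub_seq = 1)
           OVER (ORDER BY seq) AS id
FROM tableDemo
ORDER BY seq;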
Can you tell me how to divide rows into buckets of a given bin size in PostgreSQL? I want to process every 100 rows of a column, out of lakhs (hundreds of thousands) of records.
Using ntile(n), we can distribute rows evenly into n buckets.
For example,
unit_no  ntile(2)  ntile(3)
4566     1         1
4322     1         1
6777     1         2
8755     2         2
9765     2         3
3235     2         3
But what I am looking for is to define the size of each bin, without having to specify how many buckets are needed in total, since lakhs of records keep coming in and I have to process 100 records at a time.
That is, assign bucket 1 to rows 1-100, bucket 2 to rows 101-200, and so on, given only a bin size (100 here).
The controlling argument should be the size of the bin (group size) rather than the total number of buckets.
You can calculate it with row_number(), e.g. for bins of 100 items, divide by 100:
with temp_data as (
select * from generate_series(1,1000)
)
select
*,
((row_number() over() -1) / 100)::int +1 as bucket_nr
from temp_data
https://www.db-fiddle.com/f/uBQ7stu6rvN9kdBSRLTFb6/0
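In practice you will usually want a deterministic order inside the window; a sketch of the same idea against a real column (the table name my_table and the column unit_no are assumptions for illustration):

-- Bin size 100: number the rows in unit_no order, then integer-divide,
-- so rows 1-100 get bucket 1, rows 101-200 get bucket 2, and so on.
select
  unit_no,
  ((row_number() over (order by unit_no) - 1) / 100) + 1 as bucket_nr
from my_table;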
I have this table and I want to add another column containing the overall average of the avg column (the values are in seconds).
For example:
my table:
id  avg
1   2.5
2   3.2
3   4.1
4   0.8
my desired table:
id  avg  daily_avg
1   2.5  2.65
2   3.2  2.65
3   4.1  2.65
4   0.8  2.65
Is there any simple and short way to do it?
I'm using PostgreSQL.
Thanks.
demo: db<>fiddle
You can use the AVG() window function:
SELECT
id, avg,
AVG(avg) OVER () AS daily_avg
FROM mytable
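If a stored column is actually wanted rather than a query-time value, a minimal sketch (assuming the table really is named mytable and a one-off backfill is acceptable):

-- Hypothetical: add the column, then backfill it with the overall average.
ALTER TABLE mytable ADD COLUMN daily_avg numeric;

UPDATE mytable
SET daily_avg = sub.overall_avg
FROM (SELECT AVG(avg) AS overall_avg FROM mytable) AS sub;

Note that a stored value goes stale as rows change, which is why the window-function query above is usually preferable.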
Using Postgres 11.6, I'm trying to analyze some event data. The goal is to find the durations for all events with a specific name, and then split each one out into evenly sized buckets. We're looking for any times that "clump" for a specific event. I'm editing my question as the specific case may be obscuring what I'm trying to ask.
Simple example
The question is "how do you group rows by a value, then split occurrences by frequency into buckets with count and average for each of those buckets." Here's a hand-done toy example with rounded averages:
Months with values, each number here represents a row.
Jan 12 24 60 150 320 488
Feb 8 16 40 100 220
Mar 4 8 20 310
Overall figures
Month Count Avg Min Max
Jan 6 176 12 488
Feb 5 77 8 220
Mar 4 86 4 310
The same original example, but with more data, including repeated values and a wider range.
Jan 12 12 12 12 24 24 60 60 150 320 488 500
Feb 8 8 8 8 8 16 40 100 220 440 1100
Mar 4 8 8 8 8 20 20 20 20 310
Overall figures
Month Count Avg Min Max
Jan 12 140 12 500
Feb 11 178 8 1100
Mar 10 43 4 310
Mock-up of one of the sets of data split out into 3 buckets
Month Count Avg Min Max Bucket
Jan 4 12 12 12 0
Jan 4 42 24 60 1
Jan 4 365 150 500 2
...and so on for Feb and Mar
I'm just guessing at how the buckets would split in the mock-up above.
That pretty much captures what I'm trying to do. Group by month name (from_to_node in my real case), split the resulting rows into buckets, and then get min, max, avg, and count for each bucket. It's starting to sound like a pivot (?)
Real Table Setup
Here's the structure of table I'm getting a feed for:
CREATE TABLE IF NOT EXISTS data.edge_event (
id uuid,
inv_id uuid,
facility_id uuid,
from_node citext,
to_node citext,
from_to_node citext,
from_node_dts timestamp without time zone,
to_node_dts timestamp without time zone,
seconds integer,
cycle_id uuid
);
The duration is pre-calculated in seconds, and the area of interest for now is only the from_to_node name. So, it's fair to think of the example as
CREATE TABLE IF NOT EXISTS data.edge_event (
from_to_node citext,
seconds integer
);
Raw Data
Within the edge_event table, there are 159 distinct from_to_node values over around 300K event rows. Some are found in only a handful of edge_event records, some are found in thousands, or tens of thousands. That's too much to provide a good sample for. But to make the problem simpler to follow, a from_to_node might be
Boxing_Assembly 1256
Meaning "it took 1256 seconds to move this part from the Boxing phase to the Assembly phase." And here we might have 10,000 other records for "Boxing_Assembly" with different durations.
Goal
We're looking for two things out of each from_to_node. For something like Boxing_Assembly, I'm trying to do this:
Sort the seconds taken into buckets, say 20 buckets. This is for a histogram.
For each bucket get the
count of edge_event rows
avg(seconds) within the bucket
min/first_value(seconds) within the bucket
max/last_value(seconds) within the bucket
So, we're looking to chart durations to look for clusters, and then get the raw seconds out of any common clusters.
What I've tried
I've tried a lot of different code, and I've not succeeded. It seems like a problem for GROUP BY and/or window functions. There's something I'm not getting, as my results are far from the mark.
I know that I haven't provided sample data, which makes it harder to help. But I'm guessing that what I'm missing is one or more concepts. Pretty much, I want to know how to split out the edge_event data by from_to_node and then by seconds. Given the huge ranges across from_to_nodes, I'm trying to bucket each one individually based on its own min/max.
Thanks very much for any help.
Draft Attempt
I've developed a query that works a bit, but not entirely. This is an edit from my original post with broken code.
WITH
min_max AS
(
SELECT from_to_node,
min(seconds),
max(seconds)
FROM edge_event
GROUP BY from_to_node
)
SELECT edge_event.from_to_node,
width_bucket (seconds, min_max.min, min_max.max, 99) as bucket, -- 99 buckets, numbered 1-99; values below the range get bucket 0 and values at or above the max get bucket 100, since the upper bound is exclusive.
count(*) as frequency,
min(seconds) as seconds_min,
max(seconds) as seconds_max,
max(seconds) - min(seconds) as bucket_width,
round(avg(seconds)) as seconds_avg
FROM edge_event
JOIN min_max ON (min_max.from_to_node = edge_event.from_to_node)
WHERE min_max.min <> min_max.max AND -- Can't have a bucket with an upper and lower bound that are the same.
edge_event.from_to_node IN (
'Boxing_Assembly',
'Assembly_Waiting For QA')
GROUP BY edge_event.from_to_node,
bucket
ORDER BY from_to_node,
bucket
What I'm getting back looks pretty good:
from_to_node bucket frequency seconds_min seconds_max bucket_width seconds_avg
Boxing_Assembly 1 912 17 7052 7035 3037
Boxing_Assembly 2 226 7058 13937 6879 9472
Boxing_Assembly 3 41 14151 21058 6907 16994
Boxing_Assembly 4 16 21149 27657 6508 23487
Boxing_Assembly 5 4 28926 33896 4970 30867
Boxing_Assembly 6 1 37094 37094 0 37094
Boxing_Assembly 7 1 43228 43228 0 43228
Boxing_Assembly 10 2 63666 64431 765 64049
Boxing_Assembly 14 1 94881 94881 0 94881
Boxing_Assembly 16 1 108254 108254 0 108254
Boxing_Assembly 37 1 257226 257226 0 257226
Boxing_Assembly 40 1 275140 275140 0 275140
Boxing_Assembly 68 1 471727 471727 0 471727
Boxing_Assembly 100 1 696732 696732 0 696732
Assembly_Waiting For QA 1 41875 1 18971 18970 726
Assembly_Waiting For QA 9 1 207457 207457 0 207457
Assembly_Waiting For QA 15 1 336711 336711 0 336711
Assembly_Waiting For QA 38 1 906519 906519 0 906519
Assembly_Waiting For QA 100 1 2369669 2369669 0 2369669
One problem here is that the buckets aren't evenly sized; the widths seem kind of weird. I've also tried specifying 10, 20, or 100 buckets, and I get similar results. I'm hoping that there is a better way to allocate the data to buckets that I'm missing, and that there's a way to have zero-entry buckets instead of gaps.
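A hedged sketch of one way to get gap-free buckets with the same width_bucket() approach: generate the full list of bucket numbers per from_to_node and LEFT JOIN the aggregated rows onto it, so empty buckets appear with a frequency of 0. The choice of 20 buckets is arbitrary, and max + 1 keeps the maximum value inside the last bucket, since width_bucket's upper bound is exclusive.

WITH min_max AS (
    SELECT from_to_node, min(seconds) AS min_s, max(seconds) AS max_s
    FROM edge_event
    GROUP BY from_to_node
    HAVING min(seconds) <> max(seconds)
),
bucketed AS (
    SELECT e.from_to_node,
           width_bucket(e.seconds, m.min_s, m.max_s + 1, 20) AS bucket,
           count(*)              AS frequency,
           min(e.seconds)        AS seconds_min,
           max(e.seconds)        AS seconds_max,
           round(avg(e.seconds)) AS seconds_avg
    FROM edge_event e
    JOIN min_max m ON m.from_to_node = e.from_to_node
    GROUP BY e.from_to_node, bucket
)
SELECT m.from_to_node,
       g.bucket,
       coalesce(b.frequency, 0) AS frequency,  -- empty buckets show 0 instead of disappearing
       b.seconds_min,
       b.seconds_max,
       b.seconds_avg
FROM min_max m
CROSS JOIN generate_series(1, 20) AS g(bucket)
LEFT JOIN bucketed b
       ON b.from_to_node = m.from_to_node AND b.bucket = g.bucket
ORDER BY m.from_to_node, g.bucket;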
I would use the PostgreSQL optimizer for that. It collects exactly the information you want.
Create a temporary table with the values you are interested in and ANALYZE it. Then look into pg_stats for the following:
If there are "most common values", you have them and their frequency right there.
Otherwise, look for adjacent histogram boundaries that are close together. Such a bucket is an interval where values are "lumped".
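A minimal sketch of that approach for a single from_to_node, using the table from the question (the temporary table name is made up; most_common_vals, most_common_freqs, and histogram_bounds are the relevant pg_stats columns):

-- Pull one event type into a temp table, ANALYZE it, then read its stats.
CREATE TEMP TABLE boxing_assembly_seconds AS
SELECT seconds
FROM edge_event
WHERE from_to_node = 'Boxing_Assembly';

ANALYZE boxing_assembly_seconds;

SELECT most_common_vals,
       most_common_freqs,
       histogram_bounds
FROM pg_stats
WHERE tablename = 'boxing_assembly_seconds'
  AND attname = 'seconds';

By default ANALYZE keeps at most 100 most-common values and histogram boundaries per column (default_statistics_target); that limit can be raised per column with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS if finer detail is needed.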
I am trying to merge two data tables (tables A and B) in Spotfire 7.10 using Insert Columns to give the resultant table C. My problem is I cannot get the join I need on Depth, because the Depth values in tables A and B are not exact matches. What I need is to match Table B to Table A on the nearest Depth value, i.e. Depth 10.5 (Table B) matches Depth 10 (Table A). Is this possible in Spotfire or using a TERR R script?
Table A
Depth data
10 2
20 4
30 3
40 5
50 7
Table B
Depth data 2
10.5 100
30.5 112
50.5 125
Table C
Depth data data 2
10 2 100
20 4
30 3 112
40 5
50 7 125
Many thanks for any help.
It depends on the range of values you may have for Depth in both tables, but you may find that simply rounding Depth in Table B to the nearest 10 will suffice. Then you can join based on this.
Round([Depth]/10,0)*10