Is there a built-in function in PostgreSQL to sum the alternate digits of a number, starting from the right-hand side?
Input: 890400021003
Output:
3 + 0 + 2 + 0 + 4 + 9 = 18
0 + 1 + 0 + 0 + 0 + 8 = 9
Basically I want to take each alternate digit and sum them up as above. Please advise on any solution in Postgres.
In Postgres 9.4, you can do this easily with a string, using string_to_array() and unnest() WITH ORDINALITY:
select ord % 2, sum(val::numeric)
from (select reverse('890400021003'::text) as x) x,
     lateral unnest(string_to_array(x.x, NULL)) with ordinality u(val, ord)
group by ord % 2;
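With the sample value from the question, reverse() turns it into '300120004098', so the output should look something like:
 ?column? | sum
----------+-----
        1 |  18
        0 |   9
where 1 marks the 1st, 3rd, 5th, ... digits from the right and 0 the 2nd, 4th, 6th, ... digits.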
In 9.3 you can do this with a lateral join:
select i % 2, sum(substring(x.x, g.i, 1)::numeric)
from (select reverse('890400021003'::text) as x) x,
     lateral generate_series(1, length(x.x)) g(i)
group by i % 2;
And you can apply the same idea using a subquery in earlier versions.
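For instance, a minimal sketch of that subquery variant, assuming the input never has more than 20 digits so a constant upper bound can stand in for the LATERAL reference to length(x.x):
select g.i % 2, sum(substring(x.x, g.i, 1)::numeric)
from (select reverse('890400021003'::text) as x) x,
     (select generate_series(1, 20) as i) g   -- constant upper bound instead of LATERAL
where g.i <= length(x.x)
group by g.i % 2;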
Related
I have a table with:
1
2
3
4
5
6
9
10
11
12
and I need to receive:
1-6
9-12
How can I do that?
I need to see that there are two or more ranges of numbers in the table, in this case from 1 to 6 and from 9 to 12.
SELECT
CONCAT(MIN(A.b), '-', MAX(A.b))
FROM
(
SELECT
*,
ROW_NUMBER() OVER (ORDER BY b) RowId
FROM
(VALUES (1), (2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12)) a(b)
--WHERE
--(a.b >= 1 AND a.b <= 6) OR
--(a.b >= 9 AND a.b <= 12)
) A
GROUP BY
A.b - A.RowId
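The idea behind the GROUP BY is the gaps-and-islands trick: within a run of consecutive values, b - ROW_NUMBER() stays constant, so it can serve as the group key. A minimal sketch of the same pattern against a hypothetical table nums(b) (table and column names are assumptions):
SELECT CONCAT(MIN(b), '-', MAX(b)) AS num_range
FROM
(
    SELECT b,
           b - ROW_NUMBER() OVER (ORDER BY b) AS grp  -- constant within each consecutive run
    FROM nums
) t
GROUP BY grp
ORDER BY MIN(b);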
I want to build a table with a column that tracks how many times an id appears in a given week: if the id appears once it gets a 1, if it appears twice it gets a 2, but if it appears more than two times it gets a 0.
id date
a 2015-11-10
a 2015-11-25
a 2015-11-09
b 2015-11-10
b 2015-11-09
a 2015-11-05
b 2015-11-23
b 2015-11-28
b 2015-12-04
a 2015-11-10
b 2015-12-04
a 2015-12-07
a 2015-12-09
c 2015-11-30
a 2015-12-06
c 2015-10-31
c 2015-11-04
b 2015-12-01
a 2015-10-30
a 2015-12-14
The one-week intervals are given as follows:
1 - 2015-10-30 to 2015-11-05
2 - 2015-11-06 to 2015-11-12
3 - 2015-11-13 to 2015-11-19
4 - 2015-11-20 to 2015-11-26
5 - 2015-11-27 to 2015-12-03
6 - 2015-12-04 to 2015-12-10
7 - 2015-12-11 to 2015-12-17
The table should look like this.
id interval count
a 1 2
b 1 0
c 1 2
a 2 0
b 2 2
c 2 0
a 3 0
b 3 0
c 3 0
a 4 1
b 4 1
c 4 0
a 5 0
b 5 2
c 5 1
a 6 0
b 6 2
c 6 0
a 7 1
b 7 0
c 7 0
The interval column doesn't have to be there, I simply added it for clarity.
I am new to SQL and am unsure how to break the dates into intervals. The only thing I have is grouping by date and counting:
Select id, date, count(*) as frequency
from data_1
group by id, date having count(*) <= 2;
Looking at just the data you provided, this does the trick:
SELECT v.id,
i.interval,
coalesce((CASE WHEN sub.cnt < 3 THEN sub.cnt ELSE 0 END), 0) AS count
FROM (VALUES('a'), ('b'), ('c')) v(id)
CROSS JOIN generate_series(1, 7) i(interval)
LEFT JOIN (
SELECT id, ((date - '2015-10-30')/7 + 1)::int AS interval, count(*) AS cnt
FROM my_table
GROUP BY 1, 2) sub USING (id, interval)
ORDER BY 2, 1;
A few words of explanation:
You have three id values, which are here recreated with a VALUES clause. If you have many more or don't know beforehand which ids to enumerate, you can always replace the VALUES clause with a sub-query.
You provide a specific date range over 7 weeks. Since you might have weeks where a certain id is not present, you need to generate a series of the interval values and CROSS JOIN it to the id values above. This yields the 21 rows you are looking for.
Then you calculate the occurrences of ids in intervals. You can subtract a date from another date, which gives you the number of days in between. So subtract the earliest date from the date of the row, divide that by 7 to get the interval period, add 1 to make the interval 1-based, and convert to integer. For example, 2015-11-10 minus 2015-10-30 is 11 days; 11 / 7 = 1 in integer division, plus 1 gives interval 2. You can then convert counts of > 2 to 0 and NULL to 0 with a combination of CASE and coalesce().
The query outputs the interval too, otherwise you will have no clue what the data refers to. Optionally, you can turn this into a column which shows the date range of the interval.
More flexible solution
If you have more ids and a larger date range, you can use the version below, which first determines the distinct ids and the date range. Note that the interval is now 0-based to make the calculations easier. Not that it matters much, because the corresponding date range is displayed instead of the interval number.
WITH mi AS (
SELECT min(date) AS min, ((max(date) - min(date))/7)::int AS intv FROM my_table)
SELECT v.id,
to_char((mi.min + i.intv * 7)::timestamp, 'YYYY-mm-dd') || ' - ' ||
to_char((mi.min + i.intv * 7 + 6)::timestamp, 'YYYY-mm-dd') AS period,
coalesce((CASE WHEN sub.cnt < 3 THEN sub.cnt ELSE 0 END), 0) AS count
FROM mi,
(SELECT DISTINCT id FROM my_table) v
CROSS JOIN LATERAL generate_series(0, mi.intv) i(intv)
LEFT JOIN LATERAL (
SELECT id, ((date - mi.min)/7)::int AS intv, count(*) AS cnt
FROM my_table
GROUP BY 1, 2) sub USING (id, intv)
ORDER BY 2, 1;
SQLFiddle with both solutions.
Assuming you have a table of all users, this will do the trick.
select
users.id,
interval_table.id,
CASE
WHEN count(log_table.user_id)>2 THEN 0
ELSE count(log_table.user_id)
END
from users
cross join interval_table
left outer join log_table
on users.id = log_table.user_id
and log_table.event_date >= interval_table.start_interval
and log_table.event_date < interval_table.stop_interval
group by users.id, interval_table.id
order by interval_table.id, users.id
Check it out: http://sqlfiddle.com/#!15/1a822/21
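If you don't have an interval_table yet, here is a minimal sketch of how one could be built in Postgres from the week boundaries in the question (the names interval_table, start_interval and stop_interval are simply the ones the query above assumes; the intervals are half-open so that event_date < stop_interval matches the inclusive end dates listed):
CREATE TABLE interval_table AS
SELECT g.i                               AS id,
       date '2015-10-30' + (g.i - 1) * 7 AS start_interval,  -- first day of week g.i
       date '2015-10-30' + g.i * 7       AS stop_interval    -- first day of the following week
FROM generate_series(1, 7) AS g(i);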
Seems simple, but... I have data like so:
pid    score1    score2    score3
1      1         3         2
2      3                   1
3      4
I want to compute an average score across the three columns, using only the non-null values. Sort of like sum(score1+score2+score3)/3, but the denominator essentially needs to be the count of non-null values for the given row, so 3 for pid 1, 2 for pid 2, and 1 for pid 3.
Is there a simple thing I'm missing here?
with t(pid, score1, score2, score3) as (
values (1,1,3,2), (2,3,null,1), (3,4,null,null)
)
select
(sum(score1) + sum(score2) + sum(score3))::numeric /
(count(score1) + count(score2) + count(score3))
as avg,
avg(coalesce(score1, 0) + coalesce(score2, 0) + coalesce(score3, 0))
as avg2
from t;
avg | avg2
--------------------+--------------------
2.3333333333333333 | 4.6666666666666667
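If what you are after is the per-row average the question describes (denominator = number of non-null scores in that row) rather than a table-wide figure, a minimal sketch over the same sample data could look like this:
with t(pid, score1, score2, score3) as (
  values (1,1,3,2), (2,3,null,1), (3,4,null,null)
)
select pid,
       (coalesce(score1, 0) + coalesce(score2, 0) + coalesce(score3, 0))::numeric
       / nullif((score1 is not null)::int
              + (score2 is not null)::int
              + (score3 is not null)::int, 0) as row_avg  -- nullif guards against all-null rows
from t;
On Postgres 9.6 or later, num_nonnulls(score1, score2, score3) could replace the casted booleans in the denominator.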
Is there a way to detect subseries of zeros of length at least 3 within a time series in Postgres?
year value
--------------
1 0
2 0
3 0
4 33
5 72
6 0
7 0
8 0
9 0
10 25
11 0
12 56
13 37
So in this example I'd like to return years 1-3 and 6-9, but not year 11.
This one will do it:
WITH d(y,v) AS (VALUES
(1,0),(2,0),(3,0),(4,33),(5,72),
(6,0),(7,0),(8,0),(9,0),(10,25),
(11,0),(12,56),(13,37)
)
SELECT grp, numrange(min(y),max(y),'[]') as ys, count(*) as len
FROM (
/* group identifiers via running total */
SELECT y, v, g, sum(g) OVER (ORDER BY y) grp
FROM (
/* group boundaries */
SELECT y, v, CASE WHEN
v IS DISTINCT FROM lag(v) OVER (ORDER BY y) THEN 1
END g
FROM d) s
WHERE v=0) s
GROUP BY grp
HAVING count(*) >= 3;
I am a beginner with SQL and wanted to get more experience with it, so I decided to write a procedure to generate X random lotto picks. The lottery here in my area lets you pick 5 numbers from 1-47 and 1 "mega" number from 1-27. The trick is that the "mega" number may repeat one of the 5 numbers picked before it, e.g. 1, 2, 3, 4, 5, mega 1.
I created the following procedure to generate 10 million lottery picks, and it took 12 hours and 57 minutes to finish, while my friends tested the same thing in Java and it took seconds. I was wondering if there are any improvements I can make to the code, or any mistakes I've made? I'm new at this and trying to learn better approaches, so all comments are welcome.
USE lotto
DECLARE
    @counter INT,
    @counter1 INT,
    @pm SMALLINT,
    @i1 SMALLINT,
    @i2 SMALLINT,
    @i3 SMALLINT,
    @i4 SMALLINT,
    @i5 SMALLINT,
    @sort INT
SET @counter1 = 0
TRUNCATE TABLE picks
-- one outer iteration per generated pick
WHILE @counter1 < 10000000
BEGIN
    TRUNCATE TABLE sort
    SET @counter = 1
    -- draw 5 numbers; start over whenever a duplicate shows up
    WHILE @counter < 6
    BEGIN
        INSERT INTO sort (pick)
        SELECT CAST(((47+ 1) - 0) * RAND() + 1 AS TINYINT)
        IF (SELECT COUNT(DISTINCT pick) FROM sort) < @counter
        BEGIN
            TRUNCATE TABLE sort
            SET @counter = 1
        END
        ELSE IF (SELECT COUNT(DISTINCT pick) FROM sort) = @counter
        BEGIN
            SET @counter = @counter + 1
        END
    END
    -- number the 5 picks in ascending order (0..4)
    SET @sort = 0
    WHILE @sort < 5
    BEGIN
        UPDATE sort
        SET sort = @sort
        WHERE pick = (SELECT MIN(pick) FROM sort WHERE sort IS NULL)
        SET @sort = @sort + 1
    END
    SET @i1 = (SELECT pick FROM sort WHERE sort = 0)
    SET @i2 = (SELECT pick FROM sort WHERE sort = 1)
    SET @i3 = (SELECT pick FROM sort WHERE sort = 2)
    SET @i4 = (SELECT pick FROM sort WHERE sort = 3)
    SET @i5 = (SELECT pick FROM sort WHERE sort = 4)
    -- the mega number is drawn independently of the 5 main numbers
    SET @pm = (CAST(((27+ 1) - 0) * RAND() + 1 AS TINYINT))
    INSERT INTO picks (First, Second, Third, Fourth, Fifth, Mega, Sequence)
    VALUES (@i1, @i2, @i3, @i4, @i5, @pm, @counter1)
    SET @counter1 = @counter1 + 1
END
I generated 10000 rows in 0 seconds. I did it another way; hope this helps you:
;WITH Nbrs ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 10000 )
SELECT
(ABS(CHECKSUM(NewId())) % 47 + 1) AS First,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Second,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Third,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fourth,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fifth,
(ABS(CHECKSUM(NewId())) % 27 + 1) AS Mega,
Nbrs.n AS Sequence
FROM
Nbrs
OPTION ( MAXRECURSION 0 )
10000 rows 0 sec
100000 rows 1 sec
1000000 rows 13 sec
10000000 rows 02 min 21 sec
Or with cross joins:
WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1),
E02(N) AS (SELECT 1 FROM E00 a, E00 b),
E04(N) AS (SELECT 1 FROM E02 a, E02 b),
E08(N) AS (SELECT 1 FROM E04 a, E04 b),
E16(N) AS (SELECT 1 FROM E08 a, E08 b),
E32(N) AS (SELECT 1 FROM E16 a, E16 b),
Nbrs(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E32)
SELECT
(ABS(CHECKSUM(NewId())) % 47 + 1) AS First,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Second,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Third,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fourth,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Fifth,
(ABS(CHECKSUM(NewId())) % 27 + 1) AS Mega,
Nbrs.n AS Sequence
FROM Nbrs
WHERE N <= 10000000;
10000 rows 0 sec
100000 rows 1 sec
1000000 rows 14 sec
10000000 rows 03 min 29 sec
I should also mention that the reason I am using
(ABS(CHECKSUM(NewId())) % 47 + 1)
is that it returns a random number per row. The solution with
CAST(((47+ 1) - 0) * RAND() + 1 AS TINYINT)
returns the same random number for every row if you select them in one go. To test this, run this example:
;WITH Nbrs ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 5 )
SELECT
CAST(((47+ 1) - 0) * RAND() + 1 AS TINYINT) AS Random,
(ABS(CHECKSUM(NewId())) % 47 + 1) AS RadomCheckSum,
Nbrs.n AS Sequence
FROM Nbrs
OK, so I did see your comment and I have a solution for that as well, if you really want the numbers ordered. The complexity of the algorithm goes up, which also means the running time increases, but I still think it is doable, just not in the same neat way.
--A table variable to hold just the sequence numbers for the random picks
DECLARE @tbl TABLE(value int)
--The same recursive CTE as before, but only generating as many rows as random picks are needed (5)
;WITH Nbrs ( n ) AS (
SELECT 1 UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 5 )
INSERT INTO @tbl
(
value
)
SELECT
Nbrs.n AS Sequence
FROM Nbrs
;WITH Nbrs ( n ) AS (
SELECT CAST(1 as BIGINT) UNION ALL
SELECT 1 + n FROM Nbrs WHERE n < 100000 )
SELECT
tblOrderRandomNumbers.[1] AS First,
tblOrderRandomNumbers.[2] AS Second,
tblOrderRandomNumbers.[3] AS Third,
tblOrderRandomNumbers.[4] AS Fourth,
tblOrderRandomNumbers.[5] AS Fifth,
(ABS(CHECKSUM(NewId())) % 27 + 1) AS Mega,
Nbrs.n AS Sequence
FROM
Nbrs
--This cross join joins with the table variable declared above
CROSS JOIN
(
SELECT
[1], [2], [3], [4], [5]
FROM
(
SELECT
Random,
ROW_NUMBER() OVER(ORDER BY tblRandom.Random ASC) AS RowNumber
FROM
(
SELECT
(ABS(CHECKSUM(NewId())) % 47 + 1) AS Random
FROM
@tbl AS tblNumbers
) AS tblRandom
) AS tblSortedRandom
--A PIVOT turns the rows into columns, using the row index ordered by the random number
PIVOT
(
AVG(Random)
FOR RowNumber IN ([1], [2], [3], [4],[5])
) as pivottable
) AS tblOrderRandomNumbers
OPTION ( MAXRECURSION 0 )
But I still manage to do it in a reasonably short time:
10000 Rows : 0 sec
100000 Rows : 4 sec
1000000 Rows : 43 sec
10000000 Rows : 7 min 9 sec
I hope this helps.
I wrote this script just out of curiosity. It should do better than your script, but I can't tell for sure.
Beware that I use a declared table variable; if you use a real table, performance should be better when generating larger numbers of rows.
I generated 10000 rows in about 13 seconds, which works out to about 3.5 hours to generate 10 000 000 rows. Still far worse than the Java case you described.
set nocount on
go
declare @i int = 1
declare @t table(nr1 int, nr2 int, nr3 int, nr4 int, nr5 int, mega int, seq int)
while @i <= 10000
begin
;with numbers(nr)
as
(
select 1
union all
select nr+1
from numbers
where nr < 47
)
,mega(nr)
as
(
select 1
union all
select nr+1
from mega
where nr < 27
)
,selectednumbers(nr)
as
(
select top 5 nr
from numbers
order by newid()
)
,selectedmega(mega)
as
(
select top 1 nr
from mega
order by newid()
)
,tmp
as
(
select *
,row_number() over(order by nr) as rownr
from selectednumbers
)
insert into @t
select max(nr1) as nr1
,max(nr2) as nr2
,max(nr3) as nr3
,max(nr4) as nr4
,max(nr5) as nr5
,(select mega from selectedmega) as mega
,@i as seq
from (
select case when rownr = 1 then nr else 0 end as nr1
,case when rownr = 2 then nr else 0 end as nr2
,case when rownr = 3 then nr else 0 end as nr3
,case when rownr = 4 then nr else 0 end as nr4
,case when rownr = 5 then nr else 0 end as nr5
from tmp
) x
set @i = @i + 1
end
select * from @t