SELECT REPLACE('245 289 252 722 265,420 (22,791) (23,482) (24,662)', '^[0-9]', ',')
Raw result:  245 289 252 722 265,420 (22,791) (23,482) (24,662)
Need result: 245,289 252,722 265,420 (22,791) (23,482) (24,662)
I assume this question is being asked on the premise that the data always arrives in this particular, consistent format (otherwise the formatting is pointless). Note that T-SQL's REPLACE does literal matching only, so the pattern '^[0-9]' is searched for verbatim and never found, which is why the input comes back unchanged. Below is an implementation providing the desired result:
SELECT STUFF(
    (
        SELECT CASE
                   WHEN ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) % 2 = 0
                        AND LEFT([value], 1) <> '(' AND RIGHT([value], 1) <> ')'
                       THEN CONCAT(',', [value])
                   ELSE CONCAT(' ', [value])
               END
        FROM STRING_SPLIT('245 289 252 722 265,420 (22,791) (23,482) (24,662)', ' ')
        FOR XML PATH('')
    ),
    1, 1, ''
) AS [value]
Input: 245 289 252 722 265,420 (22,791) (23,482) (24,662)
Output: 245,289 252,722 265,420 (22,791) (23,482) (24,662)
Note: the database must be at compatibility level 130 or higher in order to use STRING_SPLIT.
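For example, assuming the database is named MyDb (adjust the name to yours):
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 130;
Also be aware that STRING_SPLIT does not guarantee the order of its output rows, so the ORDER BY (SELECT NULL) trick above relies on the rows happening to come back in input order; on SQL Server 2022 and later you can pass the optional enable_ordinal argument to STRING_SPLIT and order by the resulting ordinal column explicitly.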
Just replace each space with a comma; then, to restore the spaces between the parenthesized values, apply a second REPLACE:
SELECT REPLACE(REPLACE('245 289 252 722 265,420 (22,791) (23,482) (24,662)', ' ', ','), '),(', ') (')
Result:
245,289,252,722,265,420,(22,791) (23,482) (24,662)
Note that this does not exactly match the desired result: every space outside the parenthesized values becomes a comma, including the one before (22,791).
I have a dataset like this in my schedules table, and I have to check whether the consecutive rows (in order) are separated by exactly 1 hour:
ID   Time1 (varchar)  Time2 (varchar)  Date
129  "08:30:00"       "15:45:00"       "2022-06-22"
139  "08:30:00"       "16:45:00"       "2022-06-22"
149  "08:30:00"       "17:45:00"       "2022-06-22"
159  "08:30:00"       "18:45:00"       "2022-06-22"
169  "08:30:00"       "19:45:00"       "2022-06-22"
179  "08:30:00"       "20:45:00"       "2022-06-22"
189  "08:30:00"       "21:30:00"       "2022-06-22"  // invalid case
199  "08:30:00"       "22:45:00"       "2022-06-22"
E.g. a valid case would be rows 149 and 139: 17:45:00 - 16:45:00 = exactly 1 hour.
But rows 189 and 179 are an invalid case: 21:30:00 - 20:45:00 is only 45 minutes, not 1 hour.
So I basically need a query like this:
Select count(*) from myScheduleTable where consecutiveBlocksTimeDifference = 1 hour;
Is this possible to achieve in Postgres?
There are more cases to pin down here, e.g. if row 1 and row 2 are not exactly 1 hour apart, should only row 2 be invalid, or both? That said, below is one approach, which you can tailor to the specific use case.
select *,
       case
           when id > (select min(id) from schedule) then
               case
                   when to_timestamp(concat(to_char(date1, 'yyyy-mm-dd'), ' ', time2),
                                     'yyyy-mm-dd hh24:mi:ss')
                        - to_timestamp(concat(to_char(coalesce(lag(date1) over (order by id), date1), 'yyyy-mm-dd'),
                                              ' ',
                                              coalesce(lag(time2) over (order by id), time2)),
                                       'yyyy-mm-dd hh24:mi:ss') = interval '1 hour'
                       then 'valid'
                   else 'invalid'
               end
           else 'valid'
       end as status
from schedule;
DB fiddle here.
Refer here for count(*) query.
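The count(*) from the question can then be obtained by wrapping the gap computation in a subquery. A minimal sketch, assuming date1 is a DATE column and time2 casts cleanly to TIME (both assumptions based on the columns used above):
select count(*)
from (
    select (date1 + time2::time)
           - lag(date1 + time2::time) over (order by id) as gap
    from schedule
) t
where t.gap = interval '1 hour'
   or t.gap is null;  -- the first row has no predecessor and counts as valid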
Id          SleepDay   TotalMinutesAsleep  TotalTimeInBed
8378563200  4/20/2016  381                 409
8378563200  4/21/2016  396                 417
8378563200  4/22/2016  441                 469
8378563200  4/23/2016  565                 591
8378563200  4/24/2016  458                 492
8378563200  4/25/2016  388                 402   ---> this is the duplicate
8378563200  4/25/2016  388                 402
8378563200  4/26/2016  550                 584
8378563200  4/27/2016  531                 600
This is part of my table. How can I delete the duplicate row? I used a CTE, but it deleted both records for id 8378563200 on 4/25/2016.
Use:
DELETE
FROM table1
WHERE ctid IN (SELECT ctid
               FROM (SELECT ctid,
                            ROW_NUMBER() OVER (
                                PARTITION BY Id, SleepDay, TotalMinutesAsleep, TotalTimeInBed) AS rn
                     FROM table1) t
               WHERE rn > 1);
Replace table1 with your own table name.
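Before running the DELETE, you can preview exactly which physical rows it will remove by running the same window function as a plain SELECT (a sketch using the same table name):
SELECT *
FROM (SELECT ctid,
             ROW_NUMBER() OVER (
                 PARTITION BY Id, SleepDay, TotalMinutesAsleep, TotalTimeInBed) AS rn,
             *
      FROM table1) t
WHERE rn > 1;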
Without column(s) to identify a unique row?
Then you could use ctid.
ctid
The physical location of the row version within its table. Note that although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier. A primary key should be used to identify logical rows.
For example:
delete
from SleepLogs log1
using SleepLogs log2
where log2.Id = log1.Id
and log2.SleepDay = log1.SleepDay
and log2.TotalMinutesAsleep = log1.TotalMinutesAsleep
and log2.TotalTimeInBed = log1.TotalTimeInBed
and log2.ctid < log1.ctid;
1 rows affected
select * from SleepLogs
id          sleepday    totalminutesasleep  totaltimeinbed
8378563200  2016-04-20  381                 409
8378563200  2016-04-21  396                 417
8378563200  2016-04-22  441                 469
8378563200  2016-04-23  565                 591
8378563200  2016-04-24  458                 492
8378563200  2016-04-25  388                 402
8378563200  2016-04-26  550                 584
8378563200  2016-04-27  531                 600
Test on db<>fiddle here
Could someone help me with a CTE expression? I have a table:
old_card  new_card  dt
111       555       2020-01-09
222       223       2020-02-10
333       334       2020-03-11
444       222       2020-04-12
555       666       2020-05-12
666       777       2020-06-13
777       888       2020-07-14
888       0         2020-08-15
999       333       2020-09-16
223       111       2020-10-16
I need to get all the changes from old_card to new_card, starting at old_card number 111 and ending at new_card number 0. So I should get these 5 records from the table, having only new_card = 0 as the input parameter:
old_card  new_card  dt
111       555       2020-01-09
555       666       2020-05-12
666       777       2020-06-13
777       888       2020-07-14
888       0         2020-08-15
I tried to do it using a recursive CTE, but I get too many records from the source table and can't understand why. Here is my CTE:
;with cte as(
select
old_card,
new_card,
dt
from
cards_transfer
where
new_card = 0
union all
select
t1.old_card,
t1.new_card,
t1.dt
from
cards_transfer t1
inner join
cte on cte.old_card = t1.new_card)
But I get 8 rows instead of 5. Can someone please tell me what I did wrong?
You said you wanted the chain only as far back as card 111, so you need to add that "stop" condition:
where cte.old_card <> 111
Without it, once the recursion reaches the (111, 555) row it keeps walking backwards through (223, 111), (222, 223), and (444, 222), which is exactly the 3 extra rows that take you from 5 to 8.
;with cte as(
select
old_card,
new_card,
dt
from
cards_transfer
where
new_card = 0
union all
select
t1.old_card,
t1.new_card,
t1.dt
from
cards_transfer t1
inner join
cte on cte.old_card = t1.new_card
where cte.old_card <> 111
)
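Note the CTE as posted is not a complete statement; to execute it, finish with a query over the CTE:
select * from cte;
With the stop condition in place, this returns exactly the five rows shown in the desired output.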
Say the maximum lim_id value in my database is LIM50177:
lim_id
LIM50172
LIM50173
LIM50174
LIM50175
LIM50176
LIM50177
How can I loop through another table and, for every base_id, bulk-replace the temp_id with a new lim_id?
temp_id  base_id  desc
1008     720      GP
1009     721      GT
1010     722      GA
1021     723      P
1021     724      G
1021     725      X
In other words, the data should be updated as follows:
temp_id   base_id  desc
LIM50178  720      GP
LIM50179  721      GT
LIM50180  722      GA
LIM50181  723      P
LIM50182  724      G
LIM50183  725      X
Use a sequence every time you generate the lim_id values so you get unique values.
(Don't derive the next value from the maximum already stored in the table: if the table is updated in two sessions at the same time, neither session sees the other's uncommitted rows, so both can compute the same maximum and generate identical "next" values. Instead, always take the next value from the sequence.)
Oracle Setup:
CREATE SEQUENCE lim_id_seq START WITH 50178;
CREATE TABLE temp_data ( temp_id, base_id, "desc" ) AS
SELECT CAST( 1008 AS VARCHAR2(10) ), 720, 'GP' FROM DUAL UNION ALL
SELECT CAST( 1009 AS VARCHAR2(10) ), 721, 'GT' FROM DUAL UNION ALL
SELECT CAST( 1010 AS VARCHAR2(10) ), 722, 'GA' FROM DUAL UNION ALL
SELECT CAST( 1021 AS VARCHAR2(10) ), 723, 'P' FROM DUAL UNION ALL
SELECT CAST( 1021 AS VARCHAR2(10) ), 724, 'G' FROM DUAL UNION ALL
SELECT CAST( 1021 AS VARCHAR2(10) ), 725, 'X' FROM DUAL
Update using the Sequence:
UPDATE temp_data
SET temp_id = 'LIM' || lim_id_seq.NEXTVAL;
Result:
SELECT * FROM temp_data;
TEMP_ID | BASE_ID | desc
:------- | ------: | :---
LIM50178 | 720 | GP
LIM50179 | 721 | GT
LIM50180 | 722 | GA
LIM50181 | 723 | P
LIM50182 | 724 | G
LIM50183 | 725 | X
db<>fiddle here
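If the sequence must start from whatever the current maximum lim_id happens to be, you can derive the start value once at creation time. A sketch, assuming the existing ids always carry the LIM prefix and live in a table here called lim_master (a hypothetical name; adjust to your schema):
-- one-time bootstrap: read the current maximum and create the sequence from it
DECLARE
    v_start NUMBER;
BEGIN
    SELECT MAX(TO_NUMBER(SUBSTR(lim_id, 4))) + 1
    INTO   v_start
    FROM   lim_master;  -- hypothetical table holding the lim_id values
    EXECUTE IMMEDIATE 'CREATE SEQUENCE lim_id_seq START WITH ' || v_start;
END;
/
After that, every session takes its next value from lim_id_seq.NEXTVAL, as in the UPDATE above.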
I just found myself writing the code below, which works.
Interesting, but is it necessarily the best method?
The syntax allows the TRY_CAST to be performed only once.
Note "Atextfield" can contain both valid and invalid numbers.
SELECT *
FROM call
WHERE EXISTS ( SELECT 1
               FROM ( VALUES ( TRY_CAST(call.[Atextfield] AS int) ) ) AS Table1(num)
               WHERE (Table1.num BETWEEN 124 AND 140)
                  OR (Table1.num BETWEEN 143 AND 146)
                  OR (Table1.num BETWEEN 148 AND 149)
                  OR (Table1.num BETWEEN 160 AND 169)
                  OR (Table1.num BETWEEN 181 AND 189)
             );
2. Could this be re-written as follows?
SELECT *
FROM [call]
WHERE TRY_CAST([call].AtextField AS TINYINT) BETWEEN 124 AND 189
AND TRY_CAST([call].AtextField AS TINYINT) NOT IN (141,142,147)
AND TRY_CAST([call].AtextField AS TINYINT) NOT BETWEEN 150 AND 159
AND TRY_CAST([call].AtextField AS TINYINT) NOT BETWEEN 170 AND 180
Note I'm new to CASE in T-SQL...
2A. Is the TRY_CAST(...) evaluated more than once?
Which of the above will be quicker?
Is there a better way to write this?
Is the first method useful when the criteria get more involved and complex?
Is this an acceptable approach?
Harvey
There's no need to use EXISTS or 1 = CASE... just put your logic in the WHERE clause directly. I'd probably do something like this:
SELECT *
FROM [call]
WHERE TRY_CAST([call].AtextField AS TINYINT) BETWEEN 124 AND 189
AND TRY_CAST([call].AtextField AS TINYINT) NOT IN (141,142,147)
AND TRY_CAST([call].AtextField AS TINYINT) NOT BETWEEN 150 AND 159
AND TRY_CAST([call].AtextField AS TINYINT) NOT BETWEEN 170 AND 180
Cross Apply Method:
SELECT *
FROM [call]
CROSS APPLY (SELECT TRY_CAST([call].AtextField AS TINYINT)) CA(intField)
WHERE intField BETWEEN 124 AND 189
AND intField NOT IN (141,142,147)
AND intField NOT BETWEEN 150 AND 159
AND intField NOT BETWEEN 170 AND 180
My guess is that your query and mine will perform pretty similarly. If you want to check performance, run this first, then run each query and record the logical reads and times:
SET STATISTICS IO ON
SET STATISTICS TIME ON