I'm trying to count the number of transactions made between 8AM and 8PM (for example, on May 20 2013) and compare it to the number made between 8PM and 8AM of the following day.
The field that holds those timestamps is called CREATED_DT.
The field for the items being sold is called ITEM_ID.
Can anyone please help? I couldn't find it on the forum.
Thanks,
Or.
select count(ITEM_ID) from your_table
where CREATED_DT between '2013-05-20 08:00:00' and '2013-05-20 20:00:00'
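For the overnight window the question asks about (8PM on May 20 through 8AM on May 21), a sketch along the same lines, using the same your_table placeholder:
select count(ITEM_ID) from your_table
where CREATED_DT between '2013-05-20 20:00:00' and '2013-05-21 08:00:00'
Run both counts and compare them side by side.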
Something like this:
SELECT
COUNT(1)
FROM
dbo.YOUR_TABLE
WHERE
CAST(CREATED_DT AS DATE) = '2013-05-20'
AND
CAST(CREATED_DT AS TIME(0)) BETWEEN '08:00:00' AND '20:00:00'
MSDN: TSQL time
Since you are looking for a relative count:
select day2-day1 from
(
select
sum(case when cast(created_dt as date) = '2013-05-20' then 1 else 0 end) as day1
, sum(case when cast(created_dt as date) = '2013-05-21' then 1 else 0 end) as day2
from your_table
where CREATED_DT between '2013-05-20 08:00:00' and '2013-05-20 20:00:00'
or CREATED_DT between '2013-05-21 08:00:00' and '2013-05-21 20:00:00'
)t
This counts the transactions between 8AM and 8PM:
select count(ITEM_ID) from your_table
where CAST(CREATED_DT AS TIME) between
CAST('08:00:00' AS TIME) and
CAST('20:00:00' AS TIME)
I want to create a pivot table using postgresql. I could accomplish this using SQLite, and I thought the logic would be similar, but it doesn't seem to be the case.
Here's the sample table:
create table df(
campaign varchar(50),
date date not null,
revenue integer not null
);
insert into df(campaign,date,revenue) values('A','2019-01-01',10000);
insert into df(campaign,date,revenue) values('B','2019-01-02',7000);
insert into df(campaign,date,revenue) values('A','2018-01-01',5000);
insert into df(campaign,date,revenue) values('B','2018-01-01',3500);
Here's my SQLite code to transform the tidy data into a pivot table:
select
sum(case when strftime('%Y', date) = '2019' then revenue else 0 end) as '2019',
sum(case when strftime('%Y', date) = '2018' then revenue else 0 end) as '2018',
campaign
from df
group by campaign
the result would be like this:
2018 2019 campaign
5000 10000 A
3500 7000 B
I tried writing similar code using Postgres; I will just use the year 2019:
select
sum(case when extract('year' from date) = '2019' then revenue else 0 end) as '2019',
campaign
from df
group by campaign
Somehow the code doesn't work, and I don't understand what's wrong.
Query Error: error: syntax error at or near "'2019'"
what do I miss here?
db-fiddle link:
https://www.db-fiddle.com/f/f1WjMAAxwSPRvB8BrxECN7/0
The function strftime() is used to extract various parts of a date in SQLite, but is not supported by Postgresql.
Use date_part():
select campaign,
sum(case when date_part('year', date) = '2019' then revenue else 0 end) as "2019",
sum(case when date_part('year', date) = '2018' then revenue else 0 end) as "2018"
from df
group by campaign
Or use Postgresql's FILTER clause:
select campaign,
sum(revenue) filter (where date_part('year', date) = '2019') as "2019",
sum(revenue) filter (where date_part('year', date) = '2018') as "2018"
from df
group by campaign
Also, don't use single quotes for table/column names.
SQLite allows it but Postgresql does not.
Postgresql accepts only double quotes for identifiers, which is the SQL standard.
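For example, with the df table from the question, identifiers take double quotes and string values take single quotes:
-- "Total Revenue" is a quoted column alias; 'A' is a string literal
select campaign, sum(revenue) as "Total Revenue"
from df
where campaign = 'A'
group by campaign;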
See the demo.
I need to create a list of objects in PL/pgSQL (Postgres) and return it as a table to the user.
Here is the scenario. I have two tables:
create table ProcessDetails(
processName varchar,
processstartdate timestamp,
processenddate timestamp);
create table processSLA(
processName varchar,
sla numeric);
Now I need to loop over all the records in the processDetails table and check, for each process type, which records have breached the SLA, which are within the SLA, and which are at more than 80% of the SLA.
I need help understanding how to loop over the records and build a collection that holds the required details for each process type.
Sample data from the processdetails table:
ProcessName processstartdate processenddate
-----------------------------------------------------
"Create" "2018-12-24 13:11:05.122694" null
"Delete" "2018-12-24 12:12:24.269266" null
"Delete" "2018-12-23 13:12:31.89164" null
"Create" "2018-12-22 13:12:37.505486" null
processSLA
ProcessName sla(in hrs)
---------------------------------
Create 1
Delete 10
And the output will look something like this
ProcessName WithinSLA(Count) BreachedSLA(Count) Exceeded80%SLA(Count)
---------------------------------------------------------------------
Create 1 1 3
Delete 1 2 1
For each SLA, you can look up all corresponding process details with a join. The link between the two joined tables is specified in a join condition. For your example, using (processName) would work.
To find processes that have exceeded the SLA, say that the allowed end date is smaller than the actual end date:
select processName
, count(case when det.processstartdate + interval '1 hour' * sla.sla >=
coalesce(det.processenddate, now()) then 1 end) as InSLA
, count(case when det.processstartdate + interval '1 hour' * sla.sla <
coalesce(det.processenddate, now()) then 1 end) as BreachedSLA
, count(case when det.processstartdate + interval '1 hour' * 0.8 * sla.sla <
coalesce(det.processenddate, now()) then 1 end) as "80PercentSLA"
from processSLA sla
left join
ProcessDetails det
using (processName)
group by
processName
You can join both tables and use conditional aggregation based on the difference between the timestamps.
Something like this:
SELECT pd.processname,
       count(CASE
WHEN extract(EPOCH FROM pd.processenddate - pd.processstartdate) / 3600 < ps.sla * .8 THEN
1
END) "less than 80%",
count(CASE
WHEN extract(EPOCH FROM pd.processenddate - pd.processstartdate) / 3600 >= ps.sla * .8
AND extract(EPOCH FROM pd.processenddate - pd.processstartdate) / 3600 <= ps.sla THEN
1
END) "80% to 100%",
count(CASE
WHEN extract(EPOCH FROM pd.processenddate - pd.processstartdate) / 3600 > ps.sla THEN
1
END) "more than 100%"
FROM processdetails pd
INNER JOIN processsla ps
ON ps.processname = pd.processname
GROUP BY pd.processname;
Good Day Everyone!
Well, I have this kind of code and it's kind of ugly.
A friend of mine told me I can use CASE statements here, but I don't know how to implement that. The code is long, so if you could help me optimize it I would appreciate it greatly!
PS: please be gentle, I'm new to T-SQL :)
Thank you!
SELECT
SUM(CYJEWELRY) 'CY_Jewelry'
,SUM(CYAPPLICANCE) 'CY_Appliance'
,SUM(CYCELLPHONE) 'CY_Cellphone'
,SUM(PYJEWELRY) 'PY_Jewelry'
,SUM(PYAPPLIANCE) 'PY_Appliance'
,SUM(PYCELLPHONE) 'PY_Cellphone'
FROM
(
---TOTAL FOR A, FORMAT 0,0,0,0,0,0
--------------CURRENT YEAR JEWELRY
SELECT COUNT (*) AS CYJEWELRY,0 AS CYAPPLICANCE,0 AS CYCELLPHONE,0 AS PYJEWELRY,0 AS PYAPPLIANCE,0 AS PYCELLPHONE
FROM #TEMPTABLE1
WHERE (fld_StorageGroupID >= 3 and fld_StorageGroupID <= 14)
UNION
-----------CURRENT YEAR APPLIANCE
SELECT 0,COUNT(*),0,0,0,0
FROM #TEMPTABLE1
WHERE fld_StorageGroupID = 1
UNION
------------CURRENT YEAR CELLPHONE
SELECT 0,0,COUNT(*),0,0,0
FROM #TEMPTABLE1
WHERE fld_StorageGroupID = 2
UNION
---------------LAST YEAR JEWELRY
SELECT 0,0,0,COUNT(*),0,0
FROM #TEMPTABLE2
WHERE (fld_StorageGroupID >= 3 and fld_StorageGroupID <= 14)
UNION
-----------------------LAST YEAR APPLIANCE
SELECT 0,0,0,0,COUNT (*),0
FROM #TEMPTABLE2
WHERE fld_StorageGroupID = 1
UNION
-------------------------LAST YEAR CELLPHONE
SELECT 0,0,0,0,0,COUNT(*)
FROM #TEMPTABLE2
WHERE fld_StorageGroupID = 2
)A
Assuming your data is a bit like this SQL Fiddle example, try this for the subquery, using SUM() and CASE.
SELECT SUM(CASE WHEN fld_StorageGroupID >= 3 and fld_StorageGroupID <= 14 THEN 1 ELSE 0 END) Col1And4,
SUM(CASE WHEN fld_StorageGroupID = 1 THEN 1 ELSE 0 END) Col2And5,
SUM(CASE WHEN fld_StorageGroupID = 2 THEN 1 ELSE 0 END) Col3And6
FROM #TEMPTABLE1
GROUP BY fld_StorageGroupID
Since you are applying the same filters for the last 3 columns, I have done only the first 3 columns here.
EDIT:
I think this is better than the above (note: there is no need to use SUM() in the main query).
Fiddle Example with data
select col1_4 CY_Jewelry,
col2_5 CY_Appliance,
col3_6 CY_Cellphone,
col1_4 PY_Jewelry,
col2_5 PY_Appliance,
col3_6 PY_Cellphone
from (
select sum(case when id>= 3 and id <= 14 then 1 else 0 end) col1_4,
sum(case when id = 1 then 1 else 0 end) col2_5,
sum(case when id = 2 then 1 else 0 end) col3_6
from t
--group by id
) X
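To cover all six columns in one pass, here is a sketch (untested, assuming the two temp tables from the question) that tags each row with its source before aggregating:
SELECT
    SUM(CASE WHEN src = 'CY' AND fld_StorageGroupID BETWEEN 3 AND 14 THEN 1 ELSE 0 END) AS CY_Jewelry,
    SUM(CASE WHEN src = 'CY' AND fld_StorageGroupID = 1 THEN 1 ELSE 0 END) AS CY_Appliance,
    SUM(CASE WHEN src = 'CY' AND fld_StorageGroupID = 2 THEN 1 ELSE 0 END) AS CY_Cellphone,
    SUM(CASE WHEN src = 'PY' AND fld_StorageGroupID BETWEEN 3 AND 14 THEN 1 ELSE 0 END) AS PY_Jewelry,
    SUM(CASE WHEN src = 'PY' AND fld_StorageGroupID = 1 THEN 1 ELSE 0 END) AS PY_Appliance,
    SUM(CASE WHEN src = 'PY' AND fld_StorageGroupID = 2 THEN 1 ELSE 0 END) AS PY_Cellphone
FROM (
    -- tag current-year and prior-year rows so one conditional aggregation covers both
    SELECT 'CY' AS src, fld_StorageGroupID FROM #TEMPTABLE1
    UNION ALL
    SELECT 'PY', fld_StorageGroupID FROM #TEMPTABLE2
) A;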
Can you show how this can be done in T-SQL?
sample records
accountnumber trandate
-------------------------
1000 02-11-2010
1000 02-12-2010
1000 02-13-2010
2000 02-10-2010
2000 02-15-2010
How do I compute the # of days between transactions for each accountnumber?
like this
accountnumber trandate # of days
----------------------------------------
1000 02-11-2010 0
1000 02-12-2010 1
1000 02-13-2010 1
2000 02-10-2010 0
2000 02-15-2010 5
Thanks a lot!
SELECT accountnumber,
       trandate,
       COALESCE(DATEDIFF(DAY, (SELECT TOP 1 trandate
                               FROM mytable b
                               WHERE b.accountnumber = a.accountnumber
                                 AND b.trandate < a.trandate
                               ORDER BY trandate DESC), a.trandate), 0) AS [# of days]
FROM mytable a
ORDER BY accountnumber, trandate
You can use BETWEEN:
select * from table1 where trandate between 'date1' and 'date2'
Hope this helps.
Select A.AccountNo, A.TranDate, B.TranDate as PreviousTranDate, DateDiff(day, B.TranDate, A.TranDate) as NoOfDays
from
(Select AccountNo, TranDate, Row_Number() over (Partition by AccountNo order by TranDate) as RNO from YourTable) as A,
(Select AccountNo, TranDate, Row_Number() over (Partition by AccountNo order by TranDate) as RNO from YourTable) as B
Where A.AccountNo = B.AccountNo and A.RNO - 1 = B.RNO
You can also use a CTE to improve performance.
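On SQL Server 2012 or later, LAG() avoids the self-join entirely; a sketch, assuming the mytable name used above:
SELECT accountnumber,
       trandate,
       -- days since the previous transaction for the same account; 0 for the first one
       DATEDIFF(DAY,
                COALESCE(LAG(trandate) OVER (PARTITION BY accountnumber ORDER BY trandate),
                         trandate),
                trandate) AS [# of days]
FROM mytable
ORDER BY accountnumber, trandate;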
I have a table with a column REGDATE, a registration date (YYYY-MM-DD HH:MM:SS). I would like to show a histogram (ExtJS) in order to understand in which periods of the year users sign up. I would like to do this for the past twelve months with respect to the current date and to group dates by week.
Any hints?
FWIW in PostgreSQL, Karaszi has an answer that works, but there is a faster query:
SELECT date_trunc('week', REGDATE) AS "Week" , count(*) AS "No. of users"
FROM <<TABLE>>
WHERE REGDATE > now() - interval '12 months'
GROUP BY 1
ORDER BY 1;
I based this on the work of Ben Goodacre.
in MySQL:
SELECT COUNT(*), DATE_FORMAT(regdate, "%X%V") AS regweek FROM table GROUP BY regweek;
or
SELECT COUNT(*), YEARWEEK(regdate, 2) as regweek FROM table GROUP BY regweek;
in PostgreSQL:
SELECT COUNT(*), EXTRACT(YEAR FROM regdate)::text || lpad(EXTRACT(WEEK FROM regdate)::text, 2, '0') AS regweek FROM table GROUP BY regweek;
Maybe this?
select to_char(REGDATE,'WW') "Week number",
count(*) "number of signups",
from YOUR_TABLE
where REGDATE > current_date-365
group by to_char(REGDATE,'WW')
order by to_char(REGDATE,'WW')
Hint: (SQL)
SELECT CONVERT (VARCHAR(7), REGDATE, 120) AS [RegistrationMonth]
FROM ...
GROUP BY CONVERT (VARCHAR(7), REGDATE, 120)
ORDER BY CONVERT (VARCHAR(7), REGDATE, 120)
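If you need weekly buckets in T-SQL instead of monthly ones, a sketch along the same lines (YOUR_TABLE is a placeholder; the DATEADD/DATEDIFF idiom truncates to the start of the week on SQL Server):
SELECT DATEADD(WEEK, DATEDIFF(WEEK, 0, REGDATE), 0) AS WeekStart,
       COUNT(*) AS Registrations
FROM YOUR_TABLE
WHERE REGDATE > DATEADD(MONTH, -12, GETDATE())
GROUP BY DATEADD(WEEK, DATEDIFF(WEEK, 0, REGDATE), 0)
ORDER BY WeekStart;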