Count number of days by ignoring times - tsql

I am trying to calculate the number of days that an event category has occurred in T-SQL in SSMS 2008. How do I write this expression?
This value is stored as a datetime, but I want to count the day portion only. For example, if my values were:
2013-01-05 19:20:00.000
2013-01-06 17:20:00.000
2013-01-06 18:20:00.000
2013-01-06 19:20:00.000
2013-01-03 16:15:00.000
2013-01-04 12:55:00.000
Then although there are 6 unique records listed above, I would want to count this as only 4, since there are 3 records on 1/6/2013. Make sense?
This is what I'm trying now that doesn't work:
select
count(s.date_value)
From
table_A s

Cast the datetime value as a date. Also if you want only unique values, use DISTINCT:
SELECT COUNT(DISTINCT CAST(date_value AS date)) FROM table_A
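A quick way to sanity-check that against the sample values above, without creating a table, is SQL Server 2008's VALUES row constructor (just a throwaway test, not part of the answer) - it should return 4:
SELECT COUNT(DISTINCT CAST(date_value AS date)) AS day_count
FROM (VALUES
        (CAST('2013-01-05 19:20:00.000' AS datetime)),
        (CAST('2013-01-06 17:20:00.000' AS datetime)),
        (CAST('2013-01-06 18:20:00.000' AS datetime)),
        (CAST('2013-01-06 19:20:00.000' AS datetime)),
        (CAST('2013-01-03 16:15:00.000' AS datetime)),
        (CAST('2013-01-04 12:55:00.000' AS datetime))
     ) AS s(date_value);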

Related

How to get a list of dates in Pervasive SQL

Our time & attendance database is a Pervasive/Actian Zen database. What I'm trying to do is create a query that just lists the next 14 days from today. I'll then cross apply this list of dates with employee records so that in effect I have a list of people/dates for the next 14 days.
I've done it with a recursive CTE on SQL Server quite easily. I could do it with a loop in SQL Server too, but I can't figure it out in Pervasive SQL, where loops can only exist within stored procedures and triggers.
Looking around, I thought that this code I found and adapted might work, but it doesn't (and further research suggests that there isn't a recursive CTE option in Pervasive at all):
WITH RECURSIVE cte_numbers(n, xDate)
AS (
SELECT
0, CURDATE() + 1
UNION ALL
SELECT
n+1,
dateAdd(day,n,xDate)
FROM
cte_numbers
WHERE n < 14
)
SELECT
xDate
FROM
cte_numbers;
I just wondered whether anyone could help me write an SQL query that gives me this list of dates, outside of a stored procedure.
When you create a table like this:
CREATE TABLE dates(d DATE PRIMARY KEY, x INTEGER);
And create a first record like this:
INSERT INTO dates VALUES ('2021-01-01',0);
Then you can use this statement, which doubles the number of records in the dates table every time it is executed (so you need to run it a couple of times).
When you run it 10 times, the table dates will have 21 October 2023 as its last date.
When you run it 12 times, the last date will be 19 March 2032.
INSERT INTO dates
SELECT
DATEADD(DAY,m.m+1,d),
x+m.m+1
from dates
cross join (select max(x) m from dates) m
order by d;
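After each run you can check how far the series has grown with an ordinary aggregate query, for example:
SELECT COUNT(*) AS row_count, MAX(d) AS last_date FROM dates;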
Of course the column x can optionally be deleted with the following statement, but after that you can no longer add records using the previous statement:
ALTER TABLE dates DROP COLUMN x;
Finally, to return the next 14 days from today:
SELECT d
FROM DATES
WHERE d BETWEEN CURDATE( ) AND DATEADD(DAY,13,CURDATE());
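From there, getting one row per employee per day is just a cross join against your employee table (the employees table and its column names here are placeholders for whatever your schema actually uses):
SELECT e.employee_id, d.d AS work_date
FROM employees e
CROSS JOIN dates d
WHERE d.d BETWEEN CURDATE() AND DATEADD(DAY, 13, CURDATE())
ORDER BY e.employee_id, d.d;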

Sum with aggregate date postgresql

I have a table that has a date column and a value column; what I need is to sum the values while showing just one date (month) column.
Ex:
I have this:
date value
2018-01-01 150
2018-01-23 140
what I need:
date sum(value)
2018-01 290
Simple solution to get sums per month:
SELECT to_char(date, 'YYYY-MM') AS mon, sum(value) AS sum_value
FROM tbl
GROUP BY 1;
For large tables it's cheaper to group on date_trunc('month', date) instead.
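A sketch of that variant - it returns a timestamp for the first of each month rather than a text value:
SELECT date_trunc('month', date) AS mon, sum(value) AS sum_value
FROM tbl
GROUP BY 1
ORDER BY 1;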
Related:
Concatenate multiple result rows of one column into one, group by another column
Group and count events per time intervals, plus running total
How to get the date and time from timestamp in PostgreSQL select query?

create a new postgres table but with averages

I'm a SQL newbie.
I have a postgres table that has a datetime column and a value column, with sub-minute entries. I want to create a new table that takes the average of each minute and saves that instead of keeping the sub-minute entries. So something like this:
1-12-07 12:29:56:00 2
1-12-07 12:29:56:16 3
1-12-07 12:29:56:34 3
1-12-07 12:29:56:58 4
1-12-07 12:30:00:00 7
to
1-12-07 12:29:00 3
1-12-07 12:30:00 #
Is there a way to do it in postgres?
The only solution I can think of is using a python script to do the trick. But that will take forever as I have a significant amount of data.
In order to do this you must group the results by your time truncated at the minutes:
insert into "NewTable"
select minute, avg(value) from (select date_trunc('minutes', date) as minute, value from "Time") as Average group by minute;
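If "NewTable" does not exist yet, you can create and fill it in one step with CREATE TABLE ... AS (same table and column names as above):
CREATE TABLE "NewTable" AS
SELECT date_trunc('minute', date) AS minute, avg(value) AS value
FROM "Time"
GROUP BY 1
ORDER BY 1;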
So this is going to be awful and inefficient, and an incomplete answer - I'd recommend writing a script (just my .02).
It should be something like this, since there is no sqlFiddle provided in the question:
select my_date, my_minute, avg(my_count)
from
(
    select my_date, extract(minute from my_time) as my_minute, my_count
    from my_table
) as per_minute
group by my_date, my_minute;

Select unique values sorted by date

I am trying to solve an interesting problem. I have a table that has, among other data, these columns (dates in this sample are shown in European format - dd/mm/yyyy):
n_place_id   dt_visit_date
(integer)    (date)
==========   =============
         1   10/02/2012
         3   11/03/2012
         4   11/05/2012
        13   14/06/2012
         3   04/10/2012
         3   03/11/2012
         5   05/09/2012
        13   18/08/2012
Basically, each place may be visited multiple times - and the dates may be in the past (completed visits) or in the future (planned visits). For the sake of simplicity, today's visits are part of planned future visits.
Now, I need to run a select on this table, which would pull unique place IDs from this table (without date) sorted in the following order:
Future visits go before past visits
Future visits take precedence in sorting over past visits for the same place
For future visits, the earliest date must take precedence in sorting for the same place
For past visits, the latest date must take precedence in sorting for the same place.
For example, for the sample data shown above, the result I need is:
5 (earliest future visit)
3 (next future visit into the future)
13 (latest past visit)
4 (previous past visit)
1 (earlier visit in the past)
Now, I can achieve the desired sorting using case when in the order by clause like so:
select
n_place_id
from
place_visit
order by
(case when dt_visit_date >= now()::date then 1 else 2 end),
(case when dt_visit_date >= now()::date then 1 else -1 end) * extract(epoch from dt_visit_date)
This sort of does what I need, but it contains repeated IDs, whereas I need unique place IDs. If I try to add distinct to the select statement, postgres complains that the order by expressions must appear in the select list - but then the distinct won't be sensible any more, as I have dates in there.
Somehow I feel that there should be a way to get the result I need in one select statement, but I can't get my head around how to do it.
If this can't be done, then, of course, I'll have to do the whole thing in the code, but I'd prefer to have this in one SQL statement.
P.S. I am not worried about the performance, because the dataset I will be sorting is not large. After the where clause will be applied, it will rarely contain more than about 10 records.
With DISTINCT ON you can easily show additional columns of the row with the resulting n_place_id:
SELECT n_place_id, dt_visit_date
FROM (
   SELECT DISTINCT ON (n_place_id) *
        , dt_visit_date < now()::date AS prio        -- future first
        , @ (now()::date - dt_visit_date) AS diff    -- closest first
   FROM place_visit
   ORDER BY n_place_id, prio, diff
   ) x
ORDER BY prio, diff;
Effectively I pick the row with the earliest future date (including "today") per n_place_id - or latest date in the past, failing that.
Then the resulting unique rows are sorted by the same criteria.
FALSE sorts before TRUE.
The "absolute value" operator @ helps to sort "closest first".
More on the Postgres-specific DISTINCT ON in this related answer.
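Two standalone one-liners, if you want to see those sorting behaviours in isolation (purely illustrative, not part of the query):
SELECT b FROM (VALUES (true), (false)) t(b) ORDER BY b;   -- false comes out first
SELECT @ (current_date - date '2012-09-05') AS abs_days;  -- @ strips the sign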
Result:
 n_place_id | dt_visit_date
------------+---------------
          5 | 2012-09-05
          3 | 2012-10-04
         13 | 2012-08-18
          4 | 2012-05-11
          1 | 2012-02-10
Try this
select n_place_id
from
(
    select *,
        extract(epoch from (dt_visit_date - now())) as seconds,
        1 - sign(extract(epoch from (dt_visit_date - now()))) as futurepast   -- 0 = future, 2 = past
    from place_visit
) v
group by n_place_id
order by min(futurepast), min(abs(seconds));

distinct key word only for one column

I'm using postgresql as my database, I'm stuck with getting desired results with a query,
what I have in my table is something like following,
nid date_start date_end
1 20 25
1 20 25
2 23 26
2 23 26
what I want is following
nid date_start date_end
1 20 25
2 23 26
for that I used SELECT DISTINCT nid, date_start, date_end FROM table_1, but this returns duplicate entries. How can I get distinct nids with their corresponding date_start and date_end?
can anyone help me with this?
Thanks a lot!
Based on your sample data and sample output, your query should work fine. I'll assume your sample input/output is not accurate.
If you want to get distinct values of a certain column, along with values from other corresponding columns, then you need to determine WHICH value from the corresponding columns to display (your question and query would otherwise not make sense). For this you need to use aggregates and group by. For example:
SELECT
nid,
MAX(date_start),
MAX(date_end)
FROM
table_1
GROUP BY
nid
That query should work unless you are selecting more columns.
Or maybe you are getting the same nid with a different start and/or end date.
Try distinct on:
select distinct on (col1) col1, col2 from table;
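Applied to the table in the question, and with an ORDER BY so it is deterministic which row is kept per nid, that would be something like:
SELECT DISTINCT ON (nid) nid, date_start, date_end
FROM table_1
ORDER BY nid, date_start;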
DISTINCT can't result in duplicate entries - that's what it does... removes duplicates.
Is your posted data incorrect? Exactly what are your data and output?