I want to get the maximum id from the home_history table and then filter home_history by that value. I used a WITH clause. Where did I make a mistake?
with max_table as (select max(id) as max_id from home_history),
current_data as (
select Cast(created_at As date), count(id)
from home_history
where id > (max_table.max_id - 30 * 500000)
and created_at >= CAST((now() + (INTERVAL '-30 day')) AS date)
and home_history.created_at < CAST(now() AS date)
group by CAST(created_at AS date)
order by CAST(created_at As Date)
)
SELECT *
from current_data;
[42P01] ERROR: missing FROM-clause entry for table "max_table"
Position: 201
Even if it seems obvious, you have to state explicitly that you select from the CTE. Think of a CTE as a view defined for just a single query: you need a FROM clause entry for it wherever you reference its columns. Here, current_data refers to max_table.max_id without listing max_table in its FROM clause, which is exactly what the error points at.
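For example, a minimal rewrite of the query (my sketch, keeping the original filter logic) brings max_table into scope with a cross join; since it holds exactly one row, this adds no rows to the result:
with max_table as (select max(id) as max_id from home_history)
select cast(created_at as date), count(id)
from home_history
cross join max_table -- makes max_table.max_id visible in the WHERE clause
where id > max_table.max_id - 30 * 500000
and created_at >= cast(now() - interval '30 day' as date)
and created_at < cast(now() as date)
group by cast(created_at as date)
order by cast(created_at as date);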
Related
I've got a remedial question about pulling results out of a CTE in a later part of the query. For the example code, below are the relevant, stripped down tables:
CREATE TABLE print_job (
created_dts timestamp not null default now(),
status text not null
);
CREATE TABLE calendar_day (
date_actual date not null
);
In the current setup, there are gaps in the dates in the print_job data, and we would like to have a gapless result. For example, there are 87 days from the first to last date in the table, and only 77 days in there have data. We've already got a calendar_day dimension table to join with to get the 87 rows for the 87-day range. It's easy enough to figure out the min and max dates in the data with a subquery or in a CTE, but I don't know how to use those values from a CTE. I've got a full query below, but here are the relevant fragments with comments:
-- Get the date range from the data.
date_range AS (
select min(created_dts::date) AS start_date,
max(created_dts::date) AS end_date
from print_job),
-- This CTE does not work because it doesn't know what date_range is.
complete_date_series_using_cte AS (
select date_actual
from calendar_day
where date_actual >= date_range.start_date
and date_actual <= date_range.end_date
),
-- Subqueries are fine, because the FROM is specified in the subquery condition directly.
complete_date_series_using_subquery AS (
select date_actual
from calendar_day
where date_actual >= (select min(created_dts::date) from print_job)
and date_actual <= (select max(created_dts::date) from print_job)
)
I run into this regularly, and finally figured I'd ask. I've hunted around already for an answer, but I'm not clear how to summarize it well. And while there's nothing wrong with the subqueries in this case, I've got other situations where a CTE is nicer/more readable.
If it helps, I've listed the complete query below.
-- Get some counts and give them names.
WITH
daily_status AS (
select created_dts::date as created_date,
count(*) AS daily_total,
count(*) FILTER (where status = 'Error') AS status_error,
count(*) FILTER (where status = 'Processing') AS status_processing,
count(*) FILTER (where status = 'Aborted') AS status_aborted,
count(*) FILTER (where status = 'Done') AS status_done
from print_job
group by created_dts::date
),
-- Get the date range from the data.
date_range AS (
select min(created_dts::date) AS start_date,
max(created_dts::date) AS end_date
from print_job),
-- There are gaps in the data, and we want a row for dates with no results.
-- Could use generate_series on a timestamp & convert that to dates. But,
-- in our case, we've already got dimension tables for days. All that's needed
-- here is the actual date.
-- This CTE does not work because it doesn't know what date_range is.
-- complete_date_series_using_cte AS (
--     select date_actual
--     from calendar_day
--     where date_actual >= date_range.start_date
--     and date_actual <= date_range.end_date
-- ),
complete_date_series_using_subquery AS (
select date_actual
from calendar_day
where date_actual >= (select min(created_dts::date) from print_job)
and date_actual <= (select max(created_dts::date) from print_job)
)
-- The final query joins the complete date series with whatever data is in the print_job table daily summaries.
select date_actual,
coalesce(daily_total,0) AS total,
coalesce(status_error,0) AS errors,
coalesce(status_processing,0) AS processing,
coalesce(status_aborted,0) AS aborted,
coalesce(status_done,0) AS done
from complete_date_series_using_subquery
left join daily_status
on daily_status.created_date =
complete_date_series_using_subquery.date_actual
order by date_actual
I said it was a remedial question... I remembered where I'd seen this done before:
https://tapoueh.org/manual-post/2014/02/postgresql-histogram/
In my example, I need to list the CTE in the table list. That's obvious in retrospect; I don't automatically think to do it because I habitually avoid CROSS JOIN. The fragment below shows the slight change needed:
WITH
date_range AS (
select min(created_dts)::date as start_date,
max(created_dts)::date as end_date
from print_job
),
complete_date_series AS (
select date_actual
from calendar_day, date_range
where date_actual >= date_range.start_date
and date_actual <= date_range.end_date
),
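The comma in from calendar_day, date_range is an implicit cross join; since date_range returns exactly one row, spelling it out is purely cosmetic:
complete_date_series AS (
select date_actual
from calendar_day
cross join date_range
where date_actual >= date_range.start_date
and date_actual <= date_range.end_date
),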
I want to set some parameters/variables at the top of the query and use them in several places in a complex query on Heroku Dataclips.
Here is a simple example:
WITH vars AS ( SELECT
'2018-01-07' AS calcdate,
12345 AS salary
)
select *
from taxes
where country_alpha3='USA' and year='2018' and active=true
and subdivision_code='US-MEDI'
and local_code is null
and start_date <= DATE(vars.calcdate) and end_date >= DATE(vars.calcdate)
and lower_amount_cents <= vars.salary and upper_amount_cents >= vars.salary;
I saw this style of code in another answer (from 2013), but it is not working in an actual dataclip as of today.
Error: Your query couldn't be updated.
ERROR: missing FROM-clause entry for table "vars"
LINE 12: and start_date <= DATE(vars.calcdate) and end_date >= DATE(v...
^
If vars.calcdate and vars.salary are changed to constants, the SQL works fine. It is something to do with the vars. syntax or usage, I think.
I found the solution. It was syntax, as I supposed: the vars.name format is not allowed when vars is never listed in a FROM clause (not certain where I saw the original style), so scalar subqueries work instead.
WITH vars AS ( SELECT
DATE('2018-01-07') AS calcdate,
12345 AS salary
)
select *
from taxes
where country_alpha3='USA' and year='2018' and active=true
and subdivision_code='US-MEDI'
and local_code is null
and start_date <= (SELECT calcdate from vars) and end_date >= (SELECT calcdate from vars)
and lower_amount_cents <= (SELECT salary from vars) and upper_amount_cents >= (SELECT salary from vars);
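Since vars produces exactly one row, another fix is to list the CTE in the FROM clause, which makes the original vars.name syntax work; a sketch against the same taxes table:
WITH vars AS ( SELECT
DATE('2018-01-07') AS calcdate,
12345 AS salary
)
select taxes.*
from taxes, vars
where country_alpha3='USA' and year='2018' and active=true
and subdivision_code='US-MEDI'
and local_code is null
and start_date <= vars.calcdate and end_date >= vars.calcdate
and lower_amount_cents <= vars.salary and upper_amount_cents >= vars.salary;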
I have the following query.
SELECT *
FROM (SELECT temp.*, ROWNUM AS rn
FROM ( SELECT (id) M_ID,
CREATION_DATE,
RECIPIENT_STATUS,
PARENT_OR_CHILD,
CHILD_COUNT,
IS_PICKABLE,
IS_GOLDEN,
trxn_id,
id AS id,
MASTER_ID,
request_wf_state,
TITLE,
FIRST_NAME,
MIDDLE,
LAST_NAME,
FULL_NAME_LNF,
FULL_NAME_FNF,
NAME_OF_ORGANIZATION,
ADDRESS,
CITY,
STATE,
COUNTRY,
HCP_TYPE,
HCP_SUBTYPE,
is_edit_locked,
record_type rec_type,
DATA_SOURCE_NAME,
DEA_DATA,
NPI_DATA,
STATE_DATA,
RPPS,
SIREN_NUMBER,
FINESS,
ROW_NUMBER ()
OVER (PARTITION BY id ORDER BY full_name_fnf)
AS rp
FROM V_RECIPIENT_TRANS_SCRN_OP
WHERE 1 = 1
AND creation_date >=
to_date( '01-Sep-2015', 'DD-MON-YYYY') AND creation_date <=
to_date( '09-Sep-2015', 'DD-MON-YYYY')
ORDER BY CREATION_DATE DESC) temp
WHERE rp = 1)
WHERE rn > 0 AND rn < 10;
The issue is that the above query does not return data that has creation_date of '09-Sep-2015'.
NLS_DATE_FORMAT of my database is 'DD-MON-RR'.
Datatype of the column creation_date is date and the date format in which date is stored is MM/DD/YYYY.
Since your column creation_date has values with non-zero time components, and the result of to_date( '09-Sep-2015', 'DD-MON-YYYY') has a zero time component, the predicate creation_date <= to_date( '09-Sep-2015', 'DD-MON-YYYY') is unlikely to match. As an example, "9/9/2015 1:07:45 AM" is clearly greater than "9/9/2015 0:00:00 AM", which is returned by your to_date() call.
You will need to take into account the time component of the Oracle DATE data type.
One option is to use the trunc() function, as you did, to remove the time component from values of creation_date. However, this may prevent the use of index on creation_date if it exists.
A better alternative, in my view, would be to reformulate your predicate as creation_date < to_date( '10-Sep-2015', 'DD-MON-YYYY'), which would match any time values on the date of 09-Sep-2015.
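With that reformulation, only the upper bound of the date filter changes:
AND creation_date >= to_date('01-Sep-2015', 'DD-MON-YYYY')
AND creation_date < to_date('10-Sep-2015', 'DD-MON-YYYY')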
I need to select the rows for which the difference between max(date) and the date just before max(date) is smaller than 366 days. I know about SELECT MAX(date) FROM table to get the last date from now, but how could I get the date before?
I would need a query of this kind:
SELECT code, MAX(date) - before_date FROM troncon WHERE MAX(date) - before_date < 366 ;
NB: before_date does not refer to anything; it is a placeholder to be replaced by something functional.
Edit : Example of the table I'm testing it on:
CREATE TABLE troncon (code TEXT, ope_date DATE) ;
INSERT INTO troncon (code, ope_date) VALUES
('C086000-T10001', '2014-11-11'),
('C086000-T10001', '2014-11-11'),
('C086000-T10002', '2014-12-03'),
('C086000-T10002', '2014-01-03'),
('C086000-T10003', '2014-08-11'),
('C086000-T10003', '2014-03-03'),
('C086000-T10003', '2012-02-27'),
('C086000-T10004', '2014-08-11'),
('C086000-T10004', '2013-12-30'),
('C086000-T10004', '2013-06-01'),
('C086000-T10004', '2012-07-31'),
('C086000-T10005', '2013-10-01'),
('C086000-T10005', '2012-11-01'),
('C086000-T10006', '2014-04-01'),
('C086000-T10006', '2014-05-15'),
('C086000-T10001', '2014-07-05'),
('C086000-T10003', '2014-03-03');
Many thanks!
The subquery contains all rows joined with the max date for their code, and you select only those whose difference from that max date is smaller than 366 days:
select * from
(
select code, ope_date, max(ope_date) over (partition by code) as max_date
from troncon
) a
where max_date - ope_date < 366 -- DATE subtraction yields whole days
PS: As #a_horse_with_no_name said, you can partition by code to get maximum_date for each code.
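If the question is read literally (the gap between the latest date and the one just before it, per code), a window-function sketch against the troncon table above; LAG and the rn = 1 filter are my reading of that requirement:
select code, ope_date as max_date, prev_date, ope_date - prev_date as gap_days
from (
select code, ope_date,
lag(ope_date) over (partition by code order by ope_date) as prev_date,
row_number() over (partition by code order by ope_date desc) as rn
from troncon
) t
where rn = 1 -- keep only the latest date per code
and ope_date - prev_date < 366;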
I have a postgres table with a unique datetime field.
I would like to use/create a function that takes as argument a datetime value and returns the row id having the closest datetime relative (but not equal) to the passed datetime value. A second argument could specify before or after the passed value.
Ideally, some combination of native datetime functions could handle this requirement. Otherwise it'll have to be a custom function.
Question: What are methods for querying relative datetime over a collection of rows?
select id, passed_ts - ts_column as difference
from t
where (passed_ts > ts_column and positive_interval)
or (passed_ts < ts_column and not positive_interval)
order by abs(extract(epoch from passed_ts - ts_column))
limit 1
passed_ts is the timestamp parameter and positive_interval is a boolean parameter. If true, only rows where the timestamp column is lower than the passed timestamp are considered; if false, the inverse.
Simply use the - operator.
Assuming you have a table with attributes Key, Attr and T (timestamp with or without timezone):
you can search with
select min(T - TimeValue) from Table where T > TimeValue;
this will give you the minimal difference. You can combine this value with a join to the same table to get the tuple you are interested in:
select * from (select *, T - TimeValue as diff from Table) as T1 NATURAL JOIN
( select min(T - TimeValue) as diff from Table where T > TimeValue) as T2;
that should do it
You want the first row of a select statement producing all the rows below (or above) the given datetime in descending (or ascending) order.
Pseudo code for the function body:
SELECT id
FROM table
WHERE IF(#above, datecol < #param, datecol > #param)
ORDER BY IF (#above, datecol ASC, datecol DESC)
LIMIT 1
However, this does not work: one cannot condition the ordering direction.
The second idea is to do both queries, and select afterwards:
SELECT *
FROM (
(
SELECT 'below' AS dir, id
FROM table
WHERE datecol < #param
ORDER BY datecol DESC
LIMIT 1
) UNION (
SELECT 'above' AS dir, id
FROM table
WHERE datecol > #param
ORDER BY datecol ASC
LIMIT 1)
) AS t
WHERE dir = #dir
That should be pretty fast with an index on the datetime column.
-- test rig
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path=tmp;
CREATE TABLE lutser
( dt timestamp NOT NULL PRIMARY KEY
);
-- populate it
INSERT INTO lutser(dt)
SELECT gs
FROM generate_series('2013-04-30', '2013-05-01', '1 min'::interval) gs
;
DELETE FROM lutser WHERE random() < 0.9;
--
-- The query:
WITH xyz AS (
SELECT dt AS hh
, LAG (dt) OVER (ORDER by dt ) AS ll
FROM lutser
)
SELECT *
FROM xyz bb
WHERE '2013-04-30 12:00' BETWEEN bb.ll AND bb.hh
;
Result:
NOTICE: drop cascades to table tmp.lutser
DROP SCHEMA
CREATE SCHEMA
SET
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "lutser_pkey" for table "lutser"
CREATE TABLE
INSERT 0 1441
DELETE 1288
hh | ll
---------------------+---------------------
2013-04-30 12:02:00 | 2013-04-30 11:50:00
(1 row)
Wrapping it into a function is left as an exercise for the reader.
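For instance, a minimal sketch of such a function over the lutser table above (closest_dt is an invented name):
CREATE FUNCTION closest_dt(p_ts timestamp, p_after boolean)
RETURNS timestamp LANGUAGE sql STABLE AS
$$
SELECT dt
FROM lutser
WHERE (p_after AND dt > p_ts)
OR (NOT p_after AND dt < p_ts)
ORDER BY abs(extract(epoch FROM dt - p_ts))
LIMIT 1;
$$;
-- usage: SELECT closest_dt('2013-04-30 12:00', true);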
UPDATE: here is a second one with the sandwiched-not-exists-trick (TM):
SELECT lo.dt AS ll
FROM lutser lo
JOIN lutser hi ON hi.dt > lo.dt
AND NOT EXISTS (
SELECT * FROM lutser nx
WHERE nx.dt < hi.dt
AND nx.dt > lo.dt
)
WHERE '2013-04-30 12:00' BETWEEN lo.dt AND hi.dt
;
You have to join the table to itself with the where condition looking for the smallest nonzero (negative or positive) interval between the base table row's datetime and the joined table row's datetime. It would be good to have an index on that datetime column.
P.S. You could also look for the max() of the earlier timestamps or the min() of the later ones.
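A sketch of that P.S., reusing the lutser table from the test rig above (the literal timestamp is just an example):
SELECT max(dt) FROM lutser WHERE dt < '2013-04-30 12:00'; -- closest earlier
SELECT min(dt) FROM lutser WHERE dt > '2013-04-30 12:00'; -- closest later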
Try something like:
SELECT *
FROM your_table
WHERE (dt_time > argument_time and search_above = 'true')
OR (dt_time < argument_time and search_above = 'false')
ORDER BY CASE WHEN search_above = 'true'
THEN dt_time - argument_time
ELSE argument_time - dt_time
END
LIMIT 1;