How to get median for time interval in Postgres? [duplicate] - postgresql

I have the following query:
SELECT
title,
(stock_one + stock_two) AS global_stock
FROM
product
ORDER BY
global_stock = 0,
title;
Running it in PostgreSQL 8.1.23 I get this error:
Query failed: ERROR: column "global_stock" does not exist
Can anybody help me make it work? I need the available items first, then the unavailable items. Many thanks!

You can always ORDER BY the column position this way (though note this sorts on the raw global_stock value ascending, so zero-stock rows come first):
select
title,
( stock_one + stock_two ) as global_stock
from product
order by 2, 1
or wrap it in another SELECT:
SELECT *
from
(
select
title,
( stock_one + stock_two ) as global_stock
from product
) x
order by (case when global_stock = 0 then 1 else 0 end), title
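As a quick sanity check, here is the wrapped-SELECT approach run through Python's sqlite3 module (used as a stand-in for Postgres; the table and rows are invented for this sketch):

```python
import sqlite3

# Invented product data to exercise the ordering.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (title TEXT, stock_one INT, stock_two INT)")
con.executemany("INSERT INTO product VALUES (?, ?, ?)",
                [("widget", 0, 0), ("anvil", 3, 1), ("bolt", 0, 2)])

# Wrap the computed column in a derived table, then order on the alias:
rows = con.execute("""
    SELECT * FROM (
        SELECT title, (stock_one + stock_two) AS global_stock
        FROM product
    ) x
    ORDER BY (CASE WHEN global_stock = 0 THEN 1 ELSE 0 END), title
""").fetchall()
```

The CASE maps in-stock rows to 0 and out-of-stock rows to 1, so an ascending sort lists the available items first, alphabetically, followed by the unavailable ones.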

One solution is to use the position:
select title,
( stock_one + stock_two ) as global_stock
from product
order by 2, 1
However, while the alias itself should work in ORDER BY, an expression built on the alias will not. What do you mean by "global_stock = 0"? Do you mean the following:
select title,
( stock_one + stock_two ) as global_stock
from product
order by (case when global_stock = 0 then 1 else 0 end), title

In case anyone finds this when googling for whether you can just ORDER BY my_alias: Yes, you can. This cost me a couple hours.
As the postgres docs state:
The ordinal number refers to the ordinal (left-to-right) position of the output column. This feature makes it possible to define an ordering on the basis of a column that does not have a unique name. This is never absolutely necessary because it is always possible to assign a name to an output column using the AS clause.
So either this has been fixed since, or this question is specifically about the ORDER BY my_alias = 0, other_column syntax which I didn't actually need.

Related

comparison within in clause of postgresql

Is it possible to add condition within the in clause of postgresql
for example
select ... where (t1.subject,t2.weight) in ((1,2),(2,3))
I want to check whether subject is 1 but weight can be >= 2 not just 2 and so on. So that condition would logically look somewhat like
select ... where (t1.subject,t2.weight) in ((1,>2),(2,>3))
No, this is not possible. You need to write
…
WHERE t1.subject = 1 AND t2.weight > 2
OR t1.subject = 2 AND t2.weight > 3;
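The explicit AND/OR rewrite can be checked with Python's sqlite3 module (a stand-in for Postgres; the single table `t` and its rows are invented, collapsing t1/t2 for brevity):

```python
import sqlite3

# Invented data: (subject, weight) pairs.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (subject INT, weight INT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 2), (1, 5), (2, 3), (2, 7), (3, 9)])

# Per-subject thresholds cannot be expressed with IN; spell them out:
rows = con.execute("""
    SELECT subject, weight FROM t
    WHERE (subject = 1 AND weight > 2)
       OR (subject = 2 AND weight > 3)
""").fetchall()
```

Only (1, 5) and (2, 7) survive: the rows at exactly the threshold are excluded because the comparison is strict.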
Alternatively, you can use a subquery: select the subjects whose weight meets the threshold, for example
select ... where t1.subject in (select subject FROM ... where weight >= 2);
Note this handles only a single threshold; per-subject thresholds still require the explicit AND/OR conditions above.

more than one row returned by a subquery used as an expression problem

I am trying to update a column in one table with a subquery, but I get the error "more than one row returned by a subquery used as an expression". I think it is impossible to assign a query result to a field, but what is the solution for that, please?
= can be used only when we are sure that the subquery returns exactly one value.
When the subquery may return more than one row, either use IN to accommodate all the values, or restrict the subquery to a single row. (TOP 1 is SQL Server syntax; in PostgreSQL use LIMIT 1.)
With LIMIT:
UPDATE mascir_fiche SET partner = (SELECT id FROM hr_employee WHERE parent_id IN (SELECT id FROM hr_employee) LIMIT 1);
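The LIMIT 1 variant can be exercised with Python's sqlite3 module (standing in for Postgres; the table names follow the answer but the rows are invented):

```python
import sqlite3

# Invented stand-in data for hr_employee and mascir_fiche.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hr_employee (id INT, parent_id INT)")
con.execute("CREATE TABLE mascir_fiche (id INT, partner INT)")
con.executemany("INSERT INTO hr_employee VALUES (?, ?)",
                [(1, None), (2, 1), (3, 1)])
con.execute("INSERT INTO mascir_fiche VALUES (1, NULL)")

# LIMIT 1 guarantees the subquery yields at most one row, so `=` is safe:
con.execute("""
    UPDATE mascir_fiche
    SET partner = (SELECT id FROM hr_employee
                   WHERE parent_id IN (SELECT id FROM hr_employee)
                   LIMIT 1)
""")
partner = con.execute("SELECT partner FROM mascir_fiche").fetchone()[0]
```

Without an ORDER BY in the subquery, which of the matching rows survives the LIMIT is unspecified, so add one if you care which id wins.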

How to get a sum of all rows that meets condition in postgres

I am trying to return sums with their specific conditions.
SELECT
COUNT(*),
SUM("transactionTotal" WHERE "entryType"=sold) as soldtotal,
SUM(case when "entryType"=new then "transactionTotal" else 0 end) as newtotal
FROM "MoneyTransactions"
WHERE cast("createdAt" as date) BETWEEN '2020-10-08' AND '2020-10-09'
I am trying to sum up the rows with "entryType"='sold' and "entryType"='new' and return those values separately.
Obviously my logic is wrong in both cases.
Can someone lend a hand?
You were on the right track to use conditional aggregation, but your syntax is slightly off. Try this version:
SELECT
COUNT(*) AS total_cnt,
SUM("transactionTotal") FILTER (WHERE "entryType" = 'sold') AS soldtotal,
SUM("transactionTotal") FILTER (WHERE "entryType" = 'new') AS newtotal
FROM "MoneyTransactions"
WHERE
"createdAt"::date BETWEEN '2020-10-08' AND '2020-10-09';
Note: if your "createdAt" column is already of type date, there is no point in casting it. If it is text, then yes, you would need to convert it, possibly with TO_DATE depending on its format.
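The same conditional aggregation can be written with CASE expressions, which behave identically to FILTER and also run on engines that predate the FILTER clause. A sketch using Python's sqlite3 module with invented MoneyTransactions rows:

```python
import sqlite3

# Invented transaction data to check the conditional sums.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MoneyTransactions (entryType TEXT, transactionTotal INT)")
con.executemany("INSERT INTO MoneyTransactions VALUES (?, ?)",
                [("sold", 10), ("sold", 5), ("new", 7)])

# CASE-based conditional aggregation, equivalent to SUM(...) FILTER (WHERE ...):
sold, new = con.execute("""
    SELECT
        SUM(CASE WHEN entryType = 'sold' THEN transactionTotal ELSE 0 END),
        SUM(CASE WHEN entryType = 'new'  THEN transactionTotal ELSE 0 END)
    FROM MoneyTransactions
""").fetchone()
```

Each CASE passes the row's amount to SUM only when its entryType matches, so one pass over the table yields both totals.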

DISTINCT ON still gives me an error that select item should be in GROUP BY

I have a table with a list of customer IDs and a list of dates as follows:
id | take_list_date | customer_id
---+----------------+------------
 1 | 2016-02-17     | X00001
 2 | 2016-02-20     | X00002
 3 | 2016-02-20     | X00003
I am trying to return a count of all the IDs in the table on a specific day in the following format:
label: 2016-02-20 value: 2
The following query produces the required results within the specified date range:
select
count(customer_id)::int as value,
take_list_date::varchar as label
FROM
customer_take_list
where
take_list_date >= '10-12-2017'
and
take_list_date <= '20-12-2017'
GROUP BY
take_list_date
ORDER BY
take_list_date
The problem is I have to include an ID field to make it compatible with Ember Data. When I include an ID field I need to add it to the Group By clause which produces incorrect results.
After looking at some suggestions on other SO questions I tried to resolve this using DISTINCT ON:
select distinct on (take_list_date)
take_list_date::varchar as label,
count(customer_id)::int as value
FROM
customer_take_list
where
take_list_date >= '10-12-2017'
and
take_list_date <= '20-12-2017'
order by
take_list_date
Bizarrely this still gives me the same Group By error. What have I done wrong?
I'm not an expert in the technologies involved, but I think you need to create an arbitrary ID rather than use one of the IDs in the table. An example is here: Add Postgres incremental ID. I think your final query should look something like this:
SELECT
COUNT(customer_id)::int as value,
take_list_date::varchar as label,
ROW_NUMBER() OVER (ORDER BY take_list_date) AS id
FROM
customer_take_list
where
take_list_date >= '10-12-2017'
and
take_list_date <= '20-12-2017'
GROUP BY
take_list_date
ORDER BY
take_list_date
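The same "synthetic id over a grouped result" idea can be sketched with Python's sqlite3 module (standing in for Postgres; the table and rows mirror the question's sample data). Here the id is generated client-side with enumerate, which mirrors what ROW_NUMBER() OVER (ORDER BY take_list_date) does in SQL:

```python
import sqlite3

# Sample data from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer_take_list "
            "(id INT, take_list_date TEXT, customer_id TEXT)")
con.executemany("INSERT INTO customer_take_list VALUES (?, ?, ?)",
                [(1, "2016-02-17", "X00001"),
                 (2, "2016-02-20", "X00002"),
                 (3, "2016-02-20", "X00003")])

# Group and count per date, ordered by date:
grouped = con.execute("""
    SELECT take_list_date AS label, COUNT(customer_id) AS value
    FROM customer_take_list
    GROUP BY take_list_date
    ORDER BY take_list_date
""").fetchall()

# Synthesize the id, mirroring ROW_NUMBER() OVER (ORDER BY take_list_date):
rows = [{"id": i, "label": label, "value": value}
        for i, (label, value) in enumerate(grouped, start=1)]
```

Because the id is derived from the sort order rather than from a table column, it never forces an extra GROUP BY key.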

Compare counts in PostgreSQL

I want to compare two results of queries on the same table by checking the resulting row count, but Postgres doesn't support column aliases in the WHERE clause.
select id from article where version=1308
and exists(
select count(ident) as count1 from artprice AS p1
where p1.valid_to<to_timestamp(1586642400000) or p1.valid_from>to_timestamp(1672441199000)
and p1.article=article.id
and p1.count1=(select count(ident) from artprice where article=article.id)
)
I also cannot use aggregate functions in the where clause, so
select id from article where version=1308
and exists(
select count(ident) as count1 from artprice AS p1
where p1.valid_to<to_timestamp(1586642400000) or p1.valid_from>to_timestamp(1672441199000)
and p1.article=article.id
and p1.count(ident)=(select count(ident) from artprice where article=article.id)
)
also doesn't work. Any ideas?
EDIT:
What I want to get are articles where every article price is outside of a valid range defined by validFrom and validTo.
I now changed the statement by negating the positive conditions:
Select distinct article.id from Article article, ArtPrice price
where
(
(article.version=?)
and
(
(
(
(
(not(price.valid_from>=?)) or (not(price.valid_to<=?))
)
and
(
(not(price.valid_from<=?)) or (not(price.valid_to>=?))
)
)
and
(
(not(price.valid_to>=?)) or (not(price.valid_to<=?))
)
)
and
(
(not(price.valid_from>=?)) or (not(price.valid_from<=?))
)
)
) and article.id=price.article
Probably not the most elegant solution, but it works.
Aggregates are not allowed in WHERE clause, but there's HAVING clause for them.
EDIT: What I want to get are articles where every article price is outside of a valid range defined by validFrom and validTo.
I think that bool_or() would be a good fit here when combined with range operations:
SELECT article.id
FROM Article AS article
JOIN ArtPrice AS price ON price.article = article.id
WHERE article.version = 1308
GROUP BY article.id
HAVING NOT bool_or(tsrange(price.valid_from, price.valid_to)
&& tsrange(to_timestamp(1586642400000),
to_timestamp(1672441199000)))
This reads as "...those having no price tsrange overlapping the given tsrange".
PostgreSQL also supports the SQL OVERLAPS operator:
(price.valid_from, price.valid_to) OVERLAPS (to_timestamp(1586642400000),
to_timestamp(1672441199000))
As a note, it operates on half-open intervals start <= time < end.
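Both tsrange && and OVERLAPS boil down to the standard half-open interval overlap test, which can be sketched in plain Python (the numeric intervals below are invented stand-ins for timestamps):

```python
def overlaps(start1, end1, start2, end2):
    """Half-open [start, end) overlap test, matching the semantics of
    SQL OVERLAPS and of Postgres tsrange with its default '[)' bounds."""
    return start1 < end2 and start2 < end1

# An article is kept when none of its prices overlaps the queried window:
prices = [(100, 200), (250, 300)]   # invented (valid_from, valid_to) pairs
window = (210, 240)                 # the queried range
keep = not any(overlaps(f, t, *window) for f, t in prices)
```

With half-open intervals, two ranges that merely touch at an endpoint (one ends exactly where the other begins) do not count as overlapping, which matches the note above.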