How can I solve this? Grafana Postgres - postgresql

I am trying to write a query that finds the average time spent on a ticket, but I can't get it to work: when I try to view the results, Grafana complains because it needs a time column, and I can't see how to add one with the query I have.
This is my code:
SELECT AVG(lol) FROM
    (SELECT COUNT(DISTINCT(ticket_id)), SUM(time_spent_minutes) AS "lol", updated_on as "time"
     FROM ticket_messages
     WHERE admin_id IN ('20439','20457','20291','20371','20357','20235','20449','20355','20488')
     GROUP BY updated_on, ticket_id) as m

You can find a nice example here with the full explanation.
TLDR: add a NOW() column to your query
SELECT NOW() as tm, AVG(lol) FROM
    (SELECT COUNT(DISTINCT(ticket_id)), SUM(time_spent_minutes) AS "lol", updated_on as "time"
     FROM ticket_messages
     WHERE admin_id IN ('20439','20457','20291','20371','20357','20235','20449','20355','20488')
     GROUP BY updated_on, ticket_id) as m
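If you actually want the average as a series over time rather than a single point, here is a sketch of one way to do it (an assumption on my part: each ticket is bucketed by the day of its last update; adjust to your schema):
SELECT
    date_trunc('day', m.last_update) AS "time",
    AVG(m.ticket_minutes) AS avg_minutes
FROM (
    SELECT ticket_id,
           SUM(time_spent_minutes) AS ticket_minutes,  -- total time per ticket
           MAX(updated_on)         AS last_update      -- bucket the ticket by its last update
    FROM ticket_messages
    WHERE admin_id IN ('20439','20457','20291','20371','20357','20235','20449','20355','20488')
    GROUP BY ticket_id
) AS m
GROUP BY 1
ORDER BY 1;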


Reorganize the data by months in Grafana using psql

I'm trying to compute an AVG by month, but Grafana won't let me do it because I'm using two SELECTs, the outer one reading from the inner one, and I can't specify my time column.
I want a "time series" (the Grafana panel type) that shows me the average per month, but I don't know how to do that with PostgreSQL and the query I have.
This is the query I'm using:
SELECT AVG(lol), NOW() as time FROM
    (SELECT COUNT(DISTINCT(ticket_id)), SUM(time_spent_minutes) AS "lol"
     FROM ticket_messages
     WHERE admin_id IN ('20439', '20457', '20291', '20371', '20357', '20235','20449','20355','20488')
     GROUP BY ticket_id) as media
Where is the temporal column coming from? As written, your query doesn't really have one: NOW() just stamps the single aggregated row with the current time. I would guess that you have a ticket_date column available somewhere, in which case the query could become:
SELECT
    EXTRACT(MONTH FROM ticket_date) AS ticket_month,
    SUM(time_spent_minutes) / COUNT(DISTINCT ticket_id) AS avg_time
FROM ticket_messages
WHERE admin_id IN ('20439', '20457', '20291', '20371', '20357', '20235','20449','20355','20488')
GROUP BY EXTRACT(MONTH FROM ticket_date)
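If the target is a Grafana time-series panel, note that EXTRACT(MONTH ...) yields a bare month number rather than a timestamp; a variant of the same idea that produces a proper time column (still assuming a ticket_date column exists) would be:
SELECT
    date_trunc('month', ticket_date) AS "time",
    SUM(time_spent_minutes) / COUNT(DISTINCT ticket_id) AS avg_time
FROM ticket_messages
WHERE admin_id IN ('20439', '20457', '20291', '20371', '20357', '20235','20449','20355','20488')
GROUP BY 1
ORDER BY 1;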

Postgresql Newbie - Looking for insight

I am in an introduction to SQL class (using PostgreSQL) and struggling to take simple queries to the next step. I have a single table with two datetime columns (start_time & end_time) that I want to extract as two date-only columns. I figured out how to extract just the date from a datetime using the following:
Select start_time,
CAST(start_time as date) as Start_Date
from [table];
or
Select end_time,
CAST(end_time as date) as End_Date
from [table];
Problem: I can't figure out the next step to combine both of these queries into a single one. I tried using WHERE, but I am still doing something wrong.
First wrong example:
SELECT start_time, end_time
From baywheels_2017
WHERE
CAST(start_time AS DATE) AS Start_Date
AND (CAST(end_time AS DATE) AS End_Date);
Any help is greatly appreciated. Thanks for taking the time to look.
You don't need to select the underlying field in order to cast it later; each expression in the SELECT list is independent. With the table created by:
CREATE TABLE test (
id SERIAL PRIMARY KEY,
start_time TIMESTAMP WITH TIME ZONE NOT NULL,
end_time TIMESTAMP WITH TIME ZONE NOT NULL
);
INSERT INTO test(start_time, end_time)
VALUES ('2022-10-31T12:30:00Z', '2022-12-31T23:59:59Z');
You could run the select:
SELECT
cast(start_time as date) as start_date,
cast(end_time as date) as end_date
FROM test;
(You can try this out on a website like DB-Fiddle.)
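As an aside on the WHERE attempt above: column aliases from the SELECT list can't be referenced (or defined) in WHERE, so a filter has to repeat the cast. For example, to keep only rows that start and end on different days (a made-up condition, just to illustrate):
SELECT
    cast(start_time as date) as start_date,
    cast(end_time as date) as end_date
FROM test
WHERE cast(start_time as date) <> cast(end_time as date);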

How to delete all rows after 1000 rows in a PostgreSQL table

I have a huge database with 15 tables.
I need to make a light version of it and keep only the first 1000 rows in each table, ordered by date descending. I tried to find out on Google how to do that, but nothing really works.
It would be perfect if there were an automated way to go through each table and leave only 1000 rows.
But if I need to do that manually with each table, it will be fine as well.
Thank you.
This looks positively awful, but maybe it's a starting point from which you can build.
with cte as (
    select mod_date, row_number() over (order by mod_date desc) as rn
    from table1
),
min_date as (
    select mod_date
    from cte
    where rn = 1000
)
delete from table1 t1
where t1.mod_date < (select mod_date from min_date)
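An alternative that keeps exactly the newest 1000 rows, assuming the table also has a unique id column (the column names here are placeholders):
DELETE FROM table1
WHERE id NOT IN (
    SELECT id
    FROM table1
    ORDER BY mod_date DESC
    LIMIT 1000
);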
So the solution is:
DELETE FROM "table" WHERE "date" < now() - interval '1 year';
That way it will delete all rows from the table where the date is older than 1 year.
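If you want the keep-the-newest-1000 approach automated across all 15 tables, here is a rough sketch using a PL/pgSQL DO block (assuming every table has the same id and mod_date columns; the table names are placeholders you'd fill in):
DO $$
DECLARE
    t text;
BEGIN
    FOREACH t IN ARRAY ARRAY['table1', 'table2']  -- list all 15 tables here
    LOOP
        EXECUTE format(
            'DELETE FROM %I WHERE id NOT IN
                 (SELECT id FROM %I ORDER BY mod_date DESC LIMIT 1000)',
            t, t
        );
    END LOOP;
END $$;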

How to execute SELECT DISTINCT ON query using SQLAlchemy

I have a requirement to display a spend estimation for the last 30 days. SpendEstimation is calculated multiple times a day. This can be achieved using a simple SQL query:
SELECT DISTINCT ON (date) date(time) AS date, resource_id, time
FROM spend_estimation
WHERE
resource_id = '<id>'
and time > now() - interval '30 days'
ORDER BY date DESC, time DESC;
Unfortunately I can't seem to do the same using SQLAlchemy. It always creates SELECT DISTINCT on all columns; the generated query does not contain DISTINCT ON.
query = session.query(
func.date(SpendEstimation.time).label('date'),
SpendEstimation.resource_id,
SpendEstimation.time
).distinct(
'date'
).order_by(
'date',
SpendEstimation.time
)
SELECT DISTINCT
date(time) AS date,
resource_id,
time
FROM spend
ORDER BY date, time
It is missing the ON (date) bit. If I use query.group_by, then SQLAlchemy adds DISTINCT ON, though I can't think of a solution to the given problem using GROUP BY.
I tried using the function in the distinct part and the order by part as well.
query = session.query(
func.date(SpendEstimation.time).label('date'),
SpendEstimation.resource_id,
SpendEstimation.time
).distinct(
func.date(SpendEstimation.time).label('date')
).order_by(
func.date(SpendEstimation.time).label('date'),
SpendEstimation.time
)
Which resulted in this SQL:
SELECT DISTINCT
date(time) AS date,
resource_id,
time,
date(time) AS date -- only difference
FROM spend
ORDER BY date, time
Which is still missing DISTINCT ON.
Your SQLAlchemy version might be the culprit. See this related question:
Sqlalchemy with postgres. Try to get 'DISTINCT ON' instead of 'DISTINCT'
which links to this bug report:
https://bitbucket.org/zzzeek/sqlalchemy/issues/2142
The fix wasn't backported to 0.6; it looks like it was fixed in 0.7.
Stupid question: have you tried distinct on SpendEstimation.date instead of 'date'?
EDIT: It just struck me that you're trying to use the named column from the SELECT. SQLAlchemy is not that smart. Try passing the func expression into the distinct() call.
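A sketch of what that could look like (on SQLAlchemy 0.7+ with the PostgreSQL dialect, which renders expression arguments to distinct() as DISTINCT ON; untested against your schema):
from sqlalchemy import func

# Build the expression once and reuse it, so DISTINCT ON and ORDER BY match.
date_col = func.date(SpendEstimation.time).label('date')

query = (
    session.query(date_col, SpendEstimation.resource_id, SpendEstimation.time)
    .distinct(date_col)                                   # DISTINCT ON (date(time))
    .order_by(date_col.desc(), SpendEstimation.time.desc())
)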

PostgreSQL - get records with null values

I'm trying to write a query that shows distributors that haven't sold anything in 90 days, but the problem I have is with NULL values: it seems PostgreSQL ignores them, even when I query for them (or maybe I'm doing it the wrong way).
Let's say there are 1000 distributors, but with this query I only get 1, though there should be more that didn't sell anything: a query for distributors that sold any amount in the last 90 days returns about 500, so I wonder where the other 499 are. If I understand correctly, those 499 had no sales at all, so all their records are NULL and don't show up in the query.
Does anyone know how to show rows where one table (the partners table, res_partner) is not null but the related table (sale_order) is null? (I also tried filtering with so.id IS NULL, but that way I get an empty result.)
This is my query:
(
SELECT
    min(f1.id) as id,
    f1.partner as partner,
    f1.sum1
FROM (
    SELECT
        min(f2.id) as id,
        f2.partner as partner,
        sum(f2.null_sum) as sum1
    FROM (
        SELECT
            min(rp.id) as id,
            rp.search_name as partner,
            CASE
                WHEN sol.price_subtotal IS NULL THEN 0
                ELSE sol.price_subtotal
            END as null_sum
        FROM
            sale_order as so,
            sale_order_line as sol,
            res_partner as rp
        WHERE
            sol.order_id = so.id
            and so.partner_id = rp.id
            and rp.distributor = TRUE
            and so.date_order <= now()::timestamp::date
            and so.date_order >= date_trunc('day', now() - '90 day'::interval)::timestamp::date
            and rp.contract_date <= date_trunc('day', now() - '90 day'::interval)::timestamp::date
        GROUP BY
            partner,
            null_sum
    ) as f2
    GROUP BY
        partner
) as f1
WHERE
    sum1 = 0
GROUP BY
    partner,
    sum1
) as fld
EDIT: 2012-09-18 11 AM.
I think I understand why PostgreSQL behaves like this. It is because of the time interval: the query only sees rows that fall inside the interval, so it found just one record, the one whose sale order was actually zero (not converted from NULL to zero), and the partners with no rows in the interval were skipped entirely. If I remove the time interval, I see all distributors that never sold anything at all, but with the time interval the NULL-handling part never gets a chance to run.
So does anyone know how to make it check for the missing/NULL cases within a given interval too? (For the last 90 days, to be exact.)
Aggregates like sum() and min() do ignore NULL values. This is required by the SQL standard, and every DBMS I know behaves like that.
If you want to treat a NULL value as e.g. a zero, then use something like this:
sum(coalesce(f2.null_sum, 0)) as sum1
But as far as I understand your question and your invalid query, what you actually want is an outer join between res_partner and the sales tables.
Something like this:
SELECT min(rp.id) as id,
       rp.search_name as partner,
       sum(coalesce(sol.price_subtotal, 0)) as price_subtotal
FROM res_partner as rp
LEFT JOIN sale_order as so
       ON so.partner_id = rp.id
      AND so.date_order <= CURRENT_DATE
      AND so.date_order >= date_trunc('day', now() - '90 day'::interval)::timestamp::date
LEFT JOIN sale_order_line as sol ON sol.order_id = so.id
WHERE rp.distributor = TRUE
  AND rp.contract_date <= date_trunc('day', now() - '90 day'::interval)::timestamp::date
GROUP BY rp.search_name
Note that the date conditions on sale_order belong in the ON clause: if they sit in the WHERE clause, they turn the outer join back into an inner join and drop exactly the partners with no orders.
I'm not 100% sure I understood your problem correctly, but it might give you a head start.
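And since the actual goal is the distributors with no sales in the window, one way to finish it off (a sketch building on the query above) is to filter the aggregated result with HAVING:
SELECT min(rp.id) as id,
       rp.search_name as partner
FROM res_partner as rp
LEFT JOIN sale_order as so
       ON so.partner_id = rp.id
      AND so.date_order <= CURRENT_DATE
      AND so.date_order >= date_trunc('day', now() - '90 day'::interval)::timestamp::date
WHERE rp.distributor = TRUE
  AND rp.contract_date <= date_trunc('day', now() - '90 day'::interval)::timestamp::date
GROUP BY rp.search_name
HAVING count(so.id) = 0;  -- no orders matched the join in the last 90 days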
Try naming your subqueries and referencing their columns as q1.col, q2.col, etc., so you can be sure which column from which query/subquery you're dealing with. Maybe the explanation is something simple, e.g. rows containing only NULLs being collapsed into one row? Also, at least for debugging purposes, it's smart to add count(*) to each query/subquery so you can see how many rows each level returns; it's hard to guess what exactly happened.