Reorganize the data by month in Grafana using psql - PostgreSQL

I'm trying to do an AVG by month, but Grafana won't let me because I'm using two SELECTs, with the outer one reading from the inner one, and I can't specify my time column.
I want a time series (the Grafana panel type) that shows me the average per month, but I don't know how to do that with PostgreSQL and the code I have.
This is the query I'm using:
SELECT AVG(lol), NOW() AS time
FROM (
  SELECT COUNT(DISTINCT ticket_id), SUM(time_spent_minutes) AS "lol"
  FROM ticket_messages
  WHERE admin_id IN ('20439', '20457', '20291', '20371', '20357', '20235', '20449', '20355', '20488')
  GROUP BY ticket_id
) AS media

Where is the temporal column coming from? As it stands, your query doesn't reference a time column at all.
Assuming you have a ticket_date column available somewhere, the query could become:
SELECT
  EXTRACT(MONTH FROM ticket_date) AS ticket_month,
  SUM(time_spent_minutes) / COUNT(DISTINCT ticket_id) AS avg_time
FROM ticket_messages
WHERE admin_id IN ('20439', '20457', '20291', '20371', '20357', '20235', '20449', '20355', '20488')
GROUP BY EXTRACT(MONTH FROM ticket_date)
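For a Grafana time series panel specifically, the result also needs a timestamp column (Grafana expects one named time). A minimal sketch, still assuming the hypothetical ticket_date column, that groups by whole months with date_trunc and respects the dashboard's time range via the $__timeFilter macro:
SELECT
  date_trunc('month', ticket_date) AS time,
  -- cast to numeric so integer division doesn't truncate the average
  SUM(time_spent_minutes)::numeric / COUNT(DISTINCT ticket_id) AS avg_time
FROM ticket_messages
WHERE admin_id IN ('20439', '20457', '20291', '20371', '20357', '20235', '20449', '20355', '20488')
  AND $__timeFilter(ticket_date)
GROUP BY 1
ORDER BY 1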

Related

Grafana PostgreSQL distinct on() with time series

I'm quite new to Grafana and Postgres and could use some help with this. I have a dataset in PostgreSQL with temperature forecasts. Multiple forecasts are published at various points throughout the day (indicated by dump_date) for the same reference date. Say: at 06:00 today and at 12:00 today a forecast is published for tomorrow (where the time is indicated by start_time). Now I want to visualize the temperature forecast as a time series using Grafana. However, I only want to visualize the latest published forecast (12:00), not both. I thought I would use DISTINCT ON() to select only the latest published forecast from this dataset, but somehow Grafana is not accepting it. My code in Grafana is as follows:
SELECT
  $__time(distinct on(t_ID.start_time)),
  concat('Forecast')::text AS metric,
  t_ID.value
FROM
  forecast_table t_ID
WHERE
  $__timeFilter(t_ID.start_time)
  AND t_ID.start_time >= (current_timestamp - interval '30 minute')
  AND t_ID.dump_date >= (current_timestamp - interval '30 minute')
ORDER BY
  t_ID.start_time ASC,
  t_ID.dump_date DESC
This is not working, however, since I get the message 'syntax error at or near AS'. What should I do?
You are using the Grafana macro $__time, so your query in the editor:
SELECT
$__time(distinct on(t_ID.start_time)),
generates SQL:
SELECT
distinct on(t_ID.start_time AS "time"),
which is incorrect SQL syntax.
I wouldn't use the macro. I would write correct SQL directly, e.g.:
SELECT
  DISTINCT ON (t_ID.start_time) t_ID.start_time AS "time",
Also use Grafana's Generated SQL and Query inspector features for debugging and query development, and make sure Grafana generates valid SQL for Postgres.
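Putting it together, a sketch of the full query, assuming the same forecast_table columns as in the question, that keeps only the most recently published forecast for each start_time:
SELECT DISTINCT ON (t_ID.start_time)
  t_ID.start_time AS "time",
  'Forecast'::text AS metric,
  t_ID.value
FROM forecast_table t_ID
WHERE
  $__timeFilter(t_ID.start_time)
  AND t_ID.start_time >= (current_timestamp - interval '30 minute')
  AND t_ID.dump_date >= (current_timestamp - interval '30 minute')
ORDER BY
  t_ID.start_time ASC,  -- DISTINCT ON requires its expression to lead the ORDER BY
  t_ID.dump_date DESC   -- so the latest dump_date wins per start_time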

Converting a TEXT column to a date column in Grafana

New to Grafana.
I have set up Postgres as a data source and am trying to create a sample time series dashboard like so:
SELECT
$__timeGroupAlias(UNIX_TIMESTAMP(start_time),$__interval),
count(events) AS "events"
FROM source_table
WHERE
$__timeFilter(UNIX_TIMESTAMP(start_time))
GROUP BY 1
ORDER BY 1
The problem is that in my Postgres table start_time is of type TEXT, and this throws a
macro __timeGroup needs time column and interval and optional fill value
error on the Grafana side.
Can someone explain how my start_time can be properly converted to a date column so that the macros work?
Thank you
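One possible direction, sketched under the assumption that start_time holds parseable timestamp text such as '2024-01-15 08:00:00' and that the Grafana macros accept an expression as their column argument (note that UNIX_TIMESTAMP is a MySQL function; Postgres doesn't have it), is to cast the TEXT column to a timestamp inside the macros:
SELECT
  $__timeGroupAlias(start_time::timestamptz, $__interval),
  count(events) AS "events"
FROM source_table
WHERE
  $__timeFilter(start_time::timestamptz)
GROUP BY 1
ORDER BY 1
-- if the text isn't ISO-formatted, to_timestamp(start_time, 'YYYY-MM-DD HH24:MI:SS') can replace the cast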

How to get each and every date between selected dates (in DB2)

Dear StackOverflow community,
as someone new to DB2, I have a query.
It may be a very basic question for you; please share your knowledge.
I have a start date and an end date.
I need a list of each and every date in between.
It's OK with me if it creates a temp table, no issue.
Thanks in advance
You can generate the dates between the start and end dates by using a recursive CTE expression. Try the code below:
WITH cte (your_columns, startdate, enddate, derdate) AS (
  SELECT your_columns, startdate, enddate, startdate
  FROM yourTable
  UNION ALL
  SELECT your_columns, startdate, enddate, derdate + 1 DAY
  FROM cte
  WHERE derdate < enddate
)
SELECT * FROM cte
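For a quick self-contained test without an existing table, the same pattern with a hypothetical hard-coded range (SYSIBM.SYSDUMMY1 is DB2's built-in one-row table):
WITH cte (startdate, enddate, derdate) AS (
  SELECT DATE('2024-01-01'), DATE('2024-01-05'), DATE('2024-01-01')
  FROM SYSIBM.SYSDUMMY1
  UNION ALL
  SELECT startdate, enddate, derdate + 1 DAY   -- DB2 labeled-duration arithmetic
  FROM cte
  WHERE derdate < enddate                      -- stop once the end date is reached
)
SELECT derdate FROM cte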

How to execute SELECT DISTINCT ON query using SQLAlchemy

I have a requirement to display spend estimation for the last 30 days. SpendEstimation is calculated multiple times a day. This can be achieved using a simple SQL query:
SELECT DISTINCT ON (date) date(time) AS date, resource_id , time
FROM spend_estimation
WHERE
resource_id = '<id>'
and time > now() - interval '30 days'
ORDER BY date DESC, time DESC;
Unfortunately I can't seem to be able to do the same using SQLAlchemy. It always creates a SELECT DISTINCT on all columns; the generated query does not contain DISTINCT ON.
query = session.query(
    func.date(SpendEstimation.time).label('date'),
    SpendEstimation.resource_id,
    SpendEstimation.time
).distinct(
    'date'
).order_by(
    'date',
    SpendEstimation.time
)
SELECT DISTINCT
date(time) AS date,
resource_id,
time
FROM spend
ORDER BY date, time
It is missing the ON (date) bit. If I use query.group_by, then SQLAlchemy adds DISTINCT ON, though I can't think of a solution to the given problem using GROUP BY.
I also tried using the function in the distinct part and the order by part:
query = session.query(
    func.date(SpendEstimation.time).label('date'),
    SpendEstimation.resource_id,
    SpendEstimation.time
).distinct(
    func.date(SpendEstimation.time).label('date')
).order_by(
    func.date(SpendEstimation.time).label('date'),
    SpendEstimation.time
)
Which resulted in this SQL:
SELECT DISTINCT
date(time) AS date,
resource_id,
time,
date(time) AS date # only difference
FROM spend
ORDER BY date, time
Which is still missing DISTINCT ON.
Your SQLAlchemy version might be the culprit.
See "Sqlalchemy with postgres. Try to get 'DISTINCT ON' instead of 'DISTINCT'", which links to this bug report:
https://bitbucket.org/zzzeek/sqlalchemy/issues/2142
A fix wasn't backported to 0.6; it looks like it was fixed in 0.7.
Stupid question: have you tried distinct on SpendEstimation.date instead of 'date'?
EDIT: It just struck me that you're trying to use the named column from the SELECT. SQLAlchemy is not that smart. Try passing the func expression into the distinct() call.

Postgres (Redshift) query including to_char and GROUP BY returns an error

I'm using Redshift now, and I'd like to run a query like:
SELECT to_char(created_at, 'HH24') AS hour, to_char(created_at, 'YYYY-MM-DD HH24') AS tmp FROM log GROUP BY tmp;
This returns an error; when I do it in MySQL, it works fine. The error is:
ERROR: column "log.created_at" must appear in the GROUP BY clause or be used in an aggregate function
When I change the GROUP BY clause to "group by created_at", it returns results, but the list contains duplicates.
Is this due to Redshift?
If you're using a GROUP BY clause, any column in your query must either appear in the clause or be wrapped in an aggregate function.
In your case, you seem to be trying to aggregate your log entries by hour. I suggest using the Postgres date manipulation functions, for example:
SELECT created_at::date AS date,
       EXTRACT(HOUR FROM created_at) AS hour
FROM log
GROUP BY date, hour;
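Alternatively, if you want to keep the original to_char formatting, a sketch that groups by both expressions instead of the tmp alias alone:
SELECT to_char(created_at, 'HH24') AS hour,
       to_char(created_at, 'YYYY-MM-DD HH24') AS tmp
FROM log
-- every non-aggregated expression in the SELECT list is repeated here
GROUP BY to_char(created_at, 'HH24'),
         to_char(created_at, 'YYYY-MM-DD HH24');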