extract timestamp from EPOCH column from PostgreSQL DB table - postgresql

I am trying to execute the PostgreSQL query below. The checkin_ts column is in epoch format, and I want to write a query whose output is humanly understandable (i.e. shows the timestamp in a human-readable format).
select * from users where to_timestamp(checkin_ts) >= '2017-11-11 00:00:00'
LIMIT 100;
When I try to execute the above query, I get the following error:
ERROR: execute cannot be used while an asynchronous query is underway

You need to give us the description of your table so we know the format of checkin_ts.
But to_timestamp needs text parameters, like this:
SELECT *
FROM users
WHERE to_timestamp(checkin_ts::text, 'YYYY-MM-DD HH24:MI:SS') >= '2017-11-11 00:00:00' LIMIT 100;
Also, what environment are you using to execute this query?
The error suggests that you are trying to execute two queries on the same connection using two different cursors.
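If checkin_ts really stores Unix epoch seconds (an assumption, since the table description isn't shown), to_timestamp(double precision) converts it directly, and to_char renders it for display. A minimal sketch:
-- Sketch assuming checkin_ts holds Unix epoch seconds (bigint/numeric)
SELECT *,
       to_char(to_timestamp(checkin_ts), 'YYYY-MM-DD HH24:MI:SS') AS checkin_human
FROM users
WHERE to_timestamp(checkin_ts) >= '2017-11-11 00:00:00'
LIMIT 100;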

Related

How to use an Oracle SYS_EXTRACT_UTC-like function in PostgreSQL?

I have an Oracle query that uses the SYS_EXTRACT_UTC function in its WHERE clause:
select * from my_table where sys_extract_utc(timestmp) > sys_extract_utc(last_stable_date)
But I could not use this in a PostgreSQL query, and I could not find an equivalent function.
How can I do this in PostgreSQL?
You can use AT TIME ZONE in your query to get the current UTC timestamp:
select CURRENT_TIMESTAMP AT TIME ZONE 'UTC'
PostgreSQL Community Doc
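Applied to the query from the question, a sketch (assuming timestmp and last_stable_date are timestamp/timestamptz columns of my_table):
-- AT TIME ZONE 'UTC' plays the role of Oracle's SYS_EXTRACT_UTC here
select *
from my_table
where (timestmp AT TIME ZONE 'UTC') > (last_stable_date AT TIME ZONE 'UTC');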

Grafana PostgreSQL distinct on() with time series

I'm quite new to Grafana and Postgres and could use some help with this. I have a dataset in PostgreSQL with temperature forecasts. Multiple forecasts are published at various points throughout the day (indicated by dump_date) for the same reference date. Say: at 06:00 today and at 12:00 today a forecast is published for tomorrow (where the time is indicated by start_time). Now I want to visualize the temperature forecast as a time series using Grafana. However, I only want to visualize the latest published forecast (12:00) and not both forecasts. I thought I would use DISTINCT ON() to select only the latest published forecast from this dataset, but somehow this does not work with Grafana. My query in Grafana is as follows:
SELECT
$__time(distinct on(t_ID.start_time)),
concat('Forecast')::text as metric,
t_ID.value
FROM
forecast_table t_ID
WHERE
$__timeFilter(t_ID.start_time)
and t_ID.start_time >= (current_timestamp - interval '30 minute')
and t_ID.dump_date >= (current_timestamp - interval '30 minute')
ORDER BY
t_ID.start_time asc,
t_ID.dump_date desc
This is not working, however, since I get the message 'syntax error at or near AS'. What should I do?
You are using Grafana macro $__time, so your query in the editor:
SELECT
$__time(distinct on(t_ID.start_time)),
generates SQL:
SELECT
distinct on(t_ID.start_time AS "time"),
which is incorrect SQL syntax.
I wouldn't use the macro. I would write correct SQL directly, e.g.
SELECT
DISTINCT ON (t_ID.start_time) t_ID.start_time AS "time",
Also use Grafana's Generated SQL and Query inspector features for debugging and query development. Make sure that Grafana generates correct SQL for Postgres.
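Put together, a sketch of the full query without the $__time macro, reusing the column and table names from the question (the $__timeFilter macro is left in place, since it expands inside the WHERE clause where it is valid):
SELECT
  DISTINCT ON (t_ID.start_time) t_ID.start_time AS "time",
  concat('Forecast')::text AS metric,
  t_ID.value
FROM forecast_table t_ID
WHERE
  $__timeFilter(t_ID.start_time)
  AND t_ID.start_time >= (current_timestamp - interval '30 minute')
  AND t_ID.dump_date >= (current_timestamp - interval '30 minute')
ORDER BY
  t_ID.start_time ASC,
  t_ID.dump_date DESC;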

Postgres SQL query runs very slow when using current_date

We have a SQL query using a WHERE clause to filter the data set by date. Currently the WHERE clause in the query is set as follows (where the interval is the last 90 days):
WHERE date_field BETWEEN current_date - integer #interval AND current_date
This query has suddenly started to slow down over the past couple of days; it now takes upwards of 10 minutes. If we remove current_date from this query and hard-code the dates instead, it runs like it used to, in less than 10 seconds.
Hard-coded version
WHERE date_field BETWEEN '03-12-2020' AND '06-12-2020'
This query is run against the Postgres engine in Amazon Aurora. The same WHERE clause is used in other queries that filter on the same field, and those are not impacted by this issue.
I am trying to figure out how we can determine what caused this issue so suddenly and how we can resolve it.
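One way to see what changed is to compare the execution plans of the two variants; a diagnostic sketch (my_table is a hypothetical name standing in for the real query, and 90 stands in for the interval parameter):
-- Plan with current_date arithmetic
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table
WHERE date_field BETWEEN current_date - 90 AND current_date;

-- Plan with hard-coded dates, for comparison
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM my_table
WHERE date_field BETWEEN '2020-03-12' AND '2020-06-12';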

How to migrate "as of timestamp" query in PostgreSQL

I want to migrate, or write an equivalent query for, a query that gets the data from a table as it was one hour before the current time, in PostgreSQL.
Oracle query:
select *
from T_DATA as of timestamp (systimestamp - interval '60' minute);
select * from T_DATA where timestamp_column >= now() - interval '1 hour'
Since flashback queries are not supported in PostgreSQL, one approach I tried was the temporal_tables extension.
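A rough sketch of an "as of" query with the temporal_tables extension, assuming t_data has already been set up for versioning (a sys_period tstzrange column maintained by the versioning() trigger and a t_data_history table; these names are illustrative):
-- Rows valid one hour ago: current rows plus historical versions
SELECT * FROM t_data
WHERE sys_period @> (now() - interval '1 hour')
UNION ALL
SELECT * FROM t_data_history
WHERE sys_period @> (now() - interval '1 hour');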

postgres(redshift) query including to_char and group by returns some errors

I'm using Redshift now, and I'd like to run a query like:
SELECT to_char(created_at, 'HH24') AS hour , to_char(created_at, 'YYYY-MM-DD HH24') AS tmp FROM log GROUP BY tmp;
This returns an error, although it works fine when I run it in MySQL.
The error is:
ERROR: column "log.created_at" must appear in the GROUP BY clause or be used in an aggregate function
When I change the GROUP BY clause to "group by created_at", it returns results, but the list contains duplicates.
Is this due to Redshift?
If you're using a GROUP BY clause, any column in your query must either appear in the clause or you have to specify how you want it to be aggregated.
In your case, you seem to be trying to aggregate your log entries by hour. I suggest using the postgres date manipulation functions, for example:
SELECT created_at::date AS date,
extract('HOUR' FROM created_at) as hour
FROM log
GROUP BY date, hour;
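Another common pattern for hourly buckets, assuming that is the goal, uses date_trunc, which is available in both Postgres and Redshift (count(*) here is just an illustrative aggregate):
SELECT date_trunc('hour', created_at) AS hour,
       count(*) AS entries
FROM log
GROUP BY 1
ORDER BY 1;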