I'm a newbie on BigQuery. Is it possible to get hit.second somehow in BigQuery?
My idea would then be to concatenate hits_time, hits_hour and hits_minute with date, to display the date with time (YYYY-MM-DD HH:MM:SS) in a column.
Thanks for your help
Try below:
#standardSQL
SELECT
  visitId,
  hit.hitNumber,
  TIMESTAMP_SECONDS(visitStartTime) AS visitStart,
  TIMESTAMP_MILLIS(1000 * visitStartTime + hit.time) AS hitStart
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`, UNNEST(hits) AS hit
ORDER BY 1, 2
LIMIT 100
From the BigQuery Export Schema:
hits.time (INTEGER): The number of milliseconds after the visitStartTime when this hit was registered. The first hit has a hits.time of 0.
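If the goal is the combined YYYY-MM-DD HH:MM:SS string from the question, the same hit timestamp can be formatted directly with FORMAT_TIMESTAMP; a minimal sketch against the same public sample table:

```sql
#standardSQL
SELECT
  visitId,
  hit.hitNumber,
  -- format the per-hit timestamp as YYYY-MM-DD HH:MM:SS
  FORMAT_TIMESTAMP('%Y-%m-%d %H:%M:%S',
    TIMESTAMP_MILLIS(1000 * visitStartTime + hit.time)) AS hit_datetime
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`, UNNEST(hits) AS hit
LIMIT 100
```

There is no need to concatenate hits_hour/hits_minute fields by hand; the seconds come out of hit.time automatically.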
I want to find the latest value of a column for a particular time duration (1 minute in my case) from a Kusto table.
I have time-series data in a PostgreSQL table and I am using the last() function (https://docs.timescale.com/api/latest/hyperfunctions/last/) to find the latest value of scaled_value for a 1-minute time bucket, and I want to do the same in the Kusto table. What is the correct function to use in Kusto corresponding to the last() function in PostgreSQL?
Code I am using in PostgreSQL:
SELECT
  CAST(EXTRACT(EPOCH FROM time_bucket('1 minutes', timestamp) AT TIME ZONE 'UTC') * 1000 AS BIGINT) AS timestamp_epoch,
  vessel_telemetry.timeSeries,
  last(vessel_telemetry.scaled_value, vessel_telemetry.timestamp) AS scaled_value
FROM shipping.vessel_telemetry
WHERE vessel_telemetry.ingested_timestamp >= '2022-07-20T10:10:58.71Z' AND vessel_telemetry.ingested_timestamp < '2022-07-20T10:15:33.703985Z'
GROUP BY time_bucket('1 minutes', vessel_telemetry.timestamp), vessel_telemetry.timeSeries
Corresponding code I am using in ADX:
VesselTelemetry_DS
| where ingested_timestamp >= datetime(2022-07-20T10:10:58.71Z) and ingested_timestamp < datetime(2022-07-20T10:15:33.703985Z)
| summarize max_scaled_value = max(scaled_value) by bin(timestamp, 1m), timeSeries
| project timestamp_epoch =(datetime_diff('second', timestamp, datetime(1970-01-01)))*1000, timeSeries, max_scaled_value
The data that I am getting using PostgreSQL does not match the data I am getting from the ADX query. I think the functionality of the last() function of Postgres is different from the max() function of ADX. Is there a function in ADX that performs the same as last() of PSQL?
Use arg_max():
arg_max (ExprToMaximize, * | ExprToReturn [, ...])
Please note the order of the parameters, which is opposite to Timescale's last():
first the expression to maximize, in your case timestamp, and then the expression(s) to return, in your case scaled_value.
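Applied to the query from the question, a sketch (assuming the same table and column names; the alias bucket is introduced here to avoid a name clash with the timestamp column that arg_max returns):

```kusto
VesselTelemetry_DS
| where ingested_timestamp >= datetime(2022-07-20T10:10:58.71Z) and ingested_timestamp < datetime(2022-07-20T10:15:33.703985Z)
// arg_max(timestamp, scaled_value): for each group, return scaled_value
// from the row with the latest timestamp - the equivalent of Timescale's last()
| summarize arg_max(timestamp, scaled_value) by bucket = bin(timestamp, 1m), timeSeries
| project timestamp_epoch = datetime_diff('second', bucket, datetime(1970-01-01)) * 1000, timeSeries, scaled_value
```

Unlike max(scaled_value), this picks the scaled_value belonging to the newest row in each bucket, which is what last() does in PostgreSQL.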
I wrote the query below to calculate the death percentage in PostgreSQL and the result is zero, but the same query in BigQuery gets the correct result. Does anyone have an idea? Thanks!
SELECT
  location,
  MAX(total_cases) AS Cases_total,
  MAX(total_deaths) AS Death_total,
  (MAX(total_deaths) / MAX(total_cases)) * 100 AS DeathPercentage
FROM covid_deaths
WHERE continent IS NOT NULL
GROUP BY location
ORDER BY DeathPercentage DESC;
The result is expected: your operation is between integers, so the result is an integer.
112 / 9766 = 0, and 0 * 100 = 0
If you want a numeric result you have to cast your columns to numeric:
SELECT
  location,
  MAX(total_cases) AS Cases_total,
  MAX(total_deaths) AS Death_total,
  (MAX(total_deaths)::numeric / MAX(total_cases)::numeric) * 100 AS DeathPercentage
FROM covid_deaths
WHERE continent IS NOT NULL
GROUP BY location
ORDER BY DeathPercentage DESC;
You do integer division. The result will always be 0 whenever total_deaths < total_cases (which is almost certainly your case). You should cast at least one operand to float or decimal, e.g.
(MAX(total_deaths)::decimal / MAX(total_cases)) * 100 AS DeathPercentage
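The difference is easy to verify in psql; a minimal sketch using the numbers from above:

```sql
SELECT 112 / 9766;           -- integer division: 0
SELECT 112::decimal / 9766;  -- numeric division: 0.0114...
```

BigQuery gives the expected result with the unmodified query because its / operator always performs floating-point division, while PostgreSQL keeps integer division when both operands are integers.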
I have a date column and a time column in my PostgreSQL table. I wish to write a query that filters rows that are not expired, based on date and time. I tried this, but it does not work and returns an error: (Postgrex.Error) ERROR 42601 (syntax_error) syntax error at or near...
from q in Line, where: fragment("date ? + time ? > NOW()", q.date, q.time)
I think this problem can be solved by not using time and date prefixes:
from q in Line, where: fragment("? + ? > NOW()", q.date, q.time)
or even
from q in Line, where: q.date + q.time < fragment("NOW()")
Provided your columns have the correct data types.
Not sure if you need to run a standard query or if you are filtering via some GUI, but time and date types can be combined via simple addition: https://www.postgresql.org/docs/current/functions-datetime.html
The following code:
WITH q AS (
  SELECT * FROM (VALUES
    ('11:29:10'::time, '03-18-2019'::date),
    ('11:29:10'::time, '03-18-2021'::date)
  ) t ("time", "date")
)
SELECT * FROM q WHERE q.time + q.date > NOW()
Should only print the date in the future, which is what you are trying to achieve.
Hope this helps!
How would I write the date range condition correctly for the following query: list all instruments from table "asset" where "maturity_dt" is more than 1 year away...
In English it sounds like: "AND asset.maturity_dt >= Today + 365..."
If you are using MySQL you will need a query like:
SELECT * FROM asset WHERE DATEDIFF(maturity_dt, NOW()) > 365;
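If the table is in PostgreSQL instead, interval arithmetic on dates works directly; a sketch, assuming maturity_dt is a date or timestamp column:

```sql
-- instruments maturing more than one year from today
SELECT * FROM asset
WHERE maturity_dt >= CURRENT_DATE + INTERVAL '1 year';
```

Using '1 year' rather than a literal 365 days also handles leap years correctly.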
I'm new to PostgreSQL. I need to convert a minutes value in a column into hours-and-minutes format. I have searched various sources but failed to achieve it. Someone please help me achieve this.
Meanwhile, I tried to use to_char() as follows:
UPDATE tablename SET col2 = TO_CHAR((col1 * 60 || ' seconds')::interval, 'HH24:MI:SS') WHERE id = 145;
but I get the following error...
column "late_by" is of type timestamp with time zone but expression is of type text
LINE 2: UPDATE attendance SET late_by = TO_CHAR(((lateby*60 || 'seco...
^
HINT: You will need to rewrite or cast the expression.
It is unclear what type each column in your statement is.
But if it helps, you can perform maths on intervals:
select interval '60 seconds' * 15;
or in your case, if col1 is an integer: interval '60 seconds' * col1;
Your column late_by is of type timestamp, yet you want to update it with an interval, not a timestamp. If you want to store how much time somebody is late, better use an interval column, e.g.:
t=# create table w3 (t interval);
CREATE TABLE
t=# insert into w3 select (185||' seconds')::interval;
INSERT 0 1
t=# select * from w3;
t
----------
00:03:05
(1 row)
As you can see, the "conversion" to minutes is done by Postgres itself.
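Putting both answers together for the original question, a sketch (assuming, as in the question, that col1 holds a number of minutes and col2 is a text column):

```sql
-- interval arithmetic converts the minute count; to_char() formats it
SELECT to_char(interval '1 minute' * 185, 'HH24:MI:SS');  -- 03:05:00

UPDATE tablename
SET col2 = to_char(interval '1 minute' * col1, 'HH24:MI:SS')
WHERE id = 145;
```

Note that to_char() returns text, which is exactly why the original UPDATE failed against a timestamp column; an interval column can be assigned interval '1 minute' * col1 directly, with no formatting step at all.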