In Postgres, I'm getting an error when I try to UNION two tables. One table has a column (Amount) of type double precision; the other table has no matching column, and I'd like its records to simply have NULL in the Amount field.
Error:
"union types text and double precision cannot be matched postgres"
Pseudo-code:
SELECT * FROM (
    SELECT
        t1.Amount AS "amount",
        NULL::DATE AS "date"
    FROM Table1 AS t1
    UNION ALL
    SELECT
        /* next line is the issue */
        NULL AS "amount",
        t2.Date AS "date"
    FROM Table2 AS t2
) AS foo
I feel fairly certain this is a simple casting problem, but I could not find anything from searching. How do I do the equivalent of NULL::DOUBLE in Postgres?
EDIT: For posterity
The accepted answer from klin, together with a_horse_with_no_name's comment, points to the "historical" Postgres cast expression ::, whose syntax is equivalent to the standard form:
CAST ( expression AS type )
expression::type
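For example, both forms yield the same double precision value:
select CAST(1 AS double precision), 1::double precision;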
And, here is a list of the postgres data types.
In Postgres, the type double precision is also known as float8 or simply float. Use one of them.
select null::double precision, null::float;
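Applied to the original query, a minimal sketch of the fix (assuming the question's table and column names):
SELECT * FROM (
    SELECT
        t1.Amount AS "amount",
        NULL::DATE AS "date"
    FROM Table1 AS t1
    UNION ALL
    SELECT
        NULL::double precision AS "amount",
        t2.Date AS "date"
    FROM Table2 AS t2
) AS foo;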
Related
select *
from nsclc_thought_spot
where patientid = 11000001
and service_date in ('2019-07-08', '2019-07-10')
order by patientid, service_date
is returning the results properly
But this is not working as expected:
select *
from nsclc_thought_spot
where patientid = 11000001
and service_date in (2019-07-08, 2019-07-10)
order by patientid, service_date
This query is not returning results.
If I have defined service_date column as date, then why do I have to pass the values in quotes inside IN operator in redshift?
Because 2019-07-08 means the integer 2019 minus the integer 7 minus the integer 8, which equals the integer 2004. In SQL, unquoted numbers are read as numeric values; to be interpreted as anything else they must be quoted (making them text values) and then cast to the needed data type. In this case '2019-07-08' is a text value, but Redshift will implicitly cast it to a date to compare it to the column "service_date".
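You can verify the arithmetic directly:
select 2019-07-08;  -- returns the integer 2004, which matches no date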
If you want to do this explicitly you can add casts to the values - ... service_date IN ('2019-07-08'::date, '2019-07-10'::date) ... - which might make things clearer for you.
There is a table:
CREATE TABLE IF NOT EXISTS dbo.gps_online_state
(
accountid integer NOT NULL,
lat double precision,
lng double precision,
updatedon timestamp without time zone DEFAULT now()
);
I wrote SQL that searches for all points in a circle with a radius of 1500 m, but the index is not used by that query:
SELECT *
from gps_online_state gps
where gps.lat <> 'NaN' and gps.lng <> 'NaN'
AND
gps.updatedon > now() - '1 minute'::interval
AND
ST_Contains(
geometry(
ST_Buffer(geography(
'SRID=4326;POINT(' || 82.599620::text ||' ' || 49.957620::text ||')'),
1500)
),
ST_SetSRID( ST_MakePoint(gps.lng, gps.lat),4326)
)
The index definition:
CREATE INDEX idx_gps_online_state_point_updatedon2
ON dbo.gps_online_state USING gist
((st_setsrid(st_makepoint(lng, lat), 4326)::geography), updatedon)
TABLESPACE pg_default
WHERE lat <> 'NaN'::double precision AND lng <> 'NaN'::double precision;
And I can't force it to work. I have tried both geography and geometry; nothing helps.
I can't find anything as old as 2.4.3 to test with, but if I go back to 2.5.5 in PG 10.18 (using PGDG apt repo), I can get it to work as long as I leave the ::geography out of the index definition. Is leaving it out incorrect for some reason? If so, can you explain why or provide an example to demonstrate the error?
You also have a problem with the other column in the index: updatedon is a timestamp, but the computed value now() - '1 minute'::interval is a timestamptz, and this mismatch will prevent the planner from using that operator from the index.
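A minimal sketch of both adjustments (the index name is hypothetical, and keeping updatedon inside a GiST index assumes the btree_gist extension is installed):
CREATE INDEX idx_gps_online_state_point_updatedon3
    ON dbo.gps_online_state USING gist
    (st_setsrid(st_makepoint(lng, lat), 4326), updatedon)
    WHERE lat <> 'NaN'::double precision AND lng <> 'NaN'::double precision;
-- In the query, compare updatedon against a plain timestamp so the types match:
--   gps.updatedon > (now() - interval '1 minute')::timestamp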
I'm running into a simple error on PostgreSQL inserting data into a new table. I'd like to use a simple query because this table is only going to store averages across different dimensions. I want my avg column to be double precision. My insert statement is
insert into benchmark_table
(select avg(s.percentage_value) as avg, s.metric_name, s.category
from some_table s group by s.category, s.metric_name);
This command fails with the following error:
ERROR:  column "avg" is of type double precision but expression is of type text
LINE 2: ...(s.percentage_value) as double precision) as avg, s.metric_n...
                                                     ^
HINT:  You will need to rewrite or cast the expression.
So I try casting my avg column to double precision:
INSERT into benchmark_table
(SELECT cast(avg(s.percentage_value) as double precision) as avg, s.metric_name, s.category
FROM some_table s group by s.category, s.metric_name);
I've also attempted
insert into benchmark_table
(Select avg(s.percentage_value)::double precision as avg, s.metric_name, s.category
from summary_view_output s group by s.category, s.metric_name);
However, I get the same error about avg being text. I understand that what's being returned from my query is a result set that is by default text, but I'm not seeing any way to convert this into another datatype for my outer INSERT statement to use.
Try changing the ::double precision to ::float and see if that works. Also, I noticed you aren't including the field names in the INSERT clause; maybe the ordinal position of the fields in benchmark_table is not the same as in the SELECT statement.
Try using:
insert into benchmark_table (avg, metric_name, category)
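A sketch of the full statement with the explicit column list (assuming benchmark_table has exactly these three columns):
insert into benchmark_table (avg, metric_name, category)
select avg(s.percentage_value), s.metric_name, s.category
from some_table s
group by s.category, s.metric_name;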
Just installed 9.4 and I'm trying to use the JSONB field type.
I've made a table with a jsonb field and I'm able to select from it:
select statistics->'statistics'->'all_trades'->'all'->'all_trades_perc_profit' as profitable_perc FROM trade_statistics
Works fine.
Now I want to filter results based on field value:
select statistics->'statistics'->'all_trades'->'all'->'all_trades_perc_profit' as profitable_perc FROM trade_statistics WHERE profitable_perc > 1
-- there is no "profitable_perc" column
Does not work.
If I try to convert the result to double, it does not work either:
select cast(statistics->'statistics'->'all_trades'->'all'->'all_trades_perc_profit' as double precision) as profitable_perc FROM trade_statistics
-- can't convert jsonb into double precision
How should I use SELECT results in the WHERE clause in the case of jsonb?
Three corrections have to be made:
Wrap the query in a subquery - you cannot reference SELECT-list aliases in the WHERE clause
Use the ->> operator to get the value as text
Cast the text value to integer so you can make the comparison
SELECT *
FROM (
SELECT (statistics->'statistics'->'all_trades'->'all'->>'all_trades_perc_profit')::integer as profitable_perc
FROM trade_statistics
) sq1
WHERE profitable_perc > 1
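Alternatively, you can skip the subquery by repeating the expression in the WHERE clause (an equivalent sketch):
SELECT (statistics->'statistics'->'all_trades'->'all'->>'all_trades_perc_profit')::integer AS profitable_perc
FROM trade_statistics
WHERE (statistics->'statistics'->'all_trades'->'all'->>'all_trades_perc_profit')::integer > 1;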
My query looks like this:
SELECT mthreport.*
FROM crosstab
('SELECT
to_char(ipstimestamp, ''mon DD HH24h'') As row_name,
varid::text || log.varid || ''_'' || ips.objectname::text As bucket,
COUNT(*)::integer As bucketvalue
FROM loggingdb_ips_boolean As log
INNER JOIN IpsObjects As ips
ON log.Varid=ips.ObjectId
WHERE ((log.varid = 37551)
OR (log.varid = 27087)
OR (log.varid = 50876)
OR (log.varid = 45096)
OR (log.varid = 54708)
OR (log.varid = 47475)
OR (log.varid = 54606)
OR (log.varid = 25528)
OR (log.varid = 54729))
GROUP BY to_char(ipstimestamp, ''yyyy MM DD HH24h''), row_name, objectid, bucket
ORDER BY to_char(ipstimestamp, ''yyyy MM DD HH24h''), row_name, objectid, bucket' )
As mthreport(item_name text, varid_37551 integer,
varid_27087 integer ,
varid_50876 integer ,
varid_45096 integer ,
varid_54708 integer ,
varid_47475 integer ,
varid_54606 integer ,
varid_25528 integer ,
varid_54729 integer ,
varid_29469 integer)
The query can be tested against a test table with this connection string:
"host=bellariastrasse.com port=5432 dbname=IpsLogging user=guest password=guest"
The query is syntactically correct and runs fine. My problem is that the COUNT(*) values always fill the leftmost column. However, in many instances the left columns should have a zero, or a NULL, and only the 2nd (or n-th) column should be filled. My brain is melting and I cannot figure out what is wrong!
The solution for your problem is to use the crosstab() variant with two parameters.
The second parameter (another query string) produces the list of output columns, so that NULL values in the data query (the first parameter) are assigned correctly.
Check the manual for the tablefunc extension, and in particular crosstab(text, text):
The main limitation of the single-parameter form of crosstab is that
it treats all values in a group alike, inserting each value into the
first available column. If you want the value columns to correspond to
specific categories of data, and some groups might not have data for
some of the categories, that doesn't work well. The two-parameter form
of crosstab handles this case by providing an explicit list of the
categories corresponding to the output columns.
Emphasis mine.
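A sketch of the two-parameter form applied to the query above (dollar quoting replaces the doubled single quotes; the simplified bucket expression and the category list are assumptions based on the varids and output columns in the question):
SELECT mthreport.*
FROM crosstab(
    $$SELECT to_char(ipstimestamp, 'mon DD HH24h') AS row_name,
             'varid_' || log.varid::text AS bucket,
             COUNT(*)::integer AS bucketvalue
      FROM loggingdb_ips_boolean AS log
      INNER JOIN IpsObjects AS ips ON log.Varid = ips.ObjectId
      WHERE log.varid IN (37551, 27087, 50876, 45096, 54708, 47475, 54606, 25528, 54729)
      GROUP BY 1, 2
      ORDER BY 1, 2$$,
    $$VALUES ('varid_37551'), ('varid_27087'), ('varid_50876'),
             ('varid_45096'), ('varid_54708'), ('varid_47475'),
             ('varid_54606'), ('varid_25528'), ('varid_54729')$$
) AS mthreport(item_name text,
               varid_37551 integer, varid_27087 integer, varid_50876 integer,
               varid_45096 integer, varid_54708 integer, varid_47475 integer,
               varid_54606 integer, varid_25528 integer, varid_54729 integer);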