I have a table with a money-type column in my PostgreSQL database. I want to connect the data to Google Data Studio, but it doesn't support the money type.
Is there a way to convert money to double or bigint in a query like the following, or with some equivalent query?
SELECT todouble(maintenance.cost) FROM maintenance
Thanks!
The most natural choice is numeric:
select 123.45::money::numeric;
numeric
---------
123.45
(1 row)
You can also use integer or bigint, but then you have to take care of handling the fractional part yourself. Do not try real or double precision; per the documentation:
Floating point numbers should not be used to handle money due to the potential for rounding errors.
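If you do want an integer, a minimal sketch is to store cents instead (this assumes a currency with two fractional digits; note that money does not cast to bigint directly, so go through numeric first):
SELECT (maintenance.cost::numeric * 100)::bigint AS cost_cents
FROM maintenance;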
You can use ::numeric::float8 to convert it, so it becomes:
SELECT maintenance.cost::numeric::float8 FROM maintenance
You can see it in the documentation here:
https://www.postgresql.org/docs/current/datatype-money.html
Related
We are investigating using PostGIS to perform some spatial filtering of data that has been gathered from a roving GPS engine. We have defined some start and end points that we use in our processing with the following table structure:
CREATE TABLE IF NOT EXISTS tracksegments
(
idtracksegments bigserial NOT NULL,
name text,
approxstartpoint geometry,
approxendpoint geometry,
maxpoints integer
);
If the data in this table is queried:
SELECT ST_AsText(approxstartpoint) FROM tracksegments
we get ...
POINT(-3.4525845 58.5133318)
Note that the Long/Lat points are given to 7 decimal places.
To get just the longitude element, we tried:
SELECT ST_X(approxstartpoint) AS long FROM tracksegments
we get ...
-3.45
We need much more precision than the 2 decimal places that are returned. We've searched the documentation and there does not appear to be a way to set the level of precision. Any help would be appreciated.
Vance
Your problem is definitely client related. Your client is most likely truncating double precision values for some reason. As ST_AsText returns a text value, it does not get affected by this behaviour.
ST_X does not truncate the coordinate's precision like that, e.g.
SELECT ST_X('POINT(-3.4525845 58.5133318)');
st_x
------------
-3.4525845
(1 row)
Tested with psql on PostgreSQL 9.5 + PostGIS 2.2 and PostgreSQL 12.3 + PostGIS 3.0, and with pgAdmin III.
Note: PostgreSQL 9.5 is a pretty old release! Besides the fact that it will reach EOL next January, you're missing some really kickass features in the newer releases. I sincerely recommend planning a system upgrade as soon as possible.
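If you need a workaround while you track down the client issue, one sketch (using your tracksegments table) is to force the server to return text, so no client-side double formatting can apply:
SELECT ST_X(approxstartpoint)::text AS long FROM tracksegments;
Newer PostGIS releases also let you control the number of output digits in ST_AsText, e.g. ST_AsText(approxstartpoint, 7).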
So I learned in my SQL course last week how to turn a string into an integer. The table we used for this was timezone based, so the value was a '-5' hours offset.
In order to do this we had to cast the string to a DECIMAL and then to a SMALLINT. It was pretty simple once I knew that, so that's not where my question lies.
What I'm curious about is why a SMALLINT wouldn't take a negative sign but a DECIMAL could. According to the specs, a SMALLINT can still go down to -32768. Does anyone know if this behaviour persists in all coding languages or is it SQL specific? And what is it that won't allow the cast?
I don't see why you would bother doing any casting to begin with. According to the documentation (see the table a third of the way down the page), T-SQL supports implicit conversion between varchar and smallint.
DECLARE @negative_varchar VARCHAR(10) = '-5'
DECLARE @negative_smallint SMALLINT = CONVERT(SMALLINT, @negative_varchar)
DECLARE @negative_smallint_implicit SMALLINT = @negative_varchar
SELECT @negative_varchar, @negative_smallint, @negative_smallint_implicit
Produces
---------- ------ ------
-5         -5     -5
No, this is what I'm saying:
DECLARE @s AS NVARCHAR(50)
SET @s = '-5'
SELECT CAST(@s AS SMALLINT)
Again, you have to cast from decimal and then to smallint. I'm asking about the underlying code behind the process, such as: does one of the bytes hold whether a number is positive or negative, etc...?
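For what it's worth, a SMALLINT is stored as a 16-bit two's-complement integer, so the sign lives in the most significant bit rather than in a dedicated byte. A quick T-SQL sketch to peek at the raw bytes (variable names are just for illustration):
DECLARE @neg SMALLINT = -5, @pos SMALLINT = 5
-- Two's complement: -5 is stored as 0xFFFB, 5 as 0x0005
SELECT CONVERT(BINARY(2), @neg) AS neg_bytes, CONVERT(BINARY(2), @pos) AS pos_bytes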
Bigint in PostgreSQL is an 8-byte integer, which has half the range of uint64 (as one bit is used for the sign).
I need to do a lot of aggregation on the column, and I am under the impression that aggregation on the NUMERIC type is slow in comparison to integer types.
How should I optimize my storage in this case?
Unless you have a concrete reason, just use NUMERIC. It is slower, quite a lot slower, but that might not matter as much as you think.
You don't really have any alternative, as PostgreSQL doesn't support unsigned 64-bit integers at the SQL level. You could add a new datatype as an extension module, but it'd be a lot of work.
You could shove the unsigned 64-bit int bitwise into a 64-bit signed int, so values above maxuint64/2 are negative. But that'll be totally broken for aggregation, and would generally be horribly ugly.
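For what it's worth, a sketch of that wraparound mapping in plain SQL (illustrative only; here the unsigned value is held in a numeric):
-- store: values >= 2^63 wrap around to negative bigints
SELECT CASE WHEN u >= 9223372036854775808
            THEN (u - 18446744073709551616)::bigint
            ELSE u::bigint END AS stored
FROM (VALUES (18446744073709551615::numeric)) v(u);
-- load: undo the wrap
SELECT CASE WHEN s < 0
            THEN s::numeric + 18446744073709551616
            ELSE s::numeric END AS unsigned_value
FROM (VALUES ((-1)::bigint)) v(s);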
sum() returns numeric when the input is bigint, so it will not overflow:
select sum(a)
from (values (9223372036854775807::bigint), (9223372036854775807)) s(a)
;
sum
----------------------
18446744073709551614
http://www.postgresql.org/docs/current/static/functions-aggregate.html
There is also an extension, by Peter Eisentraut, that provides an additional uint64 data type in PostgreSQL. See GitHub.
I've been working with postgresql-9.1 recently.
For some reason I have to use a tech stack that does not support the data type numeric, only decimal. Unfortunately, the data type of the columns I've declared as decimal in my PostgreSQL database always comes back as numeric. I tried to alter the type, but it did not work, even though I got a message like "Query returned successfully with no result in 12 ms".
So, I want to know how I can get the columns to be decimal.
Any help will be highly appreciated.
e.g.
My creating clauses:
CREATE TABLE IF NOT EXISTS htest
(
dsizemin decimal(8,3) NOT NULL,
dsizemax decimal(8,3) NOT NULL,
hidentifier character varying(10) NOT NULL,
tgrade character varying(10) NOT NULL,
fdvalue decimal(8,3),
CONSTRAINT htest_pkey PRIMARY KEY (dsizemin , dsizemax , hidentifier , tgrade )
);
My altering clauses:
ALTER TABLE htest
ALTER COLUMN dsizemin TYPE decimal(8,3);
But it does not work.
In PostgreSQL, "decimal" is an alias for "numeric", which poses some problems when your app expects a type called "decimal" from the database. As Craig noted above, you can't even create a domain called "decimal".
There is no good workaround on the database side. The only thing you can do is change the application to expect a numeric data type back.
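A quick sketch showing the aliasing, using a throwaway table (the name is just for illustration): even though the column is declared as decimal, the catalog reports numeric:
CREATE TABLE htest_demo (val decimal(8,3));
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'htest_demo';
 column_name | data_type
-------------+-----------
 val         | numeric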
Use numeric(precision, scale) to store decimals.
precision is the total number of digits stored (on both sides of the decimal point combined); scale is the number of those digits that fall to the right of the decimal point.
So numeric(5,5) would imply you only want numbers less than 1 (negative or positive) with 5 decimal places. If PostgreSQL errors out because it counts the leading 0 as a digit, try numeric(6,5) instead.
0.12345 would be an example of the above.
1.12345 would need a field Numeric (6,5)
100.12345 would need a field Numeric (8,5)
-100.12345 would need a field Numeric (8,5)
When you write a plain select statement to look at the decimals, your client may round the display to 2 places; but if you do something like SELECT 100 * [field] FROM [table], the extra decimals should start appearing...
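A few casts that sketch the examples above (these should run as-is in psql):
SELECT 0.12345::numeric(5,5);      -- fits: all 5 digits are after the point
SELECT 1.12345::numeric(6,5);      -- one digit before the point, five after
SELECT 100.12345::numeric(8,5);    -- three digits before the point, five after
SELECT (-100.12345)::numeric(8,5); -- the sign does not count toward precision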
Thanks to a last-minute client request, an integer field in our database now needs to be a decimal, to two places. A value of 23 should become 23.00.
Is there a nice way I can convert the table and cast the data across?
I'll freely admit, I haven't done anything like this with PostgreSQL before so please be gentle with me.
Something like this should work:
alter table t alter column c type decimal(10,2);
Edit:
As @Oli stated in the comments, the first number is the total number of digits (excluding the point), so the max value for (10,2) would be 99999999.99
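A quick end-to-end sketch on a throwaway table (the name is just for illustration); PostgreSQL casts the existing integers across automatically here, so no USING clause is needed:
CREATE TABLE t (c integer);
INSERT INTO t VALUES (23);
ALTER TABLE t ALTER COLUMN c TYPE decimal(10,2);
SELECT c FROM t;  -- 23.00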
alter table table_name alter column columnname type decimal(16,2);
This converts the column data type from int to decimal with two digits after the decimal point. Values with more decimals are rounded: for example, 10.285 becomes 10.29. You can also use numeric in place of decimal; it works the same way.
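For instance, to see the rounding behaviour described above (PostgreSQL rounds numeric values half away from zero):
SELECT 10.285::decimal(16,2);  -- 10.29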