Oracle round function with scientific notation - oracle10g

Does anyone know how to get this to work in Oracle? The value is what is stored in the table, and I need to round it to 2 digits:
Select round(3.00345678908765E+60, 2) from dual

Because the value is of the order of 10^60, rounding the mantissa to 2 decimal places means rounding to the nearest 10^58; you need to shift the rounding by -60 decimal places (so from +2 to -58):
Select round( 3.00345678908765E+60, -58) from dual
Outputs:
| ROUND(3.00345678908765E+60,-58) |
| ------------------------------------------------------------: |
| 3000000000000000000000000000000000000000000000000000000000000 |
If you want to round to 3 significant figures:
SELECT ROUND( value, 2 - FLOOR( LOG( 10, value ) ) ) FROM table_name;
Which, for the sample data:
CREATE TABLE table_name ( value ) AS
SELECT 3.00345678908765E+60 FROM DUAL UNION ALL
SELECT 3.00555678908765E+60 FROM DUAL UNION ALL
SELECT 3.05345678908765E+58 FROM DUAL;
Outputs:
| ROUND(VALUE,2-FLOOR(LOG(10,VALUE))) |
| ------------------------------------------------------------: |
| 3000000000000000000000000000000000000000000000000000000000000 |
| 3010000000000000000000000000000000000000000000000000000000000 |
| 30500000000000000000000000000000000000000000000000000000000 |
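One caveat with that approach: LOG(10, value) raises an error for zero or negative values. If the column can contain those, a guarded variant (a sketch, not part of the original answer) would be:
SELECT ROUND( value, 2 - FLOOR( LOG( 10, ABS( value ) ) ) )
FROM table_name
WHERE value <> 0;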
Or you can use TO_CHAR to convert the value to a string rounded in scientific notation:
SELECT TO_CHAR( value, 'FM0.00EEEE' ) AS as_string,
TO_NUMBER(TO_CHAR( value, 'FM0.00EEEE' )) AS as_number
FROM table_name;
Outputs:
AS_STRING | AS_NUMBER
:-------- | ------------------------------------------------------------:
3.00E+60 | 3000000000000000000000000000000000000000000000000000000000000
3.01E+60 | 3010000000000000000000000000000000000000000000000000000000000
3.05E+58 | 30500000000000000000000000000000000000000000000000000000000
db<>fiddle here

How can I retrieve the complete record with the maximum value grouped by another column in Postgres?

I have a table (in fact it is the result of a big query, so please don't suggest joins over the table) as follows:
date | priority | data
20200301 | 1 | 0.3
20200301 | 2 | 0.4
20200302 | 2 | 0.4
20200302 | 3 | 0.1
20200303 | 1 | 0.8
So, I want the date and the data with the LOWEST priority of each date, so the result of the query I'm looking for would be:
date | priority | data
20200301 | 1 | 0.3
20200302 | 2 | 0.4
20200303 | 1 | 0.8
Whenever I try a GROUP BY clause, the query cannot return the data column, since it is neither grouped nor aggregated.
You can use the row_number window function for this:
CREATE TABLE t (
"date" INTEGER,
"priority" INTEGER,
"data" FLOAT
);
INSERT INTO t ("date", "priority", "data")
VALUES (20200301, 1, 0.3)
     , (20200301, 2, 0.4)
     , (20200302, 2, 0.4)
     , (20200302, 3, 0.1)
     , (20200303, 1, 0.8);
SELECT *
FROM (
SELECT *, row_number() OVER (PARTITION BY date ORDER BY priority)
FROM t
) f
WHERE row_number = 1
returns:
+--------+--------+----+----------+
|date |priority|data|row_number|
+--------+--------+----+----------+
|20200301|1 |0.3 |1 |
|20200302|2 |0.4 |1 |
|20200303|1 |0.8 |1 |
+--------+--------+----+----------+
As mentioned by @david in the comments, it might be more efficient to filter the rows based on "priority = min priority for that date" (instead of ranking them and filtering afterwards):
SELECT *
FROM t
WHERE (date, priority) IN (
SELECT date, MIN(priority)
FROM t
GROUP BY date
)
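Postgres also offers the non-standard DISTINCT ON clause, which expresses "first row per group" directly; a minimal sketch against the same table, keeping the lowest-priority row per date:
SELECT DISTINCT ON (date) *
FROM t
ORDER BY date, priority;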

Given rows representing a path, union a total row

Say I have a table like the following, representing the path a -> b -> c -> d -> e -> f:
+------+----+--------+
| from | to | weight |
+------+----+--------+
| a | b | 1 |
| b | c | 2 |
| c | d | 1 |
| d | e | 1 |
| e | f | 3 |
+------+----+--------+
Each row knows where it came from and where it is going.
I would like to union a total row that takes the starting name, ending name, and a total weight like so:
+------+----+--------+
| from | to | weight |
+------+----+--------+
| a | f | 8 |
+------+----+--------+
The first table is a result of a CTE expression, and I can easily get the total of the previous query with SUM, but I'm unable to get the LAST_VALUE to work in a similar way to:
WITH RECURSIVE cte AS (
...
)
SELECT *
FROM cte
UNION ALL
SELECT 'total', FIRST_VALUE(from), LAST_VALUE(to), SUM(weight)
FROM cte
The FIRST_VALUE and LAST_VALUE functions require OVER clauses, which seem to add unnecessary complication to something I would expect to be simple, so I think I am going in the wrong direction with that. Any ideas on how to achieve this?
So I made a strange solution that:
Selects the first from value (partitioned by TRUE)
Selects the last to value (partitioned by TRUE again)
Cross joins the sum of all weights, limited to 1
WITH RECURSIVE cte AS (
...
)
SELECT *
FROM cte
UNION ALL (
SELECT FIRST_VALUE("from") OVER (PARTITION BY TRUE), LAST_VALUE("to") OVER (PARTITION BY TRUE), total
FROM cte
CROSS JOIN (
SELECT SUM(weight) as total
FROM cte
) tmp
LIMIT 1
);
Is it hacky? Yes. Does it work? Also yes. I'm sure there are better solutions, and I would love to hear them.
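For reference, if the CTE exposes a column that defines the path order (called step below; that is an assumption, since the CTE body is elided), plain aggregates can build the total row without window functions. Note that from and to are reserved words and must be quoted:
SELECT (array_agg("from" ORDER BY step))[1]    AS "from", -- first node
       (array_agg("to" ORDER BY step DESC))[1] AS "to",   -- last node
       SUM(weight)                             AS weight
FROM cte;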

How would you read a csv in a stored procedure such that the csv needs data extraction?

The csv has urls of images in the format -
www.domain.com/table_id/x_y_height_width.jpg
We want to extract table_id, x, y, height and width from these urls in a stored procedure and then use these parameters in multiple sql queries.
How can we do that?
You can combine the regexp_split_to_array and split_part functions:
create or replace function split_url (
_url text, out table_id int, out x int, out y int, out height int, out width int
) as $$
select
a[2]::int,
split_part(a[3], '_', 1)::int,
split_part(a[3], '_', 2)::int,
split_part(a[3], '_', 3)::int,
split_part(split_part(a[3], '_', 4), '.', 1)::int
from (values
(regexp_split_to_array(_url, '/'))
) rsa(a);
$$ language sql immutable;
select *
from split_url('www.domain.com/234/34_12_400_300.jpg');
table_id | x | y | height | width
----------+----+----+--------+-------
234 | 34 | 12 | 400 | 300
To use the function with other tables, do a lateral join:
with t (url) as ( values
('www.domain.com/234/34_12_400_300.jpg'),
('www.examplo.com/984/12_90_250_360.jpg')
)
select *
from
t
cross join lateral
split_url(url)
;
url | table_id | x | y | height | width
---------------------------------------+----------+----+----+--------+-------
www.domain.com/234/34_12_400_300.jpg | 234 | 34 | 12 | 400 | 300
www.examplo.com/984/12_90_250_360.jpg | 984 | 12 | 90 | 250 | 360
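On PostgreSQL 10 or later, an alternative is to capture all five parts in a single regexp_match call (a sketch that assumes every URL matches this exact shape):
select m[1]::int as table_id,
       m[2]::int as x,
       m[3]::int as y,
       m[4]::int as height,
       m[5]::int as width
from regexp_match('www.domain.com/234/34_12_400_300.jpg',
                  '/(\d+)/(\d+)_(\d+)_(\d+)_(\d+)\.jpg$') as t(m);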

PostgreSQL JSONB: query the types of each key

I have the following table:
CREATE TABLE x(
id BIGSERIAL PRIMARY KEY,
data JSONB
);
INSERT INTO x(data)
VALUES( '{"a":"test", "b":123, "c":null, "d":true}' ),
( '{"a":"test", "b":123, "c":null, "d":"yay", "e":"foo", "f":[1,2,3]}' );
How can I query the type of each key in that table, so that it gives an output something like this:
a | string:2
b | number:2
c | null:2
d | boolean:1 string:1
e | string:1
f | jsonb:1 -- or anything
I only know the way to get the keys and count, but don't know how to get the type of each key:
SELECT jsonb_object_keys(data), COUNT(id) FROM x GROUP BY 1 ORDER BY 1
that would give something like:
a | 2
b | 2
c | 2
d | 2
e | 1
f | 1
EDIT:
As pozs points out, there are two typeof functions: one for JSON and one for SQL. This query is the one you're looking for:
SELECT
json_data.key,
jsonb_typeof(json_data.value),
count(*)
FROM x, jsonb_each(x.data) AS json_data
group by key, jsonb_typeof
order by key, jsonb_typeof;
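For the sample rows above, this gives:
 key | jsonb_typeof | count
-----+--------------+-------
 a   | string       | 2
 b   | number       | 2
 c   | null         | 2
 d   | boolean      | 1
 d   | string       | 1
 e   | string       | 1
 f   | array        | 1
(7 rows)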
Old Answer: (Hey, it works...)
This query will return the type of the keys:
SELECT
json_data.key,
pg_typeof(json_data.value),
json_data.value
FROM x, jsonb_each(x.data) AS json_data;
... unfortunately, you'll notice that Postgres doesn't differentiate between the different JSON types; it regards them all as jsonb, so the results are:
 key  | pg_typeof | value
------+-----------+-----------
 a    | jsonb     | "test"
 b    | jsonb     | 123
 c    | jsonb     | null
 d    | jsonb     | true
 a    | jsonb     | "test"
 b    | jsonb     | 123
 c    | jsonb     | null
 d    | jsonb     | "yay"
 e    | jsonb     | "foo"
 f    | jsonb     | [1, 2, 3]
(10 rows)
However, there aren't that many JSON primitive types, and the output seems to be unambiguous. So this query will do what you're wanting:
with jsontypes as (
SELECT
json_data.key AS key1,
CASE WHEN left(json_data.value::text,1) = '"' THEN 'String'
WHEN json_data.value::text ~ '^-?\d' THEN
CASE WHEN json_data.value::text ~ '\.' THEN 'Number'
ELSE 'Integer'
END
WHEN left(json_data.value::text,1) = '[' THEN 'Array'
WHEN left(json_data.value::text,1) = '{' THEN 'Object'
WHEN json_data.value::text in ('true', 'false') THEN 'Boolean'
WHEN json_data.value::text = 'null' THEN 'Null'
ELSE 'Beats Me'
END as jsontype
FROM x, jsonb_each(x.data) AS json_data -- Note that it won't work if we use jsonb_each_text here because the strings won't have quotes around them, etc.
)
select *, count(*) from jsontypes
group by key1, jsontype
order by key1, jsontype;
Output:
key1 | jsontype | count
------+----------+-------
a | String | 2
b | Integer | 2
c | Null | 2
d | Boolean | 1
d | String | 1
e | String | 1
f | Array | 1
(7 rows)
You can improve your last query with jsonb_typeof:
with jsontypes as (
SELECT
json_data.key AS key1,
jsonb_typeof(json_data.value) as jsontype
FROM x, jsonb_each(x.data) AS json_data
)
select *, count(*)
from jsontypes
group by 1, 2
order by 1, 2;

Equivalent to unpivot() in PostgreSQL

Is there a unpivot equivalent function in PostgreSQL?
Create an example table:
CREATE TEMP TABLE foo (id int, a text, b text, c text);
INSERT INTO foo VALUES (1, 'ant', 'cat', 'chimp'), (2, 'grape', 'mint', 'basil');
You can 'unpivot' or 'uncrosstab' using UNION ALL:
SELECT id,
'a' AS colname,
a AS thing
FROM foo
UNION ALL
SELECT id,
'b' AS colname,
b AS thing
FROM foo
UNION ALL
SELECT id,
'c' AS colname,
c AS thing
FROM foo
ORDER BY id;
This runs 3 different subqueries on foo, one for each column we want to unpivot, and returns, in one table, every record from each of the subqueries.
But that will scan the table N times, where N is the number of columns you want to unpivot. This is inefficient, and a big problem when, for example, you're working with a very large table that takes a long time to scan.
Instead, use:
SELECT id,
unnest(array['a', 'b', 'c']) AS colname,
unnest(array[a, b, c]) AS thing
FROM foo
ORDER BY id;
This is easier to write, and it will only scan the table once.
array[a, b, c] returns an array object, with the values of a, b, and c as its elements.
unnest(array[a, b, c]) breaks the results into one row for each of the array's elements.
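For the sample data, both queries return the following (the order of rows within each id is not strictly guaranteed):
 id | colname | thing
----+---------+-------
  1 | a       | ant
  1 | b       | cat
  1 | c       | chimp
  2 | a       | grape
  2 | b       | mint
  2 | c       | basil
(6 rows)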
You could use VALUES() and JOIN LATERAL to unpivot the columns.
Sample data:
CREATE TABLE test(id int, a INT, b INT, c INT);
INSERT INTO test(id,a,b,c) VALUES (1,11,12,13),(2,21,22,23),(3,31,32,33);
Query:
SELECT t.id, s.col_name, s.col_value
FROM test t
JOIN LATERAL(VALUES('a',t.a),('b',t.b),('c',t.c)) s(col_name, col_value) ON TRUE;
DBFiddle Demo
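For the sample data this produces:
 id | col_name | col_value
----+----------+-----------
  1 | a        | 11
  1 | b        | 12
  1 | c        | 13
  2 | a        | 21
  2 | b        | 22
  2 | c        | 23
  3 | a        | 31
  3 | b        | 32
  3 | c        | 33
(9 rows)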
Using this approach it is possible to unpivot multiple groups of columns at once.
EDIT
Using Zack's suggestion:
SELECT t.id, col_name, col_value
FROM test t
CROSS JOIN LATERAL (VALUES('a', t.a),('b', t.b),('c',t.c)) s(col_name, col_value);
which is equivalent to:
SELECT t.id, col_name, col_value
FROM test t
,LATERAL (VALUES('a', t.a),('b', t.b),('c',t.c)) s(col_name, col_value);
db<>fiddle demo
Great article by Thomas Kellerer, found here:
Unpivot with Postgres
Sometimes it’s necessary to normalize de-normalized tables - the opposite of a “crosstab” or “pivot” operation. Postgres does not support an UNPIVOT operator like Oracle or SQL Server, but simulating it is very simple.
Take the following table that stores aggregated values per quarter:
create table customer_turnover
(
customer_id integer,
q1 integer,
q2 integer,
q3 integer,
q4 integer
);
And the following sample data:
customer_id | q1 | q2 | q3 | q4
------------+-----+-----+-----+----
1 | 100 | 210 | 203 | 304
2 | 150 | 118 | 422 | 257
3 | 220 | 311 | 271 | 269
But we want the quarters to be rows (as they should be in a normalized data model).
In Oracle or SQL Server this could be achieved with the UNPIVOT operator, but that is not available in Postgres. However, Postgres’ ability to use the VALUES clause like a table makes this actually quite easy:
select c.customer_id, t.*
from customer_turnover c
cross join lateral (
values
(c.q1, 'Q1'),
(c.q2, 'Q2'),
(c.q3, 'Q3'),
(c.q4, 'Q4')
) as t(turnover, quarter)
order by customer_id, quarter;
will return the following result:
customer_id | turnover | quarter
------------+----------+--------
1 | 100 | Q1
1 | 210 | Q2
1 | 203 | Q3
1 | 304 | Q4
2 | 150 | Q1
2 | 118 | Q2
2 | 422 | Q3
2 | 257 | Q4
3 | 220 | Q1
3 | 311 | Q2
3 | 271 | Q3
3 | 269 | Q4
The equivalent query with the standard UNPIVOT operator would be:
select customer_id, turnover, quarter
from customer_turnover c
UNPIVOT (turnover for quarter in (q1 as 'Q1',
q2 as 'Q2',
q3 as 'Q3',
q4 as 'Q4'))
order by customer_id, quarter;
FYI for those of us looking for how to unpivot in Redshift.
The long form solution given by Stew appears to be the only way to accomplish this.
For those who cannot see it there, here is the text pasted below:
We do not have built-in functions that will do pivot or unpivot. However,
you can always write SQL to do that.
create table sales (regionid integer, q1 integer, q2 integer, q3 integer, q4 integer);
insert into sales values (1,10,12,14,16), (2,20,22,24,26);
select * from sales order by regionid;
regionid | q1 | q2 | q3 | q4
----------+----+----+----+----
1 | 10 | 12 | 14 | 16
2 | 20 | 22 | 24 | 26
(2 rows)
unpivot query (wide to long)
create table sales_pivoted (regionid, quarter, sales)
as
select regionid, 'Q1', q1 from sales
UNION ALL
select regionid, 'Q2', q2 from sales
UNION ALL
select regionid, 'Q3', q3 from sales
UNION ALL
select regionid, 'Q4', q4 from sales
;
select * from sales_pivoted order by regionid, quarter;
regionid | quarter | sales
----------+---------+-------
1 | Q1 | 10
1 | Q2 | 12
1 | Q3 | 14
1 | Q4 | 16
2 | Q1 | 20
2 | Q2 | 22
2 | Q3 | 24
2 | Q4 | 26
(8 rows)
pivot query (back to wide)
select regionid, sum(Q1) as Q1, sum(Q2) as Q2, sum(Q3) as Q3, sum(Q4) as Q4
from
(select regionid,
case quarter when 'Q1' then sales else 0 end as Q1,
case quarter when 'Q2' then sales else 0 end as Q2,
case quarter when 'Q3' then sales else 0 end as Q3,
case quarter when 'Q4' then sales else 0 end as Q4
from sales_pivoted)
group by regionid
order by regionid;
regionid | q1 | q2 | q3 | q4
----------+----+----+----+----
1 | 10 | 12 | 14 | 16
2 | 20 | 22 | 24 | 26
(2 rows)
Hope this helps, Neil
Pulling slightly modified content from the link in the comment from @a_horse_with_no_name into an answer, because it works:
Installing Hstore
If you don't have hstore installed and are running PostgreSQL 9.1+, you can use the handy
CREATE EXTENSION hstore;
For lower versions, look for the hstore.sql file in share/contrib and run in your database.
Assuming that your source (e.g., wide data) table has one 'id' column, named id_field, and any number of 'value' columns, all of the same type, the following will create an unpivoted view of that table.
CREATE VIEW vw_unpivot AS
SELECT id_field, (h).key AS column_name, (h).value AS column_value
FROM (
SELECT id_field, each(hstore(foo) - 'id_field'::text) AS h
FROM zcta5 as foo
) AS unpiv ;
This works with any number of 'value' columns. All of the resulting values will be text, unless you cast, e.g., (h).value::numeric.
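For example, to read the values back out as numbers (zcta5 and id_field being the placeholder names used above):
SELECT id_field, column_name, column_value::numeric AS column_value
FROM vw_unpivot
ORDER BY id_field, column_name;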
Just use JSON:
with data (id, name) as (
values (1, 'a'), (2, 'b')
)
select t.*
from data, lateral jsonb_each_text(to_jsonb(data)) with ordinality as t
order by data.id, t.ordinality;
This yields
|key |value|ordinality|
|----|-----|----------|
|id |1 |1 |
|name|a |2 |
|id |2 |1 |
|name|b |2 |
dbfiddle
I wrote a horrible unpivot function for PostgreSQL. It's rather slow but it at least returns results like you'd expect an unpivot operation to.
https://cgsrv1.arrc.csiro.au/blog/2010/05/14/unpivotuncrosstab-in-postgresql/
Hopefully you can find it useful.
Depending on what you want to do... something like this can be helpful.
with wide_table as (
select 1 a, 2 b, 3 c
union all
select 4 a, 5 b, 6 c
)
select unnest(array[a,b,c]) from wide_table
You can use FROM UNNEST() array handling to unpivot a dataset, in tandem with a correlated subquery (works with PG 9.4).
FROM UNNEST() is more powerful and flexible than the typical method of using FROM (VALUES ....) to unpivot datasets, because FROM UNNEST() is variadic: it accepts any number of arrays. By using a correlated subquery, the need for the WITH ORDINALITY clause is eliminated, and Postgres keeps the resulting parallel columnar sets in the proper ordinal sequence.
This is, by the way, fast - in practical use, spawning 8 million rows in under 15 seconds on a 24-core system.
WITH _students AS ( /** CTE **/
SELECT * FROM
( SELECT 'jane'::TEXT ,'doe'::TEXT , 1::INT
UNION
SELECT 'john'::TEXT ,'doe'::TEXT , 2::INT
UNION
SELECT 'jerry'::TEXT ,'roe'::TEXT , 3::INT
UNION
SELECT 'jodi'::TEXT ,'roe'::TEXT , 4::INT
) s ( fn, ln, id )
) /** end WITH **/
SELECT s.id
, ax.fanm -- field labels, now expanded to two rows
, ax.anm -- field data, now expanded to two rows
, ax.someval -- manually incl. data
, ax.rankednum -- manually assigned ranks
,ax.genser -- auto-generate ranks
FROM _students s
,UNNEST /** MULTI-UNNEST() BLOCK **/
(
( SELECT ARRAY[ fn, ln ]::text[] AS anm -- expanded into two rows by outer UNNEST()
/** CORRELATED SUBQUERY **/
FROM _students s2 WHERE s2.id = s.id -- outer relation
)
,( /** ordinal relationship preserved in variadic UNNEST() **/
SELECT ARRAY[ 'first name', 'last name' ]::text[] -- exp. into 2 rows
AS fanm
)
,( SELECT ARRAY[ 'z','x','y'] -- only 3 rows gen'd, but ordinal rela. kept
AS someval
)
,( SELECT ARRAY[ 1,2,3,4,5 ] -- 5 rows gen'd, ordinal rela. kept.
AS rankednum
)
,( SELECT ARRAY( /** you may go wild ... **/
SELECT generate_series(1, 15, 3 )
AS genser
)
)
) ax ( anm, fanm, someval, rankednum , genser )
;
RESULT SET:
+--------+----------------+-----------+----------+---------+-------
| id | fanm | anm | someval |rankednum| [ etc. ]
+--------+----------------+-----------+----------+---------+-------
| 2 | first name | john | z | 1 | .
| 2 | last name | doe | y | 2 | .
| 2 | [null] | [null] | x | 3 | .
| 2 | [null] | [null] | [null] | 4 | .
| 2 | [null] | [null] | [null] | 5 | .
| 1 | first name | jane | z | 1 | .
| 1 | last name | doe | y | 2 | .
| 1 | | | x | 3 | .
| 1 | | | | 4 | .
| 1 | | | | 5 | .
| 4 | first name | jodi | z | 1 | .
| 4 | last name | roe | y | 2 | .
| 4 | | | x | 3 | .
| 4 | | | | 4 | .
| 4 | | | | 5 | .
| 3 | first name | jerry | z | 1 | .
| 3 | last name | roe | y | 2 | .
| 3 | | | x | 3 | .
| 3 | | | | 4 | .
| 3 | | | | 5 | .
+--------+----------------+-----------+----------+---------+-------
Here's a way that combines the hstore and CROSS JOIN approaches from other answers.
It's a modified version of my answer to a similar question, which is itself based on the method at https://blog.sql-workbench.eu/post/dynamic-unpivot/ and another answer to that question.
-- Example wide data with a column for each year...
WITH example_wide_data("id", "2001", "2002", "2003", "2004") AS (
VALUES
(1, 4, 5, 6, 7),
(2, 8, 9, 10, 11)
)
-- that is tidied to have "year" and "value" columns
SELECT
id,
r.key AS year,
r.value AS value
FROM
example_wide_data w
CROSS JOIN
each(hstore(w.*)) AS r(key, value)
WHERE
-- This chooses columns that look like years
-- In other cases you might need a different condition
r.key ~ '^[0-9]{4}$';
It has a few benefits over other solutions:
By using hstore and not jsonb, it hopefully minimises issues with type conversions (although hstore does convert everything to text)
The columns don't need to be hard coded or known in advance. Here, columns are chosen by a regex on the name, but you could use any SQL logic based on the name, or even the value.
It doesn't require PL/pgSQL - it's all SQL