I've just upgraded to PostgreSQL 9.3 beta. When I apply the json_each or json_each_text function to a json column, the result is a set of rows with the column names 'key' and 'value'.
Here's an example:
I have a table named customers and education column is of type json
Customers table is as follows:
| id | first_name | last_name | education |
| --- | ---------- | --------- | ------------------------------ |
| 1 | Harold | Finch | {"school":"KSU","state":"KS"} |
| 2 | John | Reese | {"school":"NYSU","state":"NY"} |
The query
select * from customers, json_each_text(customers.education) where value = 'NYSU'
returns a set of rows with the following column names
| id | first_name | last_name | education | key | value |
| --- | ---------- | --------- | ------------------------------ | ------ | ----- |
| 2 | John | Reese | {"school":"NYSU","state":"NY"} | school | NYSU |
because the json_each_text function returns rows with the column names key and value by default.
However, I want json_each_text to return custom column names such as key1 and value1:
| id | first_name | last_name | education | key1 | value1 |
| --- | ---------- | --------- | ------------------------------ | ------ | ------ |
| 2 | John | Reese | {"school":"NYSU","state":"NY"} | school | NYSU |
Is there a way to get different column names like 'key1' and 'value1' after applying those functions?
You can solve that by giving the function an alias with AS in the FROM clause and aliasing its columns in the SELECT list:
postgres=# SELECT json_data.key AS key1,
json_data.value AS value1
FROM customers,
json_each_text(customers.education) AS json_data
WHERE value = 'NYSU';
key1 | value1
--------+--------
school | NYSU
(1 row)
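Alternatively, you can rename the columns right in the FROM clause with a column alias list, which keeps the SELECT list short (a sketch against the same customers table):

SELECT json_data.key1, json_data.value1
FROM customers,
     json_each_text(customers.education) AS json_data(key1, value1)
WHERE json_data.value1 = 'NYSU';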
Related
How can I use string_to_array or split_part on another column's value?
I want to do something like select * from tenants where id IN (select string_to_array(select ancestry from tenants where id = 39,'/'));
-[ RECORD 1 ]-------------+----------------------
id | 1
domain |
subdomain |
name | My Company
login_text |
logo_file_name |
logo_content_type |
logo_file_size |
logo_updated_at |
login_logo_file_name |
login_logo_content_type |
login_logo_file_size |
login_logo_updated_at |
ancestry |
divisible | t
description | Tenant for My Company
use_config_for_attributes | t
default_miq_group_id | 1
source_type |
source_id |
-[ RECORD 3 ]-------------+----------------------
id | 35
domain |
subdomain |
name | Tenant_2
login_text |
logo_file_name |
logo_content_type |
logo_file_size |
logo_updated_at |
login_logo_file_name |
login_logo_content_type |
login_logo_file_size |
login_logo_updated_at |
ancestry | 1
divisible | t
description | Tenant_2
use_config_for_attributes | f
default_miq_group_id | 36
source_type |
source_id |
-[ RECORD 7 ]-------------+----------------------
id | 39
domain |
subdomain |
name | Child_Teanant_202
login_text |
logo_file_name |
logo_content_type |
logo_file_size |
logo_updated_at |
login_logo_file_name |
login_logo_content_type |
login_logo_file_size |
login_logo_updated_at |
ancestry | 1/35
divisible | t
description | Child_Teanant_202
use_config_for_attributes | f
default_miq_group_id | 52
source_type |
source_id |
Use regex to enforce word boundaries:
select *
from tenants
where (select ancestry from tenants where id = 39)
~ ('\y' || id || '\y')
Without the word boundaries an id of 1 would match an ancestry of 123.
Note Postgres's unusual regex escape for a word boundary, \y, where most other regex flavors use \b.
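A quick way to see the effect of the boundaries (illustrative queries only):

select '123' ~ '\y1\y';   -- false: 1 is not a whole path element of '123'
select '123' ~ '1';       -- true: without boundaries, 1 matches inside '123'
select '1/35' ~ '\y35\y'; -- true: 35 is a whole element of the ancestry '1/35'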
There are two ways to solve this.
One is to simply unnest the elements of ancestry:
select *
from tenants
where id in (select a.id::int
from tenants t2
cross join unnest(string_to_array(t2.ancestry, '/')) as a(id)
where t2.id = 39);
The other is to convert the string to an array so you can use the = ANY() operator. That is a bit tricky, because you need two levels of parentheses plus a cast to an integer array to make it work:
select *
from tenants
where id = any ((select string_to_array(t2.ancestry, '/')
from tenants t2
where t2.id = 39)::int[]);
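For reference, this is what the string-to-array conversion and the cast produce for an ancestry of '1/35' (the value for tenant 39 above):

select string_to_array('1/35', '/');        -- {1,35} as text[]
select string_to_array('1/35', '/')::int[]; -- {1,35} as integer[]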
There are many similar questions, but they are mostly about selecting one of the duplicates where only a single column differs. I want to exclude all of those from the query and only get the rows where a particular field isn't different.
I am looking for all the reference_no where the status is -1, except for those where the status is both -1 and 1 for the same reference_no, as in the table below. The query should return only the row with id 4. How do I do that?
This is on SQL Server 2016.
| id | process_date | status | reference_no |
| --- | ------------ | ------ | ----------- |
| 1 | 12/5/22 | 1 | 789456 |
| 2 | 12/5/22 | -1 | 789456 |
| 3 | 12/5/22 | 1 | 789456 |
| 4 | 12/5/22 | 1 | 321654 |
If I understand correctly, you want a NOT EXISTS check:
select *
from t
where status = -1
  and not exists (
        select *
        from t t2
        where t2.status = 1
          and t2.reference_no = t.reference_no
      );
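A NOT IN variant gives the same result here, assuming reference_no is never NULL in the status = 1 rows:

select *
from t
where status = -1
  and reference_no not in (select reference_no from t where status = 1);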
I have two tables:
people
| peopleID | Lastname | Firstname |
| -------- | -------- | --------- |
| 1 | Smith | Marc |
| 2 | Doe | John |
| 3 | Davidson | Terry |
| 4 | Meyer | Todd |
| 5 | Richards | Abe |
customers
| customerID | Lastname | Company |
| ---------- | -------- | ------------------- |
| 1 | Davidson | Wonderproducts Inc. |
| 2 | Meyer | Banana Inc. |
Now I want to insert all rows of the table people into the table customers, except the ones where the lastname already exists in customers.
So at the end customers should look like this:
| customerID | Lastname | Company |
| ---------- | -------- | ------------------- |
| 1 | Davidson | Wonderproducts Inc. |
| 2 | Meyer | Banana Inc. |
| 3 | Smith | |
| 4 | Doe | |
| 5 | Richards | |
I have already tried something along these lines:
IF NOT EXISTS
(SELECT 1 FROM customers WHERE Lastname = (SELECT Lastname FROM people))
INSERT INTO customers (Lastname) VALUES (SELECT Lastname FROM people)
Try this:
INSERT INTO customers (Lastname)
SELECT P.Lastname
FROM people P
LEFT JOIN customers C ON C.Lastname = P.Lastname
WHERE C.customerID IS NULL
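A NOT EXISTS version, closer to what you already tried, should work as well (a sketch against the same tables):

INSERT INTO customers (Lastname)
SELECT P.Lastname
FROM people P
WHERE NOT EXISTS (
    SELECT 1
    FROM customers C
    WHERE C.Lastname = P.Lastname
);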
I want to create a function that builds a table in which some of the columns are derived from two other input tables.
input table1:
This is a static table for each loan. Each loan has only one row with information related to that loan. For example, original unpaid balance, original interest rate...
| id | loan_age | ori_upb | ori_rate | ltv |
| --- | -------- | ------- | -------- | --- |
| 1 | 360 | 1500 | 4.5 | 0.6 |
| 2 | 360 | 2000 | 3.8 | 0.5 |
input table2:
This is a dynamic table for each loan. Each loan has several rows showing the loan performance in each month. For example, current unpaid balance, current interest rate, delinquency status...
| id | month | cur_upb | cur_rate | status |
| --- | ----- | ------- | -------- | ------ |
| 1 | 01 | 1400 | 4.5 | 0 |
| 1 | 02 | 1300 | 4.5 | 0 |
| 1 | 03 | 1200 | 4.5 | 1 |
| 2 | 01 | 2000 | 3.8 | 0 |
| 2 | 02 | 1900 | 3.8 | 0 |
| 2 | 03 | 1900 | 3.8 | 1 |
| 2 | 04 | 1900 | 3.8 | 2 |
output table:
The output table contains information from table1 and table2. payoffupb is the last value of cur_upb in table2. This table is built for model development.
| id | loan_age | ori_upb | ori_rate | ltv | payoffmonth | payoffupb | payoffrate | lastStatus | modification |
| --- | -------- | ------- | -------- | --- | ----------- | --------- | ---------- | ---------- | ------------ |
| 1 | 360 | 1500 | 4.5 | 0.6 | 03 | 1200 | 4.5 | 1 | null |
| 2 | 360 | 2000 | 3.8 | 0.5 | 04 | 1900 | 3.8 | 2 | null |
Most columns in the output table can be taken directly or derived from columns in the two input tables; the ones that cannot are left blank.
My main question is how to write a function to take two tables as inputs and output another table?
I already wrote the feature transformation part for data files in 2018, but I need to do the same thing again for data files in some other years. That's why I want to create a function to make things easier.
As you want to insert the latest entry of table2 for each entry of table1, try this:
insert into table3 (id, loan_age, ori_upb, ori_rate, ltv,
                    payoffmonth, payoffupb, payoffrate, lastStatus)
select distinct on (t1.id)
       t1.id, t1.loan_age, t1.ori_upb, t1.ori_rate, t1.ltv,
       t2.month, t2.cur_upb, t2.cur_rate, t2.status
from table1 t1
inner join table2 t2 on t1.id = t2.id
order by t1.id, t2.month desc;
EDIT for your updated question:
Here is a function that does the above, assuming the table1, table2, and table3 structures are always identical.
create or replace function insert_values(table1 varchar, table2 varchar, table3 varchar)
returns int as $$
declare
    count_ int;
begin
    execute format('insert into %I (id, loan_age, ori_upb, ori_rate, ltv, payoffmonth, payoffupb, payoffrate, lastStatus)
                    select distinct on (t1.id) t1.id, t1.loan_age, t1.ori_upb,
                           t1.ori_rate, t1.ltv, t2.month, t2.cur_upb, t2.cur_rate, t2.status
                    from %I t1 inner join %I t2 on t1.id = t2.id
                    order by t1.id, t2.month desc', table3, table1, table2);
    GET DIAGNOSTICS count_ = ROW_COUNT;
    return count_;
end;
$$ language plpgsql;
Call the function as below; it returns the number of inserted rows:
select * from insert_values('table1','table2','table3');
I need help with a somewhat tricky single-query goal, and I'm not sure whether GROUP BY or a sub-SELECT applies.
The following query:
SELECT id_finish, description, inside_rate, outside_material, id_part, id_metal
FROM parts_finishing AS pf
LEFT JOIN parts_finishing_descriptions AS fd ON (pf.id_description=fd.id);
returns results like the following:
+-------------+-------------+------------------+--------------------------------+
| description | inside_rate | outside_material | id_part - id_finish - id_metal |
+-------------+-------------+------------------+--------------------------------+
| Nickle | 0 | 33.44 | 4444-44-44, 5555-55-55 |
+-------------+-------------+------------------+--------------------------------+
| Bend | 11.22 | 0 | 1111-11-11 |
+-------------+-------------+------------------+--------------------------------+
| Pack | 22.33 | 0 | 2222-22-22, 3333-33-33 |
+-------------+-------------+------------------+--------------------------------+
| Zinc | 0 | 44.55 | 6000-66-66 |
+-------------+-------------+------------------+--------------------------------+
I need the results to return in the fashion below but there are catches:
I need to group by either the inside_rate column or the outside_material column, but ORDER BY the description column, not by price (inside_rate and outside_material are the prices). A row belongs to one group if inside_rate is 0 and to the other group if outside_material is 0.
I need to ORDER BY the description column desc secondary after they are returned per group.
I need to return a list of parts (composed of three separate columns) for that inside/outside group / price for that finishing.
+-------------+-------------+------------------+--------------------------------+
| description | inside_rate | outside_material | id_part - id_finish - id_metal |
+-------------+-------------+------------------+--------------------------------+
| Bend | 11.22 | 0 | 1111-11-11 |
+-------------+-------------+------------------+--------------------------------+
| Pack | 22.33 | 0 | 2222-22-22, 3333-33-33 |
+-------------+-------------+------------------+--------------------------------+
| Nickle | 0 | 33.44 | 4444-44-44, 5555-55-55 |
+-------------+-------------+------------------+--------------------------------+
| Zinc | 0 | 44.55 | 6000-66-66 |
+-------------+-------------+------------------+--------------------------------+
The tables I'm working with and their data types:
Table "public.parts_finishing"
Column | Type | Modifiers
------------------+---------+-------------------------------------------------------------
id | bigint | not null default nextval('parts_finishing_id_seq'::regclass)
id_part | bigint |
id_finish | bigint |
id_metal | bigint |
id_description | bigint |
date | date |
inside_hours_k | numeric |
inside_rate | numeric |
outside_material | numeric |
sort | integer |
Indexes:
"parts_finishing_pkey" PRIMARY KEY, btree (id)
Table "public.parts_finishing_descriptions"
Column | Type | Modifiers
------------+---------+------------------------------------------------------------------
id          | bigint  | not null default nextval('parts_finishing_descriptions_id_seq'::regclass)
date | date |
description | text |
rate_hour | numeric |
type | text |
Indexes:
"parts_finishing_descriptions_pkey" PRIMARY KEY, btree (id)
I'd make an SQL fiddle though it refuses to load for me regardless of the browser.
Not entirely sure I understand your question. Might look like this:
SELECT fd.description, pf.inside_rate, pf.outside_material
     , concat_ws(' - ', pf.id_part::text
                      , pf.id_finish::text
                      , pf.id_metal::text) AS id_part_finish_metal
FROM   parts_finishing pf
LEFT   JOIN parts_finishing_descriptions fd ON pf.id_description = fd.id
ORDER  BY (pf.inside_rate = 0)          -- 1. sorts the group with a non-zero inside_rate first
        , fd.description DESC NULLS LAST -- 2. possible NULL values last
;
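If the parts really should be collapsed into one row per finishing, as in your sample output, a grouped variant with string_agg() might look like this (a sketch, assuming the same columns as above):

SELECT fd.description, pf.inside_rate, pf.outside_material
     , string_agg(concat_ws('-', pf.id_part::text
                               , pf.id_finish::text
                               , pf.id_metal::text), ', ') AS parts
FROM   parts_finishing pf
LEFT   JOIN parts_finishing_descriptions fd ON pf.id_description = fd.id
GROUP  BY fd.description, pf.inside_rate, pf.outside_material
ORDER  BY (pf.inside_rate = 0)
        , fd.description DESC NULLS LAST;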