Get column value from another table in PostgreSQL

I have two tables:
User and Order
An Order belongs to a User, in that the Order table has a column called user_id. So, for example, a row in the Order table would look like:
id: 4
description: "pair of shoes"
price: 18.40
user_id: 1
And a User row would look like:
id: 1
email: "myemail@gmail.com"
name: "John"
How would I go about getting all the Orders, but with the user's email instead of the user id?

A simple JOIN will do the trick
SELECT o.id,
o.description,
o.price,
u.email
FROM "order" o JOIN "user" u
ON o.user_id = u.id
Sample output:
| ID | DESCRIPTION | PRICE | EMAIL |
--------------------------------------------------
| 4 | pair of shoes | 18.4 | myemail@gmail.com |
Further reading: A Visual Explanation of SQL Joins
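The join above can be sketched end to end. This is a minimal, self-contained reproduction using Python's sqlite3 in-memory database as a stand-in for PostgreSQL (the table and column names follow the question; they are not from any real database):

```python
import sqlite3

# Recreate the two tables from the question in an in-memory database.
# "user" and "order" are reserved words, hence the double quotes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE "user" (id INTEGER PRIMARY KEY, email TEXT, name TEXT);
CREATE TABLE "order" (id INTEGER PRIMARY KEY, description TEXT,
                      price REAL, user_id INTEGER);
INSERT INTO "user" VALUES (1, 'myemail@gmail.com', 'John');
INSERT INTO "order" VALUES (4, 'pair of shoes', 18.40, 1);
""")

# The JOIN replaces user_id with the matching user's email.
rows = conn.execute("""
    SELECT o.id, o.description, o.price, u.email
    FROM "order" o JOIN "user" u ON o.user_id = u.id
""").fetchall()
print(rows)  # [(4, 'pair of shoes', 18.4, 'myemail@gmail.com')]
```

The same query runs unchanged on PostgreSQL, since only standard JOIN syntax is used.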

Related

Transforming information in postgresql

So, I have 2 tables.
In the 1st table, there is information about users:
user_id | name
1 | Albert
2 | Anthony
and in the other table I have address information, where a user can have a home address, an office address, or both:
user_id| address_type | address
1 | home | a
1 | office | b
2 | home | c
and the final result I want is this
user_id | name | home_address | office_address
1 | Albert | a | b
2 | Anthony | c | null
I have tried using a left join and json_agg, but the information is not readable that way.
Any suggestions on how I can do this?
You can use two outer joins, one for the office address and one for the home address.
select t1.user_id, t1.name,
ha.address as home_address,
oa.address as office_address
from table1 t1
left join table2 ha on ha.user_id = t1.user_id and ha.address_type = 'home'
left join table2 oa on oa.user_id = t1.user_id and oa.address_type = 'office';
A solution using JSON could look like this
select t1.user_id, t1.name,
a.addresses ->> 'home' as home_address,
a.addresses ->> 'office' as office_address
from table1 t1
left join (
select user_id, jsonb_object_agg(address_type, address) as addresses
from table2
group by user_id
) a on a.user_id = t1.user_id;
Which might be a bit more flexible, because you don't need to add a new join for each address type. The first query is likely to be faster if you need to retrieve a large number of rows.
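As a sanity check, here is the two-outer-join approach run against the sample data, using Python's sqlite3 in-memory database as a stand-in (the hypothetical table1/table2 names come from the answer above; the JSON variant is omitted because jsonb_object_agg is PostgreSQL-specific):

```python
import sqlite3

# Rebuild the question's two tables with the sample rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (user_id INTEGER, name TEXT);
CREATE TABLE table2 (user_id INTEGER, address_type TEXT, address TEXT);
INSERT INTO table1 VALUES (1, 'Albert'), (2, 'Anthony');
INSERT INTO table2 VALUES (1, 'home', 'a'), (1, 'office', 'b'),
                          (2, 'home', 'c');
""")

# One LEFT JOIN per address type; a missing type yields NULL.
rows = conn.execute("""
    SELECT t1.user_id, t1.name,
           ha.address AS home_address,
           oa.address AS office_address
    FROM table1 t1
    LEFT JOIN table2 ha ON ha.user_id = t1.user_id
                       AND ha.address_type = 'home'
    LEFT JOIN table2 oa ON oa.user_id = t1.user_id
                       AND oa.address_type = 'office'
    ORDER BY t1.user_id
""").fetchall()
print(rows)  # [(1, 'Albert', 'a', 'b'), (2, 'Anthony', 'c', None)]
```

Anthony's missing office address comes back as NULL, matching the desired output.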

PostgreSQL: Selecting one address from almost but not exactly duplicate rows

I have a big table that I'm trying to join another table to, however the table has entries such as:
--- Name | Address | Priority
----------------------------------------
1 | Jane Doe | 123 Baker St | 1
2 | Jane Doe | 345 Clay Dr | 2
3 | Jeff Boe | 231 Street St| 1
4 | Karen Al | 4232 Elm St | 1
5 | Karen Al | 5632 Pine Ct | 2
What I really want to select is one single address per person. The correct address I want is priority 2. However some of the addresses don't have a priority 2, so I can't join only on priority 2.
I've tried the following test query:
SELECT DISTINCT n.ID, LastName, FirstName, MAX(Address), MAX(Address2), City, State, PostalCode, n.Phone
FROM NormalTable n
JOIN Contracts cn ON n.ID = cn.ID
Which returns the table that I sketched out above, with the same person/sameID but different addresses.
Is there a way to do this in one query? I can think of maybe doing one INSERT statement into my final table where I do all the priority 2 addresses and then ANOTHER INSERT statement for IDs that aren't in the table yet, and use the priority 1 address for those. But I'd much prefer if there's a way to do this all in one go where I end up with only the address I want.
You could choose the address you need by joining a subquery that selects the max priority:
select m.LastName, m.FirstName, m.Address, m.Address2, m.City, m.State, m.PostalCode, m.Phone
from my_table m
inner join (
select LastName, FirstName, max(priority) max_priority
from my_table
group by LastName, FirstName
) t on t.LastName = m.LastName
AND t.FirstName = m.FirstName
AND t.max_priority = m.priority
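The max-priority subquery join can be sketched like this, using Python's sqlite3 in-memory database as a stand-in and the hypothetical my_table name from the answer (columns trimmed to name/address/priority for brevity):

```python
import sqlite3

# Sample rows: Jane Doe has two priorities, Jeff Boe only priority 1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE my_table (name TEXT, address TEXT, priority INTEGER);
INSERT INTO my_table VALUES
  ('Jane Doe', '123 Baker St', 1),
  ('Jane Doe', '345 Clay Dr', 2),
  ('Jeff Boe', '231 Street St', 1);
""")

# Join each row against its name's max priority, keeping only the
# row that carries that priority: 2 where available, else 1.
rows = conn.execute("""
    SELECT m.name, m.address
    FROM my_table m
    JOIN (SELECT name, MAX(priority) AS max_priority
          FROM my_table GROUP BY name) t
      ON t.name = m.name AND t.max_priority = m.priority
    ORDER BY m.name
""").fetchall()
print(rows)  # [('Jane Doe', '345 Clay Dr'), ('Jeff Boe', '231 Street St')]
```

Jane Doe gets her priority-2 address, while Jeff Boe falls back to his only (priority-1) address, which is exactly the asker's requirement.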
I think you want something like this, using PostgreSQL's DISTINCT ON:
SELECT DISTINCT ON (Name) Name, Address, Priority
FROM my_table
ORDER BY Name, Priority DESC
How this works is that DISTINCT ON (Name) returns only the first row per name. Because of the ORDER BY, that first row is the one with the highest priority. Note that DISTINCT ON requires the ORDER BY to start with the same expression(s) listed in the parentheses.

order by case alphabetical ordering

The question is not very specific, but I'm not sure how to explain it better.
I have a table in my db with names in a field.
I would like to order the names so that names starting with a certain letter come first, and so on.
What I have now is
SELECT
(T.firstname||' '||T.lastname) as Full_Name
FROM
TABLE T
ORDER BY
CASE
WHEN LPAD(T.firstname, 1) = 'J' THEN T.firstname
WHEN LPAD(T.firstname, 1) = 'B' THEN T.firstname
END DESC,
Full_Name ASC
Now this returns roughly what I would like to see: names starting with 'J' are ordered first, then 'B', then the rest.
However, the result looks like
What I get What I want
Full_Name Full_Name
---------- ----------
Junior MR James A
John Doe Joe Bob
Joe Bob John Doe
James A Junior MR
Brad T B Test
Bob Joe Bb Test
Bb Test Bob Joe
B Test Brad T
A Test A Test
Aa Test Aa Test
AFLKJASDFJ AFLKJASDFJ
Ann Doe Ann Doe
But what I want is for the J and B names to be sorted in alphabetical order as well; right now they are in reverse alphabetical order.
How can I specify the order inside of the CASE?
I tried having 2 separate CASE statements for the 'J' and 'B' cases, but it shows me the same result.
Just make one extra column, either materialized (maintained by triggers) or volatile (an expression evaluated only when the SELECT runs), and then use it for sorting.
For secondary sorting use the original components of the names, not the expression that concatenates both names together, since that destroys the information about which part was which.
Examples: https://dbfiddle.uk/?rdbms=firebird_3.0&fiddle=fbf89b3903d3271ae6c55589fd9cfe23
create table T (
firstname varchar(10),
lastname varchar(10),
fullname computed by (
Coalesce(firstname, '-') || ' ' || Coalesce(T.lastname, '-')
),
sorting_helper computed by (
CASE WHEN firstname starting with 'J' then 100
WHEN firstname starting with 'B' then 50
ELSE 0
END
)
)
Notice the important distinction: my helper expression is a "ranking" one. It yields one of several pre-defined ranks, putting "James" and "Joe" into the same bin with exactly the same ranking value. Your expression still yields the names themselves, thus erroneously keeping the difference between those names. But you do NOT want that difference: you said you want all J-names moved upwards and then sorted among themselves by the usual rules. So do just that: make an expression that pulls all the J-names together WITHOUT distinguishing between them.
insert into T
select
'John', 'Doe'
from rdb$database union all select
'James', 'A'
from rdb$database union all select
'Aa ', 'Test'
from rdb$database union all select
'Ann', 'Doe'
from rdb$database union all select
'Bob', 'Joe'
from rdb$database union all select
'Brad', 'Test'
from rdb$database union all select
NULL, 'Smith'
from rdb$database union all select
'Ken', NULL
from rdb$database
8 rows affected
select * from T
FIRSTNAME | LASTNAME | FULLNAME | SORTING_HELPER
:-------- | :------- | :---------- | -------------:
John | Doe | John Doe | 100
James | A | James A | 100
Aa | Test | Aa Test | 0
Ann | Doe | Ann Doe | 0
Bob | Joe | Bob Joe | 50
Brad | Test | Brad Test | 50
null | Smith | - Smith | 0
Ken | null | Ken - | 0
Select FullName from T order by sorting_helper desc, firstname asc, lastname asc
| FULLNAME |
| :---------- |
| James A |
| John Doe |
| Bob Joe |
| Brad Test |
| - Smith |
| Aa Test |
| Ann Doe |
| Ken - |
Or without computed-by column
Select FullName from T order by (CASE WHEN firstname starting with 'J' then 0
WHEN firstname starting with 'B' then 1
ELSE 2
END) asc, firstname asc, lastname asc
| FULLNAME |
| :---------- |
| James A |
| John Doe |
| Bob Joe |
| Brad Test |
| - Smith |
| Aa Test |
| Ann Doe |
| Ken - |
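The CASE-ranking ORDER BY carries over to other engines too. Here is a minimal sketch in Python's sqlite3 (an assumption for illustration; the answer targets Firebird), with LIKE 'J%' standing in for Firebird's STARTING WITH, and a subset of the sample names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (firstname TEXT, lastname TEXT);
INSERT INTO t VALUES ('John','Doe'), ('James','A'), ('Ann','Doe'),
                     ('Bob','Joe'), ('Brad','Test'), ('Aa','Test');
""")

# Rank J-names 0, B-names 1, everything else 2; then sort each
# rank bucket alphabetically by the original name components.
rows = conn.execute("""
    SELECT firstname || ' ' || lastname
    FROM t
    ORDER BY CASE WHEN firstname LIKE 'J%' THEN 0
                  WHEN firstname LIKE 'B%' THEN 1
                  ELSE 2 END,
             firstname, lastname
""").fetchall()
print([r[0] for r in rows])
# ['James A', 'John Doe', 'Bob Joe', 'Brad Test', 'Aa Test', 'Ann Doe']
```

Each bucket is alphabetical internally, which is exactly what the asker's name-yielding CASE expression failed to achieve.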
For extra tuning of the positioning of the rows lacking name or surname you can also use NULLS FIRST or NULLS LAST option as described in Firebird docs at https://firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/nullguide-sorts.html
The problem with this approach, however, on big enough tables, is that you won't be able to use indices built over names and surnames for sorting. Instead the server has to fall back to an un-sorted read of the data (shown as NATURAL in the query PLAN) and then sort it into temporary files on disk, which can turn very slow and volume-consuming on large enough data.
You can try to make it better by creating an "index by expression" using your ranking expression, and hope that the FB optimizer will use it (this is quite tricky with verbose expressions like CASE). Frankly, you would probably still be left without it (at least I did not manage to make FB 2.1 utilize an index-by-CASE-expression here).
You can "materialize" the ranking expression into a regular SmallInt Not Null column instead of a COMPUTED BY one, and use a BEFORE UPDATE OR INSERT trigger to keep that column populated with the proper data. Then you can create a regular index over that regular column. While it adds two bytes to each row, that is not much growth.
But even then, an index with very few distinct values does not add much value; it has "low selectivity". Also, an index-by-expression cannot be a compound one (meaning it cannot include other columns after the expression).
So for large data you would practically be better off fusing THREE different queries together. Add the scaffolding, if you have not already:
create index i58647579_names on T58647579 ( firstname, lastname )
Then you can do triple-select like this:
WITH S1 as (
select FullName from T58647579
where firstname starting with 'J'
order by firstname asc, lastname asc
), S2 as (
select FullName from T58647579
where firstname starting with 'B'
order by firstname asc, lastname asc
), S3 as (
select FullName from T58647579
where (firstname is null)
or ( (firstname not starting with 'J')
and (firstname not starting with 'B')
)
order by firstname asc, lastname asc
)
SELECT * FROM S1
UNION ALL
SELECT * FROM S2
UNION ALL
SELECT * FROM S3
And while you would traverse the table thrice, you would do it via the pre-sorted index:
PLAN (S1 T58647579 ORDER I58647579_NAMES INDEX (I58647579_NAMES))
PLAN (S2 T58647579 ORDER I58647579_NAMES INDEX (I58647579_NAMES))
PLAN (S3 T58647579 ORDER I58647579_NAMES)

How to use COUNT() on more than one column?

Let's say I have this 3 tables
Countries ProvOrStates MajorCities
-----+------------- -----+----------- -----+-------------
Id | CountryName Id | CId | Name Id | POSId | Name
-----+------------- -----+----------- -----+-------------
1 | USA 1 | 1 | NY 1 | 1 | NYC
How do you get something like
---------------------------------------------
CountryName | ProvinceOrState | MajorCities
| (Count) | (Count)
---------------------------------------------
USA | 50 | 200
---------------------------------------------
Canada | 10 | 57
So far, the way I see it:
1. Run the first SELECT COUNT (GROUP BY Countries.Id) on Countries JOIN ProvOrStates,
2. store the result in a table variable,
3. run the second SELECT COUNT (GROUP BY Countries.Id) on ProvOrStates JOIN MajorCities,
4. update the table variable based on the Countries.Id,
5. join the table variable with the Countries table ON Countries.Id = Id of the table variable.
Is there a possibility to run just one query instead of multiple intermediary queries? I don't know if it's even feasible as I've tried with no luck.
Thanks for helping
Use a subquery or derived tables and views.
Basically, if you have 3 tables:
select * from [TableOne] as T1
join
(
select T2.Column, T3.Column
from [TableTwo] as T2
join [TableThree] as T3
on T2.CondtionColumn = T3.CondtionColumn
) AS DerivedTable
on T1.DepName = DerivedTable.DepName
And when you are 100% sure it's working, you can create a view that contains your three-table join and call it whenever you want.
PS: in case of identical column names, or when you get the message
"The column 'ColumnName' was specified multiple times for 'Table'."
you can use aliases to solve the problem.
This answer comes from @lotzInSpace.
SELECT ct.[CountryName], COUNT(DISTINCT p.[Id]), COUNT(DISTINCT c.[Id])
FROM dbo.[Countries] ct
LEFT JOIN dbo.[Provinces] p
ON ct.[Id] = p.[CountryId]
LEFT JOIN dbo.[Cities] c
ON p.[Id] = c.[ProvinceId]
GROUP BY ct.[CountryName]
It's working. I'm using LEFT JOIN instead of INNER JOIN because with an INNER JOIN, if a country doesn't have provinces, or a province doesn't have cities, that country or province wouldn't be displayed.
Thanks again @lotzInSpace.
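The COUNT(DISTINCT ...) trick matters here because the two LEFT JOINs multiply rows (each province row repeats once per city). A minimal sketch with Python's sqlite3 in-memory database and small made-up counts (the Provinces/Cities table names follow the answer, not the question's ProvOrStates/MajorCities):

```python
import sqlite3

# Two countries: USA with 2 provinces (2 cities in the first),
# Canada with 1 province and 1 city.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Countries (Id INTEGER, CountryName TEXT);
CREATE TABLE Provinces (Id INTEGER, CountryId INTEGER);
CREATE TABLE Cities (Id INTEGER, ProvinceId INTEGER);
INSERT INTO Countries VALUES (1, 'USA'), (2, 'Canada');
INSERT INTO Provinces VALUES (1, 1), (2, 1), (3, 2);
INSERT INTO Cities VALUES (1, 1), (2, 1), (3, 3);
""")

# DISTINCT keeps the join's row multiplication from inflating counts.
rows = conn.execute("""
    SELECT ct.CountryName, COUNT(DISTINCT p.Id), COUNT(DISTINCT c.Id)
    FROM Countries ct
    LEFT JOIN Provinces p ON ct.Id = p.CountryId
    LEFT JOIN Cities c ON p.Id = c.ProvinceId
    GROUP BY ct.CountryName
    ORDER BY ct.CountryName
""").fetchall()
print(rows)  # [('Canada', 1, 1), ('USA', 2, 2)]
```

Dropping the DISTINCT would count every joined row, so USA's province count would come back wrong wherever a province has multiple cities.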

table with two nullable foreign keys join results into single column

Sorry if the title isn't very descriptive. I have a table like this example, and am using SQL Server 2012:
PersonId | PetID
and want to join it to the following two tables
PersonId | PersonName | PersonAsset
AnimalId | AnimalName | AnimalAsset
So the end result is:
PersonId | PetId | Name | Asset
-------------------------------
1 null Dave 1
null 1 Fido 2
The output you require can be achieved by using a LEFT JOIN for your two tables and ISNULL for the required fields.
For example (assuming the first table is named 'common'):
SELECT common.PersonId,
common.PetId,
ISNULL(person.PersonName, animal.AnimalName) AS Name,
ISNULL(person.PersonAsset, animal.AnimalAsset) AS Asset
FROM common
LEFT JOIN person ON common.PersonId = person.PersonId
LEFT JOIN animal ON common.PetId = animal.AnimalId
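The pattern can be sketched end to end. This uses Python's sqlite3 in-memory database as a stand-in (an assumption for illustration; the question targets SQL Server 2012), with COALESCE in place of SQL Server's ISNULL since the two behave the same for this two-argument case:

```python
import sqlite3

# The linking table has exactly one non-NULL foreign key per row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE common (PersonId INTEGER, PetId INTEGER);
CREATE TABLE person (PersonId INTEGER, PersonName TEXT, PersonAsset INTEGER);
CREATE TABLE animal (AnimalId INTEGER, AnimalName TEXT, AnimalAsset INTEGER);
INSERT INTO common VALUES (1, NULL), (NULL, 1);
INSERT INTO person VALUES (1, 'Dave', 1);
INSERT INTO animal VALUES (1, 'Fido', 2);
""")

# Each LEFT JOIN matches at most one side; COALESCE picks whichever
# side actually matched, collapsing the two into a single column.
rows = conn.execute("""
    SELECT common.PersonId, common.PetId,
           COALESCE(person.PersonName, animal.AnimalName) AS Name,
           COALESCE(person.PersonAsset, animal.AnimalAsset) AS Asset
    FROM common
    LEFT JOIN person ON common.PersonId = person.PersonId
    LEFT JOIN animal ON common.PetId = animal.AnimalId
    ORDER BY Name
""").fetchall()
print(rows)  # [(1, None, 'Dave', 1), (None, 1, 'Fido', 2)]
```

This reproduces the desired output: exactly one Name/Asset pair per row, regardless of which foreign key was populated.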