I would like to know an efficient way to fetch the data in the following case.
There are two tables, say Table1 and Table2, having two common fields, say country and pincode, and a third table, Table3, that holds the key fields of the first two tables (DNO, MPNO).
Here is the little glitch: in the Table3 data, if a row has a DNO it won't have an MPNO.
So when the user enters anything on the selection screen (Pic no 2), the result should be as follows:
| MFID  | DNO      | MPNO     | COUNTRY | PINCODE |
| ----- | -------- | -------- | ------- | ------- |
| 00001 | 10011    | no value | IN      | 4444    |
| 00002 | no value | 1200     | IN      | 5555    |
| 00003 | 300      | no value | US      | 9999    |
(As you can observe, if DNO is present there is no MPNO, and vice versa.)
Please have a look at the pictures for a clearer idea :-)
Table Relation:
Selection screen with select options:
The code shouldn't be long.
PSEUDO CODE:
Select queries:
SELECT * FROM table3 INTO TABLE it_table3.

SELECT * FROM table1 FOR ALL ENTRIES IN it_table3 INTO TABLE it_table1
  WHERE dno = it_table3-dno.

SELECT * FROM table2 FOR ALL ENTRIES IN it_table3 INTO TABLE it_table2
  WHERE mpno = it_table3-mpno.

Loop at internal table 3 and build the final table:
LOOP AT it_table3 INTO wa_table3.
  IF wa_table3-dno IS NOT INITIAL.
    READ TABLE it_table1 INTO wa_table1 WITH KEY dno = wa_table3-dno.
  ELSE.
    READ TABLE it_table2 INTO wa_table2 WITH KEY mpno = wa_table3-mpno.
  ENDIF.
  " ...append the combined fields to the final table here.
ENDLOOP.
Hope this was the answer you were hoping to find!
Building an efficient select requires information about the obligatory fields on your selection screen, as well as the expected production size of all three tables. Without this information, let's assume that table1 and table2 are reference tables and table3 is a transaction table, as one can assume from their structure. It would be sensible to build the selection in the following way:
Select the data from the reference tables first. Since, as you said, the fields DNO/MPNO are mutually exclusive, no country/pincode pair will hit both reference tables, so a JOIN is useless here. However, we can merge the two result sets into a single itab without violating any constraints.
TYPES: BEGIN OF tt_result,
         dno     TYPE table1-dno,
         mpno    TYPE table2-mpno,
         country TYPE table1-country,
         pincode TYPE table1-pincode,
         " ...other fields from table3
       END OF tt_result.

DATA: itab_result TYPE STANDARD TABLE OF tt_result.
SELECT dno country pincode
  FROM table1
  INTO CORRESPONDING FIELDS OF TABLE itab_result
  WHERE pincode IN so_pincode
    AND country IN so_country.

SELECT mpno country pincode
  FROM table2
  APPENDING CORRESPONDING FIELDS OF TABLE itab_result
  WHERE pincode IN so_pincode
    AND country IN so_country.
The FOR ALL ENTRIES addition allows specifying the same internal table in the FOR ALL ENTRIES clause and in the INTO clause, so we can fill our result table with the missing table3 data by the DNO/MPNO key.
SELECT *
  FROM table3
  INTO CORRESPONDING FIELDS OF TABLE itab_result
  FOR ALL ENTRIES IN itab_result
  WHERE dno  = itab_result-dno
    AND mpno = itab_result-mpno.
I have two tables with almost 13,000 records each, and they look something like this:
TableA:
ID     | Status   | Option
-------+----------+--------
1      | Approved |
2      | Reject   |
3      | Approved |
...    |          |
13,000 |          |
TableB:
Name   | Option                           | Status
-------+----------------------------------+---------
First  | {'data':'Add into box','ID':'1'} | Approved
Second | {'data':'Don't Add','ID':'2'}    | Reject
Third  | {'data':'Add into box','ID':'3'} | Approved
...    |                                  |
13,000 |                                  |
I want to fill the Option column in TableA (data type varchar) with the matching data from TableB's Option column, based on the ID, which is also present inside the Option JSON object. How do I fill them all in one go rather than going one by one?
An update query where we set "option" in TableA using a subquery that filters on the "id" of TableA matching the "id" inside the varchar column "option" of TableB:
update tablea
set option = (select option from tableb
where tablea.id::text = tableb.option::json ->> 'id'
limit 1);
-- assuming id has a 1:1 relation in both tables
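If the relation really is 1:1, an equivalent join form (a sketch, untested) avoids the correlated subquery; the table and column names follow the question, and you may need to adjust the JSON key ('id' vs 'ID') to match what is actually stored:

update tablea
set option = b.option
from tableb b
-- same matching condition as above; relies on the 1:1 assumption
where tablea.id::text = b.option::json ->> 'id';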
I have a table with a column named "ids", of type string. Could someone tell me how to remove the duplicated values in each of the rows?
Example, table is:
primary_key | ids
------------+---------------
1           | {23,40,23}
2           | {78,40,13,78}
3           | {20,13,20}
4           | {7,2,7}
and I want to update it into:
primary_key | ids
------------+------------
1           | {23,40}
2           | {78,40,13}
3           | {20,13}
4           | {7,2}
In postgres I wrote:
UPDATE table_name
SET ids = (SELECT DISTINCT UNNEST(
(SELECT ids FROM table_name)::text[]))
In sqlalchemy I wrote:
session.query(table_name.ids).\
update({table_name.ids: func.unnest(table_name.ids,String).alias('data_view')},
synchronize_session=False)
None of these are working, so please help me, thanks in advance!
I think you could improve the design by storing these ids in another table, one id per row, with a foreign key referencing table_name.primary_key. Storing array data as text strings also seems strange.
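A minimal sketch of that normalized layout (the child table name and the integer types are assumptions, not part of your schema):

CREATE TABLE table_name_ids (
    primary_key int REFERENCES table_name (primary_key),
    id          int,
    UNIQUE (primary_key, id)  -- rules out duplicate ids per row up front
);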
Anyway, here is one way to do it with the current schema: I wrapped the set returned by UNNEST in an inner subselect so that the aggregate function needed to concatenate the strings again can be applied.
UPDATE table_name
SET ids = new_ids
FROM LATERAL (
    SELECT primary_key, array_agg(elem)::text AS new_ids
    FROM (SELECT DISTINCT primary_key, UNNEST(ids::text[]) AS elem
          FROM table_name) t_inner
    GROUP BY primary_key
) t_sub
WHERE t_sub.primary_key = table_name.primary_key;
So I have this task to solve in PostgreSQL where, for a given set of classIds that the user provides, I have to return the common properties of those classes.
I have three tables to represent the model (one class has multiple properties and one property can be in many classes).
Table classes:
---------------------------
| Id | Name | Description |
---------------------------
Table Properties:
---------------------------
| Id | Name | Description |
---------------------------
And finally table ClassProperties
------------------------
| ClassId | PropertyId |
------------------------
So the user gives me an array of classIds and I have to return all the common properties of those classes (as I said above).
As of now I'm only able to return every property of all the classes with this code:
select p.*
from properties as p
inner join ClassProperties as cp on cp.propertyid = p.id
where cp.classid = any ('{88d5fe8f-e19e-40b4-bc65-83ac64f825b0, a2a63bea-aeee-4d3b-817e-cc635383c571}');
The ids, as you can see, are GUIDs, but that doesn't really matter. Any help would be appreciated. Thanks!
You can do this as:
select p.*
from properties p inner join
ClassProperties cp
on cp.propertyid = p.id
where cp.classid = any ('{88d5fe8f-e19e-40b4-bc65-83ac64f825b0, a2a63bea-aeee-4d3b-817e-cc635383c571}')
group by p.id
having count(*) = 2;
The count(*) should be count(distinct classid) if duplicates are allowed.
Note: the value "2" is the number of elements you are checking. You can also derive it with array_length('{88d5fe8f-e19e-40b4-bc65-83ac64f825b0, a2a63bea-aeee-4d3b-817e-cc635383c571}'::uuid[], 1), as sketched below.
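Putting the two notes together, a sketch that derives the count from the input array itself, assuming classid is a uuid column (the literal stands in for whatever array the user passes):

select p.*
from properties p
inner join ClassProperties cp on cp.propertyid = p.id
where cp.classid = any ('{88d5fe8f-e19e-40b4-bc65-83ac64f825b0, a2a63bea-aeee-4d3b-817e-cc635383c571}'::uuid[])
group by p.id
having count(distinct cp.classid) =
       array_length('{88d5fe8f-e19e-40b4-bc65-83ac64f825b0, a2a63bea-aeee-4d3b-817e-cc635383c571}'::uuid[], 1);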
Let's say I have these 3 tables:
Countries            ProvOrStates          MajorCities
----+-------------   ----+-----+--------   ----+-------+------
Id  | CountryName    Id  | CId | Name      Id  | POSId | Name
----+-------------   ----+-----+--------   ----+-------+------
1   | USA            1   | 1   | NY        1   | 1     | NYC
How do you get something like:

CountryName | ProvinceOrState (Count) | MajorCities (Count)
------------+-------------------------+--------------------
USA         | 50                      | 200
Canada      | 10                      | 57
So far, the way I see it:
1. Run the first SELECT COUNT (GROUP BY Countries.Id) on Countries JOIN ProvOrStates,
2. store the result in a table variable,
3. run the second SELECT COUNT (GROUP BY Countries.Id) on ProvOrStates JOIN MajorCities,
4. update the table variable based on the Countries.Id,
5. join the table variable with the Countries table ON Countries.Id = Id of the table variable.
Is there a possibility to run just one query instead of multiple intermediary queries? I don't know if it's even feasible as I've tried with no luck.
Thanks for helping
Use a subquery, or derived tables and views.
Basically, if you have 3 tables:
select * from [TableOne] as T1
join
(
    select T2.DepName, T2.Column, T3.Column
    from [TableTwo] as T2
    join [TableThree] as T3
        on T2.CondtionColumn = T3.CondtionColumn
) AS DerivedTable
on T1.DepName = DerivedTable.DepName
And when you are 100% sure it's working, you can create a view that contains the join of your three tables and call it whenever you want, as in the sketch below.
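A rough sketch of such a view (the view name and column names here are placeholders, not from your schema; the aliases also avoid the duplicate-column error mentioned in the PS below):

CREATE VIEW ThreeTableView AS
SELECT T1.DepName,
       DerivedTable.ColumnTwo,
       DerivedTable.ColumnThree
FROM [TableOne] AS T1
JOIN (
    SELECT T2.DepName,
           T2.SomeColumn  AS ColumnTwo,   -- placeholder columns, aliased
           T3.OtherColumn AS ColumnThree
    FROM [TableTwo] AS T2
    JOIN [TableThree] AS T3
        ON T2.CondtionColumn = T3.CondtionColumn
) AS DerivedTable
    ON T1.DepName = DerivedTable.DepName;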
PS: in case of any identical column names, or when you get the message
"The column 'ColumnName' was specified multiple times for 'Table'."
you can use an alias to solve the problem.
This answer comes from #lotzInSpace.
SELECT ct.[CountryName], COUNT(DISTINCT p.[Id]), COUNT(DISTINCT c.[Id])
FROM dbo.[Countries] ct
LEFT JOIN dbo.[Provinces] p
ON ct.[Id] = p.[CountryId]
LEFT JOIN dbo.[Cities] c
ON p.[Id] = c.[ProvinceId]
GROUP BY ct.[CountryName]
It's working. I'm using LEFT JOIN instead of INNER JOIN because, with an INNER JOIN, if a country doesn't have provinces, or a province doesn't have cities, then that country or province wouldn't display.
Thanks again #lotzInSpace.
I have a table called _sample_table_delme_data_files which contains some duplicates. I want to copy its records, without duplicates, into data_files:
INSERT INTO data_files (SELECT distinct * FROM _sample_table_delme_data_files);
ERROR: could not identify an ordering operator for type box3d
HINT: Use an explicit ordering operator or modify the query.
The problem is that PostgreSQL cannot compare (or order) box3d types. How do I supply such an ordering operator so I can get only the distinct rows into my destination table?
Thanks in advance,
Adam
If you don't add the operator, you could try translating the box3d data to text using its output function, something like:
INSERT INTO data_files (SELECT distinct othercols,box3dout(box3dcol) FROM _sample_table_delme_data_files);
Edit: the next step is to cast it back to box3d:
INSERT INTO data_files SELECT othercols, box3din(b) FROM (SELECT distinct othercols, box3dout(box3dcol) AS b FROM _sample_table_delme_data_files) AS sub;
(I don't have box3d on my system so it's untested.)
The datatype box3d doesn't have an operator for the DISTINCT operation. You have to create the operator, or ask the PostGIS project; maybe somebody has already fixed this problem.
Finally, this was solved by a colleague.
First, let's see how many rows there are (duplicates included):
SELECT COUNT(*) FROM _sample_table_delme_data_files ;
count
-------
12728
(1 row)
Now, we shall add another column to the source table to help us differentiate similar rows:
ALTER TABLE _sample_table_delme_data_files ADD COLUMN id2 serial;
We can now see the dups:
SELECT id, id2 FROM _sample_table_delme_data_files ORDER BY id LIMIT 10;
id | id2
--------+------
198748 | 6449
198748 | 85
198801 | 166
198801 | 6530
198829 | 87
198829 | 6451
198926 | 88
198926 | 6452
199062 | 6532
199062 | 168
(10 rows)
And remove them:
DELETE FROM _sample_table_delme_data_files
WHERE id2 IN (SELECT max(id2) FROM _sample_table_delme_data_files
GROUP BY id
HAVING COUNT(*)>1);
Let's verify that it worked:
SELECT id FROM _sample_table_delme_data_files GROUP BY id HAVING COUNT(*)>1;
id
----
(0 rows)
Remove the auxiliary column:
ALTER TABLE _sample_table_delme_data_files DROP COLUMN id2;
ALTER TABLE
Insert the remaining rows into the destination table:
INSERT INTO data_files (SELECT * FROM _sample_table_delme_data_files);
INSERT 0 6364