Appending results of two tables in SQL - oracle-sqldeveloper

I was trying to append the results of two tables in Oracle SQL, where the single row of one table should be repeated once for each row of the other table.
Example:

TABLE 1
R_ID  R_name
654   ABC
364   BCD
541   REA
980   HTD
788   UJS

TABLE 2
G_ID    G_NAME
675464  CHEF

Expected result
G_ID    G_NAME  R_ID  R_name
675464  CHEF    654   ABC
675464  CHEF    364   BCD
675464  CHEF    541   REA
675464  CHEF    980   HTD
675464  CHEF    788   UJS
I used union all but couldn't get the expected result.

Have you tried UNION alone without ALL?

I have found it myself; this may be helpful for someone.
UNION/UNION ALL won't work in this scenario, because they stack rows vertically rather than combining columns from both tables.
The right way to do this is
Select * from (Query1, Query2);
where Query1 retrieves data from TABLE 2 (the one with a single row) and Query2 retrieves data from TABLE 1.
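The comma-separated FROM list above is Oracle's old-style join syntax; with no join condition it behaves as a cross join, pairing every row of one table with every row of the other. A minimal sketch of the same idea, using sqlite3 in place of Oracle and the table data from the question:

```python
import sqlite3

# sqlite3 stands in for Oracle here; table and column names follow the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (r_id INTEGER, r_name TEXT);
CREATE TABLE table2 (g_id INTEGER, g_name TEXT);
INSERT INTO table1 VALUES (654,'ABC'),(364,'BCD'),(541,'REA'),(980,'HTD'),(788,'UJS');
INSERT INTO table2 VALUES (675464,'CHEF');
""")

# A CROSS JOIN pairs every row of table2 with every row of table1,
# so the single table2 row repeats once per table1 row.
rows = con.execute(
    "SELECT g_id, g_name, r_id, r_name FROM table2 CROSS JOIN table1"
).fetchall()
for row in rows:
    print(row)
```

The explicit CROSS JOIN keyword is the modern equivalent of the comma syntax and works the same way in Oracle.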

Related

How to export the data from table to csv file by applying sort on one column

I have 2 billion records in a table in SQL Developer and want to export them to a CSV file, but while exporting I want the data sorted by one column in ascending order. Is there any efficient or quick way to do this?
For example: suppose the table name is TEMP and I want to sort the A_KEY column in ascending order and then export it.
TEMP
P_ID  ADDRESS      A_KEY
1     242 Street   4
2     242 Street   5
3     242 Street   3
4     242 Long St  1

Expected result in the CSV file:
P_ID, ADDRESS, A_KEY
4, 242 Long St, 1
3, 242 Street, 3
1, 242 Street, 4
2, 242 Street, 5
I have tried the query below:
insert into temp2 select * from TEMP order by A_KEY asc;
and then exporting the table from SQL Developer, but is there any efficient or quick way to export the records directly, without the intermediate table?
Creating a new table (TEMP2) is wasted effort: the ORDER BY you apply during the INSERT means nothing, because rows in a table have no guaranteed order. It is the ORDER BY on the SELECT statement you export that matters.
Therefore, run
select * from temp order by a_key;
and export result returned by that query.
2 billion rows? It'll take time. What will you do with such a CSV file? That's difficult to work with.
If you're trying to move data into another Oracle database, then consider using Data Pump export and import utilities which are designed for such a purpose.
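The approach above, exporting the result of an ORDER BY query rather than reordering the table, can be sketched in a few lines. This uses sqlite3 and Python's csv module as a stand-in for SQL Developer's export dialog; the table and column names are from the question:

```python
import csv
import sqlite3

# Build the sample TEMP table from the question (sqlite3 stands in for Oracle).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE temp (p_id INTEGER, address TEXT, a_key INTEGER);
INSERT INTO temp VALUES (1,'242 Street',4),(2,'242 Street',5),
                        (3,'242 Street',3),(4,'242 Long St',1);
""")

with open("temp_sorted.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["P_ID", "ADDRESS", "A_KEY"])  # header row
    # The ORDER BY belongs on the SELECT you export, not on an INSERT.
    for row in con.execute("SELECT * FROM temp ORDER BY a_key"):
        writer.writerow(row)

print(open("temp_sorted.csv").read())
```

Streaming the cursor row by row keeps memory use flat, which matters at billions of rows.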

Will changing PostgreSQL generated column logic affect previous records?

Let's say we have a table that includes a stored generated column concatenating two text columns (data_a || data_b):

id  data_a  data_b  generated_data
1   abc     123     'abc123'
2   xyz     890     'xyz890'

... but we want to change the generation logic, for example reversing the concatenation order (data_b || data_a). Would that "backfill" my previous records, or keep them as-is and only apply the new logic to new records?
I.e., would this change result in this ("backfill"):

id  data_a  data_b  generated_data
1   abc     123     '123abc'
2   xyz     890     '890xyz'
3   lmn     567     '567lmn'

... or this ("maintain")?
id  data_a  data_b  generated_data
1   abc     123     'abc123'
2   xyz     890     'xyz890'
3   lmn     567     '567lmn'
From the PostgreSQL documentation on generated columns:
"A stored generated column is computed when it is written (inserted or updated) and occupies storage as if it were a normal column. PostgreSQL currently implements only stored generated columns."
Even if you could change the expression in place, it would not backfill the existing rows; you would have to run an explicit UPDATE to make that happen.
The bigger issue is that I can't see any way to alter the generation expression without dropping the column and adding it back with the new expression. Doing so, however, recomputes the column for every row, so all values end up reflecting the new expression.
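The write-time semantics can be demonstrated with sqlite3 (3.31+), which also implements STORED generated columns, standing in for Postgres here. Recreating the column with the new expression and rewriting the rows recomputes every value, i.e. a full "backfill":

```python
import sqlite3

# Old table with the original expression (data_a || data_b), per the question.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_old (
  id INTEGER PRIMARY KEY, data_a TEXT, data_b TEXT,
  generated_data TEXT GENERATED ALWAYS AS (data_a || data_b) STORED
);
INSERT INTO t_old (id, data_a, data_b) VALUES (1,'abc','123'),(2,'xyz','890');
""")

# The expression cannot be altered in place, so recreate the table with the
# reversed expression and copy the base columns across. The generated column
# is computed at write time for every copied row -- a full backfill.
con.executescript("""
CREATE TABLE t_new (
  id INTEGER PRIMARY KEY, data_a TEXT, data_b TEXT,
  generated_data TEXT GENERATED ALWAYS AS (data_b || data_a) STORED
);
INSERT INTO t_new (id, data_a, data_b) SELECT id, data_a, data_b FROM t_old;
""")

old_vals = [r[0] for r in con.execute("SELECT generated_data FROM t_old ORDER BY id")]
new_vals = [r[0] for r in con.execute("SELECT generated_data FROM t_new ORDER BY id")]
print(old_vals, new_vals)
```

In Postgres the same effect comes from `ALTER TABLE ... DROP COLUMN` followed by `ADD COLUMN ... GENERATED ALWAYS AS (...) STORED`, which rewrites the table.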

how to create a formatted output directly using databricks SQL query

We are using a SELECT query to fetch data by joining tables in Databricks SQL. From the resulting dataset we also need to create a header record (containing static information) and a trailer record (containing details that depend on the join output).
An example follows.
Let's assume there are two Databricks SQL tables, "class" and "student", joined on a common column "student_id". We are using the following query to obtain the marks of all students in each class:
Select
a.student_id
, a.student_name
, a.student_age
, b.class
, b.roll_no
, b.marks
from student as a
inner join class as b
On a.student_id = b.student_id
From the join dataset I need to create the following final output -
Header 202205 some_static_text
Student01 Tom 23 01 01 50
Student02 Dick 21 01 02 40
Student03 Harry 22 01 03 30
Trailer some_text 120
where the last field in the trailer record (120) is the sum of the last field (b.marks) in the SQL join output.
Is it possible to achieve the entire final output with a single SQL query and without using any other tool/script?
One constraint: our team has only SELECT permission on the Databricks tables.
Any help is appreciated.
Thanks.
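One read-only approach (it needs only SELECT) is to build the header, detail, and trailer rows as separate SELECTs, UNION ALL them into a single text column, and sort on an extra ordering key so the header stays first and the trailer last. A sketch with sqlite3 standing in for Databricks SQL, using the table and column names from the question:

```python
import sqlite3

# Sample data from the question; sqlite3 stands in for Databricks SQL.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (student_id TEXT, student_name TEXT, student_age INTEGER);
CREATE TABLE class (student_id TEXT, class TEXT, roll_no TEXT, marks INTEGER);
INSERT INTO student VALUES ('Student01','Tom',23),('Student02','Dick',21),
                           ('Student03','Harry',22);
INSERT INTO class VALUES ('Student01','01','01',50),('Student02','01','02',40),
                         ('Student03','01','03',30);
""")

# ord is an ordering key: 0 = header, 1 = detail, 2 = trailer.
query = """
SELECT 0 AS ord, 'Header 202205 some_static_text' AS line
UNION ALL
SELECT 1, a.student_id || ' ' || a.student_name || ' ' || a.student_age
          || ' ' || b.class || ' ' || b.roll_no || ' ' || b.marks
FROM student a JOIN class b ON a.student_id = b.student_id
UNION ALL
SELECT 2, 'Trailer some_text ' || SUM(b.marks)
FROM class b
ORDER BY ord, line
"""
lines = [r[1] for r in con.execute(query)]
print("\n".join(lines))
```

Databricks SQL accepts the same shape (`concat`/`||`, `UNION ALL`, a final `ORDER BY`); the exact formatting functions you use for each field may differ.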

PostgreSQL: a variation of rows to columns

PostgreSQL v9.3
Best explained with an example:
So I have 2 tables:
Books table:
book_id name
1 Aragorn
2 Harry Potter
3 The Great Gatsby
4 Book name, with a comma
Users ids to books ids table:
user_id book_id
31 1
31 2
32 3
34 1
34 4
And I would like to show each user his/her books, something like this:
user_id book_names
31 Aragorn,Harry Potter
32 The Great Gatsby
34 Aragorn,Book name, with a comma
Basically, each user gets his/her books separated by commas.
How can I achieve this in an efficient way?
If you are using Postgres version 8.4 or later, then you have array_agg() at your disposal. One option is to aggregate over the user books table by user_id and then use array_agg() to generate the CSV list of books for each user.
SELECT t1.user_id,
       array_to_string(array_agg(t2.name), ',') AS book_names
FROM user_books t1
INNER JOIN books t2
    ON t1.book_id = t2.book_id
GROUP BY t1.user_id;
In Postgres 9.0 and above, you could use the following to aggregate book names into a CSV list:
string_agg(t2.name, ',' order by t2.name)
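The aggregation can be exercised end to end with sqlite3, whose group_concat() plays the role of string_agg() here; table and column names follow the question:

```python
import sqlite3

# Sample data from the question; sqlite3's group_concat() stands in for
# Postgres string_agg()/array_agg().
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE books (book_id INTEGER, name TEXT);
CREATE TABLE user_books (user_id INTEGER, book_id INTEGER);
INSERT INTO books VALUES (1,'Aragorn'),(2,'Harry Potter'),
                         (3,'The Great Gatsby'),(4,'Book name, with a comma');
INSERT INTO user_books VALUES (31,1),(31,2),(32,3),(34,1),(34,4);
""")

# Group by user and concatenate the joined book names into one CSV string.
rows = con.execute("""
    SELECT t1.user_id, group_concat(t2.name, ',') AS book_names
    FROM user_books t1
    JOIN books t2 ON t1.book_id = t2.book_id
    GROUP BY t1.user_id
    ORDER BY t1.user_id
""").fetchall()
for r in rows:
    print(r)
```

Note that a book name containing a comma (book_id 4) is indistinguishable from two names in the output; if that matters, pick a separator that cannot occur in the data, or use array_agg() in Postgres and keep the result as an array.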

Export data from db2 with column names

I want to export data from DB2 tables to CSV format. I also need the first row to contain the column names.
I had little success using the following command:
EXPORT TO "TEST.csv"
OF DEL
MODIFIED BY NOCHARDEL coldel: ,
SELECT col1,'COL1',x'0A',col2,'COL2',x'0A'
FROM TEST_TABLE;
But with this I get data like:
Row1 Value:COL1:
Row1 Value:COL2:
Row2 Value:COL1:
Row2 Value:COL2:
etc.
I also tried the following query
EXPORT TO "TEST.csv"
OF DEL
MODIFIED BY NOCHARDEL
SELECT 'COL1',col1,'COL2',col2
FROM ADMIN_EXPORT;
But this lists the column name alongside each row of data when opened with Excel.
Is there a way I can get the data in the format below
COL1 COL2
value value
value value
when opened in Excel?
Thanks
After days of searching I solved this problem this way:
EXPORT TO ...
SELECT 1 as id, 'COL1', 'COL2', 'COL3' FROM sysibm.sysdummy1
UNION ALL
(SELECT 2 as id, COL1, COL2, COL3 FROM myTable)
ORDER BY id
You can't select a constant string in DB2 without a table, so you have to select from sysibm.sysdummy1.
To get the manually added column names into the first row, you have to add a pseudo-id and sort the UNION result by that id; otherwise the header row can end up at the bottom of the resulting file.
Quite an old question, but I recently encountered a similar one and realized this can be achieved much more easily in the 11.5 release with the EXTERNAL TABLE feature; see the answer here:
https://stackoverflow.com/a/57584730/11946299
Example:
$ db2 "create external table '/home/db2v115/staff.csv'
using (delimiter ',' includeheader on) as select * from staff"
DB20000I The SQL command completed successfully.
$ head /home/db2v115/staff.csv | column -t -s ','
ID NAME DEPT JOB YEARS SALARY COMM
10 Sanders 20 Mgr 7 98357.50
20 Pernal 20 Sales 8 78171.25 612.45
30 Marenghi 38 Mgr 5 77506.75
40 O'Brien 38 Sales 6 78006.00 846.55
50 Hanes 15 Mgr 10 80659.80
60 Quigley 38 Sales 66808.30 650.25
70 Rothman 15 Sales 7 76502.83 1152.00
80 James 20 Clerk 43504.60 128.20
90 Koonitz 42 Sales 6 38001.75 1386.70
Insert the column names as the first row of your result set.
Use ORDER BY to make sure that the row with the column names comes out first.