ABAP: Select the same field from two DB tables into one column of an internal table with one SELECT

I'm new to using SQL in ABAP, so this may be a silly question, but is something like this somehow possible? (As a new user I cannot add images directly, so I can't show a screenshot.)
Thanks a lot in advance, kind regards

It appears that what you want to do is an SQL "union": select rows from two different database tables and put the results into one combined table. This is only possible if both SELECTs return the same list of fields, but you can usually accomplish that pretty easily by substituting the fields missing from each table with a constant value ('' in this example):
SELECT key1,
       '' AS key2,
       value1,
       value2,
       value3,
       '' AS value4
  FROM db1
UNION SELECT
       key1,
       key2,
       value1,
       '' AS value2,
       '' AS value3,
       value4
  FROM db2
  INTO CORRESPONDING FIELDS OF TABLE @it.
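For this to compile, it must be an internal table whose line type contains all six fields. A minimal sketch, assuming hypothetical field types (adjust them to the actual types in db1 and db2):

TYPES: BEGIN OF ty_combined,
         key1   TYPE char10,
         key2   TYPE char10,
         value1 TYPE char20,
         value2 TYPE char20,
         value3 TYPE char20,
         value4 TYPE char20,
       END OF ty_combined.
DATA it TYPE STANDARD TABLE OF ty_combined WITH EMPTY KEY.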
If you are using an older release (< 7.50), you can combine the data of multiple SELECTs by using APPENDING instead of INTO; fields that are missing from one of the SELECTs simply remain initial in the appended rows:
SELECT key1
       value1
       value2
       value3
  FROM db1
  INTO CORRESPONDING FIELDS OF TABLE it.

SELECT key1
       key2
       value1
       value4
  FROM db2
  APPENDING CORRESPONDING FIELDS OF TABLE it.

Related

How to create new table with multiple columns from json table

I have a table filled with json objects. There is only one column 'data'.
I would like to convert this table into one with multiple columns for all the keys in the json objects.
For example,
Right now my table is like this:
data
{'key1': 'value1', 'key2': 'value2'}
{'key1': 'value3', 'key2': 'value4'}
But I would like it like this:

key1   key2
value1 value2
value3 value4
I'm not trying to query it in this way, I would like to completely create a new table in the format that I've shown above.
I can try running:
INSERT INTO new_table
SELECT data -> 'key1', data -> 'key2'
FROM old_table
But since my json objects have hundreds of keys, this may be inefficient. Is there a more efficient way to do so? Any help or suggestions are appreciated.
Maybe this way is simpler?

select t.*
from yourtable
cross join json_to_record(jsoncol) t(key1 text, key2 text, key3 text, ...);
On second thought, if you already have a table whose columns match the keys of your JSON, you can do this:
create table newTable (key1 text, key2 text, ...);

select t.*
from yourtable
cross join json_populate_record(null::newTable, jsoncol) t;
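To actually fill newTable rather than just select the rows, the same call can feed an INSERT (a sketch reusing the same hypothetical names):

insert into newTable
select t.*
from yourtable
cross join json_populate_record(null::newTable, jsoncol) t;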

Concat in loadscript between SQL statements

Hey folks,
this is driving me crazy. I'm trying to concatenate some values from one table in order to use them in a WHERE clause of another statement, like in this script:
LIB CONNECT TO 'MSSQLSERVER';
TempTab:
Load KST;
SQL SELECT KST FROM vKST WHERE Region = 'Driver';
Let Test = Concat(distinct KST, ',');
drop Table TempTab;
// ...
LIB CONNECT TO 'ORACLESERVER';
Foo:
Load *;
SQL SELECT Value FROM KSTvalues WHERE KST IN ($(Test));
My problem is that the variable "Test" only ever evaluates to null. Does anyone have a working idea for this?
In this case the Concat function has to be used in the context of a table load in order to aggregate all the values of a field. So you first load the values into a temp table and perform the concatenation there, and then use a variable to pick up the resulting field value.

Have a look at the script below. The concatenation is performed in TempTable, and the peek function is then used to read the value of ConcatField into the vConcatValues variable (TempTable is dropped at the end because it is not needed once the variable is populated). TempTable will contain a single row whose ConcatField is Value1,Value2,Value3,Value4,Value5, and vConcatValues will hold the same string:
RawData:
Load * inline [
Values
Value1
Value1
Value2
Value3
Value4
Value5
];
TempTable:
Load
Concat(distinct Values, ',') as ConcatField
Resident
RawData
;
let vConcatValues = peek('ConcatField');
// We don't need the TempTable anymore, so it can be dropped
Drop Table TempTable;
P.S. The SQL clause will probably raise an error, because the values will not be quoted as strings. In that case you can use something like this:
TempTable:
Load
Concat(distinct Values, '","') as ConcatField
Resident
RawData
;
Using "," as separator will result in Value1","Value2","Value3","Value4", "Value5 (see the missing " in front and in the end)
We'll have to tweak the variable a bit to fix this:
let vConcatValues = '"' & peek('ConcatField') & '"';
And the result will then be:
"Value1","Value2","Value3","Value4","Value5"

Can I construct an update query in postgres where the columns to update are nested in a case?

Is there a way in postgres to update multiple columns with a single case clause? I've only ever seen it done the other way around, that is, a separate case expression per column:

update table
set col1 = case when condition then value1 else value2 end,
    col2 = case when condition then value3 else value4 end
where ...
But what if I want to update several columns and the condition in the case clause is the same for all of them? Wouldn't it be more concise to have a syntax that allows the case clause to be written once, with each column assignment nested inside it? Something like this:
update table
case when condition then
set col1 = value1, col2 = value2, ...
else
set col1 = value3, col2 = value4, ...
end
where ...
Is this possible in postgres?
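Postgres has no syntax like this, but since 9.5 an UPDATE can assign several columns from one sub-select, which at least groups the assignments together. A sketch with a hypothetical table t and boolean column flag (note the case expression still has to be repeated per column):

update t
set (col1, col2) = (
    select case when t.flag then 'value1' else 'value3' end,
           case when t.flag then 'value2' else 'value4' end
)
where ...;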

KDB: How to assign string datatype to all columns

When I created the table Tab, I specified the columns as string,
Tab: ([Key1:string()] Col1:string();Col2:string();Col3:string())
But the column datatype (t) shown by meta is empty, so I suppose specifying the columns as string has no effect:
meta Tab
c   | t f a
----| -----
Key1|
Col1|
Col2|
Col3|
After I do a bulk upsert in Java...
c.Dict dict = new c.Dict((Object[]) columns.toArray(new String[columns.size()]), data);
c.Flip flip = new c.Flip(dict);
conn.c.ks("upsert", table, flip);
The datatypes are all symbols:
meta Tab
c   | t f a
----| -----
Key1| s
Col1| s
Col2| s
Col3| s
How can I specify the datatype of the columns as string and have it remain as string?
You can't define a column of an empty table as string, because strings are merely lists of characters (a string column is a list of lists). You can only set the columns as empty lists, which is what your code is effectively doing, and the column will then take on the type of whatever data is first inserted into it.

The real question is why your Java process is sending symbols when it should be sending strings. You need to make that change there before publishing to KDB.
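A minimal sketch of that change on the Java side, assuming the kx c.java client used in the question: java.lang.String values arrive in kdb+ as symbols, while char[] values arrive as strings, so the column data should be built from char arrays (the row values here are hypothetical):

// each column is an array of char[] so kdb+ receives char vectors (strings),
// not symbols (which is what java.lang.String elements would produce)
Object[] data = new Object[] {
    new char[][] { "test".toCharArray() },  // Key1
    new char[][] { "test".toCharArray() },  // Col1
    new char[][] { "test".toCharArray() },  // Col2
    new char[][] { "test".toCharArray() }   // Col3
};
c.Dict dict = new c.Dict(columns.toArray(new String[0]), data);
conn.c.ks("upsert", table, new c.Flip(dict));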
Note that if you define the columns as char you still won't be able to upsert strings:
q)Tab: ([Key1:`char$()] Col1:`char$();Col2:`char$();Col3:`char$())
q)Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
'rank
[0] Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
^
q)Tab: ([Key1:()] Col1:();Col2:();Col3:())
q)Tab upsert ([Key1:enlist"test"] Col1:enlist"test";Col2:enlist"test";Col3:enlist "test")
Key1  | Col1   Col2   Col3
------| --------------------
"test"| "test" "test" "test"
KDB does not allow you to define a column's type as a list when creating a table, which means you cannot declare a column as string, because a string is itself a list (of chars). The only way is to define the column as an empty untyped list:
q) t:([]id:`int$();val:())
Then, when you insert data into this table, the column will automatically take on the type of that data:
q)`t insert (4;"row1")
q) meta t
c | t f a
---| -----
id | i
val| C
In your case, one option is to send string data from your Java process, as user 'emc211' mentioned; the other is to convert the data to strings in the KDB process before insertion.
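A minimal sketch of that in-process conversion, assuming Tab has already been populated with symbols as in the question: unkey the table, convert each symbol column with string, then rekey it.

q)Tab:1!update string Key1, string Col1, string Col2, string Col3 from 0!Tab
q)meta Tab
c   | t f a
----| -----
Key1| C
Col1| C
Col2| C
Col3| C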

NatTable grid representation for AND & OR operations

I am using the NatTable framework to display and edit data in a grid UI in an Eclipse RCP application.
I have the following expression:

Group1
  Value1  X
  Value2  X
  Value3

(X means the value is selected.)
I have two use cases: in one the system should apply a logical OR to Value1 and Value2, and in another it should treat them as a logical AND:
Use case 1 : Value1 && Value2
Use case 2 : Value1 || Value2
What is a good way (UI gesture) to represent and handle such data and use cases in NatTable?