Is it possible to convert table data to a similar type using the VALUE operator?

I think this question is best illustrated with an example. I've got two tables of different types (but table_type1 is easily convertible to table_type2):
i_tab1 TYPE table_type1 (fields: matnr, maktx, spras)
i_tab2 TYPE table_type2 (fields: mandt, matnr)
Is it possible to use the VALUE operator, possibly with FOR line IN i_tab1 (or maybe a similar inline construct), to convert and transfer the data of i_tab1 to i_tab2? I was thinking of something like the following:
i_tab2 = VALUE #( FOR line IN i_tab1
                  BASE = gt_itab2 (
                    MANDT = sy-mandt;
                    MATNR = line-matnr
                  )
                ).

You were close. Here is a solution you might find helpful.
REPORT ZZZ.

TYPES: BEGIN OF tab1_line,
         matnr TYPE mara-matnr,
         maktx TYPE makt-maktx,
         spras TYPE makt-spras,
       END OF tab1_line,
       BEGIN OF tab2_line,
         mandt TYPE t000-mandt,
         matnr TYPE mara-matnr,
       END OF tab2_line,
       table_type1 TYPE STANDARD TABLE OF tab1_line WITH EMPTY KEY,
       table_type2 TYPE STANDARD TABLE OF tab2_line WITH EMPTY KEY.

DATA:
  g_tab1 TYPE table_type1,
  g_tab2 TYPE table_type2.

START-OF-SELECTION.
  g_tab2 = VALUE #( BASE g_tab2 FOR i IN g_tab1 ( mandt = sy-mandt matnr = i-matnr ) ).

How to use array operators for type bytea[]?

Is it possible to use array operators on a type of bytea[]?
For example:
CREATE TABLE test (
metadata bytea[]
);
SELECT * FROM test WHERE test.metadata && ANY($1);
-- could not find array type for data type bytea[]
If it's not possible, is there an alternative approach without changing the type from bytea[]?
PostgreSQL 12.x
Do not use ANY; compare the arrays directly using an array constructor and the && (overlap) operator:
CREATE TABLE test (
metadata bytea[]
);
INSERT INTO public.test (metadata) VALUES('{"x","y"}');
SELECT * FROM test t WHERE metadata && array[E'\x78'::bytea];
When using ANY, the left-hand expression is evaluated and compared to each element of the right-hand array using the given operator, which must yield a Boolean result. So the original SQL was effectively trying to evaluate bytea[] && bytea.
This applies not only to bytea[] but to any array type, e.g. text[] or integer[].
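For contrast, here is a minimal sketch of what ANY is actually for, using the same test table as above; the left-hand side must be a single value (one bytea), not an array:
SELECT * FROM test t WHERE E'\x78'::bytea = ANY(t.metadata);    -- element membership
SELECT * FROM test t WHERE t.metadata && ARRAY[E'\x78'::bytea]; -- array overlap, as in the answer above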

How to get only specific rows from the DB when a date range fits a SQL condition on a 'tsrange' datatype? [duplicate]

I have this query:
some_id = 1
cursor.execute('''
    SELECT "Indicator"."indicator"
    FROM "Indicator"
    WHERE "Indicator"."some_id" = %s;''', some_id)
I get the following error:
TypeError: 'int' object does not support indexing
some_id is an int, but I'd like to select indicators that have some_id = 1 (or whatever number I decide to put in the variable).
cursor.execute('''
    SELECT "Indicator"."indicator"
    FROM "Indicator"
    WHERE "Indicator"."some_id" = %s;''', [some_id])
This turns the some_id parameter into a list, which is indexable. Assuming your method works like I think it does, this should work.
The error is happening because somewhere in that method, it is probably trying to iterate over that input, or index directly into it. Possibly like this: some_id[0]
By making it a list (or iterable), you allow it to index into the first element like that.
You could also make it into a tuple by doing this: (some_id,) which has the advantage of being immutable.
You should pass query parameters to execute() as a tuple (an iterable, strictly speaking), (some_id,) instead of some_id:
cursor.execute('''
    SELECT "Indicator"."indicator"
    FROM "Indicator"
    WHERE "Indicator"."some_id" = %s;''', (some_id,))
Your id needs to be some sort of iterable for mogrify to understand the input. Here's the relevant quote from the frequently asked questions documentation:
>>> cur.execute("INSERT INTO foo VALUES (%s)", "bar") # WRONG
>>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar")) # WRONG
>>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar",)) # correct
>>> cur.execute("INSERT INTO foo VALUES (%s)", ["bar"]) # correct
This should work:
some_id = 1
cursor.execute('''
    SELECT "Indicator"."indicator"
    FROM "Indicator"
    WHERE "Indicator"."some_id" = %s;''', (some_id,))
Slightly similar error when using Django:
TypeError: 'RelatedManager' object does not support indexing
This doesn't work
mystery_obj[0].id
This works:
mystery_obj.all()[0].id
Basically, the error means that some type doesn't implement __iter__, __next__, or indexing, so it cannot be used with next(), [index] access, or iter(). In this case the arguments to cursor.execute need to be iterable: most commonly a list or tuple, less commonly an array or some custom iterator implementation.
In this specific case the error happens when the classic string interpolation goes to fill in the %s, %d, %b placeholders.
Related:
How to implement __iter__(self) for a container object (Python)
Pass the parameter in a list, which is indexable:
cur.execute("select * from tableA where id = %s", [parameter])
I had the same problem and it worked when I used normal formatting.
cursor.execute(f'''
    SELECT "Indicator"."indicator"
    FROM "Indicator"
    WHERE "Indicator"."some_id" = {some_id};''')
Typecasting some_id to string also works.
cursor.execute(""" SELECT * FROM posts WHERE id = %s """, (str(id), ))

Explicit type conversion in PostgreSQL

I am joining the two tables using the query below:
update campaign_items
set last_modified = evt.event_time
from (
    select max(event_time) event_time, result
    from events
    where request = '/campaignitem/add'
    group by result
) evt
where evt.result = campaign_items.id
The result column is of type character varying and id is of type integer, but the data in the result column contains only digits (e.g. 12345).
How would I run this query, converting the type of result (character) to that of id (integer)?
Well, you don't need to, because PostgreSQL will do implicit type conversion in this situation. For example, you can try:
select ' 12 ' = 12
You will see that it returns true even though there is extra whitespace in the string version. Nevertheless, if you need an explicit conversion:
where evt.result::int = campaign_items.id
According to your comment you have values like convRepeatDelay; these obviously cannot be converted to int. What you should then do is convert your int to text instead:
where evt.result = campaign_items.id::text
There are several solutions. You can use the cast operator :: to cast a value from a given type into another type:
WHERE evt.result::int = campaign_items.id
You can also use the CAST function, which is more portable:
WHERE CAST(evt.result AS int) = campaign_items.id
Note that to improve performance, you can add an index on the casting operation (note the mandatory double parentheses), but then you have to use GROUP BY result::int instead of GROUP BY result to take advantage of the index:
CREATE INDEX i_events_result ON events ((result::int));
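A rough sketch of the subquery rewritten to group on that indexed expression (this, like the index itself, assumes every result value really is numeric):
select max(event_time) event_time, result::int as result
from events
where request = '/campaignitem/add'
group by result::int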
By the way, the best option may be to change the result column type to int, if you know that it will only contain integers ;-)
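A minimal sketch of that column change; the USING clause assumes every existing result value is numeric, otherwise the statement will fail:
ALTER TABLE events
    ALTER COLUMN result TYPE int
    USING result::int;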

PostgreSQL - Interpreted type for NULL is wrong

I have a problem with the following CTE expression because prev_count in new_values is being interpreted as text, but the column I'm updating in counts is of type integer. I'm getting this error on the marked line:
ERROR: column "prev_count" is of type integer but expression is of type text
LINE 12: prev_count = new_values.prev_count
Here's the query:
WITH
  new_values (word, count, txid, prev_count) AS (
    VALUES ('cat', 1, 5, NULL)
  ),
  updated AS (
    UPDATE counts t
    SET
      count = new_values.count,
      txid = new_values.txid,
      prev_count = new_values.prev_count  -- ERROR HERE
    FROM new_values
    WHERE t.word = new_values.word
    RETURNING t.*
  )
INSERT INTO counts (word, count, txid, prev_count)
SELECT word, count, txid, prev_count
FROM new_values
WHERE NOT EXISTS (
  SELECT 1 FROM updated WHERE updated.word = new_values.word
)
My question is, what's an elegant way to fix the error? I would rather specify the type of prev_count in new_values instead of adding an explicit cast, but I don't see anything like that in the docs.
Adding this here as an explicit answer along with a detailed explanation.
The fix is:
WITH
  new_values (word, count, txid, prev_count) AS (
    VALUES ('cat', 1, 5, NULL::int)
  ),
As a_horse_with_no_name suggested in the comments.
Why is this necessary? Because the row specification comes from the VALUES section and a bare NULL has unknown type. In that case PostgreSQL falls back to text, which is not what you want here, so you have to give the NULL an explicit type.
This often comes up in other cases too, such as UNION statements, where a NULL in the column list of the first segment can be given an implicit type that clashes with the type of the same column in another segment. So this is a tricky corner worth knowing about.
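A minimal sketch of the UNION variant of the pitfall, using a throwaway table; both branches are untyped NULLs, so the derived column resolves to text and clashes with the integer target:
CREATE TEMP TABLE t (prev_count integer);
INSERT INTO t (prev_count)
SELECT x FROM (SELECT NULL AS x UNION ALL SELECT NULL) s;
-- fails: column "prev_count" is of type integer but expression is of type text
-- giving one of the NULLs an explicit type fixes the resolution:
INSERT INTO t (prev_count)
SELECT x FROM (SELECT NULL::int AS x UNION ALL SELECT NULL) s;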

Universal function module to retrieve SAP table data

What is the best way to access table data from a SAP system?
I tried RFC_READ_TABLE, but this RFC returns the data in concatenated form within a single column and has a size restriction on the row data.
Is there a better way to access SAP data in generic form without creating custom RFCs into the system?
I am searching for a standard RFC solution, not a custom script.
If I understand your question right, you want to read a table, but at the time of programming you don't know which table.
With SELECT * FROM (tablename) you can read with a dynamic table name.
The target variable can be defined dynamically with CREATE DATA.
An example (untested; currently I have no access to an SAP system):
DATA: lv_tablename  TYPE string,
      ev_filelength TYPE i.

lv_tablename = 'mara'. "e.g. a parameter

DATA dref TYPE REF TO data.
CREATE DATA dref TYPE TABLE OF (lv_tablename).

FIELD-SYMBOLS: <wa> TYPE ANY TABLE.
ASSIGN dref->* TO <wa>.

SELECT * FROM (lv_tablename) INTO TABLE <wa>. "Attention for test, may be large result
"<wa> is like a variable with type table of mara
TYPES: BEGIN OF t_bseg,
*        INCLUDE STRUCTURE bseg.
         bukrs LIKE bseg-bukrs,
         belnr LIKE bseg-belnr,
         gjahr LIKE bseg-gjahr,
         buzei LIKE bseg-buzei,
         mwskz LIKE bseg-mwskz, "Tax code
         umsks LIKE bseg-umsks, "Special G/L transaction type
         prctr LIKE bseg-prctr, "Profit Centre
         hkont LIKE bseg-hkont, "G/L account
         xauto LIKE bseg-xauto,
         koart LIKE bseg-koart,
         dmbtr LIKE bseg-dmbtr,
         mwart LIKE bseg-mwart,
         hwbas LIKE bseg-hwbas,
         aufnr LIKE bseg-aufnr,
         projk LIKE bseg-projk,
         shkzg LIKE bseg-shkzg,
         kokrs LIKE bseg-kokrs,
       END OF t_bseg.

DATA: it_bseg TYPE STANDARD TABLE OF t_bseg INITIAL SIZE 0,
      wa_bseg TYPE t_bseg.

DATA: it_ekko TYPE STANDARD TABLE OF ekko.

* Select all fields of an SAP database table into an itab
SELECT *
  FROM ekko
  INTO TABLE it_ekko.
Try this snippet of RFC_READ_TABLE to get data in structured form:
DATA: oref       TYPE REF TO cx_root,
      text       TYPE string,
      obj_data   TYPE REF TO data,
      lt_options TYPE TABLE OF rfc_db_opt,
      ls_option  TYPE rfc_db_opt,
      lt_fields  TYPE TABLE OF rfc_db_fld,
      ls_field   TYPE rfc_db_fld,
      lt_entries TYPE STANDARD TABLE OF tab512.
FIELD-SYMBOLS: <fs_tab> TYPE STANDARD TABLE.

TRY.
    ls_option-text = `some query`.
    APPEND ls_option TO lt_options.

    ls_field-fieldname = 'PARTNER'.
    APPEND ls_field TO lt_fields.
    ls_field-fieldname = 'TYPE'.
    APPEND ls_field TO lt_fields.
    ls_field-fieldname = 'BU_GROUP'.
    APPEND ls_field TO lt_fields.
    ls_field-fieldname = 'BU_SORT1'.
    APPEND ls_field TO lt_fields.
    ls_field-fieldname = 'TITLE'.
    APPEND ls_field TO lt_fields.

    CALL FUNCTION 'RFC_READ_TABLE' DESTINATION dest
      EXPORTING
        query_table = 'BUT000'
      TABLES
        options     = lt_options
        fields      = lt_fields
        data        = lt_entries.
  CATCH cx_root INTO oref.
    text = oref->get_text( ).
    MESSAGE text TYPE 'E'.
ENDTRY.
IF lt_entries IS NOT INITIAL.
  CREATE DATA obj_data TYPE TABLE OF but000.
  ASSIGN obj_data->* TO <fs_tab>.
  CREATE DATA obj_data TYPE but000.
  ASSIGN obj_data->* TO FIELD-SYMBOL(<fs_line>).

  LOOP AT lt_entries ASSIGNING FIELD-SYMBOL(<wa_data>).
    LOOP AT lt_fields ASSIGNING FIELD-SYMBOL(<fs_fld>).
      ASSIGN COMPONENT <fs_fld>-fieldname OF STRUCTURE <fs_line> TO FIELD-SYMBOL(<lv_field>).
      IF <lv_field> IS ASSIGNED AND sy-subrc IS INITIAL.
        <lv_field> = <wa_data>-wa+<fs_fld>-offset(<fs_fld>-length).
      ENDIF.
    ENDLOOP.
    "append once per row, after all fields of the line have been filled
    APPEND <fs_line> TO <fs_tab>.
  ENDLOOP.
ENDIF.

IF <fs_tab> IS ASSIGNED AND <fs_tab> IS NOT INITIAL.
  "Bingo!
ENDIF.