One2many field issue in Odoo 10.0 - PostgreSQL

I have a very weird issue with a One2many field.
First let me explain the scenario.
I have a One2many field on sale.order.line; the code below explains the structure:
from odoo import fields, models

class testModule(models.Model):
    _name = 'test.module'

    name = fields.Char()

class testModule2(models.Model):
    _name = 'test.module2'

    location_id = fields.Many2one('test.module')
    field1 = fields.Char()
    field2 = fields.Many2one('sale.order.line')

class testModule3(models.Model):
    _inherit = 'sale.order.line'

    test_location = fields.One2many('test.module2', 'field2')
CASE 1:
Now what happens: I create a new sales order, select the partner_id, add a sale.order.line, and inside this line add the One2many test_location entries, then save.
CASE 2:
Create a new sales order, select partner_id, then add a sale.order.line and inside it add the test_location line (and close the sales order line window). Now, before hitting save, I change a header field, say partner_id, and then click save.
CASE 3:
This case is the same as case 2, except that I change the partner_id field once more (two changes in total: the one from case 2 and this one), then click save.
RESULTS
CASE 1 works fine.
CASE 2 fails with:
odoo.sql_db: bad query: INSERT INTO "test_module2" ("id", "field2", "field1", "location_id", "create_uid", "write_uid", "create_date", "write_date") VALUES(nextval('test_module2_id_seq'), 27, 'asd', ARRAY[1, '1'], 1, 1, (now() at time zone 'UTC'), (now() at time zone 'UTC')) RETURNING id
ProgrammingError: column "location_id" is of type integer but expression is of type integer[]
LINE 1: ...VALUES(nextval('test_module2_id_seq'), 27, 'asd', ARRAY[1, '...
For this case I put a debugger on the create/write methods of sale.order.line to see what values are being passed:
values = {u'product_uom': 1, u'sequence': 0, u'price_unit': 885, u'product_uom_qty': 1, u'qty_invoiced': 0, u'procurement_ids': [[5]], u'qty_delivered': 0, u'qty_to_invoice': 0, u'qty_delivered_updateable': False, u'customer_lead': 0, u'analytic_tag_ids': [[5]], u'state': u'draft', u'tax_id': [[5]], u'test_location': [[5], [0, 0, {u'field1': u'asd', u'location_id': [1, u'1']}]], 'order_id': 20, u'price_subtotal': 885, u'discount': 0, u'layout_category_id': False, u'product_id': 29, u'price_total': 885, u'invoice_status': u'no', u'name': u'[CARD] Graphics Card', u'invoice_lines': [[5]]}
In the values above, location_id is passed as u'location_id': [1, u'1'], which is not correct, so as a workaround I fix the value in code, update the values, and pass those on.
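To illustrate, here is a minimal sketch of that kind of defensive cleanup as a create/write override. It is not a root-cause fix, and the normalization logic is my assumption based on the malformed values shown above:

from odoo import api, models

class SaleOrderLine(models.Model):
    _inherit = 'sale.order.line'

    def _clean_test_location(self, vals):
        # One2many commands look like [0, 0, {...}] or [1, id, {...}].
        # When the client sends location_id as [id, u'display_name'],
        # keep only the integer id the many2one column expects.
        for command in vals.get('test_location') or []:
            if len(command) == 3 and isinstance(command[2], dict):
                loc = command[2].get('location_id')
                if isinstance(loc, (list, tuple)) and loc:
                    command[2]['location_id'] = loc[0]
        return vals

    @api.model
    def create(self, vals):
        return super(SaleOrderLine, self).create(self._clean_test_location(vals))

    @api.multi
    def write(self, vals):
        return super(SaleOrderLine, self).write(self._clean_test_location(vals))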
CASE 3
If the user changes the field two or more times (the change from case 2 plus at least one more), the values are:
values = {u'invoice_lines': [[5]], u'procurement_ids': [[5]], u'tax_id': [[5]], u'test_location': [[5], [1, 7, {u'field1': u'asd', u'location_id': False}]], u'analytic_tag_ids': [[5]]}
Here location_id comes through as:
u'location_id': False
MIXED CASES
If the user performs case 1 and then, on the same record, performs case 2 or case 3, the line is sometimes saved with field2 = NULL/False in the database; other columns such as location_id and field1 hold data, but field2 does not.
NOTE: this happens when changing any header-level field of the sale order, not only partner_id.
I tried debugging it myself but couldn't find the reason why this is happening.

Related

UPDATE SET with different value for each row

I have a Python dict mapping elements to their values. For example:
db_rows_values = {
    <element_uuid_1>: 12,
    <element_uuid_2>: "abc",
    <element_uuid_3>: [123, 124, 125],
}
And I need to update all of them in one query. In Python I did it by generating the query in a loop with CASE:
import ujson

sql_query_elements_values_part = " ".join(
    f"WHEN '{element_id}' "
    f"THEN '{ujson.dumps(value)}'::JSONB "
    for element_id, value in db_rows_values.items()
)
query_part_elements_values_update = f"""
elements_value_update AS (
    UPDATE m2m_entries_n_elements
    SET value =
        CASE element_id
            {sql_query_elements_values_part}
            ELSE NULL
        END
    WHERE element_id = ANY(%(elements_ids)s::UUID[])
      AND entry_id = ANY(%(entries_ids)s::UUID[])
    RETURNING element_id, entry_id, value
),
But now I need to rewrite it in PL/pgSQL. I can pass db_rows_values as an array of ROWTYPE or as JSON, but how can I build something like the WHEN/THEN part?
OK, I can pass the dict as JSON, convert it to rows with json_to_recordset, and replace the WHEN/THEN with SET value = (SELECT .. WHERE):
WITH input_rows AS (
    SELECT *
    FROM json_to_recordset(
        '[
            {"element_id": 2, "value": "new_value_1"},
            {"element_id": 4, "value": "new_value_2"}
        ]'
    ) AS x("element_id" int, "value" text)
)
UPDATE table1
SET value = (SELECT value FROM input_rows WHERE input_rows.element_id = table1.element_id)
WHERE element_id IN (SELECT element_id FROM input_rows);
https://dbfiddle.uk/?rdbms=postgres_14&fiddle=f8b6cd8285ec7757e0d8f38a1becb960
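Driving this from Python, the rows can be passed as a single JSON parameter. Below is a minimal, untested psycopg2 sketch against the fiddle's table1 (the connection string is a placeholder); UPDATE ... FROM joins the decoded rows directly instead of using a correlated subquery:

import json
import psycopg2

rows = [
    {"element_id": 2, "value": "new_value_1"},
    {"element_id": 4, "value": "new_value_2"},
]

with psycopg2.connect("dbname=test") as conn, conn.cursor() as cur:
    cur.execute(
        """
        UPDATE table1
        SET value = x.value
        FROM json_to_recordset(%s::json) AS x(element_id int, value text)
        WHERE table1.element_id = x.element_id
        """,
        (json.dumps(rows),),  # the dict/rows travel as one JSON string
    )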

Writing a query in SQLAlchemy to count occurrences and store IDs

I'm working with a Postgres DB using SQLAlchemy.
I have a table like this:
class Author(Base):
    __tablename__ = "Author"

    id = Column(BIGINT, primary_key=True)
    name = Column(Unicode)
and I want to identify all homonymous authors and save their IDs in a list.
For example, if the database contains 2 authors named "John" and 3 named "Jack", with IDs 11, 22, 33, 44 and 55 respectively, I want my query to return
[("John", [11,22]), ("Jack", [33,44,55])]
So far I've been able to write
[x for x in db_session.query(
    func.count(Author.name),
    Author.name
).group_by(Author.name) if x[0] > 1]
but this just gives me back the number of occurrences:
[(2,"John"),(3,"Jack")]
Thank you very much for the help!
The way to do this in SQL would be to use PostgreSQL's array_agg function to group the ids into an array:
SELECT
    name,
    array_agg(id) AS ids
FROM
    my_table
GROUP BY
    name
HAVING
    count(name) > 1;
The array_agg function collects the ids for each name, and the HAVING clause excludes those with only a single row. The output of the query would look like this:
name │ ids
═══════╪════════════════════
Alice │ {2,4,9,10,16}
Bob │ {1,6,11,12,13}
Carol │ {3,5,7,8,14,15,17}
Translated into SQLAlchemy, the query would look like this:
import sqlalchemy as sa
...
q = (
    db_session.query(Author.name, sa.func.array_agg(Author.id).label('ids'))
    .group_by(Author.name)
    .having(sa.func.count(Author.name) > 1)
)
Calling q.all() will return a list of (name, [ids]) tuples like this:
[
    ('Alice', [2, 4, 9, 10, 16]),
    ('Bob', [1, 6, 11, 12, 13]),
    ('Carol', [3, 5, 7, 8, 14, 15, 17]),
]
In SQLAlchemy 1.4/2.0-style syntax the equivalent would be:
with Session() as s:
    q = (
        sa.select(Author.name, sa.func.array_agg(Author.id).label('ids'))
        .group_by(Author.name)
        .having(sa.func.count(Author.name) > 1)
    )
    res = s.execute(q)
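In that style, res is a Result of Row objects, so (untested) the same list of (name, ids) tuples can be materialized with:

# each Row unpacks like a (name, ids) tuple
homonyms = [(name, ids) for name, ids in res.all()]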

How to update or insert into same table in DB2

I am trying to update a row if it exists, or insert it if it does not, in the same table in DB2 (v9.7).
I have a table "V_OPONAROC" (schema SQLDBA) which contains three columns, two of which form the primary key: IDESTE (PK), IDEPOZ (PK), OPONAR.
My case is: if a row (OPONAR) where IDESTE = 123456 AND IDEPOZ = 0 does not exist, insert a new row; if it exists, update OPONAR. I have tried this:
MERGE INTO SQLDBA.V_OPONAROC AS O1
USING (SELECT IDESTE, IDEPOZ, OPONAR
       FROM SQLDBA.V_OPONAROC
       WHERE IDESTE = 123456 AND IDEPOZ = 0) AS O2
ON (O1.IDESTE = O2.IDESTE)
WHEN MATCHED THEN
    UPDATE SET OPONAR = 'test text'
WHEN NOT MATCHED THEN
    INSERT (IDESTE, IDEPOZ, OPONAR)
    VALUES (123456, 0, 'test new text')
Executing the code above, I get this warning:
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 0,013, SQL query: 0,013, Reading results: 0
Query 1 of 1, Rows read: 3, Elapsed time (seconds) - Total: 0,002, SQL query: 0,001, Reading results: 0,001
Warning: DB2 SQL Warning: SQLCODE=100, SQLSTATE=02000, SQLERRMC=null, DRIVER=4.21.29
SQLState: 02000
ErrorCode: 100
I figured it out by using SYSIBM.SYSDUMMY1. The problem with the first attempt is that when the target row does not exist, the USING subquery (on the same table) returns no rows, so neither the MATCHED nor the NOT MATCHED branch fires (hence SQLCODE 100). Selecting from SYSIBM.SYSDUMMY1 guarantees exactly one source row:
MERGE INTO SQLDBA.V_OPONAROC AS O1
USING (SELECT 1 AS IDESTE, 2 AS IDEPOZ, 3 AS OPONAR FROM SYSIBM.SYSDUMMY1) AS O2
ON (O1.IDESTE = 123456 AND O1.IDEPOZ = 0)
WHEN MATCHED THEN
    UPDATE SET O1.OPONAR = 'test text'
WHEN NOT MATCHED THEN
    INSERT (O1.IDESTE, O1.IDEPOZ, O1.OPONAR)
    VALUES (123456, 0, 'test new text')

Spark dataframe transform in time window

I have two dataframes. [AllAccounts] contains an audit of all accounts for all users:
UserId, AccountId, Balance, CreatedOn
1, acc1, 200.01, 2016-12-06T17:09:36.123-05:00
1, acc2, 189.00, 2016-12-06T17:09:38.123-05:00
1, acc1, 700.01, 2016-12-07T17:09:36.123-05:00
1, acc2, 189.00, 2016-12-07T17:09:38.123-05:00
1, acc3, 010.01, 2016-12-07T17:09:39.123-05:00
1, acc1, 900.01, 2016-12-08T17:09:36.123-05:00
[ActiveAccounts] contains an audit of only the active account (zero or one) for each user:
UserId, AccountId, Balance, CreatedOn
1, acc2, 189.00, 2016-12-06T17:09:38.123-05:00
1, acc3, 010.01, 2016-12-07T17:09:39.123-05:00
I want to transform these into a single DF of this format:
UserId, AccountId, Balance, CreatedOn, IsActive
1, acc1, 200.01, 2016-12-06T17:09:36.123-05:00, false
1, acc2, 189.00, 2016-12-06T17:09:38.123-05:00, true
1, acc1, 700.01, 2016-12-07T17:09:36.123-05:00, false
1, acc2, 189.00, 2016-12-07T17:09:38.123-05:00, true
1, acc3, 010.01, 2016-12-07T17:09:39.123-05:00, true
1, acc1, 900.01, 2016-12-08T17:09:36.123-05:00, false
So based on the accounts in ActiveAccounts, I need to flag the rows in the first DF appropriately. In the example, acc2 for UserId 1 was marked active at 2016-12-06T17:09:38.123-05:00 and acc3 at 2016-12-07T17:09:39.123-05:00, so between those two timestamps acc2 is marked true, and from 2016-12-07T17:09:39 onwards acc3 is marked true.
What would be an efficient way to do this?
If I understand properly, the account (1, acc2) is active between its activation time and that of (1, acc3).
We can do this in a few steps:
create a data frame with the start/end times for each account
join with AllAccounts
flag the rows of the resulting dataframe
I haven't tested this, so there may be syntax mistakes.
To accomplish the first task, we need to partition the dataframe by user and then look at the next creation time. This calls for a window function:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lead

val window = Window.partitionBy("UserId").orderBy("StartTime")
val activeTimes = ActiveAccounts.withColumnRenamed("CreatedOn", "StartTime")
  .withColumn("EndTime", lead("StartTime", 1) over window)
Note that the last EndTime for each user will be null. Now join:
val withActive = AllAccounts.join(activeTimes, Seq("UserId", "AccountId"))
(This should be a left join if you might be missing active times for some accounts.)
Then you have to go through and flag the accounts as active:
val withFlags = withActive.withColumn("isActive",
  $"CreatedOn" >= $"StartTime" &&
    ($"EndTime".isNull || ($"CreatedOn" < $"EndTime")))

What does the exclude_nodata_value argument to ST_DumpValues do?

Could anyone explain what the exclude_nodata_value argument to ST_DumpValues does?
For example, given the following:
WITH
-- Create a 4x4 raster with every value set to 8 and NODATA set to -99.
tbl_1 AS (
    SELECT
        ST_AddBand(
            ST_MakeEmptyRaster(4, 4, 0, 0, 1, -1, 0, 0, 4326),
            1, '32BF', 8, -99
        ) AS rast
),
-- Set the values in rows 1 and 2 to -99.
tbl_2 AS (
    SELECT
        ST_SetValues(
            rast, 1, 1, 1, 4, 2, -99, FALSE
        ) AS rast
    FROM tbl_1
)
Why does the following select statement return NULLs in the first two rows:
SELECT ST_DumpValues(rast, 1, TRUE) AS cell_values FROM tbl_2;
Like this:
{{NULL,NULL,NULL,NULL},{NULL,NULL,NULL,NULL},{8,8,8,8},{8,8,8,8}}
But the following select statement returns -99s?
SELECT ST_DumpValues(rast, 1, FALSE) AS cell_values FROM tbl_2;
Like this:
{{-99,-99,-99,-99},{-99,-99,-99,-99},{8,8,8,8},{8,8,8,8}}
Clearly, with both statements the first two rows really contain -99s. However, in the first case (exclude_nodata_value=TRUE) these values have been masked (but not replaced) by NULLs.
Thanks for any help. The subtle differences between NULL and NODATA within PostGIS have been driving me crazy for several days.
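One way to convince yourself that the flag only masks (and never rewrites) the stored pixels is to compare the band's NODATA value with a single cell read with and without the flag. A small psycopg2 sketch reusing the raster from the question (the connection string is a placeholder):

import psycopg2

# Row 1-2 pixels really hold -99, which equals the band's NODATA value.
SQL = """
WITH tbl_1 AS (
    SELECT ST_AddBand(
        ST_MakeEmptyRaster(4, 4, 0, 0, 1, -1, 0, 0, 4326),
        1, '32BF', 8, -99
    ) AS rast
),
tbl_2 AS (
    SELECT ST_SetValues(rast, 1, 1, 1, 4, 2, -99, FALSE) AS rast FROM tbl_1
)
SELECT
    ST_BandNoDataValue(rast, 1) AS nodata,         -- -99: declared NODATA
    ST_Value(rast, 1, 1, 1, FALSE) AS raw_value,   -- -99: pixel as stored
    ST_Value(rast, 1, 1, 1, TRUE)  AS masked_value -- NULL: same pixel, masked
FROM tbl_2;
"""

with psycopg2.connect("dbname=gisdb") as conn, conn.cursor() as cur:
    cur.execute(SQL)
    print(cur.fetchone())  # expected: (-99.0, -99.0, None)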