Odoo 10 - Cancelled stock pickings can not be deleted, why? - postgresql

Why can cancelled stock pickings not be deleted in certain cases?
Specifically, I get a message saying that the item cannot be deleted because it is referenced by: Packing Operation - stock.pack.operation
When can a cancelled stock picking be deleted, and when can it not?

#forvas gives a good explanation of the problem but you don't need to resort to psql to resolve this (although you can).
Cancelling the picking only cancels the Moves (the Initial Demand tab). You can't delete the picking if it still has Operation lines. You'll most likely need to Mark As Todo so that you can see the Operations tab and delete each line; at that point you can delete the entire picking.

If you get the message [object with reference: Packing Operation - stock.pack.operation], it means that the picking reached at least the Available state (it could also have reached the Done state). When the picking is in the Available state, operations and stock move operation links are generated. If the picking is in the Done state, quants for the moves are also generated.
In your case, as you were able to cancel the picking through the interface, it means that it didn't get to Done state, so quants weren't generated yet. So you can execute the following queries in PostgreSQL:
Imagine that your picking has the ID 88:
DELETE FROM stock_move_operation_link WHERE operation_id IN (SELECT id FROM stock_pack_operation WHERE picking_id=88);
DELETE FROM stock_pack_operation WHERE picking_id=88;
DELETE FROM stock_move WHERE picking_id=88;
DELETE FROM stock_picking WHERE id=88;
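If you want to double-check the effect before making it permanent, you can wrap the same statements in a transaction and roll back if anything looks wrong (a sketch, using the same picking ID 88 as above):
BEGIN;
DELETE FROM stock_move_operation_link WHERE operation_id IN (SELECT id FROM stock_pack_operation WHERE picking_id=88);
DELETE FROM stock_pack_operation WHERE picking_id=88;
DELETE FROM stock_move WHERE picking_id=88;
DELETE FROM stock_picking WHERE id=88;
-- inspect the result here, then:
COMMIT; -- or ROLLBACK;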
What is stock_move_operation_link used for?
When you create a picking, for example, with three lines:
Product A (3 units)
Product A (7 units)
Product B (6 units)
When you then mark it as To Do, operations are generated this way (if you don't specify any lot):
Product A (10 units)
Product B (6 units)
So in stock_move_operation_link you'll be able to see, among other data, which moves belong to each operation.
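If you want to inspect those links for a given picking, a query along these lines works (a sketch reusing picking ID 88 from above; the qty column name is an assumption):
SELECT link.move_id, link.operation_id, link.qty
FROM stock_move_operation_link link
JOIN stock_pack_operation op ON op.id = link.operation_id
WHERE op.picking_id = 88;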

I think returning the same picking is better than manually deleting it from the back end, because some of the quant-related records are a little difficult to remove on the back-end side.

Related

Race condition in amplify datastore

When updating an object, how can I handle race condition?
final object = await Amplify.DataStore.query(Object.classType, where: Object.ID.eq('aa'));
Amplify.DataStore.save(object.copyWith(count: object.count + 1));
user A : execute first statement
user B : execute first statement
user A : execute second statement
user B : execute second statement
=> the count ends up incremented only once (+1) instead of twice
Apparently the way to resolve this is to either
1 - use conflict resolution, available from Datastore 0.5.0
One of your users (whichever is slower) gets back the rejected version plus the latest version from the server; you get both objects so you can resolve the discrepancy locally and retry the update.
2 - Use a custom resolver
here..
and check ADD expressions
You keep versions locally, and your VTL is configured to pass additive values to the pipeline instead of set values.
This nice article might also help to understand that
Neither really worked for me: one of my devices could be offline for days at a time, and I would need multiple updates to objects to be applied in order, not just the latest version of the local object.
What really confuses me is that there is no immediate way to just increment values and keep all of the incremented objects' updates in the outbox, instead of just the latest object, then apply them in order when the connection is re-established.
I basically wrote to a separate table to do just that and solve my problem, but of course with more tables and rows come more reads and writes, and therefore more expense.
Have a look at my attempts here; if you want the full code, let me know.
And then, I guess, hope for an update to Amplify that includes increment logic to update values atomically out of the box and avoid these common race conditions.
Here is some more context

Data syncing with pouchdb-based systems client-side: is there a workaround to the 'deleted' flag?

I'm planning on using rxdb + hasura/postgresql in the backend. I'm reading this rxdb page for example, which off the bat requires sync-able entities to have a deleted flag.
Q1 (main question)
Is there ANY point at which I can finally hard-delete these entities? What conditions would have to be met - e.g. could I simply use "older than X months" and then force my app to only ever display data from the last X months?
Is such a hard-delete, if possible, best carried out directly in the central db, since it will be the source of truth? Would there be any repercussions client-side that I'm not foreseeing/understanding?
I foresee the number of deleted rows growing rapidly in my app, and I don't want to have to store all this extra data forever.
Q2 (bonus / just curious)
What is the (algorithmic) basis for needing a 'deleted' flag? Is it that it's just faster to check a flag than to check for the omission of an object from, say, a very large list? I apologize if it's kind of a stupid question :(
Ultimately it comes down to a decision that's informed by your particular business/product with regards to how long you want to keep deleted entities in your system. For some applications it's important to always keep a history of deleted things or even individual revisions to records stored as a kind of ledger or history. You'll have to make a judgement call as to how long you want to keep your deleted entities.
I'd recommend that you also add a deleted_at column if you haven't already and then you could easily leverage something like Hasura's new Scheduled Triggers functionality to run a recurring job that fully deletes records older than whatever your threshold is.
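For example, the recurring job could run a statement along these lines (a sketch; the items table and the deleted/deleted_at columns are assumptions, and the 3-month threshold is just an illustration):
DELETE FROM items
WHERE deleted = true
  AND deleted_at < now() - interval '3 months';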
You could also leverage Hasura's permissions system to ensure that rows that have been soft-deleted aren't returned to the client. There is documentation, along with examples, for working with soft deletes in Hasura.
For your second question it is definitely much faster to check for the deleted flag on records than to have to try and diff the entire dataset looking for things that are now missing.

Handling multiple updates to a single db field

To give a bit of background to my issue, I've got a very basic banking system. The process at the moment goes:
A transaction is added to an Azure Service Bus
An Azure Webjob picks up this message and creates the new row in the SQL DB.
The balance (total) of the account needs to be updated with the value in the message (be it + or -).
So for example if the field is 10 and I get two updates (10, -5) the field needs to be 15 (10 + 10 - 5), it isn't a case of just updating the value, it needs to do some arithmetic.
Now I'm not too sure how to handle the update of the balance, as there could be many requests coming in, so it needs to be updated accordingly.
I figured one way is to do the update on the SQL side rather than the web job, but that doesn't help with concurrent updates.
Can I do some locking with the field? But what happens to an update when it is blocked because an update is already in progress? Does it wait or fail? If it waits then this should be OK. I'm using EF.
I figured another way round this is to have another WebJob that will run on a schedule and will add up all the amounts and update the value once, and so this will be the only thing touching that field.
Thanks
One way or another, you will need to serialize write access to account balance field (actually to the whole row).
Having a separate job that picks up "pending" inserts and eventually updates the balance will be OK if writes are more frequent than reads on your system, or if you don't always have to return the most recent balance. Otherwise, to get the current balance you will need to do something like
SELECT balance +
       ISNULL((SELECT SUM(transaction_amount)
               FROM pending_insert pi
               WHERE pi.user_id = ac.user_id), 0) AS actual_balance
FROM account ac
WHERE ac.user_id = :user_id
That is definitely more expensive from a performance perspective, but for some systems it's perfectly fine. Another pitfall (again, it may or may not be relevant to your case) is enforcing, for instance, a non-negative balance.
Alternatively, you can consistently handle banking transactions in the following way (a SQL sketch follows the list):
begin database transaction
find and lock the row in the account table
validate the total amount if needed
insert a record into banking_transaction
update the user's account, i.e. balance = balance + transaction_amount
commit/rollback
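A minimal T-SQL sketch of those steps (the account, banking_transaction, user_id and balance names are assumptions based on the description above):
BEGIN TRANSACTION;
-- find and lock the account row for the duration of the transaction
SELECT balance FROM account WITH (UPDLOCK, HOLDLOCK) WHERE user_id = @user_id;
-- validate the total amount here if needed, then record the movement
INSERT INTO banking_transaction (user_id, transaction_amount) VALUES (@user_id, @amount);
UPDATE account SET balance = balance + @amount WHERE user_id = @user_id;
COMMIT TRANSACTION; -- or ROLLBACK TRANSACTION if validation fails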
If multiple user accounts are involved, you have to always lock them in the same order to avoid deadlocks.
That approach is more robust, but potentially worse from a concurrency point of view (again, it depends on the nature of the updates in your application - here the worst case is many concurrent banking transactions for one user; updates to multiple users will go fine).
Finally, it's worth mentioning that since you are working with SQL Server, beware of deadlocks due to lock escalation. You may need to implement some retry logic in any case.
You would want to use a parameter substitution method in your SQL. You would need to find out how to do that based on the programming language you are using in your web job.
$updateval = -5;
UPDATE dbtable SET myvalue = myvalue + $updateval;
code example:
int qn = int.Parse(TextBox3.Text);
SqlCommand cmd1 = new SqlCommand("update product set group1 = group1 + @qn where productname = @productname", con);
cmd1.Parameters.Add(new SqlParameter("@productname", TextBox1.Text));
cmd1.Parameters.Add(new SqlParameter("@qn", qn));
cmd1.ExecuteNonQuery();

Could I save Postgres transaction and continue work with db within it later

I know about prepared transactions in Postgres, but it seems you can only commit or roll them back later. You cannot even view the transaction's DB state before you've committed it. Is there any way to save a transaction for later use?
What I actually want to achieve is a preview (and correction) of some changes to the DB (the changes are imports from a CSV file, so the user needs to see a preview before applying them). I want to make changes, add some more changes later, see the full state of the DB, and then apply it (that is, commit the transaction).
I cannot find a very good reference in docs, but I have a very strong feeling that the answer is: No, you cannot do that.
It would mean that when you "save" the transaction, the database would basically have to keep all of its locks in place for an indefinite amount of time. Even if it were possible, it would mean horrible failure modes and trouble on all fronts.
For the pattern that you are describing, I would use two separate transactions. Import into a staging table and show that to the user (or import into the main table but mark the rows as "unapproved"). If the user approves, move or update those rows in another transaction.
You can always end up in a situation where the user simply leaves or crashes without clicking "OK" or "Cancel". If what you're describing were possible, you would end up with a hung transaction holding all these resources. In my proposed solution you end up with wasteful rows in the "staging" table that you may still show to the user later or remove.
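A rough sketch of that two-transaction approach (the items_staging/items table names and the approved flag are assumptions; COPY assumes the file is readable on the server, psql's \copy is the client-side equivalent):
-- transaction 1: load the CSV into a staging table and let the user review/correct the rows
BEGIN;
COPY items_staging (name, price) FROM '/path/to/import.csv' WITH (FORMAT csv, HEADER true);
COMMIT;
-- transaction 2: once the user clicks "OK", move the approved rows into the real table
BEGIN;
INSERT INTO items (name, price) SELECT name, price FROM items_staging WHERE approved;
DELETE FROM items_staging WHERE approved;
COMMIT;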
You may want to read up on the saga pattern (persistent sagas). This is actually a very simple example of a well-known and well-researched problem.
To make the long story short, this pattern breaks down a long-running process like yours into smaller operations that are applied and persisted in some way in separate transactions. If any of them happens to fail (or does not occur as expected), you have compensating actions that usually undo what the steps executed so far have done (e.g. by throwing away stale/irrelevant data).
Here's a decent introduction:
https://blog.couchbase.com/saga-pattern-implement-business-transactions-using-microservices-part/
http://vasters.com/clemensv/2012/09/01/Sagas.aspx
This concept was formally introduced in the 80s, but is well alive and relevant today.

Last Updated Date: Antipattern?

I keep seeing questions floating through that make reference to a column in a database table named something like DateLastUpdated. I don't get it.
The only companion field I've ever seen is LastUpdateUserId or such. There's never an indicator about why the update took place; or even what the update was.
On top of that, this field is sometimes written from within a trigger, where even less context is available.
It certainly doesn't even come close to being an audit trail, so that can't be the justification. And if there is an audit trail somewhere in a log or whatever, this field would be redundant.
What am I missing? Why is this pattern so popular?
Such a field can be used to detect whether there are conflicting edits made by different processes. When you retrieve a record from the database, you get the previous DateLastUpdated field. After making changes to other fields, you submit the record back to the database layer. The database layer checks that the DateLastUpdated you submit matches the one still in the database. If it matches, then the update is performed (and DateLastUpdated is updated to the current time). However, if it does not match, then some other process has changed the record in the meantime and the current update can be aborted.
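In SQL, that check typically looks something like this (a sketch; the widget table and the @... parameters are assumptions):
UPDATE widget
SET name = @new_name,
    DateLastUpdated = CURRENT_TIMESTAMP
WHERE id = @id
  AND DateLastUpdated = @date_last_updated_when_read;
-- if no row was affected, someone else changed the record in the meantime,
-- so abort the edit or re-read and retry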
It depends on the exact circumstance, but a timestamp like that can be very useful for autogenerated data - you can figure out whether something needs to be recalculated if a dependency has changed later on (this is how build systems work out which files need to be recompiled).
Also, many websites will have data marking "Last changed" on a page, particularly news sites that may edit content. The exact reason isn't necessary (and there likely exist backups in case an audit trail is really necessary), but this data needs to be visible to the end user.
These sorts of things are typically used for business applications where user action is required to initiate the update. Typically, there will be some kind of business app (eg a CRM desktop application) and for most updates there tends to be only one way of making the update.
If you're looking at address data, that was done through the "Maintain Address" screen, etc.
Such database auditing is there to augment business-level auditing, not to replace it. Call centres will sometimes (or always in the case of financial services providers in Australia, as one example) record phone calls. That's part of the audit trail too but doesn't tend to be part of the IT solution as far as the desktop application (and related infrastructure) goes, although that is by no means a hard and fast rule.
Call centre staff will also typically have some sort of "Notes" or "Log" functionality where they can type freeform text as to why the customer called and what action was taken so the next operator can pick up where they left off when the customer rings back.
Triggers will often be used to record exactly what was changed (eg writing the old record to an audit table). The purpose of all this is that with all the information (the notes, recorded call, database audit trail and logs) the previous state of the data can be reconstructed as can the resulting action. This may be to find/resolve bugs in the system or simply as a conflict resolution process with the customer.
It is certainly popular - Rails, for example, has a shorthand for it, as well as for a creation timestamp (:timestamps).
At the application level it's very useful, as the same pattern is very common in views - look at the questions here for example (answered 56 secs ago, etc).
It can also be used retrospectively in reporting to generate stats (e.g. what is the growth curve of the number of records in the DB).
There are a couple of scenarios.
Let's say you have an address table for your customers.
You have your CRM app, and a customer calls to say his address changed a month ago; with the LastUpdate column you can see that this row for this customer hasn't been touched in 4 months.
Usually you use triggers to populate a history table so that you can see all the other history; if you see that the creation date and the updated date are the same, there is no point hitting the history table since you won't find anything.
You calculate indexes (stock market); you can easily see that an index was recalculated just by looking at this column.
There are 2 DB servers; by comparing the date column you can find out whether all the changes have been replicated, etc.
This is also very useful if you have to send out delta feeds to clients, that is, feeds where only the records that have been changed or inserted since the date of the last feed are sent.
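Such a delta feed then boils down to a query along these lines (a sketch; the customer table, LastUpdate column and @last_feed_date parameter are assumptions):
SELECT *
FROM customer
WHERE LastUpdate > @last_feed_date;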