Pattern for updating a recursive linear tree? - mongodb

Let's say you have a factory where you put together different things. There are requests for those things, and you want to monitor what's needed to construct each thing.
For example, for a simplified car you can have a request for:
car 2 (1)
    chassis 2 (1)
    wheels 8 (4)
        tyres 8 (1)
        rims 8 (1)
    motor 2 (1)
The numbers next to the parts indicate the amounts needed in real time, the numbers in parentheses indicate the amounts needed to construct one parent, and the indentation shows the tree structure. The children of a specific part show how much of what is needed to construct the parent.
At any time a wheel could become available in the inventory; that would update the amount of wheels needed to 7, which in turn would update the amounts of tyres and rims needed to 7 each.
Similarly a whole car could become available, reducing chassis to 1, motor to 1, and wheels from 7 to 3.
It may seem like a simple problem, but I've spent months on it by now trying to figure out a safe way to do it.
The inventories are tracked, and each inventory has different properties, such as when it was created, which item it holds, and how much is available. Inventories can also be dedicated to a specific request.
When a new "shipment" comes in, it contains new inventories. When new inventories come in, a check runs to see whether any request needs that inventory.
Once an inventory is dedicated to a request, the request's amount needed updates, and all the children's amounts needed are updated as well.
When an inventory is dedicated to a request, a new inventory is created with the dedicated amount and the same properties, except that it is dedicated to a request. The original inventory's amount is decreased by the amount used by the request.
There are a lot of possible problems with this.
Let's start with the main problem. Multiple inventories can come in in parallel, trying to dedicate themselves to the same request. A recursive function runs which needs to update all the children in the subtree of the request. The parent request is read, given the amount it received from inventory, and the children are updated.
To understand:
1. one shipment of `car` comes in
2. check whether any request needs `car`
3. assign general inventory of `1 car` as dedicated inventory to the request
4. the `car` request's amount needed is reduced by `1`
5. the `car` request reads its children, and for each child:
5.1. read the available inventory for the child request
5.2. update the child request's amount needed with `parentRequest.amountNeeded * childRequest.amountNeededPerParent - childRequestAvailableInventory`
5.3. run step 5 recursively for the child's children
So every request has a field that shows how much inventory is needed to construct the parent request. The formula for it is `parentRequest.amountNeeded * request.amountNeededPerParent - requestAvailableInventory`.
At any given point any request can get inventory, and if that happens, the tree below the request must be updated, cascading down and updating each amount needed.
First issue:
Between reading the children and reading a child's available inventory, the child request may get updated.
Second issue:
Between reading a child's available inventory and updating the child's amount needed, both the child request and its available inventory can change.
Third issue:
I'm using MongoDB, and cannot update the request's amount needed and create the dedicated inventory at the exact same time. So it's not guaranteed that the request's amount needed value will stay in sync with the request's dedicated inventory amount.
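(Multi-document transactions, available on replica sets since MongoDB 4.0, can at least tie the two writes together, though they don't fix the read-then-write races above. A minimal Mongoose sketch; `InventoryModel`, its fields, and `dedicateInventory` are illustrative names of mine, not the real schema:)

import mongoose from "mongoose";

const dedicateInventory = async (
  request: Request & Document,
  inventory: Inventory & Document,
  amount: number,
) => {
  const session = await mongoose.startSession();
  try {
    await session.withTransaction(async () => {
      // Create the dedicated inventory copy for the request...
      await InventoryModel.create(
        [{ ...inventory.toObject(), _id: undefined, requestId: request._id, amount }],
        { session },
      );
      // ...and decrease the original inventory and the request's
      // amount needed atomically with it.
      await InventoryModel.updateOne(
        { _id: inventory._id },
        { $inc: { amount: -amount } },
        { session },
      );
      await RequestModel.updateOne(
        { _id: request._id },
        { $inc: { amountNeeded: -amount } },
        { session },
      );
    });
  } finally {
    session.endSession();
  }
};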
Draft function:
const updateChildRequestsAmountNeeded = async (
  parentRequest: Request & Document,
) => {
  // Read all direct children of the request.
  const childRequests = await RequestModel.find({
    parentRequestId: parentRequest._id,
  }).exec();
  return Promise.all(
    childRequests.map(async (childRequest) => {
      // First issue: the child may change between the find above and this read.
      const availableInventory = await getAvailableInventory({
        requestId: childRequest._id,
      });
      const amountNeeded =
        parentRequest.amountNeeded * childRequest.amountNeededPerParent -
        availableInventory;
      // Second issue: this save can overwrite a newer value written in between.
      childRequest.set({ amountNeeded });
      await childRequest.save();
      await updateChildRequestsAmountNeeded(childRequest);
    }),
  );
};
Here are examples of how it can go wrong:
initial state for each case:
A amountNeeded: 5
B amountNeeded: 5 (amountNeededPerParent: 1)
A available: 0
B available: 0
1. Parent amount needed decreases (A1 and A2 are the same request; the number indicates which of two parallel processes is acting):
    1. A1 gets inventory (1)
    1. A2 gets inventory (2)
    2. A1 amount needed updated (4)
    2. A2 amount needed updated (2)
    3. A2's children read (B2) (needed 5)
    3. A1's children read (B1) (needed 5)
    6. B2 amount needed updated (to 2)
    6. B1 amount needed updated (to 4)
2. Request gets inventory while updating:
    1. A gets inventory (1)
    2. A amount needed updated (4)
    3. A's children read (B)
    4. B available inventory read (0)
    5. B gets inventory (1)
    6. B amount needed updated (4)
    7. B amount needed updated (4) (should be 3)
I've tried to find a way to solve this and never overwrite amount needed with outdated data, but couldn't find one. Maybe MongoDB is the wrong approach, or the whole data structure is, or maybe there is a pattern for updating recursive data atomically.
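For illustration, the standard optimistic-locking shape would look something like this, assuming a `version` field is added to the request schema; whether it composes cleanly with the recursive cascade is exactly what I'm unsure about:

const updateChildWithVersionCheck = async (
  parentRequest: Request & Document,
  childRequest: Request & Document,
): Promise<boolean> => {
  const availableInventory = await getAvailableInventory({
    requestId: childRequest._id,
  });
  const amountNeeded =
    parentRequest.amountNeeded * childRequest.amountNeededPerParent -
    availableInventory;

  // Apply the write only if nobody touched the child since it was read;
  // a null result means the race was lost, and the caller must re-read
  // the child (and its available inventory) and try again.
  const updated = await RequestModel.findOneAndUpdate(
    { _id: childRequest._id, version: childRequest.version },
    { $set: { amountNeeded }, $inc: { version: 1 } },
    { new: true },
  ).exec();

  return updated !== null;
};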

Related

Send an Alert to user based on changes in the database row

There are 10,000 users. Each can define up to 500 conditions for an enterprise supply chain inventory.
An example of a condition could be:
Group1
    Item in InventoryX > 5000 AND colourItem == Red
AND Group2
    Item in InventoryY > 4000 AND colourItem == Green
Whenever the state of the database (a single row in the InventoryX, InventoryY, and colourItem columns) meets the condition mentioned above, the user who created the alert should be notified.
The first solution that comes to mind is to continuously poll the database at a given time interval (say 1 minute), but the problem with that is there would be 10,000 × 500 polls every minute.
This is difficult to scale.
We also need to keep in mind that the users are given a simple front-end to create conditions, and they can update these conditions at their whim. No hard-coding can work.
What would be a better architecture to achieve this?
Database = PostgreSQL.
I've looked at LISTEN/NOTIFY (https://www.postgresql.org/docs/current/sql-notify.html), but it appears difficult to implement since there is no easy way to manage so many conditions with it.
You only have three options:
Poll the database for changes (which as you say, gets expensive).
Check all the rules as the changes are made.
Make a note of changed data as changes are made and check that changed subset in batches.
Whether you prefer #2 or #3 depends on how many rules there are, how long it takes to check them for each changed row and whether you can usefully summarise changes or merge alerts.
Both #2 and #3 would use one or more triggers. Option #2 would run the rules and add entries to an "alerts" queueing table (or NOTIFY a listening process to send an alert, or set a flag in memcached/redis etc).
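For the NOTIFY variant, the listening process could look something like this minimal node-postgres sketch; the channel name and the JSON payload shape are assumptions, to be set by your trigger:

import { Client } from "pg";

async function listenForAlerts(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  client.on("notification", (msg) => {
    if (msg.channel !== "alerts" || !msg.payload) return;
    // Assumed payload, e.g. built by the trigger with row_to_json().
    const alert = JSON.parse(msg.payload);
    // Hand off to whatever actually sends the notification.
    console.log(`alert user ${alert.userId} for condition ${alert.conditionId}`);
  });

  // Subscribe to the (hypothetical) "alerts" channel.
  await client.query("LISTEN alerts");
}

listenForAlerts().catch(console.error);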
Option #3 would just note either the IDs of changed rows or perhaps the details of the change, and you would have another process read the changes and generate alerts. This gives you the opportunity to notice that the same change was made twice and send only one alert, if that is useful to you.
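A sketch of option #3's consumer, assuming a hypothetical changed_rows(id, table_name, row_id, changed_at) table that a trigger appends to:

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function drainChanges(batchSize = 1000): Promise<void> {
  // Claim and remove a batch in a single statement so two workers
  // don't process the same change twice.
  const { rows } = await pool.query(
    `DELETE FROM changed_rows
     WHERE id IN (SELECT id FROM changed_rows ORDER BY changed_at LIMIT $1)
     RETURNING table_name, row_id`,
    [batchSize],
  );

  // Duplicate changes to the same row collapse into one rule check here.
  const distinct = new Set(rows.map((r) => `${r.table_name}:${r.row_id}`));
  for (const key of distinct) {
    // Run only the user conditions that reference this row (left out).
    console.log(`re-check rules for ${key}`);
  }
}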

NetSuite Assembly Build error, "The total inventory detail quantity must be...", what is causing this?

Attempting to declare production of an assembly using the assembly build. As far as I can tell we have enough item inventory to assign the lots-bins-item-quantities to everything in the BOM. Cannot declare. I usually get an error about a particular item, and have seen that item be different on different attempts. I have looked at that item and it appears we have enough. I've even decremented the BOM blow-out to a lower quantity to assign inventory detail that matches.
I do notice that the "Buildable" quantity populates as about half a unit less than the build quantity I enter. This matches the "buildable" quantity shown on the WO. I don't know if these two issues are related. I can't tell how it's coming up with the buildable number, or whether the constraint comes from matching the WO quantity.
Thank you.
You don't indicate whether this is via SuiteScript or via the user interface, so I'll assume it is the client interface.
The Buildable figure is based on the minimum quantity of each component committed to the work order (if commitment is enabled). Given that you are seeing .5 less than the quantity required, this item is buildable to a degree but requires more component stock.
The message you are seeing on the assembly build can be because:
The quantity entered on one or more components doesn't match the bin location quantities, e.g. 7 are required but only 5 have been provided in bins; and/or
One or more items are serialised or lot controlled, and the inventory detail has not been completed to include this information, e.g. 5 are required, but lot numbers XX and YY are used to a quantity of 4; and/or
The item being built is serialised or lot controlled, and the lot/serial reference of the built assembly has not been entered, e.g. 5 are being built and serial numbers 123, 124, 125, 126 & 127 should be provided to create the serialised assembly.
See SuiteAnswer 28169 and SuiteAnswer 28170 for information on creating a build of serial & lot assemblies.

Odoo 10 - Cancelled stock pickings can not be deleted, why?

Why can cancelled stock pickings not be deleted in certain cases?
Specifically, I get the message that the item cannot be deleted as it has a reference to: [Packing Operation - stock.pack.operation]
When can a cancelled stock picking be deleted, and when can it not be?
@forvas gives a good explanation of the problem, but you don't need to resort to psql to resolve this (although you can).
Cancelling the picking only cancels the moves (Initial Demand tab). You can't delete the picking if it still has operation lines. You'll most likely need to Mark As Todo so that you can see the Operations tab and delete each line. At that point you can delete the entire picking.
If you get the message [object with reference: Packing Operation - stock.pack.operation], it means that the picking reached at least the Available state (it could also have been in the Done state). When the picking is in the Available state, operations and stock move operation links are generated. If the picking is in the Done state, quants for the moves are also generated.
In your case, as you were able to cancel the picking through the interface, it didn't get to the Done state, so quants weren't generated yet. You can therefore execute the following queries in PostgreSQL.
Imagine that your picking has the ID 88:
DELETE FROM stock_move_operation_link WHERE operation_id IN (SELECT id FROM stock_pack_operation WHERE picking_id=88);
DELETE FROM stock_pack_operation WHERE picking_id=88;
DELETE FROM stock_move WHERE picking_id=88;
DELETE FROM stock_picking WHERE id=88;
What is stock_move_operation_link used for?
When you create a picking, for example, with three lines:
Product A (3 units)
Product A (7 units)
Product B (6 units)
And then you mark it as to do, operations are generated this way (if you don't specify any lot):
Product A (10 units)
Product B (6 units)
So in stock_move_operation_link you'll be able to see, among other data, which moves belong to each operation.
I think returning the picking is better than manually deleting it from the back end, because some quant-related transactions are a bit difficult to remove from the back end.

How does paging work if the DB is manipulated between pages?

The code below works as intended, but is there a better way to do it?
I am consuming a DB like a queue, processing in batches up to a maximum size. I'm thinking about how I can refactor it to use page.hasNext() and page.nextPageable().
However, I can't find any good tutorial/documentation on what happens if the DB is manipulated between getting a page and getting the next page.
List<Customer> toBeProcessedList = customerToBeProcessedRepo
    .findFirstXAsCustomer(new PageRequest(0, MAX_NR_TO_PROCESS));
while (!toBeProcessedList.isEmpty()) {
    // do something with each customer and
    // remove customer, and its duplicates, from customersToBeProcessed
    toBeProcessedList = customerToBeProcessedRepo
        .findFirstXAsCustomer(new PageRequest(0, MAX_NR_TO_PROCESS));
}
If you use the paging support, a new SQL statement gets executed for each page requested, and unless you do something fancy (and probably stupid) they get executed in different transactions. This can lead to getting elements multiple times or not seeing them at all when the user moves from page to page.
Example: page size 3; elements to start: A, B, C, D, E, F.
The user opens the first page and sees A, B, C (total number of pages is 2).
Element X gets inserted after B; the user moves to the next page and sees C, D, E (total number of pages is now 3).
If, instead of adding X, C gets deleted, page 2 will show E, F, since D has moved to the first page.
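The shift is easy to reproduce in a few lines (TypeScript purely for illustration; the effect is the same in any store):

const pageOf = <T>(items: T[], page: number, size: number): T[] =>
  items.slice(page * size, (page + 1) * size);

let items = ["A", "B", "C", "D", "E", "F"];
console.log(pageOf(items, 0, 3)); // [ 'A', 'B', 'C' ]

// C gets deleted before the user requests page 2...
items = items.filter((x) => x !== "C");

// ...so D has shifted onto page 1 and the user never sees it.
console.log(pageOf(items, 1, 3)); // [ 'E', 'F' ]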
In theory one could have a long running transaction with read stability (if supported by the underlying database) so one gets consistent pages, BUT this opens up questions like:
When does this transaction end, so that the user gets to see new/changed data?
When does this transaction end when the user moves away?
This approach would have rather high resource costs, while the actual benefit is not at all clear.
So in 99 of 100 cases the default approach is pretty reasonable.
Footnote: I kind of assumed relational databases, but other stores should behave in basically the same way.

How to get Goal Funnel Step data such as "entered" and "proceeded" through Query API?

When looking at the Goal Funnel report on the Google Analytics website, I can see not only the number of goal starts and completions but also how many visits each step gets.
How can I find the step data through the Google Analytics API?
I am testing with the Query Explorer on a goal with 3 steps, the 1st of which is marked as Required.
I was able to get the starts and completions using goalXXStarts and goalXXCompletions:
https://www.googleapis.com/analytics/v3/data/ga?ids=ga%3A90593258&start-date=2015-09-12&end-date=2015-10-12&metrics=ga%3Agoal7Starts%2Cga%3Agoal7Completions
However, I can't figure out a way to get the goal's second-step data.
I tried using ga:users or ga:uniquePageViews with the URL of step 2 and previousPagePath as step 1 (required = true), and adding to that the ga:users or ga:uniquePageViews from the next stage with ga:previousPagePath of step 1 (since it's required=true) for backfill.
I also tried other combinations, but could never reach the right number or anything close to it.
One technique that can be used to perform conversion funnel analysis with the Google Analytics Core Reporting API is to define a segment for each step in the funnel. If the first step of the funnel is a 'required' step, then that step must also be included in segments for each of the subsequent steps.
For example, if your funnel has three steps named A, B, and C, then you will need to define a segment for A, another for B, and another again for C.
If step A is required then:
Segment 1: viewed page A,
Segment 2: viewed page A and viewed page B,
Segment 3: viewed page A and viewed page C.
Otherwise, if step A is NOT required then:
Segment 1: viewed page A,
Segment 2: viewed page B,
Segment 3: viewed page C.
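For illustration, the three required-step queries could be built as dynamic segments along these lines (a sketch: the page paths are placeholders, the view ID and dates are taken from the query above, and the exact segment syntax should be verified against the Core Reporting API segments reference):

const BASE = "https://www.googleapis.com/analytics/v3/data/ga";

// Step A is required, so it appears in every segment.
const segments = [
  "sessions::condition::ga:pagePath==/step-a",
  "sessions::condition::ga:pagePath==/step-a;condition::ga:pagePath==/step-b",
  "sessions::condition::ga:pagePath==/step-a;condition::ga:pagePath==/step-c",
];

const urls = segments.map((segment) =>
  `${BASE}?` +
  new URLSearchParams({
    ids: "ga:90593258",
    "start-date": "2015-09-12",
    "end-date": "2015-10-12",
    metrics: "ga:sessions",
    segment,
  }).toString(),
);

urls.forEach((url) => console.log(url));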
To obtain the count for each step in the funnel, you perform a query against each segment to obtain the number of sessions that match it. Additionally, you can query the previous and next pages, including entrances and exits, for each step if you need to; in that case, query previousPagePath and pagePath as dimensions along with the metrics uniquePageviews, entrances and exits. Keep in mind the difference between 'hit-level' and 'session-level' data when constructing each query and interpreting its results.
You can also achieve similar results by using sequential segmentation which will offer you finer control over how the funnel steps are counted, as well as allowing for non-sequential funnel analysis if required.
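For instance, a sequential segment for the same three steps might look like `sessions::sequence::ga:pagePath==/step-a;->>ga:pagePath==/step-b;->>ga:pagePath==/step-c`, where `;->>` means "followed by"; again, verify the operator syntax against the segments reference.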