I wrote a trigger that updates another row in the same table.
When I update the first row, my trigger tries to update all the relevant rows, and I get the error "stack depth limit exceeded".
Increasing this limit is not a solution, since my table will keep growing.
But I never need the update to cascade back to the first rows, so the best solution would be to limit the maximum number of "loops" the trigger can do.
I don't know how to implement a "loop counter", and I can't find any way to retrieve the actual stack usage, which would be another solution.
Any ideas regarding this problem?
You can use pg_trigger_depth(), but I strongly suggest not to, and to review your DB design instead.
For documentation about the function see: https://www.postgresql.org/docs/current/functions-info.html
I quote:
" pg_trigger_depth () → integer
Returns the current nesting level of PostgreSQL triggers (0 if not
called, directly or indirectly, from inside a trigger)."
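As a recursion guard it would look something like this minimal sketch (the table and column names here are made up for illustration; compare against a larger value if you want to allow more nesting levels):

CREATE OR REPLACE FUNCTION propagate_update() RETURNS trigger AS $$
BEGIN
    -- Already inside a nested trigger invocation: stop the cascade here.
    IF pg_trigger_depth() > 1 THEN
        RETURN NEW;
    END IF;
    -- This UPDATE fires the same trigger again, one level deeper.
    UPDATE accounts SET balance = NEW.balance WHERE parent_id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_propagate
    AFTER UPDATE ON accounts
    FOR EACH ROW EXECUTE FUNCTION propagate_update();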
Why is it bad? Rather than repeat what other people have written, I suggest reading the answer here: https://dba.stackexchange.com/questions/163142/is-pg-trigger-depth-bad-to-use-for-preventing-trigger-cascading-recursion
If we have 2 queues, we can simply use SelectOutPut: if (queue1.size() < queue2.size()) go to queue1, else go to queue2.
But what if we have 12 queues?
Using only if/else would be a nightmare. So what should our approach be?
Note:
Going through all the queues with a for loop could be the answer. If that's possible, then how?
I don't know the purpose of your conditions, i.e. whether you need the smaller size or the bigger, etc.
You can probably use a PriorityQueue; the complexity will be higher, but it will look cleaner in the code.
How do I use a PriorityQueue?
Each time you poll from the PQ, you can choose whether the smaller or the bigger (in size, I assume) comes first, and you insert the queue right back afterwards so it can be used again (if needed).
If you still want to do this with a loop, please give more details on what you are trying to achieve, since it's kind of hard to guess.
Good luck!
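A minimal Java sketch of that idea, assuming you simply want whichever queue currently holds the fewest items (the class and variable names are made up):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

public class ShortestQueueDemo {
    public static void main(String[] args) {
        // Twelve work queues; the element type is arbitrary here.
        List<Queue<String>> queues = new ArrayList<>();
        for (int i = 0; i < 12; i++) {
            queues.add(new ArrayDeque<>());
        }

        // Order the queues by current size, smallest first.
        PriorityQueue<Queue<String>> bySize =
                new PriorityQueue<>(Comparator.comparingInt((Queue<String> q) -> q.size()));
        bySize.addAll(queues);

        // Route one item: take the shortest queue, add to it, then
        // re-insert it so its new size is accounted for next time.
        Queue<String> shortest = bySize.poll();
        shortest.add("job-1");
        bySize.add(shortest);
    }
}

Note that a PriorityQueue does not re-sort elements whose ordering key changes while they sit inside it, which is why the queue is polled and re-inserted around every change.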
You can add all the queues into a collection and then do this:
Queue queue = top(collection, q -> -q.size());
and route the item to queue.
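If you would rather do the selection with a plain loop, as the question's note suggests, a linear scan works without any extra structure (a sketch assuming the same List<Queue<String>> shape as in the example above):

// Scan all queues and return the one with the fewest items.
static Queue<String> shortestQueue(List<Queue<String>> queues) {
    Queue<String> best = queues.get(0);
    for (Queue<String> q : queues) {
        if (q.size() < best.size()) {
            best = q;
        }
    }
    return best;
}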
This is a scalability-related question.
We want to read some rows from a table and, after processing some of them, stop the query. The stop criterion is data dependent (we do not know in advance how many rows, or which rows, we are interested in).
This is scalability-sensitive when the number of rows in the table grows far beyond the number of rows we are really interested in.
If we use the standard PQexec, all rows are returned and we are forced to consume them (we have to call PQgetResult until it returns NULL). So this does not scale.
We are now trying "row by row" reading.
We first used PQsendQuery and PQsetSingleRowMode. However, we still have to call PQgetResult until it returns NULL.
Our latest approach is PQsendQuery plus PQsetSingleRowMode, and when we are done we cancel the query as follows:
void CloseRowByRow() {
    /* Ask the server to abandon the running query. */
    PGcancel *c = PQgetCancel(conn);
    char errbuf[256];
    PQcancel(c, errbuf, sizeof errbuf);
    PQfreeCancel(c);
    /* Drain the results already in flight; PQgetResult()
       returns NULL once the query is completely finished. */
    while (res) {
        PQclear(res);
        res = PQgetResult(conn);
    }
}
This produces some performance benefits, but we are wondering if this is the best we can do.
So here comes the question: is there any other way?
Use DECLARE and FETCH to define and read from a server-side cursor; this is exactly what they are meant for. You would use the standard APIs; FETCH just lets you retrieve the results in batches of a controlled size. See the examples in the docs for more details.
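For instance, a minimal sketch of that pattern (error handling trimmed; the query, cursor name, and batch size of 100 are placeholders, and conn is an open connection):

#include <stdio.h>
#include <libpq-fe.h>

void read_with_cursor(PGconn *conn) {
    /* Cursors live inside a transaction block. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE cur CURSOR FOR SELECT * FROM t"));

    int done = 0;
    while (!done) {
        /* Fetch the next batch; an empty batch means end of data. */
        PGresult *res = PQexec(conn, "FETCH 100 FROM cur");
        int n = PQntuples(res);
        if (n == 0)
            done = 1;
        for (int i = 0; i < n && !done; i++) {
            /* process PQgetvalue(res, i, ...) and set done = 1
               when the data-dependent stop criterion is met */
        }
        PQclear(res);
    }

    /* Stopping early is cheap: just close the cursor. */
    PQclear(PQexec(conn, "CLOSE cur"));
    PQclear(PQexec(conn, "COMMIT"));
}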
I have a large spreadsheet: 700+ rows, each having references to the previous row. I use the reference functions ROW(), COLUMN(), INDIRECT(), and ADDRESS(). (Yes, I have considered fixing values every 50-100 rows to shorten the calculation trail.)
Until recently I used OpenOffice.org and it worked fine. LibreOffice, however, seems to give up after some rows when the file is opened, and further calculations become Error 522. Sometimes a change makes it recalculate everything, the errors disappear, and they don't reappear when I undo the change. I have also found Ctrl+Shift+F9 (force recalculation), which likewise makes the errors disappear.
Even though the file has been saved and re-saved by LibreOffice several times, it still reports a false Error 522 when I open it, so it doesn't seem to be a compatibility problem.
Is the problem that a very long, branched-out calculation trail makes the software think it will never get to the initial values and must therefore be circular? (Which my idea of fixing values would solve.) Or could there be something else I have missed?
UPDATE
I don't see how INDEX() would help. I want to refer to the cell immediately above, or to a cell in the row immediately above. Cell D46 could point to D45 or B45 or $A45, and that would work when copying a row, but not when inserting or deleting one: if you insert a row just above, the references pointing 1 row above would start pointing 2 rows above, so each time I would have to edit the formulae. Each row contains several references to the row just above, so I thought the easiest way would be INDIRECT(ADDRESS(ROW()-1,COLUMN())) for the same column, or INDIRECT(ADDRESS(ROW()-1,1)) for column A... Any better solutions?
I do not know the specifics of the problem, but it sounds like it would help to simplify the formulas, as you suggested.
Another possibility is to write macros to handle some of the calculation work. Besides Basic, macros can be written in Java, which you seem to be familiar with. Macros can be called from a spreadsheet function, or called when the document is loaded.
It may also help to use a more powerful tool such as LibreOffice Base with MySQL. Often spreadsheets that need a lot of INDIRECT() and ADDRESS() are really using database-type logic.
I need to update a KDB table with new/updated/deleted rows while it is being read by other threads. Since writing to K structures while other threads access them is not thread-safe, the only way I can think of is to clone the whole table and apply the changes to the clone. Even to do that, I need to first clone the table and then find a way to insert/update/delete rows in it.
I'd like to know if there are functions in C to:
1. Clone the whole table
2. Delete existing rows
3. Insert new rows easily
4. Update existing rows
I'd also appreciate suggestions for different approaches to the same problem.
Based on the comments...
You need to do a set of operations on the KDB database "atomically"
You don't have "control" of the database, so you can't set functions (though you don't actually need to be an admin to do this, but that's a different story)
You have a separate C process that is connecting to the database to do the operations you require. (Given you said you don't have "access" to the database as admin, you can't get KDB to load your C binary to use within-process anyway).
Firstly, I'm going to assume you know how to connect to KDB+ and issue queries via the C API (found here).
All you need to do then is concatenate your "atomic" operations into a set of statements that you issue in one call from C. For example, say you want to update a table and then delete some entries. Your call might look like this:
{update name:`me from `table where name=`you; delete from `table where name=`other;}[]
(Caution: this is just a dummy example. I've assumed your table is in-memory, not saved to disk, etc., so that the delete operation here would work just fine. If you need specific help with the actual statements for your use case, that's a different question for this forum.)
Notice that this is an anonymous function that gets called immediately on issue (the trailing []). There is an assumption that the operations within the function will succeed. Again, if you need actual q query help, it's a different question for this forum.
Even if your KDB database is multithreaded (started with -s or a negative port number), it will not let you update global variables inside a peach thread. Therefore your operation should work just fine. But just in case something else could interfere with your new anonymous function, you can wrap the function in a protected evaluation.
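For completeness, a minimal C sketch of issuing such a call (host, port, and credentials are placeholders, the query string is the dummy example from above, and error handling is trimmed):

#include <stdio.h>
#include "k.h"   /* KDB+ C API header, with the matching c.o/c.dll from kx */

int main(void) {
    /* Placeholder connection details. */
    I h = khpu("localhost", 5000, "user:pass");
    if (h <= 0) { fprintf(stderr, "connect failed\n"); return 1; }

    /* One round trip: the whole batch executes server-side as a unit. */
    K r = k(h, "{update name:`me from `table where name=`you; delete from `table where name=`other;}[]", (K)0);
    if (!r) { fprintf(stderr, "network error\n"); kclose(h); return 1; }
    if (-128 == r->t) fprintf(stderr, "server error: %s\n", r->s);

    r0(r);       /* release the result */
    kclose(h);
    return 0;
}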
I'm not clear about the queries below and am curious to know what the difference between them is, even though both retrieve the same results. (Database used: sports2000.)
FOR EACH Customer WHERE State = "NH",
    FIRST Order OF Customer:
    DISPLAY Customer.Cust-Num NAME Order-Num Order-Date.
END.

FOR EACH Customer WHERE State = "NH":
    FIND FIRST Order OF Customer NO-ERROR.
    IF AVAILABLE Order THEN
        DISPLAY Customer.Cust-Num NAME Order-Num Order-Date.
END.
Please explain.
Regards,
Suga
As AquaAlex says, your first snippet is a join (the "," part of the syntax makes it a join) and has all of the pros and cons he mentions. There is, however, a significant additional "con": the join is being made with FIRST, and FOR ... FIRST should never be used.
(See, for example: "FOR LAST - Query, giving wrong result".)
It will eventually bite you in the butt.
FIND FIRST is not much better.
The fundamental problem with both statements is that they imply that there is an order of which your desired record is the FIRST instance, but no part of the statement specifies that order. So in the event that more than one record satisfies the query, you have no idea which record you will actually get. That might be OK if the only reason you are doing this is to probe whether one or more records exist and you have no intention of actually using the record buffer. But if that is the case, then CAN-FIND() would be a better statement to use.
There is a myth that FIND FIRST is supposedly faster. If you believe this, or know someone who does, I urge you to test it. It is not true. It is true that in the case where FIND would return a large set of records, adding FIRST is faster -- but that is not apples to apples. That is throwing away the bushel after randomly grabbing an apple. And if you code like that, your apple now has magical properties which will lead to impossible-to-cure bugs.
OF is also problematic. OF implies a WHERE clause based on the compiler guessing that fields with the same name in both tables, which are part of a unique index, can be used to join them. That may seem reasonable, and perhaps it is, but it obscures the code and makes the maintenance programmer's job much more difficult. It makes a good demo but should never be used in real life.
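For illustration, a short sketch of both points against sports2000 (the explicit WHERE is roughly what OF expands to here; field names follow the snippets above, so adjust them to your schema):

/* CAN-FIND() probes for existence without fetching a record buffer,
   and the explicit join condition replaces the implicit OF. */
FOR EACH Customer WHERE Customer.State = "NH":
    IF CAN-FIND(FIRST Order WHERE Order.Cust-Num = Customer.Cust-Num) THEN
        DISPLAY Customer.Cust-Num Customer.Name.
END.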
Your first statement is a join, which means less network traffic. And you will only receive records where both the customer and the order exist, so you do not need any further checks. (MORE EFFICIENT)
The second statement will retrieve each customer, and then for each customer found it will do a FIND on Order. Because there may not be an order, you also need the additional IF AVAILABLE check. This is a less efficient way to retrieve the records and will result in much more unwanted network traffic and more statements being executed.