FileNet: TransactionRolledbackException while creating index on DB2 from FEM

I am trying to add an index to a document class' field, but after some minutes I receive the error TransactionRolledbackException.
I suspect that's because this class already has thousands of existing objects; whenever I try to create an index on a brand-new class, I do not receive this error.
I also suspected that it was just a timeout in FEM, with the index creation proceeding in the background, but this is not the case: I started this creation 4 days ago with no result.
Even after closing and reopening FEM, the property still shows the index as missing.
Is there some parameter I can set to avoid the timeout, or is it possible to create the index directly on DB2?

Related

Postgres count(*) optimization idea

I'm currently working on a project that involves tracking users and their actions in my database (PostgreSQL as the RDBMS), and I have run into an issue when trying to perform COUNT(*) on occurrences of each user. What I want is to be able to efficiently count the number of times each user appears across all records, and also to look at those counts over a particular date range.
So, the problem is: how do we count the total number of times a user appears in the table's contents, and how do we count the total within a date range?
What I've tried
As you might know, Postgres doesn't support COUNT(*) very well using indices, so we have to consider other ways to reduce the number of records it looks at in order to speed up the query. My first approach is to create a table that keeps track of the number of times a user has a log message associated with them, and on what day (similar to the idea behind a materialized view, but I don't want to continually refresh a materialized view with my count query). Here is what I've come up with:
-- "user" is a reserved word in PostgreSQL, so it is quoted below.
CREATE TABLE users_counts("user" varchar(65536), counter int DEFAULT 0, day date);

CREATE RULE inc_user_date_count
AS ON INSERT TO main_table
DO ALSO UPDATE users_counts SET counter = counter + 1
WHERE "user" = NEW."user" AND day = DATE(NEW.date_);
What this does: every time a new record is inserted into main_table, we update users_counts, incrementing the row whose day equals the new record's date and whose user name matches.
NOTE: the date_ column in main_table is a timestamp, so I must cast the new record's date_ to a DATE.
The problem is: if the user column value doesn't already exist in users_counts for the current day, then nothing is updated.
Here is my question:
How do I write the rule so that it checks whether a user exists for the current day? If so, increment that counter; otherwise, insert a new row with the user, the day, and a counter of 1.
I would also like to know whether my approach makes sense, or whether there are ideas I just haven't thought about. As my database grows it becomes increasingly inefficient to perform the counting, so I want to avoid any performance bottlenecks.
EDIT 1: I was able to figure this out by creating a separate RULE, but I'm not sure if it is correct:
CREATE RULE test_insert AS ON INSERT TO main_table
DO ALSO INSERT INTO users_counts("user", counter, day)
SELECT NEW."user", 1, DATE(NEW.date_)
WHERE NOT EXISTS (SELECT 1 FROM users_counts
                  WHERE "user" = NEW."user" AND day = DATE(NEW.date_));
Basically, an insert happens if the user doesn't already exist in my cached table users_counts for that day, and the first rule above updates the count.
What I'm unsure of is how I know which rule is called first, the update rule or the insert rule. And there must be a better way: how do I combine the two rules? Can this be done with a function?
It is true that PostgreSQL is notoriously slow when it comes to count(*) queries. However, if you have a WHERE clause that limits the number of entries, the query will be much faster. If you are using PostgreSQL 9.2 or newer, the query can be just as fast as it is in MySQL because of index-only scans, which were added in 9.2, but it's best to EXPLAIN ANALYZE your query to make sure.
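For illustration, a sketch against the question's main_table (the index name is an assumption, and whether the planner actually chooses an index-only scan also depends on the table's visibility map):

-- Assumed covering index on the filter columns.
CREATE INDEX main_table_user_date_idx ON main_table ("user", date_);

-- On 9.2+ the plan can show an "Index Only Scan".
EXPLAIN ANALYZE
SELECT count(*)
FROM main_table
WHERE "user" = 'some_user'
  AND date_ >= '2015-01-01'
  AND date_ <  '2015-02-01';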
Does my solution make sense?
Very much so, provided that your EXPLAIN ANALYZE shows that index-only scans are not being used. Trigger-based solutions like the one you have adapted find wide usage. But, as you have realized, the problem of the initial state arises (whether to do an update or an insert).
which rule is called first
Multiple rules on the same table and same event type are applied in alphabetical name order.
from http://www.postgresql.org/docs/9.1/static/sql-createrule.html
The same applies to triggers. If you want a particular rule to be executed first, change its name so that it comes earlier in alphabetical order.
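For example, a sketch of the question's two rules renamed so that the insert-if-missing rule sorts first; it inserts a counter of 0 so that the increment rule, running second, brings it to 1 (the rule names and the 0-then-increment trick are illustrative, not from the question):

-- Sorts first alphabetically: create the row if it is missing.
CREATE RULE a_insert_user_date AS ON INSERT TO main_table
DO ALSO INSERT INTO users_counts("user", counter, day)
SELECT NEW."user", 0, DATE(NEW.date_)
WHERE NOT EXISTS (SELECT 1 FROM users_counts
                  WHERE "user" = NEW."user" AND day = DATE(NEW.date_));

-- Sorts second: increment the row, which is now guaranteed to exist.
CREATE RULE b_inc_user_date AS ON INSERT TO main_table
DO ALSO UPDATE users_counts SET counter = counter + 1
WHERE "user" = NEW."user" AND day = DATE(NEW.date_);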
how do I combine the two rules?
One solution is to modify your rule to perform an upsert (look right at the bottom of that page for a sample upsert). The other is to populate the counter table with initial values; the trick is to create the trigger at the same time, to avoid errors. This blog post explains it really well.
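For the second option, seeding the counter table could look something like this sketch, grouping the rows already in main_table by user and day (names from the question):

-- One-time seed of users_counts from the existing data.
INSERT INTO users_counts("user", counter, day)
SELECT "user", count(*), DATE(date_)
FROM main_table
GROUP BY "user", DATE(date_);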
While the initial setup will be slow, each individual insert will probably be faster. The two opposing factors are the slowness of a WHERE NOT EXISTS query versus the overhead of catching an exception.
Tip: A block containing an EXCEPTION clause is significantly more expensive to enter and exit than a block without one. Therefore, don't use EXCEPTION without need.
Source: the PostgreSQL documentation page linked above.
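Putting that together, a minimal sketch of the documentation's exception-based upsert, adapted to the users_counts table from the question; it assumes a unique constraint on ("user", day), which the question's schema does not yet declare:

CREATE FUNCTION upsert_user_count(p_user varchar, p_day date) RETURNS void AS
$$
BEGIN
    LOOP
        -- first try to increment an existing row
        UPDATE users_counts SET counter = counter + 1
        WHERE "user" = p_user AND day = p_day;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert it; a concurrent insert of the
        -- same key raises unique_violation, which we catch below
        BEGIN
            INSERT INTO users_counts("user", counter, day)
            VALUES (p_user, 1, p_day);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

A trigger on main_table could then call this function instead of the two rules.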

PostgreSQL Rails 4 issue: record_changed? is defined by Active Record

I am getting the error: record_changed? is defined by Active Record.
I could not find any info on this in the documentation or online.
What is causing this?
Well, the simple answer is: you have a field called "record" in your table. Active Record automatically tries to create a method called "record_changed?" so you can check whether that field has changed, but it has already defined this method to check whether the record (row) has changed.

Entity Framework and Identity Insert

I am writing an application that exports data and serializes it to file for archiving old data.
There may be occasions where, for some reason, selected data needs to be re-imported. This has been causing me a problem because of an identity column.
To get around this I am performing the work inside a transaction scope, setting IDENTITY_INSERT ON for that table and then performing my inserts, e.g.:
using (TR.TransactionScope scope = new TR.TransactionScope(TR.TransactionScopeOption.RequiresNew))
{
    // allow transaction nbr to be inserted instead of auto-generated
    int i = context.ExecuteStoreCommand("SET IDENTITY_INSERT dbo.Transactions ON");
    try
    {
        // check if it already exists before restoring
        var matches = context.Transactions.Where(tr => tr.transaction_nbr == t.transaction_nbr);
        if (matches.Count() == 0)
        {
            Transaction original = t;
            context.Transactions.AddObject(original);
            context.SaveChanges();
            restoreCount++;
        }
        // ...
But I receive an exception saying:
Explicit value must be specified for identity column in table either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.
I am assuming Entity Framework is trying to do some sort of block insert without specifying the columns. Is there any way to do this in Entity Framework?
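For context, this is what SQL Server itself requires once IDENTITY_INSERT is ON; the INSERT must carry an explicit column list that includes the identity column (the column names below are assumptions for illustration):

SET IDENTITY_INSERT dbo.Transactions ON;

-- The column list is mandatory and must include the identity
-- column (here assumed to be transaction_nbr).
INSERT INTO dbo.Transactions (transaction_nbr, amount)
VALUES (12345, 99.95);

SET IDENTITY_INSERT dbo.Transactions OFF;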
The object is large and has a number of associated entities that are also deserialized and need inserting, so I want to let Entity Framework do this if possible, as it will save me a lot of extra work.
Any help is appreciated.
Gert Arnold - that was the answer to my question. Thank you.
I did read that elsewhere and had set the value in the object browser, so I thought this would suffice. I also double-checked the value by right-clicking the .edmx file and choosing Open With to see the details in an XML editor, as suggested in another post.
When I initially checked the value in the XML editor it too was "None", so I assumed this wasn't my problem. But I guess just going in there and saving rectified the problem the first time around.
The second time around, after I had updated the model from the database, I had to repeat this step at your suggestion. But this time the StoreGeneratedPattern was different from the one set in the object browser, so I needed to change it manually in the XML.
In my circumstances this is fine, since the records are normally inserted via another mechanism; the identity is just getting in the way here, because the value being inserted is always an old (previously existing) identity value that is being temporarily restored.
Thanks for your help.

atk4.2 form submit - how to get new record id before insert to pass in arguments

I am referencing the 2-step newsletter example at http://agiletoolkit.org/codepad/newsletter. I modified the example into a 4-step process. The following page class is step 1, and it works: it inserts a new record and gets the new record id. The problem is that I don't want to insert this record into the database until the final step. I am not sure how to retrieve this id without using the save() function. Any ideas would be helpful.
class page_Ssp_Step1 extends Page {
    function init(){
        parent::init();
        $p=$this;
        $m=$p->add('Model_Publishers');
        $form=$p->add('Form');
        $form->setModel($m);
        $form->addSubmit();
        if($form->isSubmitted()){
            $m->save();                              // inserts new record into db
            $new_id=$m->get('id');                   // gets id of new record
            $this->api->memorize('new_id',$new_id);  // carries id across pages
            $this->js()->atk4_load($this->api->url('./Step2'))->execute();
        }
    }
}
There are several ways you could do this: using atk4 functionality, using MySQL transactions, or as part of the design of your application.
1) Manage the id column yourself
I assume you are using an auto-increment column in MySQL, so one option would be to not make this column auto-increment, but instead use a sequence: select the next value and save it in your memorize statement, then add it in the model as a defaultValue using ->defaultValue($this->api->recall('new_id')).
2) Turn off autocommit and create a transaction around the inserts
I'm from an Oracle background rather than MySQL, but MySQL also allows you to wrap several statements in a transaction which either saves everything or rolls back. So this would also be an option: if you create a transaction, you can still save at each step, but the complete transaction populating several tables is only committed if all steps complete.
In atk 4.1, the DBlite/mysql.php class contains some functions for transaction support, but the documentation on agiletoolkit.org is incomplete, and it's unclear how you change the dbConnect being used: currently you connect to a database in lib/Frontend.php using $this->dbConnect(), but there is no option to pass a parameter.
It looks like you may be able to issue the needed transaction commands by running this at the start of the first page:
$this->api->db->query('SET AUTOCOMMIT=0');
$this->api->db->query('START TRANSACTION');
then do inserts in the various pages as needed. Note that everything done will be contained in a transaction, so if the user doesn't complete the process, nothing will be saved.
On the last insert,
$this->api->db->query('COMMIT');
Then, if you want to, turn autocommit back on so each SQL statement is committed:
$this->api->db->query('SET AUTOCOMMIT=1');
I haven't tried this, but hopefully that helps.
3) use beforeInsert or afterInsert
You can also look at overriding the beforeInsert function on your model, which receives an array of the data. But if your id is an auto-increment column, I think it won't have a value until the afterInsert function, which has a parameter of the inserted id.
4) use a status to indicate complete record
Finally, you could use a status column on your record to indicate that it is only at the first stage, and only update it to a complete status when the final stage is completed. Then you can have a housekeeping job that runs at intervals to remove records that didn't complete all stages. Any grid or CRUD where you display these records would be limited with AddCondition('status','C') in the model, or added in the page, so that incomplete ones never get shown.
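For illustration, a rough sketch of the SQL side in plain MySQL (the table, columns, and retention interval are assumptions, not from the question):

-- Assumed table and columns, for illustration only.
ALTER TABLE publisher ADD COLUMN status CHAR(1) NOT NULL DEFAULT 'I';  -- 'I' = incomplete
ALTER TABLE publisher ADD COLUMN created_at DATETIME;

-- At the final step, mark the record complete.
UPDATE publisher SET status = 'C' WHERE id = 42;

-- Housekeeping job, run at intervals: purge records that never completed.
DELETE FROM publisher
WHERE status = 'I'
  AND created_at < NOW() - INTERVAL 1 DAY;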
5) Manage the transaction as non-SQL
As suggested by Romans, you could store the result of the form processing in session variables instead of writing it directly to the database, and then use SQL to insert it once the last step is completed.

Sybase select variable logic

Ok, I have a question relating to an issue I've previously had. I know how to fix it, but we are having problems trying to reproduce the error.
We have a series of procedures that create records based on other records. The records are linked to the primary record by way of a link_id. In a procedure that grabs this link_id, the query is
select @p_link_id = id  -- id of the parent
from table
where thingy_id = (blah)
Now, there are multiple rows in the table for the activity. Some can be cancelled. The code I have doesn't exclude cancelled rows in the select statement, so if there are previously cancelled rows, those ids will appear in the select. There is always exactly one 'open' record, which is selected if I exclude cancelled rows (by appending where status != 'C').
This solves the issue. However, I need to be able to reproduce it in our development environment.
I've gone through a process of entering a whole heap of data, opening, cancelling, etc., to try to get this select statement to return an invalid id. However, whenever I run the select, the ids come back in order (sequence-generated); in the case where this error occurred, the select statement returned what seems to be the first value into the variable.
For example:
ID Status
1 Cancelled
2 Cancelled
3 Cancelled
4 Open
Given the above, if I do a select for the ID I want, I should get 4. In the error case, the result was 1. However, even if I enter 10 cancelled records, I still get the last one in the select.
In Oracle, I know that if you select into a variable and more than one record is returned, you get an error (I think). Sybase, apparently, can assign multiple values into a variable without erroring.
I'm thinking that either it's something to do with how the data is selected from the table, where ids without a sort order don't come back in ascending order, or there's a dboption whereby a select into a variable saves the first or last value queried.
Edit: it looks like we can reproduce this error by rolling back stored procedure changes. However, the procs don't go anywhere near this link_id column. Is it possible that changes to the database architecture could break an index or something?
If more than one row is returned, the value that is stored will be the last value in the list, according to this.
If you haven't specified an order for retrieval via ORDER BY, then the rows are returned at the convenience of the database engine. The order may very well vary by database instance; it may be the order of creation, or it may even appear "random" because of where the data is placed within the database block structure.
The moral of the story:
1. Always make singleton SELECTs return a single row.
2. When #1 can't be done, use an ORDER BY to make sure the row you care about comes last.
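As a rough sketch of the second point, using the question's column names (the table name, variable, and literal are assumptions standing in for the question's placeholders):

declare @p_link_id int

-- Exclude cancelled rows; if several rows still match, the ORDER BY
-- makes the highest id the last one assigned, so it wins.
select @p_link_id = id
from activity
where thingy_id = 123
  and status != 'C'
order by id

select @p_link_id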