CodeFluent execution timeout

I have a collection load function in my CodeFluent model that I use in a nightly background process. As my database grows, it now hits a SQL execution timeout. It is fine by me that the execution takes long, since it runs at night as a background process. How can I set the timeout for this specific function?

If it is a background process, you can use a configuration file specific to this process and set the commandTimeout attribute in its CodeFluent configuration section.
Alternatively, you can override the CommandTimeout of the current command before execution: CodeFluent.Runtime.CodeFluentContext.Get("XXX").Persistence.BaseCommand.CommandTimeout = 180; where XXX is the store name.
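A minimal sketch of the second approach (the "MyStore" store name and the UserCollection.LoadAll() call are illustrative, not from the question):

using CodeFluent.Runtime;

// Get the CodeFluent context for your store and raise the SQL command
// timeout right before the long-running call, as described above.
// As with SqlCommand.CommandTimeout, the value is in seconds.
CodeFluentContext context = CodeFluentContext.Get("MyStore");
context.Persistence.BaseCommand.CommandTimeout = 600; // 10 minutes

// The nightly collection load now runs with the raised timeout.
UserCollection users = UserCollection.LoadAll();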

Related

1-Hour Timeout on SSAS 2014 + ADOMD.Net - but no Timeouts Set to an Hour

I've run into a mystifying XMLA timeout error when running an ADOMD.Net command from a .Net application. The Visual Basic routine iterates over a list of mining models residing on a SQL Server Analysis Services 2014 instance and performs a cross-validation test on each one. Whenever the time elapsed on the cross-validation test reaches the 60-minute mark, the XML for Analysis parser throws an error saying that the request timed out. For any routine operations taking less than one hour, I can use the same ADOMD.Net connections with the same server and application without any hitches. The culprit in such cases is often the ExternalCommandTimeout setting on the server, which defaults to 3600 seconds, i.e. one hour. In this case, however, all of the following timeout properties on the server are set to zero: CommitTimeout, ExternalCommandTimeout, ExternalConnectionTimeout, ForceCommitTimeout, IdleConnectionTimeout, IdleOrphanSessionTimeout, MaxIdleSessionTimeout and ServerTimeout.
There are only three other timeout properties available, none of which is set to one hour: MinIdleSessionTimeout (currently at 2700), DatabaseConnectionPoolConnectTimeout (now at 60 seconds) and DatabaseConnectionPoolTimeout (at 120000). The MSDN documentation lists another three timeout properties that aren't visible even with Advanced Properties checked in SQL Server Management Studio 2017:
AdminTimeout, DefaultLockTimeoutMS and DatabaseConnectionPoolGeneralTimeout. The first two default to no timeout and the third defaults to one minute. MSDN also mentions a few "forbidden" timeout properties, like SocketOptions\LingerTimeout, InitialConnectTimeout, ServerReceiveTimeout and ServerSendTimeout, which all carry the warning, "An advanced property that you should not change, except under the guidance of Microsoft support." I do not see any means of setting these through the SSMS 2017 GUI, though.
Since I've literally run out of timeout settings to try, I'm stumped as to how to correct this behavior and allow my .Net app to wait on those cross-validations through ADOMD. Long ago I was able to solve a few arcane SSAS timeout issues by appending certain property settings to the connection strings, such as "Connect Timeout=0;CommitTimeout=0;Timeout=0" and so on. Nevertheless, attempting to assign an ExternalCommandTimeout value through the connection string in this manner results in the XMLA error
"The ExternalCommandTimeout property was not recognized." I have not tested each and every one of the SSAS server timeouts in this manner, but this exception signifies that ADOMD.Net connection strings can only accept a subset of the timeout properties.
Am I missing a timeout setting somewhere? Does anyone have any ideas on what else could cause this kind of esoteric error? Thanks in advance. I've put this issue on the back burner about as long as I can and really need to get it fixed now. I wonder if perhaps ADOMD.Net has its own separate timeout settings, perhaps going by different names, but I can't find any documentation to that effect...
I tracked down the cause of this error: buried deep in the VB.Net code on the front end was a line that set the CommandTimeout property of the ADOMD.Net Command object to 3600 seconds. This overrode the connection string settings mentioned above, as well as all of the server-level settings. The problem was masked by the fact that cross-validation retrieval operations were also timing out in the Visual Studio 2017 GUI. That occurred because the VS instance had only recently been installed and the Connection and Query Timeouts hadn't yet been set to 0 under Options menu / Business Intelligence Designers / Analysis Services Designers / General.
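Recast in C# for reference (the original was VB.Net; server and catalog names here are placeholders), the culprit and the eventual fix look roughly like this:

using Microsoft.AnalysisServices.AdomdClient;

AdomdConnection conn = new AdomdConnection(
    "Data Source=MySsasServer;Catalog=MyMiningDb"); // placeholder names
conn.Open();

AdomdCommand cmd = conn.CreateCommand();
// The buried culprit: a one-hour client-side timeout that silently
// overrides the connection string and every server-level setting.
cmd.CommandTimeout = 3600;

// The fix for long cross-validation runs: 0 disables the client-side
// timeout, following the usual .Net data provider convention.
cmd.CommandTimeout = 0;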

Stop 2 Conflicting Scripts Running At The Same Time

I have two scripts that do the same thing but for different companies, and during the process they both use the same tables.
It's imperative that only one script runs at a time; the timings sometimes vary greatly, and they are deliberately scheduled rather close together. My question is: what is the best method to ensure these scripts do not run together? I tried using a global field, set to 1 at the beginning of the script and 0 at the end, so that when the second script runs it can check: if the global field = 1, exit the script.
This did not work, as both these scripts are scheduled server-side, and I have read that globals are local to each session in this case.
I assume we are talking about FileMaker Server schedules.
Globals are reset every time you run a scheduled script, and every script runs in its own session. You cannot use them to ensure the scripts do not clash.
As far as I know, FileMaker Server does not run two schedules at the same time; the second script will be delayed until the first one finishes.
Actually, FileMaker Server can run script schedules simultaneously, so an overlap can occur.
What you need to do is set a field that is not a global, so that both schedules can check its value.
A single-record table would be ideal for this.
Make sure that you commit after setting the field, or you may run into record-locking issues.
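The pattern itself is generic: atomically claim the flag, exit if someone else holds it, do the work, then clear the flag and commit. A rough illustration in C# against a hypothetical one-row ScriptLock table (FileMaker's equivalent is Set Field followed by Commit Records):

using System.Data;

// Returns true only if we claimed the lock; the WHERE clause makes the
// claim atomic, so two concurrent runs cannot both succeed.
static bool TryAcquireLock(IDbConnection conn)
{
    using (IDbCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "UPDATE ScriptLock SET Running = 1 WHERE Running = 0";
        return cmd.ExecuteNonQuery() == 1; // 1 row updated = lock acquired
    }
}

static void ReleaseLock(IDbConnection conn)
{
    using (IDbCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "UPDATE ScriptLock SET Running = 0";
        cmd.ExecuteNonQuery();
    }
}

If TryAcquireLock returns false, the script simply exits; otherwise it runs the job and calls ReleaseLock in a finally block so a failure cannot leave the flag stuck.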
Alternatively, create an OS-level script that uses the fmsadmin command line to run one script and then the second.
Set the FileMaker Server schedule to run the OS script (which then runs the PSoS — Perform Script on Server — scripts).

Cannot find a record just created in a different thread with JPA

I am using the Play! framework and have a difficulty with the following scenario.
I have a server process which runs in a 'read-only' transaction, to prevent any possible database locks during execution, as it is a complicated procedure. There are one or two records to be stored, but I do that in a job, as I found that doing it in the main thread could result in a deadlock under higher load.
However, on one occasion I need to create an object and subsequently use it.
When I create the object using a Job, wait for the resulting id (with a Promise return) and then search the database for it, it cannot be found.
Is there an easy way to have JPA search 'afresh' in the DB at this point? I implemented a 5-second pause to test, so I am sure it is not because the procedure hadn't finished yet.
Check if there is a transaction wrapped around your INSERT and, if there is one, check that the transaction is committed; until it commits, other sessions (including your main thread's 'read-only' transaction) cannot see the new record.

Issue with Entity Framework 4.2 Code First taking a long time to add rows to a database

I am using Entity Framework 4.2 with Code First. I have a Windows 2008 application server and a database server running on Amazon EC2. The application server has a Windows service installed that runs once per day. The service executes the following code:
// returns between 2000-4000 records
var users = userRepository.GetSomeUsers();

// do some work
foreach (var user in users)
{
    var userProcessed = new UserProcessed { User = user };
    userProcessedRepository.Add(userProcessed);
}

// Calls SaveChanges() on DbContext
unitOfWork.Commit();
This code takes a few minutes to run. It also maxes out the CPU on the application server. I have tried the following measures:
Removed the unitOfWork.Commit() call to see whether the slowdown was network-related (the application server talking to the database). This did not change the outcome.
Changed my application server from a medium instance to a high-CPU instance on Amazon to see if it was resource-related. The server no longer maxed out the CPU and the execution time improved slightly, but it still took a few minutes.
As a test, I modified the above code to run three times to see how the execution time changed for the second and third loops using the same DbContext. Every consecutive loop took longer to run than the previous one, which could be related to using the same DbContext.
Am I missing something? Is it really possible that something as simple as this takes minutes to run? Even if I don't commit to the database after each loop? Is there a way to speed this up?
Entity Framework (as it stands) isn't really well suited to this kind of bulk operation. Are you able to use one of SQL Server's bulk-insert methods from EC2 (see the sketch below)? Otherwise, you might find that hand-coding the T-SQL INSERT statements is significantly faster. If performance is important, that probably outweighs the benefits of using EF.
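For example, a hand-rolled bulk insert with SqlBulkCopy (the table and column names here are assumptions, not from the question) would replace the per-entity Add calls:

using System.Data;
using System.Data.SqlClient;

// Build an in-memory table matching the target table's shape.
var table = new DataTable();
table.Columns.Add("UserId", typeof(int)); // assumed FK column
foreach (var user in users)
    table.Rows.Add(user.Id);

// Stream all rows to SQL Server in one bulk operation instead of
// thousands of individually tracked inserts.
using (var bulkCopy = new SqlBulkCopy(connectionString)) // your connection string
{
    bulkCopy.DestinationTableName = "dbo.UserProcessed"; // assumed table name
    bulkCopy.WriteToServer(table);
}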
My guess is that your ObjectContext is accumulating a lot of entity instances. SaveChanges appears to have a phase whose running time is linear in the number of entities loaded, which is likely why each loop takes longer than the last.
A way to resolve this is to use multiple, smaller ObjectContexts so old entity instances are released.
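A sketch of that approach, creating a fresh context per batch (the MyDbContext type and the batch size are illustrative, not from the question):

using System.Linq;

const int batchSize = 500; // tune empirically
var users = userRepository.GetSomeUsers().ToList();

for (int i = 0; i < users.Count; i += batchSize)
{
    // A short-lived context keeps the change tracker small, so SaveChanges
    // no longer slows down as entities accumulate across iterations.
    using (var context = new MyDbContext())
    {
        foreach (var user in users.Skip(i).Take(batchSize))
        {
            context.Set<User>().Attach(user); // re-attach the detached entity
            context.Set<UserProcessed>().Add(new UserProcessed { User = user });
        }
        context.SaveChanges();
    }
}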

Play Model save function isn't actually writing to the database

I have a Play model called "JobStatus", and it has just one property: a JobState enum (Running/NotRunning).
The class extends Model and is implemented as a singleton; you call its getInstance() method to get the only record in the underlying table.
I have a job that runs every month, and in the job I toggle the state of the JobStatus object back and forth at various times and call .save().
I've noticed it isn't actually saving.
When the job starts, its first lines of code are:
JobStatus thisJobStatus = JobStatus.getInstance();
// ... exit if already running
thisJobStatus.JobState = JobState.Running;
thisJobStatus.save();
Then, when the job is done, it changes the status back to NotRunning and saves again.
The issue is that when I look in the MySQL database, the actual record value never changes.
This causes a catastrophic failure: when other nodes try to run the job, they check the state, and since they see it as NotRunning, they all try to run the job too.
So my clever scheme for managing job state is failing because the actual value isn't getting committed to the DB.
How do I force Play to write to the DB right away when I call .save() on a model?
Thanks
Josh
Try adding this to your JobStatus class and call it after save():
public static void commit() {
    // Commit the current transaction so the change is immediately visible
    // to other sessions/nodes...
    JobStatus.em().getTransaction().commit();
    // ...then start a new transaction so the rest of the job can continue.
    JobStatus.em().getTransaction().begin();
    // Flush any pending changes and clear the persistence context so that
    // later reads hit the database fresh.
    JobStatus.em().flush();
    JobStatus.em().clear();
}
I suppose you want to mark your job as "running" pretty much as the first thing when the job starts? In that case, you shouldn't have any other ongoing database statements yet...
To commit your changes to the database immediately (instead of when the job ends), add the following commands after the thisJobStatus.save(); method call:
JPA.em().flush();
JPA.em().getTransaction().commit();
If the job needs the database again afterwards, begin a new transaction after the commit, as the helper in the answer above does.
Additionally, since you're using MySQL, you might want to lock the row immediately upon retrieval using the SELECT ... FOR UPDATE clause. (See the MySQL Reference Manual for more information.) Of course, you wouldn't want that in your getInstance() method itself, otherwise every fetch operation would lock the record.