MonoTouch data sync - why does my code sometimes cause SQLite errors? - iphone

I have the following calls (actually a few more than this - it's the overall method that's in question here):
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshEventData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshLocationData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshActData);
First question: is it OK to call methods that invoke WCF services like this? I tried daisy-chaining them and it was a mess.
An example of one of the refresh methods being called above is (they all follow the same pattern, just call different services and populate different tables):
public void RefreshEventData (object state)
{
    Console.WriteLine ("in RefreshEventData");
    var eservices = new AppServicesClient (new BasicHttpBinding (), new EndpointAddress (this.ServciceUrl));
    //default the delta to an old date so that if this is first run we get everything
    var eventsLastUpdated = DateTime.Now.AddDays (-100);
    try {
        eventsLastUpdated = (from s in GuideStar.Data.Database.Main.Table<GuideStar.Data.Event> ()
                             orderby s.DateUpdated descending
                             select s).ToList ().FirstOrDefault ().DateUpdated;
    } catch (Exception ex1) {
        Console.WriteLine (ex1.Message);
    }
    try {
        eservices.GetAuthorisedEventsWithExtendedDataAsync (this.User.Id, this.User.Password, eventsLastUpdated);
    } catch (Exception ex) {
        Console.WriteLine ("error updating events: " + ex.Message);
    }
    eservices.GetAuthorisedEventsWithExtendedDataCompleted += delegate(object sender, GetAuthorisedEventsWithExtendedDataCompletedEventArgs e) {
        try {
            List<Event> newEvents = e.Result.ToList ();
            GuideStar.Data.Database.Main.EventsAdded = e.Result.Count ();
            lock (GuideStar.Data.Database.Main) {
                GuideStar.Data.Database.Main.Execute ("BEGIN");
                foreach (var s in newEvents) {
                    GuideStar.Data.Database.Main.InsertOrUpdateEvent (new GuideStar.Data.Event {
                        Name = s.Name,
                        DateAdded = s.DateAdded,
                        DateUpdated = s.DateUpdated,
                        Deleted = s.Deleted,
                        StartDate = s.StartDate,
                        Id = s.Id,
                        Lat = s.Lat,
                        Long = s.Long
                    });
                }
                GuideStar.Data.Database.Main.Execute ("COMMIT");
                LocationsCount = 0;
            }
        } catch (Exception ex) {
            Console.WriteLine ("error InsertOrUpdateEvent " + ex.Message);
        } finally {
            OnDatabaseUpdateStepCompleted (EventArgs.Empty);
        }
    };
}
OnDatabaseUpdateStepCompleted just increments an updateComplete counter each time it's called, and once it knows that all of the services have come back OK it removes the waiting spinner and the app carries on. A simplified sketch follows.
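For reference, a simplified sketch of that counter pattern (totalServices and HideSpinner are illustrative names, not the real code; Interlocked keeps the increment thread-safe since the completions arrive on ThreadPool threads):

int updateComplete = 0;
const int totalServices = 3;

void OnDatabaseUpdateStepCompleted (EventArgs e)
{
    // safe increment across worker threads
    if (Interlocked.Increment (ref updateComplete) == totalServices)
        InvokeOnMainThread (delegate { HideSpinner (); });   // UI work belongs on the main thread
}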
This works OK the first time around - but then sometimes it fails with one of these: http://monobin.com/__m6c83107d
I think the first question is: is all of this OK? I'm not used to threading and locks, so this is new ground for me. Is using QueueUserWorkItem like this OK? Should I even be taking a lock before doing the bulk insert/update? An example of which:
public void InsertOrUpdateEvent (Event festival)
{
    try {
        if (!festival.Deleted) {
            Main.Insert (festival, "OR REPLACE");
        } else {
            Main.Delete<Event> (festival);
        }
    } catch (Exception ex) {
        Console.WriteLine ("InsertOrUpdateEvent failed: " + ex.Message);
    }
}
Then the next question is: what am I doing wrong that is causing these SQLite issues?

SQLite is not thread safe.
If you want to access SQLite from more than one thread, you must take a lock before you access any SQLite-related structures.
Like this:
lock (db) {
    // Do your query or insert here
}
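In the question's code the bulk insert is locked, but the initial delta query is not, so a read can race a write. A minimal sketch, reusing the question's names and locking the read as well:

lock (GuideStar.Data.Database.Main) {
    var latest = GuideStar.Data.Database.Main.Table<GuideStar.Data.Event> ()
        .ToList ()
        .OrderByDescending (s => s.DateUpdated)
        .FirstOrDefault ();
    if (latest != null)
        eventsLastUpdated = latest.DateUpdated;   // also avoids the NullReferenceException the try/catch was hiding
}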

Sorry, no specific answers, but some thoughts:
Is SQLite even threadsafe? I'm not sure - it may be that it's not (or the wrapper isn't). Can you lock on a more global object, so no two threads are inserting at the same time?
It's possible that the MT GC is getting a little overenthusiastic, and releasing your string before it's been used. Maybe keep a local reference to it around during the insert? I've had this happen with view controllers, where I had them in an array (tab controllers, specifically), but if I didn't keep a member variable around with the reference, they got GC'ed.
Could you fetch the data in a threaded manner, then queue everything up and insert it from a single thread? At least as a test anyway. A sketch of that idea follows.
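A minimal sketch of that single-writer idea, assuming the question's GuideStar.Data types: the WCF handlers only enqueue, and one dedicated thread performs every insert, so SQLite is only ever touched from a single thread.

using System.Threading;
using System.Collections.Generic;

class SingleWriter
{
    readonly Queue<GuideStar.Data.Event> pending = new Queue<GuideStar.Data.Event> ();

    // producers (e.g. the WCF Completed handlers) enqueue under a lock
    public void Enqueue (GuideStar.Data.Event e)
    {
        lock (pending) {
            pending.Enqueue (e);
            Monitor.Pulse (pending);
        }
    }

    // the only thread that ever touches the database
    public void WriterLoop ()
    {
        while (true) {
            GuideStar.Data.Event next;
            lock (pending) {
                while (pending.Count == 0)
                    Monitor.Wait (pending);
                next = pending.Dequeue ();
            }
            GuideStar.Data.Database.Main.InsertOrUpdateEvent (next);
        }
    }
}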

Related

Entity Framework Core - Error Handling on multiple contexts

I am building an API where I get a specific object sent as JSON and then it gets converted into another object of another type, so we have sentObject and convertedObject. Now I can do this:
using (var dbContext = _dbContextFactory.CreateDbContext())
using (var dbContext2 = _dbContextFactory2.CreateDbContext())
{
    await dbContext.AddAsync(sentObject);
    await dbContext.SaveChangesAsync();
    await dbContext2.AddAsync(convertedObject);
    await dbContext2.SaveChangesAsync();
}
Now I had a problem where the first SaveChanges call went OK but the second threw an error because of a date field that was not properly set. Since the first SaveChanges had already run, its data was inserted into the database while the second SaveChanges failed - which must not happen in my use case.
What I want to do is if the second SaveChanges call goes wrong then I basically want to rollback the changes that have been made by the first SaveChanges.
My first thought was to delete cascade but the sentObject has a complex structure and I don't want to run into circular problems with delete cascade.
Is there any tips on how I could somehow rollback my changes if one of the SaveChanges calls fails?
You can call context.Database.BeginTransaction as follows:
using (var dbContextTransaction = context.Database.BeginTransaction())
{
    context.Database.ExecuteSqlCommand(
        @"UPDATE Blogs SET Rating = 5" +
         " WHERE Name LIKE '%Entity Framework%'");

    var query = context.Posts.Where(p => p.Blog.Rating >= 5);
    foreach (var post in query)
    {
        post.Title += "[Cool Blog]";
    }

    context.SaveChanges();
    dbContextTransaction.Commit();
}
(taken from the docs)
You can therefore begin a transaction for dbContext in your case, and if the second SaveChanges fails, call dbContextTransaction.Rollback();
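A minimal sketch of that approach, assuming the two contexts point at different databases (so they cannot share a single transaction):

using (var transaction = dbContext.Database.BeginTransaction())
{
    await dbContext.AddAsync(sentObject);
    await dbContext.SaveChangesAsync();        // not visible to others until Commit
    try
    {
        await dbContext2.AddAsync(convertedObject);
        await dbContext2.SaveChangesAsync();   // second database, saved independently
        transaction.Commit();                  // only now is the first insert made permanent
    }
    catch
    {
        transaction.Rollback();                // undo the first insert
        throw;
    }
}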
Alternatively, you can implement the cleanup logic yourself, but it would be messy to maintain that as your code here evolves in the future.
Here is example code that works for me, with no need to call the rollback function. Calling the rollback function can itself fail: if you do it inside the catch block, for example, a second exception can be thrown silently and you will never know about it. The rollback happens automatically when the transaction object in the using statement gets disposed - you can see this if you go to SSMS and look at the open transactions while debugging. See this for reference: https://github.com/dotnet/EntityFramework.Docs/issues/327
Using Transactions or SaveChanges(false) and AcceptAllChanges()?
using (var transactionApplication = dbContext.Database.BeginTransaction())
{
    try
    {
        await dbContext.AddAsync(toInsertApplication);
        await dbContext.SaveChangesAsync();

        using (var transactionPROWIN = dbContextPROWIN.Database.BeginTransaction())
        {
            try
            {
                await dbContextPROWIN.AddAsync(convertedApplication);
                await dbContextPROWIN.SaveChangesAsync();
                transactionPROWIN.Commit();
                insertOperationResult = ("Insert successful", false);
            }
            catch (Exception e)
            {
                Logger.LogError(e.ToString());
                insertOperationResult = ("Insert converted object failed", true);
                return;
            }
        }

        transactionApplication.Commit();
    }
    catch (DbUpdateException dbUpdateEx)
    {
        Logger.LogError(dbUpdateEx.ToString());
        if (dbUpdateEx.InnerException.ToString().ToLower().Contains("overflow"))
        {
            insertOperationResult = ("DateTime overflow", true);
            return;
        }
        //transactionApplication.Rollback();
        insertOperationResult = ("Duplicated UUID", true);
    }
    catch (Exception e)
    {
        Logger.LogError(e.ToString());
        transactionApplication.Rollback();
        insertOperationResult = ("Insert Application: Some other error happened", true);
    }
}

avoid concurrent access of postgres db

We have two .net services (.Net core console applications) which are accessing a postgres db table.
Service 1 inserts some 500 rows every minute. It runs as a background thread.
Service 2 reads data from the same table continuously. There is an MQTT publisher which keeps reading data from this table whenever new data is requested. This also happens very frequently, i.e. at least 4-5 times a minute.
We are getting a "FATAL: sorry, too many clients already" error.
What I am assuming is that since writes and reads happen simultaneously and frequently, the connections are not getting disposed properly.
Is there a way to avoid reads while a write is happening?
EDITED
Thanks for the reply. I know some connection pooling is happening, but I am not sure where - so my question is how to avoid concurrent access to the postgres db.
I was not sure which part of the code to post to make the question clearer.
I have a using clause on the DbContext and also dispose it, like below.
This is the retrieval section:
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        var data = platinumDBContext.TrendPoints.Where(x => ids.Contains(x.TrendPointID) && x.TimeStamp >= DateTime.Now.AddHours(-timeinHours));
        result = data.Select(x => new Last24hours
        {
            Label = x.TrendPointID.ToString(),
            Value = (double)x.TrendPointValue,
            time = x.TimeStamp.ToString("MM/dd/yyyy HH:mm:ss")
        }).ToList();
    }
    catch (Exception oE)
    {
    }
    finally
    {
        platinumDBContext.Dispose(); // redundant: the using block already disposes the context
    }
}
This is the insertion section:
using (PlatinumDBContext platinumDBContext = new PlatinumDBContext())
{
    try
    {
        foreach (var point in trendPoints)
        {
            if (point != null)
            {
                TrendPoint item = new TrendPoint();
                item.CreatedDate = DateTime.Now;
                item.ObjectState = ObjectState.Added;
                item.TrendPointID = point.TrendID;
                item.TrendPointValue = double.IsNaN(point.Value) ? decimal.MinValue : (decimal)point.Value;
                item.TimeStamp = new DateTime(point.TimeStamp);
                platinumDBContext.Add(item);
            }
        }
        platinumDBContext.SaveChanges();
    }
    catch (Exception ex)
    {
    }
    finally
    {
        platinumDBContext.Dispose(); // redundant: the using block already disposes the context
    }
}
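For reference, pooling is configured in the connection string rather than in this code. A minimal sketch, assuming Npgsql as the ADO.NET provider ("Maximum Pool Size" and "Timeout" are standard Npgsql connection string keys; the values here are placeholders): all connections built from the same connection string share one pool, and capping its size below the server's max_connections makes readers and writers queue for a connection instead of exhausting the server.

// connection string used by PlatinumDBContext (hypothetical values)
const string connString =
    "Host=localhost;Database=platinum;Username=app;Password=<secret>;" +
    "Maximum Pool Size=20;Timeout=15";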

Connect JBoss 4 with Firebird 3 via Jaybird 2.214-jdk1.6 [duplicate]

As soon as my code gets to my while (rs.next()) loop it produces the "ResultSet is closed" exception. What causes this exception and how can I correct it?
EDIT: I notice in my code that I am nesting a while (rs.next()) loop with another while (rs2.next()), both result sets coming from the same DB. Is this an issue?
Sounds like you executed another statement in the same connection before traversing the result set from the first statement. If you're nesting the processing of two result sets from the same database, you're doing something wrong. The combination of those sets should be done on the database side.
This could be caused by a number of things, including the driver you are using.
a) Some drivers do not allow nested statements. Depending on whether your driver supports JDBC 3.0, you should check the third parameter when creating the Statement object. For instance, I had the same problem with the JayBird driver for Firebird, but the code worked fine with the postgres driver. Then I added the third parameter to the createStatement method call and set it to ResultSet.HOLD_CURSORS_OVER_COMMIT, and the code started working fine for Firebird too.
static void testNestedRS() throws SQLException {
    Connection con = null;
    try {
        // GET A CONNECTION
        con = ConexionDesdeArchivo.obtenerConexion("examen-dest");
        String sql1 = "select * from reportes_clasificacion";
        Statement st1 = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.HOLD_CURSORS_OVER_COMMIT);
        ResultSet rs1 = null;
        try {
            // EXECUTE THE FIRST QRY
            rs1 = st1.executeQuery(sql1);
            while (rs1.next()) {
                // THIS LINE WILL BE PRINTED JUST ONCE ON
                // SOME DRIVERS UNLESS YOU CREATE THE STATEMENT
                // WITH 3 PARAMETERS USING
                // ResultSet.HOLD_CURSORS_OVER_COMMIT
                System.out.println("ST1 Row #: " + rs1.getRow());
                String sql2 = "select * from reportes";
                Statement st2 = con.createStatement(
                        ResultSet.TYPE_SCROLL_INSENSITIVE,
                        ResultSet.CONCUR_READ_ONLY);
                // EXECUTE THE SECOND QRY. THIS CLOSES THE FIRST
                // ResultSet ON SOME DRIVERS WITHOUT USING
                // ResultSet.HOLD_CURSORS_OVER_COMMIT
                st2.executeQuery(sql2);
                st2.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            if (rs1 != null) rs1.close();
            st1.close();
        }
    } catch (SQLException e) {
        // ignored
    } finally {
        if (con != null) con.close();
    }
}
b) There could be a bug in your code. Remember that you cannot reuse the Statement object: once you re-execute a query on the same statement object, all the open result sets associated with it are closed. Also make sure you are not closing the statement.
Also, you can only have one result set open from each statement. So if you are iterating through two result sets at the same time, make sure they are executed on different statements. Opening a second result set on one statement will implicitly close the first.
http://java.sun.com/javase/6/docs/api/java/sql/Statement.html
The exception states that your result set is closed. You should examine your code and look for all locations where you issue a ResultSet.close() call. Also look for Statement.close() and Connection.close(). For sure, one of them gets called before rs.next() is called.
You may have closed either the Connection or Statement that made the ResultSet, which would lead to the ResultSet being closed as well.
A proper JDBC call should look something like:
try {
    Connection conn = null;
    Statement stmt = null;
    ResultSet rs = null;
    try {
        conn = DriverManager.getConnection(myUrl, "", "");
        stmt = conn.createStatement();
        rs = stmt.executeQuery(myQuery);
        while (rs.next()) {
            // process results
        }
    } catch (SQLException e) {
        System.err.println("Got an exception!");
        System.err.println(e.getMessage());
    } finally {
        // you should release your resources here
        if (rs != null) {
            rs.close();
        }
        if (stmt != null) {
            stmt.close();
        }
        if (conn != null) {
            conn.close();
        }
    }
} catch (SQLException e) {
    System.err.println("Got an exception!");
    System.err.println(e.getMessage());
}
You can close the connection (or statement) only after you are done reading from the result set. The safest way is to do it in a finally block. However, close() can itself throw SQLException, hence the outer try-catch block.
I got the same error: everything was correct, except that I was using the same Statement object both to execute queries and to update the database. After separating them, i.e. using different Statement objects for updating and for querying, I resolved this error. In short: do not use the same Statement object for both updating and executing a query.
Check whether you have declared the method containing this code as static. If it is static, some other thread may be resetting the ResultSet.
Make sure you have closed all your statements and result sets before calling rs.next(). A finally block guarantees this:
public boolean flowExists(Integer idStatusPrevious, Integer idStatus, Connection connection) {
    LogUtil.logRequestMethod();
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        ps = connection.prepareStatement(Constants.SCRIPT_SELECT_FIND_FLOW_STATUS_BY_STATUS);
        ps.setInt(1, idStatusPrevious);
        ps.setInt(2, idStatus);
        rs = ps.executeQuery();
        Long count = 0L;
        if (rs != null) {
            while (rs.next()) {
                count = rs.getLong(1);
                break;
            }
        }
        LogUtil.logSuccessMethod();
        return count > 0L;
    } catch (Exception e) {
        String errorMsg = String
            .format(Constants.ERROR_FINALIZED_METHOD, (e.getMessage() != null ? e.getMessage() : ""));
        LogUtil.logError(errorMsg, e);
        throw new FatalException(errorMsg);
    } finally {
        try {
            if (rs != null) rs.close();
            if (ps != null) ps.close();
        } catch (SQLException e) {
            // ignore close failures
        }
    }
}
A ResultSetClosedException could be thrown for two reasons.
1.) You have opened another connection to the database without closing all other connections.
2.) Your ResultSet may be returning no values. So when you try to access data from the ResultSet, Java will throw a ResultSetClosedException.
It also happens when using a ResultSet outside of a @Transactional method.
ScrollableResults results = getScrollableResults("select e from MyEntity e");
while (results.next()) {
    ...
}
results.close();
If MyEntity has eager relationships with other entities, the second time results.next() is invoked the "ResultSet is closed" exception is raised.
So if you use ScrollableResults on entities with eager relationships, make sure your method runs transactionally.
"result set is closed" happened to me when using tag <collection> in MyBatis nested (one-to-many) xml <select> statement
A Spring solution could be to have a (Java) Spring #Service layer, where class/methods calling MyBatis select-collection statements are annotated with
#Transactional(propagation = Propagation.REQUIRED)
annotations being:
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
this solution does not require to set the following datasource properties (i.e., in JBoss EAP standalone*.xml):
<xa-datasource-property name="downgradeHoldCursorsUnderXa">**true**\</xa-datasource-property>
<xa-datasource-property name="resultSetHoldability">**1**</xa-datasource-property>

Handle concurrency in Entity Framework

I am looking for the best way to handle concurrency while using Entity Framework. The simplest and most recommended (also here on Stack Overflow) solution is described here:
http://msdn.microsoft.com/en-us/library/bb399228.aspx
And it looks like:
try
{
    // Try to save changes, which may cause a conflict.
    int num = context.SaveChanges();
    Console.WriteLine("No conflicts. " + num.ToString() + " updates saved.");
}
catch (OptimisticConcurrencyException)
{
    // Resolve the concurrency conflict by refreshing the
    // object context before re-saving changes.
    context.Refresh(RefreshMode.ClientWins, orders);

    // Save changes.
    context.SaveChanges();
    Console.WriteLine("OptimisticConcurrencyException handled and changes saved");
}
But is it enough? What if something changes between Refresh() and the second SaveChanges()? Will there be an uncaught OptimisticConcurrencyException?
EDIT 2:
I think this would be the final solution:
int savesCounter = 100;
Boolean saveSuccess = false;
while (!saveSuccess && savesCounter > 0)
{
    savesCounter--;
    try
    {
        // Try to save changes, which may cause a conflict.
        int num = context.SaveChanges();
        saveSuccess = true;
        Console.WriteLine("Save success. " + num.ToString() + " updates saved.");
    }
    catch (OptimisticConcurrencyException)
    {
        // Resolve the concurrency conflict by refreshing the
        // object context before re-saving changes.
        Console.WriteLine("OptimisticConcurrencyException, refreshing context.");
        context.Refresh(RefreshMode.ClientWins, orders);
    }
}
I am not sure I understand how Refresh() works. Does it refresh the whole context? If so, why does it take additional arguments (entity objects)? Or does it refresh only the objects specified?
For example, in this situation, what should be passed as Refresh()'s second argument:
Order dbOrder = dbContext.Orders.First(x => x.ID == orderID);
dbOrder.Name = "new name";
// here goes all the code written above to save the changes
Should it be dbOrder?
Yes, even the second save may cause an OptimisticConcurrencyException if - as you say - something changes between Refresh() and SaveChanges().
The example given is just very simple retry logic; if you need to retry more than once or resolve the conflict in a more complex way, you're better off writing a loop that retries n times than nesting try/catch deeper than this single level. A sketch follows.
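A minimal sketch of that bounded retry, reusing the question's context and orders variables (maxRetries is an assumption):

const int maxRetries = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        context.SaveChanges();
        break; // saved without a conflict
    }
    catch (OptimisticConcurrencyException)
    {
        if (attempt == maxRetries)
            throw; // give up and let the caller decide
        context.Refresh(RefreshMode.ClientWins, orders);
    }
}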

Bulk inserts with EntityFramework 4.0 causes abort of transaction

We receive a file from a client (Silverlight) via WCF, and on the server side I parse this file. Each line in the file is transformed into an object and stored in the database. If the file is very large (10,000 entries or more), I get the following error (MSSQLEXPRESS):
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
I have tried a lot (setting the TransactionOptions timeout and so on), but nothing works. The above exception is raised sometimes after 3000, sometimes after 6000 objects processed, but I never succeed in processing all objects.
I append my source; hopefully somebody has an idea and can help me:
public xxxResponse SendLogFile (xxxRequest request)
{
    const int INTERMEDIATE_SAVE = 100;
    using (var context = new EntityFramework.Models.Cubes_ServicesEntities())
    {
        // start a new transactionscope with the timeout of 0 (unlimited time for developing purposes)
        using (var transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
            new TransactionOptions
            {
                IsolationLevel = System.Transactions.IsolationLevel.Serializable,
                Timeout = TimeSpan.FromSeconds(0)
            }))
        {
            try
            {
                // open the connection manually to prevent undesired close of DB
                // (MSDTC)
                context.Connection.Open();
                int timeout = context.Connection.ConnectionTimeout;
                int Counter = 0;
                int Counter2 = 0; // total number of lines processed (declaration was missing)
                // read the file submitted from client
                using (var reader = new StreamReader(new MemoryStream(request.LogFile)))
                {
                    try
                    {
                        while (!reader.EndOfStream)
                        {
                            Counter++;
                            Counter2++;
                            string line = reader.ReadLine();
                            if (String.IsNullOrEmpty(line)) continue;
                            // Create a new object
                            DomainModel.LogEntry le = CreateLogEntryObject(line);
                            // and attach it to the context, set its state to added.
                            context.AttachTo("LogEntry", le);
                            context.ObjectStateManager.ChangeObjectState(le, EntityState.Added);
                            // while not 100 objects were attached, go on
                            if (Counter != INTERMEDIATE_SAVE) continue;
                            // after 100 objects, make a call to SaveChanges.
                            context.SaveChanges(SaveOptions.None);
                            Counter = 0;
                        }
                    }
                    catch (Exception exception)
                    {
                        // cleanup
                        reader.Close();
                        transactionScope.Dispose();
                        throw exception;
                    }
                }
                // do a final SaveChanges
                context.SaveChanges();
                transactionScope.Complete();
                context.Connection.Close();
            }
            catch (Exception e)
            {
                // cleanup
                transactionScope.Dispose();
                context.Connection.Close();
                throw e;
            }
        }
        var response = CreateSuccessResponse<ServiceSendLogEntryFileResponse>("SendLogEntryFile successful!");
        return response;
    }
}
There is no bulk insert in Entity Framework. You call SaveChanges after 100 records, but it will execute 100 separate inserts with a database round trip for each insert.
Setting the transaction's timeout is also subject to the maximum transaction timeout configured at machine level (I think the default value is 10 minutes). How long does it take before your operation fails?
The best thing you can do is rewrite your insert logic with plain ADO.NET or with a bulk insert - a sketch follows.
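For instance, a minimal sketch of the bulk-insert route via SqlBulkCopy (connectionString and logEntryTable are assumptions - a DataTable whose columns match the LogEntry table):

using (var bulk = new System.Data.SqlClient.SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "LogEntry";
    bulk.BatchSize = 1000;                // rows per round trip
    bulk.WriteToServer(logEntryTable);    // one streamed operation instead of thousands of INSERTs
}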
Btw. throw exception and throw e? That is an incorrect way to rethrow exceptions - a bare throw; preserves the stack trace.
Important edit:
SaveChanges(SaveOptions.None) means "do not accept changes after saving", so all records remain in the Added state. Because of that, the first call to SaveChanges inserts the first 100 records, the second call inserts the first 100 again plus the next 100, the third call inserts the first 200 plus the next 100, and so on. The fix is sketched below.
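A minimal sketch of that fix, applied to the intermediate-save step from the question: either call the parameterless SaveChanges(), or pair SaveOptions.None with an explicit AcceptAllChanges():

if (Counter == INTERMEDIATE_SAVE)
{
    context.SaveChanges(SaveOptions.None);
    context.AcceptAllChanges(); // without this, the next SaveChanges re-inserts every record
    Counter = 0;
}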
I had exactly the same issue. My EF code inserts 1000 records in bulk at a time.
It worked from the beginning, apart from a little problem with MSDTC, which I configured to allow remote clients and admin access; after that it was OK. I did a lot of work with this, but one day it just stopped working.
I am getting
The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.
Very weird! Sometimes the error changes. I suspect MSDTC somehow - strange behavior.
I am now switching away from TransactionScope!
I hate it when something works and then just stops. I also tried running this in a VM - another enormous waste of time...
My code:
private void AddTicks(FileHelperTick[] fhTicks)
{
    List<ForexEF.Entities.Tick> Ticks = new List<ForexEF.Entities.Tick>();
    var str = LeTicks(ref fhTicks, ref Ticks);
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions()
    {
        IsolationLevel = System.Transactions.IsolationLevel.Serializable,
        Timeout = TimeSpan.FromSeconds(180)
    }))
    {
        ForexEF.EUR_TICKSContext contexto = null;
        try
        {
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
            int count = 0;
            foreach (var tick in Ticks)
            {
                count++;
                contexto = AddToContext(contexto, tick, count, 1000, true);
            }
            contexto.SaveChanges();
        }
        finally
        {
            if (contexto != null)
                contexto.Dispose();
        }
        scope.Complete();
    }
}

private ForexEF.EUR_TICKSContext AddToContext(ForexEF.EUR_TICKSContext contexto, ForexEF.Entities.Tick tick, int count, int commitCount, bool recreateContext)
{
    contexto.Set<ForexEF.Entities.Tick>().Add(tick);
    if (count % commitCount == 0)
    {
        contexto.SaveChanges();
        if (recreateContext)
        {
            contexto.Dispose();
            contexto = new ForexEF.EUR_TICKSContext();
            contexto.Configuration.AutoDetectChangesEnabled = false;
        }
    }
    return contexto;
}
It times out due to the TransactionScope default maximum timeout; check machine.config for that.
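For reference, a minimal sketch of raising that machine-wide cap in machine.config (the two-hour value is just an example):

<system.transactions>
    <machineSettings maxTimeout="02:00:00" />
</system.transactions>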
Check out this link:
http://social.msdn.microsoft.com/Forums/en-US/windowstransactionsprogramming/thread/584b8e81-f375-4c76-8cf0-a5310455a394/