I have an SQLite database mapped in an Entity Framework context.
I write to this database from several threads (a bad idea, I know). However, I tried serializing writes with a global lock for my application, like this:
partial class MyDataContext : ObjectContext
{
    public new int SaveChanges()
    {
        lock (GlobalWriteLock.Lock)
        {
            try
            {
                int result = base.SaveChanges();
                Log.InfoFormat("fff Save changes performed for {0} entries", result);
                return result;
            }
            catch (UpdateException)
            {
                // Rethrow without resetting the stack trace.
                throw;
            }
        }
    }
}
Still, I get the "database file is locked" exception all the way down from SQLite itself. How can this be possible?
The only explanation I can see is that the base.SaveChanges method returns before the database is unlocked and continues working asynchronously after returning.
Is this the case? If so, how can I overcome this issue?
Note: My commits are usually updates of 1-100 entries and/or inserts of about 1-100 entries at a time.
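One thing worth checking in the code above: because SaveChanges is declared with new, it hides rather than overrides the base method, so any call made through an ObjectContext-typed reference bypasses the lock entirely. A minimal sketch of the hiding behavior (assuming the generated parameterless constructor):

ObjectContext baseRef = new MyDataContext();
baseRef.SaveChanges();    // calls ObjectContext.SaveChanges directly; no lock is taken

MyDataContext derivedRef = new MyDataContext();
derivedRef.SaveChanges(); // calls the locked MyDataContext.SaveChanges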
When we pass our DbContext an object whose values have not changed and try to perform an update, we get a 500 Internal Server Error.
A user may open a dialog box to edit a record, change a value, change it back, and then send the record to the database. We also provide a Backup and Restore function, and when records are restored, some of them will not have changed since the backup was performed.
I was under the impression that a PUT would delete and re-create the record, so I didn't think there would be a problem.
For example, having checked that the Activity exists, my ActivityController is as follows:
var activityEntityFromRepo = _activityRepository.GetActivity(id);

// Map(source object (Dto), destination object (Entity))
_mapper.Map(activityForUpdateDto, activityEntityFromRepo);
_activityRepository.UpdateActivity(activityEntityFromRepo);

// Save the updated Activity entity, added to the DbContext, to the SQL database.
if (await _activityRepository.SaveChangesAsync())
{
    var activityFromRepo = _activityRepository.GetActivity(id);
    if (activityFromRepo == null)
    {
        return NotFound("Updated Activity could not be found");
    }
    var activity = _mapper.Map<ActivityDto>(activityFromRepo);
    return Ok(activity);
}
else
{
    // The save failed.
    var message = $"Could not update Activity {id} in the database.";
    _logger.LogWarning(message);
    throw new Exception(message);
}
My ActivityRepository is as follows:
public void UpdateActivity(Activity activity)
{
    _context.Activities.Update(activity);
}
If any of the fields have changed then we don't get the error. Do I have to check every record for equality before the PUT? It seems unnecessary.
Perhaps I have missed something obvious. Any suggestions very welcome.
There is a lot of code missing here.
In your code you call your own SaveChangesAsync (not EF's SaveChangesAsync).
Probably (though there isn't enough code here to be sure) your SaveChangesAsync returns false if there is an exception (not a good pattern, because you lose the exception info) or if DbContext.SaveChangesAsync returns 0.
I think (though a lot of code is missing) that this is your case: if you don't make any changes, SaveChangesAsync returns 0.
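For illustration, a minimal sketch of what that suspected repository method might look like (the names here, including _context, are assumptions, not the poster's actual code):

public async Task<bool> SaveChangesAsync()
{
    try
    {
        // DbContext.SaveChangesAsync returns the number of state entries
        // written to the database; it is 0 when nothing was modified.
        return await _context.SaveChangesAsync() > 0;
    }
    catch (DbUpdateException)
    {
        // The exception info is lost here, so the caller only ever
        // sees a generic failure.
        return false;
    }
}

With a pattern like this, an update that changes nothing writes 0 rows, SaveChangesAsync returns false, and the controller's else branch throws the System.Exception that surfaces as the 500.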
EDIT
The System.Exception is raised by your own code (the last line). EF never throws a bare System.Exception.
I have an ASP.NET Core 2.0 site that has a controller scaffolded directly from a simple model and a simple context. I seeded the data by checking the number of records in the GET method and, if it was 0, adding 100 records. GET retrieves records repeatedly, as I would expect.
I'm using the in-memory database provider.
services.AddDbContext<MyDbContext>
    (opt => opt.UseInMemoryDatabase("CodeCampInMemoryDb"));
When I do a PUT with a record that I know existed in my GET, I get the concurrency error shown at the bottom of this post. I've not used this method of changing the EntityState of a record I created myself before, so I'm not sure how it was supposed to work in the first place, but clearly it is not working now.
Maybe it has something to do with a transaction being processed on the in-memory database? I'm not sure how to avoid that, if that is the problem.
// PUT: api/Sessions/5
[HttpPut("{id}")]
public async Task<IActionResult> PutSessionRec([FromRoute] int id, [FromBody] SessionRec sessionRec)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    if (id != sessionRec.Id)
    {
        return BadRequest();
    }
    _context.Entry(sessionRec).State = EntityState.Modified;
    try
    {
        await _context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!SessionRecExists(id))
        {
            return NotFound();
        }
        else
        {
            throw;
        }
    }
    return NoContent();
}
Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException: Attempted to update or delete an entity that does not exist in the store.
at Microsoft.EntityFrameworkCore.Storage.Internal.InMemoryTable`1.Update(IUpdateEntry entry)
at Microsoft.EntityFrameworkCore.Storage.Internal.InMemoryStore.ExecuteTransaction
If you marked a property as a Timestamp and don't provide it, you will get this exception every time. You either need to load the entity with the latest Timestamp and update that (not ideal), or you have to send the Timestamp down to the client and have the client send it back up (the correct way). However, if you are using an older JSON serializer, you may have to convert the byte[] to Base64 and then convert it back.
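For illustration, a minimal sketch of such a model (the property names here are assumptions; [Timestamp] comes from System.ComponentModel.DataAnnotations):

public class SessionRec
{
    public int Id { get; set; }
    public string Title { get; set; }

    // Maps to a rowversion column; EF compares this value when updating,
    // so the client must round-trip it unchanged.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

If RowVersion comes back null or stale, the update matches nothing in the store (for a relational provider, the generated UPDATE's WHERE clause matches zero rows) and the provider reports it as a concurrency violation.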
I am working on a program that reads from a file and inserts the lines one by one into an Oracle 11g database, using JTA/EclipseLink 2.3.x JPA with container-managed transactions.
I've developed the code below, but I'm bothered by the fact that failed lines need to be identified and fixed manually.
public class CreateAccount {

    @PersistenceContext(unitName = "filereader")
    private EntityManager em;

    private ArrayList<String> unprocessed;

    public void upload() {
        // Reading the file into unprocessed.
        for (String s : unprocessed) {
            this.process(s);
        }
    }

    private void process(String s) {
        // Build the account entity from s and set the appropriate properties.
        // Validate the entity.
        em.persist(account);
    }
}
This first version takes a few seconds to commit 5000 rows to the database, as it seems to take advantage of prepared-statement caching. It works fine when all the entities to persist are valid. However, I am concerned that even if I validate an entity, it can still fail for various unexpected reasons; and when any entity throws an exception during commit, I cannot find the particular record that caused it, and all entities are rolled back.
I tried another approach that starts a new transaction and commits for each line, without using the managed transaction:
for (String s : unprocessedLines) {
    try {
        em.getTransaction().begin();
        this.process(s);
        em.getTransaction().commit();
    } catch (Exception e) {
        // Any exception that a line caused can be caught here.
        e.printStackTrace();
    }
}
The second version works well for logging erroneous lines, since exceptions caused by individual lines are caught and handled, but it takes over 300 seconds to commit the same 5000 lines to the database. That is not reasonable when a large file is being processed.
Is there any workaround that lets me insert records quickly and at the same time be notified of any failed lines?
This is admittedly a guess, but why not keep a single transaction and commit it as one batch? That way you keep the rollback exception and the speed at the same time:
try {
    em.getTransaction().begin();
    for (String s : unprocessedLines) {
        this.process(s);
    }
    em.getTransaction().commit();
} catch (RollbackException exc) {
    // Here you have your rollback reason.
} finally {
    // (Of course, you should store em.getTransaction() in a variable
    // instead of constantly invoking it as I do here.)
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
}
My solution turned out to be a binary search, starting with a block of reasonable size, e.g. last = first + 1023, to minimize the depth of the recursion.
Note, however, that this works only if the error is deterministic, and it is worse than committing each record individually if the error rate is very high.
private void batchProcess(int first, int last) {
    try {
        em.getTransaction().begin();
        for (int i = first; i <= last; i++) {
            this.process(unprocessedLines.get(i));
        }
        em.getTransaction().commit();
    } catch (Exception e) {
        e.printStackTrace();
        if (em.getTransaction().isActive()) {
            em.getTransaction().rollback();
        }
        if (first == last) {
            // A single offending line has been isolated.
            failedLines.add(unprocessedLines.get(first));
        } else {
            // Split the range and retry each half.
            int mid = (first + last) / 2 + 1;
            batchProcess(first, mid - 1);
            batchProcess(mid, last);
        }
    }
}
With container-managed transactions, one may need to run the binary search outside the context of the original transaction; otherwise there will be a RollbackException, because the container has already decided to roll that transaction back.
In the following case where two DbContexts are nested due to method calls:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B();
        //...do some more work here
    }
}

public void Method_B() {
    using (var db = new SomeDbContext()) {
        //...do some work
    }
}
Question:
Will this nesting cause any issues? (and will the correct DbContext be disposed at the correct time?)
Is this nesting considered bad practice? Should Method_A be refactored into:
public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
    }
    Method_B();
    using (var db = new SomeDbContext()) {
        //...do some more work here
    }
}
Thanks.
Your DbContext derived class is actually managing at least three things for you here:
the metadata that describes your database and your entity model,
the underlying database connection, and
a client-side "cache" of entities loaded using the context, for change tracking, relationship fixup, etc. (Note that although I term this a "cache" for want of a better word, it is generally short-lived and exists just to support EF's functionality. It's not a substitute for proper caching in your application, if applicable.)
Entity Framework generally caches the metadata (item 1) so that it is shared by all context instances (or, at least, all instances that use the same connection string). So here that gives you no cause for concern.
As mentioned in other comments, your code results in using two database connections. This may or may not be a problem for you.
You also end up with two client caches (item 3). If you happen to load an entity from the outer context, then again from the inner context, you will have two copies of it in memory. This would definitely be confusing, and could lead to subtle bugs. This means that, if you don't want to use shared context objects, then your option 2 would probably be better than option 1.
If you are using transactions, there are further considerations. Having multiple database connections is likely to result in transactions being promoted to distributed transactions, which is probably not what you want. Since you didn't make any mention of db transactions, I won't go into this further here.
So, where does this leave you?
If you are using this pattern simply to avoid passing DbContext objects around in your code, then you would probably be better off refactoring MethodB to receive the context as a parameter. The question of how long-lived context objects should be comes up repeatedly. As a rule of thumb, create a new context for a single database operation or for a series of related database operations. (See, for example this blog post and this question.)
(As an alternative, you could add a constructor to your DbContext derived class that receives an existing connection. Then you could share the same connection between multiple contexts.)
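A minimal sketch of that alternative, assuming EF6 and its DbContext(DbConnection, bool) constructor overload:

using System.Data.Common;
using System.Data.Entity;

public class SomeDbContext : DbContext
{
    public SomeDbContext() { }

    // contextOwnsConnection = false means the caller keeps responsibility
    // for disposing the shared connection.
    public SomeDbContext(DbConnection existingConnection, bool contextOwnsConnection)
        : base(existingConnection, contextOwnsConnection) { }
}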
One useful pattern is to write your own class that creates a context object and stores it as a private field or property. Then you make your class implement IDisposable and its Dispose() method disposes the context object. Your calling code news up an instance of your class, and doesn't have to worry about contexts or connections at all.
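A sketch of that wrapper pattern (the class and method names are illustrative):

public class DataService : IDisposable
{
    private readonly SomeDbContext _context = new SomeDbContext();

    public void DoSomeWork()
    {
        // ...a series of related database operations using _context...
        _context.SaveChanges();
    }

    // Disposing the service disposes the context (and its connection).
    public void Dispose()
    {
        _context.Dispose();
    }
}

Calling code then just news up a DataService in a using block and never touches contexts or connections directly.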
When might you need to have multiple contexts active at the same time?
This can be useful when you need to write code that is multi-threaded. A database connection is not thread-safe, so you must only ever access a connection (and therefore an EF context) from one thread at a time. If that is too restrictive, you need multiple connections (and contexts), one per thread. You might find this interesting.
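For example, a sketch of the one-context-per-thread rule (using Parallel.Invoke from System.Threading.Tasks):

Parallel.Invoke(
    // Each worker creates its own context, and therefore its own connection.
    () => { using (var db = new SomeDbContext()) { /* ...work on thread 1... */ } },
    () => { using (var db = new SomeDbContext()) { /* ...work on thread 2... */ } });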
You can alter your code by passing the context to Method_B. If you do so, creating the second SomeDbContext will not be necessary.
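A sketch of that refactoring:

public void Method_A() {
    using (var db = new SomeDbContext()) {
        //...do some work here
        Method_B(db);
        //...do some more work here
    }
}

public void Method_B(SomeDbContext db) {
    //...do some work with the caller's context; do not dispose it here
}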
There is a related question and answer on Stack Overflow:
Proper use of "Using" statement for datacontext
It is a bit of a late answer, but people may still be looking, so here is another way.
Create a class that takes care of disposing for you. In some scenarios a function will be usable from different places in the solution. This way you avoid creating multiple instances of DbContext, and you can nest calls as many levels as you like.
Here is a simple example.
public class SomeContext : SomeDbContext
{
    protected int UsingCount = 0;

    public static SomeContext GetContext(SomeContext context)
    {
        if (context != null)
        {
            context.UsingCount++;
        }
        else
        {
            context = new SomeContext();
        }
        return context;
    }

    private SomeContext()
    {
    }

    protected bool MyDisposing = true;

    protected override void Dispose(bool disposing)
    {
        if (UsingCount == 0)
        {
            base.Dispose(MyDisposing);
            MyDisposing = false;
        }
        else
        {
            UsingCount--;
        }
    }

    public override int SaveChanges()
    {
        if (UsingCount == 0)
        {
            return base.SaveChanges();
        }
        else
        {
            return 0;
        }
    }
}
Example of usage
public class ExampleNesting
{
    public void MethodA()
    {
        using (var context = SomeContext.GetContext(null))
        {
            // Manipulate, save it; just do not call Dispose on the context in the using block.
            MethodB(context);
        }
        MethodB();
    }

    public void MethodB(SomeContext someContext = null)
    {
        using (var context = SomeContext.GetContext(someContext))
        {
            // Manipulate, save it; just do not call Dispose on the context in the using block.
            // Even more nested functions if you'd like.
        }
    }
}
Simple and easy to use.
If you consider the number of connections to the database, and the cost of the times a new connection must be opened, to be unimportant, and you have no requirement for your application to run at peak performance, then everything is OK.
Your code works well, because merely creating a DbContext has a low performance impact: the metadata is cached after the first load, and a connection to the database is only opened when the code needs to execute a query. With a little performance consideration and code design, I suggest making a context factory, so that you have just one instance of each DbContext per instance of your application.
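A minimal sketch of one way to read that suggestion (the lazy single-instance design here is an assumption; bear in mind the thread-safety caveats discussed above before sharing one context):

public static class ContextFactory
{
    // One lazily created instance per application instance.
    private static readonly Lazy<SomeDbContext> _instance =
        new Lazy<SomeDbContext>(() => new SomeDbContext());

    public static SomeDbContext Instance
    {
        get { return _instance.Value; }
    }
}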
You can take a look at this link for more performance considerations
http://msdn.microsoft.com/en-us/data/hh949853
I'm using the LINQ Entity Framework and I've come across a scenario where I need to access the newly inserted identity value before performing multiple operations using a stored procedure.
Following is the code snippet:
public void SaveQuote(Domain.Quote currentQuote)
{
    try
    {
        int newQuoteId;
        // Add quote and quote-line details to the db.
        if (currentQuote != null)
        {
            using (QuoteContainer quoteContainer = new QuoteContainer())
            {
                quoteContainer.AddToQuote(currentQuote);
                newQuoteId = currentQuote.QuoteId;
            }
        }
        else return;
        // Execution of some stored procedure using the newly generated QuoteId.
    }
    catch (Exception)
    {
        throw;
    }
}
In the next function, quoteContainer.SaveChanges() will be called to commit the DB changes.
Can anyone suggest whether the above approach is correct?
Correct so far.
Remember: you cannot get the IDENTITY value until the insert has occurred! On an update, your entity already holds the IDENTITY (mainly the PK).
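Applied to the code above, that means calling SaveChanges before reading QuoteId, while the container is still in scope (a sketch, not the poster's final code):

using (QuoteContainer quoteContainer = new QuoteContainer())
{
    quoteContainer.AddToQuote(currentQuote);
    quoteContainer.SaveChanges();            // the INSERT runs here
    int newQuoteId = currentQuote.QuoteId;   // now holds the IDENTITY value
    // ...execute the stored procedure with newQuoteId...
}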