I am currently working on implementing aspnetboilerplate's transaction management.
Below is the code I am using to insert an order and the products associated with it:
public class OrderController
{
    private readonly IOrderAppService _orderAppService;

    public OrderController(IOrderAppService orderAppService)
    {
        _orderAppService = orderAppService;
    }

    public void TestOrder()
    {
        _orderAppService.TestTransaction();
    }
}
public class OrderAppService : IOrderAppService
{
    // Repositories and the unit-of-work manager are injected here
    private readonly IUnitOfWorkManager _unitOfWorkManager;

    public void TestTransaction()
    {
        // Create 'order' and 'products' here,
        // then commit the created objects
        CommitOrderTransaction();
    }

    private void CommitOrderTransaction()
    {
        using (var unitOfWork = _unitOfWorkManager.Begin())
        {
            // Inserts the Order record; the Order header is saved
            // in the database by using the SaveChanges() method
            CommitInsertOrderHeader();

            // Inserts the Product records associated with the OrderId
            CommitInsertOrderDetails();

            unitOfWork.Complete();
        }
    }
}
As the aspnetboilerplate documentation says:
"if current unit of work is transactional, all changes in the transaction are rolled back if an exception occurs, even saved changes."
In my case, when an exception occurs while inserting the OrderDetails, I would like the header record to be rolled back as well, but I still end up with the Order header record in the database.
You don't need to handle the transaction manually; ABP handles it for you! All application service methods are automatically wrapped in a unit of work, so each call is an atomic operation: if any exception occurs in the middle, all the database operations are rolled back.
For further information, check out https://aspnetboilerplate.com/Pages/Documents/Unit-Of-Work
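As a rough sketch of what that looks like, the service method itself can do both inserts with no explicit Begin()/Complete(). This assumes Order and Product are ABP entities and uses ABP's generic IRepository; the BuildProducts helper is hypothetical:
public class OrderAppService : ApplicationService, IOrderAppService
{
    private readonly IRepository<Order> _orderRepository;
    private readonly IRepository<Product> _productRepository;

    public OrderAppService(
        IRepository<Order> orderRepository,
        IRepository<Product> productRepository)
    {
        _orderRepository = orderRepository;
        _productRepository = productRepository;
    }

    // ABP wraps this public application service method in a single
    // transactional unit of work, so an exception thrown while
    // inserting a product rolls back the order header as well.
    public void TestTransaction()
    {
        var order = new Order { /* header fields */ };
        var orderId = _orderRepository.InsertAndGetId(order);

        foreach (var product in BuildProducts(orderId)) // hypothetical helper
        {
            _productRepository.Insert(product);
        }
    }
}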
If you are calling SaveChanges() twice and you aren't using a TransactionScope across both calls, then you won't be able to roll back the first call. I don't know what UnitOfWork is doing here, but if the DbContext you are working with isn't being used in that UoW, then nothing is going to happen. A DbContext is technically its own unit of work already. You should be adding the Orders and Order Details to the same DbContext and calling SaveChanges() just once. Then you'd be able to roll back both in that scenario.
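A minimal sketch of that suggestion in plain Entity Framework (the ShopContext name and entity shapes are assumptions for illustration):
using (var db = new ShopContext()) // hypothetical DbContext
{
    var order = new Order { /* header fields */ };

    // Add the details through the same context; EF fixes up the
    // foreign key when it saves the graph. Assumes Order initializes
    // its Details collection.
    order.Details.Add(new OrderDetail { /* ... */ });
    order.Details.Add(new OrderDetail { /* ... */ });

    db.Orders.Add(order);

    // One SaveChanges() wraps the header and all details in a single
    // database transaction: a failure on any detail rolls back the header.
    db.SaveChanges();
}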
An example from Pro JPA:
@Stateless
public class AuditServiceBean implements AuditService {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    public void logTransaction(int empId, String action) {
        // verify employee number is valid
        if (em.find(Employee.class, empId) == null) {
            throw new IllegalArgumentException("Unknown employee id");
        }
        LogRecord lr = new LogRecord(empId, action);
        em.persist(lr);
    }
}

@Stateless
public class EmployeeServiceBean implements EmployeeService {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    @EJB
    AuditService audit;

    public void createEmployee(Employee emp) {
        em.persist(emp);
        audit.logTransaction(emp.getId(), "created employee");
    }
    // ...
}
And the text:
Even though the newly created Employee is not yet in the database, the
audit bean can find the entity and verify that it exists. This works
because the two beans are actually sharing the same persistence
context.
As far as I understand, the id is generated by the database. So how can emp.getId() be passed into audit.logTransaction() if the transaction has not been committed yet and the id has not been generated yet?
It depends on the GeneratedValue strategy. If you use something like the SEQUENCE or TABLE strategy, the persistence provider usually assigns the id to the entity (it has some reserved ids, based on the allocation size) immediately after the persist method is called.
But if you use the IDENTITY strategy, different providers may act differently. For example, in Hibernate, if you use the IDENTITY strategy, it performs the insert statement immediately and fills in the id field of the entity.
https://thoughts-on-java.org/jpa-generate-primary-keys/ says:
Hibernate requires a primary key value for each managed entity and
therefore has to perform the insert statement immediately.
But in EclipseLink, if you use the IDENTITY strategy, the id will be assigned after flushing. So if you set the flush mode to AUTO (or call the flush method), you will have the id after persist.
https://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Entities/Ids/GeneratedValue says:
There is a difference between using IDENTITY and other id generation
strategies: the identifier will not be accessible until after the
insert has occurred – it is the action of inserting that caused the
identifier generation. Due to the fact that insertion of entities is
most often deferred until the commit time, the identifier would not be
available until after the transaction has been flushed or committed.
In the implementation, UnitOfWorkChangeSet has a collection for the new entities, which will have no real identity until inserted:
// This collection holds the new objects which will have no real identity until inserted.
protected Map<Class, Map<ObjectChangeSet, ObjectChangeSet>> newObjectChangeSets;
JPA - Returning an auto generated id after persist() is a related question about EclipseLink.
There are also good points at https://forum.hibernate.org/viewtopic.php?p=2384011#p2384011:
I am basically referring to some remarks in Java Persistence with
Hibernate. Hibernate's API guarantees that after a call to save() the
entity has an assigned database identifier. Depending on the id
generator type this means that Hibernate might have to issue an INSERT
statement before flush() or commit() is called. This can cause
problems at rollback time. There is a discussion about this on page
490 of Java Persistence with Hibernate.
In JPA persist() does not return a database identifier. For that
reason one could imagine that an implementation holds back the
generation of the identifier until flush or commit time.
Your approach might work fine for now, but you could run into troubles
when changing the id generator or JPA implementation (switching from
Hibernate to something else).
Maybe this is no issue for you, but I just thought I bring it up.
I have an application in which I observe the following behavior: the first requests after a long period of inactivity take a long time, and sometimes time out.
Is it possible to control how Entity Framework manages the disposal of its objects? Is it possible to mark some entities so that they are never disposed?
...in order to avoid/improve the warmup time?
The reasons that similar queries will have an improved response time are manifold.
Most Database Management Systems cache parts of the fetched data, so that similar queries in the near future will be faster. If you query Teachers with their Students, the Teachers table will be joined with the Students table. This join result is quite often cached for a while, and the next query for Teachers with their Students will reuse the join result and thus become faster.
DbContext caches queried objects. If you select a single Teacher, or Find one, it is kept in local memory. This is how the DbContext is able to detect which items have changed when you call SaveChanges. If you Find the same Teacher again, this query will be faster (see the sketch below). I'm not sure if the same happens if you query 1000 Teachers.
When you create a DbContext object, the initializer is checked to see if the model has been changed or not.
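For illustration, a minimal sketch of the second point, assuming a SchoolContext with a Teachers set (names hypothetical):
Teacher FindTwice(int teacherId)
{
    using (var dbContext = new SchoolContext())
    {
        // First Find: the Teacher isn't tracked yet, so EF queries the database.
        Teacher first = dbContext.Teachers.Find(teacherId);

        // Second Find with the same key: served from the context's local
        // cache, no second database round trip.
        Teacher again = dbContext.Teachers.Find(teacherId);

        bool sameInstance = ReferenceEquals(first, again); // true
        return first;
    }
}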
So it might seem wise not to Dispose() a created DbContext, yet you see that most people keep the DbContext alive for a fairly short time:
using (var dbContext = new MyDbContext(...))
{
    var fetchedTeacher = dbContext.Teachers
        .Where(teacher => teacher.Id == ...)
        .Select(teacher => new
        {
            Id = teacher.Id,
            Name = teacher.Name,
            Students = teacher.Students.ToList(),
        })
        .FirstOrDefault();
    return fetchedTeacher;
}
// DbContext is Disposed()
At first glance it would seem better to keep the DbContext alive: if someone asks for the same Teacher, the DbContext wouldn't have to ask the database for it, it could return the local Teacher.
However, keeping a DbContext alive might mean you get stale data: if someone else changes the Teacher between your first and second query for this Teacher, you would get the old Teacher data.
Hence it is wise to keep the lifetime of a DbContext as short as possible.
Is there nothing I can do to improve the speed of the first query?
Yes you can!
One of the first things you could do is to set the initializer of your database such that it doesn't check the existence and model of the database. Of course you can only do this when you are fairly sure that your database exists and hasn't changed.
// constructor; disables the initializer
public SchoolDBContext() : base(...)
{
    Database.SetInitializer<SchoolDBContext>(null);
}
Another thing you could do: if you have already fetched the object you want to update, and you are sure that no one else changed it, you can Attach it instead of fetching it again, as is shown in this question.
Normal usage:
// update the name of the teacher with teacherId
void ChangeTeacherName(int teacherId, string name)
{
    using (var dbContext = new SchoolContext(...))
    {
        // fetch the teacher, change the name and save
        Teacher fetchedTeacher = dbContext.Teachers.Find(teacherId);
        fetchedTeacher.Name = name;
        dbContext.SaveChanges();
    }
}
Using Attach to update an earlier fetched Teacher:
void ChangeTeacherName(Teacher teacher, string name)
{
    using (var dbContext = new SchoolContext(...))
    {
        dbContext.Teachers.Attach(teacher);
        teacher.Name = name;
        dbContext.Entry(teacher).Property(t => t.Name).IsModified = true;
        dbContext.SaveChanges();
    }
}
Using this method doesn't require fetching the Teacher again. During SaveChanges, the IsModified value of every property of every attached item is checked; if needed, they will be updated.
This is my first post :) I'm new to MVC .NET and have some questions regarding Entity Framework functionality and performance. Questions inline...
class StudentContext : DbContext
{
    public StudentContext() : base("myconnectionstring") { }
    public DbSet<Student> Students { get; set; }
    ...
}
Question: Does DbSet read all the records from the database Student table, and store it in collection Students (i.e. in memory)? Or does it simply hold a connection to this table, and (record) fetches are done at the time SQL is executed against the database?
For the following:
private StudentContext db = new StudentContext();
Student astudent = db.Students.Find(id);
or
var astudent = from s in db.Students
               where s.StudentID == id
               select s;
Question: Which of these is better for performance? I'm not sure how the Find method works under the hood for a collection.
Question: When are database connections closed? During the Dispose() method call?
If so, should I call the Dispose() method for a class that has the database context instance? I've read here to use Using blocks.
I'm guessing a Controller class gets instantiated, does work including database access, calls its associated View, and then (the Controller) goes out of scope and is unloaded from memory, or picked up by the garbage collector. But it's best to call Dispose() to do the cleanup explicitly.
The Find method looks in the DbContext for an entity which has the specified key(s). If there is no matching entity already loaded, the DbContext will make a SELECT TOP 1 query to get the entity.
Running db.Students.Where(s => s.StudentID == id) will get you a sequence containing all the entities returned from a SQL query similar to SELECT * FROM Students WHERE StudentID = @id. That should be pretty fast; you can speed it up by using db.Students.FirstOrDefault(s => s.StudentID == id), which adds a TOP 1 to the SQL query.
Using Find is more efficient if you're loading the same entity more than once from the same DbContext. Other than that Find and FirstOrDefault are pretty much equivalent.
In neither case does the context load the entire table, nor does it hold a connection open. I believe the DbContext holds a connection until it is disposed, but it opens and closes the connection on demand when it needs to resolve a query.
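To make the difference concrete, a small sketch (entity and key names taken from the question):
using (var db = new StudentContext())
{
    // Find checks the context's local cache first; only if the entity
    // isn't already tracked does it issue a SELECT TOP 1 query.
    Student viaFind = db.Students.Find(id);

    // FirstOrDefault always sends a TOP 1 query to the database.
    Student viaQuery = db.Students.FirstOrDefault(s => s.StudentID == id);
} // disposing the context releases the underlying connection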
I have an ObjectContext with an update method. The method takes a generic object as a parameter. I need to attach this object to the ObjectContext and update the database with the changes the object contains. For example, I create a new object that has the same key as an entity in the database, but some of the fields are different. I want to attach the object to its corresponding entity in the database and save the changes the new object has. Here is what I have in the Update method:
public void Update(BaseObject data, string entitySetName)
{
    AttachTo(entitySetName, data);
    Refresh(RefreshMode.ClientWins, data);
    SaveChanges();
}
After the refresh, the data gets overwritten by the fields from the database. Leaving out the refresh also does not update the database record. Am I missing a step?
The DetectChanges() method will update the entitystate to modified if any changes have been made.
From MSDN: "In POCO entities without change-tracking proxies, the state of the modified properties changes to Modified when the DetectChanges method is called. After the changes are saved, the object state changes to Unchanged."
context.DetectChanges();
Additionally, you could just set the state to Modified so your method always tries to update, regardless of whether anything has changed, with:
ObjectStateManager.ChangeObjectState(data, EntityState.Modified);
So your Update method simply becomes:
public void Update(BaseObject data, string entitySetName)
{
    AttachTo(entitySetName, data);
    ObjectStateManager.ChangeObjectState(data, EntityState.Modified);
    SaveChanges();
}
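For illustration, a hypothetical call site (the MyEntities context and Customer entity are assumptions; Update is the method on your ObjectContext shown above):
// 'changed' must carry the same key as the existing database row.
var changed = new Customer { Id = 42, Name = "New name" };
using (var context = new MyEntities())
{
    // Attaches 'changed', marks it Modified, and issues an UPDATE on SaveChanges
    context.Update(changed, "Customers");
}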
I just jumped on a feature written by someone else that seems slightly inefficient, but my knowledge of JPA isn't good enough to find a portable solution that's not Hibernate-specific.
In a nutshell, the Dao method that is called within a loop to insert each one of the new entities does an entityManager.merge(object).
Isn't there a way defined in the JPA specs to pass a list of entities to the Dao method and do a bulk/batch insert instead of calling merge for every single object?
Plus, since the Dao method is annotated with @Transactional, I'm wondering if every single merge call is happening within its own transaction... which would not help performance.
Any idea?
No, there is no batch insert operation in vanilla JPA.
Yes, each insert will be done within its own transaction. The @Transactional attribute (with no qualifiers) means a propagation level of REQUIRED (create a transaction if it doesn't exist already). Assuming you have:
public class Dao {

    @Transactional
    public void insert(SomeEntity entity) {
        ...
    }
}
you do this:
public class Batch {

    private Dao dao;

    @Transactional
    public void insert(List<SomeEntity> entities) {
        for (SomeEntity entity : entities) {
            dao.insert(entity);
        }
    }

    public void setDao(Dao dao) {
        this.dao = dao;
    }
}
That way the entire group of inserts gets wrapped in a single transaction. If you're talking about a very large number of inserts, you may want to split them into groups of 1,000, 10,000, or whatever works, as a sufficiently large uncommitted transaction may starve the database of resources and possibly fail due to size alone.
Note: #Transactional is a Spring annotation. See Transactional Management from the Spring Reference.
What you could do, if you were in a crafty mood, is:
@Entity
public class SomeEntityBatch {

    @Id
    @GeneratedValue
    private int batchID;

    @OneToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private List<SomeEntity> entities;

    protected SomeEntityBatch() {
        // no-arg constructor required by JPA
    }

    public SomeEntityBatch(List<SomeEntity> entities) {
        this.entities = entities;
    }
}
List<SomeEntity> entitiesToPersist;
em.persist(new SomeEntityBatch(entitiesToPersist));
// remove the SomeEntityBatch object later
Because of the cascade, that will cause the entities to be inserted from a single persist call.
I doubt there is any practical advantage to doing this over simply persisting the individual objects in a loop. It would be interesting to look at the SQL that the JPA implementation emits, and to benchmark.