I am wondering what the best practice would be for updating a record using JPA. I have currently devised my own pattern, but I suspect it is by no means the best practice. What I do is essentially look to see if the record is in the db; if I don't find it, I call the entityManager.persist(Object<T>) method, and if it does exist I call the entityManager.merge(Object<T>) method.
The reason that I ask is that I found out that the merge method looks to see if the record is already in the database; if it is not in the db, it proceeds to add it, and if it is, it makes the necessary changes. Also, do you need to nestle the merge call between getTransaction().begin() and getTransaction().commit()? Here is what I have so far...
try {
    launchRet = emf.find(QuickLaunch.class, launch.getQuickLaunchId());
    if (launchRet != null) {
        launchRet = emf.merge(launch);
    } else {
        emf.getTransaction().begin();
        emf.persist(launch);
        emf.getTransaction().commit();
    }
} catch (Exception e) {
    e.printStackTrace();
}
If the entity you're trying to save already has an ID, then it must exist in the database. If it doesn't exist, you probably don't want to blindly recreate it, because it means that someone else has deleted the entity, and updating it doesn't make much sense.
The merge() method persists an entity that is not persistent yet (doesn't have an ID or version), and updates the entity if it is persistent. You thus don't need to do anything other than calling merge() (and returning the value returned by this call to merge()).
A transaction is a functional atomic unit of work. It should be demarcated at a higher level (in the service layer). For example, transferring money from one account to another requires both account updates to be done in the same transaction, to make sure both changes either succeed or fail together. Removing money from one account and failing to add it to the other would be a major bug.
How can I get an int value from the database?
The table has 4 columns: Id, Author, Like, Dislike.
I want to get the Dislike amount and add 1.
I tried:
var db = new memyContext();
var amountLike = db.Memy.Where(s => s.IdMema == id).select(like);
memy.like=amountLike+1;
I know that this is a bad way.
Please help
I'm not entirely sure what your question is here, but there are a few things that might help.
First, if you're retrieving via something that reasonably only has one match, or in a scenario where you want just one thing, then you should use SingleOrDefault or FirstOrDefault, respectively, not Where. Where is reserved for scenarios where you expect multiple things to match, i.e. the result will be a list of objects, not a single object. Since you're querying by an id, it's fairly obvious that you expect just one match. Therefore:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
Second, if you just need to read the value of Like, then you can use Select, but there are two problems with that. First, Select can only be used on enumerables, and as already discussed, here you need a single object, not a list of objects. In truth, you can sidestep this in a somewhat convoluted way:
var amountLike = db.Memy.Where(x => x.IdMema == id).Select(x => x.Like).SingleOrDefault();
However, this is still flawed, because you not only need to read this value, but also write back to it, which then needs the context of the object it belongs to. As such, your code should actually look like:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
memy.Like++;
In other words, you pull out the instance you want to modify, and then modify the value in place on that instance. I also took the liberty of using the increment operator here, since it makes far more sense that way.
That then only solves part of your problem, as you need to persist this value back to the database as well, of course. That also brings up the side issue of how you're getting your context. Since this is an EF context, it implements IDisposable and should therefore be disposed when you're done with it. That can be achieved simply by calling db.Dispose(), but it's far better to use using instead:
using (var db = new memyContext())
{
    // do stuff with db
}
And while we're here, based on the tags of your question, you're using ASP.NET Core, which means that even this is sub-optimal. ASP.NET Core uses DI (dependency injection) heavily, and encourages you to do likewise. An EF context is generally registered as a scoped service, and should therefore be injected where it's needed. I don't have the context of where this code exists, but for illustration purposes, we'll assume it's in a controller:
public class MemyController : Controller
{
    private readonly memyContext _db;

    public MemyController(memyContext db)
    {
        _db = db;
    }

    ...
}
With that, ASP.NET Core will automatically pass in an instance of your context to the constructor, and you do not need to worry about creating the context or disposing of it. It's all handled for you.
Finally, you need to do the actual persistence, but that's where things start to get trickier, as you now most likely need to deal with the concept of concurrency. This code could be running simultaneously on multiple different threads, each one querying the database at its current state, incrementing this value, and then attempting to save it back. If you do nothing, one thread will inevitably overwrite the changes of the other. For example, let's say we receive three simultaneous "likes" on this object. They all query the object from the database, and let's say that the current like count is 0. They then each increment that value, making it 1, and then they each save the result back to the database. The end result is that the value will be 1, but that's not correct: three likes were just added.
As such, you'll need to implement a semaphore to essentially gate this logic, allowing only one like operation through at a time for this particular object. That's a bit beyond the scope here, but there's plenty of stuff online about how to achieve that.
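As a rough starting point only, one way that gating could look, reusing the injected context from above, is sketched below. The Memy, IdMema, and Like names come from the question; the route, the action, and the static SemaphoreSlim are assumptions, not a definitive implementation:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

public class MemyController : Controller
{
    // One gate per server process: only one like update may run at a time.
    private static readonly SemaphoreSlim _likeGate = new SemaphoreSlim(1, 1);

    private readonly memyContext _db;

    public MemyController(memyContext db)
    {
        _db = db;
    }

    [HttpPost("memy/{id}/like")]
    public async Task<IActionResult> Like(int id)
    {
        await _likeGate.WaitAsync();
        try
        {
            // Read the current state, increment it in place, and persist it
            // while no other request can interleave.
            var memy = await _db.Memy.SingleOrDefaultAsync(s => s.IdMema == id);
            if (memy == null)
            {
                return NotFound();
            }

            memy.Like++;
            await _db.SaveChangesAsync();
            return Ok(memy.Like);
        }
        finally
        {
            _likeGate.Release();
        }
    }
}

Note that a static SemaphoreSlim only protects you within a single server process; if you run multiple instances, you would need a database-level mechanism (e.g. optimistic concurrency or an atomic UPDATE) instead.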
I'm a little confused about the PUT method with an optional parameter.
Suppose the model is:
Pet {
    name
    category
    tag (optional)
}
When I want to create a Pet, I can use the POST method, and the tag can be omitted.
When I want to update a Pet, the problem arises. According to the HTTP spec, the PUT method updates the entity by replacing the whole resource, which means I need to pass the tag parameter. If I don't pass the tag, its default value will be empty, and the existing tag will be overridden with an empty value.
The PATCH method only updates the parameters that are actually passed, whether or not they are optional. That part is clear to me.
I don't know if I'm misunderstanding something. Currently, in the PUT method, I need to figure out which parameters were passed and then update the corresponding fields. But this seems the same as the PATCH method.
An important thing to understand is that the HTTP specification describes the semantics (what do the different requests mean), but does not describe the implementation (how do you do it). That's deliberate - the specification basically says that your server should pretend to be a key/value store, but it doesn't restrict how you implement that.
PUT is roughly analogous to saving a file: "here is an array of bytes, save it using this key". In cases where your storage is a file system, then you just write the array of bytes to disk. If your storage is an in memory cache, then you just update your cached copy.
If your storage is some RDBMS database? Then you have some work to do, identifying which rows in your database need to be changed, and what commands need to be sent to the database to make that happen.
The point is that the client doesn't care -- as a server, you can change your underlying storage from an RDBMS to document stores to file systems to whatever, and that's none of the client's business.
in the PUT method, I need to figure out which parameters were passed and then update the corresponding fields. But this seems the same as the PATCH method.
Yes. In both cases, you need to figure out how to edit your resource in place.
PUT may feel a little bit easier, in that it is semantically equivalent to "delete the old version, then create a new version". You don't have to worry about merging the provided data to the state you already have stored.
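To make that difference concrete, here is a rough sketch (purely for illustration, not from the question) of PUT and PATCH handlers over an in-memory key/value store. The Pet model with its optional tag follows the question's example; the routes, the PetPatch DTO, and the dictionary store are assumptions, and real PATCH implementations often use a proper patch document format instead of nullable fields:

using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

public record Pet(string Name, string Category, string? Tag);
public record PetPatch(string? Name, string? Category, string? Tag);

[ApiController]
public class PetsController : ControllerBase
{
    // The server "pretends to be a key/value store"; here it literally is one.
    private static readonly ConcurrentDictionary<int, Pet> Store = new();

    [HttpPut("pets/{id}")]
    public IActionResult Put(int id, Pet body)
    {
        // PUT: the request body is the new resource state. An omitted tag
        // arrives as null and replaces whatever tag was stored before.
        Store[id] = body;
        return NoContent();
    }

    [HttpPatch("pets/{id}")]
    public IActionResult Patch(int id, PetPatch body)
    {
        if (!Store.TryGetValue(id, out var current)) return NotFound();

        // PATCH: only the fields actually supplied are merged into the stored state.
        Store[id] = current with
        {
            Name = body.Name ?? current.Name,
            Category = body.Category ?? current.Category,
            Tag = body.Tag ?? current.Tag
        };
        return NoContent();
    }
}

The answer's point still holds either way: with PUT the whole stored representation is replaced, so an omitted optional field wipes the old value, while with PATCH you merge only what was supplied.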
I've been implementing an auditing system for Mongo that tracks call and user information for each Mongo transaction.
E.g. user bill
made a call to x endpoint
at y time
and changed z field from foo to bar.
Inserts and updates are easy because I tie a stored call-info object to any objects updated in that call (through a set property, or by updating the property directly on a replace or upsert call).
All of that works great.
Deletes are a hairy beast though.
When I delete by id I can easily track that information. BUT when I delete by filter,
e.g. delete from users where username like bill,
Mongo doesn't return the deleted ids in the response. If I query to get those objects before I delete them, who knows what could happen between the time I get those objects and when I actually delete them.
(Knock knock. Who's there? A race condition.)
Any ideas on how to keep the atomicity of that delete and have a reliable way to tie that delete call to the delete transaction?
I'm using the Entity Framework to model a simple parent-child relationship between a document and its pages. The following code is supposed to (in this order):
make a few property updates to the document
delete any of the document's existing pages
insert a new list of pages passed into the method.
The new pages do have the same keys as the deleted pages because there is an index that consists of the document number and then the page number (1..n).
This code works. However, when I remove the first call to SaveChanges, it fails with:
System.Data.SqlClient.SqlException: Cannot insert duplicate key row in object
'dbo.DocPages' with unique index 'IX_DocPages'.
Here is the working code with two calls to SaveChanges:
Document doc = _docRepository.GetDocumentByRepositoryDocKey(repository.Repository_ID, repositoryDocKey);
if (doc == null) {
    doc = new Document();
    _docRepository.Add(doc);
}

_fieldSetter.SetDocumentFields(doc, fieldValues);

List<DocPage> pagesToDelete = (from p in doc.DocPages
                               select p).ToList();

foreach (DocPage page in pagesToDelete) {
    _docRepository.DeletePage(page);
}

_docRepository.GetUnitOfWork().SaveChanges(); //IF WE TAKE THIS OUT IT FAILS

int pageNo = 0;

foreach (ConcordanceDatabase.PageFile pageFile in pageList) {
    ++pageNo;
    DocPage newPage = new DocPage();
    newPage.PageNumber = pageNo;
    newPage.ImageRelativePath = pageFile.Filespec;
    doc.DocPages.Add(newPage);
}

_docRepository.GetUnitOfWork().SaveChanges(); //WHY CAN'T THIS BE THE ONLY CALL TO SaveChanges
If I leave the code as written, EF creates two transactions -- one for each call to SaveChanges. The first updates the document and deletes any existing pages. The second transaction inserts the new pages. I examined the SQL trace and that is what I see.
However, if I remove the first call to SaveChanges (because I'd like the whole thing to run in a single transaction), EF mysteriously does not do the deletes at all but rather generates only the inserts, which results in the duplicate key error. I wouldn't think that waiting to call SaveChanges should matter here?
Incidentally, the call to _docRepository.DeletePage(page) does an objectContext.DeleteObject(page). Can anyone explain this behavior? Thanks.
I think a more likely explanation is that EF does do the deletes, but probably it does them after the insert, so you end up passing through an invalid state.
Unfortunately you don't have low level control over the order DbCommands are executed in the database.
So you need two SaveChanges() calls.
One option is to create a wrapping TransactionScope.
Then you can call SaveChanges() twice and it all happens inside the same transaction.
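As a rough sketch (reusing the names from your code, and assuming the repository's unit of work calls into the same ObjectContext; TransactionScope lives in System.Transactions), the shape would be something like:

using (var scope = new TransactionScope())
{
    // First SaveChanges: flushes the document update and the page DELETEs.
    foreach (DocPage page in pagesToDelete)
    {
        _docRepository.DeletePage(page);
    }
    _docRepository.GetUnitOfWork().SaveChanges();

    // Second SaveChanges: flushes the INSERTs for the replacement pages.
    int pageNo = 0;
    foreach (ConcordanceDatabase.PageFile pageFile in pageList)
    {
        ++pageNo;
        DocPage newPage = new DocPage();
        newPage.PageNumber = pageNo;
        newPage.ImageRelativePath = pageFile.Filespec;
        doc.DocPages.Add(newPage);
    }
    _docRepository.GetUnitOfWork().SaveChanges();

    // Nothing is committed until Complete() is called; if an exception is thrown
    // before this line, both batches roll back together.
    scope.Complete();
}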
See this post for more information on the related techniques
Hope this helps
Alex
Thank you Alex. This is very interesting. I did indeed decide to wrap the whole thing up in a transaction scope, and it did work fine with the two SaveChanges() calls -- which, as you point out, appear to be needed due to the primary key conflict between the deletes and the subsequent inserts. A new issue now arises based on the article to which you linked. It properly advises calling SaveChanges(false) -- instructing EF to hold onto its changes because the outer transaction scope will actually control whether those changes ever make it to the database. Once the controlling code calls scope.Complete(), the pattern is to then call EF's context.AcceptAllChanges(). But I think this will be problematic for me because I'm forced to call SaveChanges TWICE for the problem originally described. If both of those calls specify false for the accept-changes parameter, then I suspect the second call will end up repeating the SQL from the first. I fear I may be in a Catch-22.
We are using L2E and REST in our project, and while I have been able to retrieve data from the db without issue, I am still not able to update or add new records to the db. I imagine it's a syntax problem (we're still new to LINQ), but I haven't been able to figure it out. We initially load the data in the DataServiceContext, and when updates are made they are stored in the CurrencyManager.Current of the binding source. However, when I call SaveChanges nothing gets modified in the db, and I don't know why.
For example,
Loading the data:
var customerQuery = Program.Proxy.Customers.Where(p => p.ContactId == g);
Saving the data:
Program.Proxy.SaveChanges();
I've confirmed that the updated copy of the entity in memory is being tracked, so I don't need to call AddObject, but I get an error ("The closed type Lynxphere.WindowsClient.LynxphereDataServices.Customers does not have a corresponding Customers settable property.") if I try to call AddLink. And I'm not even sure if this step is necessary. Help would be greatly appreciated.
Have a look at my repository pattern with a Save() function, published in the project below.
There is an EntityProductRepository implemented.
That might help you do the updates and inserts correctly.
openticket.codeplex.com
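The project itself isn't reproduced here, but a minimal sketch of the shape of such a repository might look like the following. It is written against a DataServiceContext, since Program.Proxy and the error message suggest the ADO.NET Data Services client; all names are illustrative, and the key detail is that this client only sends an update if the entity has been explicitly marked as modified with UpdateObject before SaveChanges:

using System.Data.Services.Client;

// Customers here stands for the generated proxy type from the service reference
// (e.g. Lynxphere.WindowsClient.LynxphereDataServices.Customers).
public class CustomerRepository
{
    private readonly DataServiceContext _context;

    public CustomerRepository(DataServiceContext context)
    {
        _context = context;
    }

    public void Add(Customers customer)
    {
        // New entities must be registered with the context before saving.
        _context.AddObject("Customers", customer);
    }

    public void Update(Customers customer)
    {
        // A tracked entity is not sent to the server unless it is explicitly
        // marked as modified; without this, SaveChanges() has nothing to send.
        _context.UpdateObject(customer);
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}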