How do I get the save results from a JayData saveChanges() call? - jaydata

In JayData, how do I find out how my saveChanges() call went? In Breeze, the save command returns a saveResults object. Is there anything equivalent in JayData?

Disclaimer: I work for the JayData project
Unfortunately, individual results of the batch operation are not accessible in the current version when you use context.saveChanges(). In general you can receive the overall status of the operation through the then() and fail() promise handlers (jQuery is required).
You'll get more detailed information about a failure (error details) if you use the instance-level save(), remove(), etc. methods, since a detailed error response is delivered to the fail() branch.
If the result contains entity updates, those updates are automatically merged into the live entity instances, with both saveChanges() and instance.save().
If you need to process the raw protocol result, use context.prepareRequest() to intercept the HTTP communication.
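The then()/fail() pattern above can be sketched as follows. This is a minimal sketch with stubbed contexts standing in for a real JayData context (a real context.saveChanges() returns a jQuery-style promise, so the failure branch is .fail() rather than .catch()); the resolved value shown here is the item count, and the stub objects are assumptions for illustration:

```javascript
// Minimal sketch: handling save results via promise branches.
// "context" is a stub; a real JayData context.saveChanges() returns a
// jQuery-style promise whose success value is the number of items saved.
// Per-entity results are not exposed, only the aggregate outcome.
function handleSave(context) {
  return context.saveChanges()
    .then(count => 'saved ' + count + ' item(s)')  // success branch
    .catch(err => 'save failed: ' + err.message);  // jQuery promises use .fail()
}

// Stub contexts simulating a successful and a failed batch:
const okContext = { saveChanges: () => Promise.resolve(2) };
const badContext = {
  saveChanges: () => Promise.reject(new Error('constraint violation'))
};

handleSave(okContext).then(msg => console.log(msg));   // "saved 2 item(s)"
handleSave(badContext).then(msg => console.log(msg));  // "save failed: constraint violation"
```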

Related

How to resolve: The transaction operation cannot be performed Exception

I am getting this error and have not been able to resolve it:
System.Data.SqlClient.SqlException: 'The transaction operation cannot be performed because there are pending requests working on this transaction.'
What is going on is that a usual data operation is taking place as part of a Controller Action.
At the same time, there is a Filter that is running that logs the action to a database.
this._orderEntryContext.ServerLog.Add(serverLog);
return this._orderEntryContext.SaveChanges() > 0;
This is where the error occurs.
So it seems to me that there are two SaveChanges calls going on at the same time, and the transaction gets fouled up.
I'm not sure how to resolve this. Both calls use the same context, obtained through DI. A workaround was to create a second context manually, but I would rather stick to the DI pattern. I don't know how to register a second DbContext in DI, or even whether that is a good idea.
Perhaps I should be using SaveChangesAsync() on both calls to ensure that they do not step on each other?
Turns out the answer to this was to make the Context a transient service:
services.AddDbContext<OrderEntryContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")), ServiceLifetime.Transient);
Then, I changed all repositories to also be transient:
services.AddTransient<AssociateRepository, AssociateRepository>();
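The fix works because a transient registration gives each consumer its own context instance, so the action and the logging filter no longer share one transaction. A conceptual sketch of that lifetime difference (plain JavaScript, not EF Core or the ASP.NET DI container; the register helper is hypothetical):

```javascript
// Conceptual sketch of DI lifetimes: a scoped/singleton registration hands
// every consumer the same cached instance, while a transient registration
// builds a fresh instance on every resolve.
function register(factory, lifetime) {
  let cached = null;
  return {
    resolve() {
      if (lifetime === 'transient') return factory(); // new instance each time
      return cached || (cached = factory());          // shared instance
    }
  };
}

let n = 0;
const scoped = register(() => ({ id: ++n }), 'scoped');
console.log(scoped.resolve() === scoped.resolve());       // true: one shared "context"

const transient = register(() => ({ id: ++n }), 'transient');
console.log(transient.resolve() === transient.resolve()); // false: independent "contexts"
```

With the shared (scoped) instance, two concurrent SaveChanges calls hit the same unit of work; with transient instances they each get their own.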

playframework 1.2.x: await / async and JPA transactions

I have a PUT request that takes too long to run. I'd like to make it asynchronous using continuations (the await/promise feature).
I create a job (LongJobThatUpdatesThePassedEntity) that modifies my entity:
public static void myLongPut(@Required Long id, String someData) {
MyJpaModel myJpaModel = MyJpaModel.findById(id);
//straightforward modifications
updateMyJpaModel(someData);
myJpaModel.save();
//long processing modifications to entity, involving WS calls
Promise<String> delayedResult = new LongJobThatUpdatesThePassedEntity(id).now();
await(delayedResult);
render(myJpaModel.refresh());
}
How are the DB transactions managed?
Is there a commit before the job's call?
Does the job have its own DB transaction?
If something in LongJobThatUpdatesThePassedEntity fails and rolls back, are the modifications done in updateMyJpaModel still persisted?
Can I do render(myJpaModel.refresh()) at the end? Will it contain both the straightforward modifications and the long-running ones?
Thanks!
I can answer most of your questions for Play 1.4.3, which is the version I'm currently using. I don't expect that much has changed since Play 1.2.
How are the DB transactions managed?
Play! handles the transactions for jobs and controller actions using an "invocation", which is a Play-specific concept. In short, for any invocation, each plugin gets a chance to do some setup and cleanup before and after the invoked method runs. For database access, the JPAPlugin.withinFilter method starts and closes the transaction using the JPA class's helper methods.
is there a commit before the job's call?
When you call await(Future&lt;T&gt;), it has the effect of closing the current transaction and starting a new one. The specific mechanism is that it throws a "Suspend" exception, which bubbles up to PlayHandler$NettyInvocation and causes the afterInvocation callbacks to run. That causes JPAPlugin.afterInvocation to call JPA.closeTx(), which either commits or rolls back the transaction, as appropriate.
When the Job exits, the await() continuation is resumed. This is also handled as an invocation, so a transaction is started the same way as before, via JPAPlugin.withinFilter(). However, unlike before, the controller action is not the target of the invocation; instead, ActionInvoker.invoke() calls invokeWithContinuation, which restores the saved continuation state and resumes execution by returning from await().
JPA.withTransaction looks like it has some special logic to retain the same entity manager across the continuation suspend/resume. I think without this, you wouldn't be able to call refresh().
In your code, I think there's a race condition between when await() closes the transaction and the Job starts its transaction. That is, it's possible that the Job's transaction begins before the controller commits the "before await" transaction. To avoid this, you can explicitly call JPA.closeTx() before calling Job.now().
Based on code inspection, the way Play! is implemented, it so happens that the Job exits and the Job's transaction is closed before the "after await()" transaction is opened. I don't know of any documentation that says this is an intended part of the await() contract, so if this is essential for your application, you can avoid relying on undocumented behavior by committing the transaction just before your Job.doJobWithResult() method returns.
the job has it's own DB transaction?
Yes, unless it's annotated to not have a transaction.
if there is an issue in the LongJobThatUpdatesThePassedEntity that rollsback, the modifications done in updateMyJpaModel are persisted?
Based on the explanation above, each of the three transactions is independent. If one is rolled back, I don't see how it would affect the others.

Querying Azure Mobile App TableController

I'm using Azure Mobile Apps and TableControllers in my project. Development has been going quite smoothly, until now. One of my tables relies on quite a bit of business logic in order to return the appropriate entities back to the client. To perform this business logic I need to get some parameters from the client (specifically, a date range).
I know I could use an APIController to return the data, but won't that break the entity syncing that's provided by the SyncTables in Xamarin?
My current logic in my GetAll is:
public IQueryable<WorkItemDTO> GetAllWorkItem()
{
//Return all the work items that the user owns or has been assigned as a resource.
var query = MappedDomainManager.QueryEntity().Where(x => x.OwnerId == UserProfileId || x.Resources.Any(r => r.AssignedResourceId == UserProfileId));
return query.Project().To<WorkItemDTO>();
}
What I would like is to be able to somehow pass through a start and end date that I can then use to build up my list of WorkItemDTO objects. The main problem is that a WorkItem entity can actually spawn off multiple WorkItemDTO objects as a WorkItem can be set to be recurring. So for example say a WorkItem is recurring once a week, and the user wants to see a calendar for 1 month, that single WorkItem will spawn 4 separate concrete WorkItemDTO objects.
Then when a user modifies one of those WorkItemDTO objects on the client side, I want it to be sent back as a patch that creates its own WorkItem entity.
Does anyone know how I can get a TableController to receive parameters? Or how to get an APIController to work so that client syncing isn't affected?
Any help would be appreciated.
Thanks
Jacob
On the server, you can add a query parameter to the table controller get method easily, by adding a parameter with the right name and type.
For instance, you could add a dateFilter query parameter as follows:
public IQueryable<WorkItemDTO> GetAllWorkItem(string dateFilter)
This would be called by passing a dateFilter=value query parameter. You can use any data type that ASP.NET Web API supports in serialization. (Note that if you don't also have a GetAll overload that takes no query parameters, a GET without this query parameter will return HTTP 405 Method Not Allowed.)
On the client, as noted by @JacobJoz, you just use the method IMobileServiceTableQuery.WithParameters to construct the query that is passed to PullAsync. If you have multiple incremental-sync queries against the same table and they use different values for the parameters, you should make sure to include those values in the queryId for the pull.
That is, if you have one query with parameters foo=bar and another with foo=baz against the same sync table, make sure you use two different query IDs, one that includes "bar" and one that includes "baz". Otherwise, the two incremental syncs can interfere with one another, as the queryId is used as the key under which the last-updated timestamp for that sync table is saved. See How offline synchronization works.
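One simple way to follow that advice is to derive the queryId from the parameter values themselves. A sketch of such a helper (hypothetical, not part of the Azure Mobile SDK):

```javascript
// Hypothetical helper: build a distinct incremental-sync queryId from a base
// name plus the parameter values, so each parameter set keeps its own
// last-updated timestamp and the syncs cannot interfere.
function queryIdFor(base, parameters) {
  const suffix = Object.keys(parameters)
    .sort()                                   // stable key order
    .map(k => k + '=' + parameters[k])
    .join('&');
  return suffix ? base + '?' + suffix : base;
}

console.log(queryIdFor('RetrieveWorkItems', { foo: 'bar' }));
// "RetrieveWorkItems?foo=bar"
console.log(queryIdFor('RetrieveWorkItems', { foo: 'baz' }));
// "RetrieveWorkItems?foo=baz"
```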
The part that is unfortunately hard is passing the query parameter as part of the offline sync pull. Offline sync only works with table controllers, FYI.
There is an overloaded extension method for PullAsync that takes a dictionary of parameters, but unfortunately it requires a string query rather than IMobileServiceTableQuery:
PullAsync(this IMobileServiceSyncTable table, string queryId, string query, IDictionary<string, string> parameters, CancellationToken cancellationToken)
(I've filed a bug to fix this: Add a generic PullAsync overload that accepts query parameters).
The problem is that there's no easy way to convert from IMobileServiceTableQuery to an OData query string, since you'd need to access internal SDK methods. (I filed another issue: Add extension method ToODataString for IMobileServiceTableQuery.)
I've looked through the source code for MobileServiceTableQuery on GitHub. It exposes a method called WithParameters. I chained that method call onto CreateQuery to generate the query to the server, and it seems to do what I want.
Here is the client code:
var parameters = new Dictionary<string, string>();
parameters.Add("v1", "hello");
var query = WorkItemTable.CreateQuery().WithParameters(parameters);
await WorkItemTable.PullAsync("RetrieveWorkItems", query);
On the server I have a GetAll implementation that looks like this:
public IQueryable<WorkItem> GetAllWorkItem(string v1)
{
//return IQueryable after processing business logic based on parameter
}
The parameterized version of the method gets called successfully. I'm just not entirely sure what the impacts are from an incremental pull perspective.

RequestFactory Diff Calculation and 'static' find method

I'm a bit stuck on these three questions:
1) I see that the diff is calculated in AutoBeanUtils' diff method. I saw a tag called parentObject in the entity, which is used in the comparison to calculate the diff:
parent = proxyBean.getTag(Constants.PARENT_OBJECT); // in the AbstractRequestContext class
Does that mean there are two copies of a given entity loaded in the browser? If my entity's actual size is, say, 1 KB, will the actual data loaded be 2 KB (as two copies of the entity get loaded into the browser)?
2) On the server side:
Suppose I have to fetch an entity from the database: does the static find&lt;EntityName&gt; method have to make a DB call every time, or is there a way to fine-tune that behavior? [Sorry, I did not understand the Locator concept very well.]
3) What happens if there is a crash on the server side (for any reason, not necessarily specific to the current request) when a diff is sent from the client?
Thanks a lot.
When you .edit() a proxy, it makes a copy and stores the immutable proxy you passed as an argument as the PARENT_OBJECT of the returned proxy.
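A simplified sketch of that mechanism (plain JavaScript, not the actual AutoBean implementation): edit() returns a mutable copy that keeps a reference to the original, and the diff compares the copy against that parent, so only changed properties travel over the wire. It also shows why two copies of the entity exist in memory:

```javascript
// Simplified sketch of the edit/diff mechanism, not the real AutoBean code:
// edit() copies the proxy and remembers the immutable original under a
// PARENT_OBJECT tag; diff() compares the edited copy against that parent.
const PARENT_OBJECT = Symbol('PARENT_OBJECT');

function edit(proxy) {
  const copy = Object.assign({}, proxy);
  copy[PARENT_OBJECT] = proxy;   // original kept as the "parent" copy
  return copy;
}

function diff(edited) {
  const parent = edited[PARENT_OBJECT];
  const changes = {};
  for (const key of Object.keys(edited)) {
    if (edited[key] !== parent[key]) changes[key] = edited[key];
  }
  return changes;                // only the changed properties
}

const person = { id: 1, name: 'Ann', email: 'ann@example.com' };
const editable = edit(person);   // two copies now exist: person + editable
editable.name = 'Anna';
console.log(diff(editable));     // { name: 'Anna' }
```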
You'd generally make a DB call every time the method is called (this is the same for a Locator's find() method), which will be no more than twice for each request. You can use some sort of cache if you need to, but if you use JPA or JDO this is taken care of for you (you have to use a session-per-request pattern, aka OpenSessionInView).
If there is any error while decoding the request, a global error will be returned; it will be passed to the onFailure of all Receivers for the failed RequestContext request.
See https://code.google.com/p/google-web-toolkit/wiki/RequestFactoryMovingParts#Flow

GWT request factory: Please help explain a complete end-to-end WriteOperation.DELETE scenario?

I've been looking at a lot of GWT RequestFactory examples lately but still can't quite piece together the complete picture:
GWT request factory's sweet spot is CRUD (create/read/update/delete). Having said that:
Even in the "update" case, it is not clear to me who is responsible for firing an EntityProxyChange event.
I read somewhere (I forget where) that the client-side RequestFactory keeps a local cache of the EntityProxy instances it has 'seen', and, if it 'sees' a new one, it fires an EntityProxyChange event.
Does that mean that, if my updatePerson() method returns a (newly updated) PersonProxy, the local client-side RequestFactory infrastructure 'sees' this newly updated person (i.e., by virtue of its updated versionId) and automatically fires an EntityProxyChange event?
In the "delete" case, say I create a function called deletePerson() in my request context. I understand how the request arrives at the server, where one might do, e.g., a SQL DELETE to delete the entity, but who is responsible for firing an EntityProxyChange event with WriteOperation.DELETE? Do these events get fired on the server side? The client side?
I've had a look at the listwidget example (http://code.google.com/p/listwidget), but on an 'itemlist' delete it kind of 'cheats' by doing a brute-force refresh of the entire list (though I understand that detail is not necessarily what listwidget is trying to illustrate in the first place). I would have expected to see an EntityProxyChange event handler that listens for WriteOperation.DELETE events and removes just that entity from the ListDataProvider.
Does ServiceLayer/ServiceLayerDecorator.isLive() factor into any of this?
See http://code.google.com/p/google-web-toolkit/wiki/RequestFactoryMovingParts#Flow
The client side doesn't keep a cache (that was probably the case in early iterations a year ago, but has never been the case in non-milestone releases), and the server side is responsible for "firing" the events (you'll see them in the JSON response payload).
When the wiki page referenced above says:
Entities that can no longer be retrieved ...
What it really means is that isLive has returned false; isLive's default implementation does a get by the ID and checks for a non-null result.