Querying Azure Mobile App TableController - entity-framework

I'm using Azure Mobile Apps and TableControllers in my project. Development has been going quite smoothly, until now. One of my tables relies on quite a bit of business logic in order to return the appropriate entities back to the client. To perform this business logic I need to get some parameters from the client (specifically, a date range).
I know I could use an ApiController to return the data, but won't that break the entity syncing that's provided by the SyncTables in Xamarin?
My current logic in my GetAll is:
public IQueryable<WorkItemDTO> GetAllWorkItem()
{
    // Return all the work items that the user owns or has been assigned to as a resource.
    var query = MappedDomainManager.QueryEntity()
        .Where(x => x.OwnerId == UserProfileId
                 || x.Resources.Any(r => r.AssignedResourceId == UserProfileId));
    return query.Project().To<WorkItemDTO>();
}
What I would like is to be able to somehow pass through a start and end date that I can then use to build up my list of WorkItemDTO objects. The main problem is that a single WorkItem entity can actually spawn multiple WorkItemDTO objects, because a WorkItem can be set to be recurring. So, for example, if a WorkItem recurs once a week and the user wants to see a calendar for one month, that single WorkItem will spawn 4 separate concrete WorkItemDTO objects.
Then when a user modifies one of those WorkItemDTO objects on the client side, I want it to be sent back as a patch that creates its own WorkItem entity.
Does anyone know how I can get a TableController to receive parameters? Or how to get an ApiController to work so that client syncing isn't affected?
Any help would be appreciated.
Thanks
Jacob

On the server, you can easily add a query string parameter to the table controller's GET method by adding a method parameter with the right name and type.
For instance, you could add a dateFilter query parameter as follows:
public IQueryable<WorkItemDTO> GetAllWorkItem(string dateFilter)
This would be called by passing a dateFilter=value query string parameter. You can use any data type that ASP.NET Web API supports in serialization. (Note that if you don't also have a GetAll that takes no query parameters, a GET without this query parameter will return HTTP 405 Method Not Allowed.)
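One way to avoid that 405 (a sketch, assuming ASP.NET Web API's optional-parameter binding; the filtering logic itself is left hypothetical) is to give the parameter a default value so the same action serves requests with and without the query parameter:

```csharp
public IQueryable<WorkItemDTO> GetAllWorkItem(string dateFilter = null)
{
    var query = MappedDomainManager.QueryEntity();

    if (!string.IsNullOrEmpty(dateFilter))
    {
        // Hypothetical: parse dateFilter and restrict the query here.
    }

    return query.Project().To<WorkItemDTO>();
}
```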
On the client, as noted by @JacobJoz, you just use the method IMobileServiceTableQuery.WithParameters to construct the query that is passed to PullAsync. If you have multiple incremental sync queries against the same table and they use different values for the parameters, make sure to include those values in the queryId for the pull.
That is, if you have one query with parameters foo=bar and another that is foo=baz for the same sync table, make sure you use two different query IDs, one that includes "bar" and one that includes "baz". Otherwise, the 2 incremental syncs can interfere with one another, as the queryId is used as a key to save the last updated timestamp for that sync table. See How offline synchronization works.
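For example (a sketch; the table name, parameter values, and queryIds are illustrative), the two pulls might look like:

```csharp
var barParams = new Dictionary<string, string> { { "foo", "bar" } };
var bazParams = new Dictionary<string, string> { { "foo", "baz" } };

// Distinct queryIds, so each parameter combination keeps its own
// incremental-sync timestamp and the pulls don't interfere.
await workItemTable.PullAsync("workItems_foo_bar",
    workItemTable.CreateQuery().WithParameters(barParams));
await workItemTable.PullAsync("workItems_foo_baz",
    workItemTable.CreateQuery().WithParameters(bazParams));
```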
The part that is unfortunately hard is passing the query parameter as part of the offline sync pull. Offline sync only works with table controllers, FYI.
There is an overloaded extension method for PullAsync that takes a dictionary of parameters, but unfortunately it requires a string query rather than IMobileServiceTableQuery:
PullAsync(this IMobileServiceSyncTable table, string queryId, string query, IDictionary<string, string> parameters, CancellationToken cancellationToken)
(I've filed a bug to fix this: Add a generic PullAsync overload that accepts query parameters).
The problem is that there's no easy way to convert from IMobileServiceTableQuery to an OData query string, since you'd need to access internal SDK methods. (I filed another issue: Add extension method ToODataString for IMobileServiceTableQuery.)

I've looked through the source code for MobileServiceTableQuery on GitHub. It exposes a method called WithParameters. I have chained that method call onto CreateQuery to generate the query to the server, and it seems to do what I want.
Here is the client code:
var parameters = new Dictionary<string, string>();
parameters.Add("v1", "hello");
var query = WorkItemTable.CreateQuery().WithParameters(parameters);
await WorkItemTable.PullAsync("RetrieveWorkItems", query);
On the server I have a GetAll implementation that looks like this:
public IQueryable<WorkItem> GetAllWorkItem(string v1)
{
    // return IQueryable after processing business logic based on parameter
}
The parameterized version of the method gets called successfully. I'm just not entirely sure what the impacts are from an incremental pull perspective.


CQRS - How to handle if a command requires data from db (query)

I am trying to wrap my head around the best way to approach this problem.
I am importing a file that contains a bunch of users, so I created a handler called ImportUsersCommandHandler, and my command is ImportUsersCommand, which has List<User> as one of its parameters.
In the handler, for each user I import I have to make sure that the UserType is valid, and this is where the confusion comes in. I need to query the database to get a list of all possible user types, and then, for each user I am importing, verify that the user type id in the import matches one in the db.
I have 3 options.
Create a GetUserTypesQuery, run it first, and pass its results on to the ImportUsersCommand as a list, verifying inside the command handler
Call the GetUserTypesQuery from the command handler itself rather than passing the results in (a command calling a query)
Do not create a GetUserTypesQuery at all and just run the query inline within the command handler (still a query, but with no query/handler involved)
I feel like all these are dirty solutions and not the correct way to apply CQRS.
I agree option 1 sounds the best, but I would suggest adding a pre-handler to validate your input.
That way, ImportUsersCommandHandler deals with importing your data (and only that), and a handler that runs before it validates the input (in your example, checking the user types and perhaps other things) and bails out if it does not pass. So the pre-handler queries the db, checks the user types, and does whatever it needs to if validation fails. Otherwise it just passes down to your business handler (ImportUsersCommandHandler).
I am used to using MediatR in .NET Core, where this pattern works well (it is what we do), so apologies if this does not fit your environment/setup!
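As a sketch of that pre-handler idea using MediatR's IPipelineBehavior (the command, result, and repository types here are illustrative assumptions, and the Handle parameter order varies between MediatR versions):

```csharp
// Runs before ImportUsersCommandHandler; validates user types and bails out on failure.
public class ImportUsersValidationBehavior
    : IPipelineBehavior<ImportUsersCommand, ImportResult>
{
    private readonly IUserTypeRepository _userTypes; // hypothetical query-side repository

    public ImportUsersValidationBehavior(IUserTypeRepository userTypes)
    {
        _userTypes = userTypes;
    }

    public async Task<ImportResult> Handle(
        ImportUsersCommand request,
        RequestHandlerDelegate<ImportResult> next,
        CancellationToken cancellationToken)
    {
        var validTypeIds = await _userTypes.GetAllTypeIdsAsync();

        var invalid = request.Users
            .Where(u => !validTypeIds.Contains(u.UserTypeId))
            .ToList();

        if (invalid.Any())
        {
            // Bail out before the business handler ever runs.
            throw new ValidationException(
                $"{invalid.Count} users have an unknown UserType.");
        }

        // Validation passed; continue to ImportUsersCommandHandler.
        return await next();
    }
}
```

Register the behavior in DI alongside the handler and MediatR will run it automatically before ImportUsersCommandHandler.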

Get int value from database

How can I get an int value from the database?
Table has 4 columns
Id, Author, Like, Dislike.
I want to get the Dislike amount and add 1.
I tried:
var db = new memyContext();
var amountLike = db.Memy.Where(s => s.IdMema == id).select(like);
memy.like=amountLike+1;
I know this is a bad way.
Please help.
I'm not entirely sure what your question is here, but there's a few things that might help.
First, if you're retrieving via something that reasonably has only one match, or in a scenario where you want just one thing, then you should use SingleOrDefault or FirstOrDefault, respectively, not Where. Where is reserved for scenarios where you expect multiple matches, i.e. the result will be a list of objects, not a single object. Since you're querying by an id, it's fairly obvious that you expect just one match. Therefore:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
Second, if you just need to read the value of Like, then you can use Select, but there are two problems with that here. First, Select can only be used on enumerables; as already discussed, you need a single object, not a list of objects. In truth, you can sidestep this in a somewhat convoluted way:
var amountLike = db.Memy.Where(s => s.IdMema == id).Select(x => x.Like).SingleOrDefault();
However, this is still flawed, because you not only need to read this value, but also write back to it, which then needs the context of the object it belongs to. As such, your code should actually look like:
var memy = db.Memy.SingleOrDefault(s => s.IdMema == id);
memy.Like++;
In other words, you pull out the instance you want to modify, and then modify the value in place on that instance. I also took the liberty of using the increment operator here, since it makes far more sense that way.
That then only solves part of your problem, as you need to persist this value back to the database as well, of course. That also brings up the side issue of how you're getting your context. Since this is an EF context, it implements IDisposable and should therefore be disposed when you're done with it. That can be achieved simply by calling db.Dispose(), but it's far better to use using instead:
using (var db = new memyContext())
{
    // do stuff with db
}
And while we're here, based on the tags of your question, you're using ASP.NET Core, which means that even this is sub-optimal. ASP.NET Core uses DI (dependency injection) heavily, and encourages you to do likewise. An EF context is generally registered as a scoped service, and should therefore be injected where it's needed. I don't have the context of where this code exists, but for illustration purposes, we'll assume it's in a controller:
public class MemyController : Controller
{
    private readonly memyContext _db;

    public MemyController(memyContext db)
    {
        _db = db;
    }

    ...
}
With that, ASP.NET Core will automatically pass in an instance of your context to the constructor, and you do not need to worry about creating the context or disposing of it. It's all handled for you.
Finally, you need to do the actual persistence, but that's where things start to get trickier, as you now most likely need to deal with the concept of concurrency. This code could be running simultaneously on multiple different threads, each one querying the database at its current state, incrementing this value, and then attempting to save it back. If you do nothing, one thread will inevitably overwrite the changes of the other. For example, let's say we receive three simultaneous "likes" on this object, and the current like count is 0. Each thread queries the object from the database, increments the value, making it 1, and saves the result back. The end result is that the value will be 1, but that's not correct: three likes were just added.
As such, you'll need to implement a semaphore to essentially gate this logic, allowing only one like operation through at a time for this particular object. That's a bit beyond the scope here, but there's plenty of stuff online about how to achieve that.
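As a rough sketch only (a static in-process lock like this only helps on a single server instance, and the route and names follow the illustration above):

```csharp
// Illustrative: a per-id gate so concurrent likes serialize their read-modify-write.
private static readonly ConcurrentDictionary<int, SemaphoreSlim> Gates =
    new ConcurrentDictionary<int, SemaphoreSlim>();

[HttpPost("{id}/like")]
public async Task<IActionResult> Like(int id)
{
    var gate = Gates.GetOrAdd(id, _ => new SemaphoreSlim(1, 1));
    await gate.WaitAsync();
    try
    {
        var memy = await _db.Memy.SingleOrDefaultAsync(s => s.IdMema == id);
        if (memy == null)
        {
            return NotFound();
        }

        memy.Like++;
        await _db.SaveChangesAsync();
        return Ok(memy.Like);
    }
    finally
    {
        gate.Release();
    }
}
```

For multi-instance deployments you would need a database-level mechanism instead, such as an atomic UPDATE statement or EF's optimistic concurrency tokens.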

Acumatica REST - CustomerLocation entity does not return records

Using the REST API, I am able to pull down customers, contacts, and addresses via the Customer entity; however, when I try to get CustomerLocation entities, I just get an empty set:
[]
I am using the latest version as of the writing of this question (2018R1, dated something like Aug 17 2018).
I've tried the following:
CustomerLocation?$expand=LocationContact
CustomerLocation?$expand=LocationContact,LocationContact/Address
Neither of them return any data.
The CustomerLocation entity is linked to a Generic Inquiry that is defined to allow creation of new records. Doing a Put against it therefore caused an error when it tried to persist the data, since I was not supplying a body or a valid structure.
How I got this to work was to create my own Generic Inquiry, linking it to an entity in my extended endpoint, and adding a Detail property within the entity that would serve as the collection of detail records returned by the Generic Inquiry. Then put all of the fields from the Generic Inquiry within the Results Fields.
Now, I can get the records from the Generic Inquiry by doing a Put request via my endpoint entity:
AICustomerLocationGI?$expand=Results
Note: It's important to do a Put instead of a Get if you want to avoid getting BQL Delegate errors on some DACs.
That returned all records at once, but got me where I initially needed to be. Next, I added a parameter, Greater Than condition, and sorting on Address ID to the Generic Inquiry and defined the generic inquiry to return the top 100 records. By passing the last Address ID of the previous batch of records in the body of the Put request, this gave me a paging mechanism for returning the records.
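For illustration, each page request in my setup looked roughly like this (the endpoint path, version, and the parameter field name are assumptions specific to my extended endpoint):

```
PUT /entity/MyExtendedEndpoint/1.0/AICustomerLocationGI?$expand=Results
Content-Type: application/json

{
    "AddressID": { "value": "<last Address ID from the previous batch>" }
}
```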

Aborting asynchronous apex (future call) from trigger? Queuable Interface solution?

I'm currently working on something that involves iterating through a Sales Order and its Sales Order Products via a trigger on the Sales Order object. I've created an Apex class that is called from the Sales Order after-update trigger. The trigger passes a string (the Sales Order Id) to a static method of the class. This future-call method queries for the Sales Order Products that belong to the Sales Order id and makes a web service call for each item in the collection.
This all works great; however, I would like this process to be more robust and handle errors more intelligently. What I would like to be able to do is abort the whole process when the method encounters something it doesn't like, say, as an example, a product in the order it doesn't recognize. The only mechanism I've found that can handle aborting is the Queueable interface, calling the class via System.enqueueJob(). That, however, doesn't help me, as I cannot for the life of me figure out a way to pass any parameters to the class when System.enqueueJob() is invoked, since the class methods are static and the interface forces the process to run from the execute() method, which only takes a context parameter.
Am I going down the wrong road with this? The only other possibility I was thinking of was to create methods for all of the subprocesses in my class, return from those if they encounter any errors, and set a bool flag that can be used to skip the processes that follow. Sorry if this doesn't make sense; if so, let me know and I'll try to provide more information.
You can pass parameters to a Queueable job in the constructor. i.e.:
System.enqueueJob(new myQueueableClass(salesOrderId));
You need to add a constructor to your Queueable class that accepts the Sales Order Id and stores it in a private variable declared inside the class, which the execute() method can then access.
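A sketch of what that Queueable class might look like (the object, field, and method names are illustrative assumptions):

```apex
// Illustrative Queueable that carries the Sales Order Id via its constructor.
public class SalesOrderProcessor implements Queueable, Database.AllowsCallouts {
    private Id salesOrderId;

    public SalesOrderProcessor(Id salesOrderId) {
        this.salesOrderId = salesOrderId;
    }

    public void execute(QueueableContext context) {
        List<Sales_Order_Product__c> items = [
            SELECT Id, Product__c
            FROM Sales_Order_Product__c
            WHERE Sales_Order__c = :salesOrderId
        ];
        for (Sales_Order_Product__c item : items) {
            if (!isValid(item)) {
                // Abort the whole process: log if needed, then stop.
                return;
            }
            // make the web service callout for this item
        }
    }

    private Boolean isValid(Sales_Order_Product__c item) {
        // hypothetical validation logic
        return true;
    }
}
```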

Breeze dataservice abstractrest - with sparse save response

I'm working with the Breeze Labs AbstractRestDataServiceAdapter. In our data service adapter implementation's _createSaveRequest method, the state.isModified branch emulates the OData adapters and only sends modified fields in the save request.
My issue is that our REST server returns a sparse response, i.e. the input data plus any fields on the entity that were updated. The result is that, from the client's perspective, fields not returned in the saved entity are wiped out.
I had seen merge logic in prior debugging sessions, so I initially thought I might be able to influence the save response processing via MergeStrategy, but it appears MergeStrategy doesn't apply in a save scenario. It appears AbstractRestDataServiceAdapter assumes the server is returning the full entity in a save response.
What options do we have for managing a sparse response from the server that preserves the state of fields not returned in the save response?
Is there a particular AbstractRestDataServiceAdapter method that we should override to manage merging the save response?
Take a look at the changeRequestSucceeded method of the breeze.labs.dataservice.abstractrest.js adapter, which processes each entity-specific response, in particular the lines at the top:
var saved = saveContext.adapter._getResponseData(response);
if (saved && typeof saved === 'object') {
    // Have "saved entity" data; add its type (for JsonResultsAdapter) & KeyMapping
    saved.$entityType = saveContext.originalEntities[index].entityType;
    addKeyMapping();
} else {
    ...
}
saveContext.saveResult.entities.push(saved);
return saved;
Notice the references to saveContext.originalEntities[index].
Suppose you know that the data in the saved object represent just the specific properties that you need to merge into the entity in cache.
Well, you're in an excellent position here, in YOUR version of this method in YOUR concrete implementation of the adapter, to combine the property values of saved with the property values of saveContext.originalEntities[index] before pushing the merged result into saveContext.saveResult.entities.
There is no requirement to actually return entities from the server on a save. Both the breeze.dataService.odata and the breeze.dataservice.mongo adapters have this issue, where the server returns only some of the saved entities, or only parts of them. The only requirement is that the dataService adapter's saveChanges method return an object with two properties, i.e. something like
return { entities: entities, keyMappings: keyMappings };
If you actually have just the modified fields returned from the server, then you will have to manage the merge yourself, but this isn't really that complex. For each entity 'saved', simply locate it in the cache, set the changed properties, call acceptChanges, and then return the 'located' entity in the result shown above.
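A minimal sketch of that merge step, with plain objects standing in for the Breeze entity data (the cache lookup and acceptChanges call in your adapter are left out; this only shows the property combination):

```javascript
// Merge the sparse fields the server returned into the original entity's data.
// `original` stands in for the cached entity's values; `saved` is the sparse
// server payload. Fields missing from `saved` keep their original values.
function mergeSparseSave(original, saved) {
    var merged = {};
    // Start from the original property values...
    Object.keys(original).forEach(function (key) {
        merged[key] = original[key];
    });
    // ...then overwrite with whatever the server actually returned.
    Object.keys(saved).forEach(function (key) {
        merged[key] = saved[key];
    });
    return merged;
}
```

In the adapter, you would apply this to saveContext.originalEntities[index] and saved before pushing the result into saveContext.saveResult.entities.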