Business logic in RIA service - wcf-ria-services

I am writing a RIA service. I need to decide where to put the business logic.
I see two possibilities
Use the CRUD methods called by SubmitChanges and put the business logic there. The main problem is that in some cases I will have to do extra work to detect what has changed, since I don't really know which fields of my object were modified. The methods themselves are also expected to become big, as they need to deal with multiple entity changes.
Give the client specific invoke/named update methods for some of the update operations. These will be called by the UI when making a specific data model change, so the effort on the server side will be smaller (the server will know better which operation is being performed) and the complexity of the server methods may be reduced.
Amit

The second approach makes more sense to me. The client is used only as a terminal to store and show data, and there is nothing wrong with invoking different server functions as long as you can keep your data consistent. A huge perk is simplicity; I've been taught that the easier, the better.
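To make the second approach concrete, here is a rough sketch of one named update operation. It is written in Go with invented names (the endpoint, request fields, and handler are all hypothetical) rather than the actual WCF RIA Services API; the point is only that a purpose-built payload tells the server exactly which change is being applied.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// changeAddressRequest carries only the fields relevant to this one
// operation, so the server never has to diff a whole entity graph.
type changeAddressRequest struct {
	CustomerID int    `json:"customerId"`
	Street     string `json:"street"`
	City       string `json:"city"`
	Zip        string `json:"zip"`
}

// changeCustomerAddress is one "named update" operation: the business rule
// for an address change lives here and nowhere else.
func changeCustomerAddress(w http.ResponseWriter, r *http.Request) {
	var req changeAddressRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid payload", http.StatusBadRequest)
		return
	}
	// ... validate the request and persist the address change ...
	log.Printf("customer %d moved to %s, %s %s", req.CustomerID, req.Street, req.City, req.Zip)
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/customers/change-address", changeCustomerAddress)
	http.ListenAndServe(":8080", nil)
}
```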

How to structure a RESTful backend API with a database?

I want to make an API using REST which interacts (stores) data in a database.
While I was reading about some design patterns I came across Remote Facade, and the book I was reading mentions that the role of this facade is to translate the coarse-grained methods from the remote calls into fine-grained local calls, and that it should not contain any extra logic. As an explanation, it says that the program should still work without this facade.
Here's an example
Yet I have two questions:
Considering I also have a database, does it make sense to split the general call into specific calls for each attribute? Doesn't it make more sense to just have a general "get data" method that runs one query against the database and converts the result into a usable object, to reduce the number of database calls? So instead of splitting the address lookup into get street, get city, and get zip, make one DB call for all that info.
With all this in mind, and, in my case using golang, how should the project be structured in terms of files and functions?
I will have the main file with all the endpoints from the REST API, calling the controllers that handle these requests.
I will have a set of files that define those controllers. Are these controllers the remote facade? Should those methods not have logic in that case, and just call the equivalent local methods?
Should the local methods call the database directly, or should they use some sort of helper class that accesses the database?
Assuming the answer to all of these is yes, does the following structure make sense?
Main
Controllers
Domain
Database helper
First and foremost, as Mike Amundsen has stated
Your data model is not your object model is not your resource model is not your affordance model
Jim Webber said something very similar: by implementing a REST architecture you have an integration model, in the form of the Web, which is governed by HTTP, and separately you have your domain model. Resources adapt and project your domain model to the world, though there is no 1:1 mapping between the data in your database and the representations you send out. A typical REST system has many more resources than you have DB entries in your domain model.
With that being said, it is hard to give concrete advice on how you should structure your project, especially in terms of a particular framework you want to use. As Robert "Uncle Bob" C. Martin puts it, looking at the code structure should tell you something about the intent of the application and not about the framework you use; according to him, architecture is about intent. What you usually see, though, is the default structure imposed by a framework such as Maven, Ruby on Rails, and so on. For golang you should probably read through some documentation or blogs, which might or might not give you some ideas.
In terms of accessing the database, you might either follow a microservice architecture where each service maintains its own database, or attempt something like a distributed monolith that acts as one cohesive system and shares the database among all its parts. In case you scale out and a couple of parallel services consume data, e.g. via a message broker, you might need a distributed lock and/or queue to guarantee that the data is not consumed by multiple instances at the same time.
What you should do, however, is design your data layer in a way that scales well. What many developers often forget or underestimate is the benefit they can gain from caching. Links are basically used on the Web to reference from one resource to another, giving the relation some semantic context through well-defined link-relation names. Link relations also allow a server to control its own namespace and change URIs as needed. But URIs are not only pointers to a resource a client can invoke; they are also keys for a cache. Caching can take place in multiple locations: on the server side to avoid costly calculations or lookups, on the client side to avoid sending requests out in the first place, or on intermediary hops, which take pressure off heavily requested servers. Fielding even made caching a constraint that needs to be respected.
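As a minimal sketch of that caching idea, the handler below marks a representation as cacheable and supports conditional revalidation via an ETag. The /teams/a URI and the payload are invented for illustration; any real resource would work the same way.

```go
package main

import (
	"crypto/sha1"
	"encoding/json"
	"fmt"
	"net/http"
)

// getTeam serves a team representation and declares it cacheable so that
// clients and intermediaries can reuse it instead of hitting the server.
func getTeam(w http.ResponseWriter, r *http.Request) {
	body, _ := json.Marshal(map[string]string{"name": "Team A"})

	// Derive a validator from the payload so caches can revalidate cheaply.
	etag := fmt.Sprintf(`"%x"`, sha1.Sum(body))
	w.Header().Set("ETag", etag)
	w.Header().Set("Cache-Control", "public, max-age=300") // reusable for 5 minutes

	// If the caller already holds this version, answer 304 without a body.
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write(body)
}

func main() {
	http.HandleFunc("/teams/a", getTeam)
	http.ListenAndServe(":8080", nil)
}
```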
Which attributes you should create queries for depends entirely on the use case you are trying to support. In the address example given, it makes sense to return the address information all at once, as the street or zip code is rarely queried on its own. If the address is part of some user or employee data, it is less clear whether to return that information as part of the user or employee representation or just as a link that can be followed in a further request. What you return may also depend on the capabilities of the media type the client and your service agree upon (content-type negotiation).
If you implement something like a grouping, e.g. some football players and the categories they belong to, such as their teams and whether they are offense or defense players, you might have a Team A resource that includes all of the players as embedded data. Within the DB you could have either a separate table for teams with references to the respective players, or the team could just be a column in the player table. We don't know, and a client usually doesn't care either. From a design perspective you should, however, be aware of the benefits and consequences of embedding all the players at once versus providing links to the respective players, or of using a mixed approach that presents some base data plus a link for further details.
The latter approach is probably the most sensible, as it gives a client enough information to determine whether more detailed data is needed. If it is, a simple GET request to the provided URI is enough, and that request might be served by a cache and thus never reach the actual server at all. The embedded approach certainly has the disadvantage that it doesn't make optimal use of caching and may return far more data than is actually needed. The links-only approach may not provide enough information, forcing the client to perform a follow-up request to learn anything about the team member. But as mentioned before, you as the service designer decide which URIs or queries are returned to the client, and you can design your system and data model accordingly.
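A sketch of that mixed approach might look like the structs below: the team embeds a little base data per player plus a link to the full record. All field names and URIs here are invented for illustration.

```go
package main

import (
	"encoding/json"
	"os"
)

// PlayerSummary carries just enough base data for a team view,
// plus a link the client can follow for full details.
type PlayerSummary struct {
	Name     string `json:"name"`
	Position string `json:"position"`
	Href     string `json:"href"` // e.g. "/players/23"
}

// Team embeds the summaries rather than the full player records.
type Team struct {
	Name    string          `json:"name"`
	Players []PlayerSummary `json:"players"`
}

func main() {
	team := Team{
		Name: "Team A",
		Players: []PlayerSummary{
			{Name: "Jane Doe", Position: "offense", Href: "/players/23"},
			{Name: "John Roe", Position: "defense", Href: "/players/42"},
		},
	}
	// Emit the representation a client would receive.
	json.NewEncoder(os.Stdout).Encode(team)
}
```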
In general, what you do in a REST architecture is provide a client with choices. It is good practice to design the overall interaction flow as a state machine which is traversed by receiving requests and returning responses. As REST uses the same interaction model as the Web, it probably feels most natural to design the whole system as if you were implementing it for the Web and then apply that design to your REST system.
Whether controllers should contain business logic or not is primarily an opinionated question. As Jim Webber correctly stated, HTTP, which is the de-facto transport layer of REST, is an
application protocol whose application domain is the transfer of documents over a network. That is what HTTP does. It moves documents around. ... HTTP is an application protocol, but it is NOT YOUR application protocol.
He further points out that you have to narrow HTTP into a domain application protocol and trigger business activities as a side effect of moving documents around the network. So it is the side effect of moving documents over the network that triggers your business logic. There is no hard rule on whether to include business logic in your controller or not, but usually you try to keep the business logic in a layer of its own, e.g. a service that you just invoke from within the controller. That allows you to test the business logic without the controller and thus without a real HTTP request.
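A minimal sketch of that separation, with invented names: the controller only parses the request and writes the response, while the service holds the rule and can be unit-tested by calling Lookup directly, with no HTTP involved.

```go
package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

// AddressService holds the business logic; it knows nothing about HTTP.
type AddressService struct{}

func (s *AddressService) Lookup(zip string) (map[string]string, error) {
	if len(zip) != 5 {
		return nil, errors.New("zip code must have five digits")
	}
	// A real application would call the repository/database here.
	return map[string]string{"zip": zip, "city": "Springfield"}, nil
}

// addressController is the thin HTTP-facing layer: it parses the request,
// delegates to the service, and translates the result into a response.
func addressController(svc *AddressService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		addr, err := svc.Lookup(r.URL.Query().Get("zip"))
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(addr)
	}
}

func main() {
	http.HandleFunc("/address", addressController(&AddressService{}))
	http.ListenAndServe(":8080", nil)
}
```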
While this answer can't provide more detailed information, partly due to the broad nature of the question itself, I hope I could shed some light on the areas you should put some thought into, and make clear that your data model is not necessarily your resource or affordance model.

Entity Framework - How to update multiple tables without a view model

Actually, I want to update many tables when the user clicks on SAVE UPDATE. So all the changes the user made (add/delete/update) will be handled in the Web API. I don't want to use a view model because my superior said it kind of repeats the model of what I've already done in Angular. Any idea?
Well, let's look at this from a conceptual level.
Since you have a Save button, this means you already have a form and you have Angular models, which of course are JavaScript ones. There must be a submit button on that form, and you have access to all the data in your Angular app. Up to this point you don't need anything from the back end.
Next, you need to submit the data to WebAPI. We're talking about a transfer from a front-end app to a back-end app. WebAPI has endpoints, and they take models to capture the data. These models will be very close to your front-end ones, if not exactly the same. That doesn't mean you shouldn't have them, though.
When you build a system where you mix different stacks, stuff like this, where models exist in multiple places in different formats, does happen.
There are ways of dealing with it: you could add steps to the build of your MVC application that automatically generate the JavaScript models for you, so it's not all manual. But at the end of the day you still need models on both sides if you want to make any reasonable sense of your code.
At the end of the day, you will need models on the Angular side and models on the WebApi side, and you'll need to keep them in sync somehow as well.
I would have a chat with the other person and explain these things.
If you don't want to end up in a position like this in the future, you could stick to the classic MVC way of doing things.
You didn't need to use Angular, for example. I am not saying this decision was wrong (it entirely depends on how much it helps the project, and there are valid reasons to throw it into the mix), but the situation you are in now is a direct result of that. MVC has its own way of building forms which uses back-end models, so it doesn't need JavaScript-specific ones.
You could, of course, completely eliminate the back-end models and simply get the data from the submitted form (WebApi can do that) and then transfer that data to your Entity Framework DTOs, but that's a lot of hassle and it's easy to make mistakes, so I wouldn't do it.

How many layers should your small REST system have?

I am building a simple REST deployable using Spring Boot. I decided to create it by writing a failing acceptance test first, followed by TDD until it's green.
My module is pretty simple; I have 3 APIs:
Retrieve a list of data from the datastore.
Add an item to the datastore.
Delete an item from the datastore.
I feel like it is a good idea to abstract the datastore, perhaps backing it with a Map data structure for testing purposes, and to use either a NoSQL or SQL DB for deployments/releases and end-to-end testing.
On the service layer side I am unsure, since it would just delegate calls to the repository with no logic.
So the standard approach would be controller->service->repository. In my case the service does not do much (possibly some exception handling, but not more), and I will end up with an interface and an implementation as extras, as well as a few more lines of code. I feel like going for a controller->repository solution in my situation, but it is not a practice I have seen, and I'm not sure how others would see it.
What's the best way to implement this sort of system?
I feel like it is a good idea to abstract the datastore
You are right. The abstraction is called a 'Repository' in DDD (Domain-Driven Design), for example.
On the service layer side I am unsure, since it would just delegate calls to the repository with no logic.
I'm pretty sure there is data that you want to validate. So you should have a layer in the middle (e.g. the domain layer) which is in charge of this validation.
Even so, if you feel like your application is simple and doesn't require such layers, go without them. You will have a less supple design, but more simplicity at first. Be careful: as your app evolves, you could run into trouble.
Hope this will help.
This is rather an opinion-based question, but if you are asking whether a 3-layer architecture is a must, then I say no. Be pragmatic: if you don't see a reason for a class/layer/module to exist, it does not need to exist.
A repository has a purpose (to store/retrieve), and the API layer has a purpose: to offer those things over HTTP.
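For the controller->repository variant discussed above, a minimal sketch might look like the following. It is written in Go rather than Spring Boot purely for brevity, and every name in it (ItemRepository, memoryRepository, the /items endpoint) is invented; the shape translates directly to a Spring controller plus repository.

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync"
)

// ItemRepository is the datastore abstraction; a deployment could back it
// with a SQL or NoSQL implementation, tests with the in-memory one below.
type ItemRepository interface {
	List() []string
	Add(item string)
	Delete(item string)
}

// memoryRepository is the Map-backed implementation used for testing.
type memoryRepository struct {
	mu    sync.Mutex
	items map[string]struct{}
}

func newMemoryRepository() *memoryRepository {
	return &memoryRepository{items: map[string]struct{}{}}
}

func (r *memoryRepository) List() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	out := make([]string, 0, len(r.items))
	for item := range r.items {
		out = append(out, item)
	}
	return out
}

func (r *memoryRepository) Add(item string)    { r.mu.Lock(); r.items[item] = struct{}{}; r.mu.Unlock() }
func (r *memoryRepository) Delete(item string) { r.mu.Lock(); delete(r.items, item); r.mu.Unlock() }

// itemsHandler is the controller calling the repository directly,
// i.e. the controller->repository variant without a service layer.
func itemsHandler(repo ItemRepository) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet:
			json.NewEncoder(w).Encode(repo.List())
		case http.MethodPost:
			repo.Add(r.URL.Query().Get("item"))
			w.WriteHeader(http.StatusCreated)
		case http.MethodDelete:
			repo.Delete(r.URL.Query().Get("item"))
			w.WriteHeader(http.StatusNoContent)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	}
}

func main() {
	http.HandleFunc("/items", itemsHandler(newMemoryRepository()))
	http.ListenAndServe(":8080", nil)
}
```

If validation logic ever shows up, it can get its own layer between the handler and the repository without changing either interface.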
Here is an article about building small services with the Spark framework: https://dzone.com/articles/building-simple-restful-api

iOS Client: "Caching" Server-side data to persistent storage

I'm building an iOS client app to interface with an existing backend architecture. To reduce latency, API calls and payloads, it'd be nice to "cache" model data client-side for faster indexing and then make updates to both client/server sides accordingly as needed.
The current theoretical stack would look something like this:
Server Side >>>>>>>>>>>>>>>>> Client Side
-----------------------------------------
PHP >> JSON >> CORE DATA >> UIKit Objects
NOTE: It's also worth noting that the iOS client, while itself adhering to MVC internally, would in essence be a "View" in a larger MVC client-server architecture. Thus, just as one updates the model after a user action or updates the view after a model change, the server would need to sync with a client change and the client would need to sync with a server-side change.
Some Context:
A. Many diverse data structures may be coming over the pipe and would have to be constructed into UIViews dynamically. A schema will likely have to be defined (I'm not sure if there's a "best way" to adhere to a JSON schema client-side other than remembering what the acceptable object structures are). I've realized the need to separate model data pertaining to the creation of custom views ("View" Models) from the model data that will be presented in those views ("Regular" Models).
B. End-users should be able to immediately CRUD (create, read, update, destroy) most data presented in these views (but not CRUD the views themselves). They may later need to view this in a web interface or other context.
C. RestKit looks like a good candidate for getting from the API to JSON to Core Data. I need to find out if it structurally supports callbacks when client model copies need to be pushed to the server. Perhaps the best way is noting in the client model when a change has occurred and notifying whatever RestKit-based HTTP manager is in place to pass it along to the server.
Ultimate Question:
Can anyone speak to best practices, pitfalls, tips, and frameworks with this type of architecture? (Particularly when it comes to performance and the distribution of work between client and server, but general advice is also much appreciated.)
I've done some work around this, hopefully I can provide some insight.
In regards to A) Yes, if you're planning to use Core Data (or RestKit) you'll need to know your schema up front. You'll have no way to map dynamic objects otherwise, unless you have some generic object type where you're just stashing the JSON string or something, but that doesn't sound like what you're trying to do, since you mention users editing those objects.
B) RestKit will handle pushing to the server for you, but you'll still want some control over this, I imagine. We handled it by always saving locally first and then pushing up to the server on a successful save. This also enabled us to keep working when there's no network. You'll just have to handle the edge cases of what happens when the server rejects the update/create/delete your user is performing.
C) RestKit will likely get you 80% of the way there, as it did for us. Just having something that understands REST endpoints and object mapping, and abstracts the HTTP requests, was a huge help. In terms of the system understanding changes, we kept a flag on managed objects indicating whether the object needed a sync or not. We could fetch based on those flags and push the changes up to the server.
One thing with RestKit is that you can have other attributes in your Core Data model that aren't necessarily part of the JSON schema but that you might need within your app. For instance, I already mentioned the flag for knowing whether an object needed a sync. We also kept pre-computed fields that we used to search on, and some other random pieces of information for determining the order in which to push objects up to the server (dependencies).
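The dirty-flag pattern described above is easy to sketch generically; the snippet below is not Core Data or RestKit code, just a hypothetical illustration of flagging changed records, pushing them, and clearing the flag only on success so failures are retried.

```go
package main

import "fmt"

// record mimics a locally stored object with a sync flag; in the setup
// described above this would be an attribute on the managed object.
type record struct {
	ID        int
	Payload   string
	NeedsSync bool
}

// pushPending sends every flagged record to the server and clears the flag
// only when the push succeeds, so failed pushes are retried next time.
func pushPending(records []*record, push func(*record) error) {
	for _, rec := range records {
		if !rec.NeedsSync {
			continue
		}
		if err := push(rec); err != nil {
			fmt.Println("keeping record", rec.ID, "flagged after error:", err)
			continue
		}
		rec.NeedsSync = false
	}
}

func main() {
	records := []*record{
		{ID: 1, Payload: "edited offline", NeedsSync: true},
		{ID: 2, Payload: "already synced", NeedsSync: false},
	}
	pushPending(records, func(r *record) error {
		fmt.Println("pushing record", r.ID) // a real push would issue the HTTP request here
		return nil
	})
}
```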
Hope this helps. If you have more specific questions I might have more answers.

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph with more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. It is an important goal to be able to easily decide in the BLL layer which of the objects have been modified on the client and which have been newly created.
What would be the clearest approach to manage entities and entity states between the two layers? Self tracking entities? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer with your client application - you must share the entity assembly between projects (this also means it is a .NET-only solution, but that should not be a problem in your case)
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass 50 entities back. It will require some fighting with STEs to avoid passing unnecessary data
Unnecessary updates to the database - normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property in an entity with 100 properties, it will generate an update that sets all of them. Avoiding this requires modifying the template and adding property-level change tracking.
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code which moves data from the UI back to the self-tracking entity and modifies its state.
They are not proxied = they don't support lazy loading (in the case of a WCF service this is good behavior)
I described a way to solve this without STEs today. There is also a related question about tracking over web services (check Richard's answer and the provided links).
We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and the data layer with Entity Framework.
When I first read about STEs, the documentation said that they are easier than using custom DTOs. They were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.
But we ran into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example in a master-detail view), and thus from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs wouldn't work without WCF. The change tracking is activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would just be in-process (this was a requirement for our unit tests, which run a lot faster without WCF and are easier to set up). It turned out that STEs are not the right choice for this.
I've also noticed that developers sometimes included a lot of data in their queries and just sent it to the client instead of really thinking about which data they needed.
After this project we decided to use custom DTOs, with AutoMapper mapping from server to client, and to use the POCO template in our data layer in a new project.
So, since you already state that your project is big, I would opt for custom DTOs and service functions that are created specifically for one goal, instead of 'Update(Person person)' functions that send a lot of data.
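As a hypothetical sketch of such a goal-specific function with a hand-written DTO (invented names, not the actual WCF/AutoMapper code): the DTO carries only what one use case needs, and the mapping onto the persistence entity happens behind the service boundary.

```go
package main

import "fmt"

// Person stands in for the persistence entity with many fields (trimmed here).
type Person struct {
	ID      int
	Name    string
	Email   string
	Address string
	// ... dozens of other columns ...
}

// ChangeEmailDTO is a hand-written DTO for exactly one service operation;
// the client never sends (or receives) the full Person graph.
type ChangeEmailDTO struct {
	PersonID int
	NewEmail string
}

// ChangeEmail is the goal-specific service function: load, map, save.
func ChangeEmail(store map[int]*Person, dto ChangeEmailDTO) error {
	person, ok := store[dto.PersonID]
	if !ok {
		return fmt.Errorf("person %d not found", dto.PersonID)
	}
	person.Email = dto.NewEmail // the only field this operation is allowed to touch
	// ... persist the change through the data layer ...
	return nil
}

func main() {
	store := map[int]*Person{7: {ID: 7, Name: "Jane", Email: "old@example.com"}}
	if err := ChangeEmail(store, ChangeEmailDTO{PersonID: 7, NewEmail: "new@example.com"}); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("updated email:", store[7].Email)
}
```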
Hope this helps :)