Should I make a RESTful web app multiple modules? [closed]

Our group used to build web applications using Jersey. Usually we would have two modules, dao and api. The dao module consists of entity classes and DAO interfaces/implementations, while the api module has all the REST API classes. Recently, we have been transitioning from Jersey to Spring MVC and Spring Boot. Since we are using Spring Data and JPA, there seems to be no need for a dao module full of interfaces and implementations; instead, JPA repositories take care of all the data access, and services are built on top of the repository layer. It feels more natural to have the JPA repository code in the same module as the services, controllers, etc.
What is the best practice for organizing a RESTful web API project? Should I set up the project as a two-module application just like before, that is, with all the entities in one module (model or dao) and the rest in another module called api? If that is the way to go, should persistence.xml, which contains all the ORM mappings for the entities, live in the entity module or in the api module?

This is mostly my opinion on one versus two modules, and I don't think there is a single answer. It probably depends on how large the web app is. Is it a single API in the same codebase/build that's basic CRUD, or do you have many separate but smaller APIs, creating more of a microservice environment? We had a few larger monolithic apps where we did separate the DAO stuff into its own module. We also separated third-party dependencies such as Google or Stripe into a module separate even from the DAO stuff, which makes it easier to create a separate jar if the need arises.
Regardless, what's more important is ensuring that the dependency runs one way: the API depends on the DAO, but not the opposite. Curious what others think.

Related

How do you create a class diagram for a SPA and a REST API? [closed]

Context
I have created a SPA (Single Page Application) along with a REST API.
Backbone.js was used to develop the SPA, and CodeIgniter was used for the server-side implementation of the REST API.
Question
For an MVC application we could draw a single class diagram as usual. But how do you create a class diagram for a system that has both a SPA and a REST API? Do you create two separate class diagrams, one for each application, or a single class diagram for both? If there is one diagram, what relationships/associations can be used to connect the SPA and the REST API?
How class diagrams for server and client could be organized
A combined class diagram is pointless if the components share no common classes.
It is very probable that you do have some common classes, because the client and server work with the same real-world objects, and the corresponding classes should be the same. Each such class should also have only one implementation in code. Since these common classes belong to the domain area (methods usual for your subject matter), you could separate them all into yet another package. You would then have two intersecting components but three non-intersecting packages, and you can draw that simple package diagram, too.
If the class diagrams of the components are large, make two (or more) of them, marking the common classes in some special way (e.g. blue for classes shared with component A, red for classes shared with component B, and so on).
Class diagrams are not intended to model behaviour, such as calls between components; that would be bad style. If you still want to record behavioural details that are important for understanding, do so in comments.
For modelling the behavioural connections between the components, use communication or sequence diagrams.

What is RESTful application? [closed]

There has been a lot of hype around REST in the last few years, and I've tried to embrace the principles and understand their benefits. Some things about REST still elude me, though. I'll try to be concise and to the point:
Can a web application be considered RESTful? What are the benefits of this? I can understand (to a point) advantages of RESTful service, which is to be used by many clients, but what is gained by using REST principles when developing application interface which is to be consumed by HTML/JS frontend?
REST mandates the use of verbs that roughly map to CRUD operations, to which the server responds with representations that in turn put the client in a new state. Does this imply that ALL actions on a resource must be done through the modification/creation/deletion of that resource and possibly many other related resources? What about the "atomicity" of such operations (i.e. transactions)?
A REST-compliant service is supposed to be self-descriptive (through the HATEOAS principle), but the lack of metadata makes it impossible for a client to, say, create a resource without knowing exactly which fields (and which types) are mandatory. This information must still be provided out of band. Is there something I'm missing here?
I could come up with more questions, but it will be enough for now if someone could clarify these points for me.
Some notes on your questions:
1) If your web application is a single-page application, the simplest way to communicate with the server is for it to be a REST service.
For a traditional web application, I think it is better for the "controllers" to communicate with a service layer using dependency injection.
3) Yes, of course the client needs to know the format of the data it is receiving. But AFAIK, REST does not impose any constraint on how this metadata has to be defined or transmitted.
The HATEOAS principle refers more to discovering related resources from a given one.
Different conventions exist for expressing those relations; see for example:
http://stateless.co/hal_specification.html
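For instance, a minimal HAL-style response might look like the following (the resource and link relations here are invented for illustration); the _links section is what lets a client discover related resources:

{
  "_links": {
    "self": { "href": "/orders/523" },
    "customer": { "href": "/customers/87" }
  },
  "total": 30.00,
  "status": "shipped"
}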
2) Every REST action must be atomic. If you need some kind of long-running operation, the usual approach is to create a resource that describes the operation. The state of the operation is retrieved from that resource, and you do whatever you want with the operation (e.g. cancel it) by interacting with that resource.
See an example of that here.
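A hedged sketch of that operation-as-a-resource pattern, in ASP.NET Core style (the OperationsController, the Operation type, and the routes are all invented for the example):

using System;
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("operations")]
public class OperationsController : ControllerBase
{
    // In-memory store for the sketch; a real service would persist this.
    private static readonly ConcurrentDictionary<string, Operation> Store =
        new ConcurrentDictionary<string, Operation>();

    // POST /operations - start a long-running job; the response is the
    // operation resource the client interacts with from now on.
    [HttpPost]
    public IActionResult Start()
    {
        var op = new Operation { Id = Guid.NewGuid().ToString(), Status = "running" };
        Store[op.Id] = op;
        return CreatedAtAction(nameof(Get), new { id = op.Id }, op);
    }

    // GET /operations/{id} - poll the operation's current state.
    [HttpGet("{id}")]
    public IActionResult Get(string id) =>
        Store.TryGetValue(id, out var op) ? Ok(op) : (IActionResult)NotFound();

    // DELETE /operations/{id} - cancel by acting on the resource itself.
    [HttpDelete("{id}")]
    public IActionResult Cancel(string id)
    {
        if (!Store.TryGetValue(id, out var op)) return NotFound();
        op.Status = "cancelled";
        return Ok(op);
    }
}

public class Operation
{
    public string Id { get; set; }
    public string Status { get; set; }
}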
1. Usually a REST architecture is designed when the application is going to be used by clients of different natures (external API consumers, mobile applications, etc.). In that case the development costs will be fully paid back. Developing a truly REST application is not such a simple task, so if you know in advance that your application will only ever be used by your own client side, you could reasonably consider a 100% RESTful approach to be overhead. That does not mean you should not design your application well; you can still apply some REST principles. For example, a stateless application simplifies scaling, and REST-style URLs look good to users and search engines.
2. Yes, in truly REST style all interactions should be expressed via standard verbs acting on resources. But you can still create a Transaction resource to wrap your transaction. See the great discussion here: Transactions in REST?
3. As I understand it, nothing prevents you from providing metadata information in a HATEOAS-based response.

Developing a website on top of RESTFul APIs [closed]

We're developing a kind of social network.
We first focused on mobile applications, so we developed our own REST API using JBoss as the application server, and everything is fine.
Now we are beginning development of a website. We decided to build the website on top of the API we already have, so we won't have to worry about database management.
My question is: which approach should I follow?
client-side calls (using AJAX)
server-side calls (using e.g. PHP or Python) to dynamically generate the HTML pages
Do you have any suggestion?
Thanks,
Andrea
I like a mixed approach.
Direct client-side calls into your REST layer will have issues with authentication and authorization.
So you need a server-side facade that validates the application session and then lets the calls pass through to your backend.
This layer can also supply pagination-style logic if the REST APIs are missing it.
Sometimes a UI action requires you to manipulate the data structure, or to make multiple REST calls to build the resulting view; a direct one-to-one mapping of UI actions to backend REST calls may not be possible. The facade helps there too, by making the APIs more UI-friendly.
Finally, for some static/cacheable HTML fragments, your server can generate the view from the REST layer and cache it for faster serving.
So, in summary:
Use Node.js or a Play Framework-style, AJAX-based UI to build the UI layer.
But use a facade that orchestrates, aggregates, transforms, authenticates, and authorizes the UI calls before they hit the REST layer, to keep the UI experience simpler (see the sketch below).
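A hedged sketch of such a facade, written in C# for concreteness even though the answer mentions Node.js or Play (the backend URLs and the ProfileFacadeController name are invented; real orchestration would depend on your API):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("ui/profile")]
public class ProfileFacadeController : ControllerBase
{
    private static readonly HttpClient Backend = new HttpClient();

    [HttpGet("{userId}")]
    public async Task<IActionResult> Get(string userId)
    {
        // 1. Authentication/authorization happens server-side, not in the browser.
        if (User?.Identity == null || !User.Identity.IsAuthenticated)
            return Unauthorized();

        // 2. Orchestrate multiple backend REST calls in parallel.
        var userTask  = Backend.GetStringAsync($"https://api.example.com/users/{userId}");
        var postsTask = Backend.GetStringAsync($"https://api.example.com/users/{userId}/posts?limit=10");
        await Task.WhenAll(userTask, postsTask);

        // 3. Aggregate/transform into a single UI-friendly payload.
        return Ok(new { user = userTask.Result, recentPosts = postsTask.Result });
    }
}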

Opinion on ASP.NET MVC Onion-based architecture [closed]

What is your opinion on the following 'generic' code-first Onion-inspired ASP.NET MVC architecture:
The layers, explained:
Core - contains the Domain model, i.e. the business objects and their relationships. I am using Entity Framework to visually design the entities and the relations between them, and it lets me generate a script for the database. I get automatically generated POCO-like models, which I can freely refer to in the next layer (Persistence), since they are simple (i.e. not database-specific).
Persistence - Repository interface and implementations. Basically CRUD operations on the Domain model.
BusinessServices - A business layer around the repository. All the business logic should be here (e.g. GetLargestTeam(), etc). Uses CRUD operations to compose return objects or get/filter/store data. Should contain all business rules and validations.
Web (or any other UI) - In this particular case it's an MVC application, but the idea behind this project is to provide UI, driven by what the Business services offer. The UI project consumes the Business layer and has no direct access to the Repository. The MVC project has its own View models, which are specific to each View situation. I am not trying to force-feed it Domain Models.
So the references go like this:
UI -> Business Services -> Repository -> Core objects
What I like about it:
I can design my objects, rather than code them manually. I am getting code-generated Model objects.
UI is driven/enforced by the Business layer. Different UI applications can be coded against the same Business model.
Mixed feelings about:
Fine, we have a pluggable repository implementation, but how often do you really have different implementations of the same persistence interface?
Same goes for the UI - we have the technical ability to implement different UI apps against the same business rules, but why would we do that, when we can simply render different views (mobile, desktop, etc)?
I am not sure if the UI should only communicate with the Business Layer via view models, or whether I should use domain models to transfer data, as I do now. For display I am using view models, but for data transfer I am using domain models. Wrong?
What I don't like:
The Core project is now referenced in every other project - because I want/have to access the Domain models. In classic Onion architecture, the core is referenced only by the next layer.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is. I actually want to use the .EDMX for the visual model design, but I feel like the DbContext belongs to the Persistence layer, somewhere within the database-specific repository implementation.
As a final question - what is a good architecture which is not over-engineered (such as a full-blown Onion, where we have injections, service locators, etc) but at the same time provides some reasonable flexibility, in places where you would realistically need it?
Thanks
Wow, there’s a lot to say here! ;-)
First of all, let’s talk about the overall architecture.
What I can see here is that it's not really an Onion architecture. You forgot the outermost layer, the "Dependency Resolution" layer. In an Onion architecture, it's up to this layer to wire up Core interfaces to Infrastructure implementations (which is where your Persistence project should reside).
Here's a brief description of what you should find in an Onion application. What goes in the Core layer is everything unique to the business: the domain model, business workflows, and so on. This layer defines all technical implementation needs as interfaces (e.g. repository interfaces, logging interfaces, session interfaces). The Core layer cannot reference any external libraries and has no technology-specific code. The second layer is the Infrastructure layer. This layer provides implementations for the non-business Core interfaces; this is where you call your DB, your web services, etc. You can reference any external libraries you need to provide implementations, and pull in as many NuGet packages as you want :-). The third layer is your UI; well, you know what to put in there ;-) And the last layer is the Dependency Resolution layer I talked about above.
Direction of dependency between layers is toward the center.
Here's how it could look:
The question now is: how to fit what you’ve already coded in an Onion architecture.
Core: contain the Domain model
Yes, this is the right place!
Persistence - Repository interface and implementations
Well, you'll need to separate the interfaces from the implementations. The interfaces need to be moved into Core, and the implementations into the Infrastructure folder (you can call this project Persistence).
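A minimal sketch of that split, assuming EF6 and invented names (Team, ITeamRepository, AppDbContext):

// Core project: the domain model plus the repository *interface*; no EF reference.
namespace Core.Domain
{
    public class Team
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int MemberCount { get; set; }
    }
}

namespace Core.Interfaces
{
    using System.Collections.Generic;
    using Core.Domain;

    public interface ITeamRepository
    {
        Team GetById(int id);
        IEnumerable<Team> GetAll();
        void Add(Team team);
    }
}

// Persistence (Infrastructure) project: the EF-specific context and implementation.
namespace Persistence
{
    using System.Collections.Generic;
    using System.Data.Entity;   // EF6; EF Core would use Microsoft.EntityFrameworkCore
    using System.Linq;
    using Core.Domain;
    using Core.Interfaces;

    public class AppDbContext : DbContext
    {
        public DbSet<Team> Teams { get; set; }
    }

    public class EfTeamRepository : ITeamRepository
    {
        private readonly AppDbContext _db;
        public EfTeamRepository(AppDbContext db) { _db = db; }

        public Team GetById(int id) => _db.Teams.Find(id);
        public IEnumerable<Team> GetAll() => _db.Teams.ToList();
        public void Add(Team team) { _db.Teams.Add(team); _db.SaveChanges(); }
    }
}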
BusinessServices - A business layer around the repository. All the business logic should be here
This needs to be moved into Core, but you shouldn't use repository implementations here; manipulate only the interfaces!
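For instance, a business service in Core might look like this rough sketch, depending only on the ITeamRepository interface sketched above (GetLargestTeam echoes the example from the question):

namespace Core.Services
{
    using System.Linq;
    using Core.Domain;
    using Core.Interfaces;

    public class TeamService
    {
        private readonly ITeamRepository _teams;

        // The concrete repository is injected by the Dependency Resolution layer.
        public TeamService(ITeamRepository teams) { _teams = teams; }

        public Team GetLargestTeam() =>
            _teams.GetAll()
                  .OrderByDescending(t => t.MemberCount)
                  .FirstOrDefault();
    }
}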
Web (or any other UI) - In this particular case it's an MVC application
cool :-)
You will need to add a “Bootstrapper“ project, just have a look here to see how to proceed.
About your mixed feelings:
I won't discuss whether you need repositories or not; you'll find plenty of answers on Stack Overflow.
In my ViewModel project I have a folder called "Builders". It's up to my builders to talk to my business service interfaces to get data. A builder receives lists of Core.Domain objects and maps them into the right ViewModel.
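A rough sketch of such a builder, reusing the hypothetical TeamService and Team from above (TeamViewModel and its property names are likewise invented):

namespace Web.Builders
{
    using Core.Services;

    public class TeamViewModel
    {
        public string DisplayName { get; set; }
        public int MemberCount { get; set; }
    }

    public class TeamViewModelBuilder
    {
        private readonly TeamService _service;
        public TeamViewModelBuilder(TeamService service) { _service = service; }

        // Talk to the business service, then map the domain object to a ViewModel.
        public TeamViewModel BuildLargestTeam()
        {
            var team = _service.GetLargestTeam();
            return new TeamViewModel
            {
                DisplayName = team.Name,
                MemberCount = team.MemberCount
            };
        }
    }
}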
About what you don’t like:
In classic Onion architecture, the core is referenced only by the next layer.
False! :-) Every layer needs Core in order to access all the interfaces defined there.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is.
Once again, it's not a problem, since it's really easy to edit the T4 template associated with your EDMX. You just need to change the path of the generated files, and you can have the EDMX in the Infrastructure layer and the POCOs in your Core.Domain project.
Hope this helps!
I inject my services into my controllers. The services return DTOs which reside in Core.
The model you have looks good. I don't use the repository pattern, but many people do. It is difficult to work with EF in this type of architecture, which is why I chose to use NHibernate.
A possible answer to your final question.
CORE
DOMAIN
DI
INFRASTRUCTURE
PRESENTATION
SERVICES
In my opinion:
"In classic Onion architecture, the core is referenced only by the next layer."
That is not true; Core should be referenced by any layer... remember that the direction of dependency between layers is toward the center (Core).
"the layers above can use any layer beneath them" By Jeffrey Palermo http://jeffreypalermo.com/blog/the-onion-architecture-part-3/
About your EF: it is in the wrong place. It should be in the Infrastructure layer, not in the Core layer. Use the POCO Generator to create the entities (POCO classes) in Core/Model, or use an auto-mapper so you can map the Core model (business objects) to the entity model (EF entities).
What you've done looks pretty good and is basically one of two standard architectures that I see a lot.
Mixed feelings about:
Fine, we have a pluggable repository implementation, but how often do you really have different implementations of the same persistence interface?
Pluggable is often touted as good design, but I've never once seen a team swap out a major implementation of something for something else; they just modify the existing thing. IMHO, "pluggability" is only useful for being able to mock components for automated unit testing.
I am not sure if the UI should only communicate with the Business Layer via View models, or should I use Domain Models to transfer data, as I do now. For display, I am using view models, but for data transfer I am using Domain models. Wrong?
I reckon view models are a UI (MVC web) concern; if you added a different type of UI, for example, it might not require view models or might need something different. So I think the business layer should return domain entities and allow them to be mapped to view models in the UI layer.
What I don't like:
The Core project is now referenced in every other project - because I want/have to access the Domain models. In classic Onion architecture, the core is referenced only by the next layer.
As others have said this is quite normal. Usually everything ends up having a dependency on the Domain.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is. I actually want to use the .EDMX for the visual model design, but I feel like the DbContext belongs to the Persistence layer, somewhere within the database-specific repository implementation.
I think this is a consequence of Entity Framework. If you used it in "Code First" mode, you actually can, and usually do, have the context and repository in the Persistence layer, with the domain (represented as POCO classes) in what you've called Core.
As a final question - what is a good architecture which is not over-engineered (such as a full-blown Onion, where we have injections, service locators, etc) but at the same time provides some reasonable flexibility, in places where you would realistically need it?
As I touched on above, I wouldn't worry about the ability to swap things out, except to allow for automated unit tests, unless there is a specific requirement you know about that makes it very likely.
Good luck!

Are these the main differences between RestSharp and ServiceStack's Client Code? [closed]

I have been unable to make a definitive choice, and I was hoping that somebody (or a combination of a couple of people) could point out the differences between using RestSharp and ServiceStack's client services (keeping in mind that I am already using ServiceStack for my service). Here is what I have so far (differences only). The list is fairly small, as they are indeed very similar:
ServiceStack
Pros
Fluent Validation from my already created service POCO objects
One API for both client and service
Code reads better (i.e. Get<>(), Post<>())
Cons
Some of my strings must be written out by hand (i.e. if I make a GET request with query parameters, I must build that query string in my code)
I must use a different client class for each request/response format (JsonServiceClient, XmlServiceClient)
RestSharp
Pros
Just about everything can be a POCO (i.e. If I make a GET request with query parameters, I just add the parameters via code)
Switching between Request/Response types is simple (request.RequestFormat = DataFormat.Json/Xml)
Cons
Manual Validation (beyond that found in the Data Annotations)
Two APIs to learn (this is minor since they are both fairly simple)
Code is not as readable at a glance (barely) (i.e. request.Method = Get/Post... and the main call is Execute<T>())
I was leaning towards RestSharp, since it tends more towards straight POCO use with very little string manipulation; however, I think ServiceStack might be worth it to gain the validation and the more readable code.
So, here are the questions:
Which do you prefer?
Why the one over the other?
I know this is not a totally subjective question, but at bare minimum I am looking for the answer to this question (which is subjective):
Are any of my findings incorrect and/or are there any that I missed?
As the project lead of ServiceStack I can list some features of the ServiceStack Service clients:
The ServiceStack Service Clients are opinionated in consuming ServiceStack web services and its conventions. i.e. They have built-in support for structured validation and error handling as well as all clients implement the same interface so you can have the same unit test to be used as an integration test on each of the JSON, JSV, XML, SOAP and even Protobuf service clients - allowing you to easily change the endpoint/format your service uses without code-changes.
Basically if you're consuming ServiceStack web services I'd recommend using the ServiceStack clients which will allow you to re-use your DTOs you defined your web services with, giving you a typed API end-to-end.
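For example, a typed end-to-end call might look like this sketch (GetTeams/GetTeamsResponse are made-up DTOs; JsonServiceClient, Get(), Route, and IReturn<T> are the standard ServiceStack client API, though older versions located them in different namespaces):

using System;
using System.Collections.Generic;
using ServiceStack;

// The same request/response DTOs that define the service are shared with clients.
[Route("/teams", "GET")]
public class GetTeams : IReturn<GetTeamsResponse> { }

public class GetTeamsResponse
{
    public List<string> TeamNames { get; set; }
}

public static class ClientExample
{
    public static void Main()
    {
        // Swap JsonServiceClient for XmlServiceClient etc. without changing
        // anything else, since all the clients share the same interface.
        var client = new JsonServiceClient("http://localhost:1337");
        GetTeamsResponse response = client.Get(new GetTeams());
        Console.WriteLine(string.Join(", ", response.TeamNames));
    }
}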
If you're consuming a third-party API, I would recommend RestSharp, which is a more general-purpose REST client well suited to the task. Also, as ServiceStack just returns clean DTOs over the wire, it is easily consumable from RestSharp as well, which, if you prefer its API, is also a good option.
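For comparison, a sketch of the old-style RestSharp call the question describes (GithubRepo and the endpoint are illustrative; newer RestSharp versions have renamed parts of this API):

using System;
using System.Collections.Generic;
using RestSharp;

public class GithubRepo
{
    public string Name { get; set; }
}

public static class RestSharpExample
{
    public static void Main()
    {
        // Parameters are added via code rather than manual string building,
        // matching the question's Execute<T>() and RequestFormat remarks.
        var client = new RestClient("https://api.github.com");
        var request = new RestRequest("users/{user}/repos", Method.GET);
        request.AddUrlSegment("user", "octocat");
        request.RequestFormat = DataFormat.Json;   // switch formats in code

        IRestResponse<List<GithubRepo>> response = client.Execute<List<GithubRepo>>(request);
        Console.WriteLine(response.Data.Count);
    }
}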
UPDATE - Using ServiceStack's HTTP Client Utils
ServiceStack now provides an alternative option for consuming third-party APIs with its HTTP Client Utils extension methods, which provide DRY, readable APIs around common HttpWebRequest access patterns, e.g.:
List<GithubRepo> repos = "https://api.github.com/users/{0}/repos".Fmt(user)
    .GetJsonFromUrl()
    .FromJson<List<GithubRepo>>();
Url extensions
var url ="http://api.twitter.com/statuses/user_timeline.json?screen_name={0}"
.Fmt(name);
if (sinceId != null)
url = url.AddQueryParam("since_id", sinceId);
if (maxId != null)
url = url.AddQueryParam("max_id", maxId);
var tweets = url.GetJsonFromUrl()
.FromJson<List<Tweet>>();
Alternative Content-Type
var csv = "http://example.org/users.csv"
.GetStringFromUrl(acceptContentType:"text/csv");
More examples available from the HTTP Utils wiki page.