I have always heard that hexagonal architecture has to be agnostic of any framework and use interfaces (SPI) to delegate every part of the code that does not belong to the business layer.
But how do you create a reactive business layer with hexagonal architecture without using an additional framework?
Most of the time the SPI implementations will be reactive (as will the API implementations/adaptations), and the core of the business layer should be reactive as well.
Is there any JSR (implemented by each reactive framework) to use? Or should I define my own interfaces and perform the adaptation to whatever framework I end up using in the infrastructure part?
I've never developed software following a reactive programming approach and I don't know much about it, but I know it is a programming paradigm, so it defines the way you have to write and structure your source code.
From my point of view, RxJava wouldn't be considered a framework in the sense of a technology that you use to communicate with the actors living outside your application. RxJava is rather an extension to a programming language (Java) that otherwise lacks the means to write reactive code.
So I see no problem using RxJava to write the hexagon's source code.
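If you would rather keep even RxJava out of the port signatures, note that the JDK itself ships the Reactive Streams interfaces as `java.util.concurrent.Flow` (since JDK 9), so a port can be declared with standard-library types only. A minimal sketch of that idea, assuming a made-up `PriceFeed` port and a stub adapter (none of these names come from the question):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;

// Hypothetical driven port (SPI): the hexagon depends only on
// java.util.concurrent.Flow, the Reactive Streams interfaces in the JDK.
interface PriceFeed {
    Flow.Publisher<Double> prices(String symbol);
}

// A trivial "cold" in-memory adapter; a real adapter would wrap a reactive
// driver (RxJava, Reactor, R2DBC, ...) behind this same interface.
class StubPriceFeed implements PriceFeed {
    @Override
    public Flow.Publisher<Double> prices(String symbol) {
        double[] data = {10.0, 11.5};
        return subscriber -> subscriber.onSubscribe(new Flow.Subscription() {
            private int index = 0;
            private boolean completed = false;

            @Override
            public void request(long n) {
                // Honor backpressure: emit at most n items per request.
                while (n-- > 0 && index < data.length) {
                    subscriber.onNext(data[index++]);
                }
                if (index == data.length && !completed) {
                    completed = true;
                    subscriber.onComplete();
                }
            }

            @Override
            public void cancel() { }
        });
    }
}

// Minimal subscriber that collects every element, for use in unit tests.
class CollectingSubscriber implements Flow.Subscriber<Double> {
    final List<Double> seen = new ArrayList<>();
    final CompletableFuture<List<Double>> done = new CompletableFuture<>();
    @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
    @Override public void onNext(Double item) { seen.add(item); }
    @Override public void onError(Throwable t) { done.completeExceptionally(t); }
    @Override public void onComplete() { done.complete(seen); }
}
```

The business layer only ever sees `PriceFeed` and `Flow.Publisher`; the infrastructure adapter chooses the actual reactive library and adapts it to these interfaces.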
I use Clean Architecture in ASP.NET Core.
My layers are:
* UI
* IoC
* Domain
* Data
* Application
First I define model classes in the Domain layer and define an interface for each model.
Then I implement the interfaces in the Data layer. That part is fine.
After that, in the Application layer, I define a new interface for the models and then implement those interfaces in this layer using the methods of the repository classes in the Data layer. In this layer I can use logic and conditions, while in the Data layer I avoid logic.
Is this architecture good?
Also, when I have a simple model with CRUD operations, I have to copy the interface from the Domain layer to the Application layer and then implement the services.
I'm confused by this copy-and-paste in this architecture.
What's your opinion?
"The purpose of a good architecture is to defer decisions, delay decisions." -- Clean Architecture
It's hard to say whether your design is good without knowing what problem you're trying to solve.
If an architecture solves the problem, it might be good enough for now.
If an architecture defers the decisions that you would otherwise have to make at an early stage, it's good.
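One way to avoid the interface copy-paste the question describes is to declare the repository abstraction once, in the Domain layer, and let both the Data layer (implementation) and the Application layer (consumer) depend on that single interface. A minimal sketch (in Java rather than C#, and all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Domain layer: the model plus a single repository interface.
class Product {
    final String id;
    String name;
    Product(String id, String name) { this.id = id; this.name = name; }
}

interface ProductRepository {
    Product findById(String id);
    void save(Product product);
}

// Data layer: implements the Domain interface (in-memory here; no business logic).
class InMemoryProductRepository implements ProductRepository {
    private final Map<String, Product> store = new HashMap<>();
    public Product findById(String id) { return store.get(id); }
    public void save(Product product) { store.put(product.id, product); }
}

// Application layer: holds the business logic and depends on the very same
// Domain interface -- no duplicated copy of it is needed in this layer.
class ProductService {
    private final ProductRepository repository;
    ProductService(ProductRepository repository) { this.repository = repository; }

    Product rename(String id, String newName) {
        Product product = repository.findById(id);
        if (product == null) throw new IllegalArgumentException("unknown id: " + id);
        product.name = newName;   // validation/business rules would live here
        repository.save(product);
        return product;
    }
}
```

The Application layer stays free of persistence details, the Data layer stays free of logic, and there is only one interface to maintain.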
I'm learning Scala/Play2.1.3 from a C#/.Net/ASP.NET MVC background.
I wonder why there is no dependency injection support by default?
In the Play samples, all data-access methods are static in the domain model classes. They use factories instead of injection. What if I want to mock some data-access methods for unit testing?
There is no ready-to-use high-level ORM there. Actually, they discourage using ORMs! For SQL databases, I can't believe I have to write joins again; I don't remember the last time I wrote a join clause. Isn't that a step backward?
I've learned to use the SOLID principles, which are not (completely) observed in the Play framework, IMO.
Am I wrong, or should I consider using another framework?
You are right, the majority of the samples do not use dependency injection. But since version 2.1, it is possible to inject the controllers and their dependencies.
For dependency injection, check the documentation, and also how to unit test (last paragraph).
But since there are many static calls, you could still end up with a static reference somewhere, and then you won't be able to unit test your code.
That said, I think Play is a great framework; the team is modularizing it more and more, so it will keep getting better with respect to the SOLID principles.
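The testability point can be illustrated with plain constructor injection, no container required. A sketch (in Java; `UserDao`, `UserController`, and the stub are made-up names for illustration, not Play API):

```java
// The data-access dependency is an interface injected through the
// constructor, instead of being reached through a static call.
interface UserDao {
    String nameOf(long id);
}

// Simplified; a real Play controller would extend play.mvc.Controller.
class UserController {
    private final UserDao dao;
    UserController(UserDao dao) { this.dao = dao; }

    String show(long id) {
        return "Hello, " + dao.nameOf(id);
    }
}

// In a unit test, a stub replaces the database-backed implementation.
class StubUserDao implements UserDao {
    public String nameOf(long id) { return "alice"; }
}
```

With static data-access methods this swap is impossible, which is exactly why the static style in the samples hurts unit testing.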
I am looking for a framework/library that generates most or all of the generic MVP code itself, so that I can then extend that code. In the default GWT/Eclipse IDE setup, I have to write every bit of code by hand.
I have seen a few frameworks, like Tessell, which aim to generate a large part of the boilerplate code. Which framework do you recommend for this purpose, so that I can create new MVP GWT apps with minimal effort/fuss?
Take a look at Tessell:
* Tessell is a GWT application framework
* Follows a Model View Presenter architecture
* Less boilerplate (10x less LOC than hand-coded MVP)
Features:
* View generation of the MVP/UiBinder interfaces/implementations that allow for fast, DOM-decoupled unit tests but that suck to code by hand
* Rich models to make your application's presenter/business logic more declarative and have less spaghetti/inner-class code
* Dispatch-style server/client AJAX communication
* Stubs for awesome, out-of-the-box tests
* Conventions for forms, row tables, and cell tables
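For context, the hand-written MVP boilerplate that these frameworks generate looks roughly like the following (illustrative names, plain Java, no GWT dependency): a view interface the presenter talks to, so the real widgets can be swapped for a stub in fast, DOM-free unit tests.

```java
// View interface: the presenter never touches widgets or the DOM directly.
interface LoginView {
    String getUsername();
    void showError(String message);
}

// Presenter: pure logic, unit-testable without a browser.
class LoginPresenter {
    private final LoginView view;
    LoginPresenter(LoginView view) { this.view = view; }

    void onLoginClicked() {
        if (view.getUsername().isEmpty()) {
            view.showError("Username is required");
        }
    }
}

// Stub view for tests; the generated real view would wire up UiBinder widgets.
class StubLoginView implements LoginView {
    String username = "";
    String lastError;
    public String getUsername() { return username; }
    public void showError(String message) { lastError = message; }
}
```

Writing this pair (plus the UiBinder XML and its implementation) for every screen is exactly the repetition that Tessell and similar tools automate.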
I know people who have used mvp4g on some large projects effectively.
I used gwtp in two projects and it worked really well.
It has the concept of nested presenters/views, which might come in handy if you want to create reusable MVP components.
The GPE (Google Plugin for Eclipse) and Google WindowBuilder together will generate most of what you need for MVP code using the GWT libraries. You go to New -> WindowBuilder -> GWT UiBinder -> MVP -> MVP View. The wizard will generate the UiBinder code, a UI interface, a UI implementation, a place, and an activity. It will also use a client factory if you are using one. If you have a client.place and/or client.activity package(s), it will put the places and activities in those packages for you.
My development team is evaluating the various frameworks available for .NET to simplify our programming, one of which is CSLA. I have to admit to being a bit confused as to whether or not CSLA would benefit from being used in conjunction with a dependency injection framework, such as Spring.net or Windsor. If we combined one of those two DI frameworks with, say, the Entity Framework to handle ORM duties, does that negate the need or benefit of using CSLA altogether?
I have various levels of understanding of all these frameworks, and I'm trying to get a big picture of what will best benefit our enterprise architecture and object design.
Thank you!
CSLA is a framework for creating business entities, so it has separate concerns from an IoC container or an ORM. In an enterprise application you should consider the benefits of all three.
In particular, you should consider CSLA if you want data binding built into your models, dirty checking, n-level undo, validation and business rules, as well as the data portal implementation, which allows easy configuration of n-tier deployments.
Short answer: Yes.
Long answer: It requires a bit of grunt work and some experimentation to set up, but it can be done without fundamentally breaking CSLA. I put together a working prototype using StructureMap and the repository pattern, and used the BuildUp method of setter injection to inject within CSLA. I used a method similar to the one found here to ensure that my business objects are re-injected when they are deserialized.
I also use StructureMap's registry base class to separate my configuration into presentation, CSLA client, CSLA server, and CSLA global settings. This way I can use the linked-file feature of Visual Studio to include the CSLA server and CSLA global configuration files within the server-side Data Portal, and the configuration will always be the same in both places. This ensures I can still switch the Data Portal configuration in CSLA from 2-tier to 3-tier without breaking anything.
Anyway, I am still weighing the potential benefits of DI against its drawbacks, but so far I am leaning toward using it because testing will be much easier, although I am skeptical of trying to use any of the advanced DI features such as interception. I recommend reading the book Dependency Injection in .NET by Mark Seemann to understand the right and wrong ways to use DI, because there is a lot of misinformation on the Internet.
I feel that MVVM and REST, considered together, can produce a solid and reliable programming pattern for many years. (My intuition tells me that we SHOULD consider them together.) It also seems there should be a proper abstraction for asynchronous operations in ViewModels and Controllers - something like a composable asynchronous data-dependency graph (with support for transactions) - a thing that operates at a higher level of abstraction than the C# 4.0 parallel constructs (closer to business logic).
Are there any investigations, or best practices on that?
MVVM + REST - ?
MVVM + AsyncModels - ?
REST + AsyncModels - ?
MVVM + REST + AsyncModels - ?
I'm afraid your question is a little vague to give a really clear answer, but I will give you my thoughts.
If you are talking about using MVVM on the desktop (or JS in the browser) and REST on the server, then I think this is a very viable approach, as long as you consider the Model to be the media type returned from the HTTP request.
If you are talking about implementing RESTful endpoints using MVVM then I tend to prefer a straight MVC pattern.
I'm really not sure what you are asking with regard to AsyncModels. Are you suggesting that the models asynchronously load their own data from REST endpoints? Are these "AsyncModels" a replacement for the M in MVVM, or are they in addition to it?
It would be much easier to give you a valid answer if you could tell me on which physical tier you expect these various components to run.
I completely agree with you that MVVM + REST together are a perfect combination.
Maybe the reason this doesn't get more interest is that its natural target is Silverlight applications, and the framework being promoted there is RIA Services.
I personally prefer to get data from a REST server and to have my MVVM model objects correspond to REST resources.
I don't know of any investigations into this, but it sure is an interesting topic. For the async operations I would suggest using coroutines based on IEnumerable. I know two frameworks that use that:
1) Caliburn
2) the Dream REST framework, but (as far as I know) it is not available for Silverlight.