At the moment we have one huge API which is used by our backoffice, our frontend, and also our public API.
This causes me a lot of headaches because when building new endpoints I find a lot of application-specific logic in the code which I don't necessarily want to include in my endpoint. For example, the code to create a user might contain code to send a welcome email, but because that's not needed for the backoffice endpoint, I would then need to add a new endpoint without that logic.
I was thinking about a large refactor to break our code base into a number of smaller, highly specific service APIs, then building a set of small application APIs on top of those.
So for example, an application endpoint to create a new user might do something like this after the refactor:
customerService.createCustomer();
paymentService.chargeCard();
emailService.sendWelcomeEmail();
The application and service APIs will be entirely separate code bases (perhaps one code base per service), and they may be built in different languages. They will interact only through REST API calls. They will be on the same local network, so latency shouldn't be a huge issue.
Is this a bad idea? I've never seen/worked on a codebase which has separated the two before, so perhaps there is a better architecture to achieve the flexibility and maintainability I'm looking for?
Advice, links, or comments would all be appreciated.
Your idea of making multiple, well-defined services is sound, and it is really the best way to approach this. A purely microservices approach, however trendy it might seem, proves to be overkill more often than not. This is why I'd just redesign the existing API/services properly, following the solid and sound SOA design principles below. Good resources can be found on both serviceorientation.com and soapatterns.org; I've always used them as references in my career.
Consider what types of services you need
(image from serviceorientation.com)
Entity services are services centered around an entity in your domain - your Customer and Payment services, for example. They should be business-agnostic and reusable in all scenarios. They can sometimes be called by clients directly if that is sufficient for their needs, and they can be called by Task services.
Utility services contain logic you're likely to reuse in other services, but they are generally not called by clients directly. Rather, they are called by Task and Entity services. An example might be a Transliteration service.
Task services combine and reuse Entity and Utility services to perform meaningful tasks. They are usually not that agnostic, and they do implement specific business logic. They expose meaningful business operations, and they are what clients mostly call. A sketch of the three types follows below.
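To make the taxonomy concrete, here is a minimal sketch of the three service types in TypeScript (all names are illustrative assumptions, not part of any framework):

    // Entity service: business-agnostic CRUD around one domain entity.
    interface CustomerService {
      createCustomer(data: { name: string; email: string }): Promise<{ id: string }>;
      getCustomer(id: string): Promise<{ id: string; name: string; email: string }>;
    }

    // Utility service: reusable cross-cutting logic, not called by clients directly.
    interface TransliterationService {
      transliterate(text: string, targetScript: string): Promise<string>;
    }

    // Task service: composes Entity and Utility services into a business operation.
    class RegisterCustomerTask {
      constructor(
        private customers: CustomerService,
        private transliteration: TransliterationService,
      ) {}

      async register(name: string, email: string): Promise<string> {
        const latinName = await this.transliteration.transliterate(name, "Latin");
        const { id } = await this.customers.createCustomer({ name: latinName, email });
        return id;
      }
    }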
Principles to follow when redesigning
I strongly recommend going over this cheat sheet and making sure everything there is covered when you do your redesign. It's a great help.
In general, you should make sure that:
Each service has a common context and follows the separation of concerns principle - e.g. the Clients service handles only client-related operations, and so on.
Each of the Entity and Utility services is business-agnostic and basic enough that it can be reused in multiple scenarios and contexts without being changed. The contract must be simple - CRUD plus only the common operations that make sense in most usage scenarios.
Services follow a common data model - make sure all the data structures you use are used uniformly across services, to prevent the need for integration efforts in the future and to let clients combine services freely. If you need to receive a customer that another service returns, this should happen without any transformation (see the sketch below).
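For instance, a shared Customer shape that every service returns and accepts verbatim might look like this (a hedged sketch; the field names are illustrative):

    // Common data model: one canonical Customer used by all services,
    // so no transformation is needed when services are combined.
    interface Customer {
      id: string;
      name: string;
      email: string;
    }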
OK, but where to put the non-agnostic logic?
Now, you have multiple options for abstracting business logic whenever you need complex business functionality. Which one you choose depends on your scenario:
Leave the logic to the clients. Let them combine your simplified services.
If there is business logic that is commonly implemented in multiple of your applications and has the potential for heavy reuse, you can implement a composite service that reuses multiple existing underlying services and exposes the combined logic, as sketched below.
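For example, the create-user flow from the question could become exactly such a composite service. A minimal sketch in TypeScript (the service URLs and payloads are illustrative assumptions):

    // Composite service: one client-facing operation that fans out to the
    // underlying customer, payment, and email services over plain REST.
    async function createUserComposite(input: { name: string; email: string; card: string }) {
      const customer = await postJson("http://customer-service.internal/customers", {
        name: input.name,
        email: input.email,
      });
      await postJson("http://payment-service.internal/charges", {
        customerId: customer.id,
        card: input.card,
      });
      await postJson("http://email-service.internal/welcome-emails", { to: input.email });
      return customer;
    }

    async function postJson(url: string, body: unknown) {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(body),
      });
      if (!res.ok) throw new Error(`POST ${url} failed: ${res.status}`);
      return res.json();
    }

The backoffice endpoint can then call the customer service directly and skip the email step entirely.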
Service composability: concerns about the communication overhead of multiple API calls
Well, this is an age-old question - should you make multiple API calls when they will probably create some communication overhead? The answer is: it depends on how complex your scenario is, how much reuse you expect, and how flexible you want to be. Also, is speed critical, and to what extent? In Service-Oriented Architecture, though, this is a very common approach - reusing your existing services and combining them in new configurations as needed. Yes, it does add some overhead, but I've seen implementations in very complex environments, for example telecoms, where thanks to the use of ESB solutions, message queues, etc., the overhead is negligible compared to the benefits. Here is a common architecture approach (image from serviceorientation.com).
The mandatory legacy refactoring heads-up
More often than not, changing the established contract for multiple existing client systems is a messy business and could very well lead to lots of refactoring and hunting for needle-in-a-haystack functionality that's buried somewhere deep in the (possibly) legacy code. Business logic might be dispersed everywhere. So make sure you're ready and have the control, time, and will to lead this battle.
Hope this helps
Is this a bad idea?
No, but this is too big an overall question to be able to give very specific advice on.
I'd like to separate this into 3 areas:
Approach
Design
Technology
Working backwards, the Technology is the final and most specific part; it totally depends on what your current environment is (platforms, skills), and (hopefully) it will be reasonably self-evident to you once the other things are in progress.
The Design that you outlined above seems like a good end-state - having multiple, specific, focused APIs, each with their own responsibility. Again, the details of the design will depend on the skills of you and your organization, and the existing platforms that you have. E.g. if you are already using TIBCO (for example) and have a lot invested (licenses, platforms, tools, people) then leveraging some of their published patterns/designs/templates makes sense; but (probably) not if you don't already have TIBCO exposure.
In the abstract, REST API services seem like a good starting point - there are a lot of tools and platforms at all levels of the system for security, deployment, monitoring, scalability, etc. If you are NGINX users, they have a lot of (platform-independent) thoughts on how to do this on the NGINX blog, including some smart thinking on scalability and performance. If you are more adventurous and have a smart, eager team, take a look at event-driven architecture - see this.
Approach (or Process) is the key thing here. Ultimately, this is a refactoring, though your description of "a large refactor" does scare me a little - put that way, it sounds like you are talking about a big-bang change and calling it refactoring. Perhaps it is just language, but what's in my mind would be "an evolution of the 'one huge API' into multiple, specific, focused APIs (by refactoring the architecture)". One place to start is Martin Fowler; while his book is about refactoring software, the principles and approach are the same, just at a higher level. Indeed, he talks about just this here.
IBM talk about refactoring to microservices and make it sound easy to do in one step, but it never is (outside the lab).
You have an existing API, serving multiple internal and external clients. I suggest that you'll want to keep this interface stable for those clients - separate your refactoring of the implementation from the additional concerns of liaising with and coordinating external systems/groups. My high-level starting approach would be:
identify a small (3-7) number of related methods on the API
ideally, if a significant, limited-scope change is needed for these methods anyway, that is good - business value comes with the code change
design/specify a new stand-alone API specifically for these methods
at first, clone the existing model/naming/style
code a new service just for these
with proper automated CI/CD testing and deployment practices
with associated monitoring
modify the existing API so calls to these methods redirect to the new service (see the sketch after this list)
perhaps behind a run-time switch to flip between the old and the new implementation
remove the old implementation from the codebase
capture issues, assumptions and problems along the way
the first pass will involve a lot of learning about what works and what doesn't
then repeat the process over and over, incorporating improvements each time
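Here is the redirect step sketched in TypeScript/Express, assuming a create-user method and an environment-variable switch (the endpoint, the service URL, and the legacy function are illustrative assumptions):

    import express from "express";

    const app = express();
    app.use(express.json());

    // run-time switch between the old and the new implementation
    const useNewService = process.env.USE_NEW_USER_SERVICE === "true";

    // stand-in for the old in-process logic, kept until the new path is proven
    function legacyCreateUser(body: unknown) {
      return { created: true, via: "legacy", body };
    }

    app.post("/api/users", async (req, res) => {
      if (useNewService) {
        // redirect the call to the new stand-alone service on the local network
        const r = await fetch("http://user-service.internal/users", {
          method: "POST",
          headers: { "content-type": "application/json" },
          body: JSON.stringify(req.body),
        });
        res.status(r.status).json(await r.json());
      } else {
        res.json(legacyCreateUser(req.body));
      }
    });

    app.listen(3000);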
At some point in the future, when appropriate due to other business-driven needs, the API published to the back-end, front-end and/or public clients can change, but that is a whole different project.
As you can see, if the API is huge (1,000 methods at, say, 7 per pass means roughly 140 releases), this is a many-months process, and having a reasonably frequent release schedule is important. And there may be no value in improving code that works reliably and never changes, so a (potentially) large portion of the existing API may remain, just wrapped by a new API.
Other considerations:
public API? Maybe a new version (with significant changes) will be needed sooner than for the internal APIs
focus on the methods/services used by it
what parts/services change the most (have the most enhancement requests approved)
these are the bits most likely to change, and could benefit most from a better process/architecture
what are future plans for change and where would the API be impacted
e.g. change to user management, change to payment processors, change to fulfilment systems
e.g. new business plans (new products/services)
consider affected methods in the API
Also see:
Using Microservices for Legacy System Modernization
Migrating From a Monolith to APIs and Microservices
Break the Monolith! Loosely Coupled Architecture Brings DevOps Success
From the CEO’s Desk: Application Modernization – Assess, Strategize, Modernize!
Microservices Architecture As A Large-Scale Refactoring Tool
Probably the biggest four pieces of advice that I can give are:
think refactoring: small changes that don't affect function
think agile: small increments that are valuable, testable, achievable
think continuous: have a vision for where you will (eventually) get to, then work the process continuously
script & automate the processes - from code and documentation to testing, deployment, and monitoring...
improving it every time!
you have an application/API that works - keep it working!
That is always the first priority (you just need to work to carve out time/budget for maintenance)
Not a bad idea at all.
What you're looking for is a microservices architecture, and with that comes the question of how you break your system into well-defined services.
We use Domain-Driven Design to break our system into microservices, along with the Lagom framework, which allows every service to live in a different code base and provides an event-driven architecture between microservices.
Now let's look at your problem at a lower level: you said one endpoint contains code for creating a user and sending an email, another just creates the user, and there might be other code as well.
First we need to understand what types of code you are writing:
Domain object logic (e.g. a User object) - which parameters are valid, and so on. This should be independent of the service endpoint and encapsulated in one class, such as a User class; in Domain-Driven Design terms we call it an Aggregate.
Business reactions - e.g. on user creation, send an email. Using an event-driven architecture, this type of logic is separated into process managers or sagas, which in most cases can act conditionally - for example, send a welcome email for a user created externally but not for one created internally - by carrying extra data in the event. A sketch follows below.
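Here is what such a process manager might look like, as a hedged TypeScript sketch (the event shape and the service interface are illustrative, not Lagom APIs):

    // Extra data in the event lets the process manager react conditionally.
    interface UserCreated {
      userId: string;
      email: string;
      source: "external" | "internal";
    }

    class WelcomeEmailProcessManager {
      constructor(
        private emailService: { sendWelcomeEmail(to: string): Promise<void> },
      ) {}

      // called whenever a UserCreated event is published
      async on(event: UserCreated): Promise<void> {
        // only externally created users get the welcome email
        if (event.source === "external") {
          await this.emailService.sendWelcomeEmail(event.email);
        }
      }
    }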
Also, with the way you are currently doing it, how are you handling transactions across services?
I want to dive into the whole diversity of tools that provide connections between programs over the network.
To clarify the question, I divide it into subquestions:
Why were some groups of programs (or specific tools/frameworks/approaches, together with the programming languages in which those frameworks can be used) popular in each period of time? (I expect a description of the problems that were solved, a description of the tools, why those tools were considered the best solutions to those problems at the time, and why some tools lost popularity.)
What is the entire history of software communication over the network? (tool/approach popularity, decade by decade)
What are the modern solutions to this problem?
I can distinguish only two significant approaches.
RPC, RMI, and their implementations. (I saw this, but it is about a concrete problem and specific tools to solve it; I want to see the place of this problem in the whole picture of interconnecting programs over the network.) I have heard of these implementations: ONC RPC, XML-RPC, CORBA, DCOM, gRPC - but which are active now? Which are reasonable to use? Which are preferable, and why? I want answers that are not opinion-based, so I accept answers like "technology A is better than technology B for problem X because ..." only if there is reliable research, statistics, or facts to back them. I heard that RPC and RMI were popular 10 years ago. Are they still?
Web services: REST, SOAP.
Am I missing something? Maybe there are technologies that solve the problem in a completely new way? Maybe there are technologies that can be treated as replacements for RPC/RMI and web services? Can we replace RPC/RMI with REST for any task, or only for modern tasks? Should I separate the technologies not into RPC and web services, but in some other manner?
As a partial answer, I can give you my feedback on the use of RabbitMQ.
As explained here, it provides a lot of different ways to use it:
RPC by implementing a "callback" queue
One-to-one and one-to-many routing strategies to propagate your events through your whole infrastructure and target the right destination.
It comes with the ability to persist messages, to avoid losing data when a crash occurs, and also with plugins that add possibilities (e.g. the x-delayed plugin).
This technology, written in Erlang, is powerful and a must-try for communication between programs.
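As an illustration of the RPC pattern, here is a minimal client-side sketch in TypeScript with amqplib, following the callback-queue approach from the RabbitMQ tutorials (the queue name is illustrative):

    import amqp from "amqplib";
    import { randomUUID } from "node:crypto";

    async function rpcCall(request: string): Promise<string> {
      const conn = await amqp.connect("amqp://localhost");
      const ch = await conn.createChannel();

      // exclusive, server-named "callback" queue for the reply
      const { queue: replyQueue } = await ch.assertQueue("", { exclusive: true });
      const correlationId = randomUUID();

      let resolveReply!: (value: string) => void;
      const reply = new Promise<string>((resolve) => (resolveReply = resolve));

      // match the reply to our request via the correlation id
      await ch.consume(
        replyQueue,
        (msg) => {
          if (msg?.properties.correlationId === correlationId) {
            resolveReply(msg.content.toString());
          }
        },
        { noAck: true },
      );

      ch.sendToQueue("rpc_queue", Buffer.from(request), {
        replyTo: replyQueue,
        correlationId,
      });

      const result = await reply;
      await conn.close();
      return result;
    }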
To your question "Am I missing something": yes.
Very popular communication patterns are the so-called event-driven or message-driven protocols. These protocols are often used in distributed systems such as web applications, microservices, and IoT environments. The communication is completely asynchronous and allows building scalable and loosely coupled systems.
There are many different frameworks and methods for event-driven systems, like WebSockets, WebHooks, pub-sub, and messaging libraries such as ActiveMQ, OpenMQ, RabbitMQ, ZeroMQ, and MQTT.
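To make the pub-sub idea concrete, here is a minimal fanout sketch in TypeScript with RabbitMQ/amqplib (the exchange name is an illustrative assumption):

    import amqp from "amqplib";

    // Publisher: fires an event at a fanout exchange without knowing who listens.
    async function publish(event: object) {
      const conn = await amqp.connect("amqp://localhost");
      const ch = await conn.createChannel();
      await ch.assertExchange("user-events", "fanout", { durable: false });
      ch.publish("user-events", "", Buffer.from(JSON.stringify(event)));
      await ch.close();
      await conn.close();
    }

    // Subscriber: binds a private queue to the exchange and reacts asynchronously.
    async function subscribe(handler: (event: object) => void) {
      const conn = await amqp.connect("amqp://localhost");
      const ch = await conn.createChannel();
      await ch.assertExchange("user-events", "fanout", { durable: false });
      const { queue } = await ch.assertQueue("", { exclusive: true });
      await ch.bindQueue(queue, "user-events", "");
      await ch.consume(
        queue,
        (msg) => msg && handler(JSON.parse(msg.content.toString())),
        { noAck: true },
      );
    }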
Hope this info helps for your research.
I am wondering if it would be possible to develop an enterprise-level web application without the use of a standard MVC structure and application server, by carrying the business/flow logic and session data to the client-side Javascript and making it talk to REST data services directly. Maybe we could make use of an authorization/authentication layer and a second validation layer sitting on top of the data services. All these services operate on standard HTTP methods, support configurable logging and monitoring, and content or query parameters are all contained in the HTTP request/response body. Static HTML and Javascript are served to the browser, and the rest is carried out by Javascript functions talking to the HTTP-based authorization/authentication, validation, and then data services. Do you think this kind of architecture could satisfy enterprise-level web application requirements?
It's possible but unlikely; what are the drivers that suggest this architecture to you? Is it just to be different or are there some specific aspects that this best addresses?
by carrying the business/flow logic and session data to the client-side Javascript and make it talk to REST data services directly
In theory you'd still be able to have an appropriately layered solution (Business Logic (BL) script vs. UI-focused script), but practically speaking it'd be messy, and you'd lose the ability to physically separate it into different tiers. This could "bite" you at any number of places in the life of the system.
"Enterprise" grade systems are seldom small, I hate to think how much logic you'd be having to send over the wire to support a given action/process.
Putting all the BL into a scripting language ties you to that platform, and platforms change over time. The bad thing about scripts is that, whilst they are stable to a degree, I'd suggest they are more exposed to change than server-based platforms like Java or .Net. In an enterprise scenario, the servers will have very tight change control and upgrade paths mapped out for them, whereas browsers are much more open to regular change.
There's the issue of compatibility - unless you're tied to a specific browser (down to the version level), guaranteeing consistent behavior is going to be harder and will likely require more development effort. Let's say you deliver the solution successfully; what do you do when the business wants to take advantage of mobile computing - say iPads? Your only option is going to be a browser - you won't be able to take advantage of any of the native advantages of the platform. "The web and browsers" might seem like they'll be around forever - but then I'm guessing that's what mainframe folks said at the time. A server-centered solution is going to give you more life for less expense.
Staffing will be an issue - you'll need very strong JavaScript and server-side developers.
Security: having your core BL out on the client where it's much more exposed sounds very dangerous.
EDIT:
Web apps can be slow for many reasons - not many of which are reason enough to put all your BL in JavaScript on the client. Building apps for performance is a whole field of endeavor on its own - I suggest you get more familiar with architecting and implementing for performance before you write off n-tier web apps altogether :)
Regarding keeping your layers separated: there are different ways of doing this, but it boils down to abstraction - and, more correctly, to keeping good design principles in mind; if you haven't heard of SOLID, that would be a good place to start. In terms of implementation, start reading up on Dependency Inversion (FYI - self-promotion: the article's mine and is .Net-focused, but you should have no problem tracking down Java-based ones too).
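As a minimal illustration of Dependency Inversion (a hedged sketch in TypeScript; all names are hypothetical): the business logic depends on an abstraction, and the concrete data-service implementation is injected from the outside.

    // The business logic only knows about this abstraction.
    interface CustomerRepository {
      findById(id: string): Promise<{ id: string; name: string }>;
    }

    class CustomerGreeter {
      constructor(private repo: CustomerRepository) {} // injected dependency

      async greet(id: string): Promise<string> {
        const customer = await this.repo.findById(id);
        return `Hello, ${customer.name}!`;
      }
    }

    // One concrete implementation talks to a REST data service (URL is hypothetical);
    // a test double or a different tier can be substituted without touching the BL.
    class RestCustomerRepository implements CustomerRepository {
      async findById(id: string) {
        const res = await fetch(`https://api.example.com/customers/${id}`);
        return res.json();
      }
    }

    const greeter = new CustomerGreeter(new RestCustomerRepository());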
I was trying today to figure out how to work with web services, and I found many articles really preaching the gospel of web services and their effectiveness in the market.
My questions are:
For a complex project with critical data, is it better to opt for web services?
What makes web services different from other ways of fetching data?
The answer is... it depends. Web services are not really the next big thing; they have been a huge thing for years now. In business applications, web services allow a level of interoperability and capabilities never seen before.
They help with integration with legacy systems, cooperation between distinct departments, defining loosely coupled interfaces, and such. You should read a bit about Service-oriented architecture.
If all you need is a PHP application that handles data from a single database, you might not need web services at all. If you are designing a solution that revolves around multiple data sources, with complex security involved, multiple languages and/or multiple applications, then web services become essential.
SOAP is a protocol; if you're working with PHP, you'll need to check out the PHP: SOAP guide to understand how it works. For (almost) every language, there are existing APIs for developing web services. Anyhow, you might want to check out RESTful web services instead of SOAP-based ones; they are generally simpler to implement and understand. But that's another debate ;-).
Cheers.
That mostly depends on the definition of "big thing".
My experience with the WS stack, SOAP, and all the acronym soup is that it takes an awful lot of effort to deploy. The state of the frameworks is complex, and definitely not something a hobbyist can put to work in a couple of afternoons. We have seen how many things on the net became the next big thing just because they were easy. Easy to understand, easy to interact with, easy in technology. Wikipedia, Twitter, Digg, YouTube are internet big things, and they are, from the interaction point of view, light years away from SOAP/WS-based interaction. They are KISS: simple and stupid. A whole horizontal market was opened just because of their simplicity. Even multiprocessing platforms like BOINC don't use anything near the WS stack, yet they are the core of many high-throughput efforts.
Now, if you have to deal with complex multi-host transactions, authentication, credential delegation, caching... WS is there. It's the target that makes the need: banks, flight reservations, stuff like that. But they won't impact the common programmer. They require too much energy and too many different competences at once to become something usable for a horizontal market of developers.
Also, I am a REST person. I never advocated SOAP with much emphasis, but there was nothing else, and it was a better evolution over XML-RPC (which, if you have to perform dumb RPC, is IMHO still a good choice). Now I have changed my mind. You mostly have resources on the web, and you interact with them via HTTP methods. SOAP is nothing but RPC on hypersteroids. No, REST is not the solution that replaces WS. At all. It's simply easier to use and to debug, albeit more difficult to design (you have to think in terms of resources instead of method calls). It's KISS. That's why it has more chances for success on the horizontal market.
It depends.
Web services can be useful if you need to expose the data across security boundaries, where a direct connection to an RDBMS would be a bad idea.
A popular method for implementing web services nowadays is a RESTful API (e.g. via Ajax/JSON). It's already the "next big thing" - almost every major player has been offering one for years. Google, Flickr, Twitter, you name it.
The big advantage is that they help to implement an API layer.
If you implement your solution using a "bus" where the web services sit, it opens up your product to a far greater range of users and moves it away from being a proprietary product.
It also enables people to interface using a wide range of solutions; e.g. web service clients can be implemented using the command line, JSP, Java, ASP, .NET, PHP, etc.
They also enable code reuse - e.g. if you implement GetClientDetails(ID) as a web service for one user, then when the next group comes along wanting the same thing, all you have to do is give them the WSDL and they are away.
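For comparison, here is what exposing such an operation as a plain HTTP/JSON service might look like - a hedged sketch in TypeScript/Express (the route and response shape are illustrative; with SOAP, the WSDL would describe this contract for consumers instead):

    import express from "express";

    const app = express();

    // GetClientDetails(ID) as an HTTP endpoint; any client that speaks
    // HTTP/JSON (command line, JSP, .NET, PHP, ...) can reuse it.
    app.get("/clients/:id", (req, res) => {
      // a real service would query a data store here
      res.json({ id: req.params.id, name: "Example Client", status: "active" });
    });

    app.listen(8080);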
Are people still writing SOAP services or is it a technology that has passed its architectural shelf life? Are people returning to binary formats?
The alternative to SOAP is not binary formats.
I think you're seeing a surge in the desire to leave the complexities of WS-* behind in favor of REST and JSON, because they're much simpler to use and don't require frameworks to be used successfully. The problems that WS-* ostensibly tries to solve aren't problems for most users, but they have to pay for the complexity anyway.
I still write WS-*–based services. Somewhat surprisingly, I've had less trouble with them when trying to interoperate with less capable developers. That is because if I send them a WSDL file, they know how to crank it through their tool and get an API they can call, while remaining blissfully unaware of what is happening under the hood. To give customers a RESTful service, I have to start talking to them about HTTP and XML, which they really don't understand as well as they think they do, and then I start getting a headache.
In other words, to be successful with REST, both the service provider and consumer have to know what they're doing (and they can keep things simple and come up with a great, non–WS-* solution). With WS-* technologies, it can still succeed even if only one party has a clue.
I think, however, that REST-oriented standards that are much less complicated than the current WS standards will eventually emerge, and when that happens, comparable tools will be available too.
I think so. RESTful solutions are more and more sensible for the vast majority of use cases; the complexities of SOAP and other RPC technologies just aren't worth the effort anymore.
I wouldn't consider SOAP legacy at all. REST vs. SOAP is really just the continuation of the debate of COM/CORBA vs. HTTP POST/GET, etc. SOAP is nothing more than an updated version of the same principles defined by COM and CORBA (contracts, providers, consumers, etc.). It's just that SOAP appears to have succeeded (at least partially) where the other two failed (or it could be that SOAP just has a better marketing team): SOAP really does allow different systems to connect rather easily compared to its predecessors. That being said, it still suffers from the same drawbacks that COM/CORBA did... it can get really complex.
I think REST is just coming back into style at the moment. It's nothing new; people are just taking another look at it. Look at the web. It's REST, and it's been around for years. Five years from now, people are going to look back and say the same thing about it being legacy and needing to change. It's the nature of software development. Everything goes in cycles.
The debate about which one is better is going to be just like the tabs vs. spaces debate. There are going to be people on different sides swearing that one is better. Really, in the end, they both accomplish the same goal. Sure, one will be a better solution than the other in some situations, but in the end neither will be superior 100% of the time.
We were using SOAP, but since we control both messaging endpoints (a thick client out on the web connecting to our servers), we decided that the "lingua franca" of XML wasn't offering any real benefit. Instead, we're experimenting with binary serialization via Google protocol buffers, and we like everything we've learned so far. It's somewhat CORBA-esque, but doesn't make me grumpy the way CORBA did. We still haven't found the best fit for the RPC layer, but we're pretty sure the payload will be protocol buffers.
The point I'm trying to make is that if you control both sides of the conversation, there are significant efficiency advantages in bypassing the XML tax.
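To illustrate the size difference, here is a hedged sketch with protobufjs in TypeScript (the message shape is illustrative): the contract is defined once, and the encoded payload is a handful of bytes rather than a full XML envelope.

    import protobuf from "protobufjs";

    // Define the binary contract inline (normally this lives in a .proto file).
    const { root } = protobuf.parse(`
      syntax = "proto3";
      message ClientDetails {
        string id = 1;
        string name = 2;
      }
    `);

    const ClientDetails = root.lookupType("ClientDetails");
    const payload = { id: "42", name: "Example Client" };

    // Encode to a compact binary buffer...
    const buffer = ClientDetails.encode(ClientDetails.create(payload)).finish();

    // ...and decode it back on the other side of the wire.
    const decoded = ClientDetails.decode(buffer);
    console.log(buffer.length, decoded.toJSON());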
Yes, some people still are (and now it's 2011!). I think the main reason is that MS WCF automatically generates SOAP bindings. The horror.
It's impossible to define the best technology solution without considering what the problem is - in other words, what the context is. Both REST and SOAP have their place. If you have a high-traffic site and a development audience who is comfortable with REST, then SOAP would be a bad choice, primarily because the message size is so incredibly bloated. If you have a small-scale site with a modest development budget, then SOAP will be a superior choice, due to automatic proxy generation from WSDL. To make a fair comparison, it should be mentioned that implementing a REST conversation takes more development time and is therefore more expensive - a very relevant fact for your boss.
While it is true that SOAP is a more complicated protocol, in my experience this doesn't translate into maintainability issues. That's because the messages ride on HTTP and can be debugged just as easily as REST messages, and the SOAP stacks available on the major platforms are very solid.
The complexity of SOAP is of course an advantage if your requirements include sophisticated items like federated message security. On the other hand, those kinds of requirements are not seen that often in my experience. The WS standards committee may have been vulnerable to some YAGNI issues. Now that web service communication is commonplace, it's turning out to be simpler than was originally envisioned.