Using the WCF programming model for general application development as well?

This may sound weird or funny but I need to ask this question for clarification. Basically I like the WCF programming model. So assuming I don't strictly have requirements for a 'service'-based application, can I still go ahead and leverage the WCF infrastructure?
I know it is technically possible, however I want to know if there is a noticeable penalty in doing so. I know I can improve performance by using an appropriate binding (say, NetNamedPipeBinding).
So how can I best manage performance, and beyond performance, what other aspects should I be concerned about?

We have actually tried this. I had listened to a podcast that put forward the idea that every class should have a WCF interface.
On the plus side:
It forces you to think more about how the parts of your system interact
When we needed to split the application over 2 machines, it was just a config change
On the negative side:
There was a lot of extra work on configuration, and time lost to config errors
What I think happens: say you have a 1 MB object and you send it over WCF named pipes; although it is all in memory, you still serialize and "send" the whole 1 MB. If you just use plain classes, you pass a reference.
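To make that serialization point concrete, here is a minimal sketch of that kind of setup - the ICustomerService contract and Customer type are made up for illustration. Even though the host and the client share a process, the call still crosses the full WCF channel stack, so the Customer is serialized and deserialized rather than passed by reference:

    // Minimal sketch (hypothetical contract/types) of hosting a class behind
    // NetNamedPipeBinding and calling it from the same process.
    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        Customer GetCustomer(int id);
    }

    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    public class CustomerService : ICustomerService
    {
        public Customer GetCustomer(int id) => new Customer { Id = id, Name = "Example" };
    }

    class Program
    {
        static void Main()
        {
            var address = new Uri("net.pipe://localhost/customers");
            using (var host = new ServiceHost(typeof(CustomerService), address))
            {
                host.AddServiceEndpoint(typeof(ICustomerService), new NetNamedPipeBinding(), "");
                host.Open();

                var factory = new ChannelFactory<ICustomerService>(
                    new NetNamedPipeBinding(), new EndpointAddress(address));
                ICustomerService proxy = factory.CreateChannel();

                // This call goes through the WCF channel stack: the Customer is
                // serialized and deserialized even though both ends are in-process.
                Customer c = proxy.GetCustomer(42);
                Console.WriteLine(c.Name);
            }
        }
    }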
On balance I would not try this again, although WCF 4, with less configuration, would make it more doable.


Implementing SOA with RESTful service and application APIs?

At the moment we have one huge API which is used by our backoffice, our frontend, and also our public API.
This causes me a lot of headaches because when building new endpoints I find a lot of application-specific logic in the code which I don't necessarily want to include in my endpoint. For example, the code to create a user might contain code to send a welcome email, but because that's not needed for the backoffice endpoint I would then need to add a new endpoint without that logic.
I was thinking about a large refactor to break our code base into a number of smaller, highly specific service APIs, then building a set of small application APIs on top of those.
So for example, an application endpoint to create a new user might do something like this after the refactor:
customerService.createCustomer();
paymentService.chargeCard();
emailService.sendWelcomeEmail();
The application and service APIs will be entirely separate code bases (perhaps a separate code base per service); they may also be built using different languages. They will only interact through REST API calls. They will be on the same local network, so latency shouldn't be a huge issue.
Is this a bad idea? I've never seen/worked on a codebase which has separated the two before, so perhaps there is a better architecture to achieve the flexibility and maintainability I'm looking for?
Advice, links, or comments would all be appreciated.
Your idea of making multiple, well-defined services is sound, and really it is the best way to approach this. Going with a purely microservices approach, however trendy it might seem, proves to be overkill more often than not. This is why I'd just redesign the existing API/services properly and follow the solid, sound SOA design principles below. Good resources can be found on both serviceorientation.com and soapatterns.org; I've always used them as references in my career.
Consider what types of services you need
(image from serviceorientation.com)
Entity services are generally your Client and Payment services - i.e. services centered around an entity in your domain. They should be business-process-agnostic and reusable in all scenarios. They can sometimes be called by clients directly, if that is sufficient for their needs, and they can also be called by Task services.
Utility services contain logic you're likely to reuse in other services, but are generally not called by the clients directly. Rather, they'd be called by Task and Entity services. An example might be a Transliteration service.
Task services combine and reuse Entity and Utility services into meaningful tasks. Most often they are not that agnostic and they do implement some specific business logic. They have meaningful business operations and they are what clients mostly call.
Principles to follow when redesigning
I strongly recommend going over this cheat sheet and making sure everything there is covered when you do your redesign. It's a great help.
In general, you should make sure that:
Each service has a common context and follows the separation-of-concerns principle. E.g. the Clients service is only for client-related operations, etc.
Each of the Entity and Utility services is business-agnostic and basic enough that it can be reused in multiple scenarios and contexts without being changed. The contract must be simple - CRUD plus only the common operations that make sense in most usage scenarios.
Services follow a common data model - make sure all the data structures you use are used uniformly across all services, to prevent the need for integration efforts in the future and to let clients combine services freely. If one service needs to consume a customer that another service returns, this should happen without the need for transformation.
OK, but where to put the non-agnostic logic?
Now, you have multiple options for abstracting business logic whenever you need complex business functionality. Which one you choose depends on your scenario:
Leave the logic to the clients and let them combine your simplified services.
If there is business logic that is commonly implemented in multiple of your applications and has the potential to be heavily reused, you can implement a composite service that reuses multiple existing underlying services and exposes that logic (as sketched below).
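To make that concrete, here is a hedged sketch of such a composite (Task) service, picking up the create-user flow from the question; all of the interfaces and type names are illustrative assumptions, not an existing API:

    using System;
    using System.Threading.Tasks;

    // Illustrative contracts only - the names are assumptions, not an existing API.
    public interface ICustomerService { Task<Customer> CreateCustomerAsync(string name, string email); }
    public interface IPaymentService  { Task ChargeCardAsync(Guid customerId, string cardToken); }
    public interface IEmailService    { Task SendWelcomeEmailAsync(string email); }

    public class Customer      { public Guid Id { get; set; } public string Email { get; set; } }
    public class SignupRequest { public string Name { get; set; } public string Email { get; set; } public string CardToken { get; set; } }

    // Composite (Task) service: the non-agnostic "sign up a new user" flow lives
    // here, so the underlying Entity/Utility services stay simple and reusable.
    public class UserSignupTaskService
    {
        private readonly ICustomerService _customers;
        private readonly IPaymentService _payments;
        private readonly IEmailService _email;

        public UserSignupTaskService(ICustomerService customers, IPaymentService payments, IEmailService email)
        {
            _customers = customers;
            _payments = payments;
            _email = email;
        }

        public async Task SignUpAsync(SignupRequest request)
        {
            Customer customer = await _customers.CreateCustomerAsync(request.Name, request.Email);
            await _payments.ChargeCardAsync(customer.Id, request.CardToken);
            await _email.SendWelcomeEmailAsync(customer.Email);
        }
    }

A back-office endpoint that doesn't want the welcome email simply calls the Customer entity service directly instead of this task service.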
Service composability: concerns about the communication overhead of multiple API calls
Well, this is an age-old question - should you make multiple API calls when they will probably create some communication overhead? The answer is: it depends on how complex your scenario is, how much reuse you expect, and how flexible you want to be. Also, is speed critical, and to what extent? In Service-Oriented Architecture, though, this is a very common approach - reuse your existing services and combine them in new configurations as needed. Yes, it does add some overhead, but I've seen implementations in very complex environments, for example telecoms, where thanks to the use of ESB solutions, message queues, etc., the overhead is negligible compared to the benefits. Here is a common architecture approach (image from serviceorientation.com).
The mandatory legacy refactoring heads-up
More often than not, changing the established contract for multiple existing client systems is a messy business and could very well lead to a lot of refactoring and hunting for needle-in-a-haystack functionality buried deep in the (possibly) legacy code. Business logic might be dispersed everywhere. So make sure you're ready and have the control, time and will to lead this battle.
Hope this helps
Is this a bad idea?
No, but this is too big an overall question to allow very specific advice.
I'd like to separate this into 3 areas:
Approach
Design
Technology
Working backwards, the Technology is the final and most specific part; it totally depends on what your current environment is (platforms, skills), and (hopefully) will be reasonably self-evident to you once the other things are in progress.
The Design that you outlined above seems like a good end-state - having multiple, specific, focused APIs, each with their own responsibility. Again, the details of the design will depend on the skills of you and your organization, and the existing platforms that you have. E.g. if you are already using TIBCO (for example) and have a lot invested (licenses, platforms, tools, people) then leveraging some of their published patterns/designs/templates makes sense; but (probably) not if you don't already have TIBCO exposure.
In the abstract, REST API services seem like a good starting point - there are a lot of tools and platforms at all levels of the system for security, deployment, monitoring, scalability, etc. If you are NGINX users, they have a lot of (platform-independent) thoughts on how to do this on the NGINX blog, including some smart thinking on scalability and performance. If you are more adventurous and have a smart, eager team, take a look at event-driven architecture - see this.
Approach (or Process) is the key thing here. Ultimately, this is a refactoring, though your description of "a large refactor" does scare me a little - put that way, it sounds like you are talking about a big-bang change and calling it refactoring. Perhaps it is just language, but what I have in mind would be "an evolution of the 'one huge API' into multiple, specific, focused APIs (by refactoring the architecture)". One place to start is Martin Fowler; while his book is about refactoring software, the principles and approach are the same, just at a higher level. Indeed, he talks about just this here.
IBM talk about refactoring to microservices and make it sound easy to do in one step, but it never is (outside the lab).
You have an existing API, serving multiple internal and external clients. I suggest you'll want to keep this interface stable for these clients - separate your refactoring of the implementation from the additional concerns of liaising with and coordinating external systems/groups. My high-level starting approach would be:
identify a small (3-7) number of related methods on the API
ideally if a significant, limited-scope change is needed anyway with these methods, that is good - business value with the code change
design/specify a new stand-alone API specifically for these methods
at first, clone the existing model/naming/style
code a new service just for these
with proper automated CI/CD testing and deployment practices
with associated monitoring
modify the existing API so that calls to these methods redirect to the new service
perhaps with a run-time switch to flip between the old implementation and the new one (see the sketch after this list)
remove the old implementation from codebase
capture issues, assumptions and problems along the way
the first pass will involve a lot of learning about what works and doesn't.
then repeat the process over & over, incorporating improvements each time.
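Picking up the run-time switch idea from the steps above, here is a hedged sketch of how the existing API might route one method either to its old in-process implementation or to the new stand-alone service. The environment variable, URL, and class names are assumptions for illustration only:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class UsersEndpoint
    {
        private readonly LegacyUserStore _legacy = new LegacyUserStore();
        private static readonly HttpClient NewService = new HttpClient
        {
            BaseAddress = new Uri("http://users-service.internal/") // hypothetical host
        };

        public async Task<string> GetUserAsync(int id)
        {
            // Run-time switch: flip this value to route traffic to the new service,
            // or back to the old in-process implementation if it misbehaves.
            bool useNewService = string.Equals(
                Environment.GetEnvironmentVariable("USE_NEW_USER_SERVICE"), "true",
                StringComparison.OrdinalIgnoreCase);

            if (useNewService)
                return await NewService.GetStringAsync($"users/{id}");

            return _legacy.GetUserJson(id); // old implementation, unchanged
        }
    }

    public class LegacyUserStore
    {
        public string GetUserJson(int id) => $"{{\"id\":{id},\"source\":\"legacy\"}}";
    }

The public contract stays the same either way, which is what lets you repeat the process method-group by method-group without disturbing the existing clients.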
At some point in the future, when appropriate due to other business-driven needs, the API published to the back-end, front-end and/or public clients can change, but that is a whole different project.
As you can see, if the API is huge (1,000 methods at, say, 7 per pass means roughly 140 releases) this is a many-months process, and having a reasonably frequent release schedule is important. And there may be no value in improving code that works reliably and never changes, so a (potentially) large portion of the existing API may remain, just wrapped by a new API.
Other considerations:
public API? Maybe a new version (significant changes) will be needed sooner than the internal APIs
focus on the methods/services used by it
what parts/services change the most (have the most enhancement requests approved)
these are the bits most likely to change, and could benefit most from a better process/architecture
what are future plans for change and where would the API be impacted
e.g. change to user management, change to payment processors, change to fulfilment systems
e.g. new business plans (new products/services)
consider affected methods in the API
Also see:
Using Microservices for Legacy System Modernization
Migrating From a Monolith to APIs and Microservices
Break the Monolith! Loosely Coupled Architecture Brings DevOps Success
From the CEO’s Desk: Application Modernization – Assess, Strategize, Modernize!
Microservices Architecture As A Large-Scale Refactoring Tool
Probably the biggest 4 pieces of advice that I can give are:
think refactoring: small changes that don't affect function
think agile: small increments that are valuable, testable, achievable
think continuous: have a vision for where you will (eventually) get to, then work the process continuously
script & automate the processes from code, documentation, testing, deployment, monitoring...
improving it every time!
you have an application/API that works - keep it working!
That is always the first priority (you just need to work to carve-out time/budget for maintenance)
Not a bad idea at all.
Also, what you are describing is a microservices architecture, and with that the question becomes how to break your system into well-defined services.
We use Domain-Driven Design to break our system into microservices, together with the Lagom framework, which lets every service live in its own code base and supports an event-driven architecture between microservices.
Now let's look at your problem at a lower level: you said one endpoint contains code for creating a user and sending an email, another just creates a user, and there might be other code as well.
First we need to understand what types of code you are writing:
Domain object logic (e.g. the User object) -- which parameters are valid and so on -- this should be independent of the service endpoint and encapsulated in one class, such as a User class; in Domain-Driven Design terms this is an Aggregate.
Business reactions -- such as sending an email on user creation -- in an event-driven architecture this kind of logic is separated into process managers or sagas, which in most cases can act conditionally (for example, send the welcome email for users created externally but not for users created internally) by carrying extra data in the event.
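A minimal sketch of that idea, assuming a hypothetical UserCreated event and IEmailSender interface (illustrative only, not Lagom-specific code):

    using System;
    using System.Threading.Tasks;

    // The event carries enough data for subscribers to decide what to do; the
    // "created via public signup" flag is the "extra data" mentioned above.
    public class UserCreated
    {
        public Guid UserId { get; set; }
        public string Email { get; set; }
        public bool CreatedViaPublicSignup { get; set; }
    }

    public interface IEmailSender
    {
        Task SendAsync(string to, string subject, string body);
    }

    // Process manager / saga: reacts to the event, decoupled from the user service itself.
    public class WelcomeEmailProcessManager
    {
        private readonly IEmailSender _email;

        public WelcomeEmailProcessManager(IEmailSender email) => _email = email;

        public Task HandleAsync(UserCreated evt)
        {
            // Conditional reaction: back-office user creation does not trigger the mail.
            if (!evt.CreatedViaPublicSignup)
                return Task.CompletedTask;

            return _email.SendAsync(evt.Email, "Welcome!", "Thanks for signing up.");
        }
    }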
Also, with the way you are currently doing it, how are you handling transactions across services?

Is REST only adequate for applications with human-computer interaction?

I am fairly new to building applications using the RESTful architecture. As a matter of fact, all I have done so far falls under Level 2 of Leonard Richardson's REST maturity model, which I know Fielding would happily categorize as non-RESTful.
I have spent hours trying to understand HATEOAS and how to reach the top level, hypermedia controls. And I see it more clearly now. I conceptualize the application as a series of state transitions, and the resources will dynamically provide links with information on how to move from one state to another.
But everything related to HATEOAS seems to be inherent to human-computer interaction. I mean, even when the resources provide the links that enable the application user to move to the next state, it is ultimately the user who drives the application from one state to the other by triggering the use of the provided links.
But how are things supposed to work when we are dealing with computer-to-computer interaction? After all, when it comes to service orientation the idea of service composition is key, and we cannot naively assume that the client is always going to be a human being. Many services are designed to be consumed by non-human users, and some interactions/orchestrations might be fairly complex - the kind of thing typically modeled with BPM or BPEL.
Are REST, and particularly HATEOAS, only usable in applications that imply human intervention, and if not, how is this supposed to work otherwise?
I am getting the vibe that REST is only good for certain types of solutions and inadequate for others, but the literature out there fails to explain those inadequacies and sells REST as the cure for all evils; I just don't quite get how to use it for proper service composition when humans are not the drivers.
I'd really appreciate any references or insights on this, because believe me, I have spent two days straight reading everything I have been able to find on this topic and have not yet been able to reach any reasonable, well-documented conclusions.
Well, your client app can parse the response to get the possible actions. In this case the actual URLs are obtained not from hard-coded knowledge of the API, but from the response to the initial call (usually a GET on the entry point). All human-less.
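As a hedged sketch of what such a human-less client can look like, assume an orders API whose representations carry a links array of rel/href pairs (the JSON shape, rel names, and URL are assumptions for illustration, not a standard). The client knows only the entry point and decides its next step from the links the server returns:

    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    class HypermediaClient
    {
        static readonly HttpClient Http = new HttpClient { BaseAddress = new Uri("http://orders.example/") };

        static async Task Main()
        {
            // Only the entry point is known up front; everything else is discovered at run time.
            string entry = await Http.GetStringAsync("orders/42");
            using JsonDocument order = JsonDocument.Parse(entry);

            // Assumed representation: { "state": "...", "links": [ { "rel": "...", "href": "..." } ] }
            string payHref = null;
            foreach (JsonElement link in order.RootElement.GetProperty("links").EnumerateArray())
            {
                if (link.GetProperty("rel").GetString() == "payment")
                    payHref = link.GetProperty("href").GetString();
            }

            if (payHref == null)
            {
                // The missing link is information too: that transition isn't available
                // in the current state, so the client simply doesn't attempt it.
                Console.WriteLine("No payment transition offered; nothing to do.");
                return;
            }

            // Follow whatever URL the server handed out - no URL templates baked into the client.
            HttpResponseMessage response = await Http.PostAsync(payHref, content: null);
            Console.WriteLine($"Payment transition returned {(int)response.StatusCode}");
        }
    }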
It sounds almost as if you're comparing SOA to REST/Hypermedia and failing to see that SOA is a strategy for designing a complex system made out of other systems, while REST/Hypermedia is a software architecture style that applies a bunch of constraints on client-server communication. The client, however, can be either another server or a human; it doesn't matter.
Whether or not to use REST/Hypermedia is not something to bother with when outlining/designing service composition. It's a question that comes into play when trying to achieve syntactic interoperability. Many times it comes down to comparing REST to SOAP and other technical details.

What are the pros and cons of using an IoC container?

Using an IoC container will decrease the speed of your application because most of them use reflection under the hood. They can also make your code more difficult to understand (?). On the bright side, they help you create more loosely coupled applications and make unit testing easier. Are there other pros and cons of using or not using an IoC container?
If you're using an IOC container in a simple fashion, reflection is only used on startup - the application is wired up to start with, and then it runs as normal without any intervention from the container. Of course, if you're using the IOC to resolve dependencies after you've started running, that may be slightly different - although I'd still expect it to resolve lazily and cache unless you've got it configured to create new instances each time.
As for making the code harder to understand - quite the reverse! With the dependencies explicitly stated, it's much easier to understand each component, and the configuration file(s) make it clear how the whole application hangs together.
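To make the "wired up once at startup" point concrete, here is a minimal sketch using Microsoft.Extensions.DependencyInjection (any container looks much the same); the interfaces and classes are made-up examples:

    using System;
    using Microsoft.Extensions.DependencyInjection;

    public interface IMessageStore { void Save(string message); }

    public class SqlMessageStore : IMessageStore
    {
        public void Save(string message) => Console.WriteLine($"Saved: {message}");
    }

    public class MessageProcessor
    {
        private readonly IMessageStore _store;

        // The dependency is stated explicitly in the constructor - this is what
        // makes the component easy to read and easy to unit test with a fake store.
        public MessageProcessor(IMessageStore store) => _store = store;

        public void Process(string message) => _store.Save(message.ToUpperInvariant());
    }

    class Program
    {
        static void Main()
        {
            // Wiring happens once, at startup.
            ServiceProvider container = new ServiceCollection()
                .AddSingleton<IMessageStore, SqlMessageStore>()
                .AddSingleton<MessageProcessor>()
                .BuildServiceProvider();

            var processor = container.GetRequiredService<MessageProcessor>();

            // From here on these are plain method calls; the container is not involved.
            processor.Process("hello");
            processor.Process("world");
        }
    }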
Well, I suppose a con I've experienced is that some developers don't seem to be able to grasp IoC. We've had a few people who were against it for no reason other than that they didn't understand it. (Not saying that's a bad reason to be against something, not at all.)
It does add a bit of abstraction that always seems to manage to confuse someone or other, but I'd say the pros far outweigh the cons in most cases.
I think it is fair to say if you have expert understanding of how to use IOC and tend to write good code anyway, then IOC will make your code easier to understand on all but the smallest systems.
However, if you are working somewhere where most classes/methods are very large and the concept of refactoring has not yet taken hold, then trying to use an IoC container is likely just to make the software harder to understand. The IoC container also has to be learned by everyone who programs on the project, so that may be a consideration.
I see IOC as icing on the cake; I like the icing but only on a nice cake. If the cake is not nice to start with, sort out the cake first.
As to the performance overhead of using IoC, I don't see this as a problem in most cases. The overhead need not be large, and given the speed of today's CPUs most of your run time is likely to be data access anyway. If an IoC container proved too slow for a given bit of code, I would look at adding some caching of the returned objects, or removing the IoC just from that bit of code.
I believe the assumption about reduced execution speed is much the same kind of argument as "C is faster than C#/Java". While this statement may be true for specific operations and/or structurally simple tasks it is not the case the moment complexity rises.
The way DI-frameworks let you focus on object creation and dependencies creates more efficient systems when code size increases. For large applications I'm almost certain DI-framework based code will outperform any alternative solution. There's simply so little redundancy in the runtime that it's hard to make it more efficient! Most of the additional overhead is also just at first load.
Advanced DI containers also let you do "scope" magic that you can only dream of without the container. Using scope proxies, Spring can do the following:
A (singleton) -> B (singleton) -> C (prototype, per-invocation) -> D (singleton) -> E (session scope, web app) -> F (singleton)
Effectively you can have ten layers of singleton objects and all of a sudden something session scoped shows up.
Stuff like security can be injected in a totally different manner than you could otherwise. There's often a classic paradox: the GUI layer needs intricate knowledge of the security permissions, and quite often the services layer needs this too, but at a different level of detail (usually less detailed than the GUI). The classical approach would be to pass it around as parameters, put it on a thread-local, or ask a service. With Spring you can just inject it straight where you need it and no-one else needs to know.
This actually changes application development as a whole. I had a real hard time adjusting to this, but after this pain I see it is truly a lot closer to how things should be (as opposed to how we've learned to do it).
So I think DI frameworks have the potential of changing the way you make programs, with much further reaching implications than just DI. It's not just a glorified way of calling new.
I agree with all the answers so far.
What I'd like to add is that it creates a bit of overhead, so it isn't really suited to small applications.
Mid-size and larger applications benefit the most from using IoC.
You might also check out this question for more information on pros and cons: Castle Windsor Are There Any Downsides?
In most circumstances you would not even notice the performance penalty, since for "singleton" objects all the initialization is performed only once. I would also push back on the claim that IoC makes the code harder to understand: on the contrary, IoC-style development forces you to create small, coherent classes, which are in turn easier to grok.
If you are writing a business application, using an inversion-of-control and dependency-injection container (in conjunction with other agile practices and tools) will help you out in terms of productivity and reliability.
Moreover, your application will probably spend a vast majority of its CPU time waiting for resources or waiting for human interaction and doing nothing useful. Your application should have plenty of horsepower to spare for a few microseconds of reflection.
IoC seeks to reduce coupling between classes by handing object instance management over to the container. The framework, rather than the developer, is in charge of creating and managing dependencies in your project.
When we write a block of code, the framework calls and runs it, and this handing of control over to the framework is what is known as Inversion of Control.
It enables the execution of a task to be separated from its implementation.
It enables you to switch between multiple implementations with ease.
It increases the modularity of the software.
Because dependencies are reduced, code is simpler to write and test.
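As a hedged illustration of the "easy to swap implementations and easy to test" points, here is a small sketch where the same consumer runs against a real implementation in production wiring and an in-memory fake in a test; all the type names are made up for illustration:

    using System;
    using System.Collections.Generic;

    public interface INotifier { void Notify(string user, string message); }

    public class SmtpNotifier : INotifier
    {
        public void Notify(string user, string message) =>
            Console.WriteLine($"(pretend we sent mail to {user}: {message})");
    }

    // Test double: records calls instead of touching the outside world.
    public class FakeNotifier : INotifier
    {
        public List<string> Sent { get; } = new List<string>();
        public void Notify(string user, string message) => Sent.Add($"{user}:{message}");
    }

    public class OrderPlacer
    {
        private readonly INotifier _notifier;
        public OrderPlacer(INotifier notifier) => _notifier = notifier;

        public void Place(string user, string item) => _notifier.Notify(user, $"Order placed: {item}");
    }

    class Program
    {
        static void Main()
        {
            // Production wiring: real notifier.
            new OrderPlacer(new SmtpNotifier()).Place("alice", "book");

            // Test wiring: fake notifier, so behaviour can be asserted in isolation.
            var fake = new FakeNotifier();
            new OrderPlacer(fake).Place("bob", "pen");
            Console.WriteLine(fake.Sent.Count == 1 ? "test passed" : "test failed");
        }
    }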

Caching solutions for multi-webserver configuration?

I am looking into caching solutions for a multi-webserver configuration. I thought of memcached as being cheap (free) and proven over the years. Microsoft is also developing a caching solution for web farms, called Velocity, but it is still in CTP2.
There is a distributed caching model used in the configuration service that is part of the .NET Stocktrader sample application. This is a framework that allows you to run multiple nodes with centralised configuration management, load balancing and distributed caching. You can implement the configuration service as is or look through the code and grab what suits you. Worth a look.
When I listened to Scott Hanselman's podcast interview with the StackOverflow team, I was left with the impression that a. they did use some kind of caching and b. they knew almost nothing about what they were doing in this respect and had fiddled with a few options and then written a blog post or two.
They currently seem to use client-side caching rather half-heartedly (short expiry times on images, for example), and I think they use a lot of ASP.NET user-mode caching, and I can't tell if they use IIS kernel-mode caching. (They didn't seem to be able to tell Scott that, either.)
However, the podcast was a while back, and I was driving at the time, so my memory might be wrong and/or out of date.
You should think HARD before bringing in something like memcached.
Caching can hide performance issues from you ("got a slow-running query? just cache it and don't worry about fixing it!")
Invalidating stale data is a nightmare.
You may spend days chasing bugs that get cleared up when you clear the cache, and it pollutes your code base.
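To illustrate those two points, here is a hedged cache-aside sketch using an in-memory cache (Microsoft.Extensions.Caching.Memory; memcached or Velocity would follow the same pattern). The read path hides the slow query behind an expiry, and every write path has to remember to invalidate, which is exactly where the stale-data bugs come from. The UserStore and its methods are made-up stand-ins:

    using System;
    using Microsoft.Extensions.Caching.Memory;

    public class UserStore
    {
        private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());

        public string GetUserName(int id)
        {
            // Cache-aside: only hit the slow path on a miss, keep the entry for 60 seconds.
            return _cache.GetOrCreate($"user:{id}", entry =>
            {
                entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60);
                return LoadUserNameFromDatabase(id); // the "slow query" being hidden
            });
        }

        public void RenameUser(int id, string newName)
        {
            SaveUserNameToDatabase(id, newName);
            // The easy-to-forget part: every write path must invalidate, or readers
            // keep seeing stale data until the entry expires.
            _cache.Remove($"user:{id}");
        }

        private string LoadUserNameFromDatabase(int id) => $"user-{id}";
        private void SaveUserNameToDatabase(int id, string name) { /* pretend */ }
    }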
I'm not saying don't do it, but think HARD before you do.
If you can get enough performance by adding a couple* of extra machines (which I think stackoverflow can) then do that and don't worry about caching. It'll be much cheaper in the long run.
*note I don't say 100 machines.

What level of complexity requires a framework?

At what level of complexity is it mandatory to switch to an existing framework for web development?
What measurement of complexity is practical for web development? Code length? Feature list? Database Size?
If you work on several different sites then by using a common framework across all of them you can spend time working on the code rather than trying to remember what is located where and why.
I'd always use a framework of some sort, even if it's your own, as the uniformity will help you structure your project. Unless it's a one page static HTML project.
There is no mandatory limit however.
I don't think there is a level of complexity that necessitates a framework. For me, whenever I am writing a dynamic site I immediately consider a framework, and if it will save me time, I use it (it almost always does, and I almost always do).
Consider that the question may be faulty. Many of the most complex websites don't use any popular, preexisting, framework. Google has their own web server and their own custom way of doing things, as does Amazon, and probably lots of other sites.
If a framework makes your task easier, or provides added value, go for it. However, when you take on that framework you are tied to a new dependency. I'm essentially starting to recreate a Joel on Software post, so I will redirect you here for more on adding unneeded dependencies to your code:
http://www.joelonsoftware.com/articles/fog0000000007.html
All factors matter. You should measure how much time you can save using a third-party framework and compare it to the risks of using others' code.
Never "mandatory." Some problems are not well solved by any framework. It would be suggestible to switch to a framework when most of the code you are implementing has already be implemented by the framework in question in a way that suits your particular application. This saves you time, energy, and will most likely be more stable than the fresh code you would have written.
This is really two questions, you realize. :-) The answer to the first one is that it's never mandatory, but honestly, parsing HTTP request parameters directly is pretty horrible right from the start. I don't want to do it even once, so I tend to go toward a framework relatively early on.
As far as what measurement is practical, well, what are you worried about? All of the descriptions that you list have value. Database size matters primarily for scaling, in my opinion (you can write a very simple app if you have a very simple schema, even if there are hundreds of thousands of rows in the database). The feature list will probably determine the number and complexity of UI pages, which will in turn help to dictate the code length.
There are frameworks for getting moving very quickly with a simple blog (Django or RoR), all the way up to enterprise full-stack applications (Zope). Not to be tied to just the buzzwords, you also have ASP.NET, J2EE, etc.
All frameworks and libraries are tools at your disposal. Determine which ones will make your life easier for your given project and use them.
I would say the reverse is true. At some point, your project gets so expansive, that you actually get slowed down by the shortcomings of the framework. For sufficiently large projects you may, in fact, be better off developing your own framework, to meet your own needs. I have seen many times where people were held back in the decisions they could make, or the work they could produce, because they were trying to do something that the framework didn't anticipate. And doing these things that the framework doesn't anticipate can be very troublesome. The nice thing about making your own framework, is that it can evolve with your project, to be a help to you system, instead of a hindrance.
So, to conclude, small projects should use existing frameworks. Large projects should contain their own framework.