My situation is that the Microsoft IIS app server and the C# code already exist.
The web services and contracts have been built on the .NET Framework. My question is: which open-source Enterprise Service Bus can register the endpoint for sending messages to/from the services on the IIS app server? Can I use a Java-based ESB when my endpoint is written in a different language, C#?
I'm looking for an open-source ESB where I can deploy the existing WSDL to register the Microsoft server endpoint, and I'm wondering whether a Java-based ESB will work. What kinds of issues would creep up? Is it better to match the endpoint vendor type with the ESB vendor type?
Warewolf ESB is a GPL-licensed open-source ESB based on the Microsoft stack. You can download an installer from http://warewolf.io, or the source code from GitHub at https://github.com/Warewolf-ESB/Warewolf-ESB
It's very easy to work with and highly extensible. The development team are also very friendly and quick to respond to the community's queries.
Since nobody else has chimed in, I guess I'll give it a go.
You'll see that most of the open-source service buses in .NET (NServiceBus, MassTransit, etc.) stay pretty far away from WSDL, instead preferring to take a more message-centric approach.
Full disclosure, I'm deeply involved with NServiceBus.
Java-based ESBs tend to be more open to integrating more web-service-centric approaches.
The tradeoffs associated with introducing non-Microsoft technology in your production environment center around your operations team's familiarity with them. There's also the development side of things, but in my experience, that tends to be more minor in comparison.
Hope that helps in some small way.
Is it possible to scale Axon Framework without Axon Server Enterprise? I'm interested in creating a prototype CQRS app with Axon, but the final, deployable system has to be free from licensing fees. If Axon Framework can't be scaled to half a dozen nodes using free software, then I should probably look elsewhere.
If Axon Framework turns out not to be a good choice for the system, what would you recommend? Would building something around Apache Pulsar be a sensible alternative?
I think I have good news for you then.
You can utilize Axon Framework perfectly fine without Axon Server Enterprise.
Firstly, you can use the Axon Server Standard edition, which is completely free and you can check out the code too if you want.
If you prefer to get infrastructure back in your own hands, you can also select different approaches to distributing the CommandBus and the EventBus/EventStore.
For the CommandBus, the framework provides the DistributedCommandBus, for which two implementations are in place:
JGroups
Spring Cloud
I'd argue the second option is the ideal one for distributing your commands, as it gives you the freedom to choose whichever Spring Cloud Discovery Service implementation you desire. That should give you the handles to work "free of licenses" in that area.
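To make that concrete, here is a minimal Spring configuration sketch of a DistributedCommandBus on top of Spring Cloud. It assumes Axon 4 with the axon-springcloud extension on the classpath; the exact builder methods and package names vary between versions, so treat this as a sketch and verify against the Reference Guide:

```java
import org.axonframework.commandhandling.CommandBus;
import org.axonframework.commandhandling.distributed.AnnotationRoutingStrategy;
import org.axonframework.commandhandling.distributed.CommandBusConnector;
import org.axonframework.commandhandling.distributed.CommandRouter;
import org.axonframework.commandhandling.distributed.DistributedCommandBus;
import org.axonframework.extensions.springcloud.commandhandling.SpringCloudCommandRouter;
import org.axonframework.extensions.springcloud.commandhandling.SpringHttpCommandBusConnector;
import org.axonframework.serialization.Serializer;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.serviceregistry.Registration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.web.client.RestTemplate;

@Configuration
public class DistributedCommandBusConfig {

    @Bean
    public CommandRouter springCloudCommandRouter(DiscoveryClient discoveryClient,
                                                  Registration localInstance) {
        // Decides which node handles each command, based on the routing key
        // and whatever Spring Cloud Discovery Service is configured.
        return SpringCloudCommandRouter.builder()
                .discoveryClient(discoveryClient)
                .localServiceInstance(localInstance)
                .routingStrategy(new AnnotationRoutingStrategy())
                .build();
    }

    @Bean
    public CommandBusConnector springHttpCommandBusConnector(
            @Qualifier("localSegment") CommandBus localSegment,
            RestTemplate restTemplate,
            Serializer serializer) {
        // Forwards commands over HTTP to the node the router selected.
        return SpringHttpCommandBusConnector.builder()
                .localCommandBus(localSegment)
                .restOperations(restTemplate)
                .serializer(serializer)
                .build();
    }

    @Primary
    @Bean
    public DistributedCommandBus distributedCommandBus(CommandRouter router,
                                                       CommandBusConnector connector) {
        return DistributedCommandBus.builder()
                .commandRouter(router)
                .connector(connector)
                .build();
    }
}
```

The Discovery Service backing the DiscoveryClient (Eureka, Consul, Kubernetes, ...) is then the only piece you swap, which is what keeps this area free of license fees.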
For distributing your Events, you can broadly look at two approaches:
Share the database, aka your EventStore, among all instances
Use an event message bus to distribute your event messages
If you want instances of your nodes to be able to event-source the Command Model, you are bound to option 1. This is because Axon Framework requires a dedicated EventStore to be able to source the Command Models from history.
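As an illustration of option 1, here is a minimal sketch of a shared JPA EventStore, assuming Axon 4 with JPA on board; every node points at the same database schema, and the injected bean names are illustrative:

```java
import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.common.transaction.TransactionManager;
import org.axonframework.eventsourcing.eventstore.EmbeddedEventStore;
import org.axonframework.eventsourcing.eventstore.EventStorageEngine;
import org.axonframework.eventsourcing.eventstore.EventStore;
import org.axonframework.eventsourcing.eventstore.jpa.JpaEventStorageEngine;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SharedEventStoreConfig {

    @Bean
    public EventStorageEngine eventStorageEngine(EntityManagerProvider entityManagerProvider,
                                                 TransactionManager transactionManager) {
        // All nodes use the same underlying schema, so each of them can
        // event-source the Command Model from the full event history.
        return JpaEventStorageEngine.builder()
                .entityManagerProvider(entityManagerProvider)
                .transactionManager(transactionManager)
                .build();
    }

    @Bean
    public EventStore eventStore(EventStorageEngine storageEngine) {
        return EmbeddedEventStore.builder()
                .storageEngine(storageEngine)
                .build();
    }
}
```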
When you just want to connect other applications to your Event Stream, option 2 would be a good fit. Again, the framework has two options in this area:
AMQP
Kafka
The only additional thing I'd like to point out here is that the Kafka extension is still in a release-candidate state. It is being actively worked on at the moment, though.
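For option 2 over AMQP, a minimal sketch of forwarding the local Event Stream to an exchange, assuming Axon 4 with the axon-amqp extension; the exchange name "events" is just an example:

```java
import org.axonframework.eventhandling.EventBus;
import org.axonframework.extensions.amqp.eventhandling.spring.SpringAMQPPublisher;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpEventPublicationConfig {

    @Bean(initMethod = "start", destroyMethod = "shutDown")
    public SpringAMQPPublisher amqpPublisher(EventBus eventBus,
                                             ConnectionFactory connectionFactory) {
        // Subscribes to the local EventBus and forwards every published
        // event to the configured exchange, where other applications
        // can consume the Event Stream.
        SpringAMQPPublisher publisher = new SpringAMQPPublisher(eventBus);
        publisher.setConnectionFactory(connectionFactory);
        publisher.setExchangeName("events");
        return publisher;
    }
}
```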
All these extensions and their options should be clearly stated in the Reference Guide, so I'd definitely check this documentation out if you are gonna start an application.
There is one thing you cannot distribute, though, which is the QueryBus.
There is an outstanding issue to resolve this, for which someone put in the effort to provide a PR.
After seeing how Axon Server Standard Edition does it, though, he intentionally closed the PR (with the following comment), as it didn't seem feasible to him to maintain such a tool at this stage.
So, can you use Axon Framework without Axon Server Enterprise?
Well, I think you can. :-)
Mind you though, although you'd save on the license fee by not using Axon Server Enterprise, that doesn't mean your production environment is going to be free.
You'd be taking quite a bit of infrastructure set-up and production time back into your own hands to get this going.
Hope this gives you plenty of feedback #ahoffer!
I know there's already a good question on this, but it doesn't really answer what I'm looking for.
From what I understand:
1. Both are used as a central focal point between applications
2. Both can do routing/mediation/transformation etc. between services/apps
But the only difference I can really see is that hub-and-spoke typically has many different formats entering the hub (SOAP/REST/XML/JSON...), while an ESB typically has one standard format (usually just SOAP).
Also, I keep reading that hub-and-spoke introduces a single point of failure compared to an ESB. So is the physical deployment the difference here, where a hub has every possible endpoint while an ESB has endpoints deployed across multiple hubs? So an ESB is just multiple hubs (for want of a better word)?
Can anyone help clear this up for me?
There is no exact answer here, since you can talk about ESB as a specific design pattern, or as the discourse about the evolution of software integration tools and SOA.
ESB as a design pattern means that you manage communication between different services using a bus that clients can easily plug into and out of. This is usually done by forcing them to use standard data formats and protocols, whereas with hub-and-spoke you might use custom connectors and data transformations for each client. This limits the number of problems you may have when running multiple integrations, but an ESB can still leave you with a single point of failure.
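To illustrate the pattern itself (not any particular product), here is a toy Java sketch of the bus idea: every client plugs into one bus and agrees on a canonical message format, instead of each pair of applications maintaining a custom connector. All names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// One standard format for everyone on the bus.
final class CanonicalMessage {
    final String topic;
    final Map<String, String> payload;
    CanonicalMessage(String topic, Map<String, String> payload) {
        this.topic = topic;
        this.payload = payload;
    }
}

final class Bus {
    private final Map<String, List<Consumer<CanonicalMessage>>> subscribers = new HashMap<>();

    // Plugging a client in is just registering a handler for a topic.
    void subscribe(String topic, Consumer<CanonicalMessage> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // Senders never know who the receivers are; the bus mediates.
    void publish(CanonicalMessage message) {
        subscribers.getOrDefault(message.topic, List.of())
                   .forEach(handler -> handler.accept(message));
    }
}

class Demo {
    public static void main(String[] args) {
        Bus bus = new Bus();
        bus.subscribe("orders", m -> System.out.println("Billing saw " + m.payload));
        bus.subscribe("orders", m -> System.out.println("Shipping saw " + m.payload));
        bus.publish(new CanonicalMessage("orders", Map.of("orderId", "42")));
    }
}
```

With hub-and-spoke, by contrast, the hub would hold a custom connector and transformation for each client pair instead of one canonical format at the edges.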
ESB as a discourse (or marketing term) is a more complex issue, where people argue over what is "True ESB". Some people say you need to have a modular architecture where you can select which components you deploy, or you need to be able to distribute the components across different machines to allow scaling and fault tolerance. In the extreme definition you would need to deploy even your data transformers as distributed services.
From here:
The ESB is the next generation of enterprise integration technology, taking over where EAI (hub-and-spoke) leaves off.
Smarter Endpoints: The ESB enables architectures in which more intelligence is placed at the point where the application interfaces with the outside world. The ESB allows each endpoint to present itself as a service using standards such as WSDL and obviates the need for a unique interface written for each application. Integration intelligence can be deployed natively on the endpoints (clients and servers) themselves. Canonical formats are bypassed in favor of directly formatting the payload to the targeted format. This approach effectively removes much of the complexity inherent in EAI products.
Distributed Architecture: Where EAI is a purely hub-and-spoke approach, ESB is a lightweight distributed architecture. A centralized hub made sense when each interaction among programs had to be converted to a canonical format. An ESB distributes much more of the processing logic to the endpoints.
No integration stacks: As customers used EAI products to solve more problems, each vendor added stacks of proprietary features wedded to the EAI product. Over time these integration stacks became monolithic and required deep expertise to use. ESBs, in contrast, are a relatively thin layer of software to which other processing layers can be applied using open standards. For example, if an ESB user wants to deploy a particular business process management tool, it can easily be integrated with the ESB using industry-standard interfaces such as BPEL for coordinating business processes.
The immediate short-term advantage of the ESB approach is that it achieves the same overall effect as the EAI (hub-and-spoke) approach, but at a much lower total cost of ownership. These savings are realized not only through reduced hardware and software expenses, but also via labor savings from using a framework that is distributed and flexible.
I don't know if this is what you mean by "is the physical deployment the difference here?", but actually the main difference between hubs and an ESB is that their communication happens at different layers.
When we talk about an ESB we are referring to a software architecture model, whereas a hub refers to a strict hardware connection topology.
Obviously such a hardware topology (a collection of hubs) can implement an ESB, but there is a distinct line between the communication layers of the two.
Let us imagine we have an orders processing system for a pizza shop to design and build.
The requirements are:
R1. The system should be client- and use-case-agnostic, meaning that it can be accessed by clients that were not taken into account during the initial design. For example, if the pizza shop later decides that many of its customers use Samsung Bada smartphones, writing a client for Bada OS will not require rewriting the system's API or the system itself; or, for instance, if it turns out that iPads work better than Android devices for the delivery drivers, then it would be easy to create an iPad client without affecting the system's API in any way;
R2. Reusability, meaning that the system can easily be reconfigured without rewriting much code if the business process changes. For example, if the pizza shop later starts accepting payments online in addition to cash on delivery (accepting a payment before taking an order vs. accepting a payment on delivery), it should be easy to adapt the system to the new business process;
R3. High-availability and fault-tolerance, which means that the system should be online and should accept orders 24/7.
So, in order to meet R3 we could use Erlang/OTP and have the following architecture:
The problem here is that this kind of architecture has a lot of "hard-coded" functionality in it. If, for example, the pizza shop moves from accepting cash payments on delivery to accepting online payments before the order is placed, it will take a lot of time and effort to rewrite the whole system and modify the system's API.
Moreover, if the pizza shop needs some enhancements to its CRM client, then again we would have to rewrite the API, the clients and the system itself.
So, the following architecture is aimed at solving those problems and thus helping to meet R1, R2 and R3:
Each 'service' in the system is a Webmachine webserver with a RESTful API. Such an approach has the following benefits:
all the goodness of Erlang/OTP, since each Webmachine is an Erlang application, which can be supervised and can be put into an Erlang release;
service oriented architecture with all the benefits of SOA;
easily adaptable to changes in the business process;
easy to add new clients and new functions to clients (e.g. to the CRM client), because a client can use RESTful APIs of all the services in the system instead of one 'central' API (Service composability in terms of SOA).
So, essentially, the system architecture proposed in the second picture is a Service Oriented Architecture where each service has a RESTful API instead of a WSDL contract and where each service is an Erlang/OTP application.
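As a language-neutral illustration of that idea (sketched here in Java rather than Erlang/Webmachine, using only the JDK's built-in HTTP server), each service is just a small independent process exposing its own RESTful resource; the /orders resource below is hypothetical:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OrdersService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            // A real service would dispatch on the HTTP method
            // (GET/POST/...) and talk to its own order store.
            byte[] body = "{\"orders\": []}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        // Each service starts, fails, and is upgraded independently of the others.
        server.start();
    }
}
```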
And here are my questions:
1. Am I trying to reinvent the wheel here (Picture 2)? Should I just stick with the pure Erlang/OTP architecture instead? ("Pure Erlang" means Erlang applications packed into a release, talking to each other via gen_server:call and gen_server:cast function calls.)
2. Can you name any disadvantages of the suggested approach (Picture 2)?
3. Do you think it would be easier to maintain and grow (R1 and R2) a system like this (Picture 2) than a purely Erlang/OTP one?
4. Couldn't the security of such a system (Picture 2) be an issue, since there are many entry points open to the web (the RESTful APIs of all the services) instead of just one entry point (Picture 1)?
5. Is it OK to have several 'orchestrating modules' in such a system, or does some better practice exist? (The "Accept orders", "CRM" and "Dispatch orders" services in Picture 2.)
6. Does pure Erlang/OTP (Picture 1) have any advantages over this approach (Picture 2) in terms of message passing and the limitations of the protocol? (Partly discussed in my previous similar question, gen_server:call vs HTTP RESTful calls.)
The thing to keep in mind regarding SOA is that the architecture is not about the technology (REST, WS-*). You can get a good SOA in place with endpoints of several types if and when needed (what I call an Edge component - separating business logic from other concerns like communications and protocols).
Also, it is important to note that a service boundary is a trust boundary, so when you cross it you may need to authenticate and authorize, cross a network, etc. Additionally, separation into layers (like data and logic) shouldn't drive the way you partition your services.
So from what I am reading in your questions, I'd probably partition the system into more coarse-grained services (see below). Communications within the boundary of a service can be whatever you like, whereas communications across services use a public API (REST or Erlang-native is up to you; the point is that it is managed, versioned, secured, etc.). Again, a service may have endpoints in multiple technologies to facilitate different users. (Sometimes you'd use an ESB to mediate between services and protocols, but the need for that depends on the size and complexity of your system.)
Regarding your specific questions
1. As noted above, I think there's a place to expose more public APIs than just a single entry point, but I am not sure that exposing each and every capability as a service with a public API is the right way to go.
2 & 3. The disadvantages of exposing every little thing are management overhead and decreased performance (e.g. you'd have to authenticate on these calls). You get nano-services, services whose overhead is more than their utility.
4. One thing to add about security: the fact that a service has a REST API does not have to mean that the API is available to the general public. Deployment-wise, you can keep it behind a firewall and restrict access to known addresses, etc.
5. It is OK to have several orchestrating modules, though if you get beyond a few you should probably consider some orchestration tool (an ESB or an orchestration engine). Alternatively, you can use event-based integration and get choreography-based integration, which is more flexible (but somewhat less manageable).
6. The first option has the advantage of easy development and probably better performance (if that's an issue). The hard-coded integration layer can prove harder to maintain over time. The Erlang services, if you write them right, should be able to evolve independently as long as you keep API integration and message passing between them (luckily, Erlang makes it relatively easy to get this right through its inherent features, e.g. immutability).
I'd introduce a third way that is more cost-effective and change-reactive. The architecture should definitely be service-oriented, because you explicitly have services. But there is no requirement to expose each service as a RESTful or WSDL-defined one. I'm not an Erlang developer, but I believe there is a way to invoke local and remote processes by messaging and thus avoid unnecessary serialisation/deserialisation for internal calls. But one day you will be faced with a new integration issue. For example, you will need to integrate an accounting or logistics system. If you have designed the architecture well with regard to SOA principles, most of the effort will go into exposing an existing service through a RESTful front-end wrapper, with no effort spent refactoring the existing connections to the other services. The issue is to keep the domains of responsibility clean: each service should be responsible for the activity it was originally designed for.
The security issue you mention is a known one. You should have authentication/authorization in all exposed services, using tokens for example.
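As a sketch of that token check, here is a minimal Java servlet filter; the header handling and the isValid placeholder are illustrative, since a real system would verify a signed token (e.g. a JWT's signature and expiry) against its auth service:

```java
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class TokenAuthFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { /* nothing to set up */ }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String header = request.getHeader("Authorization");
        String token = header == null ? null : header.replaceFirst("^Bearer ", "");

        if (!isValid(token)) {
            // Reject before the request ever reaches the service logic.
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        chain.doFilter(req, res); // token accepted, continue to the service
    }

    // Placeholder check: replace with real token validation.
    private boolean isValid(String token) {
        return token != null && !token.isEmpty();
    }

    @Override
    public void destroy() { /* nothing to tear down */ }
}
```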
What makes the most sense for large enterprise applications involving several JSPs and many transactions (like a commercial banking application or a healthcare application with huge amounts of data) with respect to templating? Is it just a matter of personal choice, or is there a strong reason to lean towards server-side templating?
Client-side templating is more a matter of UI and should not be confused with backend transactions in any way. When your site does something, it is still issuing an HTTP POST that is processed by the server logic. I see this development more as a natural continuum towards building an easier-to-use web, enabled by modern browsers and JavaScript engines.
You might find some more info here:
https://github.com/leonidas/transparency/wiki/Frequently-Asked-Questions
In Siebel I can create Business Services in two locations:
Siebel Client
Siebel Tools
In the Siebel Client I cannot see the Business Services created in Siebel Tools, and vice versa.
(After creating a new Business Service in Siebel Tools, I compiled it - no errors reported - and ran the client with "Debug" from the Siebel Tools menu.)
Do you know why?
Thanks!
Edit: I am using the sample database; I did not check anything in or out. I am not yet comfortable with the deployment process and am just digging through the docs.
If you wrote the business service using server script, then the business service will be compiled into the SRF. There will be no physical files outside of the SRF itself, and it will not show up in the Siebel client.
If you wrote the business service using browser script, then the business service will be converted to an external .js file and dropped into your script directory. The script directory is specified in Tools or via the genbscript command-line utility.
Hope this helps.
A perfect explanation of the difference between repository business services and run-time business services is provided here; I'll just copy-paste it:
In Siebel we can write business services in two places.
Siebel Client
Siebel Tools
There is nothing different in the scripting that we do, but there are differences in how these business services are executed.
As far as my knowledge (which is pretty limited :) ) is concerned, the differences between them are the following.
Client-side BS is SRF-independent, and Tools BS is SRF-dependent (which means an SRF change is required even if we want to make a slight change)
A Siebel Client BS is compiled at runtime, and a Siebel Tools BS is compiled when we compile the SRF
When you have to decide where to write a Business Service, the following factors can affect your decision.
Performance: A Tools BS has a slight performance advantage (theoretically), as it is compiled beforehand and just executed at run time.
Flexibility: A Client BS offers you ultimate flexibility, as you can change the code anytime you want. So, if flexibility is more important to you, then Client BS is for you.
IDE: From a developer's perspective, a Tools BS provides a better IDE and better syntax checking. A client-side BS has a crappy IDE and zilch syntax checking - just a field where we write the code. (I have spent hours debugging a client-side BS just to find out that I had misspelled a variable name :( )
But I still have not come across even a single solid point that can help us determine exactly when we should use a client-side BS or a Tools-side BS. It mostly depends on the choice of the developer writing the BS. So, I am leaving this post as an open question, asking for your inputs, which can help us make the right decision at the right time.