SignalR vs. Reactive Extensions - system.reactive

Is SignalR the same thing as Reactive Extensions? Can you explain why or why not?

No, they are absolutely not the same thing.
Reactive Extensions is a library for creating and composing observable streams of data or events (which turn out to be quite similar). It knows nothing about client-server connections or network transport; it is focused solely on Observables and can wrap any collection, stream, event, async method, etc. in the common Observable interface.
SignalR is a toolkit for creating persistent (i.e. long-lived) duplex connections between client and server. It works over HTTP, and its purpose is to wrap three low-level transports - long polling, server-sent events, and WebSockets - behind a high-level API that is comfortable to develop against. So it is focused on the communication.
So the components themselves are quite independent of each other, and they have completely different concerns.
On the other hand, these two libraries complement each other nicely: you might use SignalR to push events from the server to clients, and then wrap those server-pushed events in Rx Observables to build complex reactive user experiences.
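For example, here is a minimal sketch of that combination in TypeScript, assuming the @microsoft/signalr and RxJS client libraries; the hub URL and method name are made up for illustration:

import { HubConnectionBuilder } from "@microsoft/signalr";
import { Observable } from "rxjs";
import { filter, map } from "rxjs/operators";

// SignalR handles the transport: a persistent duplex connection to the server.
const connection = new HubConnectionBuilder()
  .withUrl("/hubs/prices") // hypothetical hub URL
  .build();

// Rx handles composition: wrap the server-pushed messages in an Observable...
const prices$ = new Observable<{ symbol: string; value: number }>(subscriber => {
  connection.on("priceChanged", (symbol: string, value: number) =>
    subscriber.next({ symbol, value })
  );
});

// ...and then compose them like any other stream.
prices$
  .pipe(
    filter(p => p.symbol === "MSFT"),
    map(p => `MSFT is now ${p.value}`)
  )
  .subscribe(console.log);

connection.start().catch(console.error);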
UPDATE
Rx is like LINQ, it helps you specify 'what happens', it doesn't get into the details of 'how'. SignalR is a library to implement the 'how' for real-time network communication – Paul Betts
The difference between 'LINQ to Objects' and Rx is that in 'LINQ to Objects' you pull the next items from an enumerable thing, while in Rx they are pushed to you from an observable thing.
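A quick TypeScript sketch of that pull vs. push distinction (the Observable here is RxJS, purely for illustration):

import { Observable } from "rxjs";

// Pull ('LINQ to Objects' style): the consumer asks for the next item.
const numbers: Iterable<number> = [1, 2, 3];
for (const n of numbers) {
  console.log("pulled", n); // we decide when to take the next value
}

// Push (Rx style): the producer decides when values arrive.
const ticks$ = new Observable<number>(subscriber => {
  let i = 0;
  const id = setInterval(() => subscriber.next(i++), 1000);
  return () => clearInterval(id); // teardown when the subscriber unsubscribes
});

ticks$.subscribe(n => console.log("pushed", n)); // values are handed to us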

Related

How can event-driven architecture be applied to this example?

I am unsure how to make use of event-driven architecture in real-world scenarios. Let's say there is a route planning platform consisting of the following back-end services:
user-service (manages user data and roles)
map-data-service (roads & addresses, only modified by admins)
planning-tasks-service
(accepts new route planning tasks, keeps track of background tasks, stores results)
The public website will usually request data from all 3 of those services. map-data-service needs information about user-roles on a data change request. planning-tasks-service needs information about users, as well as about map-data to validate new tasks.
Right now those services would just make a sync request to each other to get the needed data. What would be the best way to translate this basic structure into an event-driven architecture? Can dependencies be reduced by making use of events? How will the public website get the needed data?
Cosmin is 100% correct in that you need something to do some orchestration.
One approach to take, if you have a client that needs data from multiple services, is the Experience API approach.
Clients call the experience API, which performs the orchestration - pulling data from different sources and providing it back to the client. The design of the experience API is heavily, and deliberately, biased towards what the client needs.
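As a rough sketch of what that could look like for the route-planning site (TypeScript with Express; the internal service URLs and the response shape are invented for illustration):

import express from "express";

const app = express();

// One client-shaped endpoint that orchestrates calls to the three services.
// The URLs below are hypothetical internal addresses.
app.get("/experience/route-planner/:userId", async (req, res) => {
  const { userId } = req.params;

  // Pull from each backing service (could be parallelised, cached, etc.).
  const [user, mapData, tasks] = await Promise.all([
    fetch(`http://user-service/users/${userId}`).then(r => r.json()),
    fetch(`http://map-data-service/regions?user=${userId}`).then(r => r.json()),
    fetch(`http://planning-tasks-service/tasks?user=${userId}`).then(r => r.json()),
  ]);

  // Shape the response around what this particular screen needs.
  res.json({ user, mapData, tasks });
});

app.listen(8080);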
Based on the details you've said so far, I can't see anything that cries out for event-based architecture. The communication between the client and ExpAPI can be a mix of sync and async, as can the ExpAPI to [Services] communication.
And for what it's worth, putting all of that on an API gateway is not a bad idea: they are designed to host APIs and therefore provide the desirable controls and observability for managing them.
Update based on OP Comment
I was really interested in how an event-driven architecture could reduce dependencies between my microservices, as it is often stated
Having components (or systems) talk via events is sort-of the asynchronous equivalent of Inversion of Control, in that the event consumers are not tightly-coupled to the thing that emits the events. That's how the dependencies are reduced.
One thing you could do, if you have the time, is a little side project just as a learning exercise: take a snapshot of your code, do a rough-and-ready conversion to an event-based design, and see how that goes - not so much as an attempt to event-a-cise your solution, but to see what putting events into a real-world solution looks like.
The missing piece in your architecture is the API Gateway, which should be the only entry-point in your system, used by the public website directly.
The API Gateway would play the role of an orchestrator: it decides which services to route the request to, and it assembles the final response needed by the frontend.
For scalability purposes, the communication between the API Gateway and individual microservices should be done asynchronously through an event-bus (or message queue).
However, the most important step in creating a scalable event-driven architecture which leverages microservices, is to properly define the bounded contexts of your system and understand the boundaries of each functionality.
More details about this architecture can be found here
Event storming is the first thing you need to do to identify domain events (a change of state in your system), for example 'userCreated', 'userModified', 'locationCreated', 'routeCreated', 'routeCompleted', etc. Then you can define topics that carry these events. Interested parties consume them by subscribing to the published topics/channels and then acting accordingly. An implementation of an event-driven architecture is often composed of loosely coupled microservices that communicate asynchronously through a message broker like Apache Kafka. The free EDA book is an excellent resource covering most of what you need to know about EDA.
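For instance (a minimal sketch using the kafkajs client in TypeScript; the topic name, payload, and broker address are purely illustrative), user-service could publish a 'userModified' event and planning-tasks-service could consume it to keep its own local copy of the user data, instead of making a synchronous call at validation time:

import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "route-planner-demo", brokers: ["kafka:9092"] });

async function main() {
  // user-service side: publish a domain event whenever a user changes.
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "userModified",
    messages: [{ key: "42", value: JSON.stringify({ userId: "42", roles: ["admin"] }) }],
  });

  // planning-tasks-service side: consume the event and update a local view,
  // so no synchronous call to user-service is needed when validating tasks.
  const consumer = kafka.consumer({ groupId: "planning-tasks" });
  await consumer.connect();
  await consumer.subscribe({ topic: "userModified", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value!.toString());
      console.log("updating local user cache for", event.userId);
    },
  });
}

main().catch(console.error);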
Tutorial: Event-driven architecture pattern

Is ReST (HTTP) relevant for broadcasting data to many client applications?

I am currently working on a project made of many microservices that will asynchronously broadcast data to many possible client applications.
Additionally, client applications will be able to communicate with the system (i.e. the set of microservices) via a ReST Open-API
For broadcasting the data, my first consideration was to use a MOM (Message Oriented Middleware) such as AMQ.
However, I am asked to reconsider this solution and to prefer a ReST endpoint (over HTTP), in order to provide a more "Open-API oriented" API.
I am not a big HTTP specialist, but it seems to me that the main technologies for sending asynchronous data from server to client are:
WebSocket
SSE
I am opening this discussion in order to get advice/feedback from other developers, to help me weigh the pros and cons of this new solution. Among other things:
is an HTTP technology such as SSE/WebSocket relevant for my needs?
For additional information, here are a few metrics regarding the amount of data to broadcast:
a considerable number of messages per second
responsiveness
more than 100 clients listening for data
Thank you for your help and contribution
There are many different definitions of what is and is not REST, but most people tend to agree that, in practical terms and popular best practice, REST services expose a data model via HTTP and limit operations on that model to requesting the state of resources (GET) or updating the state of resources (PUT). Everything else is stacked on top of that foundation.
What you describe is a pub-sub model. While it might be possible in academic terms to use REST concepts in a pub-sub architecture, I don't think that's really what you're looking for here.
WebSocket and SSE, in most real-world situations, do not fall under the REST umbrella, but they can augment an existing REST service.
If your goal is to simply create a pub-sub system that uses a technology stack that people are familiar with, Websockets are a really good choice. It's widely available and works in browsers.
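For the numbers mentioned in the question (100+ clients, many messages per second), the fan-out is straightforward. Here is a minimal broadcast sketch using the Node ws package in TypeScript; the port and message shape are arbitrary:

import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

// Broadcast one payload to every currently connected client.
function broadcast(payload: unknown): void {
  const data = JSON.stringify(payload);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(data);
    }
  }
}

// Wherever the microservices hand over data (e.g. off a message queue),
// push it out to every listener.
setInterval(() => broadcast({ ts: Date.now(), value: Math.random() }), 100);

// Browser side:
//   const ws = new WebSocket("ws://host:8080");
//   ws.onmessage = e => handle(JSON.parse(e.data));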

Events in Zend Framework application

I'm looking for a reference to a good implementation of event driven architecture based on Zend Framework. Could you share your experience in this topic?
I've found two solutions, but haven't used them yet:
http://framework.zend.com/wiki/display/ZFPROP/Zend_Event+-+Alvar+Vilu
http://components.symfony-project.org/event-dispatcher/
Edit:
Example:
http://www.slideshare.net/beberlei/towards-the-cloud-eventdriven-architectures-in-php
I don't have much practical experience in this topic, but since no one else seems to be replying, I suppose I'll share what I think of this...
This is perhaps a bit of a tricky thing in PHP apps, since they typically only run for the duration of a request, so the benefit of being able to subscribe and listen to generic events during that short phase may not be very large.
However, I think there can be some benefits in allowing you to decouple your code more.
From what I can tell, the Symfony dispatcher looks better - mainly because it looks simpler.
I've used a sort of Dojo pubsub type system myself: basically you have an event publisher, to which classes can publish events. This is a sort of global event handling, where you don't subscribe to a specific class - instead you subscribe to a specific event, and it doesn't matter which class publishes that event.
The benefit of this vs. subscribing to a specific class is that the code is more decoupled: in my case it's a ZF app, and classes which subscribe to events can simply do so in the bootstrap, vs. having to set up subscriptions in controllers (or wherever the publishers are created).
The downside of this approach is that it can make dependencies between things harder to track. For example you only see an event publish call, but you have no idea what sort of things listen for it without digging further into the code.
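For reference, the mechanism itself is tiny. Here is a sketch of the "subscribe to an event name, not to a class" idea (in TypeScript rather than PHP, but the shape is the same; names are made up):

// Name-based (global) pub/sub: subscribers bind to an event name,
// not to the object that publishes it.
type Listener = (payload: unknown) => void;

class EventPublisher {
  private listeners = new Map<string, Listener[]>();

  subscribe(event: string, listener: Listener): void {
    const existing = this.listeners.get(event) ?? [];
    this.listeners.set(event, [...existing, listener]);
  }

  publish(event: string, payload: unknown): void {
    for (const listener of this.listeners.get(event) ?? []) {
      listener(payload);
    }
  }
}

// Subscriptions can all be set up in one place (e.g. at bootstrap)...
const bus = new EventPublisher();
bus.subscribe("user.registered", payload => console.log("send welcome mail", payload));

// ...and publishers never need to know who is listening.
bus.publish("user.registered", { email: "someone@example.com" });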
In my case I don't really know if the application has got any benefits from using this architecture - in fact I've several times considered removing it entirely and just using the objects which listen to the events directly.

Understanding FP in an enterprise application context (in Scala)

Most examples (if not all) that I see are of a function that does some sort of computation and finishes. In that respect, FP shines. However, I have trouble seeing how to apply it in the context of an enterprise application, where there are not many algorithms going on but a lot of data transfer and services.
So I'd like to ask how to implement the following problem in FP style.
I want to implement an events bus service. The service has a register method for registering listeners and publish for publishing events.
In an OO setting this is done by creating an EventBus interface with both methods. Then an implementation can use a list to hold listeners that is updated by register and used in publish. Of course this means register has a side effect. Spring can be used to create the class and pass its instance to publishers or subscribers of events.
How to model this in FP, given that clients of the event bus service are independent (e.g., not all are created in a "test" method)? As far as I can see this negates making register return a new instance of EventBus, since other clients already hold a reference to the old instance (and e.g., publishing to it will only publish to the listeners it knows of)
I prefer a solution to be in Scala.
I think you should have a closer look at functional reactive programming techniques. Since you want something in Scala, I suggest reading the "Deprecating the Observer Pattern" paper by Ingo Maier, Tiark Rompf and Martin Odersky.
The sketch of the solution is that publish should return IO[Unit]. Listeners should be iteratees. Registration also returns IO[Unit].
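Not Scala, but to make the shape of that concrete, here is a rough TypeScript sketch in which register and publish return descriptions of effects (plain thunks standing in for IO[Unit]) instead of performing them directly:

// A thunk standing in for IO[Unit]: a description of an effect, not the effect itself.
type IO<A> = () => A;

type BusListener<E> = (event: E) => IO<void>;

// The one place where mutable state lives; everything else only returns IO values.
function makeEventBus<E>() {
  let listeners: ReadonlyArray<BusListener<E>> = [];

  const register = (listener: BusListener<E>): IO<void> => () => {
    listeners = [...listeners, listener];
  };

  const publish = (event: E): IO<void> => () => {
    for (const listener of listeners) {
      listener(event)(); // run each listener's effect
    }
  };

  return { register, publish };
}

// Clients share the same bus value; nothing happens until the IO is actually run.
const bus = makeEventBus<string>();
const program: IO<void> = () => {
  bus.register(e => () => console.log("got", e))();
  bus.publish("hello")();
};
program();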

Why doesn’t Web Sockets use SOAP?

First off, I intend no hostility or negligence; I just want to know people's thoughts. I am looking into bi-directional communication between client and server, the client being a web application. At this point I have a few options: the MS-proprietary duplex binding (from what I hear, unreliable and unnatural), Comet, and WebSockets (for supported browsers).
I know this question has been asked in other ways here, but I have a more specific question to the approach. Considering web sockets are client-side, the client code sits in JavaScript. Is it really the intention to build a large chunk of an application directly in JavaScript? Why didn't W3C do this in web services? Wouldn't it be easier if we were to be able to use SOAP to provide a contract and define events along with the existing messaging involved? Just feels like the short end of the stick so far.
Why not make it simple and take advantage of JS dynamic nature and leave the bulk of code where it belongs....on the server?
Instead of
mysocket.send("AFunction|withparameters|segmented");
we could say
myServerObject.AFunction("that", "makessense");
and instead of
...
mysocket.onmessage = function() { alert("yay! an ambiguous message"); }
...
we could say
...
myServerObject.MeaningfulEvent = function(realData) { alert("Since I have realistic data...."); alert("Hello " + realData.FullName); }
...
HTML 5 took forever to take hold....did we waste a large amount of effort in the wrong direction? Thoughts?
Sounds to me like you've not yet fully grasped the concepts around Websockets. For example you say:
Considering web sockets are client-side
This is not the case: sockets have two sides, which you could think of as a server and a client; however, once the connection is established the distinction blurs - you can then also think of the client and the server as "peers", each able to write or read on the pipe that connects them (the socket connection) at any time. I suspect you'd benefit from learning a little more about how HTTP works on top of TCP - WebSockets is similar / analogous to HTTP in this way.
Regarding SOAP / WSDL, from the point of view of a conversation surrounding TCP / WebSocket / HTTP you can think of all SOAP / WSDL conversations as being identical to HTTP (i.e. normal web page traffic).
Finally, remember the stacked nature of network programming, for instance SOAP/WSDL looks like this:
SOAP/WSDL
--------- (sits atop)
HTTP
--------- (sits atop)
TCP
And WebSockets look like this
WebSocket
--------- (sits atop)
TCP
HTH.
JavaScript allows clients to communicate via HTTP with XMLHttpRequest. WebSockets extends this functionality to allow JavaScript to make arbitrary network I/O (not just HTTP), which is a logical extension and allows all sorts of applications that need to use TCP traffic (but might not be using the HTTP protocol) to be ported to JavaScript. I think it is rather logical that, as applications continue to move to the cloud, that HTML and JavaScript support everything that is available on the desktop.
While a server can do non-HTTP network I/O on behalf of a JavaScript client and make that communication available over HTTP, this is not always the most appropriate or efficient thing to do. For example, it would not make sense to add an additional round-trip cost when attempting to make an online SSH terminal. WebSockets makes it possible for JavaScript to talk directly to the SSH server.
As for the syntax, part of it is based on XMLHttpRequest. As has been pointed out in the other posting, WebSockets is a fairly low-level API that can be wrapped in a more understandable one. It is more important that WebSockets support all the necessary applications than that it have the most elegant syntax (sometimes focusing on the syntax can lead to more restrictive functionality). Library authors can always make this very general API more manageable to other application developers.
As you noted WebSockets has low overhead. The overhead is similar to normal TCP sockets: just two bytes more per frame compared to hundreds for AJAX/Comet.
Why low-level instead of some sort of built-in RPC functionality? Some thoughts:
It's not that hard to take an existing RPC protocol and layer it on a low-level socket protocol. You can't go the opposite direction and build a low-level connection if the RPC overhead is assumed.
WebSockets support is fairly trivial to add to multiple languages on the server side. The payload is just a UTF-8 string and pretty much every language has built-in efficient support for that. An RPC mechanism not so much. How do you handle data type conversions between Javascript and the target language? Do you need to add type hinting on the Javascript side? What about variable length arguments and/or argument lists? Do you build these mechanisms if the language doesn't have a good answer? Etc.
Which RPC mechanism would it be modeled after? Would you choose an existing one (SOAP, XML-RPC, JSON-RPC, Java RMI, AMF, RPyC, CORBA) or an entirely new one?
Once client support is fairly universal, then many services that have normal TCP socket will add WebSockets support (because it's fairly trivial to add). The same is not true if WebSockets was RPC based. Some existing services might add an RPC layer, but for the most part WebSockets services would be created from scratch.
For my noVNC project (VNC client using just Javascript, Canvas, WebSockets) the low-overhead nature of WebSockets is critical for achieving reasonable performance. Until VNC servers include WebSockets support, noVNC includes wsproxy which is a generic WebSockets to TCP socket proxy.
If you are thinking about implementing an interactive web application and you haven't decided on server-side language, then I suggest looking at Socket.IO which is a library for node (server-side Javascript using Google's V8 engine).
In addition to all the advantages of node (same language on both sides, very efficient, powerful libraries, etc.), Socket.IO gives you several things:
Provides both client and server framework library for handling connections.
Detects the best transport supported by both client and server. Transports include (from best to worst): native WebSockets, WebSockets using flash emulation, various AJAX models.
Consistent interface no matter what transport is used.
Automatic encode/decode of Javascript datatypes.
It wouldn't be that hard to create an RPC mechanism on top of Socket.IO, since both sides are the same language with the same native types.
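For illustration, here is a minimal Socket.IO sketch in TypeScript (the event name and port are made up):

// Server (Node + socket.io)
import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", socket => {
  // Receive structured data from the client; Socket.IO handles the encoding.
  socket.on("chat", (msg: { from: string; text: string }) => {
    io.emit("chat", msg); // broadcast to every connected client
  });
});

// Client (socket.io-client)
//   import { io as connect } from "socket.io-client";
//   const socket = connect("http://localhost:3000");
//   socket.emit("chat", { from: "me", text: "hello" });
//   socket.on("chat", msg => console.log(msg.from, msg.text));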
WebSocket makes Comet and all the other HTTP push-type techniques obsolete by allowing messages to originate from the server. It is kind of a sandboxed socket and gives us limited functionality.
However, the API is general enough for framework and library authors to improve on the interface in whichever way they desire. For example, you could write some RPC or RMI styled service on top of WebSockets that allows sending objects over the wire. Now internally they are being serialized in some unknown format, but the service user doesn't need to know and doesn't care.
So thinking from a spec author's POV, going from
mysocket.send("AFunction|withparameters|segmented");
to
myServerObject.AFunction("that", "makessense");
is comparatively easy and requires writing a small wrapper around WebSockets so that serialization and deserialization happens opaquely to the application. But going in the reverse direction means the spec authors need to make a much more complex API which makes for a weaker foundation for writing code on top of it.
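For illustration, here is a toy version of such a wrapper in TypeScript; the JSON framing format is invented purely to show how thin the layer can be:

// Turn raw WebSocket messages into method calls on a local handler object.
type Handlers = Record<string, (...args: any[]) => void>;

function makeRpcPeer(socket: WebSocket, handlers: Handlers) {
  // Incoming frames look like: { "method": "MeaningfulEvent", "args": [...] }
  socket.onmessage = event => {
    const { method, args } = JSON.parse(event.data);
    handlers[method]?.(...args);
  };

  // Outgoing calls: the method name and arguments are serialized opaquely.
  return {
    call(method: string, ...args: unknown[]): void {
      socket.send(JSON.stringify({ method, args }));
    },
  };
}

// Usage: instead of parsing "AFunction|withparameters|segmented" by hand...
const ws = new WebSocket("ws://example.com/socket"); // hypothetical endpoint
const peer = makeRpcPeer(ws, {
  MeaningfulEvent: realData => console.log("Hello " + realData.FullName),
});
ws.onopen = () => peer.call("AFunction", "that", "makessense");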
I ran into the same problem, where I needed to do something like call('AFunction', 'foo', 'bar') rather than serialize/deserialize every interaction by hand. My preference was also to leave the bulk of the code on the server and just use the Javascript to handle the view. WebSockets were a better fit because of their natural support for bi-directional communication. To simplify my app development, I built a layer on top of WebSockets to make remote method calls (like RPC).
I have published the RMI/RPC library at http://sourceforge.net/projects/rmiwebsocket/. Once the communication is setup between the web-page and the servlet, you can execute calls in either direction. The server uses reflection to call the appropriate method in the server-side object and client uses Javascript's 'call' method to call the appropriate function in the client-side object. The library uses Jackson to take care of the serialization/deserialization of various Java types to/from JSON.
WebSocket JSR was negotiated by a number of parties (Oracle, Apache, Eclipse, etc) all with very different agendas. It's just as well they stopped at message transport level and left higher level constructs out. If what you need is a Java to JavaScript RMI, check out the FERMI Framework.