I am building a distributed system which consists of modules/applications with interfaces defined by protobuf messages.
Is it a good idea to expose those protobuf messages to a client directly? Or is it better to prepare a shared library, for each module, that translates a (let's assume) method-based interface into a protobuf-based one, so that clients aren't aware of protobuf at all?
It's neither a "good idea" nor a bad one. It depends on whether or not you want to impose protocol buffers onto your consumers. A large part of that decision is, then:
Who are your consumers? Do you mind exposing the protobuf specifics to them?
Will the clients be written in languages which have protobuf support?
My $0.02 is that this is a perfect use case for Protocol Buffers, since they were specifically designed with cross-system, cross-language interchange in mind. The .proto file makes for a concise, language-independent, thorough description of the data format. Of course, there are other similar/competing formats & libraries out there to consider (see: Thrift, Cap'n Proto, etc.) if you decide to head down this path.
If you are planning to define interfaces that take Google Protobuf message classes as arguments, then according to this and that section in Google's Protobuf documentation it is not a good idea to expose Protobuf messages to a client directly. In short, with every new version of Protobuf the generated code is likely not to be binary compatible with older code. So don't do it!
However, if you are planning to define interfaces that take byte arrays containing serialized Protobuf messages as function/method parameters then I totally agree with Matt Ball's answer.
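As a rough illustration of the second approach (a thin translation library), here is a hedged Python sketch in which all names (orders_pb2, OrderRequest, OrderReply, the transport) are hypothetical: callers see plain method arguments, and the generated Protobuf classes never cross the library boundary.

    # Hypothetical sketch: clients call plain methods; protobuf stays internal.
    from orders_pb2 import OrderRequest, OrderReply  # assumed generated module

    class OrderClient:
        def __init__(self, transport):
            # transport is anything with send(bytes) -> bytes, e.g. a socket wrapper
            self._transport = transport

        def place_order(self, item_id, quantity):
            request = OrderRequest(item_id=item_id, quantity=quantity)
            raw_reply = self._transport.send(request.SerializeToString())
            reply = OrderReply()
            reply.ParseFromString(raw_reply)
            return reply.status

Only serialized bytes travel across the module boundary, which is consistent with the compatibility concerns above.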
We have different services, split by domain. These services communicate via REST (sync) and Kafka (async).
However, the writers of these services used a common library for the logic that consumes records from Kafka, and individual services use this common library as a dependency.
I believe this is an anti-pattern for microservices.
All services depend on this common library; the library reads the consumer record value as a string (StringDeserializer) and then, based on the type of the message content, delegates it to the respective handler.
The common library is thus the entry point for event consumption: everything is deserialized to a string, and the string is then converted to specific event types using Gson.
The problem with this approach is that schema evolution of the services becomes a bottleneck. Individual services listen for certain events on some topic, but because everything is deserialized to a string, we are unable to use a schema registry for schema evolution.
After many attempts I have concluded that a common library is an evil for microservices, as it kills their independence.
The problem here almost certainly isn't the common library: you'd hit the same problem without it, because producers and consumers still need to agree on a schema. A schema registry makes it potentially easier to reach that agreement, but it doesn't really solve the problem (there are scenarios where the schema registry won't help you).
There are two deeper problems:
First (and this is almost certainly the bigger of the two), it sounds like you're using the same types both as wire types (for interservice communication) and as internal model types. This is what actually leads to the coupling. By separating the wire types from the model types, you do incur the overhead of translating between them, but "you don't get freedom for free" (Peart, 1976). What you gain is only having to agree on the wire type, which will change a lot less often than the internal model types as their respective services evolve.
Second, approaches to serialization that try to do things by magic based on implementation details of what you're serializing are intrinsically fragile. This is perhaps less of a problem with a wire type, but defining the wire type in a "schema-first" manner might be useful.
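To make the wire/model separation concrete, here is a minimal sketch (the types and fields are invented for illustration): the wire type is the only thing producers and consumers must agree on, and each service maps it to its own internal model at the boundary.

    # Hypothetical sketch: wire type = shared contract, model type = private.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class OrderPlacedWire:      # agreed with other services, changes rarely
        order_id: str
        amount_cents: int

    @dataclass
    class Order:                # owned by this service, free to evolve
        id: str
        amount_cents: int
        status: str = "NEW"

    def to_model(event: OrderPlacedWire) -> Order:
        # the translation overhead lives here, at the service boundary
        return Order(id=event.order_id, amount_cents=event.amount_cents)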
Who says the common library is needed? You could instead repeat all of the string-parsing logic over and over in every new consumer of that topic.
So, that approach definitely isn't any better.
Your thoughts aren't unique to Kafka or microservices, either. For example, a REST API uses OpenAPI and publishes a schema and a client dependency. Any HTTP "consumer" needs to depend on that API and client, and it's pinned at a specific version at runtime. If the API "producer" changes the server "events/schema", your "consumer" will fail.
The Schema Registry is also a shared dependency, plus the overhead of maintaining an external service, separate from the broker, that must have higher availability than the brokers themselves; otherwise your clients will drop events entirely. Also, the Schema Registry supports custom types, so evolution can still happen, even for strings, albeit with a lot of custom code.
If you want to store multiple types in one topic for use with the Schema Registry, you'd use subject naming strategies. Before that feature existed, though, the only way to do so was to write some switch-case in the consumer and wrap the string/bytes data in something like a CloudEvents object annotated with a type field.
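A hedged sketch of that older wrap-and-dispatch pattern (the event names and envelope layout are assumptions, not anything Kafka-specific): every record value carries a type field, and the consumer switches on it before handing the payload to a handler.

    # Hypothetical sketch: the record value arrives as a string, an envelope
    # declares the event type, and the consumer routes to a matching handler.
    import json

    HANDLERS = {}

    def handles(event_type):
        def register(handler):
            HANDLERS[event_type] = handler
            return handler
        return register

    @handles("order.placed")
    def on_order_placed(payload):
        print("order placed:", payload["order_id"])

    def dispatch(raw_value):
        envelope = json.loads(raw_value)           # deserialized from a string
        handler = HANDLERS.get(envelope["type"])   # the "switch-case" on type
        if handler is not None:
            handler(envelope["data"])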
Also worth pointing out: Kafka includes Jackson, so you shouldn't need Gson as an extra dependency.
With ZeroMQ PUB/SUB model, is it possible for a subscriber to filter based on contents of more than just the first frame?
For example, if we have a multi-frame message that contains three frames: 1) data type, 2) instrument, and 3) the actual data, is it possible to subscribe to a specific (data type, instrument) pair?
(All of the examples I've seen only show filtering based on the first frame of a multipart message.)
Different modes of ZeroMQ PUB/SUB filtering
The initial ZeroMQ model used subscriber-side, subscription-based filtering.
That was easier for the publisher, as it needed no subscriber-specific handling and simply fed all data to all SUB-s (yes, at the cost of wasted network traffic and a SUB-side workload that has to process all incoming BLOBs, all the way up from the PHY-media, even for topics it did not subscribe to. Yes, pretty expensive in low-latency designs).
PUB-side filtering was initially proposed for later implementation;
still, even that mode does not allow your idea to fly just by using the PUB/SUB Scalable Formal Communication Pattern.
ZeroMQ's design strives to inspire users to compose just-enough-designed distributed system behaviour.
As understood from your 1-2-3 intention, a small piece of custom logic will be just enough to achieve the desired processing.
So, how to approach the solution?
Do not hesitate to set up several messaging / control relays in parallel in your application domain-specific solution; this works much better for tailor-made solutions, and is far safer than trying to "bend" a library's primitive archetype (in this case the trivial PUB/SUB topic-based filtering) into doing something a bit different from what its original use-case was designed for.
This is all the more true if your domain-specific use is FOREX / equity trading, where latency is your worst enemy, so a successful approach has to minimise stream decoding and alternative branching as much as possible.
nanoseconds do matter
If one reads into the details of multiframe composition (a sender-side BLOB assembly), there is no latency-wise advantage, as the whole BLOB gets onto the wire only after it has been completed - so there is no advantage for your SUB-side if your idea was to handle the initial frame content as signalling, because the whole BLOB arrives "together".
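To make the "custom logic" idea above concrete, here is a hedged pyzmq sketch (endpoint, topic, and instrument values are all made up): let ZeroMQ's built-in prefix matching handle only the first frame (data type), and filter on the second frame (instrument) in application code.

    # Hypothetical SUB-side sketch: the subscription filters frame 1 only,
    # so frame 2 (instrument) is checked by the application itself.
    import zmq

    context = zmq.Context()
    sub = context.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")              # illustrative endpoint
    sub.setsockopt_string(zmq.SUBSCRIBE, "quotes")   # frame 1: data-type prefix

    WANTED_INSTRUMENT = b"EURUSD"                    # frame 2 we care about

    while True:
        data_type, instrument, payload = sub.recv_multipart()
        if instrument != WANTED_INSTRUMENT:          # custom, app-level filter
            continue
        print(data_type, instrument, payload)        # hand off to real processing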
I was reading an interesting blog post about Erlang/OTP and the actor model. I also hear that Scala supports the actor model. From the little I gathered so far, the actor model breaks down processing into components that communicate with each other by passing messages. Typically, those processes are immutable.
Are those features language-specific, though, or are they more at the architecture level? More specifically, can't you just implement the same actor model in almost any language, and just use some form of message queue to pass messages between worker processes (for example, use something like Celery)? Or is it that languages like Erlang and Scala simply do this transparently and much faster?
Certainly you can define an "Actor Library" in virtually any language, but in Erlang the model is baked into the language, and is really the only concurrency model available.
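For instance, a bare-bones actor can be hand-rolled in Python with a thread and a queue; this is only a sketch and has none of Erlang's isolation or supervision guarantees.

    # Rough sketch of a do-it-yourself actor: one thread, one mailbox, and
    # state that is only ever touched from inside the actor's own loop.
    import queue
    import threading

    class CounterActor:
        def __init__(self):
            self._mailbox = queue.Queue()
            self._count = 0                  # private state, never shared
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):             # the only public entry point
            self._mailbox.put(message)

        def _run(self):
            while True:
                message = self._mailbox.get()
                if message == "increment":
                    self._count += 1
                elif message == "report":
                    print("count =", self._count)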
While Scala's actor system is well implemented, at the end of the day it is still vulnerable to some hazards that Erlang is immune from. I'll draw your attention to this paper.
This would be the case for any Actor library implemented in any imperative language that supports shared mutable state.
An interesting exception to this is Node.js. Some work is being done with actors between Node processes that probably exhibits the same isolation properties as Erlang, simply because there is no shared mutable state.
The actor model is not limited to any specific platform or programming language; it's just a model, after all.
Erlang and Scala have really good and useful implementations of this model, which fit nicely into the typical technology stacks of those platforms and help to solve certain kinds of tasks effectively.
To add to the points mentioned above, the fact that in Erlang the actor model is the only way you can program makes your code scalable from the get-go. Erlang processes are lightweight, and you can spawn 10-100K of them on one machine (I don't think you can do that with Python); this changes the way you approach problems. For example, in our product we parse web server logs with Erlang and spawn an Erlang process to handle each line. That way, if one log line is corrupted, or the process that handles it crashes, nothing happens to the other ones.
Another difference is that when you start using OTP you get process supervisors, and you can link processes so that if one terminates, the others do too.
Other than that, Erlang has some other nice features (which can be found in other languages through libraries, but here, again, they are baked in) like pattern matching and hot deploy.
No, there is nothing language-specific about the Actor Model. In fact, you already mention Scala in your question, where actors are not part of the language but are instead implemented as a library. (Three competing libraries, actually.)
However, just like Functional Programming or Object-Oriented Programming, having direct support for Actor Programming, or at least support for some abstractions that make it easier to implement, in the language will lead to a very different programming experience. Anyone who has ever done Functional Programming or Object-Oriented Programming in C will probably understand this.
I'm interested in a certain situation. I have an object in C# that I would like to serialize and deserialize.
I'm kind of conducting an experiment. I'm trying to see if switching protobuf libraries will have any effect on the time it takes to serialize and deserialize an object. Additionally, I'm throwing XML serialization into the mix to see if it can compete as well, even though I'm pretty sure protobuf is faster.
In terms of speed, is there a clear, definite winner between protobuf and XML, assuming everything is done consistently (same path, parallel code, straightforward, etc.)? And would the speed be affected if I switched the library that protobuf uses to serialize and deserialize (from protobuf-net to the protobuf C# port)? I'm quite new to this, so I do not know the answer yet, but what I've heard is that protobuf is supposed to be smaller, faster, and easier than XML.
Any insight is greatly appreciated! Thanks! Off to write the tests now.
Will protobuf be quicker? Absolutely. I've profiled this many, many times, all with similar results; for example:
Performance Tests of Serializations used by WCF Bindings
https://stackoverflow.com/questions/1650419/protocol-buffers-net-protobuf-net-10x-slower-that-xml-serializer-how-come/1660990#1660990 (note the title here is misleading)
http://code.google.com/p/protobuf-net/wiki/Performance
http://www.servicestack.net/benchmarks/NorthwindDatabaseRowsSerialization.1000000-times.2010-02-06.html (doesn't include XmlSerializer, but includes most others for comparison)
and many passing comments from happy users. I honestly haven't compared to Jon's version in a long while, and I haven't done a direct v2 comparison, but here's a key point: in most cases the final bandwidth is the limiting factor in network performance, and both use the same wire format, so they should be pretty much identical there. Of course, protobuf is also demonstrably cheaper to read and write, but unless you are on a mobile device that is secondary.
The big difference between protobuf-net and the port is that the ported version (Jon's) adopts (quite reasonably) the protobuf approach (immutable/generated objects, etc.), which might make it hard to retrofit to an existing type model - you would have to introduce a separate DTO layer and map to it. Which isn't a big problem - simply a consideration. For that reason you might find it hard to do a direct comparison between XmlSerializer and the port; they both get your data there, but the routes are very different. Conversely, protobuf-net deliberately positions itself with a very similar API to XmlSerializer etc., so it is pretty easy to write a test suite using the same objects - just changing the serializer.
In my attempt to redesign an existing application using REST architectural style, I came across a problem which I would like to term as "Mediatype Explosion". However, I am not sure if this is really a problem or an inherent benefit of REST. To explain what I mean, take the following example
One tiny part of our application looks like:
collection-of-collections->collections-of-items->items
i.e. the top level is a collection of collections and each of these collections is again a collection of items.
Also, each item has 8 attributes which can be read and written individually. Trying to expose the above hierarchy as RESTful resources leaves me with the following media types:
application/vnd.mycompany.collection-of-collections+xml
application/vnd.mycompany.collection-of-items+xml
application/vnd.mycompany.item+xml
Furthermore, since each item has 8 attributes which can be read and written individually, this will result in another 8 media types. E.g. one such media type, for the "value" attribute of an item, would be:
application/vnd.mycompany.item_value+xml
As I mentioned earlier, this is just a tiny part of our application, and I expect several different collections and items that need to be exposed in this way.
My questions are:
Am I doing something wrong by having these huge number of media types?
What is the alternative design method to avoid this explosion of media types?
I am also aware that the design above is highly granular, especially in exposing individual attributes of the item and having separate media types for each of them. However, making it coarse means I will end up transferring unnecessary data over the wire when in reality the client only needs to read or write a single attribute of an item. How would you approach such a design issue?
One approach that would reduce the number of media types required is to use a media type defined to hold lists of other media-types. This could be used for all of your collections. Generally lists tend to have a consistent set of behavior.
You could roll your own vnd.mycompany.resourcelist or you could reuse something like an Atom collection.
With regards to the specific resource representations like vnd.mycompany.item, what you can do depends a whole lot on the characteristics of your client. Is it in a browser? can you do code-download? Is your client a rich UI, or is it a data processing client?
If the client is going to do specific data processing then you pretty much need to stick with the precise media types, and you may end up with a large number of them. But look on the bright side: you will have fewer media types than you would have namespaces if you were using SOAP!
Remember, the media-type is your contract, if your application needs to define lots of contracts with the client, then so be it.
However, I would not go as far as defining contracts to exchange single attribute values. If you feel the need to do that, then you are doing something else wrong in your design. Distributed interface design needs to have chunky conversations, not chatty ones.
I think I finally got the clarification I sought for the above question from Ian Robinson's presentation and thought I should share it here.
Recently, I came across the statement "media type for helping tune the hypermedia engine, schema for structure" in a blog entry by Jim Webber. I then found this presentation by Ian Robinson of Thoughtworks. This presentation is one of the best that I have come across for providing a very clear understanding of the roles and responsibilities of media types and schema languages (the entire presentation is a treat and I highly recommend it to all). Especially look out for the slides titled "You've Chosen application/xml, you bstrd." and "Custom media types". Ian clearly explains the different roles of schemas and media types. In short, this is my takeaway from Ian's presentation:
A media type description includes the processing model that identifies hypermedia controls and defines which methods are applicable to resources of that type. Identifying hypermedia controls answers the question "How do we identify links?": in XHTML, links are identified based on the tag, while RDF has different semantics for the same. The next thing media types help identify is which methods are applicable to resources of a given media type. A good example is the Atom (application/atom+xml) specification, which gives a very rich description of its hypermedia controls: it tells us how the link element is defined and what we can expect to be able to do when we dereference a URI, so it actually says something about the methods we can expect to apply to the resource. The structural information of a resource representation is NOT part of, or contained within, the media type description, but is provided by an appropriate schema for the actual representation, i.e. the media type specification won't necessarily dictate anything about the structure of the representation.
So what does this mean for us? Simply that we don't need a separate media type for describing each resource, as in my original question above. We just need one media type for the entire application. This could be a totally new custom media type, a custom media type which reuses existing standard media types, or better still, simply a standard media type that can be reused without change in our application.
Hope this helps.
In my opinion, this is the weak link of the REST concept. As an architectural and interface style, REST is outstanding and the work done by Roy F. and others has advanced the state of the art considerably. But there is an upper limit to what can be communicated (not just represented) by standard media types.
For people to understand and use your REST-ish API, they need to understand the meaning of the data. There are APIs where the media types tell most of the story; e.g. if you have a text-to-speech API where the input media type is text/plain and the output media type is audio/mp4, someone familiar with the subject matter could probably make do. Text in, audio out - probably enough to go on in this case.
But many APIs can't communicate much of their meaning with just a media type. Let's say you have an API that handles airline ticketing. The inputs and outputs will mostly be data. The media types on input and output of every API could be application/json or application/xml, so the media type doesn't transmit a lot of information. So then you would look at the individual fields in the inputs and outputs. Maybe there's a field called "price". Is that in dollars or pennies? USD or some other currency? I don't know how a user would answer those questions without either (a) very descriptive names, like "price_pennies_in_usd", or (b) documentation. Not to mention format conventions: is an account number provided with or without dashes, must letters be all-caps, and so on? There is no standard media type that defines these issues.
It's one thing when we're in situations where the client doesn't need a semantic understanding of the data. That works well. The fact that browsers can visually render any compliant document, and interact with any compliant resource, is really great. That's basically the "media" use case.
But it's entirely different when the client (or actually, the developer/user behind the client) needs to understand the semantics of the data. DATA IS NOT MEDIA. There is no way to explain data in all its real-world meaning and subtlety other than documenting it. This is the "data" use case.
The overly-academic definition of REST works in the media use case. It doesn't work, and needs to be supplemented with non-pure but useful things like documentation, for other use cases.
You're using the media type to convey details of your data that should be stored in the representation itself. So you could have just one media type, say "application/xml", and then your XML representations would look like:
<collection-of-collections>
  <collection-of-items>
    <item>
    </item>
    <item>
    </item>
  </collection-of-items>
  <collection-of-items>
    <item>
    </item>
    <item>
    </item>
  </collection-of-items>
</collection-of-collections>
If you're concerned about sending too much data, substitute JSON for XML. Another way to save on bytes written and read is to use gzip encoding, which cuts things down about 60-70%. Unless you have ultra-high performance needs, one of these approaches ought to work well for you. (For better performance, you could use very terse hand-crafted strings, or even drop down to a custom binary TCP/IP protocol.)
Edit: One of your concerns is that:
making [the representation] coarse means I will end up transferring unnecessary data over the wire when in reality the client only needs to read or write a single attribute of an item
In any web service there is quite a lot of overhead in sending messages (each HTTP request might cost several hundred bytes for the start line and request headers, and ditto for each HTTP response, as in this example). So in general you want less granular representations. You would therefore write your client to ask for these bigger representations and then cache them in some convenient in-memory data structure, where your program can read the data many times (but be sure to honor the HTTP expiration date your server sets). When writing data to the server, you would normally combine a set of changes to your in-memory data structure and then send the updates as a single HTTP PUT request to the server.
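As a small sketch of that read-cache-then-batch-write pattern, using Python's requests library (the URL and field names are invented for illustration): fetch one coarse representation, apply several changes in memory, then write everything back with a single PUT.

    # Hypothetical sketch: one GET caches the whole representation, several
    # local edits are combined, and one PUT carries all of them back.
    import requests

    ITEM_URL = "https://api.example.com/collections/42/items/7"

    item = requests.get(ITEM_URL).json()   # cache the representation locally

    item["value"] = 99                     # several edits, no extra round trips
    item["label"] = "updated"

    requests.put(ITEM_URL, json=item)      # one request writes all the changes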
You should grab a copy of Richardson and Ruby's RESTful Web Services, which is a truly excellent book on how to design REST web services and explains things much more clearly than I could. If you're working in Java I highly recommend the RESTlet framework, which very faithfully models the REST concepts. Roy Fielding's USC dissertation defining the REST principles may also be helpful.
A media type should be created seldom, and time should be invested in making sure the format can survive change.
As you're relying on XML, there is no particular reason why you couldn't create just one media type, provided that media type is described in one source.
Choosing ATOM over having one host media type that supports multiple root elements doesn't necessarily bring you anything: you'll still need to start reading the message within the context of a specific operation before deciding if enough information is present to process the request.
So I would suggest that you could happily have one media type, represented by one root element, and use a schema language to specify which elements it can contain.
In other words, a language like XSD can let you define your media type so that it supports one of multiple root elements. There is nothing inherently wrong with application/vnd.acme.humanresources+xml describing an XML document that can take one of several elements as its root.
So to answer your question: create as few media types as you can possibly afford, by questioning whether what you put in the documentation of the media type will be understandable and implementable by a developer.
Unless you intend on registering these media types, you should pick one of the existing MIME types instead of trying to make up your own formats. As Jim mentions, application/xml, text/xml, or application/json works for most of what gets transmitted in a REST design.
In reply to Darrel, here is Roy's full post. Aren't you trying to define typed resources by creating your own MIME types?
Suresh, why isn't HTTP+POX Restful?