More than a question, this is an architectural dilemma that I am facing.
Is it a good idea to have a REST wrapper around a Kafka producer and integrate with that, instead of integrating with the Kafka producer directly in my code? I could use a generic interface for my higher-level classes, instead of using the KafkaImpl directly, to keep things loosely coupled for the future.
If you have the option, I'd probably go for the pure-Kafka approach, since you'd get better throughput (the clients are very intelligent with respect to batching and futures).
I'm not sure you're decoupling your code by adding a REST wrapper; you're just adding another level of abstraction, increasing the maintenance burden, and hiding some of the benefits of Kafka.
If you really need to use REST, you can make use of the Kafka REST Proxy - no need to reinvent the wheel!
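If you do go the pure-Kafka route, the "generic interface" idea from the question still works well. Here is a minimal sketch in Scala against the standard Kafka Java client (the EventPublisher trait and all names are hypothetical): higher-level code depends only on the trait, so swapping implementations later stays cheap, while the producer's native batching is preserved.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Hypothetical port that higher-level classes depend on instead of KafkaImpl.
trait EventPublisher {
  def publish(topic: String, key: String, value: String): Unit
}

// Thin Kafka-backed implementation; send() is asynchronous and batched by the client.
final class KafkaEventPublisher(bootstrapServers: String) extends EventPublisher {
  private val props = new Properties()
  props.put("bootstrap.servers", bootstrapServers)
  props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

  private val producer = new KafkaProducer[String, String](props)

  override def publish(topic: String, key: String, value: String): Unit =
    producer.send(new ProducerRecord(topic, key, value))

  def close(): Unit = producer.close()
}
```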
As mentioned, kafka-rest-proxy will work.
I know plenty of people that wrap Kafka producers/consumers with Spring Kafka, Micronaut, Akka, Quarkus, or Lagom/Play, just to name a few. Spring, specifically, has messaging binders that can provide that "generic interface" feel.
These are all web frameworks, and putting an API/RPC abstraction layer on any code is definitely necessary in 12-factor applications.
I am considering different message-oriented middleware that I may use in our information system. I've been reading lots of papers and framework documentation, and I see that Kafka is not the first choice when we want to implement the Saga pattern.
As an example, I read MassTransit's riders documentation, and it says that "many of the concepts and idioms" used with classical transports (e.g. ActiveMQ, RabbitMQ) do not apply to Kafka.
What is not possible with Kafka that is usually possible with other MQs (e.g. ActiveMQ, RabbitMQ, etc.)?
What, from a transactional point of view, makes it more complicated to use Kafka in a Saga implementation?
I have been reading articles on the web about the benefits of GraphQL, but so far I have not been able to find a single benefit of it.
Some of the most common benefits mentioned in those articles are below:
No overfetching with GraphQL.
Reducing the number of calls made from the client side.
Granular control over data loading.
Evolving your API without versions.
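To illustrate the first two points concretely, here is what a single GraphQL query against a hypothetical schema might look like - the client names exactly the fields it wants and gets the user plus their orders in one round trip:

```graphql
# Hypothetical schema; only the listed fields are returned, in one request.
{
  user(id: "42") {
    name
    orders(status: OPEN) {
      id
      total
    }
  }
}
```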
These all make sense, but it is not GraphQL itself that provides these benefits. Any second-layer API written in Java, Python, or any other language would be able to provide them too. It is basically introducing another layer of abstraction above the data-retrieval systems (REST or whatever) and decoupling the client side from that layer. Once you do that, everything you can do with GraphQL can also be done with any other language.
Anyone could implement, say, a Scala server that retrieves data from various APIs, integrates it, creates objects internally, and feeds the client only the relevant part of the data, with total control over it (see the sketch below). Such an API can easily be versioned and released accordingly. Considering how cumbersome GraphQL's syntax is and how difficult it is to build a good cache around it, I can't see why you would really use it.
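A minimal Scala sketch of that hand-rolled aggregation layer (all types and fetch methods here are hypothetical stand-ins for real upstream calls):

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical upstream models and a summary DTO exposing only the relevant fields.
case class User(name: String)
case class Order(open: Boolean)
case class UserSummary(name: String, openOrderCount: Int)

class SummaryService(implicit ec: ExecutionContext) {
  // Stand-ins for calls to two different upstream APIs.
  def fetchUser(id: Long): Future[User] = Future.successful(User("alice"))
  def fetchOrders(id: Long): Future[Seq[Order]] = Future.successful(Seq(Order(open = true)))

  // Integrate both sources and feed the client only what it needs.
  def summary(id: Long): Future[UserSummary] =
    for {
      user   <- fetchUser(id)
      orders <- fetchOrders(id)
    } yield UserSummary(user.name, orders.count(_.open))
}
```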
So the overall question: are there any benefits of GraphQL that come from GraphQL itself, and not just from implementing another layer of abstraction between your applications and your APIs?
The set of best practices known as REST existed earlier, too.
GraphQL is more standardized than REST, safer (no injections), and its syntax gives great flexibility in the face of quickly changing client needs.
It's just a good standard of best practices.
I feel GraphQL is another example of overengineering. I would say the best standards and practices are "keeping it simple."
Breaking down an object and building a custom one before sending it to the client is very basic.
I am a Scala programmer and understand Akka from a developer's point of view. I have not looked into the Akka library's code. I have read about the two types of actors in the Akka model - thread-based and event-based - but, not having run Akka at large scale, I don't have experience configuring Akka for production. And I am completely new to Vert.x. So, from a choices perspective for building a reactive application stack, I want to know:
Are the message-passing models of Akka and Vert.x very different? How?
Are the data structures that Akka's actors and Vert.x's verticles use to buffer messages very different?
At a superficial level they're really similar, although I personally consider Vert.x's ideas closer to an MQ system than to Akka. The Vert.x topology is flatter: a verticle shares a message with another verticle and receives a response. Akka, instead, is more like a tree, where you have several actors and can supervise actors using other actors. For simple projects this may not be a big deal, but for big projects you may appreciate a more "hierarchic" system.
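To make the "tree" point concrete, here is a minimal sketch with Akka classic actors (all names are hypothetical): the parent creates the child, so it sits above it in the supervision hierarchy.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Child actor that does the actual work.
class Worker extends Actor {
  def receive = { case msg: String => sender() ! s"processed: $msg" }
}

// Parent actor: creates the child, supervises it, and forwards work to it.
class Supervisor extends Actor {
  private val worker = context.actorOf(Props[Worker], "worker")
  def receive = { case msg => worker forward msg }
}

object HierarchyDemo extends App {
  val system = ActorSystem("demo")
  val supervisor = system.actorOf(Props[Supervisor], "supervisor")
  supervisor ! "job-1"
}
```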
Vert.x, on the other hand, offers better interoperability between very popular languages*. For me that is a big point: where you would otherwise need to mix actors with an MQ system and deal with more complexity, Vert.x makes it simple and elegant. So, which is better? It depends: if your system will be built only in Scala, then Akka could be the best way; if you need communication with JavaScript, Ruby, Python, Java, etc., and don't need a complex hierarchy, then Vert.x is the way to go.
*(using JSON, which could be an advantage or a disadvantage)
You must also consider that Vert.x is a full solution: TCP, HTTP server, routing, even WebSockets! That is pretty amazing, because it offers a full stack and the API is very clean. If you choose Akka, you would need to use a framework like Play, Xitrum, or Spray. Personally, I don't like any of them.
Also remember that Vert.x is an unopinionated platform: you can use Akka or Kafka with it, for example, with almost no overhead. The way every part of the system is decoupled inside a verticle makes it very simple.
Vert.x is a big project with an amazing outlook, but it is really new; if you need a solution now, it may not be the better option. Fortunately, you can learn both and use both in the same project.
After doing a bit of Google searching, I figured out that a detailed comparison of Akka vs. Vert.x has not yet been done (at least I couldn't find one).
Computation model:
Vert.x is based on an event-driven model.
Akka is based on the actor model of concurrency.
Reactive Streams:
Vert.x has Reactive Streams built in.
Akka supports Reactive Streams via Akka Streams; its stream operators (via a Scala DSL) are very concise and clean (see the sketch after this list).
HTTP support:
Vert.x has built-in support for creating network services (HTTP, TCP, etc.).
Akka has Akka HTTP for that.
Scala support:
Vert.x is written in Java.
Akka is written in Scala, and it's fun to work with.
Remote services:
Vert.x supports remote services, but we need to create those services explicitly.
Akka has actors, which can be deployed anywhere on the network, with support for clustering, replication, load balancing, supervision, etc.
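As promised in the Reactive Streams item above, here is a tiny Akka Streams sketch of the Scala DSL (the pipeline itself is arbitrary):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

object StreamDemo extends App {
  implicit val system = ActorSystem("streams")
  implicit val materializer = ActorMaterializer()

  // A Reactive Streams-compliant pipeline in a few concise operators.
  Source(1 to 10)
    .map(_ * 2)
    .filter(_ % 3 == 0)
    .runWith(Sink.foreach(println))
}
```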
References:
https://groups.google.com/forum/#!topic/vertx/ppSKmBoOAoQ
https://blog.openshift.com/building-distributed-and-event-driven-applications-in-java-or-scala-with-akka-on-openshift/
https://en.wikipedia.org/wiki/Vert.x
http://akka.io/
I have not used Finagle or Akka in practice, but I have been reading a lot about them.
Finagle being an RPC system and Akka a toolkit for highly concurrent applications, why do people compare them as two possible solutions that cannot be used together? All the searches I've done propose using one or the other; no one proposes using them together.
Finagle, for example, has a very interesting way of defining endpoints via Thrift and its IDL. With this IDL we can define a custom endpoint and, through Scrooge or whatever code-generation tool, have a service with no effort. A client to connect to this service is also generated, with a lot of common client issues automatically resolved (reconnection, timeouts, retries, load balancing, connection pooling, ...).
Akka, instead, solves a lot of concurrency headaches, and it scales extremely well without all the complexity of hand-controlled threading.
In summary, why not use them together?
Finagle + Thrift (with its IDL): facilitates service design, development, and deployment (which includes ease of scaling out).
Akka: uses all the server's power through its actor system and scales extremely well if I change server properties (for example, if it's deployed on EC2 and I upgrade my node from m1.small to m1.large).
What do you think?
NOTE: Assume that the issue of mapping Futures and Promises is resolved, as well as the mismatch between FuturePools and ExecutionContexts. The pattern would be to convert Finagle futures to the Scala way of using Futures.
You are right in that service discovery and service implementation are orthogonal concerns, and I can follow your argument about using Finagle for the former and Akka for the latter. You could in principle use the two together without seeking a grand unification of futures, since you only need to send the service’s reply back to the requesting Actor in a message, i.e. you would need to add your own little “pipeTo” pattern on top of Twitter futures.
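A minimal sketch of that bridge, assuming Akka classic actors and com.twitter.util futures (UserServiceClient and getUser are hypothetical stand-ins for a Scrooge-generated client):

```scala
import akka.actor.Actor
import akka.pattern.pipe
import com.twitter.util.{Future => TwitterFuture, Return, Throw}
import scala.concurrent.{Future => ScalaFuture, Promise => ScalaPromise}

// Hypothetical Scrooge-generated Finagle client interface.
trait UserServiceClient { def getUser(id: Long): TwitterFuture[String] }

object FutureBridge {
  // Convert a Twitter Future into a Scala Future via a Promise.
  def toScala[A](tf: TwitterFuture[A]): ScalaFuture[A] = {
    val p = ScalaPromise[A]()
    tf.respond {
      case Return(a) => p.success(a)
      case Throw(e)  => p.failure(e)
    }
    p.future
  }
}

class UserLookup(client: UserServiceClient) extends Actor {
  import context.dispatcher // ExecutionContext required by pipeTo

  def receive = {
    case id: Long =>
      // sender() is captured now; the reply arrives later as a plain message.
      FutureBridge.toScala(client.getUser(id)) pipeTo sender()
  }
}
```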
Does anyone know of a good example of a GWT application with a non-Java backend? Something like the "Contacts" application on the official GWT pages. I'm interested in the following topics specifically (this is how I see the "back" part of the application, but it may be different, of course):
DTO serialization.
Communication layer (the very back one). Should it use generics and work with any abstract DTO? Is there any other proven approach?
Service layer, which uses the communication layer with specific DTOs.
Request caching. Should it be implemented at the service or the communication level?
Good abstraction, so we can easily substitute parts for testing and other purposes, like using different serializers (e.g. XML, JSON) or different server behaviors when managing user sessions (the URL might change once the user is logged in).
I know there are many similar topics here, but I haven't found one focused on the structure of the client part.
Use REST on both sides: a PHP/Python REST server side and a GWT client side.
Don't use GWT-RPC.
The following gives a guideline for Java-to-Java REST, but there is leeway to move away from server-side Java. It explains why REST is appropriate:
http://h2g2java.blessedgeek.com/2011/11/gwt-with-jax-rs-aka-rpcrest-part-0.html
REST is an industry-established pattern (Google, Yahoo - in fact, every stable-minded establishment deploys REST services).
REST is abstracted as an HTTP-level data structure, and Java, PHP, and Python all have established libraries for it to ensure DTO integrity.
Communication layer (the very back one). Should it use generics ???
I don't understand the question or why it exists. Just use the REST pattern to provide integrity between non-homogeneous languages on the server and client and for the HTTP request/response.
If you are using Java at the back end, there is no escape from using generics. Generics save code. But using generics extensively requires the programmer to have an equally extensive visual capacity to visualise them. If your back end is PHP or Python - are there generics for PHP, or Python generics? You might as well stay in Java land or C# land and forget about a Java-free service provider.
Did you mean DTO polymorphism? Don't try polymorphism, or decide on it, until after you have established your service. Then, adaptively and with agility, introduce polymorphism into your DTOs if you really see the need. But try to avoid it, because with JSON data interchange it gets rather confusing between server and client, especially if they don't speak the same programming language.
If you are asking about HTTP-level generics: I don't know of any framework - not SOAP, not REST - where you could have generics carried by the XML or JSON. Is there one?
Service layer? REST.
Request caching? Cache at every appropriate opportunity. Have the service provider cache query results for items common and static to all sessions, like menus, drop-down box choices, labels, etc. Cache your history and places.
On the GWT side, cache records so that the forward/backward buttons will not trigger inadvertent queries. Use the MVP pattern and history to manage history traversal that might trigger a redisplay of info.
If you are talking about unified info abstraction, you should start your project with JAX-RS to define and test the API and perform the data abstraction, without performing any business logic.
Then, once your HTTP-level APIs and DTOs are defined, convert the server side to the language of your choice and proceed to write more complex code.
BTW, I don't dig your terminology "Backend".
We normally use the terminology client-side for the service consumer, server-side for the service provider, backend for data repository/persistence access, and mid-tier or middleware for intervening/auxiliary software required to provide mathematical/scientific/graphical analysis/synthesis.
If our terminologies did not coincide, I probably answered this question wrong.