How is the Vert.x event bus used? - vert.x

A question has been on my mind for several days now.
I have read several docs about Vert.x and Quarkus.
Which kinds of messages are sent through the event bus? Only received HTTP requests?
Does it make sense to configure a clustered event bus inside a Quarkus application? Is there any example?
I have not been able to shed any light on the questions above.
Any ideas?

You can send any type of message through the Vert.x Event Bus, as long as a codec exists for this particular type. There are several built-in codecs, but you can register custom codecs too.
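For example, here is a minimal sketch (assuming Vert.x 3.8+ and a hypothetical "greetings" address); types like String, Buffer and JsonObject already have built-in codecs:

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class EventBusExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Consume messages sent to a hypothetical "greetings" address
        vertx.eventBus().<JsonObject>consumer("greetings", message -> {
            System.out.println("Received: " + message.body().getString("text"));
            message.reply(new JsonObject().put("text", "Hello back!"));
        });

        // JsonObject, String, Buffer, primitives, etc. have built-in codecs;
        // custom types need a MessageCodec registered on the event bus
        vertx.eventBus().<JsonObject>request("greetings",
                new JsonObject().put("text", "Hello!"),
                reply -> {
                    if (reply.succeeded()) {
                        System.out.println("Reply: " + reply.result().body());
                    }
                });
    }
}
```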
Quarkus does not officially support clustering yet. You can make it work with some tweaks, though.
From my perspective (I'm a Vert.x development team member), it can be useful to have a clustered event bus inside Quarkus, just as it can be useful inside a Vert.x application. There are plenty of articles on the web about use cases.
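For illustration, a clustered event bus in plain Vert.x looks roughly like this (a sketch assuming the vertx-hazelcast cluster manager is on the classpath; with Quarkus the setup would be part of the aforementioned tweaks rather than this exact code):

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class ClusteredEventBusExample {
    public static void main(String[] args) {
        // Hazelcast is one of several supported cluster managers
        VertxOptions options = new VertxOptions()
                .setClusterManager(new HazelcastClusterManager());

        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // Consumers and publishers on any node of the cluster see these messages
                vertx.eventBus().consumer("cluster.news",
                        msg -> System.out.println("Got: " + msg.body()));
                vertx.eventBus().publish("cluster.news", "hello from this node");
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
```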

Related

Is there an OpenAPI type specification for Kafka or similar technology?

OpenAPI is good for RESTful services, and at the moment I'm hacking it to cover an asynchronous messaging system (specifically Kafka) by using POST to a /topic, so that I can use Redoc to create a website for the API.
I am trying to see if there is already an established system for documenting this, especially since the GET /events endpoint used for event sourcing is getting larger and larger by the day.
It seems AsyncAPI is basically what you are looking for: OpenAPI, but for topics instead of REST endpoints.
https://www.asyncapi.com/docs/getting-started/coming-from-openapi/
CloudEvents is a CNCF-backed specification for describing events in a common way, and one of its protocol bindings is for Kafka:
https://github.com/cloudevents/sdk-java/blob/master/kafka/README.md
If you want a REST API, look at the Kafka REST Proxy
Consider using Protocol Buffers within Kafka.
https://developers.google.com/protocol-buffers/
Protocol Buffers require an API contract (".proto" file) if you want to call or implement a service. The contracts are both human and machine readable.
Protocol Buffers can also be used with other messaging systems and other protocols like HTTP (check out "gRPC" for that). So your documentation / contract is more portable.
Of course, this only works for projects that have the flexibility to change their payload format.
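To illustrate what such a contract looks like, here is a small, hypothetical .proto sketch for events flowing through a Kafka topic (the message and field names are made up for the example):

```proto
// Hypothetical contract for events published to a Kafka topic
syntax = "proto3";

package events;

message OrderPlaced {
  string order_id     = 1;
  string customer_id  = 2;
  int64  placed_at    = 3; // epoch milliseconds
  repeated LineItem items = 4;
}

message LineItem {
  string sku      = 1;
  int32  quantity = 2;
}
```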

How to write a Kafka REST Proxy server/client for production use

I need to develop a REST API which can read published messages from a Kafka cluster into a data warehouse application.
Materials available on the internet say to use POST/GET commands, but I don't think this is for production use; rather, it seems useful for testing purposes.
How do I implement it in Scala/Java?
Materials available on the internet say to use POST/GET commands, but I don't think this is for production use; rather, it seems useful for testing purposes
Please link to where you read this... All production web services operate over (more than) these two HTTP methods, hundreds of thousands of times a day...
If you really want to use Kafka for throughput, though, you wouldn't "hide it" behind a REST interface. You would distribute SSL certificates plus usernames and passwords to remote clients, for example.
Need to develop a REST API which can read published messages from Kafka
REST is not meant to keep an open connection, primarily because it is stateless (it shouldn't maintain where you are reading from in Kafka)... It would make more sense to forward a WebSocket from a Kafka consumer, which is different from a REST API.
How do I implement it in Scala/Java?
The Confluent REST Proxy is already written in Java, and it is open source (and used in production at several companies, I believe). If you need inspiration, you can start there. Otherwise, you can find examples of Kafka integration for Spring and Vert.x, for example, in their respective documentation, but you'll be re-implementing a lot of the existing functionality.
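As a minimal sketch of the "WebSocket in front of a Kafka consumer" idea (assuming the Vert.x Kafka client and Vert.x 4, with hypothetical topic, address, and broker settings):

```java
import io.vertx.core.Vertx;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.kafka.client.consumer.KafkaConsumer;

import java.util.HashMap;
import java.util.Map;

public class KafkaWebSocketBridge {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Hypothetical consumer settings; adjust brokers, group id and topic for your cluster
        Map<String, String> config = new HashMap<>();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        config.put("group.id", "ws-bridge");
        config.put("auto.offset.reset", "earliest");

        KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

        // Re-publish every Kafka record on the local event bus...
        consumer.handler(record -> vertx.eventBus().publish("events", record.value()));
        consumer.subscribe("events-topic");

        // ...and stream it to every connected WebSocket client
        vertx.createHttpServer()
                .webSocketHandler(ws -> {
                    MessageConsumer<String> sub = vertx.eventBus().consumer("events",
                            msg -> ws.writeTextMessage(msg.body()));
                    ws.closeHandler(v -> sub.unregister());
                })
                .listen(8080);
    }
}
```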

Are Retrofit and OkHttp suitable for Java EE/Server-side use?

I like the APIs of the Retrofit and OkHttp REST/HTTP libraries from Square. I am evaluating options for writing a server-side REST client. For each request to my SOAP-based web service, I have to consume another, RESTful web service, hence my need for a REST client.
My question is: are Retrofit and OkHttp suitable for server-side use in a highly concurrent web app, or are there likely to be issues, known or otherwise, stemming from these APIs having been designed primarily for use outside the server side?
Reading the documentation and perusing the code, nothing jumped out at me to indicate that these libraries would not be suitable. But I don't want to be a guinea pig either. Has anyone experienced any issues with server-side use under high load/concurrency? Had success? Anyone from the dev teams for those libraries care to comment? ;)
We use OkHttp on the Square Cash server and we haven't had problems.
Some of the default settings are not suitable for server-side usage; for example, the maximum number of concurrent requests per host defaults to 5.
There is some discussion on this at https://github.com/square/okhttp/issues/4354.
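A minimal sketch of how those defaults can be raised (the numbers here are hypothetical and should be sized for your own workload):

```java
import okhttp3.ConnectionPool;
import okhttp3.Dispatcher;
import okhttp3.OkHttpClient;

import java.util.concurrent.TimeUnit;

public class ServerSideOkHttp {
    public static OkHttpClient newClient() {
        // Defaults are 64 requests in total and 5 per host, tuned for mobile clients
        Dispatcher dispatcher = new Dispatcher();
        dispatcher.setMaxRequests(200);
        dispatcher.setMaxRequestsPerHost(50);

        return new OkHttpClient.Builder()
                .dispatcher(dispatcher)
                .connectionPool(new ConnectionPool(50, 5, TimeUnit.MINUTES))
                .connectTimeout(2, TimeUnit.SECONDS)
                .readTimeout(5, TimeUnit.SECONDS)
                .build();
    }
}
```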
In the microservices architecture world (using the Spring Framework), Retrofit/OkHttp may not be a good fit as a REST client for inter-service communication. Using WebClient/RestTemplate has at least the following advantages over using Retrofit for the same purpose:
RestTemplate/WebClient can easily be configured to use client-side load balancing (Ribbon), so that requests are rotated among the various instances of another microservice.
Hystrix can easily be configured with RestTemplate, increasing the fault tolerance (circuit breaker pattern) of the overall system with respect to inter-service communication.
Service discovery can easily be configured using Eureka or Consul, so the client does not need to know the host/port/protocol of the target web service. All we need is to enable the discovery client.
Alternatively, you can also explore Feign, which is a declarative web service client similar to Retrofit, but with all the advantages of RestTemplate.
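As a sketch of that declarative style (assuming Spring Cloud OpenFeign, an application annotated with @EnableFeignClients, and a hypothetical "inventory" service registered in service discovery):

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Spring Cloud resolves the "inventory" host/port via Eureka/Consul and load-balances the calls
@FeignClient(name = "inventory")
public interface InventoryClient {

    // Declarative endpoint mapping; no manual HTTP plumbing on the caller's side
    @GetMapping("/items/{id}")
    String findItem(@PathVariable("id") String id);
}
```

Injecting InventoryClient anywhere in the application then gives you a ready-to-use HTTP client.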
You can also have a look at the following article:
https://www.javacodemonk.com/retrofit-vs-feignclient-on-server-side-with-spring-cloud-d7f199c4

What is CometD? Why is it used, and how do I work with it?

I'm just a beginner with CometD, and I'm interested in learning what CometD is and what it is used for. I googled it and found some resources, including the following link:
1. http://docs.cometd.org/reference/installation.html#d0e346
I tried out the given demo but was not able to get the expected output from it. Can anybody post some resource URLs so that I can learn?
Disclaimer: I'm the CometD project leader.
CometD is a set of libraries for writing web applications that perform messaging over the web.
Whenever you need to write applications where clients need to react to server-side events, then CometD is a very good choice. Think chat applications, online games, monitoring consoles, collaboration tools, stock trading, etc.
See more at the preface.
CometD ships a JavaScript client library, a Java client library and a Java server library.
This allows you to write applications in the browser with fine-grained logic and control on the server.
The server library, being in Java, leverages the high scalability of the JVM and the powerful asynchronous I/O API that the JVM and the Servlet specification provide.
CometD is transport agnostic: you write your applications using high level APIs, and CometD takes care of delivering the messages over the wire using the best transport available: WebSocket or HTTP, also providing a transparent fallback in case WebSocket does not work.
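As a rough illustration of those high-level APIs, here is a minimal Java client sketch (assuming the CometD 3.x Java client with Jetty's HttpClient, and a hypothetical /chat/demo channel; package names differ slightly in later CometD versions):

```java
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

import java.util.HashMap;
import java.util.Map;

public class CometDClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Hypothetical CometD endpoint; the URL depends on how the server is deployed
        BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd",
                new LongPollingTransport(new HashMap<>(), httpClient));

        client.handshake();
        if (client.waitFor(5000, BayeuxClient.State.CONNECTED)) {
            // React to server-side events published on the channel
            client.getChannel("/chat/demo").subscribe((channel, message) ->
                    System.out.println("Received: " + message.getData()));

            // Publish a message to the same channel
            Map<String, Object> data = new HashMap<>();
            data.put("user", "alice");
            data.put("text", "Hello CometD!");
            client.getChannel("/chat/demo").publish(data);
        }
    }
}
```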
CometD provides a clustering solution called Oort that allows you to scale your web applications horizontally.
CometD comes with a ton of features and extensive documentation, along with tutorials and demos you can use as a starting point for your project.
Join CometD to start hacking on your CometD-based web applications.
The CometD tutorials are written for CometD 2.x, but a port to CometD 3.x (the current version of CometD) is underway, so that requires a bit of patience.
But you can start right away by following the primer and deploying the demos.
I hope you can get started with CometD with the above references.
Drop an email on the mailing lists for any help you may need.

Need opinion regarding design/architecture of a web application

I am working on a web application which needs to get data from some local and some non-local resources and then display it. As it could take an arbitrary amount of time to get the data from these resources, I am thinking of using the actor model, so that each actor is responsible for getting data from its respective resource. The request thread will wait for each actor to finish its task and then use Ajax to update only the portion of the web page that depends on that data. This way the user starts seeing the data as soon as it is received, rather than waiting for all of it before getting a first look at the data.
I am planning to look into the Scala/Lift framework for this. I have read some articles on the web about Scala/Lift and want to find out whether this is the correct way to approach the problem, and whether Scala/Lift is a good choice of platform. I have worked in Java and C# previously. Any opinions, comments, or suggestions are welcome.
Thanks,
gary
Take a look at message queue technology like Java's JMS. Message queues allow you to handle long-running background tasks asynchronously and reliably. This is the technique sites like Flickr and YouTube use to do media transcoding asynchronously. You can use a Java EE server, or a JMS technology like Apache's ActiveMQ, and then layer your Scala/Lift code on top of it.
Richard Monson-Haefel's book on JMS covers it well.
For more general help with web site scaling and construction, take a look at Todd Hoff's excellent blog, highscalability.com/. There are some good pointers there on using message queues to offload long-running tasks this way.
BTW, Twitter uses Scala for something much like what you're considering. Here's an interview with some of their developers; they describe one way they use Scala:
Robey Pointer: A lot of our architecture is based on letting Rails do what it does best, which is the AJAX, the web front ends, the website—what the user sees. Anything we can offload out of the request/response cycle, we do. So we queue those tasks into a messaging system and have back-end daemons handle them.
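A minimal sketch of the queue-based hand-off suggested above (assuming ActiveMQ as the JMS provider and a hypothetical "background-tasks" queue):

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TaskQueueExample {
    public static void main(String[] args) throws Exception {
        // Connect to a local ActiveMQ broker (hypothetical address)
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("background-tasks");

        // The web request only enqueues the work; a back-end daemon consumes and processes it
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("transcode:video-123"));

        connection.close();
    }
}
```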
If the non-local resources originate from some other service or system, an event-driven architecture might work for you. Instead of pulling from the non-local resources, you could set up this web application as a subscriber to the events published by these services. Upon receiving a message relevant to part of its functionality, it would locally cache the data it's interested in. This should let you avoid the issue of asynchronously updating parts of the page (all data would be accessible locally).
Udi Dahan blogs about this approach a lot and is also an author of a .NET message bus (NServiceBus) that can be used in such scenarios. See for example http://msdn.microsoft.com/en-us/architecture/aa699424.aspx
Actors would be a way to go. You're essentially setting up a lightweight version of JMS. And Lift does the Comet stuff very well.
In addition to the Scala actors and the Lift actors, you also have Akka actors. When Scala Swarm becomes production-ready, you'll be ready for that too.
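If you go the actor route, here is a minimal sketch using Akka's classic Java API (the resource-fetching actor and URL are hypothetical; the real version would call your local and remote resources):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class PageDataActors {

    // One actor per resource: it fetches the data and hands the result onward as soon as it is done
    public static class ResourceFetcher extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(String.class, url -> {
                        String data = fetch(url);                // potentially slow remote call
                        System.out.println("Fetched: " + data);  // in the real app: push to the page renderer
                    })
                    .build();
        }

        private String fetch(String url) {
            return "data from " + url; // placeholder for the actual resource call
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("page-data");
        ActorRef fetcher = system.actorOf(Props.create(ResourceFetcher.class), "fetcher");
        fetcher.tell("https://example.com/resource", ActorRef.noSender());
    }
}
```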
If the delayed information is distinct from the information that needs to be displayed immediately, with Lift you can use a LazyLoad snippet that has the longer-running computation (calling the web service) as part of its logic. Lift will take care of inserting it into the page when it's ready.