Health check API for message-driven beans (REST)

I'd like to be able to do a health check on a deployed Message-Driven Bean in production. My initial idea was to add a health() method ensuring that the JMS queue (for reading) and the database (for writing) are both available, and then expose this health method as a REST API. Unfortunately, as an MDB isn't injectable like the other types of EJBs, I cannot get a reference to it from my REST controller...
Is there a way to expose a message-driven bean's methods through a REST API? Or any other way to achieve my initial goal?
EDIT
A small clarification: I don't just want to check that the resources are available, but also that the EJB can communicate with them (by pinging them from inside the EJB instance). This would not only validate that the resources are available (which could indeed be done some other way), but, more importantly for me, also that the resource bindings are valid and that resource injection is working.

I think it's not possible the way you want to have it. The reason is that, unlike other EJBs, an MDB acts solely upon the arrival of a message and not upon any other call to it.
But you may do it the other way round and inject some class into the MDB, which you then call on every message you receive. That way you'd have a constant "I'm alive" ping, provided that you receive messages continuously.
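A minimal sketch of that idea, assuming a Java EE container with CDI; HealthTracker, OrderProcessorMDB, the queue lookup name and the REST path are made-up names, not anything prescribed by the spec:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.ejb.Singleton;
import javax.inject.Inject;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// Records the last time the MDB successfully handled a message. (Own file.)
@Singleton
public class HealthTracker {
    private volatile long lastMessageMillis = -1;

    public void recordHeartbeat() {
        lastMessageMillis = System.currentTimeMillis();
    }

    public long millisSinceLastMessage() {
        return lastMessageMillis < 0 ? -1 : System.currentTimeMillis() - lastMessageMillis;
    }
}

// The MDB itself; it reports a heartbeat after every processed message. (Own file.)
@MessageDriven(activationConfig =
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/ordersQueue"))
public class OrderProcessorMDB implements MessageListener {

    @Inject
    private HealthTracker tracker;

    @Override
    public void onMessage(Message message) {
        // ... normal message processing ...
        tracker.recordHeartbeat();
    }
}

// JAX-RS resource exposing the heartbeat as the requested health endpoint. (Own file.)
@Path("/health/mdb")
public class MdbHealthResource {

    @Inject
    private HealthTracker tracker;

    @GET
    public String health() {
        long age = tracker.millisSinceLastMessage();
        return age < 0 ? "no message processed yet" : "last message handled " + age + " ms ago";
    }
}
```

The caveat is the one already mentioned: the heartbeat only advances while messages keep arriving, so a quiet queue looks the same as a broken one unless you also compare against the expected traffic.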
Other than that, your only chance is to use the mechanisms of your container, which can usually provide some information about its deployed and running components that you can query.


Implementing REST using JDBC Tables

Currently we are implementing REST APIs using Spring Boot. Since our APIs are growing in number, we are thinking of a solution to implement the REST APIs using a different approach.
The approach is as below (a rough sketch follows the list):
Expose a single service to receive all the HTTP requests.
We will have the URIs configured in a database table to call the next set of services. These services are configured to listen for particular JMS messages.
The next set of services will receive the JMS messages and process the data.
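Roughly, the single entry point would look something like the sketch below (assuming Spring Boot, which we already use; GatewayController, RouteRepository and the routing-table lookup are placeholder names):

```java
import org.springframework.http.ResponseEntity;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical lookup backed by the routing table in the database.
interface RouteRepository {
    String findQueueForService(String service);
}

@RestController
public class GatewayController {

    private final RouteRepository routes;
    private final JmsTemplate jmsTemplate;

    public GatewayController(RouteRepository routes, JmsTemplate jmsTemplate) {
        this.routes = routes;
        this.jmsTemplate = jmsTemplate;
    }

    // Single endpoint receiving every request; the path segment decides which queue gets the payload.
    @PostMapping("/api/{service}")
    public ResponseEntity<String> dispatch(@PathVariable String service, @RequestBody String body) {
        String destination = routes.findQueueForService(service);
        if (destination == null) {
            return ResponseEntity.notFound().build();
        }
        jmsTemplate.convertAndSend(destination, body);
        return ResponseEntity.accepted().body("queued");
    }
}
```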
Below are my questions:
Will the above approach still represent the REST architecture?
What are the downsides of the above approach? Anything other than network latency (which we are already aware of)?
What REST architecture benefits will we be missing?
Or can we just say that our approach is the REST architecture done differently?
You're making two major choices, and each can be decided separately:
1) Having a single HTTP service
2) Using JMS as the communication between this service and the underlying microservices
Regarding #1, if you do this, you can no longer call your services REST, since the whole point of REST is to use HTTP verbs together with your domain objects for a predictable set of endpoints. GET on /objects means objects are being fetched, POST on /objects means a new object is being created, and so on. Now, this is OK, you can do it this way and it can work, though it will be "non-standard".
In fact, you might want to check out GraphQL (https://www.howtographql.com/basics/1-graphql-is-the-better-rest/), as it's pretty close to what you're trying to do.
These days, REST and GraphQL seem to be the two popular approaches.
Another way to do REST, if you're looking to simply expose REST services on your domain objects without having to write a lot of code, is Spring Data REST (https://spring.io/projects/spring-data-rest); if you're comfortable with Spring already, this should be pretty easy to understand.
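For a sense of how little code that takes, a sketch (the Customer entity and the repository name are invented; Spring Data REST derives the /customers endpoints from the repository interface):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// A plain JPA entity. (Own file.)
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// Spring Data REST exposes GET/POST/PUT/DELETE under /customers from this interface alone. (Own file.)
@RepositoryRestResource(path = "customers")
public interface CustomerRepository extends JpaRepository<Customer, Long> {
}
```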
For #2, your choice of communication between your single gateway service and the underlying services: do most of your calls require synchronous answers, such as a UI asking for data to display in a browser or on a phone? If so, JMS is not a good approach. JMS would be an OK approach if the majority of your services were asynchronous - for example, someone submitting a stock trade request. The UI would just need to know the request was submitted; it will actually be processed some time later and the result will be fetched asynchronously.
Without knowing much about your application, I would recommend sticking with HTTP between your services for simplicity's sake unless there is a good reason to switch to JMS.
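To make the asynchronous case concrete, the backend side of the stock-trade example could be a plain JMS listener that does the slow work after the HTTP call has already returned "accepted" (a sketch assuming Spring JMS with @EnableJms configured; the queue name is a placeholder):

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class TradeRequestProcessor {

    // Runs whenever a message lands on the queue, long after the client got its
    // "submitted" response; the result is stored so the client can fetch it later.
    @JmsListener(destination = "trade.requests")
    public void onTradeRequest(String payload) {
        // ... validate and execute the trade, then persist the outcome ...
        System.out.println("processing trade request: " + payload);
    }
}
```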

Sample REST Observable service and a remote subscriber client in Java 9/RxJava 2

Here is the background:
We have a cluster of 3 different services deployed on various containers (like Tomcat, TomEE, JBoss, etc.). Each of the services does one thing. For example, one service manages a common DB and provides REST services to CRUD the DB. One service puts some data into a JMS queue, and another service reads from the queue and updates the DB. There is a client app that makes a REST service call to one of the services, which sets off creating a row in the DB, pushing that row into a queue, and so on.
Question: We need to implement the client app so that we know at any given point in time where the processing is. How do I implement this in RxJava 2/Java 9?
First, you need to determine what functionality in RxJava 2 will benefit you.
Coordination between asynchronous sources. Since you have a) event-driven requests on one side, and b) network queries on the other side, this is a good fit so far.
Managing a stream of data, transforming and combining from one or more sources. You have given no indication that this is required.
Second, you need to determine what RxJava 2 does not provide:
Network connections. This is provided by your existing libraries.
Database management. Again, this is provided in your existing solutions.
Now, you have to decide whether the points in the first list add up to something you can benefit from, given the up-front cost of learning a new library.
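If you do adopt it, the coordination piece might look something like this sketch: two status lookups (placeholder method names standing in for the real REST and DB calls) run concurrently and are combined once both answers are in.

```java
import io.reactivex.Single;
import io.reactivex.schedulers.Schedulers;

public class ProcessingStatusCheck {

    // Placeholder lookups standing in for the real REST/DB queries.
    static String fetchRowStatusFromDb(long requestId) { return "ROW_CREATED"; }
    static String fetchQueueStatusFromService(long requestId) { return "QUEUED"; }

    public static void main(String[] args) {
        long requestId = 42L;

        // Run both lookups on IO threads so neither blocks the other.
        Single<String> dbStatus = Single
                .fromCallable(() -> fetchRowStatusFromDb(requestId))
                .subscribeOn(Schedulers.io());
        Single<String> queueStatus = Single
                .fromCallable(() -> fetchQueueStatusFromService(requestId))
                .subscribeOn(Schedulers.io());

        // Combine the two results once both have arrived.
        String combined = Single
                .zip(dbStatus, queueStatus, (db, q) -> "db=" + db + ", queue=" + q)
                .blockingGet();   // a real client would subscribe instead of blocking

        System.out.println(combined);
    }
}
```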

Why bother with service discovery when message oriented middleware does the job?

I get the problem that etcd/consul/$whatever are trying to solve. Service consumers need to talk to service providers, a hugely fluid distributed system needs a mechanism to marry the two.
However, the problem of "where do service consumers go with their requests?" is old and IMO has been solved with MOM -- message-oriented middleware.
In MOM, the idea is that service consumers do not care where the service providers live. They simply send a message and have the messaging bus take care of routing it to the appropriate provider. There can be multiple providers all doing the same thing (queue-based round-robin) or versioned providers (/v1/request goes to one, /v2/request goes to another).
This is a simple, powerful integration pattern that completely decouples a service interface from its implementation.
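In JMS terms the decoupling boils down to the sender only ever naming a logical destination (a sketch assuming a Java EE 7+ container; the lookup name and class name are invented):

```java
import javax.annotation.Resource;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class RequestSender {

    @Inject
    private JMSContext context;               // container-managed JMS 2.0 context

    @Resource(lookup = "jms/requests")        // logical destination name only
    private Queue requests;

    // The sender never knows which provider instance picks this up, where it
    // runs, or how many of them are listening.
    public void submit(String payload) {
        context.createProducer().send(requests, payload);
    }
}
```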
And yet I see this bizarre obsession with discovering service providers, which appears to create tight coupling between consumers and providers (in addition to a few other anti-patterns).
So, what am I missing here? TIA.
In MOM, everything flows through the bus, so it might become a bottleneck. With service discovery, a consumer looks up a producer "once" (ok it might have to check back again after a while), and then "directly" (ok could be through a proxy) talks to it.
Or if you prefer catchy phrases: smart endpoints & dumb pipes vs. (I guess) dumb endpoints & smart pipes.
Personally I don't see the two as an either/or for this type of architecture. You could use the service discovery to see what services are available at the moment and subscribe to the MOM for the events you then know will be there. If you can't find services you depend on, you can raise an alert. Not all MOMs let you know when there is no publisher for a channel.
You can also combine them in the way that service discovery is where you find the services you want to contact directly, for example a data store that does no job processing, while still using the MOM to subscribe to events for changes that other systems make. Not all use cases fit well with job queuing either, as some tasks must be solved synchronously, and then service discovery is a great way to have a dynamic environment.
I do prefer the asynchronous MQ myself, and I think that if you do it right, with load balancing, redundancy, clustering with separate readers and writers, etc., you can easily have great stability, scalability and a standardized way for all your components to communicate.

Delivering different kinds of protocols in an SOA architecture

I have a project that is currently in production delivering some web services using the REST approach. Right now, I need to deliver some of these web services over SOAP too (meaning I will need to deliver some of the same web services in SOAP, and others that are a bit different), so I ask you:
Should I incorporate the SOAP stack into the existing project (libraries, configuration files, ...), building another layer that delivers the data as envelopes (some people call it an "anti-corruption layer")?
Should I build another project sharing just the canonical model (turning it into a shared library)?
... Or how do you proceed in similar situations?
Please consider that our ideal target is an SOA architecture.
Thanks.
In our projects we have a facade layer which exposes the services and maps to business entities, and a business layer where the business logic is run.
So to add a SOAP end point for an existing service, we just create a new facade and call in to the same business logic.
In many cases it is even simpler: since we use WCF, we can have an HTTP SOAP endpoint for external clients and a binary TCP/IP endpoint for internal clients. The new endpoint can be added by changing the configuration without any need to change the code.
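In a Java stack, the same two-facade idea might look like the sketch below (assuming a container with CDI, JAX-RS and JAX-WS; OrderService and the facade names are invented, and each class would live in its own file):

```java
import javax.inject.Inject;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Shared business layer; both facades delegate here, so the logic is written once.
public class OrderService {
    public String findOrder(String id) {
        return "order " + id;   // real business logic would live here
    }
}

// REST facade (JAX-RS).
@Path("/orders")
public class OrderRestFacade {

    @Inject
    private OrderService orders;

    @GET
    @Path("{id}")
    public String get(@PathParam("id") String id) {
        return orders.findOrder(id);
    }
}

// SOAP facade (JAX-WS): same business call, different transport and contract.
@WebService
public class OrderSoapFacade {

    @Inject
    private OrderService orders;

    @WebMethod
    public String getOrder(String id) {
        return orders.findOrder(id);
    }
}
```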
The way I think about an SOA system, you have messages and pub/sub. The message is the interface. Getting those messages into and out of the system is an implementation detail. I create an endpoint that accepts a raw message document (more REST-like, but not really REST) as well as an endpoint that accepts the message as a single parameter to a SOAP call. The code that processes the incoming message is a separate concern from the HTTP endpoint enablement.
You can use an ESB for this, where the ESB receives the SOAP messages and sends the REST requests to the back end. WSO2 ESB provides this functionality. Please look at this sample [1].
[1] http://wso2.org/project/esb/java/4.0.0/docs/samples/proxy_samples.html#Sample152

Web Service multi-client basic question

I've been reading up on web services all day but I'm still missing a basic understanding of web services as they relate to multiple clients.
The web service runs on a web server. The service exposes various methods. Multiple clients may call the same service method simultaneously. Question: Does each client get its own copy of the method, or does the code in the method implementation have to start a thread for each client and process each client's request in its own thread? What am I missing?
Thanks in advance.
DP
It depends on configuration. In WCF you can configure a 'singleton', i.e. one service instance, which will work with all clients. Or you can set another value which will create a separate instance for each client calling it. You will definitely find more on MSDN.
Edit:
Check the InstanceContextMode setting on the ServiceBehavior attribute.
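For comparison, the same question on the Java side (a sketch; nothing here is WCF-specific): a servlet container creates one instance of the service class and dispatches concurrent client requests to it on threads supplied by the container, so shared fields must be thread-safe while local variables are naturally per-request.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// One instance of this class serves ALL clients; the container calls doGet()
// concurrently on many threads, one per in-flight request.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {

    // Shared across every client, hence the thread-safe type.
    private final AtomicLong requestCount = new AtomicLong();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        long n = requestCount.incrementAndGet();   // safe under concurrency
        resp.getWriter().println("Hello, you are request number " + n);
    }
}
```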