Logging microservice requests and responses to MongoDB asynchronously using AOP or Solace? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 1 year ago.
I have multiple Java REST microservice APIs, and I would like to log their requests and responses to MongoDB. Do I create a separate logging API with an asynchronous service method and call it from all the other microservice controller classes using AOP? Or do I use an event broker like Solace/Kafka, where the microservices publish the logs to a topic and a separate service picks them up and stores them in MongoDB?
Which is the better way? I can afford to lose some logs before they are stored in MongoDB, but I cannot afford to affect the performance of my microservices.

There are definitely advantages to using an event broker to handle log data, since it can serve as a buffer during times when the logging API is slow or unavailable. Note that AOP could also be used with an event broker; it would just use an event endpoint rather than an HTTPS endpoint.
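Either way, the key is to hand the log entry off without blocking the request thread. Below is a minimal, stdlib-only sketch (class and method names are illustrative, not from any framework) of a fire-and-forget logger backed by a bounded queue: a background worker drains entries to whatever sink you choose (an HTTP call, a broker publish, or MongoDB directly), and entries are dropped rather than blocking when the buffer is full, matching the "can lose some logs, can't lose performance" requirement:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Hypothetical async logger: callers never block; logs may be dropped under load.
class AsyncRequestLogger {
    private final BlockingQueue<String> queue;
    private final Thread worker;

    AsyncRequestLogger(int capacity, Consumer<String> sink) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        this.worker = new Thread(() -> {
            try {
                while (true) sink.accept(queue.take()); // drain to MongoDB/broker/etc.
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    /** Non-blocking: returns false (and drops the entry) if the buffer is full. */
    boolean log(String entry) {
        return queue.offer(entry);
    }
}
```

An AOP advice around your controller methods would simply call log(...), so the HTTP response is never delayed by persistence.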
A couple other related points:
Have you considered persistence layers other than MongoDB? OpenTelemetry backends are made to address exactly the sort of use case you have, and provide some very useful tooling for auditing/troubleshooting microservices.
Rather than using REST, how about connecting the microservices themselves through an event broker? It could provide some very nice performance benefits and make your microservices more agile.
Best,
Jesse

Related

What is the benefit of asynchronous communication over synchronous communication between microservices? [closed]

Closed 4 months ago.
All of the tutorials suggest that we should use async communication, using something like Kafka, over direct synchronous HTTP-based communication between microservices.
Can somebody explain why, and how async communication would happen using Kafka?
The question is very broad. I would argue that the main aim of using async communication is to separate domain boundaries, but there are other benefits too: tolerating partition failures and absorbing spikes without bringing down the system.
Imagine a purchase on an online shop:
Payments needs to be processed.
Check for fraudulent operations.
An invoice needs to be created.
A fulfillment order needs to be created in the warehouse.
Purchase information has to be sent to analytics.
Update tailored product suggestions.
And probably a few tens more of things have to happen after an order is placed.
Some of these, on the critical path, might need to happen synchronously (e.g. taking a payment), but all of the others can happen asynchronously. Kafka is just a message broker (check the docs or the free book Kafka: The Definitive Guide to learn how it works).
It's also possible to build a platform that is fully asynchronous (e.g. using request-reply pattern).
An incredibly good explanation of messaging, with examples, is in the book Enterprise Integration Patterns. It was published almost 20 years ago, but everything in there is still current.
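The fan-out described above can be illustrated without Kafka itself. The toy in-memory topic below (purely illustrative; Kafka provides the same model durably and across processes) shows the essential decoupling: the order service publishes once, and independent consumers such as invoicing and analytics each react on their own, so adding a consumer never changes the publisher:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy stand-in for a broker topic: publishers don't know who consumes.
class Topic<T> {
    private final List<Consumer<T>> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer<T> handler) {
        subscribers.add(handler);
    }

    void publish(T event) {
        // A real broker would persist the event and deliver it asynchronously.
        for (Consumer<T> s : subscribers) s.accept(event);
    }
}
```

Wiring up the order flow then looks like `orderPlaced.subscribe(invoiceService::create)` and `orderPlaced.subscribe(analytics::record)`; the code that calls `publish` stays the same no matter how many downstream steps exist.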

Which Messaging System to be used? [closed]

Closed 1 year ago.
I would like to transfer data from one DB system to any other DB system. Which messaging system (Kafka, ActiveMQ, RabbitMQ, and the like) would be better for achieving this with high throughput and performance?
I guess the answer for this type of question is "it depends".
You can probably find a lot of information on the internet comparing these message brokers.
As far as I can share from our experience and knowledge, Kafka and its ecosystem tools, like Kafka Connect, provide the behavior you are asking for through source connectors and sink connectors, with Kafka in the middle.
Kafka Connect is a framework that allows adding plugins called connectors:
Source connectors read from a source store and write to Kafka.
Sink connectors read from Kafka and send that data to a target system.
Using Kafka Connect is "no code": you call a REST API to set the configuration of the connectors.
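For example, registering a source connector is a single call to the Connect REST API (POST /connectors). The fragment below is illustrative: it assumes the Confluent JDBC source connector plugin is installed, and the connection URL, table, and topic names are placeholders:

```json
{
  "name": "orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://source-db:5432/shop",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders",
    "topic.prefix": "db-"
  }
}
```

A matching sink connector (e.g. for MongoDB or another database) would then read from the resulting topic and write to the target system.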
Kafka is a distributed system that supports very high throughput with low latency. It supports near-real-time streaming of data.
Kafka is highly adopted by the biggest companies around the world.
There are many tools and vendors that support your use case; they vary in price and support. It depends on which sources you need to take data from and which targets you wish to write to, and on whether it should be CDC/near real time or a "batch" copy.

Is consistent hashing useless if our server uses RESTful APIs? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
So RESTful APIs are stateless wherein the server does not store any state about the client session on the server side.
And consistent hashing in load balancing is used to associate a client with a server, i.e., all requests from a given client are directed to one particular server (amongst a group of servers) because that server has some data stored about that client.
So, if our server uses RESTful APIs then is there no need for consistent hashing while load balancing?
Not necessarily. While RESTful APIs are stateless, your server isn't. Server-side caching doesn't violate the constraints of REST. If a server is able to keep information from a client in its cache, it could make a significant difference if future requests are made to that server instead of to another one which may need to perform more work to retrieve the client's data.
It is very situational, however, so I can't speak to your specific server setup!
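To make the idea concrete, here is a minimal consistent-hash ring (an illustrative sketch, not tied to any particular load balancer; real implementations typically use a stronger hash such as MD5 or Murmur rather than hashCode()). Virtual nodes smooth the distribution, and removing one server only remaps the keys that server owned, so other clients keep hitting their warm caches:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: maps each client key to a server,
// so repeated requests from the same client hit the same (warm) cache.
class ConsistentHashRing {
    private static final int VNODES = 100; // virtual nodes per server smooth the spread
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    void addServer(String server) {
        for (int i = 0; i < VNODES; i++)
            ring.put((server + "#" + i).hashCode(), server);
    }

    void removeServer(String server) {
        for (int i = 0; i < VNODES; i++)
            ring.remove((server + "#" + i).hashCode());
    }

    String serverFor(String clientKey) {
        // Walk clockwise: the first virtual node at or after the key's hash owns it.
        SortedMap<Integer, String> tail = ring.tailMap(clientKey.hashCode());
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }
}
```

The property the answer relies on: after removing an unrelated server, `serverFor` still returns the same node for an unaffected client, so its cached data stays useful.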

Should HTTP Based Micro Services always be Rest [closed]

Closed 5 months ago.
I'm currently developing a micro service that basically provides calculation services to other micro services. It does not store data or have any resources like a sales order. It only calls other micro services and then calculates metrics and prices to return a result.
I'm kind of struggling trying to make a REST API with resource names that are nouns when all I do is calculate stuff and return results (more like an action).
So can we have a microservice that behaves more like a plain HTTP API than a RESTful service? Is that bad practice, an anti-pattern, an architecture smell, ...?
Regards
You can use whatever you want, and in your particular case I am pretty sure you won't see any drawbacks. From my point of view, the only difference from REST is mostly semantic (some people may also argue about cacheability, but I don't think so).
REST/RPC aside, creating microservices without any actual domain could cause a maintenance issue in the long run, as you totally depend on some other microservices' data: whenever a change is required on their side, you may also need to change this microservice. That is why I don't recommend those kinds of calculation services unless you have a valid scalability requirement.
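If you do go the plain-HTTP route, naming the endpoint after the action is usually clearer than forcing a noun. A minimal sketch using only the JDK's built-in com.sun.net.httpserver (the path and the hard-coded response are illustrative; a real service would read its inputs from the request body):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// An RPC-flavoured HTTP endpoint: a POSTed action, not a stored resource.
class PriceCalculator {
    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/calculate-price", exchange -> {
            // A real service would parse inputs and call the other microservices here.
            byte[] body = "{\"price\":42.0}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

A client POSTs its inputs to /calculate-price and gets the computed result back; no resource is created or stored, which is exactly the RPC flavour the question describes.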

REST API along with Websocket API [closed]

Closed 6 years ago.
I have to create an application with a permanent connection (I need to send data updates from the server) and, in parallel, I need to configure this app. My idea was to use Socket.IO for the connection and use it also for the configuration, with specific event names.
But someone said that it's better to keep Socket.IO only for sending data from the server and use a REST API to configure the app.
I want to know if using a REST API alongside a WebSocket connection is a good practice or not, and if not, why.
You can always provide a REST API along with a WebSocket API for different purposes. It's up to your requirements and depends on what you want to achieve.
For instance, you can use a WebSocket API to provide real-time notifications while the REST API can be used to manage resources.
There are a few details you should be aware of:
REST is a protocol-independent architectural style, frequently implemented over the HTTP protocol, and it's meant to be stateless.
In HTTP, the communication is driven by the client: the client requests and the server responds.
WebSocket is a bi-directional, full-duplex and persistent connection protocol, hence it's stateful.
In WebSockets, once the communication is established, both client and server can exchange frames in no particular order.
Just to mention one example of application that provides different APIs: Stack Exchange provides a REST API along with a WebSocket API.