Caching Web Client in Vert.x

This is the architecture I am trying to build: I first check the Redis cache and, in case of a cache miss, go to the downstream service. I found the caching web client in Vert.x 4.2.0 and newer versions, but does it allow us to communicate with a Redis cache, or does it work with an in-memory cache only? Any suggestions on how to achieve this are welcome.
Architecture

Currently, the Vert.x Web Client doesn't have any Redis-based caching implementation, yet nothing really stops one from quickly building it.
The important bit is the interface io.vertx.ext.web.client.spi.CacheStore. A custom implementation that stores data in Redis would suffice to complete your architecture.
Currently, the Vert.x Web Client offers a local in-memory cache and a SharedData-based cache, useful for sharing data across verticles.
If one followed the shared-data implementation as a blueprint, adding Redis support would make a good first-contribution project.
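The cache-first flow the question describes (check the cache, fall back to the downstream service on a miss, then populate the cache) can be sketched independently of the Vert.x SPI. In this sketch a ConcurrentHashMap stands in for Redis and a plain function for the HTTP call; all names are illustrative assumptions, not Vert.x API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Sketch of a cache-first client: the "Redis" is a map, the "downstream service" a function. */
public class CachingClient {
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    private final Function<String, String> downstream;                   // stand-in for the HTTP call
    int downstreamCalls = 0; // exposed so the miss/hit behaviour is observable

    public CachingClient(Function<String, String> downstream) {
        this.downstream = downstream;
    }

    public String get(String key) {
        String hit = cache.get(key);          // 1. look in the cache first
        if (hit != null) return hit;          // 2. cache hit: done, no downstream call
        downstreamCalls++;
        String fresh = downstream.apply(key); // 3. miss: call the downstream service
        cache.put(key, fresh);                // 4. populate the cache for next time
        return fresh;
    }
}
```

A real CacheStore implementation would do the same steps asynchronously against Redis, returning futures instead of blocking.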

Related

Redis read-through implementation & tooling example

I am a beginner with Redis. I am interested in introducing Redis into my system to turn it into a small-scale microservice setup. I have read about the high-level concept of the read-through cache strategy and have the following picture in mind.
Let me explain briefly: I will have 2 (or more) domain-driven microservices (Payment, Customer) responsible for UPDATING data (i.e. the "command" part in CQRS) in their isolated PostgreSQL DB schemas. For the QUERY part, I would like all "GET" API requests from my mobile app to fetch data from Redis using the read-through strategy, by having some kind of "PG to Redis converter" behind the scenes (label (6) in the picture).
This is what I think the read-through cache is about, as far as I understand it. But when I search for an example of this kind of converter to integrate with my NodeJS or Java REST API, I cannot find one. Most examples I can find only talk about the concept, and the ones that show an implementation turn out to be more like the cache-aside strategy.
Please suggest which tools to use for such a converter, or whether it can be configured directly in Redis itself (e.g. using a Lua script?). It would be great if it could be done serverless using an AWS service, but that is not necessary. Thank you.
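The distinction the question circles around is where the loading code lives: in read-through, the cache itself owns the loader and callers never talk to the database directly; in cache-aside, the application does the lookup-and-fill. A minimal Java sketch of the read-through shape, with a plain map standing in for Redis and a function standing in for the PostgreSQL query (both illustrative assumptions):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Read-through cache: the cache owns the loader, so callers only ever ask the cache. */
public class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>(); // stand-in for Redis
    private final Function<K, V> loader;                       // stand-in for a PostgreSQL query

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent invokes the loader only on a miss, then caches the result
        return store.computeIfAbsent(key, loader);
    }
}
```

In cache-aside, by contrast, the `get`/miss/load/put sequence would live in each API handler rather than behind the cache's interface.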

Communication between two applications on the same device

There are 2 applications that need to communicate with each other. They are both running on the same PC.
Main Application (C#)
Helper Application (C#) -> launched from Main Application
Helper Application will modify some data used/contained by the Main Application. Can the helper application be a microservice? (I'm not familiar with microservices, but I've seen the term while searching the net.)
I found a helpful tutorial and was able to create a WCF Duplex Binding.
Now the Main Application and Helper Application can communicate.
I'm just wondering if this is a good solution (or whether a microservice is better?).
Can the helper application be a microservice? (not familiar with microservices...
Sure. "Microservices" is just the latest term describing distributed component-based network computing. It goes back a long way to the days of distributed COM (DCOM) and CORBA (and possibly further), then COM+ and finally service-oriented architecture (SOA). WCF used SOA as a best practice. In practice, the only real difference between SOA and microservices is that the latter tend to adopt HTTP-REST-JSON as the transport/API/payload, whereas the SOA generation is transport/payload neutral but generally used SOAP.
I found a helpful tutorial and was able to create a WCF Duplex Binding. Now the Main Application and Helper Application can communicate. I'm just wondering if this is a good solution (or a microservice is better??)
Well, technically you are already using microservices/SOA.
I'm just wondering if this is a good solution
No. The problem with SOA/microservices on the same machine is that they are very chatty, have high overhead, and their message payloads are quite verbose. Both SOAP and REST use text messages by default (XML and JSON respectively), which are large compared to binary.
If both client and server are on the same machine, you are best off using straight-up named pipes and avoiding WCF/REST. Communication over named pipes is binary and therefore very compact; named pipes run in kernel mode, meaning they are very fast, and as an added bonus, when communicating locally they bypass the network layer entirely (as opposed to, say, TCP, which goes through it even for LOCALHOST).
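The question is about C#, but the kernel-local idea generalizes. As an illustrative sketch (not WCF, and names are assumptions), here is the same round trip in Java (JDK 16+) over a UNIX-domain socket, the POSIX cousin of a named pipe, which likewise bypasses the TCP/IP stack for local communication:

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

/** Two sides on one machine talking over a UNIX-domain socket (kernel-local, no network layer). */
public class LocalIpc {
    /** Starts a tiny echo "helper", sends msg from the "main application" side, returns the reply. */
    public static String roundTrip(String msg) throws Exception {
        Path sockFile = Files.createTempDirectory("ipc").resolve("helper.sock");
        UnixDomainSocketAddress addr = UnixDomainSocketAddress.of(sockFile);
        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(addr);
            // Helper side: accept one connection and echo back whatever arrives
            Thread helper = new Thread(() -> {
                try (SocketChannel ch = server.accept()) {
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    ch.read(buf);
                    buf.flip();
                    ch.write(buf);
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            helper.start();

            // Main-application side: connect, send, read the reply
            try (SocketChannel ch = SocketChannel.open(addr)) {
                ch.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));
                ByteBuffer reply = ByteBuffer.allocate(1024);
                ch.read(reply);
                reply.flip();
                helper.join();
                return StandardCharsets.UTF_8.decode(reply).toString();
            }
        } finally {
            Files.deleteIfExists(sockFile);
        }
    }
}
```

The payload here is raw bytes rather than XML/JSON, which is the compactness advantage the answer refers to.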

Efficient architecture/tools for implementing async web API

Consider an event-driven, microservice-based web application that should expose some async web APIs. AFAIK, the suggested way to achieve an async HTTP request/response cycle is to respond to each API call with, say, a 202 Accepted status code and a Location header that lets the caller retrieve the results later.
This way, we have to generate a unique ID (like a UUID or GUID) for each request and store that ID, plus all related future events, in persistent storage so the API caller can track the progress of its request.
My question is how this API layer should be implemented, considering we may have tens or hundreds of thousands of requests and responses per second. What are the most efficient architecture and tools for building such an API under this load?
One way could be storing all the requests and related events in both a database and a cache like Redis (just for a limited time, say 30 minutes).
Is there any better pattern/architecture/tools? How have big companies and websites solved this issue?
Which database would be better for this scenario (MongoDB, MySQL, …)?
I really appreciate any useful answer, especially if you have production experience.
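The 202-Accepted flow the question describes can be sketched with an in-memory store standing in for the Redis/database layer; all names here are illustrative assumptions, not a production design:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal sketch of the 202-Accepted async API pattern with an in-memory job store. */
public class AsyncJobs {
    public enum Status { PENDING, DONE }
    public record Job(Status status, String result) {}

    private final Map<String, Job> store = new ConcurrentHashMap<>();

    /** POST handler: register the request, return the id the caller will poll.
     *  The HTTP layer would respond "202 Accepted" with "Location: /jobs/{id}". */
    public String accept() {
        String id = UUID.randomUUID().toString();
        store.put(id, new Job(Status.PENDING, null));
        return id;
    }

    /** Called by the event-driven worker once processing finishes. */
    public void complete(String id, String result) {
        store.put(id, new Job(Status.DONE, result));
    }

    /** GET /jobs/{id} handler: let the caller track progress. */
    public Job poll(String id) {
        return store.get(id);
    }
}
```

At the loads mentioned in the question, the map would be replaced by Redis with a TTL (for hot polling) backed by a durable database, exactly as the question proposes.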
Very valid question! From an architecture and tooling point of view, you should check out Zipkin, an open distributed tracing system tried and tested by Twitter. Especially if you have a microservice architecture, it is really useful for tracking down all your requests/responses. Its storage options include in-memory, JDBC (MySQL), Cassandra, and Elasticsearch.
If you are using Spring Boot for your microservices, then it is easily pluggable.
Even if you are not totally convinced by Zipkin, its architecture is worth looking into. Speaking from production experience, I have used it and it was really useful.

Distributed services sharing same data schema

We are building a Spring Boot based web application consisting of a central API server and multiple remote workers. We tried to embrace the idea of microservices, so each component is built as a separate Spring Boot app. But they share a lot of the same data schema for the objects handled across components, so the JPA model definitions are currently duplicated in each component's project. Every time something changes, we need to remember to change it everywhere, which results in poor compatibility between different versions of the components.
So I'm wondering: is this the best we can do here, or are there better ways to manage the components' code in such a scenario?
To be more specific, we use both MySQL and Redis for data storage, and both are accessed by all components. Redis actually serves as one means of data communication between components.

Play Framework to build an application with no UI that needs to accept requests via REST and IPC and/or message queues

I have to build a component that runs in a JVM, uses MongoDB as its database, and doesn't need a UI. It will be integrated into other products. I'm planning to build this using Scala and related tools.
My first thought is to just expose a REST API and let other products integrate using that API. While this is acceptable for some products, it isn't for others, for performance reasons. So I have to enable other components to communicate with this one using either HTTP, IPC, or message queues. How can I achieve this without much duplication of business logic?
Would the Play Framework be the right choice for this even though there is no UI involved and there is a need to accept messages via HTTP, IPC, or message queues?
Using Play for that is OK, but there are frameworks better suited to what you are planning to do; as you already said, Play has a lot of support for frontend features you don't need.
It will not affect the runtime speed so much as the time you will need for programming, compiling, building, and deployment.
There are some frameworks that might fit your needs better:
Scalatra: nice, easy to use, integrates well with the JavaEE stack (http://www.scalatra.org/)
Finatra: cool if you have the Twitter stack running; metrics and other goodies almost for free (http://finatra.info/)
Skinny Framework: looks nice, never tried it myself
Spray: cool features to come, a little elitist
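On the "without much duplication of business logic" part of the question above: whichever framework is chosen, a common approach is to keep the business logic transport-neutral and wrap it in thin adapters, one per channel. A minimal sketch in Java (the thread is Scala-oriented, but the shape is the same; all names are illustrative assumptions):

```java
import java.util.function.Consumer;

/** Ports-and-adapters sketch: one business-logic core, several thin transport adapters. */
public class TransportNeutralService {
    /** The core logic, written once, with no HTTP/IPC/queue dependencies. */
    public static String handle(String command) {
        return "processed:" + command;
    }

    /** REST adapter: an HTTP controller would parse the request body and call the core. */
    public static String restAdapter(String body) {
        return handle(body);
    }

    /** Queue adapter: a message consumer would call the core for each message
     *  and publish the result via the reply callback. */
    public static void queueAdapter(String message, Consumer<String> reply) {
        reply.accept(handle(message));
    }
}
```

Each new transport (an IPC socket, another queue) then costs only a small adapter, never a second copy of the business rules.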