Redis read-through implementation & tooling example - PostgreSQL

I am a beginner with Redis. I am interested in introducing Redis into my system to turn it into a small-scale microservice setup. I have read about the high-level concept of the read-through cache strategy and have the following picture in mind.
Let me explain briefly: I will have two (or more) domain-driven microservices (Payment, Customer) responsible for updating (i.e. the "command" part of CQRS) data in their isolated PostgreSQL DB schemas. For the query part, I would like all "GET" API requests from my mobile app to fetch data from Redis using the read-through strategy, by having some kind of "PG to Redis converter" behind the scenes (label (6) in the picture).
This is what I understand read-through caching to be about. However, I have been searching for an example of this kind of converter to integrate with my Node.js or Java REST API and cannot find one. Most examples I can find only discuss the concept, and the ones that show an implementation turn out to be closer to the cache-aside strategy.
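To illustrate, the implementations I keep finding look roughly like the sketch below (a hypothetical Node.js snippet, assuming the node-redis v4 and pg packages; connection strings, keys and the query are placeholders). Here the application code itself checks the cache and falls back to the database, which to my understanding is cache-aside rather than a true read-through where the cache layer loads the data on a miss by itself:

```js
// What most "read-through" examples actually show: application-managed caching.
// Assumptions: node-redis v4 and pg; connection strings, keys and query are placeholders.
const { createClient } = require('redis');
const { Pool } = require('pg');

const redis = createClient({ url: 'redis://localhost:6379' }); // call `await redis.connect()` once at startup
const pg = new Pool({ connectionString: 'postgres://localhost/customers' });

async function getCustomer(id) {
  const key = `customer:${id}`;

  const cached = await redis.get(key);
  if (cached !== null) return JSON.parse(cached); // cache hit

  // Cache miss: the *application* (not the cache) loads from PostgreSQL and back-fills Redis.
  const { rows } = await pg.query('SELECT * FROM customers WHERE id = $1', [id]);
  await redis.set(key, JSON.stringify(rows[0]), { EX: 60 }); // expire after 60 seconds
  return rows[0];
}
```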
Please suggest which tools to use for such a converter, or whether this can be configured directly in Redis itself (e.g. using a Lua script?). It would be great if it could be done serverlessly using an AWS service, but that is not necessary. Thank you.

Related

How can event-driven architecture be applied to this example?

I am unsure how to make use of event-driven architecture in real-world scenarios. Let's say there is a route planning platform consisting of the following back-end services:
user-service (manages user data and roles)
map-data-service (roads & addresses, only modified by admins)
planning-tasks-service (accepts new route planning tasks, keeps track of background tasks, stores results)
The public website will usually request data from all 3 of those services. map-data-service needs information about user-roles on a data change request. planning-tasks-service needs information about users, as well as about map-data to validate new tasks.
Right now those services would just make a sync request to each other to get the needed data. What would be the best way to translate this basic structure into an event-driven architecture? Can dependencies be reduced by making use of events? How will the public website get the needed data?
Cosmin is 100% correct in that you need something to do some orchestration.
One approach to take, if you have a client that needs data from multiple services, is the Experience API approach.
Clients call the experience API, which performs the orchestration - pulling data from different sources and providing it back to the client. The design of the experience API is heavily, and deliberately, biased towards what the client needs.
Based on the details you've said so far, I can't see anything that cries out for event-based architecture. The communication between the client and ExpAPI can be a mix of sync and async, as can the ExpAPI to [Services] communication.
And for what it's worth, putting all of that on an API gateway is not a bad idea, in that they are designed to host APIs and therefore provide the desirable controls and observability for managing them.
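To make that concrete, here is a hypothetical sketch of such an experience API in Node.js/Express (service URLs, routes and the response shape are invented; Node 18+ is assumed for the global fetch):

```js
// Experience API sketch: one client-facing endpoint orchestrates the three backing services.
const express = require('express');
const app = express();

app.get('/overview/:userId', async (req, res) => {
  try {
    // Fan out to the services in parallel; the client only ever makes this one call.
    const [user, mapSummary, tasks] = await Promise.all([
      fetch(`http://user-service/users/${req.params.userId}`).then(r => r.json()),
      fetch('http://map-data-service/summary').then(r => r.json()),
      fetch(`http://planning-tasks-service/tasks?user=${req.params.userId}`).then(r => r.json()),
    ]);

    // Shape the response around what the client screen needs, not around the services.
    res.json({ user, mapSummary, openTasks: tasks });
  } catch (err) {
    res.status(502).json({ error: 'an upstream service was unavailable' });
  }
});

app.listen(3000);
```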
Update based on OP Comment
I was really interested in how an event-driven architecture could reduce dependencies between my microservices, as it is often stated
Having components (or systems) talk via events is sort-of the asynchronous equivalent of Inversion of Control, in that the event consumers are not tightly-coupled to the thing that emits the events. That's how the dependencies are reduced.
One thing you could do would be to do a little side-project just as a learning exercise - take a snapshot of your code and do a rough-n-ready conversion to event-based and just see how that went - not so much as an attempt to event-a-cise your solution but to see what putting events into a real-world solution looks like. If you have the time, of course.
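As a tiny, contrived illustration of that decoupling (an in-process EventEmitter stands in for a real broker, and all names are made up):

```js
// The emitter is a stand-in for a message broker; in production you'd use Kafka, RabbitMQ, etc.
const { EventEmitter } = require('events');
const bus = new EventEmitter();

// user-service publishes a fact about what happened; it knows nothing about who listens.
function createUser(user) {
  // ... persist the user in user-service's own database ...
  bus.emit('userCreated', { id: user.id, roles: user.roles });
}

// planning-tasks-service keeps its own local copy of the data it needs,
// instead of calling user-service synchronously on every new task.
const knownUserRoles = new Map();
bus.on('userCreated', (event) => {
  knownUserRoles.set(event.id, event.roles);
});

createUser({ id: 'u1', roles: ['planner'] });
console.log(knownUserRoles.get('u1')); // ['planner']
```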
The missing piece in your architecture is the API Gateway, which should be the only entry-point in your system, used by the public website directly.
The API Gateway would play the role of an orchestrator: it decides which services to route the request to, and it also assembles the final response needed by the frontend.
For scalability purposes, the communication between the API Gateway and individual microservices should be done asynchronously through an event-bus (or message queue).
However, the most important step in creating a scalable event-driven architecture which leverages microservices, is to properly define the bounded contexts of your system and understand the boundaries of each functionality.
More details about this architecture can be found here
Event storming is the first thing you need to do to identify domain events (a change in state in your system), for example 'userCreated', 'userModified', 'locationCreated', 'routeCreated', 'routeCompleted', etc. Then you can define topics that manage these events. Interested parties can consume these events by subscribing to the published events (via topics/channels) and then act accordingly. An implementation of an event-driven architecture is often composed of loosely coupled microservices that communicate asynchronously through a message broker like Apache Kafka. The free EDA book is an excellent resource for learning most of these things about EDA.
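As an example, with a broker like Kafka a 'userCreated' event could be published and consumed roughly like this (a sketch assuming the kafkajs client; broker address, topic and payload shape are illustrative):

```js
// Sketch using the kafkajs client; broker address, topic and payload are assumptions.
const { Kafka } = require('kafkajs');
const kafka = new Kafka({ clientId: 'user-service', brokers: ['localhost:9092'] });

// Producer side: user-service publishes a domain event after a state change.
async function publishUserCreated(user) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'user-events',
    messages: [{ key: user.id, value: JSON.stringify({ type: 'userCreated', ...user }) }],
  });
  await producer.disconnect();
}

// Consumer side: planning-tasks-service reacts only to the events it cares about.
async function runConsumer() {
  const consumer = kafka.consumer({ groupId: 'planning-tasks-service' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'user-events' });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      if (event.type === 'userCreated') {
        // update this service's local view of users here
      }
    },
  });
}
```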
Tutorial: Event-driven architecture pattern

Why does a set of REST APIs not constitute microservices?

I wrote a REST API in Flask using flask-restplus that helps a frontend UI in JavaScript interact with a backend in Python. Some of the REST endpoints interact with the REST API provided by GitLab, cache the responses, and return data to the client from the cache instead.
Why can such an implementation not be classified as microservices?
Microservices are really a set of guidelines that help you develop scalable, maintainable applications in a fast-changing world. They are nothing more than that. It should not worry you whether someone else calls it a microservice or not. If you are building something and you took into consideration how to divide one big application into defined boundaries with loose coupling between them, such that each service does one thing for you and can be built, scaled, and modified independently without breaking contracts, you have followed the basic guidelines.
Microservices are about the whole architecture and not just about a couple of services. If you have just one service that provides REST endpoints for some other monolith to consume, then overall it is still a monolith. If it is dealing with a set of other services instead, then it may be a set of services that can be called microservices.

What is the real benefit of using GraphQL?

I have been reading articles on the web about the benefits of GraphQL, but so far I have not been able to find a single benefit of it.
The most common benefits mentioned in those articles are below:
No Overfetching with GraphQL.
Reducing number of calls made from client side.
Data Load Control Granularity
Evolve your API without versions.
Those all make sense, but it is not GraphQL itself that provides these benefits. Any second-layer API written in Java, Python, or any other language would be able to provide them too. It is basically introducing another layer of abstraction above the data retrieval systems, REST or whatever, and decoupling the client side from that layer. Once you do that, everything you can do with GraphQL can also be done in any other language.
Anyone can implement, say, a Scala server that retrieves the data from various APIs, integrates it, creates objects internally, and feeds the client with only the relevant part of the data, with total control over that data. This API can easily be versioned and released accordingly. Considering how cumbersome GraphQL's syntax is and how difficult it is to build a good cache around it, I can't see why you would really use it.
So the overall question is: are there any benefits that GraphQL provides to the application because of GraphQL itself, and not because you implement another layer of abstraction between your applications and your APIs?
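(For reference, the "no overfetching" point in those articles usually boils down to a request like the one below: a sketch against a hypothetical GraphQL endpoint, with types and field names invented for illustration.)

```js
// Hypothetical client call: the query names exactly the fields the screen needs.
const query = `
  query {
    user(id: "42") {
      name
      orders(last: 5) {
        id
        total
      }
    }
  }
`;

async function loadUserSummary() {
  const response = await fetch('https://api.example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  return data.user; // only `name` and the last five orders ever cross the wire
}
```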
Best practices known as REST existed earlier, too.
GraphQL is more standardized than REST, safer (no injections), and its syntax gives great flexibility in the area of quickly changing client needs.
It's just a good standard of best practices.
I feel GraphQL is another example of overengineering. I would say the best standards and practices are "keeping it simple."
Breaking down an object and building a custom one before sending it to the client is very basic.

ArangoDB Foxx as a REST back-end

I am working on an app that would greatly benefit from ArangoDB's multi-model capabilities. Considering the app's back-end needs, I have concluded that most, if not all, of it could be served through a REST API, to aid cleaner design for future development and integration with others. The API would then be consumed by several web and mobile front-end frameworks to handle the rest of the logic. The project will be developed with JavaScript for the whole stack, using the Node.js ecosystem.
The question itself:
Should and could one use ArangoDB + Foxx to create the complete back-end stack for serving a REST API, thus avoiding another layer/component in the stack (e.g. Express/hapi/LoopBack)?
Major back-end requirements:
Authentication with roles
Sessions
Encryption
Complex querying (root of my initial thought, as to avoid multiple hops between DB and back-end)
Entry parsing, validation and sanitization
Scheduled tasks
Mainly looking for:
Known design advantages
Known design limitations
"Hidden" bottlenecks
Other possible future regrets
Side question (that might answer some of the above): could Foxx utilise some of the Node middleware available via npm?
Thanks in advance for your time!
You can use ArangoDB Foxx as the sole backend of your application, however it is important to keep the limitations of Foxx (compared to a general purpose JS environment like Node.js) in mind when doing this.
You mention encryption. While ArangoDB does support some cryptography (e.g. HMAC signing and PBKDF2 key derivation for passwords) the support is not as exhaustive and extensible as in Node.js. Also when using computationally expensive cryptography this will affect the performance of the database (because unlike Node.js Foxx is strictly synchronous and thus all operations should be considered blocking).
ArangoDB does not support role-based authentication out of the box but it is perfectly reasonable to implement it within ArangoDB using Foxx (just like you would implement it in Node.js, except you don't need to leave the database).
For sessions there are generally two possible approaches: you can either use a collection with session documents (using ArangoDB as your session backend) or you can keep your services stateless by using signed tokens (Foxx comes with JWT support out of the box).
Complex/stored queries and input validation (using the joi schema library originally written for hapi) are actually some of the main use cases of Foxx so those shouldn't be any problem whatsoever.
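For illustration, a minimal endpoint using the router API that ships with Foxx in ArangoDB 3 could look like this (collection, route and field names are invented):

```js
'use strict';
// Sketch of a Foxx endpoint with joi validation; the 'routes' collection and its fields are hypothetical.
const createRouter = require('@arangodb/foxx/router');
const joi = require('joi');
const { db, aql } = require('@arangodb');

const router = createRouter();
module.context.use(router);

router.get('/users/:id/routes', function (req, res) {
  // The query runs inside the database, so there is no extra hop between app and DB.
  const routes = db._query(aql`
    FOR r IN routes
      FILTER r.owner == ${req.pathParams.id}
      LIMIT ${req.queryParams.limit}
      RETURN r
  `).toArray();
  res.send(routes);
})
  .pathParam('id', joi.string().required(), 'Key of the user document')
  .queryParam('limit', joi.number().integer().min(1).max(100).default(20), 'Maximum number of results')
  .summary('List routes owned by a user');
```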
Foxx comes with its own mechanism for queueing tasks, which can also be scheduled ahead or recur periodically. However depending on your requirements an external job or message queue may be a better fit. The good thing is you can get started with the built-in job queue right away and still move on to a dedicated solution if the need arises during development.
As for middleware and NPM packages: Foxx is not fully compatible with Node.js code. While we provide a lot of compatibility code and try to keep the core modules compatible where possible, a big difference is that Node.js is generally used to perform asynchronous operations while in ArangoDB all operations are synchronous.
If you have Node.js modules that don't use crypto, file or network I/O and don't use asynchronous APIs (e.g. setTimeout, promises) they may be compatible with Foxx. A lot of utility libraries like lodash work with no problems at all. Even if you find that a module doesn't work it may be possible to write an adapter for it like we have done with mocha (integrated into Foxx) and GraphQL (via the graphql-sync package on NPM).
In my experience it is a good approach to put your Foxx service behind a thin layer of Node.js (e.g. a simple express application that mostly just proxies to your Foxx API) and/or to delegate some parts of your backend to standalone Node.js microservices (e.g. integration with non-HTTP services like e-mail or LDAP) which can be integrated in Foxx via HTTP.
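A minimal version of that thin layer could look like this (a sketch assuming Express and the http-proxy-middleware package; the database name, mount path and port are placeholders):

```js
// Thin Node.js proxy in front of a Foxx service; Foxx stays the actual backend.
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Forward /api/* to the Foxx service mounted at /myservice in database mydb.
app.use('/api', createProxyMiddleware({
  target: 'http://localhost:8529/_db/mydb/myservice',
  changeOrigin: true,
}));

// Anything that doesn't fit Foxx (e-mail, LDAP, long-running async work, ...) can live here.

app.listen(3000);
```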
One more thing: while a lot of existing express middleware likely isn't compatible with Foxx because of Node-specific dependencies and async logic, ArangoDB 3 will bring a new version of Foxx with support for middleware using a functionally express-compatible API.
I'm just starting to port my Sails application to a Foxx application, so I can answer some of your questions.
Role-based authorization in ArangoDB is probably at a higher level than you want. In our case, we use an external service to authorize various web and service-based applications at a very fine-grained level (much lower than a vertex or an edge). My feeling is that authorization at that level will require you to write it yourself in JavaScript. If it's just CRUD on a per-collection basis, then it shouldn't require much effort.
For authorization and sessions, I would look at the Foxx example found at: Foxx authorization-session example
It's not clear what you're asking about encryption. If you're talking about SSL connections, then that is natively supported (see ArangoDB endpoints). As for internal encryption, there is a JavaScript crypto module: ArangoDB crypto.
Entry validation, etc. is supported by the JavaScript joi package.
Complex querying... Absolutely and getting even better in ArangoDB version 3.x. Traversals can be chained (go down using one edge collection, then up using another).
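For example (a sketch with invented collection names, using the ArangoDB 3.x traversal syntax, where 'owns' and 'locatedAt' are edge collections):

```js
// Chained traversal: first follow 'owns' edges out of a user, then 'locatedAt' edges back in.
const { db, aql } = require('@arangodb');

function placesForUser(userKey) {
  return db._query(aql`
    FOR route IN 1..1 OUTBOUND ${'users/' + userKey} owns
      FOR place IN 1..1 INBOUND route locatedAt
        RETURN DISTINCT place
  `).toArray();
}
```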
You're right on the ball when thinking about efficiency. This is the main reason we're going from Sails to Foxx. In our case, we filter query results based on permissions from our external service. This means that we can't use ArangoDB's native skip and limit support if these attributes are specified by the client. In Sails, we have to bring back results in chunks and collect until we hit the appropriate skip and limit values. By moving to Foxx, we save a lot of network and other resources. We tested this by having Sails forward the request to our prototype Foxx implementation. This scaled much better than the Sails post-processing setup.
You can use npm modules, with restrictions. See JavaScript Modules

How to provide a REST API into 3rd Party data?

I use OmniFocus a ton and I'd really like to be able to connect my data there to other things (Zapier, IFTTT, Beeminder, etc.). There's a lot of support for putting data into OmniFocus through these services, but I can't find any support for getting data out of OmniFocus.
In thinking about this, I realized my question isn't really about OmniFocus but rather about building a connector to a service that I don't own. So this is my scenario:
I have data on some publicly accessible web service (in the case of OF, it's Dropbox)
I want to build and host some sort of application that accesses that data and parses it and then provides a REST API that other servers can then query.
Ideally I'd like to make this service available to others - this seems tricky because they have to somehow enable my application to read their data.
I'm a fairly experienced software dev but I have zero experience with web applications or cloud applications. I'm not looking for a super in-depth answer here, but more of a general sketch of how this would work (or a confirmation that this really isn't feasible).
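(Very roughly, what I picture is something like the sketch below: a hypothetical Express app that pulls the shared file, parses it, and re-exposes it as REST. The source URL, the parser and the response shape are all placeholders, not the real OmniFocus/Dropbox details.)

```js
// Hypothetical connector: fetch the third-party data, parse it, expose it as REST.
// The source URL, parsing step and response shape are placeholders.
const express = require('express');
const app = express();

const SOURCE_URL = 'https://example.com/shared/ofocus-export'; // placeholder, not a real Dropbox endpoint

function parseOmniFocusData(raw) {
  // Stand-in parser; the real file format would need proper handling.
  return raw.split('\n').filter(Boolean).map((line, i) => ({ id: i, title: line }));
}

async function loadTasks() {
  const res = await fetch(SOURCE_URL); // Node 18+ global fetch assumed
  const raw = await res.text();
  return parseOmniFocusData(raw);
}

app.get('/tasks', async (req, res) => {
  try {
    res.json(await loadTasks());
  } catch (err) {
    res.status(502).json({ error: 'could not read source data' });
  }
});

app.listen(3000);
```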