How can I share information from my microservice A's Postgres DB with microservice B - postgresql

I have created four microservices. The first service only handles the Registration & Login module (A), the second service has the Post & Comment module (B), the third service has the Rating & Review module (C), and the fourth is the Admin module (D).
Problem
All microservices have their own database, but service B depends on service A's DB, service C depends on A's and B's DBs, and service D depends on services A, B and C. I'm using a Postgres DB for services A, B and C.
Option 1.
I can use a JDBC connection factory and connect service B to service A's DB, but this is not good practice because if service A changes a column then we have to change the service B module.
Option 2.
I can create a hot-standby replica of service A's database for service B, but the problem here is that a hot-standby replica is read-only, so I can't perform updates and deletes.

You should design your microservices so they don't depend on other microservices; otherwise it looks like a distributed monolith. It doesn't matter whether the dependency is established at the microservice level or through any kind of database linking, as both of your options suggest.
IMHO the clean solution is:
think again about whether you really need such granularity
if yes, then for each database, declare all the entities needed by that particular microservice. Duplication is not a problem - if module B (posts) needs user data, let it have its own copy of the users table rather than a link to module A
connect the microservices with a reliable messaging system (e.g. Kafka) in which an event in one microservice propagates to listeners in other microservices and lets them update their own data models; a sketch of such a listener is below
There is a lot of redundancy in this model, but it's robust and definitely closer to a truly distributed system. We successfully use it in our large fintech platform.
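Here is a minimal sketch of such a listener in service B (not the answerer's actual code): it consumes user events that service A publishes to Kafka and upserts them into B's own copy of the users table. The topic name, event JSON layout, connection strings and table schema are all assumptions made for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class UserEventsListener {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");   // assumed broker address
        props.put("group.id", "service-b-posts");       // one consumer group per microservice
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/service_b", "service_b", "secret")) {
            consumer.subscribe(List.of("user-events")); // topic published by service A (assumed name)
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    JsonNode event = mapper.readTree(record.value());
                    // Upsert the user into service B's local copy of the users table.
                    try (PreparedStatement ps = db.prepareStatement(
                            "INSERT INTO users (id, name, email) VALUES (?, ?, ?) " +
                            "ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, email = EXCLUDED.email")) {
                        ps.setLong(1, event.get("id").asLong());
                        ps.setString(2, event.get("name").asText());
                        ps.setString(3, event.get("email").asText());
                        ps.executeUpdate();
                    }
                }
            }
        }
    }
}
```

Service B never queries A's database directly; it only reacts to published events, so a schema change in A stays behind A's event contract.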

Related

How the deployment strategy works for Multiple Instances of Same Microservice

Let's say I have 5 microservices and each microservice has 3 instances. To deploy these microservices, do we need 15 different servers, one per instance?
So in a large-scale application, let's say I have 100 microservices and each microservice has 3 instances running; in that case do I need 300 servers to deploy all the microservice instances?
Please correct me if I'm wrong on this.
There is no rule that says 1 microservice instance = 1 physical server.
A microservice only needs to know about its own presence and its own data.
If one service wants to hand a task to another service, it has to call that service via an endpoint, for example over HTTP or via a message bus.
Given this, you can have one big server that hosts all the services.
If you want all the services belonging to one instance, or to one customer, to live on a single server, that is also possible.
Here you have to compute how many resources each service and each of its instances require; that will be the driving factor in deciding this strategy. A rough capacity calculation is sketched below.
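To make the 100-services example concrete, here is a back-of-the-envelope calculation. All resource figures are made up for illustration; the point is that 300 instances pack onto roughly 10 servers, not 300.

```java
// Hypothetical sizing: 100 services x 3 instances, each instance needing
// ~0.5 CPU cores and 0.5 GB RAM, packed onto servers with 16 cores / 64 GB.
public class CapacityEstimate {
    public static void main(String[] args) {
        int instances = 100 * 3;                               // 300 instances, not 300 servers
        double coresNeeded = instances * 0.5;                  // 150 cores
        double ramNeededGb = instances * 0.5;                  // 150 GB
        int serversByCpu = (int) Math.ceil(coresNeeded / 16);  // 10 servers
        int serversByRam = (int) Math.ceil(ramNeededGb / 64);  // 3 servers
        System.out.println("Servers needed: " + Math.max(serversByCpu, serversByRam)); // 10
    }
}
```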

Azure Service Fabric Existing Data Migration

I want to migrate an existing Web Application that connects to SQL Server into a Service Fabric solution. My application already has hundreds of thousands of rows of data in multiple tables. I want to create the application from the beginning and use Stateful Services in Service Fabric. How do I transfer all my existing data into the Reliable Collections that the Stateful Services will use?
You'll need to think about a way to partition your existing data first, so you can divide it across multiple Stateful service replicas.
Next, you must deploy the application and feed the data to the right service replica. For example, you can create an API for this, or use Remoting calls from within the cluster; a rough migration sketch is shown below.
Also think of a backup strategy to deal with cluster failures and human error. Store your backup data away from the cluster.
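As an illustration only of the "create an API and push each row to the right partition" approach: the snippet below reads rows from the existing SQL Server tables and posts each one to a hypothetical migration endpoint exposed by the stateful service (for example through a gateway). The endpoint URL, the partition-key scheme and the JSON payload are all assumptions; the real partition key must be derived the same way the Reliable Collections in your Service Fabric application are partitioned.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataMigrator {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        try (Connection sql = DriverManager.getConnection(
                "jdbc:sqlserver://legacy-db;databaseName=Shop;user=migrator;password=secret");
             Statement st = sql.createStatement();
             ResultSet rs = st.executeQuery("SELECT CustomerId, Name, Email FROM Customers")) {
            while (rs.next()) {
                String customerId = rs.getString("CustomerId");
                // Derive the partition key the same way the stateful service does,
                // so each row lands on the replica that owns it (assumed scheme).
                long partitionKey = Math.abs((long) customerId.hashCode());
                String json = String.format(
                        "{\"customerId\":\"%s\",\"name\":\"%s\",\"email\":\"%s\"}",
                        customerId, rs.getString("Name"), rs.getString("Email"));
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://mycluster:8080/api/migration/customers"
                                + "?partitionKey=" + partitionKey))   // hypothetical endpoint
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(json))
                        .build();
                http.send(request, HttpResponse.BodyHandlers.ofString());
            }
        }
    }
}
```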

Cloud Foundry for SaaS

I am implementing a service broker for my SaaS application on Cloud Foundry.
On create-service of my SaaS application, I also create an instance of another service (say service-A), i.e. a new service instance of service-A is created for every tenant that on-boards onto my application.
The details of the newly created service instance (service-A) are passed to my service broker via an environment variable.
To be able to process this newly injected environment variable, the service broker needs to be restaged/restarted.
This means downtime for the service broker for every new on-boarding customer.
I have the following questions:
1) How are these kinds of use cases handled in Cloud Foundry?
2) Why did Cloud Foundry choose to use environment variables to pass the info required to use a service? It seems limiting, as it requires an application restart.
As a first guess, your service could be some kind of API provided to a customer. This API must store the data it is sent in some database (e.g. MongoDB or MySQL). So MongoDB or MySQL would be what you call service-A.
Since you want the performance of the API endpoints for your customers to be independent of each other, you are provisioning dedicated databases for each of your customers, that is for each of the service instances of your service.
You are right in that you would need to restage your service broker if you were to get the credentials to these databases from the environment of your service broker. Or at least you would have to re-read the VCAP_SERVICES environment variable. Yet there is another solution:
Use the CC API to create the services and bind them to whatever app you like. Then use the CC API again to query the bindings of that app; this will include the credentials. Here is the link to the API docs for this endpoint (a small query sketch follows the link):
https://apidocs.cloudfoundry.org/247/apps/list_all_service_bindings_for_the_app.html
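A minimal sketch of that suggestion: query the Cloud Controller for the service bindings of a given app and read the credentials from the response, instead of restaging to pick up VCAP_SERVICES. The API host, app GUID and OAuth token below are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ServiceBindingLookup {
    public static void main(String[] args) throws Exception {
        String apiHost = "https://api.my-cf.example.com";        // placeholder CF API endpoint
        String appGuid = "00000000-0000-0000-0000-000000000000"; // placeholder app GUID
        String token   = System.getenv("CF_OAUTH_TOKEN");        // e.g. taken from `cf oauth-token`

        // GET /v2/apps/:guid/service_bindings (the endpoint documented at the link above)
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiHost + "/v2/apps/" + appGuid + "/service_bindings"))
                .header("Authorization", "bearer " + token)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Each binding's entity contains the credentials exposed by the bound service.
        JsonNode bindings = new ObjectMapper().readTree(response.body()).get("resources");
        for (JsonNode binding : bindings) {
            System.out.println(binding.get("entity").get("credentials"));
        }
    }
}
```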
It sounds like you are not using services in the 'correct' manner. It's very hard to tell without more detail of your use case. For instance, why does your broker need to have this additional service attached?
To answer your questions:
1) Not like this. You're using service bindings to represent data, rather than using them as backing services. Many service brokers (I've written quite a few) need to dynamically provision things like Cassandra clusters, but they keep some state about which Cassandra clusters belong to which CF service in a data store of their own. The broker does not bind to each thing it is responsible for creating.
2) Because 12 Factor applications should treat backing services as attached, static resources. It is not normal to, say, add a new MySQL database to a running application.

spring cloud consul service names

I am switching all my service infrastructure from Eureka to Consul.
In the Eureka case I have multiple services with the same name, and Eureka differentiates them via the Application and its instances.
In the Consul case, if I keep this naming scheme, does Spring Cloud generate unique IDs under the covers?
I read that Consul will use the ID and name synonymously unless you register services under unique IDs.
So you can have service 1 as (name=myservice, id=xxx) and service 2 as (name=myservice, id=yyy).
In that way Consul preserves uniqueness. What does Spring Cloud do under the covers?
OK, so it appears that the question is not clear.
I know that I can specify uniqueness when I define the services, but I don't.
I have a large microservices-based system in production. We have multiples of each microservice for both redundancy and scaling, and we do not specifically set uniqueness on the services.
We don't because Eureka does this for us. Say I have a CustomerAccountService with 5 instances; when I request the customer account service I can see 5 instances. Looking at the Eureka data model, we see one Application and 5 instances of it.
So I am planning on moving to Consul and want to preserve a similar mode of operation: many instances of the same type of service.
What I really want to know is how Spring Cloud Consul registration works under the covers, or whether I have to do something special for this.
I do know that Consul defines a name and an ID, and that they can be the same or different.
So can I have the same name for 5 instances and have the ID vary? If so, how does that happen in the Spring Cloud Consul version of this?
Any application registered with the same spring.application.name in Consul using Spring Cloud will be grouped together, just like with Eureka. A minimal sketch of this setup is below.
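In this sketch every instance sets the same spring.application.name (which becomes the Consul service name), and each instance registers under its own ID, so DiscoveryClient still returns all instances under one name, much like one Eureka Application with several instances. The property values and service name are illustrative, not taken from the question.

```java
// application.yml (per instance), shown as a comment for reference:
//   spring:
//     application:
//       name: customer-account-service
//     cloud:
//       consul:
//         discovery:
//           # optional: make the instance id explicitly unique
//           instance-id: ${spring.application.name}-${random.value}
import java.util.List;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class CustomerAccountApplication {

    private final DiscoveryClient discoveryClient;

    public CustomerAccountApplication(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Lists every registered instance of the service, i.e. all replicas that
    // registered with the same spring.application.name.
    @GetMapping("/instances")
    public List<ServiceInstance> instances() {
        return discoveryClient.getInstances("customer-account-service");
    }

    public static void main(String[] args) {
        SpringApplication.run(CustomerAccountApplication.class, args);
    }
}
```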

Is there any way to know if a CouchDB database is the source of a pull continuous replication?

For my example, let's say we have two servers. Server A creates a continuous pull replication with a local database on Server A. The source of this pull replication is a database on Server B.
I know that Server A can monitor the status of the replication either by the _replicator database if it was created that way or by querying _active_tasks. Nevertheless, is there any way for Server B to know that it is the source of a continuous pull replication, except by monitoring the GET requests?
Even then, since we are using Cloudant as our Server B, monitoring through a proxy is not an option. So if a database on Cloudant is part of a replication not created on the Cloudant server, there is absolutely no way to know it, since it won't show up in Cloudant's _active_tasks - am I correct?
EDIT: I communicated with Samantha Scharr from Cloudant Support, and she said that "making logs available to our clients is a concern that we are working on". This would not be such a problem once that is done.
Thank you,
Paul
There is no such way. For CouchDB, the replication process is not something special to track.
Say you have three instances: A, B and C. CouchDB allows you to run a replication process on A that replicates data from B to C. For instance A the replication process will be explicitly listed in _active_tasks, since the replication runs in a separate Erlang process. But to instances B and C it just looks like some HTTP client calling their public API resources with some payload. They will never know that someone is trying to keep them in sync.
Theoretically, you could write a log parser or proxy that becomes aware of remote replications by analyzing HTTP requests against the replication protocol definition; a rough sketch of the idea is below.
But I fear you would have to make it smart enough not to produce a lot of false-positive matches from regular clients.
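A very rough sketch of that log-analysis idea: scan HTTP access-log lines for request patterns that the replication protocol produces on the source side, such as reads of the replication checkpoint document (GET /{db}/_local/{replication-id}) and continuous _changes feeds. The log file name and regexes are assumptions, and as noted above, ordinary clients can still trigger false positives.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReplicationSourceDetector {

    // Replicators read their checkpoint document from the source database.
    private static final Pattern CHECKPOINT_READ =
            Pattern.compile("\"GET /([^/\\s]+)/_local/([^\\s?]+)");
    // Continuous pull replications keep a continuous _changes feed open.
    private static final Pattern CONTINUOUS_CHANGES =
            Pattern.compile("\"GET /([^/\\s]+)/_changes\\?[^\"]*feed=continuous");

    public static void main(String[] args) throws Exception {
        try (BufferedReader log = new BufferedReader(new FileReader("access.log"))) {
            String line;
            while ((line = log.readLine()) != null) {
                Matcher checkpoint = CHECKPOINT_READ.matcher(line);
                if (checkpoint.find()) {
                    System.out.printf("Possible replication of %s (checkpoint %s)%n",
                            checkpoint.group(1), checkpoint.group(2));
                }
                Matcher changes = CONTINUOUS_CHANGES.matcher(line);
                if (changes.find()) {
                    System.out.printf("Continuous _changes feed on %s - likely a pull replication%n",
                            changes.group(1));
                }
            }
        }
    }
}
```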