Azure Service Fabric Existing Data Migration - azure-service-fabric

I want to migrate an existing Web Application that connects to SQL Server into a Service Fabric solution. My application already has hundreds of thousands of rows of data in multiple tables. I want to create the application from the beginning and use Stateful Services in Service Fabric. How do I transfer all my existing data into the Reliable Collections that the Stateful Services will use?

You'll need to think about how to partition your existing data first, so you can divide it across multiple stateful service partitions.
Next, you must deploy the application and pass the data to the right service partition. For example, you can create an API for this, or use remoting calls from within the cluster.
Also think about a backup strategy to deal with cluster failures and human error. Store your backup data away from the cluster.
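For illustration, the partition-and-push step might look like the following minimal sketch. The Int64 key range 0–9, the `customer_id` field, and the `post` callable (your migration API or remoting call) are all assumptions, not part of any Service Fabric API:

```python
# Hypothetical migration sketch: group existing SQL rows by a Service Fabric
# partition key and hand each batch to a caller-supplied sender.
from collections import defaultdict

def partition_key(customer_id: int, low: int = 0, high: int = 9) -> int:
    """Map a row onto an Int64 partition key in [low, high]."""
    return low + customer_id % (high - low + 1)

def migrate(rows, post):
    """Batch rows per partition key and push each batch via post(key, batch),
    e.g. an HTTP call to a migration API that forwards the batch to the
    stateful service partition owning that key."""
    batches = defaultdict(list)
    for row in rows:
        batches[partition_key(row["customer_id"])].append(row)
    for key, batch in batches.items():
        post(key, batch)
```

The same key function must be used later by the running application, so that reads and writes for a given customer land on the partition that received that customer's rows during migration.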

Related

Possible to deploy or use several containers as one service in Google Cloud Run?

I am testing Google Cloud Run by following the official instruction:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy
Is it possible to deploy or use several containers as one service in Google Cloud Run? For example: DB server container, Web server container, etc.
Short answer: no. You can't deploy several containers in the same service (as you could with a Pod on K8s).
However, you can run several binaries in parallel in the same container; this article was written by a Googler who works on Cloud Run.
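The "several binaries in parallel in one container" workaround can be sketched as a tiny supervisor script used as the container entrypoint. The command lines are placeholders for your own binaries:

```python
# Minimal supervisor sketch: start every command and wait for them all.
# Cloud Run restarts the instance once the container's main process exits,
# so this script should only return when the children are done or dead.
import subprocess
import sys

def run_all(commands):
    """Start each command as a child process; return their exit codes."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]
```

In a real image this would be launched by the Dockerfile's ENTRYPOINT with something like `run_all([["./web-server"], ["./sidecar"]])`; keep in mind that only the process listening on $PORT receives requests.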
In addition, keep in mind
Cloud Run is a serverless product. It scales up and down (to 0) as it wants, especially according to traffic. If startup takes a long time and a new instance of your service is created, the request will take time to be served (and your user will wait).
You pay as you use, meaning you are billed only while HTTP requests are processed. Outside of processing periods, the CPU allocated to the instance is close to 0.
That implies that Cloud Run serves containers that handle HTTP requests. You can't run batch processing in the background, outside of any HTTP request.
Cloud Run is stateless. You have an ephemeral, in-memory writable directory (/tmp), but when the instance goes down, all its data is lost. You can't run a DB server container that stores data. You can interact with external services (Cloud SQL, Cloud Storage, ...), but store only transient files locally.
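As an illustration of the "transient files only" rule, a request handler might stage a file under /tmp, hand it to an external persistence step, and delete it before returning. The `persist` callable (e.g. a Cloud Storage upload) is a placeholder here:

```python
# Sketch: /tmp is writable but in-memory and ephemeral, so anything durable
# must be handed off to an external service before the instance goes away.
import os
import tempfile

def handle_request(data: bytes, persist) -> str:
    """Stage request data as a transient file, persist it externally,
    then clean up. Returns the temporary path that was used."""
    with tempfile.NamedTemporaryFile(dir="/tmp", delete=False) as f:
        f.write(data)
        path = f.name
    persist(path)    # e.g. upload to Cloud Storage (not shown)
    os.remove(path)  # /tmp usage counts against the instance's memory
    return path
```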
To answer your question directly, I do not think it is possible to deploy a service made of two different containers, such as a DB server container and a web server container. (Scaling is a separate matter: a service is automatically scaled to a number of container instances, but each instance runs the same image.)
However, you can deploy a container (a service) that contains multiple processes, although this might not be considered best practice, as mentioned in this article.
Cloud Run takes a user's container and executes it on Google infrastructure, and handles the instantiation of instances (scaling) of that container, seamlessly based on parameters specified by the user.
To deploy to Cloud Run, you need to provide a container image. As the documentation points out:
A container image is a packaging format that includes your code, its packages, any needed binary dependencies, the operating system to use, and anything else needed to run your service.
In response to incoming requests, a service is automatically scaled to a certain number of container instances, each of which runs the deployed container image. Services are the main resources of Cloud Run.
Each service has a unique and permanent URL that will not change over time as you deploy new revisions to it. You can refer to the documentation for more details about the container runtime contract.
As a result of the above, Cloud Run is primarily designed to run web applications. If you are after a microservice architecture consisting of different servers, each running in its own container, you will need to deploy multiple services. I understand that you want to use Cloud Run as a database server, but perhaps you may be interested in Google's managed database solutions, like Cloud SQL, Datastore, Bigtable or Spanner.

How can I share information from my microservice A's Postgres DB with microservice B

I have created four microservices. The first handles the registration and login module (A), the second the post and comment module (B), the third the rating and review module (C), and the fourth the admin module (D).
Problem
All microservices have their own database, but service B depends on service A's DB, service C depends on A's and B's DBs, and service D depends on services A, B, and C. I'm using a Postgres DB for services A, B, and C.
Option 1.
I can use a JDBC connection factory and connect service B to service A's DB. But this is not good practice, because if service A changes its columns, we also have to change service B's module.
Option 2.
I can create hot-standby replicas of the service A and service B databases, but the problem here is that a hot-standby replica is read-only: I can't perform updates or deletes.
You should design your microservices so they don't depend on other microservices; otherwise it looks like a distributed monolith. It makes no difference whether the dependency is established at the microservice level or through some kind of database linking, as both of your options suggest.
IMHO the clean solution is:
think over again whether you really need such granularity
if yes, then for each database, declare all entities needed by that particular microservice. Duplication is not a problem: if module B (posts) needs user data, let it have its own copy of the users table rather than a link to module A.
connect the microservices via a reliable messaging system (e.g. Kafka), in which an event in one microservice propagates to listeners in other microservices, which then update their own data models
There is a lot of redundancy in this model; however, it's robust and definitely closer to a truly distributed system. We successfully use it in our big fintech platform.
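The duplicated-tables-plus-events idea above can be sketched end to end. Here a plain in-process callback list stands in for Kafka so the flow is visible, and the module and field names are invented for illustration:

```python
# In-memory sketch of event-driven duplication. A real system would publish
# UserCreated events to a Kafka topic; Bus is a stand-in for the broker.

class Bus:
    def __init__(self):
        self.handlers = []
    def subscribe(self, handler):
        self.handlers.append(handler)
    def publish(self, event):
        for handler in self.handlers:
            handler(event)

class ServiceA:  # registration/login module: owns the users table
    def __init__(self, bus):
        self.users, self.bus = {}, bus
    def register(self, user_id, name):
        self.users[user_id] = name
        self.bus.publish({"type": "UserCreated", "id": user_id, "name": name})

class ServiceB:  # posts module: keeps its own copy of the users it needs
    def __init__(self, bus):
        self.users = {}
        bus.subscribe(self.on_event)
    def on_event(self, event):
        if event["type"] == "UserCreated":
            self.users[event["id"]] = event["name"]
```

With a real broker, B's copy is eventually consistent with A's table, which is the trade-off this design accepts in exchange for removing the runtime dependency.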

Multi tenant stateful service: service instance per tenant vs partition per tenant

We plan on using a stateful service to act basically as a cache for tenant data that is stored externally. Is there much difference in creating a separate service of the same service type for each tenant vs having one service and a separate partition for each tenant?
You cannot add and remove service partitions on the fly, so using partitions is likely not the way to go if the number of tenants varies.
You'll get the most flexibility and scalability if you use a service or even an application per tenant.

Spring cloud data flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
All hosts except host-A run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deployment. I am not sure what you mean by agents here.
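For reference, setting the instance count at deployment time looks roughly like this in the Data Flow shell (the stream name `mystream` and app name `log` are placeholders; `deployer.<app>.count` is the deployment property referred to above):

```
stream deploy --name mystream --properties "deployer.log.count=3"
```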

Cloud Foundry for SaaS

I am implementing a service broker for my SaaS application on Cloud Foundry.
On create-service of my SaaS application, I also create an instance of another service (say service-A), i.e. a new service instance of service-A is created for every tenant that onboards my application.
The details of the newly created service instance (service-A) are passed to my service broker via an environment variable.
To be able to process this newly injected environment variable, the service broker needs to be restaged/restarted.
This means downtime for the service broker for every newly onboarded customer.
I have following questions:
1) How are these kinds of use cases handled in Cloud Foundry?
2) Why did Cloud Foundry choose environment variables to pass the information required to use a service? It seems limiting, as it requires an application restart.
As a first guess, your service could be some kind of API provided to a customer. This API must store the data it receives in some database (e.g. MongoDB or MySQL). So MongoDB or MySQL would be what you call service-A.
Since you want the performance of the API endpoints for your customers to be independent of each other, you are provisioning dedicated databases for each of your customers, that is for each of the service instances of your service.
You are right that you would need to restage your service broker if you were to get the credentials to these databases from the environment of your service broker, or at least you would have to re-read the VCAP_SERVICES environment variable. Yet there is another solution:
Use the CC API to create the services and bind them to whatever app you like. Then use the CC API again to query the bindings of this app; the response will include the credentials. Here is the link to the API docs for this endpoint:
https://apidocs.cloudfoundry.org/247/apps/list_all_service_bindings_for_the_app.html
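As a sketch of that lookup, the v2 endpoint from the link above can be addressed as follows. The `api_url`, `app_guid`, and `token` values are placeholders, and obtaining an OAuth token from UAA is out of scope here:

```python
# Sketch: build the CC API request for listing an app's service bindings.
# Each binding in the response carries an entity.credentials field -- the
# same credentials the broker would otherwise read from VCAP_SERVICES.

def bindings_request(api_url: str, app_guid: str, token: str):
    """Return the (url, headers) pair for the service-bindings listing."""
    url = f"{api_url}/v2/apps/{app_guid}/service_bindings"
    headers = {"Authorization": f"bearer {token}"}
    return url, headers
```

An HTTP client of your choice (e.g. `requests.get(url, headers=headers)`) would then fetch the bindings without any restage of the broker.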
It sounds like you are not using services in the 'correct' manner. It's very hard to tell without more detail of your use case. For instance, why does your broker need to have this additional service attached?
To answer your questions:
1) Not like this. You're using service bindings to represent data, rather than using them as backing services. Many service brokers (I've written quite a few) need to dynamically provision things like Cassandra clusters, but they keep some state about which Cassandra clusters belong to which CF service in a data store of their own. The broker does not bind to each thing it is responsible for creating.
2) Because twelve-factor applications should treat backing services as attached, static resources. It is not normal to, say, add a new MySQL database to a running application.
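The broker-side bookkeeping described in 1) might be sketched like this; a dict stands in for the broker's own data store, and `create_cluster` is a placeholder for the real provisioning step:

```python
# Sketch: the broker keeps its own mapping from CF service instance ids to
# the resources it provisioned, instead of binding itself to each resource.

class Broker:
    def __init__(self, create_cluster):
        self.instances = {}                  # instance id -> cluster id
        self.create_cluster = create_cluster # e.g. spins up a Cassandra cluster
    def provision(self, instance_id):
        """Handle a create-service call: provision and remember ownership."""
        self.instances[instance_id] = self.create_cluster()
    def deprovision(self, instance_id):
        """Handle a delete-service call: forget and return the cluster id."""
        return self.instances.pop(instance_id)
```

In a real broker this mapping would live in a durable store the broker owns, so onboarding a new tenant touches only that store and never requires restaging the broker.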