Automating the Deployment for Different Entities - REST

I have an Institute Management application on which each institute can register.
What I want is that when a new institute registers successfully, a separate database server and a new jar deployment are created for it.
Every time a login happens from a particular institute, that institute's specific deployment instance should be invoked.
Alternatively, instead of deploying a new jar, we could provision only a new database server and, using the same jar, route each request to the appropriate datasource connection.
I don't want to store the data of different institutes in one database; I want a solution that scales if a large number of institutes are onboarded to the platform.
Tech Stack:
React, Java(Spring Boot, Monolith Arch.), Postgres.
Please suggest how this can be done.
Thanks.
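
One commonly used way to get what the second option describes (one shared jar, one database per institute) is Spring's AbstractRoutingDataSource, which selects a DataSource per request based on a tenant key. The sketch below is only an illustration of that idea; the TenantContext holder, the institute ids, and the JDBC URLs are made-up names, and in a real setup the tenant would be resolved from the login/JWT by a filter or interceptor.

```java
import java.util.HashMap;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical holder for the current institute's id, e.g. populated by a servlet
// filter that reads it from the JWT or a request header after login.
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    static void set(String tenantId) { CURRENT.set(tenantId); }
    static String get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

// Routes every JDBC call to the DataSource registered for the current institute.
class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.get();
    }
}

public class RoutingDataSourceExample {
    public static void main(String[] args) {
        // One Postgres database per institute; in practice these would be
        // provisioned and registered when a new institute signs up.
        Map<Object, Object> targets = new HashMap<>();
        targets.put("institute-a", pg("jdbc:postgresql://db-a:5432/institute_a"));
        targets.put("institute-b", pg("jdbc:postgresql://db-b:5432/institute_b"));

        TenantRoutingDataSource routing = new TenantRoutingDataSource();
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(pg("jdbc:postgresql://db-a:5432/institute_a"));
        routing.afterPropertiesSet();

        TenantContext.set("institute-b");
        // Any JdbcTemplate/JPA access through 'routing' now hits institute-b's database.
    }

    private static DataSource pg(String url) {
        DriverManagerDataSource ds = new DriverManagerDataSource(url, "app_user", "secret");
        ds.setDriverClassName("org.postgresql.Driver");
        return ds;
    }
}
```

New institutes can then be added at runtime by provisioning their database and adding its DataSource to the target map, so onboarding does not require a new jar deployment.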

IIB application backup options via mqsi commands

I work on IIB and we have many applications deployed on integration nodes and each application has its own integration servers/execution groups.
The requirement is to take a backup of a particular integration server with its application and later restore that backup.
I have been using the mqsibackupbroker command, which creates a backup of the integration server but also creates the complete folder structure of the integration node with policies, repositories, etc.
Is there any mqsi command to back up and restore a specific application in an integration node without impacting other applications in that integration server?
Thanks...
I don’t think there is a command for that.
But it’s not really necessary, because you can restore the application by redeploying its BAR file.
So for application level backup, you need to save the latest BAR file of each application.
This may not be sufficient if your applications are not stateless. But how you store the internal state of your application depends on the developers, so they need to implement custom methods to back up and restore the internal state of the apps. Not even mqsibackupbroker takes care of that.

Is there any way to move individual entity from one server to another in Master data services?

I have a master data model with some entities, and it is deployed on a production server.
Now I have created two more new entities on the development server and want to move only these two entities.
If anyone has any idea, please share it with me.
Thanks !
You have two options.
Web app (easiest): On your dev server, go to System Administration. Click on Deployment and create a package. You then deploy this package on the production server by following the same steps, but choosing Deploy instead of Create under the 'Deployment' button.
The alternative is to use the MDSModelDeploy.exe. You can find it on the server by going to the appropriate folder. Generally it's in this path: C:\Program Files\Microsoft SQL Server\130\Master Data Services\Configuration.
I recommend you use this method, as you have more control. You can choose to deploy with data or without, or clone your model. You can read more here: https://learn.microsoft.com/en-us/sql/master-data-services/deploy-a-model-deployment-package-by-using-mdsmodeldeploy
I can also recommend you consider the ModelPackageEditor when your model starts getting big. Then you have control over what you need to deploy, as in entities, views, business rules etc.
You need to have a deployment strategy in place, because if your development and production environments are not exactly the same, you will run into deployment errors. This typically happens when, for example, business rules exist on the environment to which you are deploying but not on your dev environment. MDS uses copious amounts of IDs, and if the models are not in sync, you run into problems.

Can I deploy/add a service fabric stateless service to participate in the existing cluster?

I want clients to be able to create their own stateless services and upload/publish them to join an existing cluster. Is this doable? I understand that I need to update the application manifests dynamically, but I am not sure how, or whether this is possible programmatically without side effects on the Service Fabric runtime processes.
The workflow is to upload the code (zipped file maybe or whatever) via an API gateway.
The first thing to keep in mind is that you do not deploy individual services to a Service Fabric cluster. You deploy applications, which can contain one or more services.
So the key question to ask is whether you need the new code to be integrated with an existing application type or not. It sounds like what you're trying to do is just enable multiple clients to deploy independent applications on a shared Service Fabric cluster, in which case you would not be modifying existing application types, but deploying entirely new ones.
Thus, you would need your API gateway to dynamically generate application and service manifests, combine them with the client-provided code to create an application package, then copy, register, and create those applications in the cluster. As far as the Service Fabric runtime is concerned, this looks no different than if you had deployed an application type built and packaged in Visual Studio. Processes running existing applications are not impacted.

Bluemix Liberty SQLDB

I have created an "enterprise template" Liberty server with an EAR file application requiring a few SQLDB connections. This is working and I am able to cf push this server to the Bluemix environment.
My question is: how do I go about packaging the entire content and publishing it to Bluemix in ONE action (i.e., so that they will have an instance of the same application running on Liberty with the same SQLDB table setup)?
From my quick browsing of the blogs and Q&A, I have only found articles talking about creating the SQLDB ahead of time, packaging the Liberty runtime as a .zip file, and then using cf push to Bluemix. Because the SQLDB was created ahead of time, the DB connections would work.
So is there a way to package the Liberty server with the SQLDB creation as one entity into perhaps one "buildpack"? If so, can someone guide me on the steps involved? (or articles/blogs, anything would help)
You can't do it.
If you want to create a script that does all the operations in one go, one idea is to create a simple job (in Java, for example) that you can launch from your script.
The job should perform these steps:
- connect to the SQLDB Bluemix service using VCAP_SERVICES (for this step you can see the documentation: https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#SQLDB)
- run the DDL (create table, ...) in your little job
- close the connection
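
A minimal sketch of such a job, assuming the SQLDB service is bound to the application so that Bluemix injects VCAP_SERVICES. The "sqldb" key and the credential field names (jdbcurl, username, password) follow the usual layout of that service binding but should be verified against your own environment, and the CREATE TABLE statement is just a placeholder for your real DDL.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.json.JSONObject;

public class SqldbSetupJob {
    public static void main(String[] args) throws Exception {
        // VCAP_SERVICES is injected by Bluemix when the SQLDB service is bound.
        // The "sqldb" key and credential field names below are assumptions based
        // on the usual layout; verify them against your own service binding.
        JSONObject vcap = new JSONObject(System.getenv("VCAP_SERVICES"));
        JSONObject credentials = vcap.getJSONArray("sqldb")
                .getJSONObject(0)
                .getJSONObject("credentials");

        String url = credentials.getString("jdbcurl");
        String user = credentials.getString("username");
        String password = credentials.getString("password");

        // Connect, run the DDL once, and close the connection.
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "CREATE TABLE CUSTOMER (ID INTEGER NOT NULL PRIMARY KEY, NAME VARCHAR(100))");
        }
    }
}
```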
Another option is to package a database migration helper (something like Flyway) in the application. Then you can invoke it from Java on application startup (we've had good luck with @Singleton @Startup EJBs for this pattern). The migration will run when needed, but leave the database alone otherwise. Another advantage of this pattern is that you can use the migrations to update the tables of an existing database (as the name suggests).
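
A minimal sketch of that startup pattern, assuming a Flyway version with the fluent configure() API is on the classpath; the JNDI name jdbc/sqldb and the classpath:db/migration location are assumptions and would need to match your server.xml and packaging.

```java
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.sql.DataSource;

import org.flywaydb.core.Flyway;

// Runs once when the application starts on Liberty and applies any pending
// Flyway migrations; subsequent starts leave an up-to-date schema alone.
@Singleton
@Startup
public class DatabaseMigrator {

    // Hypothetical JNDI name of the SQLDB data source configured in server.xml.
    @Resource(lookup = "jdbc/sqldb")
    private DataSource dataSource;

    @PostConstruct
    public void migrate() {
        Flyway flyway = Flyway.configure()
                .dataSource(dataSource)
                .locations("classpath:db/migration") // SQL migration scripts packaged in the EAR
                .load();
        flyway.migrate();
    }
}
```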

Spring batch application integration with spring batch admin

I developed a Spring Batch application which is deployed as an executable jar and launched with a batch/shell script. It works fine.
Now I have recently read about the Spring Batch Admin application release. As per their docs, you point it to a job-context.xml, and that allows the Spring Batch app to be started, restarted, and stopped from the admin app. Now my question is: do I have to keep my job-context.xml outside the jar, and what are the exact steps? I am confused about this configuration.
Any insight on this would be very useful; by the way, I am using Spring Batch 2.1.
Thanks
The Spring Batch Admin application is a good reference implementation and is highly customizable. All interface implementations may be replaced via Spring DI using your own classes. The UI is also template-driven (FreeMarker, I think) and therefore may be customized to display relevant information, change the skin, etc.
I had a similar need to yours - admin functionality included in an app built as a jar. I did not quite like the fact that I had to package my jobs as a .war file. Instead I extracted the relevant configurations from the Spring Batch Admin source and created a deployment that works off the file system and runs on an embedded Jetty server.
See screen shots here : https://github.com/regunathb/Trooper/wiki/Trooper-Batch-Web-Console
Source, configurations etc are available here : https://github.com/regunathb/Trooper/tree/master/batch-core . This project actually creates a .jar and not .war
If your application has custom classes and is deployed as a runnable jar and not contained within the spring batch admin, you cannot start jobs. You can only view the status of jobs and "kill" their status in the database.
If you look at http://static.springsource.org/spring-batch-admin/reference/reference.xhtml at the end of the Configuration Upload section it states
You can see a new entry in the job registry ("test-job") which is launchable in-process because the application has a reference to the Job. (Jobs which are not launchable were executed out of process, but used the same database for its JobRepository, so they show up with their executions in the UI.)
If your jobs are strictly configurable jobs - as in, you use only XML to define them and do not need any customized item readers/processors/writers or other custom classes - then you can upload the job XML and it will be runnable from within the admin site. If you have custom classes then, from my experience, you will have to have the Spring Batch application deployed within your web application and then upload an XML that contains the jobs you want to run separately.
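"Launchable in-process" simply means the admin application can look the job up in its own JobRegistry and hand it to a JobLauncher. A minimal sketch of that, with illustrative class and job names (this is not code taken from Spring Batch Admin itself):

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.configuration.JobRegistry;
import org.springframework.batch.core.launch.JobLauncher;

public class InProcessJobRunner {

    private final JobRegistry jobRegistry;
    private final JobLauncher jobLauncher;

    public InProcessJobRunner(JobRegistry jobRegistry, JobLauncher jobLauncher) {
        this.jobRegistry = jobRegistry;
        this.jobLauncher = jobLauncher;
    }

    // Launches a job that the application holds a reference to via its registry.
    public JobExecution run(String jobName) throws Exception {
        Job job = jobRegistry.getJob(jobName); // throws NoSuchJobException if not registered
        return jobLauncher.run(job,
                new JobParametersBuilder()
                        .addLong("run.id", System.currentTimeMillis())
                        .toJobParameters());
    }
}
```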
I personally just used the Admin tool to view job status and provide me with statistics through some custom pages. I left the scheduler to run the jobs and I didn't want those with access to the admin site to kick off a job when they knew nothing about it. Basically, used it to give the users a warm fuzzy without allowing them to muck it up. (leave it to a user to find an edge case you didn't account for)