spring-cloud-starter-stream-source-jdbc Example Application? - spring-cloud

I am trying to run the "spring-cloud-starter-stream-source-jdbc" application. My source is an RDBMS and I want to store the data into an RDBMS sink. I would like to know of a good demo application based on "spring-cloud-starter-stream-source-jdbc".
Is there support for incremental and full load while streaming data from an RDBMS source to an RDBMS sink using "spring-cloud-starter-stream-jdbc"?
Please share any reference blogs that explain a "spring-cloud-starter-stream-source-jdbc" demo application.

You can use the OOTB jdbc source/sink apps (with the binder of your choice: Rabbit or Kafka). The spring-cloud-starter-stream projects are the ones you would use inside your own application if you want to extend or build custom applications based on the jdbc starters.
For the OOTB apps, you can refer here. For instance, the jdbc source app with the Rabbit binder can be found here.
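If you go the custom-app route, a minimal sketch looks like the one below. It assumes the spring-cloud-starter-stream-source-jdbc dependency and a binder (Rabbit or Kafka) are on the classpath; the configuration class name follows the app-starter convention and may differ by release:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.stream.app.jdbc.source.JdbcSourceConfiguration;
    import org.springframework.context.annotation.Import;

    // Minimal custom source built on the jdbc source starter. Polling is
    // driven by properties such as jdbc.query (what to read) and jdbc.update
    // (how to mark rows as seen, which is one way to get incremental loads).
    @SpringBootApplication
    @Import(JdbcSourceConfiguration.class)
    public class CustomJdbcSourceApplication {

        public static void main(String[] args) {
            SpringApplication.run(CustomJdbcSourceApplication.class, args);
        }
    }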

Related

Data synchronization between primary and redundant servers

I want to synchronize data among a set of REST API servers (a Spring Boot based API cluster) periodically. Any instance in the cluster should be able to broadcast new information to all the others.
I don't want to use a DB here. I am trying to find a lightweight library that can be used inside the API for this purpose. Is it possible to use Atomix/Hazelcast/ZooKeeper for this? If so, it would be really helpful if someone could post sample code.
Thanks in advance.
In Hazelcast you can do it through WAN Replication.
It is an enterprise feature, though, so you have to buy a license.
Hazelcast can be used for this use-case. Each of the REST instances will create an embedded Hazelcast member within its JVM. Hazelcast members then discover each other and form the cluster. Your REST apps will use the IMap or ReplicatedMap service - a distributed key-value store (IMap can store more data, ReplicatedMap is faster). Once you write data to the IMap, all other instances see it right away.
See the code sample here: https://docs.hazelcast.com/hazelcast/latest/getting-started/get-started-java.html#complete-code-samples
This feature and the Spring integration are open-source.
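For a feel of the embedded approach, here is a minimal sketch against the Hazelcast 4/5 API (the map name and values are illustrative):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class EmbeddedHazelcastDemo {
        public static void main(String[] args) {
            // Starts an embedded member inside this JVM; members on the same
            // network discover each other and form a cluster automatically.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // IMap is a distributed key-value store: a put on one member is
            // visible to every other member right away.
            IMap<String, String> shared = hz.getMap("shared-data");
            shared.put("announcement", "new information");
            System.out.println(shared.get("announcement"));

            hz.shutdown();
        }
    }

Run the same class in two JVMs and a put from either one is readable from the other.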

Possible to search all topics data in Kafka?

I need a solution, preferably something built in (rather than creating my own application), which would help management search through multiple/all topics in Kafka. We are using Confluent Platform. Basically, a user should be able to search a keyword in a UI and it should search the current log of multiple/all Kafka topics and return the data. All the topics in our environment use JSON to communicate.
This search would enable us to track a flow: for example, multiple microservices send data from one system to another, and this flow can be tracked via a correlation id which is present in all the JSONs. So if someone searches for this correlation id, he should be able to see the messages involved in the flow. This search would have more use cases later on.
We need a solution with minimal coding involved. We would prefer to use a UI like Kibana.
From basic reading I suspect the solutions below, but I am not really sure, as I am new to Confluent (I used open-source Apache Kafka earlier):
Sol 1: use ksqlDB (I need more help on how to use it).
Sol 2: stream all topics' data to Elasticsearch using Kafka Connect and its built-in plugin, and use Kibana on top of Elastic (see the sketch after this list).
Kindly help me find the best alternative.
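For Sol 2, registering Confluent's Elasticsearch sink connector is mostly configuration rather than coding. A hedged sketch of posting the connector config to the Kafka Connect REST API (host names, the connector name, and the catch-all topics.regex are illustrative assumptions; exact config keys can vary by connector version):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RegisterElasticSink {
        public static void main(String[] args) throws Exception {
            // Connector config: index every topic matching the regex into
            // Elasticsearch so Kibana can search it.
            String connector = """
                {
                  "name": "elastic-sink-all-topics",
                  "config": {
                    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
                    "topics.regex": ".*",
                    "connection.url": "http://elasticsearch:9200",
                    "key.ignore": "true",
                    "schema.ignore": "true"
                  }
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://connect:8083/connectors"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(connector))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }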
You could use Elastic, sure.
You could also use Splunk, though.
There is also the pdk tool offered by Pilosa that creates a distributed index over Kafka events (no affiliation).
Another option would be distributed tracing using interceptors between clients, rather than searching "all topics"; that sounds like what you actually need.
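A hedged sketch of the interceptor idea: a producer-side interceptor that surfaces the correlation id of each outgoing JSON message so it can be logged or indexed (the "correlationId" field name and the crude substring extraction are illustrative assumptions):

    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerInterceptor;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    // Logs the correlation id of every outgoing message so an external
    // system can index the flow. Assumes JSON string values containing a
    // "correlationId" field.
    public class CorrelationIdInterceptor implements ProducerInterceptor<String, String> {

        @Override
        public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
            String value = record.value();
            int idx = value == null ? -1 : value.indexOf("\"correlationId\"");
            if (idx >= 0) {
                System.out.printf("topic=%s correlation=%s%n",
                        record.topic(),
                        value.substring(idx, Math.min(value.length(), idx + 60)));
            }
            return record; // pass the record through unchanged
        }

        @Override
        public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
            // no-op: tracing could also record acks/failures here
        }

        @Override
        public void close() { }

        @Override
        public void configure(Map<String, ?> configs) { }
    }

Clients enable it with the interceptor.classes producer property.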

Spring batch jpa and schema questions

In the documentation it says "JPA doesn't have a concept similar to the Hibernate StatelessSession so we have to use other features provided by the JPA specification." - what does this mean? Hibernate is one of the JPA implementations, so I'm a bit confused here.
I'm looking for an example where we use the JPA infrastructure that we already have (entities/CRUD repositories) to read and write data. Most examples talk about file reading and writing, and some about the JDBC cursor reader. But since we are using other Hibernate features like Envers, we want to use the same JPA approach that we are using for our online transactions. We are using Spring Boot/JPA (Hibernate) out of the box, with Oracle and an in-memory H2 DB for dev.
In prod we use Oracle. We have a user with access to some schemas; how can we tell Spring Batch to use a particular schema for its tables? For some time the same application will be used for both batch and online work, so we don't want to use a second datasource and a different user for batch if possible. Isn't this a very basic requirement for everyone?
Good documentation of Spring Batch; I also liked the Java/XML config toggle.
We use Spring Boot 2.x with Batch.
In the documentation it says "JPA doesn't have a concept similar to the Hibernate StatelessSession so we have to use other features provided by the JPA specification." - what does this mean?
The direct equivalent of the Hibernate Session API in JPA is the EntityManager. So this simply means there is no API like a "StatelessEntityManager" in JPA, and we need to achieve the same functionality with JPA APIs only, which is explained in the same section: after each page is read, the entities become detached and the persistence context is cleared, allowing the entities to be garbage collected once the page is processed.
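A hedged sketch of that pattern with Spring Batch's JpaPagingItemReader (the Person entity and the JPQL query are illustrative):

    import javax.persistence.EntityManagerFactory;

    import org.springframework.batch.item.database.JpaPagingItemReader;
    import org.springframework.batch.item.database.builder.JpaPagingItemReaderBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ReaderConfig {

        // "Person" stands in for one of your own JPA entities.
        @Bean
        public JpaPagingItemReader<Person> personReader(EntityManagerFactory emf) {
            // The reader fetches a page, hands the items on for processing,
            // then clears the persistence context so the detached entities
            // can be garbage collected.
            return new JpaPagingItemReaderBuilder<Person>()
                    .name("personReader")
                    .entityManagerFactory(emf)
                    .queryString("select p from Person p")
                    .pageSize(100)
                    .build();
        }
    }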
we want to use same jpa way that we are using for our online transactions.
You can use the same DAOs or repositories in both your web app and your batch app. For example, the ItemWriterAdapter lets you adapt your Hibernate/JPA DAO/repository to the item writer interface and use it to persist entities.
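A minimal sketch, assuming an existing Spring Data repository called PersonRepository (the names are illustrative):

    import org.springframework.batch.item.adapter.ItemWriterAdapter;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class WriterConfig {

        // Adapts the repository's save(Person) method to the ItemWriter
        // interface; it is invoked once per item in the chunk.
        @Bean
        public ItemWriterAdapter<Person> personWriter(PersonRepository repository) {
            ItemWriterAdapter<Person> writer = new ItemWriterAdapter<>();
            writer.setTargetObject(repository);
            writer.setTargetMethod("save");
            return writer;
        }
    }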
In prod we use Oracle. We have a user with access to some schemas; how can we tell Spring Batch to use a particular schema for its tables? For some time the same application will be used for both batch and online work, so we don't want to use a second datasource and a different user for batch if possible. Isn't this a very basic requirement for everyone?
You can use the same data source for both your web app and your batch app. Then it is up to you to choose the schema for the Spring Batch tables. I would recommend using the same schema so that data and meta-data are always in sync (when a Spring Batch transaction fails, for example).
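One hedged way to point the meta-data tables at a specific schema is a schema-qualified table prefix; with Spring Boot 2.x this can also be set via the spring.batch.table-prefix property, or programmatically as sketched below (the BATCHUSER schema name is illustrative):

    import javax.sql.DataSource;

    import org.springframework.batch.core.repository.JobRepository;
    import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.transaction.PlatformTransactionManager;

    @Configuration
    public class BatchRepositoryConfig {

        // The prefix must keep the trailing "BATCH_" because it replaces the
        // whole table prefix, not just the schema part.
        @Bean
        public JobRepository jobRepository(DataSource dataSource,
                                           PlatformTransactionManager transactionManager) throws Exception {
            JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
            factory.setDataSource(dataSource);
            factory.setTransactionManager(transactionManager);
            factory.setTablePrefix("BATCHUSER.BATCH_");
            factory.afterPropertiesSet();
            return factory.getObject();
        }
    }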
Hope this helps.

Integrating external objects into SF without Salesforce or Lightning connect (from Postgres tables)

I have some tables in a Postgres database that I want to integrate into Salesforce as external objects. I went through some video tutorials and documentation where Salesforce Connect was recommended, since it supports providers using the "OData" protocol. Is it possible to integrate Postgres tables into Salesforce as external objects without Salesforce Connect?
Thanks.
Be careful with the phrase "external objects". To me, the use of those particular words implies the specific implementation of external data access/federation delivered with Salesforce Connect. I don't believe that there is any alternative if your goal is to create "real" external objects (named "objectname__x") within Salesforce.
There are, though, Salesforce integration solutions from the likes of Progress, Jitterbit, MuleSoft, Informatica, and others that can be used to access PostgreSQL, with varying degrees of coding required. You won't get "external objects", but you will be able to access data residing off-cloud in a PostgreSQL database from your Salesforce system.
Hope this helps.
Currently, the way to integrate data from external storage (Postgres in your case) without Salesforce Connect is to implement your own synchronization logic using the REST or SOAP API, Apex classes and triggers, and Salesforce Workflows and Flows. You will also need to implement appropriate interfaces on the side of your data storage. The complexity of all these steps depends on the complexity of your existing data model and the infrastructure around it.

Multi-tenant application in Grails with shared DB and separate schemas - can anyone provide a demo app or a good reference?

I have to make a web application multi-tenant using the shared database, separate schema approach. The application is built with Grails and PostgreSQL.
I need a single app server using a shared database with multiple schemas, one schema per client.
What is the best implementation approach to achieve this?
- What needs to be done at the middle tier (app server) level?
- Do I need multiple host headers, one per client?
- How can I connect to the correct schema dynamically based on the client who is accessing the application?
Any links or pointers would be helpful.