How do I configure Javers with Spring Boot 2.x and PostgreSQL so that the Javers tables are created in a specific schema (named dbo) rather than the database's default "public" schema?
I have created the issue for that:
https://github.com/javers/javers/issues/690
Consider contributing a PR; it's really easy to implement this property.
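Until that lands, one workaround (assuming everything on that DataSource can live in dbo, not just the Javers tables) is the PostgreSQL JDBC driver's `currentSchema` URL parameter, which controls where unqualified CREATE TABLE statements go. A sketch, with host, database, and credentials as placeholders:

```properties
# application.properties (all values are placeholders)
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb?currentSchema=dbo
spring.datasource.username=app_user
spring.datasource.password=secret
```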
I have created an instance of MongoDB on an AWS EC2 instance.
I have two Spring Boot microservices connected to this database. One just inserts data and the other fetches it. My microservices have no delete operations, and no such code exists anywhere, so data being deleted by mistake is not a possible scenario.
**But somehow the MongoDB database is getting reset/cleared. The entire database is being deleted.**
I have checked the MongoDB configuration and I haven't explicitly changed anything.
MongoDB is the Community edition.
In the Spring Boot microservices, I am directly using a Spring Data MongoRepository to insert and fetch the data (a sketch is below).
I don't have any constraints in my POJOs.
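For reference, the data access is plain Spring Data; a minimal sketch (document and repository names are hypothetical):

```java
// Event.java -- hypothetical document class; field names are illustrative
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "events")
public class Event {
    @Id
    private String id;
    private String payload;
    // getters and setters omitted for brevity
}

// EventRepository.java -- only the inherited save/find methods are used; no deletes
import org.springframework.data.mongodb.repository.MongoRepository;

public interface EventRepository extends MongoRepository<Event, String> {
}
```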
Can someone point out what the issue might be? Does MongoDB have any default setting for resetting the database?
We are in the process of refactoring our databases. As part of that, we have split data that used to live in a single Postgres table into a new Postgres table schema and DynamoDB tables. What is the best way to migrate the data from the old schema into this new hybrid schema? We are thinking of writing a Java program to do it, but wanted to check whether we can leverage an AWS offering to do this more efficiently.
Check out the AWS Database Migration Service.
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases.
See https://aws.amazon.com/dms/
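If you do end up writing the Java program mentioned in the question, here is a minimal sketch of the DynamoDB leg, assuming Spring's JdbcTemplate and the AWS SDK for Java v1; the table, column, and attribute names are all hypothetical:

```java
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import org.springframework.jdbc.core.JdbcTemplate;

public class LegacyOrderMigrator {

    private final JdbcTemplate jdbc;   // points at the old Postgres schema
    private final Table target;        // the new DynamoDB table

    public LegacyOrderMigrator(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        this.target = new DynamoDB(client).getTable("orders"); // hypothetical table name
    }

    public void migrate() {
        // Loads everything at once -- fine for a one-off job on a modest table;
        // page the query (LIMIT/OFFSET or a cursor) for large tables.
        for (Map<String, Object> row : jdbc.queryForList(
                "SELECT id, customer_id, payload FROM legacy_orders")) { // hypothetical schema
            Item item = new Item()
                    .withPrimaryKey("id", String.valueOf(row.get("id")))
                    .withString("customerId", String.valueOf(row.get("customer_id")))
                    .withString("payload", String.valueOf(row.get("payload")));
            target.putItem(item);
        }
    }
}
```

That said, for the Postgres-to-Postgres part of the split, DMS is likely less work than hand-rolled code.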
I have created the Spring Cloud Task tables (i.e. TASK_EXECUTION, TASK_TASK_BATCH) with the prefix MYTASK_, and the Spring Batch tables with the prefix MYBATCH_, in an Oracle database.
The default tables are also present in the same schema; they were created automatically or by another teammate.
I have bound my Oracle database service to the SCDF server deployed on PCF.
How can I tell my Spring Cloud Data Flow server to use the tables created with my prefix to render data on the dashboard?
Currently, the SCDF dashboard uses the tables with the default prefix to render data, and that works fine. I want it to use my tables instead.
I am using Data Flow server version 1.7.3, deployed on PCF using a manifest.yml.
There's an open story to add this enhancement via spring-cloud/spring-cloud-dataflow#2048.
Feel free to contribute, or share your use-case details in the issue.
Currently, spring-cloud-dataflow and spring-cloud-skipper use Flyway to manage their database schemas, and it's not possible to prefix the table names. Trying to support this would add too much complexity, and I'm not even sure it would be possible.
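For completeness: the launched task/batch applications can already write to prefixed tables via their own configuration properties; it is the server-side rendering that is not configurable yet (that is what the issue above tracks). A sketch of the application-side properties, assuming relatively recent Spring Cloud Task and Spring Boot versions (exact property names vary by version):

```properties
# application.properties of the task/batch application, not of the SCDF server
spring.cloud.task.table-prefix=MYTASK_
spring.batch.table-prefix=MYBATCH_
```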
I tried to keep the Spring Batch metadata tables in a Mongo database, but it's not working correctly. I referred to and used the GitHub project below to configure a JobRepository that stores job data in MongoDB. That project was last updated three years ago and looks discontinued.
https://github.com/vfouzdar/springbatch-mongoDao
https://jbaruch.wordpress.com/2010/04/27/integrating-mongodb-with-spring-batch/
Currently my application uses in-memory tables for Spring Batch and the functional part is done, but I want the job data to be stored in MongoDB.
I have used MySQL for Spring Batch job data before, but I don't want MySQL in the current application.
If anybody has another solution or link that can help, please share.
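For what it's worth, the hook for plugging a non-default JobRepository into @EnableBatchProcessing is a BatchConfigurer. A minimal sketch, shown with Spring Batch's in-memory Map-based factory so that it compiles; the idea would be to return the Mongo-backed repository built from the linked project's DAOs instead:

```java
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean;
import org.springframework.batch.support.transaction.ResourcelessTransactionManager;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CustomJobRepositoryConfig extends DefaultBatchConfigurer {

    // @EnableBatchProcessing asks the BatchConfigurer for its JobRepository,
    // so this override is where a Mongo-backed implementation would be plugged in.
    @Override
    protected JobRepository createJobRepository() throws Exception {
        // In-memory factory used here only so the sketch compiles; replace it
        // with the Mongo-backed factory/DAOs from whichever project you adopt.
        MapJobRepositoryFactoryBean factory =
                new MapJobRepositoryFactoryBean(new ResourcelessTransactionManager());
        factory.afterPropertiesSet();
        return factory.getObject();
    }
}
```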
I have two database configuration Java config classes: one for JPA, with transactionManager() and entityManagerFactory() @Bean methods, and one for non-JPA, JdbcTemplate-based queries against the other database. The overall idea is to read data with the JdbcTemplate and persist it, after transformation, into the JPA-based datasource. I am using Spring Boot to enable auto-configuration. My test fails with:
java.lang.IllegalArgumentException: Not an managed type:
I have both spring-boot-starter-jdbc and spring-boot-starter-data-jpa in my build.gradle. My gut feeling is that the two data sources are on a collision course with each other. How do I assign each datasource to its intended use case: one for JPA and the other for the JdbcTemplate?
Details (Added after Dave's reply):
My service classes are annotated with @Service and my repository classes with @Repository. Services use repository objects via @Autowired, though some services are JdbcTemplate-based for data retrieval.
A more complete depiction of my environment, logically, is: JdbcTemplate(DataSource(Database(DB2))) --> Spring Batch item reader; processors; writer --> Service(Repository(JPA DataSource(Database(H2)))). The Spring Batch item processors connect to both databases through services. For Spring Batch, I am using a remote H2 job repository database to hold job execution details. Does this make sense? For Spring Batch, I am using de.codecentric:spring-boot-starter-batch-web:1.0.0.RELEASE. After getting past the "entityManagerFactory bean not found" errors, I want to have control over the wiring of the components above.
I don't think it has anything to do with the data sources. The log says you have a JPA repository for a type that is not an @Entity. By default, repositories are scanned from the package of the class where you declare @EnableAutoConfiguration. So one way to control it is to move the class with that annotation to a different package. In Boot 1.1 you can also set "spring.data.jpa.repositories.enabled=false" to switch off the scan if you don't want it. Or you can use @EnableJpaRepositories as normal.
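For the two-datasource wiring itself, a common pattern is to declare both DataSource beans explicitly, mark the JPA one @Primary so auto-configuration binds the EntityManagerFactory to it, and hand the second one to the JdbcTemplate. A minimal sketch, assuming Spring Boot 2.x (the property prefixes and bean names are illustrative, and the DataSourceBuilder import moved packages across Boot versions):

```java
import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class DataSourceConfig {

    // JPA auto-configuration (entityManagerFactory, transactionManager) binds
    // to the @Primary DataSource, so the JPA repositories end up on this one.
    @Bean
    @Primary
    @ConfigurationProperties("app.datasource.jpa")   // illustrative prefix
    public DataSource jpaDataSource() {
        return DataSourceBuilder.create().build();
    }

    // Second DataSource, used only for plain JDBC reads.
    // Note: with the default Hikari pool, the URL key is app.datasource.jdbc.jdbc-url.
    @Bean
    @ConfigurationProperties("app.datasource.jdbc")  // illustrative prefix
    public DataSource jdbcDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public JdbcTemplate jdbcTemplate() {
        return new JdbcTemplate(jdbcDataSource());
    }
}
```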