Spring Data REST: move rest-messages.properties values to the database (Spring Boot 2)

What needs to be added in Spring Boot 2 so that a message value is first looked up in the database and, if not found, falls back to the rest-messages.properties file?

You can use JPA to fetch these values from a database table. If the result is not empty, return the value found in the table; if it is empty, fall back to the properties file. A sketch of this approach follows.
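A minimal sketch of that idea, assuming a hypothetical MessageRepository (a Spring Data JPA repository over a table with code and text columns). Registering the bean as messageSource makes Spring use it, with rest-messages.properties as the parent fallback:

import java.text.MessageFormat;
import java.util.Locale;

import org.springframework.context.support.AbstractMessageSource;
import org.springframework.context.support.ReloadableResourceBundleMessageSource;
import org.springframework.stereotype.Component;

@Component("messageSource")
public class DatabaseMessageSource extends AbstractMessageSource {

    // Hypothetical Spring Data JPA repository over the message table.
    private final MessageRepository repository;

    public DatabaseMessageSource(MessageRepository repository) {
        this.repository = repository;
        // Fallback: the existing rest-messages.properties bundle.
        ReloadableResourceBundleMessageSource fallback =
                new ReloadableResourceBundleMessageSource();
        fallback.setBasename("classpath:rest-messages");
        setParentMessageSource(fallback);
    }

    @Override
    protected MessageFormat resolveCode(String code, Locale locale) {
        // Database first; returning null makes Spring consult the parent
        // message source, i.e. rest-messages.properties.
        return repository.findByCode(code)
                .map(m -> new MessageFormat(m.getText(), locale))
                .orElse(null);
    }
}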


Is it possible to read static table data before a batch job starts execution and use it as metadata for the batch job?

I am trying to read data using a simple select query and create a CSV file from the result set.
As of now, I have the select query in the application.properties file and I am able to generate the CSV file.
Now, I want to move the query to a static table and fetch it as an initialization step before the batch job starts (something like a before-job hook).
Could you please let me know the best strategy for this, i.e. reading from a database before the actual batch job of fetching the data and creating a CSV file starts?
I am able to read the data and write it to a CSV file.
application.properties:
extract.sql.query=SELECT * FROM schema.table_name
I want the query moved to the database and fetched before the actual job starts.
1) I created a job with one step (read, then write).
2) Implemented JobExecutionListener. In the beforeJob method, I used JdbcTemplate to fetch the relevant details (in my case, a query) from the DB.
3) Using jobExecution.getExecutionContext(), I set the query in the execution context.
4) Used a step-scoped reader to retrieve the value via late binding: @Value("#{jobExecutionContext['Query']}") String myQuery.
5) The key to success here is to pass a placeholder value of null when wiring the reader, so that the code compiles; the real value is injected at runtime. A sketch of these steps follows.
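A minimal sketch of steps 2-5, assuming a hypothetical BATCH_QUERY_CONFIG table that stores the query text per job name:

import java.util.Map;

import javax.sql.DataSource;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.ColumnMapRowMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class QueryLoadingListener implements JobExecutionListener {

    private final JdbcTemplate jdbcTemplate;

    public QueryLoadingListener(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // Step 2: fetch the extraction query from the static table
        // (table and column names are made up for this example).
        String query = jdbcTemplate.queryForObject(
                "SELECT QUERY_TEXT FROM BATCH_QUERY_CONFIG WHERE JOB_NAME = ?",
                String.class, jobExecution.getJobInstance().getJobName());
        // Step 3: put it in the execution context for step-scoped beans.
        jobExecution.getExecutionContext().putString("Query", query);
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        // nothing to clean up
    }
}

@Configuration
class ReaderConfiguration {

    // Step 4: the query is late-bound from the job execution context.
    // Step 5: when referencing this method while building the step, pass
    // null for myQuery -- the placeholder keeps the code compiling, and
    // Spring injects the real value when the step starts.
    @Bean
    @StepScope
    public JdbcCursorItemReader<Map<String, Object>> reader(
            DataSource dataSource,
            @Value("#{jobExecutionContext['Query']}") String myQuery) {
        JdbcCursorItemReader<Map<String, Object>> reader = new JdbcCursorItemReader<>();
        reader.setDataSource(dataSource);
        reader.setSql(myQuery);
        reader.setRowMapper(new ColumnMapRowMapper());
        return reader;
    }
}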

Lift load date format issue from CSV file

We are migrating Db2 data to Db2 on Cloud. We are using the following lift CLI operations for the migration:
Extracting a database table to a CSV file using lift extract from the source database.
Then loading the extracted CSV file into Db2 on Cloud using lift load.
ISSUE:
We created some tables using DDL on the target Db2 on Cloud which have columns of data type TIMESTAMP.
During the load operation (lift load), we get the following error:
"MESSAGE": "The field in row \"2\", column \"8\" which begins with
\"\"2018-08-08-04.35.58.597660\"\" does not match the user specified
DATEFORMAT, TIMEFORMAT, or TIMESTAMPFORMAT. The row will be
rejected.", "SQLCODE": "SQL3191W"
If you use Db2 as the source database, then either:
use the following property during export (to export dates, times, and timestamps the way Db2 utilities usually do, without double quotes):
source-database-type=db2
or use the following property during load, if you have already exported timestamps surrounded by double quotes:
timestamp-format="YYYY-MM-DD-HH24.MI.SS.FFFFFF"
If the data was extracted using lift extract, then you should definitely load it with source-database-type=db2. Using this parameter preconfigures all the necessary load details automatically. Both options are summarized below.
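Put together as a properties file (the file name and how it is passed to the lift CLI depend on your lift version; this just restates the two options above):

# Option 1 -- during export: write timestamps the standard Db2 way
source-database-type=db2
# Option 2 -- during load, only if timestamps were already exported in double quotes
# timestamp-format="YYYY-MM-DD-HH24.MI.SS.FFFFFF"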

Spring Boot add datasources at runtime?

I have a default database containing a table with the columns 'name' and 'jndi', representing different datasources.
I can add rows to this table at runtime.
How can I query data from these datasources?
I have found this reference: https://www.codeday.top/2017/07/12/27122.html
But that sample seems to require predefining all the datasources.
Can somebody give me some suggestions?
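One possible approach, sketched here (not from the thread): resolve each DataSource lazily by the JNDI name stored in the table, and cache it. The 'name' and 'jndi' columns come from the question; the 'datasources' table name and the service shape are assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;
import org.springframework.stereotype.Service;

@Service
public class DynamicDataSourceService {

    private final JdbcTemplate defaultJdbc; // points at the default database
    private final Map<String, DataSource> cache = new ConcurrentHashMap<>();

    public DynamicDataSourceService(JdbcTemplate defaultJdbc) {
        this.defaultJdbc = defaultJdbc;
    }

    // Returns a JdbcTemplate for the datasource registered under 'name'.
    public JdbcTemplate templateFor(String name) {
        DataSource ds = cache.computeIfAbsent(name, n -> {
            // Look up the JNDI name in the registry table, then resolve it.
            String jndi = defaultJdbc.queryForObject(
                    "SELECT jndi FROM datasources WHERE name = ?", String.class, n);
            return new JndiDataSourceLookup().getDataSource(jndi);
        });
        return new JdbcTemplate(ds);
    }
}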

How does read-through work in Ignite

My cache is empty, so SQL queries return null.
Read-through means that on a cache miss, Ignite will automatically go down to the underlying DB (or persistent store) to load the corresponding data.
If new data is inserted into the underlying DB table, do I have to bring the cache server down to load the newly inserted data from the DB table, or will it sync automatically?
Does it work the same as Spring's @Cacheable, or differently?
It looks to me like the answer is no: SQL queries on the cache don't work because there is no data in the cache. But when I tried cache.get, I got the following results:
Case 1:
System.out.println("data == " + cache.get(new PersonKey("Manish", "Singh")).getPhones());
result ==> data == 1235
Case 2:
PersonKey per = new PersonKey();
per.setFirstname("Manish");
System.out.println("data == " + cache.get(per).getPhones());
This throws an error (stack trace screenshots omitted; they show the lookup failing on the missing lastname).
Read-through semantics can be applied when there is a known set of keys to read. This is not the case with SQL, so if your data is in an arbitrary 3rd-party store (RDBMS, Cassandra, HBase, ...), you have to preload the data into memory prior to running queries.
However, Ignite provides native persistence storage [1], which eliminates this limitation. It allows you to use any Ignite API without having anything in memory, and this includes SQL queries as well. Data will be fetched into memory on demand as you use it.
[1] https://apacheignite.readme.io/docs/distributed-persistent-store
When you insert something into the database and it is not yet in the cache, get operations will retrieve the missing values from the DB, provided readThrough is enabled and a CacheStore is configured.
But currently it doesn't work this way for SQL queries executed on the cache. You should call loadCache first; then the values will appear in the cache and become available to SQL. A configuration sketch follows.
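A minimal sketch of that setup, where PersonStore stands in for a hypothetical CacheStore implementation mapping PersonKey/Person to the underlying table:

import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReadThroughExample {
    public static void main(String[] args) {
        CacheConfiguration<PersonKey, Person> ccfg =
                new CacheConfiguration<>("personCache");
        ccfg.setReadThrough(true); // cache.get() misses fall through to the store
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
        ccfg.setIndexedTypes(PersonKey.class, Person.class); // enable SQL over the cache

        Ignite ignite = Ignition.start();
        IgniteCache<PersonKey, Person> cache = ignite.getOrCreateCache(ccfg);

        // This get() can now load a missing entry from the DB. SQL queries,
        // however, only see data already in memory (or loaded via loadCache).
        System.out.println(cache.get(new PersonKey("Manish", "Singh")));
    }
}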
When you perform your second get, the exact combination of firstname and lastname is looked up in the DB. It is converted into a CQL query containing a lastname = null condition, and it fails because lastname cannot be null.
UPD:
To get all records whose firstname column equals 'Manish', you can first call loadCache with an appropriate predicate and then run an SQL query on the cache.
// preload matching entries from the store, filtering on the firstname field
cache.loadCache((k, v) -> v.firstname.equals("Manish"));
SqlFieldsQuery qry = new SqlFieldsQuery(
        "select firstname, lastname from Person where firstname = 'Manish'");
try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor)
        System.out.println("firstname: " + row.get(0) + ", lastname: " + row.get(1));
}
Note that loadCache is a heavy operation that has to run over all records in the DB, so it shouldn't be called too often. You can pass null as the predicate, in which case all records will be loaded from the database.
Also, to make SQL run fast on the cache, you should mark the firstname field as indexed in the QueryEntity configuration.
In your case 2, have you tried specifying lastname as well? From your stack trace it's evident that Cassandra expects it to be non-null.

Spring: store JdbcTemplate (H2 DB) data permanently

I am starting to learn Spring and have run into some issues with spring-jdbc.
First, I ran the example from https://spring.io/guides/gs/relational-data-access/ and it worked. Then I commented out the lines that drop and recreate the tables (http://pastebin.com/zcJHsL1P) so as not to overwrite the data, but just read it from the DB and show it. However, Spring gave me this error:
Table "CUSTOMERS" not found; SQL statement: ...
So, my question is: what should I do to store my database permanently? I don't want to recreate the database every time; I want to create it once and keep updating it.
P.S. I used the H2 database. Maybe the problem lies in this DB?
That piece of code looks like you are "prototyping" something, so it's easier to automatically create a new database (schema, tables, data) on the fly, execute and/or test whatever you want to, and finish the execution.
If you want to persist your data and only modify/update it, either use H2 with its "file layout" or use MySQL, PostgreSQL, etcetera.
By the way, the reason you are getting Table "CUSTOMERS" not found; SQL statement: ... is that you are using H2 as an in-memory database, so every time you start your application you need to re-create the tables and populate them with data.
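For example, switching H2 to its "file layout" via application.properties keeps the data across restarts (the ./data/customers path is just an example):

# store the database in a file instead of in memory
spring.datasource.url=jdbc:h2:file:./data/customers
spring.datasource.driver-class-name=org.h2.Driver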