Spring Boot add datasources at runtime? - jpa

I have a default database containing a table with the columns 'name' and 'jndi', where each row points to a different datasource.
I can add rows to this table at runtime.
How can I query data from these datasources?
I have found this reference: https://www.codeday.top/2017/07/12/27122.html
But that sample seems to require predefining all the datasources.
Can somebody give me some suggestions?
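
One possible approach, sketched below: look up each DataSource by the JNDI name stored in the 'jndi' column and wrap it in a JdbcTemplate on demand, so nothing needs to be predefined as a bean. This is only a sketch; the class and method names are made up, and it assumes the JNDI names actually resolve in your container.

import java.util.List;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

// Hypothetical service: resolves a DataSource from the JNDI name stored in the
// default database's table and queries it on demand, without predefined beans.
public class RuntimeDataSourceQueryService {

    private final JndiDataSourceLookup lookup = new JndiDataSourceLookup();

    public List<Map<String, Object>> query(String jndiName, String sql) {
        // jndiName is the value of the 'jndi' column, e.g. "java:comp/env/jdbc/customers"
        DataSource dataSource = lookup.getDataSource(jndiName);
        return new JdbcTemplate(dataSource).queryForList(sql);
    }
}

In practice you would probably cache the resolved DataSource (or JdbcTemplate) per name instead of looking it up on every call.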

Related

How do we filter tables for a source JDBC connector to fetch data

I'm aware of the table.whitelist property for fetching data from whitelisted tables.
Now, in our database we need to extract data only from tables whose names follow a certain format.
For example, my database may contain tables such as:
cus_01
emp_01
cus_02
emp_02
I need to extract the tables that end with 01 (i.e., cus_01 and emp_01).
How can I achieve it?
You can provide a regex for the table.whitelist or table.include.list properties.
For your use case, you can use the following property to include the tables that end with 01:
"table.include.list": "^(Database_name.)(.+01)"

Can we set the output of one Datasource Query as the input of another Datasource Query in Grafana Mixed Datasource (MS SQL Server)

I want to query data by joining two tables that live in two different databases (two different datasources in Grafana). I am trying to use Grafana's Mixed Datasource for this. Since I need to query two different databases (datasources), is it possible to set the output of one datasource query as the input of another datasource query, so that the two tables can be joined?

Hibernate postgresql time difference in model data type

I have a PostgreSQL table with two timestamp columns and a corresponding Hibernate model. Now I want to create a new @Formula field which stores the difference between these two columns, i.e. answer_ts - invite_ts.
What would be the right Java data type to map this formula to?
Thanks,
Kaushik
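
A minimal sketch of one way this could be mapped, assuming Hibernate 5+ with javax.persistence, PostgreSQL, and that the difference is wanted as whole seconds; the entity and field names other than answer_ts and invite_ts are made up:

import java.time.Instant;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Formula;

@Entity
public class SurveyResponse {

    @Id
    private Long id;

    @Column(name = "invite_ts")
    private Instant inviteTs;

    @Column(name = "answer_ts")
    private Instant answerTs;

    // The @Formula SQL is evaluated by PostgreSQL: extract(epoch from ...) turns the
    // interval (answer_ts - invite_ts) into seconds, which can map to a plain Long.
    @Formula("floor(extract(epoch from (answer_ts - invite_ts)))")
    private Long responseTimeSeconds;

    public Long getResponseTimeSeconds() {
        return responseTimeSeconds;
    }
}

Note that a @Formula field is computed when the entity is loaded and is read-only; it is not stored as a column.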

Unable to create db2 table with DATE data type

I am using DB2 9.7 (LUW) on a Windows server, with multiple DBs in a single DB instance. I just found that in one of these DBs I am unable to add a column with the DATE data type, either during table creation or when altering a table. The added column is getting changed to TIMESTAMP instead.
Any help on this will be welcome.
Check your Oracle compatibility setting.
Depending on that setting, a DATE is interpreted as TIMESTAMP(0), as in your example.
Because this setting only takes effect for databases created after the DB2_COMPATIBILITY_VECTOR registry variable was set, different databases in the same instance can show different behaviour.

How to read AWS Glue Data Catalog table schemas programmatically

I have a set of daily CSV files of uniform structure which I will upload to S3. There is a downstream job which loads the CSV data into a Redshift database table. The number of columns in the CSV may increase, and from that point onwards the new files will contain the new columns. When this happens, I would like to detect the change and add the columns to the target Redshift table automatically.
My plan is to run a Glue Crawler on the source CSV files. Any change in schema would generate a new version of the table in the Glue Data Catalog. I would then like to programmatically read the table structure (columns and their datatypes) of the latest version of the Table in the Glue Data Catalog using Java, .NET or other languages and compare it with the schema of the Redshift table. In case new columns are found, I will generate a DDL statement to alter the Redshift table to add the columns.
Can someone point me to any examples of reading Glue Data Catalog tables using Java, .NET or other languages? Are there any better ideas to automatically add new columns to Redshift tables?
If you want to use Java, use the dependency:
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-glue</artifactId>
    <version>{VERSION}</version>
</dependency>
And here's a code snippet to get your table versions and the list of columns:
import java.util.List;
import com.amazonaws.services.glue.AWSGlue;
import com.amazonaws.services.glue.AWSGlueClientBuilder;
import com.amazonaws.services.glue.model.Column;
import com.amazonaws.services.glue.model.GetTableVersionsRequest;
import com.amazonaws.services.glue.model.GetTableVersionsResult;
import com.amazonaws.services.glue.model.TableVersion;

AWSGlue client = AWSGlueClientBuilder.defaultClient();
GetTableVersionsRequest tableVersionsRequest = new GetTableVersionsRequest()
        .withDatabaseName("glue_catalog_database_name")
        .withTableName("table_name_generated_by_crawler");
GetTableVersionsResult results = client.getTableVersions(tableVersionsRequest);
// Here you have all the table versions; at this point you can check for new ones
List<TableVersion> versions = results.getTableVersions();
// Here's how to get to the table columns
List<Column> tableColumns = versions.get(0).getTable().getStorageDescriptor().getColumns();
See the AWS docs for the TableVersion and StorageDescriptor objects.
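Following the plan in the question, here is a rough sketch of turning newly found Glue columns into ALTER TABLE statements for Redshift; the helper class is hypothetical and the Glue-to-Redshift type mapping is deliberately oversimplified:

import java.util.List;
import java.util.stream.Collectors;

import com.amazonaws.services.glue.model.Column;

// Hypothetical helper: one ALTER TABLE ... ADD COLUMN statement per new column.
// Glue/Hive types (e.g. "string", "bigint") need a real mapping to Redshift types.
public class RedshiftDdlBuilder {

    public static List<String> addColumnStatements(String redshiftTable, List<Column> newColumns) {
        return newColumns.stream()
                .map(c -> "ALTER TABLE " + redshiftTable + " ADD COLUMN " + c.getName()
                        + " " + ("string".equals(c.getType()) ? "VARCHAR(256)" : c.getType().toUpperCase()))
                .collect(Collectors.toList());
    }
}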
You could also use the boto3 library for Python.
Hope this helps.