Spring Boot: show SQL parameter binding? (JPA)

I am new to Spring Boot. What is the configuration setting for logging SQL parameter binding? For example, in the following statement I should be able to see the values for all the '?' placeholders.
SELECT * FROM MyFeed WHERE feedId > ? AND isHidden = false ORDER BY feedId DESC LIMIT ?
Currently, I have the following configuration:
spring.jpa.show-sql: true

Add these to application.properties and you should see the logs in detail:
logging.level.org.hibernate.SQL=debug
logging.level.org.hibernate.type.descriptor.sql=trace
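If you use application.yml instead of application.properties, the equivalent settings would presumably be:
logging:
  level:
    org.hibernate.SQL: debug
    org.hibernate.type.descriptor.sql: trace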

In application.yml, add the following property:
logging:
  level:
    org:
      hibernate:
        type: trace
Add the following to print formatted SQL in the console:
spring:
  jpa:
    show-sql: true
    properties:
      hibernate:
        format_sql: true
Assuming you are finding a student record by its id, you will see the binding parameter logged as follows:
Hibernate: select student0_.id as id8_5_0_ from student student0_ where student0_.id=?
2020-07-30 12:20:44.005 TRACE 1328 --- [nio-8083-exec-8] o.h.type.descriptor.sql.BasicBinder : binding parameter [1] as [BIGINT] - [1]

spring.jpa.show-sql is just a hint to the underlying persistence provider, e.g. Hibernate, EclipseLink, etc. Without knowing which one you are using, it is difficult to say more.
For Hibernate you can configure logging to also output the bind parameters:
http://www.mkyong.com/hibernate/how-to-display-hibernate-sql-parameter-values-log4j/
which will give you output like:
Hibernate: INSERT INTO transaction (A, B)
VALUES (?, ?)
13:33:07,253 DEBUG FloatType:133 - binding '10.0' to parameter: 1
13:33:07,253 DEBUG FloatType:133 - binding '1.1' to parameter: 2
An alternative solution that should work across all JPA providers is to use something like log4jdbc, which gives you nicer output:
INSERT INTO transaction (A, B) values (10.0, 1.1);
See:
https://code.google.com/p/log4jdbc-log4j2/
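A minimal sketch of how log4jdbc is typically wired into a Spring Boot datasource, assuming the log4jdbc-log4j2 artifact is on the classpath and a MySQL database (the URL and database name below are illustrative):
spring.datasource.driver-class-name=net.sf.log4jdbc.sql.jdbcapi.DriverSpy
spring.datasource.url=jdbc:log4jdbc:mysql://localhost:3306/mydb
# log4jdbc's jdbc.sqlonly logger prints the SQL with the bind values inlined
logging.level.jdbc.sqlonly=info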

Add these to the properties file:
#to show sql
spring.jpa.properties.hibernate.show_sql=true
#formatting
spring.jpa.properties.hibernate.format_sql=true
#printing parameter values in order
logging.level.org.hibernate.type.descriptor.sql=trace

For Spring Boot 3, which uses Hibernate 6, the settings above do not work.
Try:
logging:
  level:
    org.hibernate.orm.jdbc.bind: trace
See:
https://stackoverflow.com/a/74587796/2648077 and https://stackoverflow.com/a/74862954/2648077
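In application.properties form, that would be:
logging.level.org.hibernate.SQL=debug
logging.level.org.hibernate.orm.jdbc.bind=trace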

For EclipseLink, add these lines to application.properties:
jpa.eclipselink.showsql=true
jpa.eclipselink.logging-level=FINE
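If those keys have no effect in your setup, EclipseLink's own persistence-unit properties for SQL and bind-parameter logging can also be passed through Spring's spring.jpa.properties prefix (a sketch; verify the keys against your EclipseLink version):
spring.jpa.properties.eclipselink.logging.level.sql=FINE
spring.jpa.properties.eclipselink.logging.parameters=true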

Related

How can I specify a different database and schema for creating temporary tables in Great Expectations?

Great Expectations creates temporary tables. I tried profiling data in my Snowflake lab. It worked because the role I was using could create tables in the schema that contained the tables I was profiling.
I tried to profile a table in a Snowflake share, where we can't create objects, and it failed:
(snowflake.connector.errors.ProgrammingError) 002003 (02000): SQL compilation error:
Schema 'OUR_DATABASE.SNOWFLAKE_SHARE_SCHEMA' does not exist or not authorized.
[SQL: CREATE OR REPLACE TEMPORARY TABLE ge_temp_3eb6c50b AS SELECT *
FROM "SNOWFLAKE_SHARE_SCHEMA"."INTERESTING_TABLE"
WHERE true]
(Background on this error at: https://sqlalche.me/e/14/f405)
Here's the output from the CLI:
% great_expectations suite new
Using v3 (Batch Request) API
How would you like to create your Expectation Suite?
1. Manually, without interacting with a sample batch of data (default)
2. Interactively, with a sample batch of data
3. Automatically, using a profiler
: 3
A batch of data is required to edit the suite - let's help you to specify it.
Select data_connector
1. default_runtime_data_connector_name
2. default_inferred_data_connector_name
3. default_configured_data_connector_name
: 3
Which data asset (accessible by data connector "default_configured_data_connector_name") would you like to use?
1. INTERESTING_TABLE
Type [n] to see the next page or [p] for the previous. When you're ready to select an asset, enter the index.
: 1
Name the new Expectation Suite [INTERESTING_TABLE.warning]:
Great Expectations will create a notebook, containing code cells that select from available columns in your dataset and
generate expectations about them to demonstrate some examples of assertions you can make about your data.
When you run this notebook, Great Expectations will store these expectations in a new Expectation Suite "INTERESTING_TABLE.warning" here:
file:///path/to-my-repo/great_expectations/expectations/INTERESTING_TABLE/warning.json
Would you like to proceed? [Y/n]: Y
Here's the datasources section from great_expectations.yml:
datasources:
  our_snowflake:
    class_name: Datasource
    module_name: great_expectations.datasource
    execution_engine:
      module_name: great_expectations.execution_engine
      credentials:
        host: xyz92716.us-east-1
        username: MYUSER
        query:
          schema: MYSCHEMA
          warehouse: MY_WAREHOUSE
          role: RW_ROLE
        password: password1234
        drivername: snowflake
      class_name: SqlAlchemyExecutionEngine
    data_connectors:
      default_runtime_data_connector_name:
        class_name: RuntimeDataConnector
        batch_identifiers:
          - default_identifier_name
        module_name: great_expectations.datasource.data_connector
      default_inferred_data_connector_name:
        include_schema_name: true
        class_name: InferredAssetSqlDataConnector
        introspection_directives:
          schema_name: SNOWFLAKE_SHARE_SCHEMA
        module_name: great_expectations.datasource.data_connector
      default_configured_data_connector_name:
        assets:
          INTERESTING_TABLE:
            schema_name: SNOWFLAKE_SHARE_SCHEMA
            class_name: Asset
            module_name: great_expectations.datasource.data_connector.asset
        class_name: ConfiguredAssetSqlDataConnector
        module_name: great_expectations.datasource.data_connector
How can I tweak great_expectations.yml so that temporary objects are created in a separate database and schema from the datasource?
As a workaround, we created a view in the schema with read/write that points to the data in the read-only share. That adds an extra step. I'm hoping there's a simple config to create temporary objects outside the schema being profiled.
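For reference, the view workaround described above amounts to something like this (hypothetical names; RW_SCHEMA stands for a schema in which the role is allowed to create objects):
CREATE VIEW OUR_DATABASE.RW_SCHEMA.INTERESTING_TABLE_V AS
SELECT * FROM OUR_DATABASE.SNOWFLAKE_SHARE_SCHEMA.INTERESTING_TABLE;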

Spring Batch / Postgres : ERROR: relation "batch_job_instance" does not exist

I am trying to configure Spring Batch to use a PostgreSQL DB. I have included the following dependencies in my build.gradle.kts file:
implementation("org.springframework.boot:spring-boot-starter-data-jpa")
implementation("org.postgresql:postgresql")
My application.yml for my Spring Batch module includes the following:
spring:
  datasource:
    url: jdbc:postgresql://postgres:5432/springbatchdb
    username: postgres
    password: root
    driverClassName: org.postgresql.Driver
docker-compose.yml
postgres:
  restart: always
  image: postgres:12-alpine
  container_name: postgres
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=springbatchdb
  ports:
    - "5432:5432"
  volumes:
    - postgresql:/var/lib/postgresql
    - postgresql_data:/var/lib/postgresql/data
However, when I try to add a data file I see the following error in the logs of both my Spring Batch Docker container and the Postgres container:
Spring Batch:
<<< Exception in method: org.meanwhileinhell.spring.batch.server.SpringBatchController.handle Error Message: PreparedStatementCallback; bad SQL grammar [SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "batch_job_instance" does not exist
Postgres:
LOG: database system is ready to accept connections
2021-01-08 09:54:56.778 UTC [56] ERROR: relation "batch_job_instance" does not exist at character 39
2021-01-08 09:54:56.778 UTC [56] STATEMENT: SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = $1 and JOB_KEY = $2
2021-01-08 09:55:27.033 UTC [56] ERROR: relation "batch_job_instance" does not exist at character 39
2021-01-08 09:55:27.033 UTC [56] STATEMENT: SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = $1 and JOB_KEY = $2
I can see that the Spring Batch server is picking up POSTGRES from my metadata OK:
JobRepositoryFactoryBean : No database type set, using meta data indicating: POSTGRES
What am I missing to get the initial db configured during the server start?
Edit: I've tried adding spring.datasource.initialize=true explicitly, but no change.
Please check that the following is added in application.yml:
spring.batch.initialize-schema: always
Please check that the following dependency is added (shown in full in the Maven sketch below):
<artifactId>spring-boot-starter-batch</artifactId>
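For reference, the full Maven dependency element would look like this (assuming the version is managed by the Spring Boot parent or BOM):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-batch</artifactId>
</dependency>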
The properties file is:
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=1234
spring.datasource.driver-class-name=org.postgresql.Driver
spring.batch.jdbc.initialize-schema=always
gradle dependencies
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-jdbc'
    implementation 'org.springframework.boot:spring-boot-starter-batch'
    implementation 'org.projectlombok:lombok-maven-plugin:1.18.6.0'
    implementation group: 'org.postgresql', name: 'postgresql', version: '42.3.1'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testImplementation 'org.springframework.batch:spring-batch-test'
}
You need to set the spring.batch.initialize-schema=always property to tell Spring Boot to create the Spring Batch tables automatically. Please refer to the Initialize a Spring Batch Database section of Spring Boot's reference documentation for more details.
For anyone who already has spring.batch.initialize-schema=always set and it's still not working, also verify that you are connecting to the database with a user that has sufficient privileges, including the privilege to create the necessary tables.
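As a rough illustration, the kind of grants needed on Postgres might look like the following sketch, assuming a hypothetical application user batch_user and the default public schema:
-- hypothetical user and schema names; adjust to your setup
GRANT USAGE, CREATE ON SCHEMA public TO batch_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO batch_user;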
In application.properties:
Prior to Spring Boot 2.5, use
spring.batch.initialize-schema=ALWAYS
From Spring Boot 2.5 onwards, use
spring.batch.jdbc.initialize-schema=ALWAYS
Solution that worked for me with Spring Batch 5.0!
I spent a lot of time resolving issues like ERROR: relation "X" does not exist when using the latest Spring Boot Starter 3.0 and Spring Batch 5.0.
spring.batch.jdbc.initialize-schema=always
However, it didn't create the necessary tables for me, even though, according to the documentation, it should have.
After a lot of research, I found that the latest Spring Batch 5.0 brings a lot of improvements, and that I was doing several things wrong when migrating to it.
Remove @EnableBatchProcessing from your configuration, as you don't need it anymore with Spring Batch 5.
Example:
@Configuration
@AllArgsConstructor
@EnableBatchProcessing // please remove it
public class SpringBatchConfiguration {}
change it to:
@Configuration
@AllArgsConstructor
public class SpringBatchConfiguration {}
PlatformTransactionManager: the second thing I was doing wrong was using an incorrect transaction manager. If you are using JPA for persisting entities, you need a corresponding transaction manager.
I was using ResourcelessTransactionManager, which was wrong in my case and caused a lot of headaches at runtime.
For JPA you need a JpaTransactionManager.
Something like:
@Bean
public PlatformTransactionManager transactionManager() {
    return new JpaTransactionManager();
}
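If the default constructor does not pick up your EntityManagerFactory, a variant that injects it explicitly might look like this (a sketch; the EntityManagerFactory parameter is assumed to come from Spring Boot's JPA auto-configuration):
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    // Bind the JPA transaction manager explicitly to the auto-configured EntityManagerFactory
    return new JpaTransactionManager(entityManagerFactory);
}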
The third thing I learned from my mistakes: we don't need to create a DataSource bean unless we are doing something complex, such as having two DataSources, one for the Spring Batch tables and another for persisting our business data.
Wherever we are required to use a JobRepository, just inject it.
Something like:
@Bean
@Autowired
Job job(JobRepository jobRepository) {
    JobBuilder jobBuilderFactory = new JobBuilder("somename", jobRepository);
    return jobBuilderFactory.flow(step1(jobRepository)).end()
            .build();
}
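For completeness, a hypothetical step1 in the new Spring Batch 5 builder style might look like the sketch below (the step name and the no-op tasklet are made up for illustration, and the transactionManager() bean defined above is reused):
@Bean
Step step1(JobRepository jobRepository) {
    // StepBuilder now takes the JobRepository directly; the tasklet also needs a transaction manager
    return new StepBuilder("step1", jobRepository)
            .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED, transactionManager())
            .build();
}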
For more details on migration, see the Spring Migration 3 Guide.

SequenceInformation missing

I'm working with a Spring Boot application connecting to an AS400 database using the com.ibm.db2.jcc.DB2Driver driver with Spring Data JPA.
I use the org.hibernate.dialect.DB2Dialect dialect.
When I start the Application, I get the Error
Could not fetch the SequenceInformation from the database
com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-204, SQLSTATE=42704, SQLERRMC=SYSCAT.SEQUENCES;TABLE, DRIVER=4.26.14
This means the table SYSCAT.SEQUENCES is missing, which it is, because it's not needed.
The application works fine, but the error bothers me.
As far as I can see, SequenceInformation is only important when I generate an ID somewhere, which I don't do.
This application is only used to copy data from one place to another, so I only use JPA's @Id annotation but not the @GeneratedValue one.
Am I missing some use for the SequenceInformation?
Is there some way to turn off the fetching of SequenceInformation?
Those are my application properties:
spring:
  datasource:
    driver-class-name: com.ibm.db2.jcc.DB2Driver
    hikari.connection-test-query: values 1
    hikari.maximum-pool-size: 25
  jpa:
    database-platform: DB2Platform
    hibernate.ddl-auto: none
    open-in-view: false
    properties:
      hibernate:
        dll-auto: none
        dialect: org.hibernate.dialect.DB2Dialect
        naming-strategy: org.hibernate.cfg.ImprovedNamingStrategy
You are using the wrong dialect. Please use:
org.hibernate.dialect.DB2400Dialect
I have changed dialect from DB2Dialect to DB2400Dialect and it worked for me.
##spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.DB2Dialect
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.DB2400Dialect
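Since the question uses YAML configuration, the equivalent change there would presumably be:
spring:
  jpa:
    properties:
      hibernate:
        dialect: org.hibernate.dialect.DB2400Dialect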

Why does a MyBatis insert not return the last insert id?

I am using MyBatis to insert a record like this:
@Override
public void lockRecordHostory(OperateInfo operateInfo) {
    WalletLockedRecordHistory lockedRecordHistory = new WalletLockedRecordHistory();
    JSONObject jsonObject = JSON.parseObject(operateInfo.getParam(), JSONObject.class);
    lockedRecordHistory.setParam(operateInfo.getParam());
    int result = lockedRecordHistoryMapper.insertSelective(lockedRecordHistory);
    log.info("result: {}", result);
}
Why is the result value always 1 and not the last insert id? I turned on MyBatis debug logging, and it produced this output:
DEBUG [http-nio-11002-exec-7] - JDBC Connection [com.alibaba.druid.proxy.jdbc.ConnectionProxyImpl#33d1051f] will be managed by Spring
DEBUG [http-nio-11002-exec-7] - ==> Preparing: insert into wallet_locked_record_history ( locked_amount, created_time, updated_time, user_id, locked_type, operate_type, param ) values ( ?, ?, ?, ?, ?, ?, ? )
DEBUG [http-nio-11002-exec-7] - ==> Parameters: 1(Integer), 1566978734712(Long), 1566978734712(Long), 3114(Long), RED_ENVELOPE_BUMPED_LOCK(String), LOCKED(String), {"amount":1,"lockedType":"RED_ENVELOPE_BUMPED_LOCK","userId":3114}(String)
DEBUG [http-nio-11002-exec-7] - <== Updates: 1
DEBUG [http-nio-11002-exec-7] - ==> Preparing: SELECT LAST_INSERT_ID()
DEBUG [http-nio-11002-exec-7] - ==> Parameters:
DEBUG [http-nio-11002-exec-7] - <== Total: 1
DEBUG [http-nio-11002-exec-7] - Releasing transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession#420ad884]
Does the transaction affect the result?
It seems that the query that retrieves the value of the generated id uses a separate connection to MySQL.
This is from the MySQL documentation for the LAST_INSERT_ID function:
The ID that was generated is maintained in the server on a per-connection basis. This means that the value returned by the function to a given client is the first AUTO_INCREMENT value generated for the most recent statement affecting an AUTO_INCREMENT column by that client.
You are using a connection pool, and depending on its configuration it may happen that different queries are executed using different native JDBC Connection objects, that is, different connections to MySQL. So the second query returns the value that was generated (at some earlier time) on the wrong connection from the pool.
To overcome this you need to configure the connection pool so that it does not release the connection after each statement. You need to configure it so that the pool uses the same connection until the proxy connection is released by your code (that is, when MyBatis closes the connection at the end of the transaction).
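As an alternative (not part of the original answer, but a common way to sidestep the extra SELECT LAST_INSERT_ID() round trip), MyBatis can fetch the generated key through JDBC and write it back onto the parameter object; the mapper method still returns the affected row count, and the id is then read from the entity. A sketch of the mapper XML, where the column and property names are assumptions:
<!-- mapper XML sketch: column/property names are assumed, not taken from the question -->
<insert id="insertSelective" useGeneratedKeys="true" keyProperty="id" keyColumn="id">
  insert into wallet_locked_record_history (locked_amount, created_time, updated_time,
    user_id, locked_type, operate_type, param)
  values (#{lockedAmount}, #{createdTime}, #{updatedTime},
    #{userId}, #{lockedType}, #{operateType}, #{param})
</insert>
After the insert returns, the generated id would then be available via lockedRecordHistory.getId() rather than from the return value.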

Spring Boot JPA Schema Initialization

I want to initialise my datasource with a DDL script in my Spring Boot project (only during dev, of course). As mentioned in the docs, I set the spring.datasource.schema property to the DDL script, which is in src/main/resources/postgresql/define-schema.sql.
spring:
  profiles: dev
  datasource:
    platform: postgresql
    driver-class-name: org.postgresql.Driver
    url: jdbc:postgresql://localhost:5432/postgres
    username: postgres
    password: ****
    initialize: true
    schema: ./postgresql/define-schema.sql
    continue-on-error: false
  jpa:
    hibernate:
      ddl-auto: validate
    generate-ddl: false
    show-sql: true
But the script isn't executed. I also tried putting it on the classpath root and calling it schema.sql ... nothing happens.
The dev profile is selected, at least I see it in the log: The following profiles are active: dev
The application then fails on the JPA schema validation.
The only warning I get from Hibernate:
Found use of deprecated [org.hibernate.id.SequenceGenerator] sequence-based id generator; use org.hibernate.id.enhanced.SequenceStyleGenerator instead. See Hibernate Domain Model Mapping Guide for details.
But I don't think this has anything to do with the initialisation problem.
I've got spring-boot-starter-security in my dependencies but haven't configured it yet; could that be the source of the problem?
Does anybody recognise an obvious typo, a mistake, or anything else?
Looking forward to hearing from you!
Amp
Prefix the path to your SQL script with the classpath: prefix.
Example:
spring.datasource.schema=classpath:/postgresql/define-schema.sql
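In the YAML form used in the question, that would presumably be:
spring:
  datasource:
    schema: classpath:/postgresql/define-schema.sql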