MyBatis:
Not sure what's going wrong, but a basic selectOne is not returning results.
Mapper:
<select id="getBackLog" parameterType="string"
resultType="string">
select data_key from yfs_task_q where task_q_key = #{value}
</select>
Method:
dataKey = (String) session.selectOne("OMSWatchMappers.getBackLog", agent);
logger.debug("#Backlog=" + dataKey);
Logs:
02:01:34.786 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Openning JDBC Connection
02:01:35.890 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Created connection 2092843500.
02:01:35.961 [main] DEBUG c.b.o.r.OMSWatchMappers.getBackLog - ooo Using Connection [oracle.jdbc.driver.T4CConnection@7cbe41ec]
02:01:35.961 [main] DEBUG c.b.o.r.OMSWatchMappers.getBackLog - ==> Preparing: select data_key from yfs_task_q where task_q_key = ?
02:01:36.126 [main] DEBUG c.b.o.r.OMSWatchMappers.getBackLog - ==> Parameters: 201101070640191548867209(String)
02:01:36.249 [main] DEBUG OMSWatchDAOImpl - #Backlog=null
02:01:36.250 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Resetting autocommit to true on JDBC Connection [oracle.jdbc.driver.T4CConnection@7cbe41ec]
02:01:36.250 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Closing JDBC Connection [oracle.jdbc.driver.T4CConnection@7cbe41ec]
02:01:36.250 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Returned connection 2092843500 to pool.
This record exists in the database.
select data_key from yfs_task_q where task_q_key = '201101070640191548867209';
DATA_KEY
------------------------
201101070636011548866830
If I change the mapper to hard-code the parameter, it returns the result.
Mapper:
<select id="getBackLog" parameterType="string" resultType="string">
select data_key from yfs_task_q where task_q_key = '201101070640191548867209'
</select>
Logs:
02:38:52.746 [main] DEBUG c.b.o.r.OMSWatchMappers.getBackLog - ==> Preparing: select data_key from yfs_task_q where task_q_key = '201101070640191548867209'
02:38:52.942 [main] DEBUG c.b.o.r.OMSWatchMappers.getBackLog - ==> Parameters:
02:38:53.096 [main] DEBUG OMSWatchDAOImpl - #Backlog=201101070636011548866830
There is something basic I'm missing here. Any help pointing it out will be much appreciated.
Found the issue. The DB column was CHAR(30) and I was passing a String object to MyBatis, so the WHERE clause comparison was failing:
ColValue [201101070640191548867209 ] <> Input [201101070640191548867209]
Changed the query to:
select data_key from yfs_task_q where trim(task_q_key) = #{value}
Still need to do some more research to see if there is any flag or configuration I can set to force MyBatis to ignore the padded spaces in the column value (a padding alternative is sketched below).
If anyone knows of such a MyBatis config, please post it. But for the issue posted here, the cause is found and I can move on :)
Note: I'm not sure why these columns are CHAR and not VARCHAR; it's a third-party app table.
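For reference, a minimal sketch of one alternative to trim() (which, applied to the column, can stop Oracle from using an index on task_q_key): right-pad the Java-side value to the column width before binding, so it matches the blank-padded CHAR(30) value exactly. The padding approach is an assumption, not something from the original post.

// Sketch: pad the bind value to the CHAR(30) width so the blank-padded
// column value matches under Oracle's non-padded comparison semantics.
// Assumes task_q_key is CHAR(30), as found above.
String padded = String.format("%-30s", agent); // right-pad with spaces to 30 chars
dataKey = (String) session.selectOne("OMSWatchMappers.getBackLog", padded);
logger.debug("#Backlog=" + dataKey);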
Make sure that agent in your method is actually a String.
Passing id.trim() also worked for CHAR-based WHERE clause columns.
I am using MyBatis to insert a record like this:
@Override
public void lockRecordHostory(OperateInfo operateInfo) {
    WalletLockedRecordHistory lockedRecordHistory = new WalletLockedRecordHistory();
    JSONObject jsonObject = JSON.parseObject(operateInfo.getParam(), JSONObject.class);
    lockedRecordHistory.setParam(operateInfo.getParam());
    int result = lockedRecordHistoryMapper.insertSelective(lockedRecordHistory);
    log.info("result: {}", result); // {} placeholder so the value is actually logged
}
Why is the result value always 1 and not the last insert id? I turned on MyBatis debug logging, and it logged:
DEBUG [http-nio-11002-exec-7] - JDBC Connection [com.alibaba.druid.proxy.jdbc.ConnectionProxyImpl@33d1051f] will be managed by Spring
DEBUG [http-nio-11002-exec-7] - ==> Preparing: insert into wallet_locked_record_history ( locked_amount, created_time, updated_time, user_id, locked_type, operate_type, param ) values ( ?, ?, ?, ?, ?, ?, ? )
DEBUG [http-nio-11002-exec-7] - ==> Parameters: 1(Integer), 1566978734712(Long), 1566978734712(Long), 3114(Long), RED_ENVELOPE_BUMPED_LOCK(String), LOCKED(String), {"amount":1,"lockedType":"RED_ENVELOPE_BUMPED_LOCK","userId":3114}(String)
DEBUG [http-nio-11002-exec-7] - <== Updates: 1
DEBUG [http-nio-11002-exec-7] - ==> Preparing: SELECT LAST_INSERT_ID()
DEBUG [http-nio-11002-exec-7] - ==> Parameters:
DEBUG [http-nio-11002-exec-7] - <== Total: 1
DEBUG [http-nio-11002-exec-7] - Releasing transactional SqlSession [org.apache.ibatis.session.defaults.DefaultSqlSession@420ad884]
Does the transaction affect the results?
It seems that the query that retrieves the generated id uses a separate connection to MySQL.
This is from the MySQL documentation for the LAST_INSERT_ID function:
The ID that was generated is maintained in the server on a per-connection basis. This means that the value returned by the function to a given client is the first AUTO_INCREMENT value generated for the most recent statement affecting an AUTO_INCREMENT column by that client.
You are using a connection pool, and depending on its configuration different queries may be executed using different native JDBC Connection objects, that is, different connections to MySQL. So the second query returns a value that was generated (at some earlier time) on the wrong connection from the pool.
To overcome this you need to configure the connection pool so that it does not release the connection after each statement. Configure it so that the pool uses the same connection until the proxy connection is released by your code (that is, when MyBatis closes the connection at the end of the transaction).
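A minimal sketch of one way to get that single-connection behavior with Spring and mybatis-spring (the wrapper service class and the getId() getter are hypothetical, not from the question): demarcating a transaction with @Transactional keeps the INSERT and the follow-up SELECT LAST_INSERT_ID() on the same pooled connection. Note also that the int returned by insertSelective is the affected-row count, so logging it prints 1 regardless; the generated key is typically written back into the entity by the selectKey mapping.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class WalletLockService { // hypothetical wrapper class

    private final WalletLockedRecordHistoryMapper lockedRecordHistoryMapper;

    public WalletLockService(WalletLockedRecordHistoryMapper lockedRecordHistoryMapper) {
        this.lockedRecordHistoryMapper = lockedRecordHistoryMapper;
    }

    @Transactional // one connection for the INSERT and the SELECT LAST_INSERT_ID()
    public Long lockRecordHistory(WalletLockedRecordHistory history) {
        lockedRecordHistoryMapper.insertSelective(history); // returns the row count (always 1 here)
        return history.getId(); // hypothetical getter, populated by the selectKey mapping
    }
}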
I have a Spring Batch job which copies table data from one Oracle schema to another.
To do this, I have written a partitioned job.
For example:
Case A:
I have a table A with 100,000 rows and I split it into 100 steps of 1,000 rows each. All these inserts are done in parallel using a ThreadPoolTaskExecutor. When 4 steps failed due to some issue and I restarted the job, it successfully ran the 4 failed steps within seconds, as I expected.
Case B:
Say a table A containing 32 million rows is to be copied from source to destination.
So I split this job into steps of 1,000 rows each, creating 32,000 steps.
Out of these 32,000 steps, 4 fail due to some DB issue. When I try to restart this job, Spring Batch just hangs; I don't know what processing is going on behind the restart, it just does not do anything. I waited for 5 hours and then killed the job.
Can someone explain what happens behind a restart and how the total number of steps affects it? And what should I do to improve this speed?
Please let me know if any more info is needed.
Updates:
I was able to speed up Case B by implementing PartitionNameProvider and returning the names of the failed steps alone via getPartitionNames during restart (see the sketch below). I was then testing this restart with a larger number of failures, and I have one more case now.
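For reference, a minimal sketch of what that update describes (the class name, the row-range keys, and the failed-partition lookup are hypothetical; only the Partitioner/PartitionNameProvider contract is Spring Batch's): when the Partitioner also implements PartitionNameProvider, on restart Spring Batch asks getPartitionNames for the partition names instead of re-running partition, and only the returned names go through getLastStepExecution.

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

import org.springframework.batch.core.partition.support.PartitionNameProvider;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

public class RowRangePartitioner implements Partitioner, PartitionNameProvider {

    private final Supplier<Collection<String>> failedPartitionLookup; // hypothetical lookup

    public RowRangePartitioner(Supplier<Collection<String>> failedPartitionLookup) {
        this.failedPartitionLookup = failedPartitionLookup;
    }

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        // fresh run: one context per 1,000-row slice, keyed by a stable name
        Map<String, ExecutionContext> partitions = new HashMap<>();
        for (int i = 0; i < gridSize; i++) {
            ExecutionContext ctx = new ExecutionContext();
            ctx.putLong("minRow", (long) i * 1000 + 1);
            ctx.putLong("maxRow", (long) (i + 1) * 1000);
            partitions.put("partition" + i, ctx);
        }
        return partitions;
    }

    @Override
    public Collection<String> getPartitionNames(int gridSize) {
        // restart: hand back only the partitions that did not complete,
        // so getLastStepExecution runs for those names alone
        return failedPartitionLookup.get();
    }
}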
Case C:
If I have 20,000 steps and all 20,000 failed, then when I try to restart, getPartitionNames returns all 20,000 names. In this case I face the problem I mentioned above: the process hangs.
I tried to understand what was going on behind the job by enabling Spring debug logs (which took me so long to discover, but it was worth it). I saw a specific set of queries running:
2019-02-22 06:40:27 DEBUG ResourcelessTransactionManager:755 - Initiating transaction commit
2019-02-22 06:40:27 DEBUG ResourcelessTransactionManager:39 - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@5743a94e]
2019-02-22 06:40:27 DEBUG ResourcelessTransactionManager:367 - Creating new transaction with name [org.springframework.batch.core.repository.support.SimpleJobRepository.getLastStepExecution]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2019-02-22 06:40:27 DEBUG JdbcTemplate:682 - Executing prepared SQL query
2019-02-22 06:40:27 DEBUG JdbcTemplate:616 - Executing prepared SQL statement [SELECT JOB_EXECUTION_ID, START_TIME, END_TIME, STATUS, EXIT_CODE, EXIT_MESSAGE, CREATE_TIME, LAST_UPDATED, VERSION, JOB_CONFIGURATION_LOCATION from COPY_JOB_EXECUTION where JOB_INSTANCE_ID = ? order by JOB_EXECUTION_ID desc]
2019-02-22 06:40:27 DEBUG DataSourceUtils:110 - Fetching JDBC Connection from DataSource
2019-02-22 06:40:27 DEBUG DataSourceUtils:114 - Registering transaction synchronization for JDBC Connection
2019-02-22 06:40:27 DEBUG JdbcTemplate:682 - Executing prepared SQL query
2019-02-22 06:40:27 DEBUG JdbcTemplate:616 - Executing prepared SQL statement [SELECT JOB_EXECUTION_ID, KEY_NAME, TYPE_CD, STRING_VAL, DATE_VAL, LONG_VAL, DOUBLE_VAL, IDENTIFYING from COPY_JOB_EXECUTION_PARAMS where JOB_EXECUTION_ID = ?]
2019-02-22 06:40:27 DEBUG JdbcTemplate:682 - Executing prepared SQL query
2019-02-22 06:40:27 DEBUG JdbcTemplate:616 - Executing prepared SQL statement [SELECT STEP_EXECUTION_ID, STEP_NAME, START_TIME, END_TIME, STATUS, COMMIT_COUNT, READ_COUNT, FILTER_COUNT, WRITE_COUNT, EXIT_CODE, EXIT_MESSAGE, READ_SKIP_COUNT, WRITE_SKIP_COUNT, PROCESS_SKIP_COUNT, ROLLBACK_COUNT, LAST_UPDATED, VERSION from COPY_STEP_EXECUTION where JOB_EXECUTION_ID = ? order by STEP_EXECUTION_ID]
2019-02-22 06:40:27 DEBUG JdbcTemplate:682 - Executing prepared SQL query
2019-02-22 06:40:27 DEBUG JdbcTemplate:616 - Executing prepared SQL statement [SELECT STEP_EXECUTION_ID, STEP_NAME, START_TIME, END_TIME, STATUS, COMMIT_COUNT, READ_COUNT, FILTER_COUNT, WRITE_COUNT, EXIT_CODE, EXIT_MESSAGE, READ_SKIP_COUNT, WRITE_SKIP_COUNT, PROCESS_SKIP_COUNT, ROLLBACK_COUNT, LAST_UPDATED, VERSION from COPY_STEP_EXECUTION where JOB_EXECUTION_ID = ? order by STEP_EXECUTION_ID]
2019-02-22 06:40:30 DEBUG JdbcTemplate:682 - Executing prepared SQL query
2019-02-22 06:40:30 DEBUG JdbcTemplate:616 - Executing prepared SQL statement [SELECT SHORT_CONTEXT, SERIALIZED_CONTEXT FROM COPY_STEP_EXECUTION_CONTEXT WHERE STEP_EXECUTION_ID = ?]
2019-02-22 06:40:30 DEBUG JdbcTemplate:682 - Executing prepared SQL query
2019-02-22 06:40:30 DEBUG JdbcTemplate:616 - Executing prepared SQL statement [SELECT SHORT_CONTEXT, SERIALIZED_CONTEXT FROM COPY_JOB_EXECUTION_CONTEXT WHERE JOB_EXECUTION_ID = ?]
2019-02-22 06:40:30 DEBUG DataSourceUtils:327 - Returning JDBC Connection to DataSource
and so on...
I understand that Spring executes getLastStepExecution for each failed step one by one. But why is getLastStepExecution done this way? Is there any way to configure this, for example reading all the step executions in bulk before processing, so as to reduce the restart time?
Thanks in advance.
I'm configuring Rundeck to run using an external Oracle DB.
I set rundeck-config.properties as follows:
#loglevel.default is the default log level for jobs: ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=${rdeck.base}
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
grails.serverURL=http://${server.hostname}:${server.http.port}${server.web.context}
dataSource.dbCreate = update
#dataSource.url = jdbc:h2:file:${server.datastore.path};MVCC=true
dataSource.url = jdbc:oracle:thin:@//10.237.154.215:1521/Q12353AP10
dataSource.driverClassName = oracle.jdbc.driver.OracleDriver
dataSource.username = userrundeck5
dataSource.password = Rundeck_0001
#dataSource.dialect = org.hibernate.dialect.Oracle10gDialect
dataSource.dialect = com.rundeck.hibernate.RundeckOracleDialect
hibernate.globally_quoted_identifiers=true
# Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
During startup I see:
2018-06-14 18:00:29.682:INFO:/rundeck:main: Initializing Spring root WebApplicationContext
2018-06-14 18:00:39,914 [main] ERROR hbm2ddl.SchemaUpdate - HHH000388: Unsuccessful: alter table "auth_token" add constraint FK_aiqc20kpjasth5bxogsragoif foreign key (user_id) references "rduser"
2018-06-14 18:00:39,914 [main] ERROR hbm2ddl.SchemaUpdate - ORA-02275: such a referential constraint already exists in the table
2018-06-14 18:00:39,917 [main] ERROR hbm2ddl.SchemaUpdate - HHH000388: Unsuccessful: alter table "log_file_storage_request" add constraint FK_trqsa9so0qcv6okcd6fan88yf foreign key (execution_id) references "execution"
2018-06-14 18:00:39,917 [main] ERROR hbm2ddl.SchemaUpdate - ORA-02275: such a referential constraint already exists in the table
How can I fix this?
Thanks in advance,
Andrea
I solved it using the guide at the following page: http://support.rundeck.com/customer/en/portal/articles/2447898-oracle-script-
I am trying to connect to a Postgres DB using Sqoop (the end goal is to import tables directly into HDFS); however, I am facing the issue below.
sqoop list-tables --connect jdbc:postgresql://<server_name>:5432/aae_data --username my_username -P --verbose
Warning: /opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p2260.2452/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/24 00:13:40 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.9.1
18/04/24 00:13:40 DEBUG tool.BaseSqoopTool: Enabled debug logging.
Enter password:
18/04/24 00:13:44 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
18/04/24 00:13:44 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
18/04/24 00:13:44 DEBUG sqoop.ConnFactory: Trying ManagerFactory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
18/04/24 00:13:45 DEBUG oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop can be called by Sqoop!
18/04/24 00:13:45 DEBUG sqoop.ConnFactory: Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
18/04/24 00:13:45 DEBUG manager.DefaultManagerFactory: Trying with scheme: jdbc:postgresql:
18/04/24 00:13:45 INFO manager.SqlManager: Using default fetchSize of 1000
18/04/24 00:13:45 DEBUG sqoop.ConnFactory: Instantiated ConnManager org.apache.sqoop.manager.PostgresqlManager@56a6d5a6
18/04/24 00:13:45 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
Does anyone know what might be the issue here?
Do I need to specify a connection manager? If yes, how do I pass the jar file?
Thank you.
I am new to Spring Boot. What is the configuration setting for logging SQL parameter binding? For example, in the following line I should be able to see the values for all the '?' placeholders.
SELECT * FROM MyFeed WHERE feedId > ? AND isHidden = false ORDER BY feedId DESC LIMIT ?
Currently, I have the configuration as
spring.jpa.show-sql: true
Add these to application.properties and you should see the logs in detail.
logging.level.org.hibernate.SQL=debug
logging.level.org.hibernate.type.descriptor.sql=trace
In application.yml, add the following property:
logging:
  level:
    org:
      hibernate:
        type: trace
Add the following to print the formatted SQL in the console
spring:
  jpa:
    show-sql: true
    properties:
      hibernate:
        format_sql: true
Presume you are finding a student record by its id; you will be able to see the binding parameter as follows:
Hibernate: select student0_.id as id8_5_0_ from student student0_ where student0_.id=?
2020-07-30 12:20:44.005 TRACE 1328 --- [nio-8083-exec-8] o.h.type.descriptor.sql.BasicBinder : binding parameter [1] as [BIGINT] - [1]
This is just a hint to the underlying persistence provider, e.g. Hibernate, EclipseLink, etc. Without knowing which one you are using, it is difficult to say more.
For Hibernate you can configure logging to also output the bind parameters:
http://www.mkyong.com/hibernate/how-to-display-hibernate-sql-parameter-values-log4j/
which will give you output like:
Hibernate: INSERT INTO transaction (A, B) VALUES (?, ?)
13:33:07,253 DEBUG FloatType:133 - binding '10.0' to parameter: 1
13:33:07,253 DEBUG FloatType:133 - binding '1.1' to parameter: 2
An alternative solution which should work across all JPA providers is to use something like log4jdbc, which would give you the nicer output:
INSERT INTO transaction (A, B) values (10.0, 1.1);
See:
https://code.google.com/p/log4jdbc-log4j2/
Add these to the properties file:
# show SQL
spring.jpa.properties.hibernate.show_sql=true
# format SQL
spring.jpa.properties.hibernate.format_sql=true
# print parameter values in order
logging.level.org.hibernate.type.descriptor.sql=trace
For Spring Boot 3, which uses Hibernate 6, the above settings do not work.
Try:
logging:
  level:
    org.hibernate.orm.jdbc.bind: trace
See:
https://stackoverflow.com/a/74587796/2648077 and https://stackoverflow.com/a/74862954/2648077
For EclipseLink, add these lines in application.properties:
jpa.eclipselink.showsql=true
jpa.eclipselink.logging-level=FINE