I'm working with PostgreSQL 9.2 and c3p0 0.9.2.1. I created a connection customizer to disable autoCommit and set the transaction isolation level, but when I look up the DataSource through InitialContext, autoCommit is not disabled on the connection (log at the bottom). How can I disable auto-commit?
Connection customizer:
public class IsolationLevelConnectionCustomizer extends AbstractConnectionCustomizer {

    @Override
    public void onAcquire(Connection c, String parentDataSourceIdentityToken)
            throws Exception {
        super.onAcquire(c, parentDataSourceIdentityToken);
        System.out.println("Connection acquired, set autocommit off and repeatable read transaction mode.");
        c.setAutoCommit(false);
        c.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
    }
}
Class to retrieve the DataSource for the DAOs:
public class DAOAcquire {

    private ComboPooledDataSource m_cpdsDataSource = null;

    private static final String LOOKUP_CONNECT = "jdbc/mydb";

    public DAOAcquire() throws NamingException {
        InitialContext context = new InitialContext();
        m_cpdsDataSource = (ComboPooledDataSource) context.lookup(LOOKUP_CONNECT);
        if (m_cpdsDataSource != null) {
            try {
                System.out.println("Autocommit = " + String.valueOf(m_cpdsDataSource.getConnection().getAutoCommit()));
            } catch (SQLException e) {
                System.out.println("Could not get autocommit value : " + e.getMessage());
                e.printStackTrace();
            }
        }
    }

    public ComboPooledDataSource getComboPooledDataSource() {
        return m_cpdsDataSource;
    }

    /**
     * @return the jdbcTemplate
     * @throws NamingException
     */
    public JdbcTemplate getJdbcTemplate() throws NamingException {
        return new JdbcTemplate(m_cpdsDataSource);
    }

    /**
     * Commit transactions
     * @throws SQLException
     */
    public void commit() throws SQLException {
        if (m_cpdsDataSource != null) {
            m_cpdsDataSource.getConnection().commit();
        } else {
            throw new SQLException("Could not commit. Reason : Unable to connect to database, dataSource is null.");
        }
    }

    /**
     * Rollback all transactions to previous save point
     * @throws SQLException
     */
    public void rollback() throws SQLException {
        if (m_cpdsDataSource != null) {
            m_cpdsDataSource.getConnection().rollback();
        } else {
            throw new SQLException("Could not rollback. Reason : Unable to connect to database, dataSource is null.");
        }
    }
}
Log:
Connection acquired, set autocommit off and repeatable read transaction mode.
Connection acquired, set autocommit off and repeatable read transaction mode.
Connection acquired, set autocommit off and repeatable read transaction mode.
Autocommit = true
By default, PostgreSQL's auto-commit mode is disabled, so why does c3p0 enable it automatically? Should I set forceIgnoreUnresolvedTransactions to true?
EDIT: whenever I commit a transaction after retrieving the DataSource, I get this error:
org.postgresql.util.PSQLException: Cannot commit when autoCommit is enabled.
The JDBC spec states that "The default is for auto-commit mode to be enabled when the Connection object is created." That's a cross-DBMS default, regardless of how the database behaves in other contexts. JDBC programmers may rely on autoCommit being set unless they explicitly call setAutoCommit( false ). c3p0 honors this.
c3p0 allows ConnectionCustomizers to persistently override Connection defaults in the onAcquire() method when no single behavior is mandated by the spec. For example, the spec states that "The default transaction level for a Connection object is determined by the driver supplying the connection." So, for transactionIsolation, if you reset it in onAcquire(...), c3p0 will remember the default you have chosen and always restore the transactionIsolation to that default prior to checkout. However, c3p0 explicitly will not let you disable autoCommit in onAcquire(...) and have autoCommit be disabled by default: at the moment of check-out, c3p0 insists you have a spec-conformant Connection.
You can get the behavior you want by overriding the onCheckOut(...) method. The Connection is already checked out when onCheckOut(...) is called; you can do anything you want there, since c3p0 has exhausted its obligations to the specification gods at that point. If you want your clients to always see non-autoCommit Connections, call setAutoCommit( false ) in onCheckOut(...). But do beware that this renders your client code unportable: if you leave c3p0 and switch to a different DataSource, you'll need some other library-specific means of always disabling autoCommit, or you'll find that your application misbehaves. Because even for Postgres, JDBC Connections are autoCommit by default.
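For instance, a minimal sketch of such a customizer (the class name is illustrative; it reuses the settings from the onAcquire version in the question):

public class DisableAutoCommitConnectionCustomizer extends AbstractConnectionCustomizer {

    @Override
    public void onCheckOut(Connection c, String parentDataSourceIdentityToken)
            throws Exception {
        super.onCheckOut(c, parentDataSourceIdentityToken);
        // The Connection has already been checked out here, so c3p0 will not
        // restore autoCommit to true before handing it to the client.
        c.setAutoCommit(false);
        c.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
    }
}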
Note: The Connection properties whose values are not fixed by the spec and so can be persistently overridden in an onAcquire(...) method are catalog, holdability, transactionIsolation, readOnly, and typeMap.
p.s. don't set forceIgnoreUnresolvedTransactions to true. yuk.
Related
I am new to Spring Boot.
I have a Spring Boot application with a PostgreSQL database; I am using JdbcTemplate and I have 2 datasource connections.
The code works fine, but I observed in pgAdmin's Server Status dashboard that the connection pool (say 10 connections) sits in idle mode.
Earlier, while working with a single datasource, I observed the same thing, but I solved it by setting some properties in the application.properties file.
For example:
spring.datasource.hikari.minimum-idle=somevalue
spring.datasource.hikari.idle-timeout=somevalue
How do I achieve the same with multiple datasources?
properties file
spring.datasource.jdbcUrl=jdbc:postgresql://localhost:5432/stsdemo
spring.datasource.username=postgres
spring.datasource.password=********
spring.datasource.driver-class-name=org.postgresql.Driver
spring.seconddatasource.jdbcUrl=jdbc:postgresql://localhost:5432/postgres
spring.seconddatasource.username=postgres
spring.seconddatasource.password=********
spring.seconddatasource.driver-class-name=org.postgresql.Driver
DbConfig
@Configuration
public class DbConfig {

    @Bean(name = "db1")
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource firstDatasource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "jdbcTemplate1")
    public JdbcTemplate jdbcTemplate1(@Qualifier("db1") DataSource ds) {
        return new JdbcTemplate(ds);
    }

    @Bean(name = "db2")
    @ConfigurationProperties(prefix = "spring.seconddatasource")
    public DataSource secondDatasource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "jdbcTemplate2")
    public JdbcTemplate jdbcTemplate2(@Qualifier("db2") DataSource ds) {
        return new JdbcTemplate(ds);
    }
}
That's to be expected.
"Closing" a connection (i.e. calling close()) that was obtained from a pool only returns it to the pool. The pool will not immediately close the physical connection to the database, to avoid costly reconnects (which is the whole point of using a connection pool).
An "idle" connection is also no real problem in Postgres.
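For illustration, a minimal sketch (the class and method names are placeholders, not from the question's code): closing a pooled connection in try-with-resources only hands it back to the pool, and the physical session keeps showing up as "idle" in pgAdmin:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class PoolCloseDemo {

    // dataSource would be one of the pooled DataSources from DbConfig (db1 or db2).
    static void pingDatabase(DataSource dataSource) throws Exception {
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select 1")) {
            rs.next();
        }
        // The implicit con.close() above only returns the connection to the pool;
        // the underlying TCP connection to Postgres stays open (and idle).
    }
}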
If you have connections that stay "idle in transaction" for a long time - that would be a problem.
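If you still want to cap the idle connections per pool, one possible approach is sketched below. It assumes Spring Boot 2 with HikariCP (as the spring.datasource.hikari.* properties in the question suggest); the class name is illustrative and this is untested against the setup above. Each DataSource is built as a HikariDataSource, so the whole per-datasource prefix, including pool settings such as minimum-idle and idle-timeout, binds straight onto it:

import javax.sql.DataSource;

import com.zaxxer.hikari.HikariDataSource;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class PoolTuningDbConfig {

    // Binds spring.datasource.jdbcUrl, username, password, driver-class-name,
    // plus pool settings such as spring.datasource.minimum-idle and
    // spring.datasource.idle-timeout, directly onto the Hikari pool.
    @Bean(name = "db1")
    @Primary
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource firstDatasource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }

    // Same idea for the second pool, under the spring.seconddatasource prefix.
    @Bean(name = "db2")
    @ConfigurationProperties(prefix = "spring.seconddatasource")
    public DataSource secondDatasource() {
        return DataSourceBuilder.create().type(HikariDataSource.class).build();
    }
}

With this binding the pool settings live at the datasource prefix itself, e.g. spring.datasource.minimum-idle=somevalue and spring.seconddatasource.idle-timeout=somevalue, rather than under a .hikari sub-prefix.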
We use TransactionScope in our .NET Core application with a Postgres DB.
We have found strange behavior in one of our environments:
We have a root session with PID 66885 which blocks all other sessions, and this session is itself blocked by PID 0.
And if we kill this session, we get the same picture but with other sessions:
So it is just a cascade (or some kind of tree) of interlocks. And if we go to Locks we see a lot of exclusive locks on one tuple:
So we have a queue of exclusive locks on one company (tuple). The third lock waits for the second one. The second one waits for the first one. But what is the first lock waiting for? What is the strange process with PID = 0?
My assumption is:
We use .NET TransactionScope and the root session is waiting for the commit (confirmation),
but because the initial connection was aborted due to the timeout, the transaction can never receive the commit.
Does that sound right, and how could we avoid this?
We use Npgsql (3.2.5), .NET Core (2.2), and Postgres (9.6), and have the enlist=true option in our connection string.
UPD 1: added our transaction settings
public class DbTransaction : IDbTransaction, IDisposable
{
    private readonly TransactionScope _innerScope;

    private DbTransaction(TransactionScope scope)
    {
        this._innerScope = scope;
    }

    public static DbTransaction Begin(TimeSpan timeout)
    {
        IsolationLevel isolationLevel = IsolationLevel.ReadCommitted;
        return new DbTransaction(new TransactionScope(TransactionScopeOption.RequiresNew, new TransactionOptions()
        {
            IsolationLevel = isolationLevel,
            Timeout = timeout
        }, TransactionScopeAsyncFlowOption.Enabled));
    }

    public void Complete()
    {
        this._innerScope.Complete();
    }

    public void Dispose()
    {
        this._innerScope.Dispose();
    }
}

public class DbTransactionScope : IDbTransactionScope
{
    public IDbTransaction Begin(TimeSpan timeout)
    {
        return (IDbTransaction) DbTransaction.Begin(timeout);
    }

    public IDbTransaction Begin()
    {
        return this.Begin(TimeSpan.FromMinutes(3.0));
    }
}
I am using Spring Data JPA and inserting into 2 tables. When something goes wrong while adding into the second table, the first transaction is not rolling back;
the first insert is committing immediately after the insert.
@Override
@Transactional(propagation = Propagation.REQUIRED, rollbackFor = Exception.class)
public void addVehicleType(Map<String, Object> model) throws Exception {
    VehicleType vehicleType = null;
    VehicleStatus vehicleStatus = null;
    try {
        vehicleType = (VehicleType) model.get("vehicleType");
        vehicleStatus = (VehicleStatus) model.get("vehicleStatus");
        vehicleStatusRepository.save(vehicleStatus);
        vehicleTypeRepository.save(vehicleType);
    } catch (Exception e) {
        throw e;
    }
}
VehicleTypeRepository.java
public interface VehicleTypeRepository extends JpaRepository<VehicleType, Long> {

    @Override
    void delete(VehicleType role);

    long count();
}
If you use MySQL, you must use the InnoDB engine (MyISAM tables do not support transactions).
Second, the problem could occur if you are testing on a local PC.
Check the default_tmp_storage_engine=MYISAM setting in my.ini:
; The default storage engine that will be used when create new tables
; default-storage-engine=MYISAM
; New for MySQL 5.6 default_tmp_storage_engine if skip-innodb enable
default_tmp_storage_engine=MYISAM
The only exceptions that set a transaction to rollback state by default are the unchecked exceptions (like RuntimeException).
Please note that the Spring Framework's transaction infrastructure code will, by default, only mark a transaction for rollback in the case of runtime, unchecked exceptions; that is, when the thrown exception is an instance or subclass of RuntimeException. (Errors will also - by default - result in a rollback.) Checked exceptions that are thrown from a transactional method will not result in the transaction being rolled back.
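A hedged illustration of that rule (the service, method, and repository names are placeholders; the repository calls are commented out so the snippet stands alone):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RollbackDemoService {

    // By default a checked exception does NOT mark the transaction for rollback:
    // any save performed before the throw would still be committed.
    @Transactional
    public void checkedExceptionCommits() throws Exception {
        // vehicleStatusRepository.save(vehicleStatus);  // hypothetical repository call
        throw new Exception("checked exception - no rollback by default");
    }

    // An unchecked exception (a RuntimeException) does mark it for rollback.
    @Transactional
    public void runtimeExceptionRollsBack() {
        // vehicleStatusRepository.save(vehicleStatus);
        throw new IllegalStateException("unchecked exception - rollback");
    }

    // rollbackFor = Exception.class (as used in the question) makes checked exceptions roll back too.
    @Transactional(rollbackFor = Exception.class)
    public void checkedExceptionRollsBackWithRollbackFor() throws Exception {
        // vehicleStatusRepository.save(vehicleStatus);
        throw new Exception("checked exception - rollback because of rollbackFor");
    }
}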
Is it possible to pass the connection string to a method (instead of using App.config)?
(App.config is not an option because the location of the sdf file can change.)
My connection string (sdf) can change via an OpenFileDialog, for instance.
I have seen the hint concerning SQLExpress.
When debugging, the migration gets as far as IMigrationMetadata.Id.
Then: DataException - An exception occurred while initializing the database.
at
System.Data.Common.DbProviderServices.GetProviderManifestToken(DbConnection connection)
ProviderIncompatibleException - The provider did not return a ProviderManifestToken string.
{"An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. "}
at System.Data.Entity.ModelConfiguration.Utilities.DbProviderServicesExtensions.GetProviderManifestTokenChecked(DbProviderServices providerServices, DbConnection connection)
The default ctor applies a connection string that cannot be used:
Data Source=.\SQLEXPRESS;Initial Catalog=Namespace.AbacusContext;Integrated Security=True;MultipleActiveResultSets=True;Application Name=EntityFrameworkMUE
But when invoking
this.Database.Initialize( false ); //good connStr applied
within a ctor that takes arguments, the default ctor is invoked automatically (but not by me), and that one uses the impaired connection string. See the overview:
public AbacusContext( DbConnection connection, bool contextOwnsConnection )
    : base( connection, contextOwnsConnection )
{
    Database.SetInitializer( new MigrateDatabaseToLatestVersion<AbacusContext, Configuration>() );
    string goodConnStr = this.Database.Connection.ConnectionString; // was used for a breakpoint
    this.Database.Initialize( false );
}

public AbacusContext()
{
    string badConnStr = this.Database.Connection.ConnectionString;
    // SQLExpress and some other stuff can be seen now
}
I want to implement some DB cleanup at each startup (full schema deletion and recreation while in the dev environment).
I'm doing it in Global.beforeStart, and because it runs literally before start, I need to load the DB drivers myself.
The code is:
@Override
public void beforeStart(Application app) {
    System.out.println("IN beforeStart");
    try {
        Class.forName("org.postgresql.Driver");
        System.out.println("org.postgresql.Driver LOADED");
    } catch (ClassNotFoundException cnfe) {
        System.out.println("NOT LOADED org.postgresql.Driver");
        cnfe.printStackTrace();
    }
    ServerConfig config = new ServerConfig();
    config.setName("pgtest");
    DataSourceConfig postgresDb = new DataSourceConfig();
    postgresDb.setDriver("org.postgresql.Driver");
    postgresDb.setUsername("postgres");
    postgresDb.setPassword("postgrespassword");
    postgresDb.setUrl("postgres://postgres:postgrespassword@localhost:5432/TotoIntegration2");
    config.setDataSourceConfig(postgresDb);
    config.setDefaultServer(true);
    EbeanServer server = EbeanServerFactory.create(config);
    SqlQuery countTables = Ebean.createSqlQuery("select count(*) from pg_stat_user_tables;");
    Integer numTables = countTables.findUnique().getInteger("count");
    System.out.println("numTables = " + numTables);
    if (numTables > 2) {
        DbHelper.cleanSchema();
    }
    System.out.println("beforeStart EXECUTED");
    //DbHelper.cleanSchema();
}
Class.forName("org.postgresql.Driver") passed without exceptions, but then I'm getting:
com.avaje.ebeaninternal.server.lib.sql.DataSourceException: java.sql.SQLException: No suitable driver found for postgres
on the line EbeanServer server = EbeanServerFactory.create(config);
Why?
Use onStart instead; it is performed right after beforeStart but is the natural candidate for operating on the database (in production mode it doesn't wait for the first request). The javadoc for both:
/**
* Executed before any plugin - you can set-up your database schema here, for instance.
*/
public void beforeStart(Application app) {
}
/**
* Executed after all plugins, including the database set-up with Evolutions and the EBean wrapper.
* This is a good place to execute some of your application code to create entries, for instance.
*/
public void onStart(Application app) {
}
Note that you don't need to include additional DB config here; you can use your models the same way as you do in a controller.
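A sketch of how the cleanup from the question could be moved into onStart (assuming the default Ebean server is configured in conf/application.conf, so it is already registered at this point; DbHelper.cleanSchema() is the question's own helper):

import com.avaje.ebean.Ebean;
import com.avaje.ebean.SqlQuery;
import play.Application;
import play.GlobalSettings;

public class Global extends GlobalSettings {

    @Override
    public void onStart(Application app) {
        // By now the Ebean plugin has registered the default server from application.conf,
        // so no manual driver loading or ServerConfig is required.
        SqlQuery countTables = Ebean.createSqlQuery("select count(*) from pg_stat_user_tables");
        Integer numTables = countTables.findUnique().getInteger("count");
        if (numTables != null && numTables > 2) {
            DbHelper.cleanSchema();
        }
    }
}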