Connect JBoss 4 with Firebird 3 via Jaybird 2.2.14-jdk1.6 [duplicate] - jboss

As soon as my code gets to my while(rs.next()) loop, it produces the "ResultSet is closed" exception. What causes this exception and how can I correct it?
EDIT: I notice that my code nests one while(rs.next()) loop inside another (while(rs2.next())), with both result sets coming from the same DB. Is this an issue?

Sounds like you executed another statement on the same connection before traversing the result set from the first statement. If you're nesting the processing of two result sets from the same database, you're doing something wrong: the combination of those sets should be done on the database side.
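For instance, instead of iterating one result set per row of another, you can usually push the combination into a single JOIN query. A minimal sketch, reusing the table names from the question's code (the join column is an assumption, purely illustrative):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

static void printReportRows(Connection con) throws SQLException {
    // One statement, one result set: the database combines the two tables.
    String sql = "select c.id, r.descripcion "
               + "from reportes_clasificacion c "
               + "join reportes r on r.clasificacion_id = c.id"; // assumed join column
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery(sql)) {
        while (rs.next()) {
            System.out.println(rs.getInt(1) + " - " + rs.getString(2));
        }
    }
}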

This can have a number of causes, including the driver you are using.
a) Some drivers do not allow nested statements. If your driver supports JDBC 3.0, you should check the third parameter when creating the Statement object. For instance, I had the same problem with the Jaybird driver for Firebird, but the code worked fine with the Postgres driver. Then I added the third parameter to the createStatement method call, set it to ResultSet.HOLD_CURSORS_OVER_COMMIT, and the code started working fine for Firebird too.
static void testNestedRS() throws SQLException {
    Connection con = null;
    try {
        // GET A CONNECTION
        con = ConexionDesdeArchivo.obtenerConexion("examen-dest");
        String sql1 = "select * from reportes_clasificacion";
        Statement st1 = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.HOLD_CURSORS_OVER_COMMIT);
        ResultSet rs1 = null;
        try {
            // EXECUTE THE FIRST QUERY
            rs1 = st1.executeQuery(sql1);
            while (rs1.next()) {
                // THIS LINE WILL BE PRINTED JUST ONCE ON
                // SOME DRIVERS UNLESS YOU CREATE THE STATEMENT
                // WITH 3 PARAMETERS USING
                // ResultSet.HOLD_CURSORS_OVER_COMMIT
                System.out.println("ST1 Row #: " + rs1.getRow());
                String sql2 = "select * from reportes";
                Statement st2 = con.createStatement(
                        ResultSet.TYPE_SCROLL_INSENSITIVE,
                        ResultSet.CONCUR_READ_ONLY);
                // EXECUTE THE SECOND QUERY. THIS CLOSES THE FIRST
                // ResultSet ON SOME DRIVERS WITHOUT USING
                // ResultSet.HOLD_CURSORS_OVER_COMMIT
                st2.executeQuery(sql2);
                st2.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            if (rs1 != null) rs1.close();
            st1.close();
        }
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (con != null) con.close();
    }
}
b) There could be a bug in your code. Remember that you cannot reuse a Statement object: once you re-execute a query on the same statement object, all the open result sets associated with that statement are closed. Make sure you are not closing the statement.

Also, you can only have one result set open per statement. So if you are iterating through two result sets at the same time, make sure they were created by different statements. Opening a second result set on a statement implicitly closes the first.
http://java.sun.com/javase/6/docs/api/java/sql/Statement.html
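A minimal sketch of the correct pattern, with one Statement per open ResultSet (the table and column names are illustrative, not from the question):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

static void iterateTwoResultSets(Connection con) throws SQLException {
    // Two separate Statement objects, so both result sets can stay open.
    try (Statement stOuter = con.createStatement();
         Statement stInner = con.createStatement();
         ResultSet outer = stOuter.executeQuery("select id from parent_table");
         ResultSet inner = stInner.executeQuery("select id from child_table")) {
        while (outer.next() && inner.next()) {
            System.out.println(outer.getInt(1) + " / " + inner.getInt(1));
        }
    }
}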

The exception states that your ResultSet is closed. You should examine your code and look for every location where you issue a ResultSet.close() call. Also look for Statement.close() and Connection.close(). Almost certainly, one of them is being called before rs.next().

You may have closed either the Connection or Statement that made the ResultSet, which would lead to the ResultSet being closed as well.

A proper JDBC call should look something like this:
Connection conn = null;
Statement stmt = null;
ResultSet rs = null;
try {
    conn = DriverManager.getConnection(myUrl, "", "");
    stmt = conn.createStatement();
    rs = stmt.executeQuery(myQuery);
    while (rs.next()) {
        // process results
    }
} catch (SQLException e) {
    System.err.println("Got an exception!");
    System.err.println(e.getMessage());
} finally {
    // release your resources here; note that close() can itself throw
    try {
        if (rs != null) rs.close();
        if (stmt != null) stmt.close();
        if (conn != null) conn.close();
    } catch (SQLException e) {
        System.err.println("Got an exception closing resources!");
        System.err.println(e.getMessage());
    }
}
You can close the connection (or statement) only after you are done with the result set. The safest way is to do it in a finally block. However, close() can also throw SQLException, hence the second try-catch block.
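On Java 7 and later, try-with-resources handles this cleanup automatically and is less error-prone. A minimal sketch, reusing myUrl and myQuery from above:
try (Connection conn = DriverManager.getConnection(myUrl, "", "");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(myQuery)) {
    while (rs.next()) {
        // process results; all three resources are closed automatically,
        // in reverse order, even if an exception is thrown
    }
} catch (SQLException e) {
    System.err.println("Got an exception!");
    System.err.println(e.getMessage());
}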

I got the same error; everything was correct, but I was using the same Statement object both to execute queries and to update the database.
After separating them, i.e. using different Statement objects for updating and for querying, I resolved this error. In short, to get rid of this, do not use the same Statement object for both updating and executing queries.

Check whether you have declared the method where this code executes as static. If it is static, another thread may be resetting the ResultSet.

Make sure you have closed all your statements and result sets before calling rs.next(). A finally block guarantees this:
public boolean flowExists(Integer idStatusPrevious, Integer idStatus, Connection connection) {
    LogUtil.logRequestMethod();
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        ps = connection.prepareStatement(Constants.SCRIPT_SELECT_FIND_FLOW_STATUS_BY_STATUS);
        ps.setInt(1, idStatusPrevious);
        ps.setInt(2, idStatus);
        rs = ps.executeQuery();
        long count = 0L;
        if (rs.next()) {
            count = rs.getLong(1);
        }
        LogUtil.logSuccessMethod();
        return count > 0L;
    } catch (Exception e) {
        String errorMsg = String.format(Constants.ERROR_FINALIZED_METHOD,
                e.getMessage() != null ? e.getMessage() : "");
        LogUtil.logError(errorMsg, e);
        throw new FatalException(errorMsg);
    } finally {
        // close quietly; close() itself can throw SQLException
        try {
            if (rs != null) rs.close();
            if (ps != null) ps.close();
        } catch (SQLException e) {
            LogUtil.logError(e.getMessage(), e);
        }
    }
}

A "ResultSet is closed" exception can be thrown for two reasons.
1.) You have opened another connection to the database without closing all other connections.
2.) Your ResultSet may be returning no rows. If you try to read column data without first positioning the cursor with rs.next(), some drivers report the result set as closed.
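A defensive pattern for case 2, as a minimal sketch (the table name is illustrative):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

static long countRows(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("select count(*) from some_table")) {
        // Always position the cursor with rs.next() before reading columns;
        // an empty result set returns false here rather than throwing.
        return rs.next() ? rs.getLong(1) : 0L;
    }
}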

It also happens when using a ResultSet outside of a @Transactional method.
ScrollableResults results = getScrollableResults("select e from MyEntity e");
while (results.next()) {
    ...
}
results.close();
If MyEntity has eager relationships with other entities, the second time results.next() is invoked the "ResultSet is closed" exception is raised.
So if you use ScrollableResults on entities with eager relationships, make sure your method runs transactionally.
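A minimal sketch of the transactional variant, assuming a Spring-managed Hibernate Session and the Hibernate 5.x ScrollableResults accessors; MyEntity, the injection style, and the query are illustrative, and the original getScrollableResults helper is replaced with a plain session.createQuery(...).scroll() call:
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;
import org.springframework.transaction.annotation.Transactional;

public class MyEntityScanner {

    private final Session session; // assumed to be injected/managed elsewhere

    public MyEntityScanner(Session session) {
        this.session = session;
    }

    // The surrounding transaction keeps the underlying JDBC result set open
    // while we scroll, even when eager relationships trigger additional
    // queries on the same connection.
    @Transactional
    public void scanAll() {
        ScrollableResults results = session
                .createQuery("select e from MyEntity e")
                .scroll(ScrollMode.FORWARD_ONLY);
        try {
            while (results.next()) {
                MyEntity e = (MyEntity) results.get(0); // Hibernate 5.x accessor
                // ... process e ...
            }
        } finally {
            results.close();
        }
    }
}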

"result set is closed" happened to me when using tag <collection> in MyBatis nested (one-to-many) xml <select> statement
A Spring solution could be to have a (Java) Spring #Service layer, where class/methods calling MyBatis select-collection statements are annotated with
#Transactional(propagation = Propagation.REQUIRED)
annotations being:
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
this solution does not require to set the following datasource properties (i.e., in JBoss EAP standalone*.xml):
<xa-datasource-property name="downgradeHoldCursorsUnderXa">**true**\</xa-datasource-property>
<xa-datasource-property name="resultSetHoldability">**1**</xa-datasource-property>

Related

SqlBulkCopy with ObjectReader - Failed to convert parameter value from a String to a Int32

I am using SqlBulkCopy (.NET) with ObjectReader (FastMember) to perform an import from an XML-based file. I have added the proper column mappings.
In certain instances I get an error: Failed to convert parameter value from a String to a Int32.
I'd like to understand how to
1. Trace the actual table column which has failed
2. Get the "current" record on the ObjectReader
Sample code:
using (ObjectReader reader = genericReader.GetReader())
{
    try
    {
        sbc.WriteToServer(reader); // sbc is the SqlBulkCopy instance
        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
    }
}
Does the "ex" carry more information then just the error:
System.InvalidOperationException : The given value of type String from the data source cannot be converted to type int of the specified target column.
Simple Answer
The simple answer is no. One of the reasons .NET's SqlBulkCopy is so fast is that it does not log anything it does. You can't directly get any additional information from the SqlBulkCopy exception. However, that said, David Catriel has written an article about this and has delivered a possible solution, which you can read about in full here.
Even though this method may provide the answer you are looking for, I suggest only using the helper method when debugging, as it could have some performance impact if run regularly within your code.
Why Use A Work Around
The lack of logging definitely speeds things up, but when you are
pumping hundreds of thousands of rows and suddenly have a failure on
one of them because of a constraint, you're stuck. All the
SqlException will tell you is that something went wrong with a given
constraint (you'll get the constraint's name at least), but that's
about it. You're then stuck having to go back to your source, run
separate SELECT statements on it (or do manual searches), and find the
culprit rows on your own.
On top of that, it can be a very long and iterative process if you've
got data with several potential failures in it because SqlBulkCopy
will stop as soon as the first failure is hit. Once you correct that
one, you need to rerun the load to find the second error, etc.
Advantages:
Reports all possible errors that the SqlBulkCopy would encounter
Reports all culprit data rows, along with the exception that row would be causing
The entire thing is run in a transaction that is rolled back at the end, so no changes are committed.
Disadvantages:
For extremely large amounts of data it might take a couple of minutes.
This solution is reactive; i.e. the errors are not returned as part of the exception raised by your SqlBulkCopy.WriteToServer() process. Instead, this helper method is executed after the exception is raised to try and capture all possible errors along with their related data. This means that in case of an exception, your process will take longer to run than just running the bulk copy.
You cannot reuse the same DataReader object from the failed SqlBulkCopy, as readers are forward-only fire hoses that cannot be reset. You'll need to create a new reader of the same type (e.g. re-issue the original SqlCommand, recreate the reader based on the same DataTable, etc.).
Using the GetBulkCopyFailedData Method
private void TestMethod()
{
    // new code
    SqlConnection connection = null;
    SqlBulkCopy bulkCopy = null;
    DataTable dataTable = new DataTable();
    // load some sample data into the DataTable
    IDataReader reader = dataTable.CreateDataReader();
    try
    {
        connection = new SqlConnection("connection string goes here ...");
        connection.Open();
        bulkCopy = new SqlBulkCopy(connection);
        bulkCopy.DestinationTableName = "Destination table name";
        bulkCopy.WriteToServer(reader);
    }
    catch (Exception exception)
    {
        // loop through all inner exceptions to see if any relate to a constraint failure
        bool dataExceptionFound = false;
        Exception tmpException = exception;
        while (tmpException != null)
        {
            if (tmpException is SqlException
                && tmpException.Message.Contains("constraint"))
            {
                dataExceptionFound = true;
                break;
            }
            tmpException = tmpException.InnerException;
        }
        if (dataExceptionFound)
        {
            // call the helper method to document the errors and invalid data
            string errorMessage = GetBulkCopyFailedData(
                connection.ConnectionString,
                bulkCopy.DestinationTableName,
                dataTable.CreateDataReader());
            throw new Exception(errorMessage, exception);
        }
    }
    finally
    {
        if (connection != null && connection.State == ConnectionState.Open)
        {
            connection.Close();
        }
    }
}
GetBulkCopyFailedData() then opens a new connection to the database,
creates a transaction, and begins bulk copying the data one row at a
time. It does so by reading through the supplied DataReader and
copying each row into an empty DataTable. The DataTable is then bulk
copied into the destination database, and any exceptions resulting
from this are caught, documented (along with the DataRow that caused
it), and the cycle then repeats itself with the next row. At the end
of the DataReader we roll back the transaction and return the complete
error message. Fixing the problems in the data source should now be a
breeze.
The GetBulkCopyFailedData Method
/// <summary>
/// Build an error message with the failed records and their related exceptions.
/// </summary>
/// <param name="connectionString">Connection string to the destination database</param>
/// <param name="tableName">Table name into which the data will be bulk copied.</param>
/// <param name="dataReader">DataReader to bulk copy</param>
/// <returns>Error message with failed constraints and invalid data rows.</returns>
public static string GetBulkCopyFailedData(
    string connectionString,
    string tableName,
    IDataReader dataReader)
{
    StringBuilder errorMessage = new StringBuilder("Bulk copy failures:" + Environment.NewLine);
    SqlConnection connection = null;
    SqlTransaction transaction = null;
    SqlBulkCopy bulkCopy = null;
    DataTable tmpDataTable = new DataTable();
    try
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
        transaction = connection.BeginTransaction();
        bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.CheckConstraints, transaction);
        bulkCopy.DestinationTableName = tableName;
        // create a DataTable with the layout of the data
        DataTable dataSchema = dataReader.GetSchemaTable();
        foreach (DataRow row in dataSchema.Rows)
        {
            tmpDataTable.Columns.Add(new DataColumn(
                row["ColumnName"].ToString(),
                (Type)row["DataType"]));
        }
        // create an object array to hold the data being transferred into
        // tmpDataTable in the loop below
        object[] values = new object[dataReader.FieldCount];
        // loop through the source data
        while (dataReader.Read())
        {
            // clear the temp DataTable from which the single-record bulk copy will be done
            tmpDataTable.Rows.Clear();
            // get the data for the current source row
            dataReader.GetValues(values);
            // load the values into the temp DataTable
            tmpDataTable.LoadDataRow(values, true);
            // perform the bulk copy of the one row
            try
            {
                bulkCopy.WriteToServer(tmpDataTable);
            }
            catch (Exception ex)
            {
                // an exception was raised with the bulk copy of the current row.
                // The row that caused it is the only one in the temp DataTable,
                // so document it and add it to the error message.
                DataRow faultyDataRow = tmpDataTable.Rows[0];
                errorMessage.AppendFormat("Error: {0}{1}", ex.Message, Environment.NewLine);
                errorMessage.AppendFormat("Row data: {0}", Environment.NewLine);
                foreach (DataColumn column in tmpDataTable.Columns)
                {
                    errorMessage.AppendFormat(
                        "\tColumn {0} - [{1}]{2}",
                        column.ColumnName,
                        faultyDataRow[column.ColumnName].ToString(),
                        Environment.NewLine);
                }
            }
        }
    }
    catch (Exception ex)
    {
        throw new Exception(
            "Unable to document SqlBulkCopy errors. See inner exceptions for details.",
            ex);
    }
    finally
    {
        if (transaction != null)
        {
            transaction.Rollback();
        }
        if (connection.State != ConnectionState.Closed)
        {
            connection.Close();
        }
    }
    return errorMessage.ToString();
}

Check whether insertions were successful (MongoDB C# driver)

Suppose "doc" is some document I want to insert into a MongoDB collection and "collection" is the collection I am inserting the document into.
I have something like the following:
try
{
    WriteConcern wc = new WriteConcern();
    wc.W = 1;
    wc.Journal = true;
    WriteConcernResult wcResult = collection.Insert(doc, wc);
    if (!string.IsNullOrWhiteSpace(wcResult.ErrorMessage) || !wcResult.Ok)
    {
        return ErrorHandler(...);
    }
    else
    {
        return SuccessFunction(...);
    }
}
catch (Exception e)
{
    return e.Message;
}
Basically, if the insertion fails for any reason (other than hardware no longer working properly) I want to handle it (through the ErrorHandler function or the catch clause), while if it succeeds I want to call SuccessFunction.
My question: Is the above code sufficient for error checking purposes? In other words, will all failed insertions be caught, so that SuccessFunction is never called in those situations?
You don't even need to do any checking. collection.Insert will throw an exception if the write was not successful when you are using any write concern other than unacknowledged.
If you want to know whether an error occurred, you need to catch a WriteConcernException.

POSTGRESQL: Batch entry 0 SELECT

Hi, I have the following method which calls a stored function in PostgreSQL. The call works when I use a standard executeQuery() call but does not work when I start using batches. Any help will be appreciated.
public void addstuff3() throws Exception {
    Statement statement = null;
    ResultSet resultSet = null;
    Connection conn = null;
    try {
        // this will load the PostgreSQL driver; each DB has its own driver
        Class.forName("org.postgresql.Driver");
        // set up the connection with the DB
        conn = DriverManager
                .getConnection("jdbc:postgresql://localhost/newmydb?"
                        + "user=new_user&password=password");
        // statements allow us to issue SQL queries to the database
        statement = conn.createStatement();
        conn.setAutoCommit(false);
        statement.addBatch("SELECT ADDSTUFF('comp1', 'mdel1','power','PROPERTY','STRING','ON', '1396983600000', 'testing');");
        statement.addBatch("SELECT ADDSTUFF('comp2', 'mdel2','power','PROPERTY','STRING','ON', '1396983600000', 'testing');");
        conn.commit();
        statement.executeBatch();
    } catch (ClassNotFoundException | SQLException e) {
        throw e;
    } finally {
        conn.close();
        // resultSet.close();
        statement.close();
    }
}
This is the Error I get:
Batch entry 0 SELECT ADDSTUFF('comp1', 'mdel1','power','PROPERTY','STRING','ON', '1396983600000', 'testing') was aborted. Call getNextException to see the cause.
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2743)
at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:461)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1928)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2892)
at jdbc.testing.MySQLAccess.addIndicators3(MySQLAccess.java:125)
at jdbc.testing.JDBCTesting.main(JDBCTesting.java:21)
Any help? I am using JDBC and PostgreSQL.
OK, thanks to @Dave I found that
e.getNextException()
prints:
A result was returned when none was expected
I should not return a value. Works!
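For reference, a minimal sketch of how to surface that hidden cause by walking the chained exceptions after executeBatch() fails (the wrapper method is illustrative):
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

static void runBatch(Connection conn, Statement statement) throws SQLException {
    try {
        statement.executeBatch();
        conn.commit();
    } catch (BatchUpdateException e) {
        // The driver hides the real cause in the chained exceptions.
        SQLException next = e.getNextException();
        while (next != null) {
            System.err.println(next.getMessage()); // e.g. "A result was returned when none was expected"
            next = next.getNextException();
        }
        throw e;
    }
}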

Hibernate stalling when attempting to executing JPA query on Sybase

I want to perform the following select using JPA:
select * from permissions_table where permissions.role in ('Role1', 'Role2')
What I have so far looks like this:
protected Set<String> getPermissions(Connection conn, String username, Collection<String> roleNames) throws SQLException {
    PreparedStatement ps = null;
    Set<String> permissions = new LinkedHashSet<String>();
    try {
        EntityManager em = entityManagerFactory.createEntityManager();
        CriteriaBuilder builder = em.getCriteriaBuilder();
        CriteriaQuery<HierarchicalPermission> criteria = builder.createQuery(HierarchicalPermission.class);
        Root<HierarchicalPermission> permission = criteria.from(HierarchicalPermission.class);
        criteria.select(permission).where(permission.get("Role").in(roleNames));
        List<HierarchicalPermission> hPermissions = em.createQuery(criteria).getResultList();
        for (HierarchicalPermission p : hPermissions) {
            System.out.println("Permission (" + p.getRole() + ")");
        }
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    } finally {
        JdbcUtils.closeStatement(ps);
    }
    return permissions;
}
When I step over this line:
List<HierarchicalPermission> hPermissions = em.createQuery(criteria).getResultList();
I see the following in my Eclipse output window:
Hibernate: select hierarchic0_.iIdentity as iIdentity0_, hierarchic0_.timestamp as timestamp0_, hierarchic0_.szRole as szRole0_, hierarchic0_.szDescription as szDescri4_0_, hierarchic0_.iResource as iResource0_ from occ.ROLE_PERMISSIONS hierarchic0_ where hierarchic0_.szRole in (?)
and the Eclipse debugger appears to stall. At this point, I can only pause or stop execution, as shown in this screen shot.
What does this mean? Is this not a valid representation of the above query?
The database was locked by Sybase Interactive SQL on another machine, so Hibernate was stalling while attempting to execute the query. One would think that Hibernate would throw some sort of exception instead of simply stalling when it encounters resource contention, but apparently this is not the case.
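One hedged mitigation (my suggestion, not from the original answer): set a JPA query timeout so the call fails fast instead of blocking indefinitely. Whether the hint is honored depends on the provider and driver.
// javax.persistence.query.timeout is a standard JPA hint, in milliseconds
List<HierarchicalPermission> hPermissions = em.createQuery(criteria)
        .setHint("javax.persistence.query.timeout", 5000)
        .getResultList();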

Monotouch data sync - why does my code sometimes cause sqlite errors?

I have the following calls (actually a few more than this - it's the overall method that's in question here):
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshEventData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshLocationData);
ThreadPool.QueueUserWorkItem(Database.Instance.RefreshActData);
The first point is: is it OK to call methods that invoke WCF services like this? I tried daisy-chaining them and it was a mess.
An example of one of the refresh methods being called above is (they all follow the same pattern, just call different services and populate different tables):
public void RefreshEventData (object state)
{
    Console.WriteLine ("in RefreshEventData");
    var eservices = new AppServicesClient (new BasicHttpBinding (), new EndpointAddress (this.ServciceUrl));
    // default the delta to an old date so that if this is the first run we get everything
    var eventsLastUpdated = DateTime.Now.AddDays (-100);
    try {
        eventsLastUpdated = (from s in GuideStar.Data.Database.Main.Table<GuideStar.Data.Event> ()
                             orderby s.DateUpdated descending
                             select s).ToList ().FirstOrDefault ().DateUpdated;
    } catch (Exception ex1) {
        Console.WriteLine (ex1.Message);
    }
    try {
        eservices.GetAuthorisedEventsWithExtendedDataAsync (this.User.Id, this.User.Password, eventsLastUpdated);
    } catch (Exception ex) {
        Console.WriteLine ("error updating events: " + ex.Message);
    }
    eservices.GetAuthorisedEventsWithExtendedDataCompleted += delegate(object sender, GetAuthorisedEventsWithExtendedDataCompletedEventArgs e) {
        try {
            List<Event> newEvents = e.Result.ToList ();
            GuideStar.Data.Database.Main.EventsAdded = e.Result.Count ();
            lock (GuideStar.Data.Database.Main) {
                GuideStar.Data.Database.Main.Execute ("BEGIN");
                foreach (var s in newEvents) {
                    GuideStar.Data.Database.Main.InsertOrUpdateEvent (new GuideStar.Data.Event {
                        Name = s.Name,
                        DateAdded = s.DateAdded,
                        DateUpdated = s.DateUpdated,
                        Deleted = s.Deleted,
                        StartDate = s.StartDate,
                        Id = s.Id,
                        Lat = s.Lat,
                        Long = s.Long
                    });
                }
                GuideStar.Data.Database.Main.Execute ("COMMIT");
                LocationsCount = 0;
            }
        } catch (Exception ex) {
            Console.WriteLine ("error InsertOrUpdateEvent " + ex.Message);
        } finally {
            OnDatabaseUpdateStepCompleted (EventArgs.Empty);
        }
    };
}
OnDatabaseUpdateStepCompleted just increments an updateComplete counter when it's called, and once it knows that all of the services have come back OK it removes the waiting spinner and the app carries on.
This works OK the first time around, but then sometimes it doesn't, with one of these: http://monobin.com/__m6c83107d
I think the first question is: is all this OK? I'm not used to using threading and locks, so I am wandering into new ground here. Is using QueueUserWorkItem like this OK? Should I even be using lock before doing the bulk insert/update? An example of which:
public void InsertOrUpdateEvent(Event festival){
    try {
        if (!festival.Deleted) {
            Main.Insert(festival, "OR REPLACE");
        } else {
            Main.Delete<Event>(festival);
        }
    } catch (Exception ex) {
        Console.WriteLine("InsertOrUpdateEvent failed: " + ex.Message);
    }
}
Then the next question is: what am I doing wrong that is causing these SQLite issues?
SQLite is not thread-safe.
If you want to access SQLite from more than one thread, you must take a lock before you access any SQLite-related structures.
Like this:
lock (db) {
    // Do your query or insert here
}
Sorry, no specific answers, but some thoughts:
Is SQLite even thread-safe? I'm not sure - it may be that it's not (or the wrapper isn't). Can you lock on a more global object, so no two threads are inserting at the same time?
It's possible that the MonoTouch GC is getting a little overenthusiastic and releasing your string before it's been used. Maybe keep a local reference to it around during the insert? I've had this happen with view controllers, where I had them in an array (tab controllers, specifically), but if I didn't keep a member variable around with the reference, they got GC'ed.
Could you get the data in a threaded manner, then queue everything up and insert it in a single thread? At least as a test, anyway.