Hey, I need some help with how to use timeouts in Flutter correctly. First of all, to explain what the main goal is:
I want to receive data from my Firebase Realtime Database, but I need to secure this API request with a timeout of 15 seconds. So after 15 seconds the timeout should throw an exception, which is returned to the user's frontend as an alert explaining the timeout.
So I used the simple way to call timeouts on Future functions:
This function should only check whether an ID exists on some Firebase node or not:
Inside the class where I have declared this function I also have an instance called timeoutControl; this is a class which contains a duration and some reasons for the exceptions.
Future<bool> isUserCheckedIn(String oid, String maybeCheckedInUserIdentifier, String onGateId) async {
  try {
    databaseReference = _firebaseDatabase.ref("Boarding").child(oid).child(onGateId);
    final snapshot = await databaseReference.get().timeout(
        Duration(seconds: timeoutControl.durationForTimeOutInSec),
        onTimeout: () => timeoutControl.onEppTimeoutForTask());
    if (snapshot.hasChild(maybeCheckedInUserIdentifier)) {
      return true;
    } else {
      return false;
    }
  } catch (exception) {
    return false;
  }
}
The CustomTimeouts class that the timeoutControl instance comes from:
class CustomTimeouts {
  int durationForTimeOutInSec = 15; // How many seconds to wait until we throw a timeout exception

  CustomTimeouts();

  // TODO: Implement the exception reasons here later ...
  onEppTimeoutForUpload() {
    throw Exception("Some reason ...");
  }

  onEppTimeoutForTask() {
    throw Exception("Some reason ...");
  }

  onEppTimeoutForDownload() {
    throw Exception("Some reason ...");
  }
}
So, as you can see, I tried to use the implementation above. This works fine ... but sometimes I have to fight with unexplainable behavior -_-. Let me try to describe in which cases the problem shows up:
Inside the frontend class I make this call:
bool isUserCheckedIn = await service.isUserCheckedIn(placeIdentifier, userId, gateId);
Map<String, dynamic> data = {"gateIdActive" : isUserCheckedIn};
/*
The response here is a custom transaction handler which contains an error or a returned param,
etc., so this isn't relevant for you ...
*/
_gateService.updateGate(placeIdentifier, gateId, data).then((response) {
  if (response.hasError()) {
    setState(() {
      EppDialog.showErrorToast(response.getErrorMessage()); // Shows an error message
      isSendButtonDiabled = false; /* Reset button state */
    });
  } else {
    // Create a gate process here ...
    createGateEntrys(); // <-- If the update inside the closure was successful we also handle
                        // some other data inside the RTDB for other reasons here ...
  }
});
IMPORTANT for you to know: I am going to use the returned boolean value from this function call to update some other data, which will be pushed and uploaded to another node location in the RTDB for other reasons. And if this was also successful, the application goes on to update some entries inside the RTDB --> createGateEntrys() <-- This function is called last; it is marked as an async function and called within the closure's context without an await statement.
The data inside my Firebase RTDB:
"GateCheckIns" / "4mrithabdaofgnL39238nH" (the place identifier) / "NFdxcfadaies45a" (the gate identifier) / "nHz2mhagadzadzgadHjoeua334": 1 (the key is the ID of a user who is checked in)
On real devices this always works without any problems. But whether it runs on a real device or the simulator can't really be the reason why I am facing this problem now. Sometimes, inside the simulator, this function always returns false, no matter whether the current user's identifier is inside the child nodes or not. I then realized the timeout is always triggered immediately, right after 1-2 seconds, because the exception was always one of those I throw from my CustomTimeouts class, i.e. from the function passed to the .timeout(duration, onTimeout: () => ...) call. I couldn't figure it out because, as I said, on real devices I was not facing this problem.
I hope I was able to explain the problem. It's a bit complicated, I know, but it is important for me that someone can explain what I should pay attention to when using timeouts in this style.
(This is my first question here on Stack Overflow :) )
As soon as my code gets to my while(rs.next()) loop it produces the ResultSet is closed exception. What causes this exception and how can I correct for it?
EDIT: I notice in my code that I am nesting a while(rs.next()) loop with another while(rs2.next()) loop, both result sets coming from the same DB. Is this an issue?
Sounds like you executed another statement in the same connection before traversing the result set from the first statement. If you're nesting the processing of two result sets from the same database, you're doing something wrong. The combination of those sets should be done on the database side.
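To make "combine them on the database side" concrete, here is a rough sketch (not from the original post) that replaces the nested loops with a single joined query, so only one ResultSet is ever open. The table names are borrowed from the answer below, and the join column (clasificacion_id) is an assumption for illustration:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: one joined query instead of one extra query per row of the outer ResultSet.
static void printJoined(Connection con) throws SQLException {
    String sql = "select c.id, r.id from reportes_clasificacion c "
               + "join reportes r on r.clasificacion_id = c.id";
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery(sql)) {
        while (rs.next()) {
            System.out.println("clasificacion=" + rs.getLong(1) + ", reporte=" + rs.getLong(2));
        }
    }
}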
This could be caused by a number of reasons, including the driver you are using.
a) Some drivers do not allow nested statements. Depending on whether your driver supports JDBC 3.0, you should check the third parameter when creating the Statement object. For instance, I had the same problem with the JayBird driver for Firebird, but the code worked fine with the Postgres driver. Then I added the third parameter to the createStatement method call and set it to ResultSet.HOLD_CURSORS_OVER_COMMIT, and the code started working fine for Firebird too.
static void testNestedRS() throws SQLException {
    Connection con = null;
    try {
        // GET A CONNECTION
        con = ConexionDesdeArchivo.obtenerConexion("examen-dest");
        String sql1 = "select * from reportes_clasificacion";
        Statement st1 = con.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.HOLD_CURSORS_OVER_COMMIT);
        ResultSet rs1 = null;
        try {
            // EXECUTE THE FIRST QRY
            rs1 = st1.executeQuery(sql1);
            while (rs1.next()) {
                // THIS LINE WILL BE PRINTED JUST ONCE ON
                // SOME DRIVERS UNLESS YOU CREATE THE STATEMENT
                // WITH 3 PARAMETERS USING
                // ResultSet.HOLD_CURSORS_OVER_COMMIT
                System.out.println("ST1 Row #: " + rs1.getRow());
                String sql2 = "select * from reportes";
                Statement st2 = con.createStatement(
                        ResultSet.TYPE_SCROLL_INSENSITIVE,
                        ResultSet.CONCUR_READ_ONLY);
                // EXECUTE THE SECOND QRY. THIS CLOSES THE FIRST
                // ResultSet ON SOME DRIVERS WITHOUT USING
                // ResultSet.HOLD_CURSORS_OVER_COMMIT
                st2.executeQuery(sql2);
                st2.close();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            rs1.close();
            st1.close();
        }
    } catch (SQLException e) {
    } finally {
        con.close();
    }
}
b) There could be a bug in your code. Remember that you cannot reuse a Statement object: once you re-execute a query on the same Statement, all open ResultSets associated with it are closed. Make sure you are not closing the statement.
Also, you can only have one ResultSet open per Statement. So if you are iterating through two result sets at the same time, make sure they were executed on different Statement objects. Opening a second result set on one Statement will implicitly close the first (see the sketch after the link below).
http://java.sun.com/javase/6/docs/api/java/sql/Statement.html
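To illustrate point (b) with a hedged sketch (the query strings and the con connection are placeholders, not from the original code): issuing a second query on the same Statement implicitly closes its first ResultSet, whereas using a separate Statement keeps both open.
// Pitfall: rs1 is implicitly closed the moment st executes a second query.
Statement st = con.createStatement();
ResultSet rs1 = st.executeQuery("select id from reportes_clasificacion");
ResultSet rs2 = st.executeQuery("select id from reportes"); // closes rs1
// rs1.next() here would throw "ResultSet is closed"

// Fix: one Statement per ResultSet you need open at the same time.
Statement stA = con.createStatement();
Statement stB = con.createStatement();
ResultSet rsA = stA.executeQuery("select id from reportes_clasificacion");
ResultSet rsB = stB.executeQuery("select id from reportes"); // rsA stays open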
The exception states that your result set is closed. You should examine your code and look for all locations where you issue a ResultSet.close() call. Also look for Statement.close() and Connection.close(). For sure, one of them gets called before rs.next() is called.
You may have closed either the Connection or Statement that made the ResultSet, which would lead to the ResultSet being closed as well.
A proper JDBC call should look something like this:
try {
    Connection conn = null;
    Statement stmt = null;
    ResultSet rs = null;
    try {
        conn = DriverManager.getConnection(myUrl, "", "");
        stmt = conn.createStatement();
        rs = stmt.executeQuery(myQuery);
        while (rs.next()) {
            // process results
        }
    } catch (SQLException e) {
        System.err.println("Got an exception! ");
        System.err.println(e.getMessage());
    } finally {
        // you should release your resources here
        if (rs != null) {
            rs.close();
        }
        if (stmt != null) {
            stmt.close();
        }
        if (conn != null) {
            conn.close();
        }
    }
} catch (SQLException e) {
    System.err.println("Got an exception! ");
    System.err.println(e.getMessage());
}
You should close the connection (or statement) only after you have finished reading from the result set. The safest way is to do it in a finally block. However, close() can itself throw a SQLException, hence the outer try-catch block.
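As a side note (not part of the original answer): on Java 7 and later the same pattern can be written with try-with-resources, which closes the ResultSet, Statement, and Connection automatically in reverse order, even when close() itself throws; myUrl and myQuery are the same placeholders as above.
try (Connection conn = DriverManager.getConnection(myUrl, "", "");
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(myQuery)) {
    while (rs.next()) {
        // process results
    }
} catch (SQLException e) {
    System.err.println("Got an exception!");
    System.err.println(e.getMessage());
}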
I got the same error; everything was correct, but I was using the same Statement object both to execute and to update the database.
After separating them, i.e. using different Statement objects for the update and for the query, I resolved this error. In short: do not use the same Statement object for both updating and executing the query.
Check whether you have declared the method where this code is executing as static. If it is static there may be some other thread resetting the ResultSet.
Make sure you have closed all your statements and result sets before calling rs.next(). A finally block guarantees this:
public boolean flowExists(Integer idStatusPrevious, Integer idStatus, Connection connection) {
    LogUtil.logRequestMethod();
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        ps = connection.prepareStatement(Constants.SCRIPT_SELECT_FIND_FLOW_STATUS_BY_STATUS);
        ps.setInt(1, idStatusPrevious);
        ps.setInt(2, idStatus);
        rs = ps.executeQuery();
        Long count = 0L;
        if (rs != null) {
            while (rs.next()) {
                count = rs.getLong(1);
                break;
            }
        }
        LogUtil.logSuccessMethod();
        return count > 0L;
    } catch (Exception e) {
        String errorMsg = String
            .format(Constants.ERROR_FINALIZED_METHOD, (e.getMessage() != null ? e.getMessage() : ""));
        LogUtil.logError(errorMsg, e);
        throw new FatalException(errorMsg);
    } finally {
        // close quietly; close() itself can throw SQLException
        try {
            if (rs != null) {
                rs.close();
            }
            if (ps != null) {
                ps.close();
            }
        } catch (SQLException e) {
            LogUtil.logError(e.getMessage(), e);
        }
    }
}
A ResultSetClosedException could be thrown for two reasons.
1.) You have opened another connection to the database without closing all other connections.
2.) Your ResultSet may be returning no values. So when you try to access data from the ResultSet, Java will throw a ResultSetClosedException.
It also happens when using a ResultSet without being in a @Transactional method.
ScrollableResults results = getScrollableResults("select e from MyEntity e");
while (results.next()) {
...
}
results.close();
If MyEntity has eager relationships with other entities, the second time results.next() is invoked the "ResultSet is closed" exception is raised.
So if you use ScrollableResults on entities with eager relationships, make sure your method runs transactionally, as in the sketch below.
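A minimal sketch of what "runs transactionally" can look like with Spring and Hibernate; the SessionFactory wiring and the MyEntityScroller class name are assumptions, and MyEntity is the entity from the snippet above. The only point is that the scrolling happens inside a @Transactional method, so the underlying ResultSet stays open:
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyEntityScroller {

    private final SessionFactory sessionFactory;

    public MyEntityScroller(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Transactional // keeps the connection and cursor open while we scroll
    public void scrollAll() {
        Session session = sessionFactory.getCurrentSession();
        ScrollableResults results =
                session.createQuery("select e from MyEntity e").scroll(ScrollMode.FORWARD_ONLY);
        try {
            while (results.next()) {
                // process the current row; eager relationships can now be
                // initialized without hitting "ResultSet is closed"
            }
        } finally {
            results.close();
        }
    }
}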
"result set is closed" happened to me when using tag <collection> in MyBatis nested (one-to-many) xml <select> statement
A Spring solution could be to have a (Java) Spring @Service layer, where the classes/methods calling MyBatis select-collection statements are annotated with
@Transactional(propagation = Propagation.REQUIRED)
the annotations being:
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
This solution does not require setting the following datasource properties (e.g., in JBoss EAP standalone*.xml):
<xa-datasource-property name="downgradeHoldCursorsUnderXa">true</xa-datasource-property>
<xa-datasource-property name="resultSetHoldability">1</xa-datasource-property>
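For illustration, a minimal sketch of such a Spring service layer; OrderMapper, Order, and selectOrdersWithItems are assumed names standing in for whatever MyBatis mapper backs the nested <select>/<collection> statement:
import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderMapper orderMapper; // MyBatis mapper interface (assumed name)

    public OrderService(OrderMapper orderMapper) {
        this.orderMapper = orderMapper;
    }

    // Running the select-collection statement inside a transaction keeps the
    // cursor open while MyBatis populates the nested (one-to-many) collections.
    @Transactional(propagation = Propagation.REQUIRED)
    public List<Order> loadOrdersWithItems() {
        return orderMapper.selectOrdersWithItems();
    }
}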
I am using SqlBulkCopy (.NET) with ObjectReader (FastMember) to perform an import from an XML-based file. I have added the proper column mappings.
In certain instances I get an error: Failed to convert parameter value from a String to a Int32.
I'd like to understand how to
1. Trace the actual table column which has failed
2. Get the "current" record from the ObjectReader
sample code:
using (ObjectReader reader = genericReader.GetReader())
{
try
{
sbc.WriteToServer(reader); //sbc is SqlBulkCopy instance
transaction.Commit();
}
catch (Exception ex)
{
transaction.Rollback();
}
}
Does the "ex" carry more information than just the error:
System.InvalidOperationException : The given value of type String from the data source cannot be converted to type int of the specified target column.
Simple Answer
The simple answer is no. One of the reasons .NET's SqlBulkCopy is so fast is that it does not log anything it does. You can't directly get any additional information from .NET's SqlBulkCopy exception. However, that said, David Catriel has written an article about this and delivered a possible solution, which you can read about in full here.
Even though this method may provide the answer you are looking for, I suggest only using the helper method when debugging, as it could quite possibly have some performance impact if run consistently within your code.
Why Use A Work Around
The lack of logging definitely speeds things up, but when you are
pumping hundreds of thousands of rows and suddenly have a failure on
one of them because of a constraint, you're stuck. All the
SqlException will tell you is that something went wrong with a given
constraint (you'll get the constraint's name at least), but that's
about it. You're then stuck having to go back to your source, run
separate SELECT statements on it (or do manual searches), and find the
culprit rows on your own.
On top of that, it can be a very long and iterative process if you've
got data with several potential failures in it because SqlBulkCopy
will stop as soon as the first failure is hit. Once you correct that
one, you need to rerun the load to find the second error, etc.
advantages:
Reports all possible errors that the SqlBulkCopy would encounter
Reports all culprit data rows, along with the exception that row would be causing
The entire thing is run in a transaction that is rolled back at the end, so no changes are committed.
disadvantages:
For extremely large amounts of data it might take a couple of minutes.
This solution is reactive; i.e. the errors are not returned as part of the exception raised by your SqlBulkCopy.WriteToServer() process. Instead, this helper method is executed after the exception is raised to try and capture all possible errors along with their related data. This means that in case of an exception, your process will take longer to run than just running the bulk copy.
You cannot reuse the same DataReader object from the failed SqlBulkCopy, as readers are forward only fire hoses that cannot be reset. You'll need to create a new reader of the same type (e.g. re-issue the original SqlCommand, recreate the reader based on the same DataTable, etc).
Using the GetBulkCopyFailedData Method
private void TestMethod()
{
    // new code
    SqlConnection connection = null;
    SqlBulkCopy bulkCopy = null;
    DataTable dataTable = new DataTable();

    // load some sample data into the DataTable
    IDataReader reader = dataTable.CreateDataReader();

    try
    {
        connection = new SqlConnection("connection string goes here ...");
        connection.Open();

        bulkCopy = new SqlBulkCopy(connection);
        bulkCopy.DestinationTableName = "Destination table name";
        bulkCopy.WriteToServer(reader);
    }
    catch (Exception exception)
    {
        // loop through all inner exceptions to see if any relate to a constraint failure
        bool dataExceptionFound = false;
        Exception tmpException = exception;
        while (tmpException != null)
        {
            if (tmpException is SqlException
                && tmpException.Message.Contains("constraint"))
            {
                dataExceptionFound = true;
                break;
            }
            tmpException = tmpException.InnerException;
        }

        if (dataExceptionFound)
        {
            // call the helper method to document the errors and invalid data
            string errorMessage = GetBulkCopyFailedData(
                connection.ConnectionString,
                bulkCopy.DestinationTableName,
                dataTable.CreateDataReader());
            throw new Exception(errorMessage, exception);
        }
    }
    finally
    {
        if (connection != null && connection.State == ConnectionState.Open)
        {
            connection.Close();
        }
    }
}
GetBulkCopyFailedData() then opens a new connection to the database,
creates a transaction, and begins bulk copying the data one row at a
time. It does so by reading through the supplied DataReader and
copying each row into an empty DataTable. The DataTable is then bulk
copied into the destination database, and any exceptions resulting
from this are caught, documented (along with the DataRow that caused
it), and the cycle then repeats itself with the next row. At the end
of the DataReader we rollback the transaction and return the complete
error message. Fixing the problems in the data source should now be a
breeze.
The GetBulkCopyFailedData Method
/// <summary>
/// Build an error message with the failed records and their related exceptions.
/// </summary>
/// <param name="connectionString">Connection string to the destination database</param>
/// <param name="tableName">Table name into which the data will be bulk copied.</param>
/// <param name="dataReader">DataReader to bulk copy</param>
/// <returns>Error message with failed constraints and invalid data rows.</returns>
public static string GetBulkCopyFailedData(
    string connectionString,
    string tableName,
    IDataReader dataReader)
{
    StringBuilder errorMessage = new StringBuilder("Bulk copy failures:" + Environment.NewLine);
    SqlConnection connection = null;
    SqlTransaction transaction = null;
    SqlBulkCopy bulkCopy = null;
    DataTable tmpDataTable = new DataTable();

    try
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
        transaction = connection.BeginTransaction();
        bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.CheckConstraints, transaction);
        bulkCopy.DestinationTableName = tableName;

        // create a datatable with the layout of the data.
        DataTable dataSchema = dataReader.GetSchemaTable();
        foreach (DataRow row in dataSchema.Rows)
        {
            tmpDataTable.Columns.Add(new DataColumn(
                row["ColumnName"].ToString(),
                (Type)row["DataType"]));
        }

        // create an object array to hold the data being transferred into tmpDataTable
        // in the loop below.
        object[] values = new object[dataReader.FieldCount];

        // loop through the source data
        while (dataReader.Read())
        {
            // clear the temp DataTable from which the single-record bulk copy will be done
            tmpDataTable.Rows.Clear();

            // get the data for the current source row
            dataReader.GetValues(values);

            // load the values into the temp DataTable
            tmpDataTable.LoadDataRow(values, true);

            // perform the bulk copy of the one row
            try
            {
                bulkCopy.WriteToServer(tmpDataTable);
            }
            catch (Exception ex)
            {
                // an exception was raised with the bulk copy of the current row.
                // The row that caused the current exception is the only one in the temp
                // DataTable, so document it and add it to the error message.
                DataRow faultyDataRow = tmpDataTable.Rows[0];
                errorMessage.AppendFormat("Error: {0}{1}", ex.Message, Environment.NewLine);
                errorMessage.AppendFormat("Row data: {0}", Environment.NewLine);
                foreach (DataColumn column in tmpDataTable.Columns)
                {
                    errorMessage.AppendFormat(
                        "\tColumn {0} - [{1}]{2}",
                        column.ColumnName,
                        faultyDataRow[column.ColumnName].ToString(),
                        Environment.NewLine);
                }
            }
        }
    }
    catch (Exception ex)
    {
        throw new Exception(
            "Unable to document SqlBulkCopy errors. See inner exceptions for details.",
            ex);
    }
    finally
    {
        if (transaction != null)
        {
            transaction.Rollback();
        }
        if (connection.State != ConnectionState.Closed)
        {
            connection.Close();
        }
    }

    return errorMessage.ToString();
}
Suppose "doc" is some document I want to insert into a MongoDB collection and "collection" is the collection I am inserting the document into.
I have something like the following:
try
{
    WriteConcern wc = new WriteConcern();
    wc.W = 1;
    wc.Journal = true;

    WriteConcernResult wcResult = collection.Insert(doc, wc);

    if (!string.IsNullOrWhiteSpace(wcResult.ErrorMessage) || !wcResult.Ok)
    {
        return ErrorHandler(...);
    }
    else
    {
        return SuccessFunction(...);
    }
}
catch (Exception e)
{
    return e.Message;
}
Basically, if the insertion fails for any reason (other than hardware no longer working properly) I want to handle it (through the ErrorHandler function or the catch clause), while if it succeeds I want to call SuccessFunction.
My question: Is the above code sufficient for error checking purposes? In other words, will all failed insertions be caught, so that SuccessFunction is never called in those situations?
You don't even need to do any checking. collection.Insert will throw an exception if the write was not successful when you are using any write concern other than unacknowledged.
If you want to know whether an error occurred, you need to catch a WriteConcernException.
I have this method that deletes the object if it exists and then inserts the new instance either way:
internal void SaveCarAccident(WcfContracts.BLObjects.Contract.Dtos.CarAccident DTOCarAccident)
{
    using (var context = BLObjectsFactory.Create())
    {
        context.ContextOptions.ProxyCreationEnabled = false;

        CarAccident NewCarAccident = ConvertToCarAccident(DTOCarAccident);
        CarAccident carFromDB = context.CarAccident.FirstOrDefault(
            current => current.CarAccidentKey.Equals(NewCarAccident.CarAccidentKey));

        if (carFromDB != null)
            context.CarAccident.DeleteObject(carFromDB);

        context.CarAccident.AddObject(NewCarAccident);
        context.SaveChanges();
    }
}
I sometimes get an exception that the key already exists in the table.
I wanted to know whether the way I save the changes is a problem (saving once after the delete and the insert, rather than after each one).
At the time I got the exception there were a few clients calling the method at the same time. I have already blocked other clients from writing, but could this be the problem?
Thanks
Eran