What is the difference between closing FileStream and setting it to null? - c#-3.0

I have the following code to read and write a single line to a text file. Can we assign null to the reader and writer objects at the end of processing, or should I invoke the Close() method? Thank you.
FileInfo JFile = new FileInfo(@"C:\test.txt");
using (FileStream JStream = JFile.Open(FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
{
    StreamReader reader = null;
    StreamWriter writer = null;
    try
    {
        int n;
        reader = new StreamReader(JStream);
        string line = reader.ReadLine();
        int.TryParse(line, out n);
        n = n + 1;
        writer = new StreamWriter(JStream);
        JStream.Seek(0, SeekOrigin.Begin);
        writer.WriteLine(n);
        writer.Flush();
    }
    catch (Exception x)
    {
        string message = string.Format("Error while processing the file. {0}", x.Message);
        System.Windows.Forms.MessageBox.Show(message);
    }
    reader = null;
    writer = null;
}

Certain things need to happen when closing a file. For instance, when your program accesses a file, it creates a handle on it. Calling .Close() on the stream allows the framework to immediately release the handle and other similar resources.
When you set the reference to null, you are relying on the garbage collector to do this important work for you. The GC might clean it all up now, or it might not; you don't know when it will happen. If your program crashes, the handle might never be released at all.
It's always a good idea to .Close() the stream yourself as soon as you're done.

You should be calling the Close method so that all the underlying unmanaged resources are freed. Setting the object to null does not free the resources. For details see http://msdn.microsoft.com/en-us/library/system.io.stream.close.aspx
Update: Since your code is inside a using statement, the Dispose method will be called automatically once execution leaves the using block.

When you set the object to null, you tell the CLR that you no longer need it and it becomes eligible for garbage collection. The problem is that the CLR's hold on the file you have opened is not released until the object is finalized, which may not happen until your program ends. When you invoke the Close() method, the file resources are cleaned up immediately and the remaining bits and pieces are left for garbage collection to reclaim properly.
If you open the StreamReader in a using block it will take care of that for you as well, but you lose some of the fine-grained error handling you get with a try/catch/finally block.

Both the StreamReader and the StreamWriter implement IDisposable, which is their way of saying
"When you are done with me, please call this method (Dispose()) so I know you're done with me and I can clean myself up".
(This is also called Deterministic Finalization)
So you need to call Dispose on those, or wrap them in a using statement, which just does the same thing for you. In the case of these two classes, Dispose will internally call Close if it's not already closed.

You actually want to call Close() or better yet Dispose() to ensure all contents are flushed. You risk not having the buffer written if you do not explicitly close/dispose the Writer.

I think the best solution is to use a using statement (as you already did for the FileStream).
It will take care of all the cleaning.
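For comparison, the deterministic cleanup that C#'s using statement guarantees has a direct analogue in Java's try-with-resources. The following is only an illustrative sketch of the question's read-increment-rewrite pattern (the class name and temp-file usage are made up, not from the question):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CounterFile {
    // Reads an integer from the file (0 if empty), increments it, and writes
    // it back. try-with-resources guarantees the writer is closed -- flushing
    // the buffer and releasing the OS handle -- even if an exception is
    // thrown, just as C#'s using statement calls Dispose().
    static int increment(Path path) throws IOException {
        int n = 0;
        if (Files.exists(path)) {
            String line = Files.readAllLines(path, StandardCharsets.UTF_8)
                               .stream().findFirst().orElse("0");
            try {
                n = Integer.parseInt(line.trim());
            } catch (NumberFormatException ignored) {
                // treat unreadable content as 0, like the int.TryParse above
            }
        }
        try (java.io.BufferedWriter w = Files.newBufferedWriter(path, StandardCharsets.UTF_8)) {
            w.write(Integer.toString(n + 1));
        } // w.close() runs here, deterministically
        return n + 1;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("counter", ".txt");
        System.out.println(increment(p)); // 1
        System.out.println(increment(p)); // 2
        Files.delete(p);
    }
}
```

Merely letting the writer reference go out of scope (or nulling it) would leave the flush and handle release to the garbage collector, at some unknown later time.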


Third Party Lib disposes context

Without the first two lines of the following, the Email is saved to the database just fine.
However, when I send the email using SendGrid's lib, an exception is thrown when the repository tries to save it, as the Context has been disposed.
System.ObjectDisposedException
It makes no sense to me unless the library is somehow mishandling threads or some such.
var response = await this._emailSender.SendEmailAsync(email);
email.ResponseStatusCode = (int)response.StatusCode;
this._emailRepository.Save(email);
My workaround is to create a new context:
var context = new ApplicationDbContext(this._options, this._applicationUserProvider);
context.Add(email);
context.SaveChanges();
Is there a better way to resolve this?
unless the library is somehow mishandling threads or some such.
It's an async method. The line after await may run on a different thread at a later time.
The place to look is in the calling code. If you call the method containing this line and don't wait/await the returned task, then the calling code can Dispose your DbContext while the email-sending Task is still running.
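The race described in this answer can be sketched outside EF Core. In the following Java illustration, a hypothetical Context stands in for the DbContext: the continuation scheduled by the async call (the code after await) runs on another thread later, and fails because the caller disposed the context without awaiting the task:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FireAndForget {
    // Hypothetical stand-in for a disposable DbContext: throws once disposed.
    static class Context implements AutoCloseable {
        private volatile boolean closed = false;
        void save(String item) {
            if (closed) throw new IllegalStateException("context disposed");
        }
        @Override public void close() { closed = true; }
    }

    // Returns how the continuation ended: "saved" or a failure message.
    static String sendAndSave() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Context ctx = new Context();

        // The code after 'await' corresponds to thenApply(): it executes
        // later, once the (slow) send completes.
        CompletableFuture<String> task = CompletableFuture
            .supplyAsync(() -> { sleep(200); return "202"; }, pool) // "send the email"
            .thenApply(status -> { ctx.save("email " + status); return "saved"; });

        // The caller does not await the task and disposes the context:
        ctx.close();

        String outcome;
        try {
            outcome = task.join();
        } catch (CompletionException e) {
            outcome = "failed: " + e.getCause().getMessage();
        }
        pool.shutdown();
        return outcome;
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        System.out.println(sendAndSave()); // failed: context disposed
    }
}
```

Awaiting the returned task before disposing (the equivalent of joining before close) makes the failure disappear, which is exactly the fix suggested above.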

Invoke persist inside another persist event handler

I have some code that invokes persist from inside another persist event handler, something like:
persist(someClassInstance) { message =>
  confirmDelivery(message.id)
  // some code
  start()
}

// Somewhere else in the code
def start(): Unit = {
  log.info("Starting")
  persist(someClassInstance) { message =>
    deliver(destination, createMessage)
    log.info("Started")
  }
}
When I run my application I see the log message "Starting" but I never see the "Started". I am wondering if this happens because I'm invoking persist inside another persist. Is this something that shouldn't be done? The documentation is not very explicit about this case.
I'm using Akka version 2.4-M1, so I suppose this could be the source of the problem; however, it seems more likely to me that this is simply something that should not be done.
Invoking persist from inside another persist handler will block the actor.
The correct approach is to send a message to self, and then perform the second persist in the handler for that message.

How to call a method in a catch clause on an object defined in a try clause?

I am creating a redis pubsub client in a try-catch block. In the try block, the client is initialised with a callback to forward messages to a client. If there's a problem sending the message to the client, an exception will be thrown, in which case I need to stop the redis client. Here's the code:
try {
  val redisClient = RedisPubSub(
    channels = Seq(currentUserId.toString),
    patterns = Seq(),
    onMessage = (pubSubMessage: PubSubMessage) => {
      responseObserver.onValue(pubSubMessage.data)
    }
  )
}
catch {
  case e: RuntimeException =>
    // redisClient isn't defined here...
    redisClient.unsubscribe(currentUserId.toString)
    redisClient.stop()
    messageStreamResult.complete(Try(true))
    responseObserver.onCompleted()
}
The problem is that the redis client val isn't defined in the catch block because there may have been an exception creating it. I also can't move the try-catch block into the callback because there's no way (that I can find) of referring to the redisClient object from within the callback (this doesn't resolve).
To solve this I'm instantiating redisClient as a var outside the try-catch block. Then inside the try block I stop the client and assign a new redisPubSub (created as above) to the redisClient var. That's an ugly hack which is also error prone (e.g. if there genuinely is a problem creating the second client, the catch block will try to call methods on an erroneous object).
Is there a better way of writing this code so that I can correctly call stop() on the redisClient if an exception is raised when trying to send the message to the responseObserver?
Update
I've just solved this using promises. Is there a simpler way though?
That exception handler is not going to be invoked if there is a problem sending the message. It is for problems in setting up the client. This SO answer talks about handling errors when sending messages.
As for the callback referring to the client, I think you want to register the callback after creating the client rather than trying to pass the callback in when you create it. Here is some sample code from Debashish Ghosh that does this.
Presumably that callback is going to run in another thread, so if it uses redisClient you'll have to be careful about concurrency. Ideally the callback could get to the client object through some argument. If not, then perhaps using volatile would be the easiest way to deal with that, although I suspect you'd eventually get into trouble if multiple callbacks can fail at once. Perhaps use an actor to manage the client connection, as Debashish has done?
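The register-after-construction idea can be shown with a toy pub/sub client. This is illustrative Java, not the scala-redis API (the Client class and its methods are invented): because the callback is attached after the client exists, the callback can refer back to the client and stop it when forwarding fails.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PubSubSketch {
    // Hypothetical stand-in for the redis pub/sub client.
    static class Client {
        private Consumer<String> onMessage = m -> {};
        private volatile boolean stopped = false;
        void onMessage(Consumer<String> cb) { this.onMessage = cb; }
        void deliver(String m) { if (!stopped) onMessage.accept(m); }
        void stop() { stopped = true; }
        boolean isStopped() { return stopped; }
    }

    static Client wire(List<String> forwarded) {
        Client client = new Client();           // 1. create the client first
        client.onMessage(message -> {           // 2. then register the callback
            try {
                if (message.equals("boom")) throw new RuntimeException("send failed");
                forwarded.add(message);         // forward to the observer
            } catch (RuntimeException e) {
                client.stop();                  // callback can reach the client now
            }
        });
        return client;
    }

    public static void main(String[] args) {
        List<String> forwarded = new ArrayList<>();
        Client client = wire(forwarded);
        client.deliver("hello");
        client.deliver("boom");  // triggers the failure path -> client.stop()
        client.deliver("late");  // ignored: client already stopped
        System.out.println(forwarded + " stopped=" + client.isStopped());
    }
}
```

Since the callback typically runs on the client's own I/O thread, any state it shares with other threads (here the stopped flag) needs safe publication, hence the volatile.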

How is the skipping implemented in Spring Batch?

I was wondering how I could determine in my ItemWriter, whether Spring Batch was currently in chunk-processing-mode or in the fallback single-item-processing-mode. In the first place I didn't find the information how this fallback mechanism is implemented anyway.
Even if I haven't found the solution to my actual problem yet, I'd like to share my knowledge about the fallback mechanism with you.
Feel free to add answers with additional information if I missed anything ;-)
The implementation of the skip mechanism can be found in the FaultTolerantChunkProcessor and in the RetryTemplate.
Let's assume you configured skippable exceptions but no retryable exceptions. And there is a failing item in your current chunk causing an exception.
Now, first of all, the whole chunk will be written. In the processor's write() method you can see that a RetryTemplate is called. It also gets two references, a RetryCallback and a RecoveryCallback.
Switch over to the RetryTemplate. Find the following method:
protected <T> T doExecute(RetryCallback<T> retryCallback, RecoveryCallback<T> recoveryCallback, RetryState state)
There you can see that the RetryTemplate is retried as long as it's not exhausted (i.e. exactly once in our configuration). Such a retry will be caused by a retryable exception. Non-retryable exceptions will immediately abort the retry mechanism here.
After the retries are exhausted or aborted, the RecoveryCallback will be called:
e = handleRetryExhausted(recoveryCallback, context, state);
That's where the single-item-processing mode kicks in!
The RecoveryCallback (which was defined in the processor's write() method!) will put a lock on the input chunk (inputs.setBusy(true)) and run its scan() method. There you can see, that a single item is taken from the chunk:
List<O> items = Collections.singletonList(outputIterator.next());
If this single item can be processed correctly by the ItemWriter, then the chunk will be finished and the ChunkOrientedTasklet will run another chunk (for the next single items). This causes a regular call to the RetryCallback, but since the chunk has been locked by the RecoveryCallback, the scan() method will be called immediately:
if (!inputs.isBusy()) {
    // ...
}
else {
    scan(contribution, inputs, outputs, chunkMonitor);
}
So another single item will be processed and this is repeated, until the original chunk has been processed item-by-item:
if (outputs.isEmpty()) {
    inputs.setBusy(false);
That's it. I hope you found this helpful. And I even more hope that you could find this easily via a search engine and didn't waste too much time, finding this out by yourself. ;-)
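The chunk-then-scan fallback described above can be simulated in a few lines of plain Java. This is a toy model, not actual Spring Batch classes; it only mirrors the control flow of FaultTolerantChunkProcessor's recovery path:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ChunkFallback {
    // Toy writer: fails whenever the chunk contains the poison item,
    // like an ItemWriter throwing a skippable exception.
    static void write(List<String> items, List<String> written) {
        if (items.contains("bad")) throw new IllegalStateException("bad item");
        written.addAll(items);
    }

    // Try the whole chunk first; on failure, fall back to single-item
    // "scan" mode, writing one singletonList at a time and skipping failures.
    static List<String> process(List<String> chunk, List<String> written) {
        List<String> skipped = new ArrayList<>();
        try {
            write(chunk, written);                  // regular chunk mode
        } catch (RuntimeException e) {
            for (String item : chunk) {             // scan(): one item at a time
                try {
                    write(Collections.singletonList(item), written);
                } catch (RuntimeException ex) {
                    skipped.add(item);              // this item is skipped
                }
            }
        }
        return skipped;
    }

    public static void main(String[] args) {
        List<String> written = new ArrayList<>();
        List<String> skipped = process(Arrays.asList("a", "bad", "c"), written);
        System.out.println("written=" + written + " skipped=" + skipped);
        // written=[a, c] skipped=[bad]
    }
}
```

Note how the good items end up being written twice attempted but persisted once, while only the poison item lands in the skip list, matching the item-by-item behavior described above.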
A possible approach to my original problem (the ItemWriter would like to know, whether it's in chunk or single-item mode) could be one of the following alternatives:
Any further checks are only needed when the passed chunk has size one.
When the passed chunk is a java.util.Collections.SingletonList, we would be quite sure, since the FaultTolerantChunkProcessor does the following:
List items = Collections.singletonList(outputIterator.next());
Unfortunately, this class is private, so we can't check it with instanceof.
In reverse, if the chunk is an ArrayList we could also be quite sure, since the Spring Batch's Chunk class uses it:
private List items = new ArrayList();
One remaining ambiguity is buffered items read from the execution context, but I'd expect those to be ArrayLists too.
Anyway, I still find this method too vague. I'd rather like to have this information provided by the framework.
An alternative would be to hook my ItemWriter in the framework execution. Maybe ItemWriteListener.onWriteError() is appropriate.
Update: The onWriteError() method will not be called if you're in single-item mode and throw an exception in the ItemWriter. I think that's a bug and filed it: https://jira.springsource.org/browse/BATCH-2027
So this alternative drops out.
Here's a snippet that does the same without any framework means, directly in the writer:
private int writeErrorCount = 0;

@Override
public void write(final List<? extends Long> items) throws Exception {
    try {
        writeWhatever(items);
    } catch (final Exception e) {
        if (this.writeErrorCount == 0) {
            this.writeErrorCount = items.size();
        } else {
            this.writeErrorCount--;
        }
        throw e;
    }
    this.writeErrorCount--;
}

public boolean isWriterInSingleItemMode() {
    return writeErrorCount != 0;
}
Attention: One should rather check for the skippable exceptions here and not for Exception in general.

WebMatrix Database.Open... Close() and Dispose()

Anytime I read about Close() and Dispose(), I end up seeing a lot of references to just using a using block, but I have yet to find out how to use a using block in WebMatrix C# Razor syntax.
So I don't want an answer that says just use a Using Block, unless you can tell me exactly how with example.
Specific to use of Database.Open(), after I am done with my connection/queries
My questions are:
Should I use both Close() and Dispose()?
Does it matter whether I call Close() and then Dispose() or Dispose() and then Close()?
Hoping for simple answers to simple questions. Thanks
ASP.NET Web Pages framework examples of the Database helper do not include calls to Close or Dispose because the framework itself is designed to call Dispose for you at the end of a request. If you use ADO.NET instead of the Database helper, you should employ using statements. Having said that, there is nothing to stop you from wrapping Database helper calls in using blocks:
IEnumerable<dynamic> floaters = null;
using (var db = Database.Open("MyDb"))
{
    var sql = "SELECT * From LifeRafts";
    floaters = db.Query(sql);
}
If you wanted to manage it all yourself, you can simply call Close or Dispose. They both result in the connection being returned to the ADO.NET connection pool anyway.