I would like to know the guarantees of the following pattern:
try {
  // business logic here
} catch {
  case t: Throwable =>
    // try to signal the error
    // shut down the app
}
I'm interested in catching all unexpected exceptions (that can be thrown by any framework, library, custom code, etc.), trying to record the error, and shutting down the virtual machine.
In Scala, what are the guarantees of catching Throwable? Are there any differences with the Java Exception hierarchy to take into consideration?
Throwable is defined in the JVM spec:
An exception in the Java Virtual Machine is represented by an instance of the class Throwable or one of its subclasses.
which means that both Scala and Java share the same definition of Throwable. In fact, scala.Throwable is just an alias for java.lang.Throwable. So in Scala, a catch clause that handles Throwable will catch all exceptions (and errors) thrown by the enclosed code, just like in Java.
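A minimal sketch of that shutdown pattern, then, could look like the following (runBusinessLogic is a hypothetical placeholder, not from the question):
try {
  runBusinessLogic() // hypothetical entry point for the business logic
} catch {
  case t: Throwable =>
    try {
      // best-effort error signaling; this may itself fail
      System.err.println("Fatal error: " + t.getMessage)
      t.printStackTrace()
    } finally {
      sys.exit(1) // runs JVM shutdown hooks; Runtime.getRuntime.halt(1) would skip them
    }
}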
Are there any differences with the Java Exception hierarchy to take into consideration?
Since Scala uses the same Throwable as Java, Exception and Error represent the same things. The only "difference" (that I know of) is that in Scala, exceptions are sometimes used under the hood for flow control, so if you want to catch non-fatal exceptions (thus excluding Errors), you should use catch NonFatal(e) instead of catch e: Exception. But this doesn't apply when catching Throwable directly.
All harmless Throwables can be caught by:
import scala.util.control.NonFatal

try {
  // dangerous
} catch {
  case NonFatal(e) => log.error(e, "Something not that bad.")
}
This way, you never catch an exception that a reasonable application should not try to catch.
In the Dart docs at https://dart.dev/guides/language/language-tour#exceptions, it states the following:
Dart provides Exception and Error types, as well as numerous predefined subtypes. You can, of course, define your own exceptions. However, Dart programs can throw any non-null object—not just Exception and Error objects—as an exception.
The example they give for this behavior is throw 'Out of llamas!';.
Why would I ever want to throw something that isn't an Error or Exception? What is the design decision behind allowing this?
I think it is because you may already have an object that you want to inspect when an error occurs, so you can throw that object directly, or simply so you can throw a string like in the example.
It's worth noting that a catch block can optionally catch the stack trace, and since the stack trace is not part of the Exception, it makes sense to allow arbitrary objects to be thrown.
try {
  throw 'Error!';
} catch (error, stacktrace) {
  print(stacktrace);
}
Many times you will just see catch (e) in code, but you may also see catch (e, s).
I've just started learning Scala, so this might be a simple question. I want to use a try-catch block to check if a variable has been declared or not.
I am using a try-catch block and catching the NoSuchElementException if the variable doesn't exist.
try {
  print(testVariable)
} catch {
  case e: NoSuchElementException => print("testVariable not found")
}
My code shows an error that testVariable does not exist instead of throwing an exception. I then tried multiple other exceptions as well, but Scala's try-catch doesn't seem to catch any of them (except for the divide-by-zero exception).
Can someone please guide me on how to use Scala's try-catch block?
In Scala (or pretty much any compiled programming language really), checking if a variable has been declared or not is the compiler's job, done at compile time. If you try to use a variable that hasn't been declared, the compiler will give an error, and your code won't be able to run.
Exceptions are a way to represent problems at run-time.
There is no overlap between "compile time" and "run-time", so what you are trying to do doesn't make sense. There just isn't an exception for "variable does not exist", and that's why you can't catch it.
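For example, this line is rejected at compile time, so there is nothing to catch at run-time:
print(testVariable)
// compile-time error: not found: value testVariable
// no exception is thrown, because the program never runs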
To contrast, take this example:
val map = Map('a' -> 1, 'b' -> 2)
map('c') // will throw NoSuchElementException because there is no 'c' in the map
In this case, map.apply('c') (the syntax sugar for apply lets you write map('c')) will throw an exception because that's how Map's apply method is implemented. See the definition of Map#apply, which calls Map#default if the key isn't in the map; Map#default throws a NoSuchElementException.
You could catch that exception with a try/catch, e.g.
try {
  map('c')
} catch {
  case e: NoSuchElementException =>
    println("got it!")
}
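As a side note, if you'd rather avoid the exception altogether, Map#get returns an Option instead of throwing:
map.get('c') match {
  case Some(value) => println(value)
  case None        => println("no such key")
}
or simply map.getOrElse('c', 0).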
I am saving DStream to Cassandra. There is a column in Cassandra with map<text, text> datatype. Cassandra does not support null value in Map, but null value can occur in the stream.
I have added a try/catch in case something goes wrong, but the program stopped despite that, and I don't see an error message in the log:
try {
cassandraStream.saveToCassandra("table", "keyspace")
} catch {
case e: Exception => log.error("Error in saving data in Cassandra" + e.getMessage, e)
}
Exception
Caused by: java.lang.NullPointerException: Map values cannot be null
at com.datastax.driver.core.TypeCodec$AbstractMapCodec.serialize(TypeCodec.java:2026)
at com.datastax.driver.core.TypeCodec$AbstractMapCodec.serialize(TypeCodec.java:1909)
at com.datastax.driver.core.AbstractData.set(AbstractData.java:530)
at com.datastax.driver.core.AbstractData.set(AbstractData.java:536)
at com.datastax.driver.core.BoundStatement.set(BoundStatement.java:870)
at com.datastax.spark.connector.writer.BoundStatementBuilder.com$datastax$spark$connector$writer$BoundStatementBuilder$$bindColumnUnset(BoundStatementBuilder.scala:73)
at com.datastax.spark.connector.writer.BoundStatementBuilder$$anonfun$6.apply(BoundStatementBuilder.scala:84)
at com.datastax.spark.connector.writer.BoundStatementBuilder$$anonfun$6.apply(BoundStatementBuilder.scala:84)
at com.datastax.spark.connector.writer.BoundStatementBuilder$$anonfun$bind$1.apply$mcVI$sp(BoundStatementBuilder.scala:106)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:101)
at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1.apply(TableWriter.scala:233)
at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1.apply(TableWriter.scala:210)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:112)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:111)
at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:145)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:210)
at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:197)
at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:183)
at com.datastax.spark.connector.streaming.DStreamFunctions$$anonfun$saveToCassandra$1$$anonfun$apply$1.apply(DStreamFunctions.scala:54)
at com.datastax.spark.connector.streaming.DStreamFunctions$$anonfun$saveToCassandra$1$$anonfun$apply$1.apply(DStreamFunctions.scala:54)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
... 3 more
I'd like to know why the program stops despite the try/catch block. Why is the exception not caught?
To understand the source of the failure, you have to acknowledge that DStreamFunctions.saveToCassandra, like DStream output operations in general, is not an action in the strict sense. In practice it just invokes foreachRDD:
dstream.foreachRDD(rdd => rdd.sparkContext.runJob(rdd, writer.write _))
which in turn:
Apply a function to each RDD in this DStream. This is an output operator, so 'this' DStream will be registered as an output stream and therefore materialized.
The difference is subtle but important - the operation is registered, but the actual execution happens in a different context, at a later point in time.
It means there are no runtime failures to be caught at the point where you invoke saveToCassandra.
As already pointed out, try or Try would contain the driver exception if applied directly to an action. So you could, for example, re-implement saveToCassandra as
dstream.foreachRDD(rdd => try {
  rdd.sparkContext.runJob(rdd, writer.write _)
} catch {
  case e: Exception => log.error("Error in saving data in Cassandra: " + e.getMessage, e)
})
and the stream should be able to proceed, although the current batch will be completely or partially lost.
It is important to note that this is not the same as catching the original exception, which will be thrown, uncaught, and visible in the log. To catch the problem at its source you'd have to apply a try/catch block directly in the writer, and that is obviously not an option when you execute code over which you have no control.
The take-away message (already stated in this thread) is: make sure to sanitize your data to avoid known sources of failure.
The problem is that you don't catch the exception you think you do. The code you have will catch a driver exception, and in fact code structured like this will do so.
It doesn't, however, mean that the program should never stop.
While the driver failure, which would be a consequence of a fatal executor failure, is contained and the driver can exit gracefully, the stream as such is already gone. Therefore your code exits, because there is no more stream to run.
If the code in question were under your control, exception handling should be delegated to the task, but in the case of third-party code there is no such option.
Instead you should validate your data and remove problematic records before they are passed to saveToCassandra.
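A hedged sketch of such validation, assuming the stream carries a hypothetical case class Record(id: String, props: Map[String, String]) (the names are illustrative, not from the question):
// drop null values from the map column before writing
val sanitized = cassandraStream.map { record =>
  record.copy(props = record.props.filter { case (_, v) => v != null })
}
// the connector API takes the keyspace first, then the table
sanitized.saveToCassandra("keyspace", "table")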
If a Scala future fails, and there is no continuation that "observes" that failure (or the only continuations use map/flatMap and don't run in case of failure), then errors go undetected. I would like such errors to be at least logged, so I can find bugs.
I use the term "observed error" because in .Net Tasks there is the chance to catch "unobserved task exceptions", when the Task object is collected by the GC. Similarly, with synchronous methods, uncaught exceptions that terminate the thread can be logged.
In Scala futures, to 'observe' a failure would mean that some continuation or other code reads the Exception stored in the future value before that future is disposed. I'm aware that finalization is not deterministic or reliable, and presumably that's why it's not used to catch unhandled errors, although .Net does succeed in doing this.
Is there a way to achieve this in Scala? If not, how should I organize my code to prevent unhandled error bugs?
Today I have andThen checkResult appended to various futures. But it's hard to know when to use this and when not to: if a library method returns a Future, it shouldn't checkResult and log errors itself, because the library user may handle the failure, so the responsibility falls onto the user. As I edit code I sometimes need to add checks and sometimes to remove them, and such manual management is surely wrong.
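For reference, a sketch of what such a helper might look like (the body is my assumption; checkResult is the name used above):
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Failure

// log a failure without changing the future's outcome
def checkResult[A](f: Future[A]): Future[A] =
  f.andThen { case Failure(t) => println("Unobserved failure: " + t) }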
I have concluded there is no way to generically notice unhandled errors in Scala futures.
You can just use Future.recover in the function that returns the Future.
So for instance, you could just "log" the error and rethrow the original exception, in the simplest case:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.control.NonFatal

def libraryFunction(): Future[Int] = {
  val f = ... // some Future-producing computation (elided in the original)
  f.recover {
    case NonFatal(t) =>
      println("Error : " + t)
      throw t
  }
}
Note the use of NonFatal to match all the exception types it is sensible to catch.
That recover block could equally return an alternative result if you wish.
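For example, a variant that substitutes a fallback value instead of rethrowing (the fallback is illustrative):
f.recover {
  case NonFatal(_) => -1 // default result when the computation fails
}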
Not needing to declare checked exceptions in a throws clause or handle them in a try/catch block is a Scala feature that I love. But it can be a problem when an exception must be handled but was ignored. I'm looking for tools (maybe a compiler flag/plugin) to find methods that ignore checked exceptions.
One option is to catch the exception at a very high level of your application (the top would be the main method).
Another option would be to use an UncaughtExceptionHandler (if you are on a JVM):
// handler invoked when a thread dies from an uncaught throwable
object MyUncaughtExceptionHandler extends Thread.UncaughtExceptionHandler {
  def uncaughtException(thread: Thread, throwable: Throwable): Unit = {
    println("Something bad happened!")
  }
}

// a thread whose run() throws an uncaught NullPointerException
val t = new Thread(new Runnable {
  override def run(): Unit = {
    null.toString
  }
})
t.setUncaughtExceptionHandler(MyUncaughtExceptionHandler)
t.start()
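If you want the handler to apply to every thread that doesn't set its own, you can also register it JVM-wide:
Thread.setDefaultUncaughtExceptionHandler(MyUncaughtExceptionHandler)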
AFAIK, there are no such tools. However, one technique I've used successfully is to simply install an at-throw-point breakpoint in your IDE (IntelliJ or Eclipse) for java.lang.Throwable, which will pause execution at the throw point of every Java exception (or error) as the program runs, and then keep hitting "play" to see them all (at least on the path of execution you're interested in).
Cheers...