I have some Spark code, and I need to catch all exceptions and store them to a file. I tried to catch the exception and print it, but it prints empty:
try {
  /* Some Spark code */
} catch {
  case e: Exception =>
    println(" ************** " + e.printStackTrace())
}
The output currently prints nothing: ************** ()
printStackTrace doesn't return the stack trace. It just prints it to stderr. If you want to store it in a file you can
a) call e.getStackTrace and save each element manually
b) call e.printStackTrace(s) where s is a PrintStream or a PrintWriter pointing to your output file.
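For example, option b) could look roughly like this (a minimal sketch; errors.log is just a placeholder path, not something from your job):

import java.io.{File, PrintWriter}

try {
  /* Some Spark code */
} catch {
  case e: Exception =>
    val writer = new PrintWriter(new File("errors.log"))
    try e.printStackTrace(writer) // writes the full stack trace to the file
    finally writer.close()
}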
Please also see this related question, which answers the same query: Scala: Silently catch all exceptions
myCrashingFunc() {
  var a = [];
  print(a[0]);
}

try {
  myCrashingFunc();
} catch (e) {
  print(StackTrace.current.toString());
}
Actually, when I debug, I let everything crash so the StackTrace takes me to the crashing line.
When I ship my app to prod, I use try/catch so new errors have a default behaviour to handle them.
Problem: with try/catch, my StackTrace stops in the catch. I would like the StackTrace to give me insight into the try (e.g. here my StackTrace will only go to the line print(StackTrace.current.toString());).
Question: How can I get the StackTrace of the function inside the try? (e.g. here I would like to see my StackTrace go to the line print(a[0]);)
You can access the stacktrace if you pass it as the second argument to catch as follows:
myCrashingFunc() {
  var a = [];
  print(a[0]);
}

try {
  myCrashingFunc();
} catch (e, s) {
  print(s);
}
I want to print an error log and skip the invalid JSON when parsing fails in Flink SQL. I know there is a configuration option like:
'json.ignore-parse-errors' = 'true'
but it doesn't do what I want. I can use a try/catch when I parse JSON in Java, like this:
try {
    return GSON.fromJson(json, element.class);
} catch (Exception e) {
    LOG.warn("parse error, json: {}, error: {}", json, e);
    return null;
}
but I really don't know how to do it in Flink SQL.
Is there any way to do this?
Thanks all.
import org.apache.spark.sql.{DataFrame, Row}
import scala.util.{Failure, Success, Try}

def extractor: DataFrame = {
  Try {
    spark.read.schema(myschema).parquet(mypath)
  } match {
    case Success(df) =>
      log("EXTRACTION SUCCESSFUL")
      df
    case Failure(exception) =>
      log("EXTRACTION UNSUCCESSFUL")
      // empty DataFrame with the expected schema
      spark.createDataFrame(spark.sparkContext.emptyRDD[Row], myschema)
  }
}
I call this extractor function in my Spark job A. The issue is that mypath keeps getting refreshed every half an hour by some other job B. So, when job A reads mypath, it catalogues the file names. By the time the actual action is performed, the files have changed, the catalogue is stale, and job A throws a FileNotFound exception.
I want to be able to catch this exception and move on.
But this is what is currently happening:
The above function logs "EXTRACTION SUCCESSFUL",
but job A throws a "Job aborted" exception, which I can see in Yarn.
How can I catch this exception and return an empty data set from the function extractor?
Spark, and hence your function, is not reading the data in the files at that point; it is only analysing them. The data is read when the action is invoked. As such, you need to catch the exception at the action that you mention.
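As a rough sketch of that idea, reusing the spark, myschema, mypath and log names from your question (the extractAndCount name and the count() action are assumptions for illustration), you could wrap the action itself:

import scala.util.{Failure, Success, Try}

// Hypothetical caller in job A: wrap the action, not just the read, so a
// FileNotFoundException thrown while the files are actually scanned is caught too.
def extractAndCount(): Long =
  Try {
    val df = spark.read.schema(myschema).parquet(mypath)
    df.count() // the action: this is where the files are really read
  } match {
    case Success(n) =>
      log("EXTRACTION SUCCESSFUL")
      n
    case Failure(exception) =>
      log("EXTRACTION UNSUCCESSFUL: " + exception.getMessage)
      0L // fall back to an empty result and move on
  }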
I am having trouble with my Scala code below:
class ClassMyHelper {
  protected var logger: Logger = LogManager.getLogger(classOf[ClassMyHelper])

  def somefunc(schema: Schema, datum: GenericRecord): Array[Byte] = {
    <code>
    var byteData: Array[Byte] = null
    try {
      <code>
      byteData = os.toByteArray()
      //byteData
    } catch {
      case e: IOException =>
        logger.error("IOException encountered!! ", e)
      case e: Exception =>
        logger.error("Something went wrong!! ", e)
    } finally try os.close()
    catch {
      case e: IOException =>
        logger.error("IOException encountered while closing output stream!! ", e)
      case e: Exception =>
        logger.error("Something went wrong while closing output stream!! ", e)
    }
    byteData //Unreachable code
  }
}
The problem is that on the last line of the somefunc function I am getting an unreachable-code error.
Can you please help me identify what I am doing wrong here?
If you add a finally {} after the 2nd catch block, things appear to clear up. I'm not sure why. I never use try/catch/finally myself; I prefer the Scala Try class from the standard library.
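For what it's worth, a rough sketch of that Try-based style (the serialization body here is just a stand-in, not the Avro code from your question):

import java.io.ByteArrayOutputStream
import scala.util.{Failure, Success, Try}

// Return the failure to the caller instead of logging and falling through.
def somefunc(datum: String): Try[Array[Byte]] = {
  val os = new ByteArrayOutputStream()
  val result = Try {
    os.write(datum.getBytes("UTF-8")) // stand-in for the real serialization
    os.toByteArray
  }
  os.close()
  result
}

somefunc("payload") match {
  case Success(bytes) => println(s"got ${bytes.length} bytes")
  case Failure(e)     => println(s"serialization failed: ${e.getMessage}")
}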
BTW, next time you post code, please include the required imports and check that your code compiles as presented.
In our Scala / Play / ReactiveMongo application we have a big problem with exception handling: when there is an error in an Iteratee/Enumeratee or in the actor system, Play just swallows it without logging anything to the output. So we effectively have to guess where and why the error might have happened.
We overrode Global to always print the error, and set logger.root=TRACE, but still saw no output from which we could analyse our problems.
How can we forcibly make Play print all the errors?
I didn't find a way to explicitly log everything, but there is a way to log exceptions locally.
I did this:
def recover[T] = Enumeratee.recover[T] {
  case (e, input) => Logger.error("Error happened on:" + input, e)
}
and then used it on all the enumeratees that can produce errors
def transform(from: Enumerator[From]): Enumerator[String] = {
  heading >>> (from &> recover[From] ><> mapper) >>> tailing
}
Here, mapper throws exceptions, and they are all logged.
I think your problem is with how Future works in Scala. Let's take the following example:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

val f: Future[Int] = Future {
  throw new NullPointerException("NULL")
  1
}
f.map(s1 => { println(s" ==> $s1"); s" ==> $s1" })
This code will throw an exception, but the stack trace will not be printed because the Future captures the error. If you want to get at the error that happened, you can just call:
f.onComplete {
  case Success(e) => {}
  case Failure(e) => e.printStackTrace()
}
Here e is a Throwable that you can use however you want to handle the error.
The solution I used is to override error handling in Play (https://www.playframework.com/documentation/2.4.2/ScalaErrorHandling), basically creating an ErrorHandler that logs all the errors with the needed level of detail.
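Such a handler could look roughly like this for Play 2.4 (the class name and response texts below are illustrative; see the linked documentation for how to register it):

import play.api.Logger
import play.api.http.HttpErrorHandler
import play.api.mvc.{RequestHeader, Result, Results}
import scala.concurrent.Future

// Logs every client and server error before returning a response,
// so nothing is silently swallowed.
class ErrorHandler extends HttpErrorHandler {

  def onClientError(request: RequestHeader, statusCode: Int, message: String): Future[Result] = {
    Logger.warn(s"Client error $statusCode on ${request.uri}: $message")
    Future.successful(Results.Status(statusCode)(message))
  }

  def onServerError(request: RequestHeader, exception: Throwable): Future[Result] = {
    // The full stack trace goes to the log instead of disappearing
    Logger.error(s"Server error on ${request.uri}", exception)
    Future.successful(Results.InternalServerError("A server error occurred"))
  }
}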