I need some help please.
I run the command display(df), but when I try to download the DataFrame I get the following error:
SparkException: Exception thrown in awaitResult: Caused by: java.io.IOException: Failed to read job commit marker: FileStatus{path=dbfs:/databricks-results/1390434353332427/_committed_8779047008713225709; isDirectory=false; length=114; replication=1; blocksize=67108864; modification_time=1583486899000; access_time=0; owner=; group=; permission=rwx-wx-wx; isSymlink=false}
Thanks in advance!
When I create a class, I get this error: "A java.lang.NullPointerException exception has occurred. However, the system should continue working without further problems."
Can you help me fix this error?
When trying to deploy a Spring Boot + Quartz + PostgreSQL app on Heroku, I am facing the error below.
With this same configuration the app connects to PostgreSQL from my IDE without any issue.
Any help would be greatly appreciated.
## QuartzProperties
spring.quartz.job-store-type=jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount=5
spring.quartz.jdbc.initialize-schema=never
spring.quartz.jdbc.schema=pw
org.springframework.context.ApplicationContextException: Failed to start bean 'quartzScheduler'; nested exception is org.springframework.scheduling.SchedulingException: Could not start Quartz Scheduler; nested exception is org.quartz.SchedulerConfigException: Failure occured during job recovery. [See nested exception: org.quartz.impl.jdbcjobstore.LockException: Failure obtaining db row lock: ERROR: current transaction is aborted, commands ignored until end of transaction block [See nested exception: org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block]]
Caused by: org.postgresql.util.PSQLException: ERROR: relation "qrtz_locks" does not exist
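For reference, my understanding is that with job-store-type=jdbc and initialize-schema=never, Quartz expects the qrtz_* tables to already exist in the schema it looks at; the settings below are the kind that control this (values shown are illustrative defaults, not my exact configuration):
## Illustrative Quartz JDBC settings (not my actual values)
spring.quartz.jdbc.initialize-schema=always
spring.quartz.properties.org.quartz.jobStore.tablePrefix=QRTZ_
spring.quartz.properties.org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate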
I want to show a Spark streaming DataFrame, and I used:
df.writeStream.outputMode("append").start().awaitTermination()
But I got the following error when running this line:
21/07/16 01:20:53 ERROR MicroBatchExecution: Query [id = f243e6e6-c02e-4e70-b5c3-6a821fd33232, runId = 312544cf-fea8-45b4-94a1-c052306538cf] terminated with error
java.lang.NoSuchMethodError: org.apache.spark.sql.internal.SQLConf.useDeprecatedKafkaOffsetFetching()Z
Check the version of Spark and the versions of the dependencies you have added, and make sure they match. This should resolve the issue.
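For example, in sbt that means pinning the Kafka connector to the same version as spark-sql (the 3.1.2 below is only an illustration; use the version your cluster actually runs):
// build.sbt
val sparkVersion = "3.1.2"  // assumption: match this to the Spark version on the cluster

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"            % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion
)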
An exception is thrown when I execute my Scala app, which uses myRDD.saveToEs (I also tried saveToEs from a DataFrame). My ES version is 2.3.5.
I am using Spark 1.5.0, so maybe there is a way to configure this in the SparkContext that I am not aware of.
The stack trace is as follows:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 1 times, most recent failure: Lost task 0.0 in stage 2.0 (TID 2, localhost): org.apache.spark.util.TaskCompletionListenerException: Found unrecoverable error [127.0.0.1:9200] returned Bad Request(400) - failed to parse [foo_eff_dt];Invalid format: ""; Bailing out..
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:90)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
The field named foo_eff_dt has values in some cases and is empty in others. I am not sure if this is causing the exception.
My Scala code snippet looks like this:
fooRDD.saveToEs("foo/bar")
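A fuller, self-contained version of what the app does looks roughly like this (the host, field names, and sample values are placeholders, not my real ones):
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._  // brings saveToEs onto RDDs

// Placeholder connection settings for elasticsearch-hadoop.
val conf = new SparkConf()
  .setAppName("foo-to-es")
  .set("es.nodes", "127.0.0.1")
  .set("es.port", "9200")
val sc = new SparkContext(conf)

// Simplified records; foo_eff_dt is sometimes an empty string.
case class Foo(id: String, foo_eff_dt: String)
val fooRDD = sc.parallelize(Seq(Foo("1", "2016-08-01"), Foo("2", "")))

fooRDD.saveToEs("foo/bar")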
Please help/guide me in resolving this.
TIA.
I think you are trying to insert a Date into Elastic, and in Elastic a date field cannot be empty; an empty string does not parse against a mapping like this:
{
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
If you don't have a strict need for a date field, you can resolve this by changing the mapping to a string type.
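For example, the field mapping would then look something like this (field name taken from the error in your question; string is the ES 2.x type):
{
  "foo_eff_dt": {
    "type": "string"
  }
}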
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NullPointerException: charsetName
Caused by: java.lang.NullPointerException: charsetName
at java.io.InputStreamReader.<init>(InputStreamReader.java:99)
at net.adamcin.granite.client.packman.AbstractPackageManagerClient.parseDetailedResponse(AbstractPackageManagerClient.java:383)
at net.adamcin.granite.client.packman.async.AsyncPackageManagerClient.access$400(AsyncPackageManagerClient.java:60)
This error occurred while running a Jenkins job.
I could not find any solution on the Java or Jenkins side.
Any pointer would help.
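For what it's worth, that NullPointerException matches what java.io.InputStreamReader throws when it is handed a null charset name; a minimal JVM snippet (in Scala, only for illustration) reproduces the same message:
import java.io.{ByteArrayInputStream, InputStreamReader}

// Passing a null charset name throws
// java.lang.NullPointerException: charsetName
val in = new ByteArrayInputStream(Array[Byte]())
new InputStreamReader(in, null.asInstanceOf[String])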