I'm trying to build the TPCDS benchmark datasets by following this guide:
https://xuechendi.github.io/2019/07/12/Prepare-TPCDS-For-Spark
When I run this:
[troberts@master1 spark-sql-perf]$ spark-shell --master yarn --deploy-mode client --jars /home/troberts/spark-sql-perf/target/scala-2.11/spark-sql-perf_2.11-0.5.1-SNAPSHOT.jar -i TPCDPreparation.scala
I get the error below. I'm wondering if it's something to do with permissions, as the file dsdgen definitely exists at that location on each of the worker nodes (/home/troberts/spark-sql-perf/tpcds-kit/tools):
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure:
Aborting TaskSet 0.0 because task 0 (partition 0)
cannot run anywhere due to node and executor blacklist.
Most recent failure:
Lost task 0.0 in stage 0.0 (TID 0, worker1.mycluster.com, executor 1): java.lang.RuntimeException: Could not find dsdgen at /home/troberts/spark-sql-perf/tpcds-kit/tools/dsdgen or //home/troberts/spark-sql-perf/tpcds-kit/tools/dsdgen. Run install
at scala.sys.package$.error(package.scala:27)
Any ideas appreciated.
Cheers
Could not find dsdgen at /home/troberts/spark-sql-perf/tpcds-kit/tools/dsdgen or //home/troberts/spark-sql-perf/tpcds-kit/tools/dsdgen
You need to have TPCDS installed first.
From the spark-sql-perf docs for the tool you've used:
Before running any query, a dataset needs to be setup by creating a Benchmark object.
Generating the TPCDS data requires dsdgen built and available on the machines.
We have a fork of dsdgen that you will need.
The fork includes changes to generate TPCDS data to stdout, so that this library can pipe them directly to Spark, without intermediate files.
Therefore, this library will not work with the vanilla TPCDS kit.
TPCDS kit needs to be installed on all cluster executor nodes under the same path!
Please configure the TPC-DS toolkit from Databricks.
According to the beam harness documentation:
PROCESS: User code is executed by processes that are automatically started by the runner on each worker node.
args = [
"--runner=portableRunner",
"--streaming",
"--sdk_worker_parallelism=2",
"--environment_type=PROCESS",
"--environment_config={\"command\": \"/opt/apache/beam/boot\"}",
]
consumer_config = {
"security.protocol": "SASL_SSL",
"sasl.mechanism": "AWS_MSK_IAM",
"sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
"sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
"bootstrap.servers": bootstrap_servers,
}
with beam.Pipeline(options=PipelineOptions(args)) as p:
    data = p | "Reading messages from Kafka" >> ReadFromKafka(
        consumer_config=consumer_config,
        topics=topics,
        with_metadata=True
    )
    data | 'Writing to stdout' >> beam.Map(logging.info)
But when I run the code (deployed to k8s using flinkk8soperator), it is complaining:
Caused by: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
Am I misunderstanding anything? Thanks!
After some digging, I finally made cross-language work without using DinD or DooD. Here are the steps:
Ensure both the job and task manager mount a shared volume for artifact staging. (This is required; otherwise the task manager will complain that it is unable to find the submitted jar.)
Ensure your Docker image can run both Java and Python Beam code; here's what I did:
# python SDK
COPY --from=apache/beam_python3.7_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam/
# java SDK
COPY --from=apache/beam_java8_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam_java/
In the job, you'll need to start the expansion service with extra args, for example for KafkaIO:
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service
ReadFromKafka(
    consumer_config=consumer_config,
    topics=[topic],
    with_metadata=False,
    expansion_service=default_io_expansion_service(
        append_args=[
            '--defaultEnvironmentType=PROCESS',
            "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam_java/boot\"}",
        ]
    )
)
Your portable execution relies on xLang support, which requires starting a Java SDK with Docker. Your cluster doesn't have Docker installed.
Running basic df.show() post spark notebook installation
I am getting the following error when running Scala Spark code on spark-notebook. Any idea when this occurs and how to avoid it?
[org.apache.spark.repl.ExecutorClassLoader] Failed to check existence of class org.apache.spark.sql.catalyst.expressions.Object on REPL class server at spark://192.168.10.194:50935/classes
[org.apache.spark.util.Utils] Aborting task
[org.apache.spark.repl.ExecutorClassLoader] Failed to check existence of class org on REPL class server at spark://192.168.10.194:50935/classes
[org.apache.spark.util.Utils] Aborting task
[org.apache.spark.repl.ExecutorClassLoader] Failed to check existence of class
I installed Spark locally, and when I was using the following code it gave me the same error.
spark.read.format("json").load("Downloads/test.json")
I think the issue was that it was trying to find a master node and picking up some random or default IP. I specified the master explicitly and provided the IP as 127.0.0.1, and it resolved my issue.
Solution
Run Spark using a local master:
/usr/local/bin/spark-shell --master "local[4]" --conf spark.driver.host=127.0.0.1
I know this is a weird way of using Spark, but I'm trying to save a dataframe to the local file system (not HDFS) using Spark even though I'm in cluster mode. I know I can use client mode, but I do want to run in cluster mode and don't care which node (out of 3) the driver is going to run on.
The code below is the pseudo code of what I'm trying to do.
// create dataframe
val df = Seq(Foo("John", "Doe"), Foo("Jane", "Doe")).toDF()
// save it to the local file system using 'file://' because it defaults to hdfs://
df.coalesce(1).rdd.saveAsTextFile(s"file://path/to/file")
And this is how I'm submitting the spark application.
spark-submit --class sample.HBaseSparkRSample --master yarn-cluster hbase-spark-r-sample-assembly-1.0.jar
This works fine if I'm in local mode but doesn't in yarn-cluster mode.
For example, java.io.IOException: Mkdirs failed to create file occurs with the above code.
I've changed the df.coalesce(1) part to df.collect and attempted to save a file using plain Scala, but it ended up with a 'Permission denied' error.
I've also tried:
spark-submit with root user
chowned yarn:yarn, yarn:hadoop, spark:spark
gave chmod 777 to related directories
but no luck.
I'm assuming this has something to do with clusters, drivers and executors, and the user who's trying to write to the local file system, but I'm pretty much stuck solving this problem by myself.
I'm using:
Spark: 1.6.0-cdh5.8.2
Scala: 2.10.5
Hadoop: 2.6.0-cdh5.8.2
Any support is welcome and thanks in advance.
Some articles I've tried:
"Spark saveAsTextFile() results in Mkdirs failed to create for half of the directory" -> Tried changing users but nothing changed
"Failed to save RDD as text file to local file system" -> chmod didn't help me
Edited (2016/11/25)
This is the Exception I get.
java.io.IOException: Mkdirs failed to create file:/home/foo/work/rhbase/r/input/input.csv/_temporary/0/_temporary/attempt_201611242024_0000_m_000000_0 (exists=false, cwd=file:/yarn/nm/usercache/foo/appcache/application_1478068613528_0143/container_e87_1478068613528_0143_01_000001)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:449)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:920)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:813)
at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:135)
at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1193)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/11/24 20:24:12 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Mkdirs failed to create file:/home/foo/work/rhbase/r/input/input.csv/_temporary/0/_temporary/attempt_201611242024_0000_m_000000_0 (exists=false, cwd=file:/yarn/nm/usercache/foo/appcache/application_1478068613528_0143/container_e87_1478068613528_0143_01_000001)
I'm going to answer my own question because, in the end, none of the answers seemed to solve my problem. Nonetheless, thanks for all the answers and for pointing me to alternatives I can check.
I think @Ricardo was the closest in mentioning the user of the Spark application. I checked whoami with Process("whoami") and the user was yarn. The problem was probably that I tried to output to /home/foo/work/rhbase/r/input/input.csv, and although /home/foo/work/rhbase was owned by yarn:yarn, /home/foo was owned by foo:foo. I haven't checked in detail, but this may have been the cause of the permission problem.
When I ran pwd in my Spark application with Process("pwd"), it output /yarn/path/to/somewhere. So I decided to output my file to /yarn/input.csv, and it succeeded even in cluster mode.
I can probably conclude that this was just a simple permission issue. Any further solutions are welcome, but for now this is how I solved the problem.
If you run the job in yarn-cluster mode, the driver will be running on one of the machines managed by YARN, so if saveAsTextFile is given a local file path, it will store the output on whichever machine the driver happens to be running on.
Try running the job in yarn-client mode so the driver runs on the client machine.
Check if you are trying to run/write the file with a user other than the Spark service.
In that situation you can solve the permission issue by presetting the directory ACLs. Example:
setfacl -d -m group:spark:rwx /path/to/
(modify "spark" to your user group trying to write the file)
Use the foreachPartition method, then for each partition get a file system object and write the records to it one by one. Below is sample code; here I am writing to HDFS, but you can use the local file system as well.
import java.net.URI;
import java.util.Iterator;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.spark.api.java.function.VoidFunction;

Dataset<String> ds = ...;
ds.toJavaRDD().foreachPartition(new VoidFunction<Iterator<String>>() {
    @Override
    public void call(Iterator<String> iterator) throws Exception {
        // Get a handle to the target file system (use a file:// URI for the local file system).
        final FileSystem hdfsFileSystem = FileSystem.get(URI.create(finalOutPathLocation), hadoopConf);
        final FSDataOutputStream fsDataOutPutStream = hdfsFileSystem.exists(finalOutPath)
                ? hdfsFileSystem.append(finalOutPath) : hdfsFileSystem.create(finalOutPath);
        long processedRecCtr = 0;
        long failedRecsCtr = 0;
        while (iterator.hasNext()) {
            try {
                fsDataOutPutStream.writeUTF(iterator.next());
                processedRecCtr++;
            } catch (Exception e) {
                failedRecsCtr++;
            }
            // Flush periodically so a long partition does not buffer everything in memory.
            if (processedRecCtr % 3000 == 0) {
                LOGGER.info("Flushing Records");
                fsDataOutPutStream.flush();
            }
        }
        fsDataOutPutStream.close();
    }
});
Please refer to the Spark documentation to understand the use of the --master option in spark-submit.
--master local is meant to be used when running locally.
--master yarn --deploy-mode cluster is meant to be used when actually running on a YARN cluster.
Refer to this and this.
I have a 200-node Mesos cluster that can run around 2700 executors concurrently. Around 5-10% of my executors are LOST at the very beginning. They only get as far as extracting the executor tar file.
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0617 21:35:09.947180 45885 fetcher.cpp:76] Fetching URI 'http://download_url/remote_executor.tgz'
I0617 21:35:09.947273 45885 fetcher.cpp:126] Downloading 'http://download_url/remote_executor.tgz' to '/mesos_dir/remote_executor.tgz'
I0617 21:35:57.551722 45885 fetcher.cpp:64] Extracted resource '/mesos_dir/remote_executor.tgz' into '/extracting_mesos_dir/'
Please let me know if someone else is facing this issue.
I am using Python to implement both the scheduler and the executor. The executor code is a Python file that extends the base class 'Executor'. I have implemented the launchTasks method of the Executor class, which simply does what the executor is supposed to do.
The executor info is:
executor = mesos_pb2.ExecutorInfo()
executor.executor_id.value = "executor-%s" % (str(task_id),)
executor.command.value = 'python -m myexecutor'
# where to download executor from
tar_uri = '%s/remote_executor.tgz' % (
self.conf.remote_executor_cache_url)
executor.command.uris.add().value = tar_uri
executor.name = 'some_executor_name'
executor.source = "executor_test"
Can you provide more details about what your Executor is supposed to do (ideally the ExecutorInfo definition and the Executor itself)? What is the command you use to start the executor (CommandInfo)?
For an example definition of an executor, have a look at Rendler.
It includes a sample executor and the ExecutorInfo definition.
Rendler also includes samples in Java, Go, Python, Scala, and Haskell.
We are trying to run Hive queries on HDP 2.1 using the GCS connector. It was working fine until yesterday, but since this morning our jobs have randomly started failing. When we restart them manually they work fine. I suspect it has something to do with the number of parallel Hive jobs running at a given point in time.
Below is the error message:
vertexId=vertex_1407434664593_37527_2_00, diagnostics=[Vertex Input: audience_history initializer failed., java.lang.ClassNotFoundException: Class com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem not found]
DAG failed due to vertex failure. failedVertices:1 killedVertices:0
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask
Any help will be highly appreciated.
Thanks!