How to connect from EMR to GCS - Scala

We have a Spark Streaming application running on an EMR cluster, and we need to store the streaming data to Google Cloud Storage in Parquet format.
Can anyone please help?

To connect to Google Cloud Storage (GCS) using Spark on EMR, you need to include the Google Cloud Storage connector in your application jar. You can also add the jar to the Hadoop classpath on the EMR cluster, but the quickest and easiest way is to bundle the GCS connector into your application jar.
You can get the Google Cloud Storage connector here:
https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage
It has connectors for Hadoop 1.x, 2.x, and 3.x.
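If you build with sbt, a minimal sketch of pulling the connector into your build so it can be bundled looks like the following; the versions shown are assumptions, so pick the connector release that matches your Hadoop version and the Spark version that matches your EMR release.

// build.sbt (sketch) -- versions are assumptions, adjust to your cluster
libraryDependencies ++= Seq(
  // Spark is already on the EMR cluster, so mark it "provided"
  "org.apache.spark" %% "spark-sql" % "3.1.1" % "provided",
  // Shaded GCS connector for Hadoop 3.x; this gets bundled into the application jar
  "com.google.cloud.bigdataoss" % "gcs-connector" % "hadoop3-2.2.5" classifier "shaded"
)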
Once you have the jar, set the following properties in your Spark application:
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

// Authenticate to GCS with a service-account JSON key file.
SparkConf sparkConf = new SparkConf();
sparkConf.set("spark.hadoop.google.cloud.auth.service.account.enable", "true");
sparkConf.set("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "<path to your google cloud key>");

SparkSession spark = SparkSession.builder()
    .appName("My spark application")
    .config(sparkConf)
    .getOrCreate();

// Register the GCS connector as the implementation of the gs:// filesystem.
spark.sparkContext().hadoopConfiguration().set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS");
spark.sparkContext().hadoopConfiguration().set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem");
You can put the GCP key on your EMR cluster using a simple bootstrap action script that copies the key from an S3 location to a local path.
Bundle the Cloud Storage connector jar with your application, and you can then read and write using the gs:// filesystem.
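Since the question asks for Scala and Parquet, here is a minimal Structured Streaming sketch of what the write side can look like once the connector and key are in place; the bucket, key-file path, and the rate source are placeholders, not something prescribed by the connector.

import org.apache.spark.sql.SparkSession

// Sketch only: the bucket, key-file path, and the rate source are placeholders.
val spark = SparkSession.builder()
  .appName("stream-to-gcs")
  .config("spark.hadoop.google.cloud.auth.service.account.enable", "true")
  .config("spark.hadoop.google.cloud.auth.service.account.json.keyfile", "/mnt/keys/gcp-key.json")
  .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
  .config("spark.hadoop.fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
  .getOrCreate()

// Any streaming source works; a rate source keeps the example self-contained.
val stream = spark.readStream.format("rate").option("rowsPerSecond", "10").load()

// Write the stream to GCS in Parquet format; the checkpoint can live on GCS as well.
stream.writeStream
  .format("parquet")
  .option("path", "gs://my-bucket/streaming-output/")           // hypothetical bucket
  .option("checkpointLocation", "gs://my-bucket/checkpoints/")  // hypothetical path
  .start()
  .awaitTermination()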
I got this working with Hadoop 3.x and Spark 3.x on EMR 6.3.0.

I am not sure how you process the streaming data in EMR. In any case, you can always write a custom Python script that uses the Google client library to connect to GCS and push your data there. You can also run the script as PySpark code to speed up the process:
https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/read-write-to-cloud-storage

This official Google Cloud guide on how to migrate from Amazon S3 to Cloud Storage may be helpful:
https://cloud.google.com/storage/docs/migrating

Related

Distribute third-party jar dependency on large-scale spark application

We have a third-party jar file on which our Spark application depends. The jar file is ~15 MB. Since we want to deploy our Spark application on a large-scale cluster (~500 workers), we are concerned about distributing this third-party jar file. According to the Apache Spark documentation (https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management), we have options such as HDFS, an HTTP server, the driver HTTP server, and a local path for distributing the file.
We would prefer not to use a local path because it requires copying the jar file to every worker's Spark libs directory. On the other hand, if we use HDFS or an HTTP server, the Spark workers may effectively mount a DoS attack against our Spark driver server when they all try to fetch the jar file. So, what is the best way to address this challenge?
If you put the third-party jar in HDFS, why would it affect the Spark driver server?
Each node should fetch the additional jar directly from HDFS, not from the Spark driver.
After examining the different proposed methods, we found that the best way to distribute the jar files is to copy them to all nodes (as @egor mentioned, too). Based on our deployment tools, we released a new Spark Ansible role that supports external jars. At the moment, Apache Spark does not provide an optimized solution: if you give a remote URL (HTTP, HTTPS, HDFS, FTP) as an external jar path, the Spark workers fetch the jar file every time a new job is submitted, so it is not optimal from a network perspective.
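For completeness, once the jar has been copied to the same path on every node (for example by the Ansible role mentioned above), Spark's local:/ URI scheme tells it that the file already exists on each worker, so nothing is shipped or downloaded at submit time. A hedged Scala sketch, with a made-up jar path:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Sketch: the jar path is hypothetical. The jar has already been copied to the same
// path on every node (e.g. by a deployment tool), so the local:/ scheme means each
// executor loads it from its own filesystem instead of fetching it over the network.
val conf = new SparkConf()
  .setAppName("app-with-thirdparty-jar")
  .set("spark.jars", "local:/opt/spark/extra-jars/thirdparty.jar")

val spark = SparkSession.builder().config(conf).getOrCreate()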

scala spark cassandra installation

How many ways are there to run Spark? If I just declare the dependencies in build.sbt, is Spark supposed to be downloaded and just work?
But if I want to run Spark locally (download the Spark tar file, winutils, ...), how can I specify in Scala code that I want to run my code against that local Spark installation and not against the dependencies downloaded by IntelliJ?
In order to connect Spark to Cassandra, do I need a local installation of Spark? I read somewhere that it's not possible to connect from a "programmatic" Spark to a local Cassandra database.
1) Spark runs in a slightly unusual way: there is your application (the Spark driver and executors), and there is the resource manager (Spark Master/Workers, YARN, Mesos, or local).
In your code you can run against the in-process manager (local) by specifying the master as local or local[n]. Local mode requires no installation of Spark, as it is automatically set up in the process you are running. This uses the dependencies you downloaded.
To run against a Spark master which is running locally, you use a spark:// URL that points at your particular local Spark master instance. Note that this will cause executor JVMs to start separately from your application, necessitating the distribution of application code and dependencies. (Other resource managers have their own identifying URLs.)
2) You do not need a resource manager to connect to C* from Spark, but this ability is basically for debugging and testing. To do this you would use the local master URL. Normal Spark usage should have an external resource manager, because without one the system cannot be distributed.
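As a rough illustration of point 2, something like the following Scala sketch connects to a local Cassandra instance purely in local mode; it assumes the spark-cassandra-connector dependency is on the classpath, and the keyspace and table names are made up.

import org.apache.spark.sql.SparkSession

// Sketch only: assumes the spark-cassandra-connector is on the classpath;
// the keyspace and table names are made up.
val spark = SparkSession.builder()
  .master("local[*]")   // in-process resource manager -- no Spark installation required
  .appName("cassandra-local-test")
  .config("spark.cassandra.connection.host", "127.0.0.1")
  .getOrCreate()

val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test_ks", "table" -> "test_table"))
  .load()

df.show()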
For some more Spark Cassandra examples see
https://github.com/datastax/SparkBuildExamples

Installing a spark cluster on a hadoop cluster

I am trying to install an Apache Spark cluster on a Hadoop cluster.
I am looking for best practices in this regard. I am assuming that the Spark master needs to be installed on the same machine as the Hadoop namenode, and the Spark slaves on the Hadoop datanodes. Also, where do I need to install Scala? Please advise.
If your Hadoop cluster is running YARN, just use yarn mode when submitting your applications. That is the easiest method and doesn't require you to install anything beyond simply downloading the Apache Spark distribution to a client machine. One additional thing you can do is deploy the Spark assembly to HDFS so that you can use the spark.yarn.jar config when you call spark-submit, so that the JAR is cached on the nodes.
See here for all the details: http://spark.apache.org/docs/latest/running-on-yarn.html

Does the worker also need Hadoop installed for Spark?

I have set up Scala, Hadoop, and Spark, and started the master node successfully.
I just installed Scala and Spark and started the worker (slave) too. What I am confused about is: shouldn't Hadoop also be set up on the worker for running tasks?
This link from the official Apache Spark documentation shows how to configure a Spark cluster, and the requirements explained there make clear that both Scala and Hadoop are required.

How to create a Spark Streaming jar that would work in AWS EMR?

I've been developing a Spark Streaming application with Eclipse, and I'm using sbt to run it locally.
Now I want to deploy the application on AWS using a jar, but when I try to use sbt's package command it creates a jar without any of the dependencies, so when I upload it to AWS it won't work because Scala is missing.
Is there a way to create an uber-jar with sbt? Am I doing something wrong with the deployment of Spark on AWS?
To create an uber-jar with sbt, use the sbt-assembly plugin; for more details about creating an uber-jar with sbt-assembly, refer to the blog post. A minimal sketch of the build changes is shown below.
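The plugin and library versions below are assumptions; marking Spark as "provided" keeps it out of the jar since the cluster already ships it, while the Scala library and your other dependencies get bundled in.

// project/plugins.sbt -- plugin version is an assumption, use whatever is current
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.2.0")

// build.sbt (relevant fragment) -- Spark is provided by the cluster, so exclude it from the uber-jar
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "3.1.1" % "provided"

Running sbt assembly then produces a single jar under target/ that bundles your classes, the Scala library, and every non-provided dependency.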
After creating it, you can run the assembly jar using the java -jar command.
But from Spark 1.0.0 onwards, the spark-submit script in Spark's bin directory is used to launch applications on a cluster; for more details refer here.
You should really be following Running Spark on EC2, which reads:
The spark-ec2 script, located in Spark’s ec2 directory, allows you to
launch, manage and shut down Spark clusters on Amazon EC2. It
automatically sets up Spark, Shark and HDFS on the cluster for you.
This guide describes how to use spark-ec2 to launch clusters, how to
run jobs on them, and how to shut them down. It assumes you’ve already
signed up for an EC2 account on the Amazon Web Services site.
I've only partially followed the document so I can't comment on how well it's written.
Moreover, according to Shipping Code to the Cluster chapter in the other document:
The recommended way to ship your code to the cluster is to pass it
through SparkContext’s constructor, which takes a list of JAR files
(Java/Scala) or .egg and .zip libraries (Python) to disseminate to
worker nodes. You can also dynamically add new files to be sent to
executors with SparkContext.addJar and addFile.
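A small Scala sketch of what that quote describes; the master URL and all the jar/file paths are made-up placeholders.

import org.apache.spark.{SparkConf, SparkContext}

// Ship dependency jars to the executors through the SparkContext constructor...
val conf = new SparkConf()
  .setAppName("shipping-code-example")
  .setMaster("spark://master-host:7077")                        // hypothetical master URL
  .setJars(Seq("target/scala-2.12/my-app-assembly-0.1.jar"))    // hypothetical assembly jar

val sc = new SparkContext(conf)

// ...or add further jars and files dynamically after the context exists.
sc.addJar("hdfs:///libs/extra-dependency.jar")   // hypothetical path
sc.addFile("hdfs:///conf/lookup-table.csv")      // hypothetical path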