Performance impact of RDD API vs UDFs mixed with DataFrame API - scala

(Scala-specific question.)
While Spark docs encourage the use of the DataFrame API where possible, if the DataFrame API is insufficient the choice is usually between falling back to the RDD API or using UDFs. Is there an inherent performance difference between these two alternatives?
RDDs and UDFs are similar in that neither of them can benefit from Catalyst and Tungsten optimizations. Is there any other overhead, and if there is, does it differ between the two approaches?
To give a specific example, let's say I have a DataFrame that contains a column of text data with custom formatting (not amenable to regexp matching). I need to parse that column and add a new vector column that contains the resulting tokens.

neither of them can benefit from Catalyst and Tungsten optimizations
This is not exactly true. While UDFs don't benefit from Tungsten optimization (arguably simple SQL transformations don't get a huge boost there either), you may still benefit from the execution plan optimizations provided by Catalyst. Let's illustrate that with a simple example (Note: Spark 2.0 and Scala. Don't extrapolate this to earlier versions, especially with PySpark):
import org.apache.spark.sql.functions.{sum, udf}
import spark.implicits._ // for $ and toDF; already in scope in spark-shell

val f = udf((x: String) => x == "a")
val g = udf((x: Int) => x + 1)
val df = Seq(("a", 1), ("b", 2)).toDF
df
.groupBy($"_1")
.agg(sum($"_2").as("_2"))
.where(f($"_1"))
.withColumn("_2", g($"_2"))
.select($"_1")
.explain
// == Physical Plan ==
// *HashAggregate(keys=[_1#2], functions=[])
// +- Exchange hashpartitioning(_1#2, 200)
// +- *HashAggregate(keys=[_1#2], functions=[])
// +- *Project [_1#2]
// +- *Filter UDF(_1#2)
// +- LocalTableScan [_1#2, _2#3]
The execution plan shows us a couple of things:
Selection has been pushed down before aggregation.
Projection has been pushed down before aggregation and effectively removed second UDF call.
Depending on the data and pipeline this can provide a substantial performance boost almost for free.
That being said, both RDDs and UDFs require migrating data between the safe and unsafe representations, with the latter (UDFs) being significantly less flexible. Still, if the only thing you need is simple map-like behavior without initializing expensive objects (like database connections), then a UDF is the way to go.
In slightly more complex scenarios you can easily drop down to a generic Dataset and reserve RDDs for cases where you really require access to low-level features like custom partitioning.
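For the tokenizer example from the question, a minimal UDF-based sketch could look like the following (the column names, the delimiter and the parsing logic are placeholder assumptions, not something prescribed by the question):
import org.apache.spark.sql.functions.{col, udf}
// Stand-in for the real custom parsing logic.
def parseTokens(raw: String): Seq[String] = raw.split(',').toSeq
val tokenize = udf(parseTokens _)
// Adds an array<string> column; Catalyst can still optimize everything around the opaque UDF call.
val withTokens = df.withColumn("tokens", tokenize(col("rawText")))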

(Note: I don't have measurements to back this up.)
To me, shuffle and (de)serialization are the main costs. But after these, having clean code is most important. With that in mind:
The main drawback of using RDD operations is that full (de)serialization of/into JVM objects is required, while a UDF might only (de)serialize the required columns. Note that this applies when processing column-oriented data such as Parquet; for other data formats I don't know, but I would expect that in many cases both have similar performance.
So, if your algorithm is mostly filtering and shuffling operations, and/or can be expressed simply with DataFrame operations and local UDFs, you should use those. However, if your algorithm requires complex processing over many columns, it is probably better to pay the deserialization cost up front and execute clean, efficient Scala code on JVM objects.
In my personal experience implementing complex mathematical algorithms, I typically split the code into two steps:
pure DataFrame operations to do as much of the filtering, join and groupBy work as possible. In rare cases I use a UDF when a specific local operation is required that cannot be expressed with DataFrame methods (and only needs very few columns);
then convert to an RDD and use (flat)map operations for the math and/or complex lookup parts (a sketch follows below).
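A hedged sketch of that two-step split (all the names here, rawDf, refDf, Features and complexMath, are made up for illustration):
import org.apache.spark.sql.functions.{avg, sum}
import spark.implicits._
case class Features(id: Long, x: Double, y: Double)
// Step 1: keep filtering, joins and aggregations in the DataFrame API.
val prepared = rawDf
  .filter($"status" === "active")
  .join(refDf, Seq("id"))
  .groupBy($"id")
  .agg(sum($"x").as("x"), avg($"y").as("y"))
// Step 2: pay the deserialization once, then run plain Scala on JVM objects.
val results = prepared
  .as[Features]
  .rdd
  .map(f => (f.id, complexMath(f.x, f.y)))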

Related

Best practice in Spark to filter a dataframe, execute different actions on the resulting dataframes and then union the new dataframes back

Since I am new to Spark I would like to ask about a pattern that I am using but don't know whether it's bad practice (splitting a dataframe in two based on a filter, executing different actions on each, and then unioning them back).
To give an example, given a dataframe df:
val dfFalse = df.filter(col === false).distinct()
val dfTrue = df.filter(col === true).join(otherDf, Seq(id), "left_anti").distinct()
val newDf = dfFalse union dfTrue
Since my original dataframe has millions of rows, I am curious whether this double filtering is bad practice and whether I should use some other pattern in Spark that I may not be aware of. In other cases I even need to do 3 or 4 filters, apply different actions to the individual dataframes, and then union them all back.
There are several key points to take into account when you start using Spark to process large amounts of data and want to analyze performance:
Spark parallelism depends on the number of partitions in your distributed in-memory representations (RDDs or DataFrames). That means the processing (Spark actions) will be executed in parallel across the cluster. But note that there are two main kinds of transformations: narrow and wide. The former are operations that execute without a shuffle, so the data does not need to be reallocated to different partitions, avoiding data transfer among workers. Note that if you want to perform a distinct by a specific key, Spark must reorganize the data in order to detect the duplicates. Take a look at the docs.
Regarding doing more or fewer filter transformations:
Spark is based on a lazy evaluation model, which means that the transformations you apply to a dataframe are not executed until you call an action, for example a write operation. The Spark optimizer evaluates your transformations in order to create an optimized execution plan. So if you have five or six filter operations, it will never traverse the dataframe six times (in contrast to other dataframe frameworks); the optimizer will collapse your filter operations into one. Here are some details.
So keep in mind that Spark is a distributed in-memory data processor, and it is a must to know these details because you can spawn hundreds of cores over hundreds of GBs.
The efficiency of this approach depends heavily on the ability to reduce the number of overlapping data files that are scanned by both splits.
I will focus on two techniques that allow data-skipping:
Partitions - if the predicates are based on a partition column, only the necessary data will be scanned, based on the condition. In your case, if you split the original dataframe in two by filtering on a partition column, each dataframe will scan only its corresponding portion of the data. In that case your approach will perform really well, as no data will be scanned twice (see the sketch after this answer).
Filter/predicate pushdown - data stored in a format supporting filter pushdown (Parquet, for example) allows reading only the files that contain records with values matching the condition. If the values of the filtered column are spread across many files, the filter pushdown will be inefficient, since data is skipped on a per-file basis, and if a certain file contains values for both splits it will be scanned twice. Writing the data sorted by the filtered column can improve the efficiency of filter pushdown (on read) by gathering the same values into fewer files.
As long as you manage to split your dataframe using the above techniques and minimize the overlap between the splits, this approach will be efficient.
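A minimal sketch of the partition-based data skipping described above (the path and the flag column name are assumptions):
import org.apache.spark.sql.functions.col
// Write the data partitioned by the boolean column used for the split.
df.write.partitionBy("flag").parquet("/data/events")
val events  = spark.read.parquet("/data/events")
val dfFalse = events.filter(col("flag") === false) // reads only the flag=false partitions
val dfTrue  = events.filter(col("flag") === true)  // reads only the flag=true partitions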

scala rapids using an opaque UDF for a single column dataframe that produces another column

I am trying to acquaint myself with RAPIDS Accelerator-based computation using Spark (3.3) with Scala. The primary obstacle to being able to use the GPU appears to be the blackbox nature of UDFs. An automatic solution would be the Scala UDF compiler, but it won't work in cases where there are loops.
Doubt: Would I be able to get a GPU contribution if my dataframe has only one column and produces another column, since this is a trivial case? If so, at least in some cases the GPU performance benefit could be attained even with no change to the Spark code, and even when the data size is much larger than GPU memory. This would be great, as sometimes it would be easy to simply merge all columns into one using concat_ws, producing a single column that a UDF can then convert into an Array (WrappedArray). For all practical purposes the data is then already in a columnar layout as far as the GPU is concerned, and only negligible overhead for the row (on CPU) to column (on GPU) conversion is needed. The case I am referring to would look like:
val newDf = df.withColumn("colB", opaqueUdf(col("colA")))
Resources: I tried to find good sources/examples for learning the Spark/Scala approach to using RAPIDS, but it seems that only Python-based examples are given. Is there any resource/tutorial that gives sample examples of converting Spark UDFs to make them RAPIDS compatible?
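For reference, the merge-everything-into-one-column idea described above could look roughly like this (a sketch only; the delimiter and the column names are assumptions):
import org.apache.spark.sql.functions.{col, concat_ws, udf}
// Merge all columns into a single delimited string column.
val merged = df.withColumn("colA", concat_ws("|", df.columns.map(col): _*))
// Opaque UDF that splits the merged string back into an array of values.
val opaqueUdf = udf((s: String) => s.split('|'))
val newDf = merged.withColumn("colB", opaqueUdf(col("colA")))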
Yes @Quiescent, you are right. The Scala UDF -> Catalyst compiler can be used for simple UDFs that have a direct translation to Catalyst. Supported operations can be found here: https://nvidia.github.io/spark-rapids/docs/additional-functionality/udf-to-catalyst-expressions.html. Loops are definitely not supported in this automatic translation, because there isn't a direct expression that we can translate them to.
It all depends on how heavy opaqueUdf is, and how many rows are in your column. The GPU is going to be really good if there are many rows and the operation in the UDF is costly (say it's doing many arithmetic or string operations successively on that column). I am not sure why you want to "merge all columns into one", so can you clarify why you want to do that? On the conversion to Array, is that the purpose of the UDF, or are you wanting to take in N columns -> perform some operation likely involving loops -> produce an Array?
Another approach to accelerating UDFs with GPUs is to use our RAPIDS Accelerated UDFs. These are Java or Scala UDFs that you implement purposely to use the cuDF API directly. The Accelerated UDF documentation also links to our spark-rapids-examples repo, which has information on how to write Java or Scala UDFs in this way; please take a look there as well.

Does PySpark not support the Dataset API because it depends on strong typing? [duplicate]

I'm just wondering what the difference is between an RDD and a DataFrame (in Spark 2.0.0, DataFrame is a mere type alias for Dataset[Row]) in Apache Spark?
Can you convert one to the other?
First of all, DataFrame evolved from SchemaRDD.
Yes, conversion between a DataFrame and an RDD is absolutely possible.
Below are some sample code snippets.
df.rdd gives you an RDD[Row].
Below are some of the options to create a DataFrame.
1) yourRddOfRow.toDF converts it to a DataFrame.
2) Using createDataFrame of the SQL context
val df = spark.createDataFrame(rddOfRow, schema)
where the schema can come from some of the options below, as described in a nice SO post:
From a Scala case class and the Scala reflection API:
import org.apache.spark.sql.catalyst.ScalaReflection
val schema = ScalaReflection.schemaFor[YourScalacaseClass].dataType.asInstanceOf[StructType]
Or using Encoders:
import org.apache.spark.sql.Encoders
val mySchema = Encoders.product[MyCaseClass].schema
The schema can also be created using StructType and StructField:
import org.apache.spark.sql.types._

val schema = new StructType()
  .add(StructField("id", StringType, true))
  .add(StructField("col1", DoubleType, true))
  .add(StructField("col2", DoubleType, true)) // etc.
In fact, there are now 3 Apache Spark APIs:
RDD API :
The RDD (Resilient Distributed Dataset) API has been in Spark since
the 1.0 release.
The RDD API provides many transformation methods, such as map(),
filter(), and reduce() for performing computations on the data. Each
of these methods results in a new RDD representing the transformed
data. However, these methods are just defining the operations to be
performed and the transformations are not performed until an action
method is called. Examples of action methods are collect() and
saveAsObjectFile().
RDD Example:
rdd.filter(_.age > 21) // transformation
.map(_.last) // transformation
.saveAsObjectFile("under21.bin") // action
Example: Filter by attribute with RDD
rdd.filter(_.age > 21)
DataFrame API
Spark 1.3 introduced a new DataFrame API as part of the Project
Tungsten initiative which seeks to improve the performance and
scalability of Spark. The DataFrame API introduces the concept of a
schema to describe the data, allowing Spark to manage the schema and
only pass data between nodes, in a much more efficient way than using
Java serialization.
The DataFrame API is radically different from the RDD API because it
is an API for building a relational query plan that Spark’s Catalyst
optimizer can then execute. The API is natural for developers who are
familiar with building query plans
Example SQL style :
df.filter("age > 21");
Limitations :
Because the code refers to data attributes by name, it is not possible for the compiler to catch any errors. If attribute names are incorrect, the error will only be detected at runtime, when the query plan is created.
Another downside of the DataFrame API is that it is very Scala-centric and, while it does support Java, the support is limited.
For example, when creating a DataFrame from an existing RDD of Java objects, Spark's Catalyst optimizer cannot infer the schema and assumes that any objects in the DataFrame implement the scala.Product interface. Scala case classes work out of the box because they implement this interface.
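For example, a Scala case class (which is a Product) lets Spark infer the schema directly; a minimal sketch using the Spark 2.x session:
import spark.implicits._
case class Person(name: String, age: Int)
// The schema (name: string, age: int) is inferred from the case class fields.
val peopleDF = sc.parallelize(Seq(Person("Ann", 30), Person("Bob", 25))).toDF()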
Dataset API
The Dataset API, released as an API preview in Spark 1.6, aims to
provide the best of both worlds; the familiar object-oriented
programming style and compile-time type-safety of the RDD API but with
the performance benefits of the Catalyst query optimizer. Datasets
also use the same efficient off-heap storage mechanism as the
DataFrame API.
When it comes to serializing data, the Dataset API has the concept of
encoders which translate between JVM representations (objects) and
Spark’s internal binary format. Spark has built-in encoders which are
very advanced in that they generate byte code to interact with
off-heap data and provide on-demand access to individual attributes
without having to de-serialize an entire object. Spark does not yet
provide an API for implementing custom encoders, but that is planned
for a future release.
Additionally, the Dataset API is designed to work equally well with
both Java and Scala. When working with Java objects, it is important
that they are fully bean-compliant.
Example Dataset API SQL style :
dataset.filter(_.age < 21);
Evaluation differences between DataFrame and Dataset:
Catalyst-level flow (see the "Demystifying DataFrame and Dataset" presentation from Spark Summit).
Further reading: the Databricks article "A Tale of Three Apache Spark APIs: RDDs vs DataFrames and Datasets".
A DataFrame is defined well with a google search for "DataFrame definition":
A data frame is a table, or two-dimensional array-like structure, in
which each column contains measurements on one variable, and each row
contains one case.
So, a DataFrame has additional metadata due to its tabular format, which allows Spark to run certain optimizations on the finalized query.
An RDD, on the other hand, is merely a Resilient Distributed Dataset that is more of a blackbox of data that cannot be optimized as the operations that can be performed against it, are not as constrained.
However, you can go from a DataFrame to an RDD via its rdd method, and you can go from an RDD to a DataFrame (if the RDD is in a tabular format) via the toDF method
In general it is recommended to use a DataFrame where possible due to the built in query optimization.
Apache Spark provides three types of APIs:
RDD
DataFrame
Dataset
Here is a comparison of the RDD, DataFrame and Dataset APIs.
RDD
The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel.
RDD Features:-
Distributed collection:
RDDs use MapReduce-style operations, which are widely adopted for processing and generating large datasets with a parallel, distributed algorithm on a cluster. They allow users to write parallel computations using a set of high-level operators, without having to worry about work distribution and fault tolerance.
Immutable: RDDs are composed of a collection of records which are partitioned. A partition is the basic unit of parallelism in an RDD, and each partition is one logical division of data which is immutable and created through transformations on existing partitions. Immutability helps to achieve consistency in computations.
Fault tolerant:
If we lose a partition of an RDD, we can replay the transformations on that partition from its lineage to achieve the same computation, rather than replicating data across multiple nodes. This characteristic is the biggest benefit of RDDs because it saves a lot of effort in data management and replication and thus achieves faster computations.
Lazy evaluations: All transformations in Spark are lazy, in that they do not compute their results right away. Instead, they just remember the transformations applied to some base dataset. The transformations are only computed when an action requires a result to be returned to the driver program (see the short sketch after this list).
Functional transformations:
RDDs support two types of operations: transformations, which create a new dataset from an existing one, and actions, which return a value to the driver program after running a computation on the dataset.
Data processing formats:
RDDs can easily and efficiently process data which is structured as well as unstructured.
Programming Languages supported:
RDD API is available in Java, Scala, Python and R.
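A tiny illustration of the lazy evaluation point above (see "Lazy evaluations"; the input path is an assumption):
val lines  = sc.textFile("hdfs:///tmp/input.txt") // nothing is read yet
val errors = lines.filter(_.contains("ERROR"))    // still only a lineage of transformations
val count  = errors.count()                       // action: the job actually runs now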
RDD Limitations:-
No inbuilt optimization engine:
When working with structured data, RDDs cannot take advantage of Spark's advanced optimizers, including the Catalyst optimizer and the Tungsten execution engine. Developers need to optimize each RDD based on its attributes.
Handling structured data:
Unlike DataFrames and Datasets, RDDs don't infer the schema of the ingested data and require the user to specify it.
Dataframes
Spark introduced DataFrames in the 1.3 release. DataFrames overcome the key challenges that RDDs had.
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or an R/Python dataframe. Along with DataFrames, Spark also introduced the Catalyst optimizer, which leverages advanced programming features to build an extensible query optimizer.
Dataframe Features:-
Distributed collection of Row Object:
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database, but with richer optimizations under the hood.
Data Processing:
DataFrames can process structured and unstructured data formats (Avro, CSV, Elasticsearch, Cassandra) and storage systems (HDFS, Hive tables, MySQL, etc.), and can read from and write to all these various data sources.
Optimization using catalyst optimizer:
It powers both SQL queries and the DataFrame API. DataFrames use the Catalyst tree transformation framework in four phases:
1. Analyzing a logical plan to resolve references
2. Logical plan optimization
3. Physical planning
4. Code generation to compile parts of the query to Java bytecode
Hive Compatibility:
Using Spark SQL, you can run unmodified Hive queries on your existing Hive warehouses. It reuses Hive frontend and MetaStore and gives you full compatibility with existing Hive data, queries, and UDFs.
Tungsten:
Tungsten provides a physical execution backend which explicitly manages memory and dynamically generates bytecode for expression evaluation.
Programming Languages supported:
Dataframe API is available in Java, Scala, Python, and R.
Dataframe Limitations:-
Compile-time type safety:
As discussed, the DataFrame API does not provide compile-time safety, which limits you when the structure of the data is not known. The following example compiles; however, you will get a runtime exception when executing this code.
Example:
case class Person(name : String , age : Int)
val dataframe = sqlContext.read.json("people.json")
dataframe.filter("salary > 10000").show
// throws Exception: cannot resolve 'salary' given input age, name
This is especially challenging when you are working with several transformation and aggregation steps.
Cannot operate on domain Object (lost domain object):
Once you have transformed a domain object into a DataFrame, you cannot regenerate it from it. In the following example, once we have created personDF from personRDD, we cannot recover the original RDD of the Person class (RDD[Person]).
Example:
case class Person(name : String , age : Int)
val personRDD = sc.makeRDD(Seq(Person("A",10),Person("B",20)))
val personDF = sqlContext.createDataFrame(personRDD)
personDF.rdd // returns RDD[Row], does not return RDD[Person]
Datasets API
Dataset API is an extension to DataFrames that provides a type-safe, object-oriented programming interface. It is a strongly-typed, immutable collection of objects that are mapped to a relational schema.
At the core of the Dataset API is a new concept called an encoder, which is responsible for converting between JVM objects and the tabular representation. The tabular representation is stored using Spark's internal Tungsten binary format, allowing for operations on serialized data and improved memory utilization. Spark 1.6 comes with support for automatically generating encoders for a wide variety of types, including primitive types (e.g. String, Integer, Long), Scala case classes, and Java Beans.
Dataset Features:-
Provides the best of both RDD and DataFrame:
RDD (functional programming, type safety), DataFrame (relational model, query optimization, Tungsten execution, sorting and shuffling)
Encoders:
With the use of encoders, it is easy to convert any JVM object into a Dataset, allowing users to work with both structured and unstructured data, unlike DataFrames.
Programming Languages supported:
Datasets API is currently only available in Scala and Java. Python and R are currently not supported in version 1.6. Python support is slated for version 2.0.
Type Safety:
The Dataset API provides compile-time safety which was not available in DataFrames. In the example below, we can see how a Dataset can operate on domain objects with compiled lambda functions.
Example:
case class Person(name : String , age : Int)
val personRDD = sc.makeRDD(Seq(Person("A",10),Person("B",20)))
val personDF = sqlContext.createDataFrame(personRDD)
val ds:Dataset[Person] = personDF.as[Person]
ds.filter(p => p.age > 25)
ds.filter(p => p.salary > 25)
// error: value salary is not a member of Person
ds.rdd // returns RDD[Person]
Interoperable: Datasets allow you to easily convert your existing RDDs and DataFrames into Datasets without boilerplate code.
Datasets API Limitation:-
Requires type casting to String:
Querying the data from Datasets currently requires us to specify the fields in the class as strings. Once we have queried the data, we are forced to cast the column to the required data type. On the other hand, if we use a map operation on Datasets, it does not use the Catalyst optimizer.
Example:
ds.select(col("name").as[String], $"age".as[Int]).collect()
No support for Python and R: As of release 1.6, Datasets only support Scala and Java. Python support will be introduced in Spark 2.0.
The Dataset API brings several advantages over the existing RDD and DataFrame APIs, with better type safety and functional programming. However, with the type casting requirements in the API, you would still not get the required type safety, and it can make your code brittle.
All three (RDD, DataFrame, and DataSet) in one picture.
RDD
RDD is a fault-tolerant collection of elements that can be operated on in parallel.
DataFrame
DataFrame is a Dataset organized into named columns. It is
conceptually equivalent to a table in a relational database or a data
frame in R/Python, but with richer optimizations under the hood.
Dataset
Dataset is a distributed collection of data. Dataset is a new interface added in Spark 1.6 that provides the benefits of RDDs
(strong typing, ability to use powerful lambda functions) with the
benefits of Spark SQL’s optimized execution engine.
Note:
A Dataset of Rows (Dataset[Row]) in Scala/Java is often referred to as a DataFrame.
Nice comparison of all of them with a code snippet.
Q: Can you convert one to the other like RDD to DataFrame or vice-versa?
Yes, both are possible
1. RDD to DataFrame with .toDF()
// An RDD of tuples (a Product type) lets createDataFrame infer the schema; toDF then renames the columns.
val rowsRdd = sc.parallelize(
  Seq(
    ("first", 2.0, 7.0),
    ("second", 3.5, 2.5),
    ("third", 7.0, 5.9)
  )
)
val df = spark.createDataFrame(rowsRdd).toDF("id", "val1", "val2")
df.show()
+------+----+----+
| id|val1|val2|
+------+----+----+
| first| 2.0| 7.0|
|second| 3.5| 2.5|
| third| 7.0| 5.9|
+------+----+----+
more ways: Convert an RDD object to Dataframe in Spark
2. DataFrame/DataSet to RDD with the .rdd method
val rowsRdd: RDD[Row] = df.rdd // DataFrame to RDD
Because DataFrame is weakly typed, developers aren't getting the benefits of the type system. For example, let's say you want to read something from SQL and run some aggregation on it:
val people = sqlContext.read.parquet("...")
val department = sqlContext.read.parquet("...")
people.filter("age > 30")
.join(department, people("deptId") === department("id"))
.groupBy(department("name"), people("gender"))
.agg(avg(people("salary")), max(people("age")))
When you say people("deptId"), you're not getting back an Int, or a Long, you're getting back a Column object which you need to operate on. In languages with a rich type systems such as Scala, you end up losing all the type safety which increases the number of run-time errors for things that could be discovered at compile time.
On the contrary, Dataset[T] is typed. When you do:
val people = sqlContext.read.parquet("...").as[People] // Dataset[People]
you're actually getting back People objects, where deptId is an actual integral type and not a column type, thus taking advantage of the type system.
As of Spark 2.0, the DataFrame and DataSet APIs will be unified, where DataFrame will be a type alias for DataSet[Row].
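For reference, in Spark 2.x that alias is literally how DataFrame is defined in the org.apache.spark.sql package object:
type DataFrame = Dataset[Row]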
Simply put, RDD is the core component, while DataFrame is an API introduced in Spark 1.3.
RDD
An RDD is a collection of data partitions. RDDs have a few properties, such as being:
Immutable,
Fault Tolerant,
Distributed,
and more.
An RDD can hold either structured or unstructured data.
DataFrame
DataFrame is an API available in Scala, Java, Python and R. It allows you to process any type of structured and semi-structured data. A DataFrame is a distributed collection of data organized into named columns, and Spark can optimize it far more easily than a raw RDD.
You can process JSON data, Parquet data and Hive data at the same time by using DataFrames.
val sampleDF = sqlContext.read.json("hdfs://localhost:9000/jsondata.json") // DataFrame
val sampleRDD = sampleDF.rdd // underlying RDD[Row] (the raw data)
Here sampleDF is the DataFrame and sampleRDD is the underlying RDD of raw Row data.
Most of the answers are correct; I only want to add one point here.
In Spark 2.0 the two APIs (DataFrame and DataSet) are unified into a single API.
"Unifying DataFrame and Dataset: In Scala and Java, DataFrame and Dataset have been unified, i.e. DataFrame is just a type alias for Dataset of Row. In Python and R, given the lack of type safety, DataFrame is the main programming interface."
Datasets are similar to RDDs, however, instead of using Java serialization or Kryo they use a specialized Encoder to serialize the objects for processing or transmitting over the network.
Spark SQL supports two different methods for converting existing RDDs into Datasets. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection based approach leads to more concise code and works well when you already know the schema while writing your Spark application.
The second method for creating Datasets is through a programmatic interface that allows you to construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows you to construct Datasets when the columns and their types are not known until runtime.
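As a rough illustration of those two methods (a hedged sketch; the input RDD and column names are made up for the example):
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}
import spark.implicits._
// Assume an existing RDD of (id, label) pairs.
val rdd = sc.parallelize(Seq((1L, "a"), (2L, "b")))
// 1. Reflection-based: the schema is inferred from a case class.
case class Record(id: Long, label: String)
val byReflection = rdd.map { case (id, label) => Record(id, label) }.toDS()
// 2. Programmatic: build a StructType at runtime and apply it to an RDD[Row].
val schema = StructType(Seq(
  StructField("id", LongType, nullable = false),
  StructField("label", StringType, nullable = true)
))
val byProgrammaticSchema = spark.createDataFrame(rdd.map { case (id, label) => Row(id, label) }, schema)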
Here you can find an RDD to DataFrame conversion answer:
How to convert rdd object to dataframe in spark
A DataFrame is equivalent to a table in RDBMS and can also be manipulated in similar ways to the "native" distributed collections in RDDs. Unlike RDDs, Dataframes keep track of the schema and support various relational operations that lead to more optimized execution.
Each DataFrame object represents a logical plan but because of their "lazy" nature no execution occurs until the user calls a specific "output operation".
A few insights from a usage perspective, RDD vs DataFrame:
RDDs are amazing, as they give us all the flexibility to deal with almost any kind of data: unstructured, semi-structured and structured. Since data is often not ready to fit into a DataFrame (even JSON), RDDs can be used to preprocess the data so that it can fit into a dataframe. RDDs are the core data abstraction in Spark.
Not all transformations that are possible on RDDs are possible on DataFrames; for example, subtract() is for RDDs while except() is for DataFrames.
Since DataFrames are like relational tables, they follow strict rules when using set/relational-theory transformations; for example, if you want to union two dataframes, the requirement is that both have the same number of columns and matching column datatypes. Column names can be different. These rules don't apply to RDDs. Here is a good tutorial explaining these facts.
There are performance gains when using DataFrames as others have already explained in depth.
Using DataFrames you don't need to pass arbitrary functions as you do when programming with RDDs.
You need a SQLContext/HiveContext to program dataframes, as they live in the Spark SQL area of the Spark ecosystem, but for RDDs you only need a SparkContext/JavaSparkContext, which live in the Spark Core libraries.
You can create a DataFrame from an RDD if you can define a schema for it.
You can also convert a DataFrame to an RDD and an RDD to a DataFrame.
I hope it helps!
A Dataframe is an RDD of Row objects, each representing a record. A
Dataframe also knows the schema (i.e., data fields) of its rows. While Dataframes
look like regular RDDs, internally they store data in a more efficient manner, taking advantage of their schema. In addition, they provide new operations not available on RDDs, such as the ability to run SQL queries. Dataframes can be created from external data sources, from the results of queries, or from regular RDDs.
Reference: Zaharia M., et al. Learning Spark (O'Reilly, 2015)
a. RDD (Spark1.0) —> Dataframe(Spark1.3) —> Dataset(Spark1.6)
b. RDD lets us decide HOW we want to do things, which limits the optimization Spark can do on the processing underneath. DataFrame/Dataset lets us decide WHAT we want to do and leaves it to Spark to decide how to do the computation.
c. Being in-memory JVM objects, RDDs involve the overhead of garbage collection and Java (or slightly better, Kryo) serialization, which get expensive as data grows and degrade performance.
DataFrames offer a huge performance improvement over RDDs because of two powerful features:
Custom Memory management (aka Project Tungsten)
Optimized Execution Plans (aka Catalyst Optimizer)
Performance-wise: RDD -> DataFrame -> Dataset
d. Where Dataset (with Project Tungsten and the Catalyst optimizer) scores over DataFrame is an additional feature it has: encoders.
Spark RDD (resilient distributed dataset) :
RDD is the core data abstraction API and has been available since the very first release of Spark (Spark 1.0). It is a lower-level API for manipulating distributed collections of data. The RDD API exposes some extremely useful methods which can be used to get very tight control over the underlying physical data structure. It is an immutable (read-only) collection of partitioned data distributed across different machines. RDDs enable in-memory computation on large clusters to speed up big data processing in a fault-tolerant manner.
To enable fault tolerance, RDDs use a DAG (Directed Acyclic Graph), which consists of a set of vertices and edges. The vertices and edges in the DAG represent the RDDs and the operations to be applied to them, respectively. The transformations defined on an RDD are lazy and execute only when an action is called.
Spark DataFrame :
Spark 1.3 introduced the DataFrame API (the Dataset API followed in 1.6). The DataFrame API organizes the data into named columns, like a table in a relational database. It enables programmers to define a schema on a distributed collection of data. Each row in a DataFrame is of type Row. Like an SQL table, each column must have the same number of rows in a DataFrame. In short, a DataFrame is a lazily evaluated plan which specifies the operations to be performed on the distributed collection of data. A DataFrame is also an immutable collection.
Spark DataSet :
As an extension to the DataFrame API, Spark 1.6 introduced the DataSet API, which provides a strictly typed, object-oriented programming interface in Spark. It is an immutable, type-safe collection of distributed data. Like DataFrame, the DataSet API also uses the Catalyst engine in order to enable execution optimization.
A DataFrame is an RDD that has a schema. You can think of it as a relational database table, in that each column has a name and a known type. The power of DataFrames comes from the fact that, when you create a DataFrame from a structured dataset (JSON, Parquet, ...), Spark is able to infer a schema by making a pass over the entire dataset that's being loaded. Then, when calculating the execution plan, Spark can use the schema and do substantially better computation optimizations.
Note that DataFrame was called SchemaRDD before Spark v1.3.0
Apache Spark – RDD, DataFrame, and DataSet
Spark RDD –
RDD stands for Resilient Distributed Dataset. It is a read-only,
partitioned collection of records. RDD is the fundamental data structure
of Spark. It allows a programmer to perform in-memory computations on
large clusters in a fault-tolerant manner, thus speeding up the task.
Spark Dataframe –
Unlike an RDD, data is organized into named columns, for example like a table
in a relational database. It is an immutable distributed collection of
data. DataFrame in Spark allows developers to impose a structure onto
a distributed collection of data, allowing higher-level abstraction.
Spark Dataset –
Datasets in Apache Spark are an extension of the DataFrame API which
provides a type-safe, object-oriented programming interface. A Dataset
takes advantage of Spark's Catalyst optimizer by exposing expressions
and data fields to the query planner.

Can't understand how Scala operations function in Apache Spark

Hello everyone,
So I started to learn about the Apache Spark architecture, and I understand how the data flow works at a high level.
What I learned is that Spark jobs work in stages that contain tasks, which operate on RDDs, and RDDs are created with lazy transformations starting from the Spark console (correct me if I'm wrong).
What I didn't get:
There are other types of data structures in Spark, DataFrame and Dataset, and there are functions to manipulate them,
so what is the relation between those functions and the tasks applied to RDDs?
Coding in Scala has operations on RDDs, which is logical as far as I know, but there are also other types of data structures that I can operate on and manipulate, like List, Stream, Vector, etc. So my question is:
how can Spark execute these operations if they are not applied to RDDs?
I have an estimate of the time complexity of each algorithm operating on the Scala data structures, based on the official documentation, but I can't find an estimate of the time complexity of operations on RDDs, for example count() or reduceByKey() applied to RDDs.
Why can't we evaluate exactly the complexity of a Spark app, and is it possible to evaluate the complexity of elementary tasks?
More formally, what are RDDs and what is the relation between them and everything else in Spark?
If someone can clarify this confusion for me, I'd be grateful.
so what is the relation between those functions and the tasks applied to RDDs?
DataFrames, Datasets, and RDDs are three APIs from Spark. Check out this link.
how can Spark execute these operations if they are not applied to RDDs?
RDDs are Spark's core data structure; the actions and transformations specified by Spark can be applied to RDDs, and within an RDD action or transformation we apply ordinary Scala operations. Each Spark API has its own set of operations. Read the link shown previously to get a better idea of how parallelism is achieved in these operations (a small sketch follows below).
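A small sketch of that point: the code inside each closure is plain Scala running on JVM objects within a single task, while Spark schedules the transformations and the final action across the RDD's partitions (the data here is made up):
val rdd = sc.parallelize(Seq("a b c", "d e", "f"))
val tokenCounts = rdd
  .flatMap(line => line.split(" ").toList) // ordinary Scala String/List operations inside the closure
  .map(token => (token, 1))                // narrow transformation: no shuffle
  .reduceByKey(_ + _)                      // wide transformation: shuffles by key
tokenCounts.collect()                      // action: this is what actually triggers the job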
why can't we evaluate exactly the complexity of a Spark app, and is it possible to evaluate the complexity of elementary tasks?
This article explains Map Reduce Complexity
https://web.stanford.edu/~ashishg/papers/mapreducecomplexity.pdf

spark map RDD vs joins

Wondering which of the two will be more performant for a large dataset.
Let's say I've loaded orders from Mongo; the schema for Orders is
case class Orders(organization: String, orderId: Long, recipient: String)
val orders = MongoSpark.load[Orders](spark)
Now I see that there are two ways of going about the next step: I'd like to look up each company that is attributed to an order.
Option 1 is a map RDD (IndexedRDD):
val companies = MongoSpark.load[Company](spark, ReadConfig(...)).map { c => (c.id, c)}
val companiesMap = IndexedRDD(companies.rdd)
or the second option would be to run a join:
val joined = orders.join(MongoSpark.load[Company](spark), $"orderId" === $"companyId")
This dataset on a production server ranges from 500 GB to 785 GB.
With the latest advances in Spark (>2.0), when it comes to RDD vs DataFrame the correct answer is almost always DataFrames. I suggest you always try to stay in the DataFrame world and don't transition to RDDs at all.
In more detail:
RDDs will always carry all the fields for every row. They will also materialize the Scala case class, and all the strings are heavyweight Java Strings, etc. On the other hand, DataFrames with Tungsten (whole-stage code generation and its optimized encoders) and Catalyst make everything faster.
RDD is all Scala/Java. DataFrames use their own super thin encoding for types that has a much more compressed/cache-friendly representation for the same data.
RDD code doesn't go through Catalyst, meaning nothing will actually get (query) optimized.
Finally, DataFrames have a code-generator that really optimizes the chained operations in different stages.
This read is really a must.
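For the concrete case in the question, a DataFrame-style join could look roughly like this (a sketch only; the join keys mirror the question's snippet and are assumptions, and the broadcast hint only makes sense if the companies side is small):
import org.apache.spark.sql.functions.broadcast
val companies = MongoSpark.load[Company](spark)
// Plain shuffle join on the assumed keys.
val joined = orders.join(companies, orders("orderId") === companies("companyId"))
// If companies fits comfortably in memory, a broadcast hint avoids shuffling the large orders side.
val joinedBroadcast = orders.join(broadcast(companies), orders("orderId") === companies("companyId"))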