Spark DF: jrny_df1.createOrReplaceTempView("journeymap_drvs1")
approx: 10MM records
Creating a SQL table from this view takes a long time:
create table temp.ms_journey_drvsv1 as select * from journeymap_drvs1;
Is there any process I can follow to optimize the speed of the table creation? We use Spark 2.4 with 88 cores and 671 GB of memory.
Check the cluster configuration, then partition the DataFrame accordingly so that parallelism is achieved, which will eventually reduce the time.
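A minimal sketch of that suggestion, assuming the DataFrame is jrny_df1 from the question and Parquet output; the partition count is only a starting point to tune against the 88 cores:

// Sketch only: repartition to spread the write across the cluster, then create
// the table directly from the DataFrame instead of going through the temp view + CTAS.
// 176 (2x the 88 cores) is an assumed starting point, not a recommendation.
val repartitioned = jrny_df1.repartition(176)

repartitioned.write
  .mode("overwrite")
  .format("parquet")               // assumption: adjust to the format your warehouse expects
  .saveAsTable("temp.ms_journey_drvsv1")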
I'm trying to check the size of the different tables we're generating in our data warehouse, so we can have an automatic way to calculate partition sizes in subsequent runs.
In order to get the table size I'm getting the stats from dataframes in the following way:
val db = "database"
val table_name = "table_name"
val table_size_bytes = spark.read.table(s"$db.$table_name").queryExecution.analyzed.stats.sizeInBytes
This was working fine until I started running the same code on partitioned tables. Each time I ran it on a partitioned table I got the same value for sizeInBytes, which is the max allowed value for BigInt: 9223372036854775807.
Is this a bug in Spark or should I be running this in a different way for partitioned tables?
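For reference, a hedged sketch of two alternative ways to measure the size, assuming the table is Hive-backed and its location is readable from the driver (db and table_name as defined above; everything else is illustrative):

import spark.implicits._
import org.apache.hadoop.fs.Path

// 1) Populate catalog statistics first, then read sizeInBytes as before.
spark.sql(s"ANALYZE TABLE $db.$table_name COMPUTE STATISTICS")
val statsSize = spark.read.table(s"$db.$table_name")
  .queryExecution.analyzed.stats.sizeInBytes

// 2) Sum the on-disk file sizes under the table location directly.
val location = spark.sql(s"DESCRIBE FORMATTED $db.$table_name")
  .filter($"col_name" === "Location")
  .select("data_type").as[String].first()
val fs = new Path(location).getFileSystem(spark.sparkContext.hadoopConfiguration)
val diskBytes = fs.getContentSummary(new Path(location)).getLength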
Technology: Spark 3.0.3 with Scala 2.12.10
I'm trying to pivot a Spark dataframe with 238 million records (Parquet files totalling 1.1 GB), registered as a Spark tempView with 4 columns: (timestamp, asset, tag, value).
I need to pivot the dataframe on the tag column, so I fetched the distinct values of tag and passed them into the IN clause.
SQL Query Used:
SELECT *
FROM (
  SELECT timestamp, tag, value
  FROM temp_table
  GROUP BY timestamp, tag, value
)
PIVOT (
  AVG(value) FOR tag IN (
    tagval_1,
    tagval_2,
    ...,
    tagval_4000
  )
)
I have close to 4000 distinct values for the tag column.
Configuration used to run the Query:
Spark Driver: 5GB
Spark Executor: 14 Executors of 7GB memory and 2 cores each.
I always end up getting a JVM memory overhead exception, so I kept increasing the driver's memory in 5 GB increments; eventually, at 30 GB of driver memory, the pivot query ran, taking a whopping 25 minutes to finish.
Am I doing something wrong? Is the resource usage justified? How can a total of 1.1 GB of raw files take so many resources to pivot?
Any help would be appreciated.
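For comparison, the same pivot through the DataFrame API, as a sketch; temp_table and the tag values follow the question, and passing the values explicitly at least spares Spark the extra job to discover them (the ~4000-column output is still inherently wide):

// placeholder for the ~4000 distinct tags collected earlier
val tagValues: Seq[String] = Seq("tagval_1", "tagval_2" /* ..., "tagval_4000" */)

val pivoted = spark.table("temp_table")
  .groupBy("timestamp")
  .pivot("tag", tagValues)   // explicit values skip the distinct() scan
  .avg("value")

pivoted.write.mode("overwrite").parquet("/tmp/pivoted")   // hypothetical output path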
How do I join a data frame with Oracle over JDBC?
The schema of the data frame is acct_n0, stmt_st_dt, stmt_end_dt, posn_as_of_dt.
We have to take the posn_as_of_dt values from the data frame, join them against a combination of a dimension table and a fact table in Oracle, and pull the balances from the fact table. That combination returns around 7M records, whereas the data frame has fewer than 50 records, and the output count should match the data frame count. I tried to create the data frame using spark read jdbc with the dbtable set to "select dim.acct_key,fact.balances,fact.posn_as_of_dt from dim_table dim,fact_table fact where dim.acct_no=fact.acct_no", but it gets stuck while joining with the data frame. Any other thoughts on how to speed up this join?
Basically, what I'm after is: is there any way I can take this data frame, join it directly against Oracle, and pull out only the matching records?
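One hedged sketch of that idea: since the Spark side has fewer than 50 rows, collect its join keys on the driver and push them into the Oracle query itself, so only the matching fact rows ever cross the JDBC connection. Table and column names follow the question; the df variable, connection details, and date formatting are assumptions:

// df is the small (<50 row) data frame from the question
val dates = df.select("posn_as_of_dt").distinct()
  .collect()
  .map(r => s"'${r.get(0)}'")   // assumes the dates render as valid Oracle literals; wrap in TO_DATE(...) if not
  .mkString(",")

val pushdownQuery =
  s"""(select dim.acct_key, fact.balances, fact.posn_as_of_dt
     |   from dim_table dim, fact_table fact
     |  where dim.acct_no = fact.acct_no
     |    and fact.posn_as_of_dt in ($dates)) oracle_subq""".stripMargin

val oracleDf = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//host:1521/service")   // placeholder connection details
  .option("dbtable", pushdownQuery)
  .option("user", "user")
  .option("password", "password")
  .load()

// broadcast the small side so the final join stays cheap
import org.apache.spark.sql.functions.broadcast
val result = broadcast(df).join(oracleDf, Seq("posn_as_of_dt"))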
When we use Spark to read data from CSV or a database as follows, it automatically splits the data into multiple partitions and sends them to the executors:
spark
.read
.option("delimiter", ",")
.option("header", "true")
.option("mergeSchema", "true")
.option("codec", properties.getProperty("sparkCodeC"))
.format(properties.getProperty("fileFormat"))
.load(inputFile)
Currently, I have an id list:
[1,2,3,4,5,6,7,8,9,...1000]
What I want to do is split this list into multiple partitions, send them to the executors, and in each executor run SQL like:
ids.foreach(id => {
  select * from table where id = id
})
When we load data from Cassandra, the connector generates the query SQL as:
select columns from table where Token(k) >= ? and Token(k) <= ?
This means the connector will scan the whole table. In fact, I don't need to scan the whole table; I just want to get the data from the table where k (the partition key) is in the id list.
The table schema is:
CREATE TABLE IF NOT EXISTS tab.events (
  k int,
  o text,
  event text,
  PRIMARY KEY (k, o)
);
Or, how can I use Spark to load data from Cassandra using a predefined SQL statement without scanning the whole table?
You simply need to use the joinWithCassandraTable function to select only the data required for your operation. But be aware that this function is only available via the RDD API.
Something like this:
val joinWithRDD = your_df.rdd.joinWithCassandraTable("tab","events")
You need to make sure that the column names in your DataFrame match the partition key names in Cassandra - see the documentation for more information.
The DataFrame implementation is only available in the DSE version of the Spark Cassandra Connector, as described in the following blog post.
Update, September 2020: support for joins with Cassandra was added in Spark Cassandra Connector 2.5.0.
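A minimal end-to-end sketch of the RDD join, assuming the id list from the question and the tab.events schema above (k is the partition key):

import com.datastax.spark.connector._

// wrap each id in a Tuple1 so the connector can map it onto the partition key k
val ids = spark.sparkContext.parallelize(1 to 1000).map(Tuple1(_))

// only the partitions listed in ids are read from Cassandra
val joined = ids.joinWithCassandraTable("tab", "events")
joined.take(5).foreach(println)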
This question is a spin-off from [this one](saving a list of rows to a Hive table in pyspark).
EDIT: please see my update edits at the bottom of this post.
I have used both Scala and now PySpark to do the same task, but I am having problems with VERY slow saves of a dataframe to Parquet or CSV, or with converting a dataframe to a list or array type data structure. Below is the relevant Python/PySpark code and info:
from pyspark.sql.functions import lit

# Table is a list of Rows from a small Hive table I loaded using:
#   query = "SELECT * FROM Table"
#   Table = sqlContext.sql(query).collect()
for i in range(len(Table)):
    val1 = Table[i][0]
    val2 = Table[i][1]
    count = Table[i][2]
    x = 100 - count

    # hivetemp is a table that I copied from Hive to my hdfs using:
    #   create external table IF NOT EXISTS hivetemp LIKE hivetableIwant2copy LOCATION "/user/name/hiveBackup";
    #   INSERT OVERWRITE TABLE hivetemp SELECT * FROM hivetableIwant2copy;
    query = "SELECT * FROM hivetemp WHERE col1<>\"" + val1 + "\" AND col2 ==\"" + val2 + "\" ORDER BY RAND() LIMIT " + str(x)
    rows = sqlContext.sql(query)
    rows = rows.withColumn("col4", lit(10))
    rows = rows.withColumn("col5", lit(some_string))

    # writing to parquet is heck slow AND I can't work with pandas due to the library not being installed on the server
    rows.saveAsParquetFile("rows" + str(i) + ".parquet")

    # tried this before and it was heck slow also
    # rows_list = rows.collect()
    # shuffle(rows_list)
I have tried to do the above in Scala, and I had similar problems. I could easily load the Hive table, or a query over a Hive table, but needing to do a random shuffle or store a large dataframe ran into memory issues. There were also some challenges with being able to add the 2 extra columns.
The Hive table (hiveTemp) that I want to add rows to has 5,570,000 (~5.5 million) rows and 120 columns.
The Hive table that I am iterating through in the for loop has 5000 rows and 3 columns. There are 25 unique values of val1 (a column in hiveTemp), and 3000 combinations of val1 and val2. Val2 could be one of 5 columns and its specific cell value. This means that if I tweaked the code, I could reduce the number of row lookups from 5000 down to 26, but the number of rows I would have to retrieve, store and randomly shuffle would be pretty large, and hence a memory issue (unless anyone has suggestions on this).
As for how many total rows I need to add to the table, it is probably about 100,000.
The ultimate goal is to have the original table of 5.5 million rows appended with the 100k+ rows, written as a Hive or Parquet table. If it's easier, I am fine with writing the 100k rows to their own table that can be merged with the 5.5 million-row table later.
Scala or Python is fine, though Scala is preferred.
Any advice on this and the options that would be best would be great.
Thanks a lot!
EDIT
Some additional thoughts I had on this problem:
I used the hash partitioner to partition the Hive table into 26 partitions. This is based on a column value for which there are 26 distinct values. The operations I want to perform in the for loop could be generalized so that they only need to happen on each of these partitions.
That being said, how could I write the Scala code to do this (or what guide can I look at online), so that a separate executor handles each of these loops on its own partition? I am thinking this would make things much faster.
I know how to do something like this using multithreading, but I'm not sure how to do it in the Scala/Spark paradigm.
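Not exactly the multithreading described above, but one sketch in the Scala/Spark direction: Spark's scheduler accepts jobs from multiple driver threads, so the 26 per-value loops can be submitted concurrently as Futures. Everything below (the value list, the filter, the constants, the output path) is placeholder logic standing in for the body of the for loop:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import org.apache.spark.sql.functions.lit
import spark.implicits._

val val1Values: Seq[String] = Seq("A", "B" /* ... the 26 distinct values ... */)   // placeholder

val jobs = val1Values.map { v =>
  Future {
    // stand-in for the per-iteration query (the RAND()/LIMIT sampling would go here too)
    spark.table("hivetemp")
      .filter($"col1" =!= v)
      .withColumn("col4", lit(10))
      .withColumn("col5", lit("some_string"))            // placeholder constant
      .write.mode("append").parquet(s"rows_$v.parquet")  // placeholder output path
  }
}
jobs.foreach(Await.result(_, Duration.Inf))

The default global execution context caps concurrency at the number of driver cores; a fixed-size thread pool would give finer control, but that detail is orthogonal to the sketch.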