I'm working with a Spark DataFrame (in Scala), and what I'd like to do is group by a column and turn the different groups into a sequence of DataFrames.
So it would look something like
df.groupBy("col").toSeq -> Seq[DataFrame]
Even better would be to turn it into something keyed by the group value
df.groupBy("col").toSeq -> Map[key, DataFrame]
This seems like an obvious thing to do, but I can't seem to figure out how it might work.
This is what you could do; here is a simple example:
import spark.implicits._
val data = spark.sparkContext.parallelize(Seq(
(29,"City 2", 72),
(28,"City 3", 48),
(28,"City 2", 19),
(27,"City 2", 16),
(28,"City 1", 84),
(28,"City 4", 72),
(29,"City 4", 39),
(27,"City 3", 42),
(26,"City 3", 68),
(27,"City 1", 89),
(27,"City 4", 104),
(26,"City 2", 19),
(29,"City 3", 27)
)).toDF("week", "city", "sale")
//create a dataframe with dummy data
//get list of cities
val city = data.select("city").distinct.collect().flatMap(_.toSeq)
// get all the rows for each city
// this returns a Seq of (city, DataFrame) pairs
val result = city.map(c => c -> data.where($"city" === c))
//print all the dataframes
result.foreach { a =>
  println(s"Dataframe with ${a._1}")
  a._2.show()
}
The output looks like this:
Dataframe with City 1
+----+------+----+
|week| city|sale|
+----+------+----+
| 28|City 1| 84|
| 27|City 1| 89|
+----+------+----+
Dataframe with City 3
+----+------+----+
|week| city|sale|
+----+------+----+
| 28|City 3| 48|
| 27|City 3| 42|
| 26|City 3| 68|
| 29|City 3| 27|
+----+------+----+
Dataframe with City 4
+----+------+----+
|week| city|sale|
+----+------+----+
| 28|City 4| 72|
| 29|City 4| 39|
| 27|City 4| 104|
+----+------+----+
Dataframe with City 2
+----+------+----+
|week| city|sale|
+----+------+----+
| 29|City 2| 72|
| 28|City 2| 19|
| 27|City 2| 16|
| 26|City 2| 19|
+----+------+----+
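If you specifically want the key-to-DataFrame mapping the question asks for (the Map[key, DataFrame] variant), you can turn that sequence of pairs into a Map. A minimal sketch, reusing result from the example above:
// Reuses `result` (the (city, DataFrame) pairs built above)
val byCity: Map[Any, org.apache.spark.sql.DataFrame] = result.toMap

// Look up a single group's DataFrame by its key
byCity.get("City 1").foreach(_.show())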
You can also use partitionBy to group the data when writing it out:
dataframe.write.partitionBy("col").parquet("outputpath")
This creates a separate output directory for each value of "col".
Hope this helps!
Related
I have a DataFrame with 2 columns (df1). Now I want to merge the column values into one column (df2). How?
Let's say you have DataFrame like this:
d = [
("Value 1", 1),
("Value 2", 2),
("Value 3", 3),
("Value 4", 4),
("Value 5", 5),
]
df = spark.createDataFrame(d,['col1','col2'])
df.show()
# output
+-------+----+
| col1|col2|
+-------+----+
|Value 1| 1|
|Value 2| 2|
|Value 3| 3|
|Value 4| 4|
|Value 5| 5|
+-------+----+
You can join the columns and format them as you want using the following syntax:
from pyspark.sql import functions as F

(
    df.withColumn("newCol",
        F.format_string("Col 1: %s Col 2: %s", df.col1, df.col2))
    .show(truncate=False)
)
# output
+-------+----+-----------------------+
|col1 |col2|newCol |
+-------+----+-----------------------+
|Value 1|1 |Col 1: Value 1 Col 2: 1|
|Value 2|2 |Col 1: Value 2 Col 2: 2|
|Value 3|3 |Col 1: Value 3 Col 2: 3|
|Value 4|4 |Col 1: Value 4 Col 2: 4|
|Value 5|5 |Col 1: Value 5 Col 2: 5|
+-------+----+-----------------------+
from pyspark.sql.functions import concat
df1.withColumn("Merge", concat(df1.Column_1, df1.Column_2)).show()
You can use a struct or a map.
struct:
from pyspark.sql import functions as F

df.withColumn(
    "price_struct",
    F.struct(
        (F.col("total_price")*100).alias("amount"),
        "total_price_currency",
        F.lit("CENTI").alias("unit")
    )
)
results in
+-----------+--------------------+--------------------+
|total_price|total_price_currency| price_struct|
+-----------+--------------------+--------------------+
| 79.0| USD|[7900.0, USD, CENTI]|
+-----------+--------------------+--------------------+
or as a map
df.withColumn("price_map",
    F.create_map(
        F.lit("currency"), F.col("total_price_currency"),
        F.lit("amount"), F.col("total_price")*100,
        F.lit("unit"), F.lit("CENTI")
    )
)
results in
+-----------+--------------------+--------------------+
|total_price|total_price_currency| price_map|
+-----------+--------------------+--------------------+
| 79.0| USD|[currency -> USD,...|
+-----------+--------------------+--------------------+
I am working on the GraphFrames part, where I need the edges/links for d3.js to use the indexed values of the vertices/nodes as source and destination.
Now I have VertexDF as
+--------------------+-----------+
| id| rowID|
+--------------------+-----------+
| Raashul Tandon| 3|
| Helen Jones| 5|
+--------------------+-----------+
EdgesDF
+-------------------+--------------------+
| src| dst|
+-------------------+--------------------+
| Raashul Tandon| Helen Jones |
+-------------------+--------------------+
Now I need to transform this EdgesDF as below
+-------------------+--------------------+
| src| dst|
+-------------------+--------------------+
| 3 | 5 |
+-------------------+--------------------+
Each column value should be replaced with the index of that name taken from VertexDF. I am hoping to do this with higher-order functions.
My approach is to convert VertexDF to a map, then iterate over EdgesDF and replace every occurrence.
What I have tried
Made a map of names to ids:
val Actmap = VertxDF.collect().map(f =>{
val name = f.getString(0)
val id = f.getLong(1)
(name,id)
})
.toMap
Used that map with EdgesDF
EdgesDF.collect().map(f => {
val src = f.getString(0)
val dst = f.getString(1)
val src_id = Actmap.get(src)
val dst_id = Actmap.get(dst)
(src_id,dst_id)
})
Your approach of collect-ing the vertex and edge dataframes would work only if they're small. I would suggest left-joining the edge and vertex dataframes to get what you need:
import org.apache.spark.sql.functions._
import spark.implicits._
val VertxDF = Seq(
("Raashul Tandon", 3),
("Helen Jones", 5),
("John Doe", 6),
("Rachel Smith", 7)
).toDF("id", "rowID")
val EdgesDF = Seq(
("Raashul Tandon", "Helen Jones"),
("Helen Jones", "John Doe"),
("Unknown", "Raashul Tandon"),
("John Doe", "Rachel Smith")
).toDF("src", "dst")
EdgesDF.as("e").
join(VertxDF.as("v1"), $"e.src" === $"v1.id", "left_outer").
join(VertxDF.as("v2"), $"e.dst" === $"v2.id", "left_outer").
select($"v1.rowID".as("src"), $"v2.rowID".as("dst")).
show
// +----+---+
// | src|dst|
// +----+---+
// | 3| 5|
// | 5| 6|
// |null| 3|
// | 6| 7|
// +----+---+
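If the vertex DataFrame is small (the usual case for a lookup table like this), you can additionally hint Spark to broadcast it so the joins avoid a shuffle. A sketch reusing the VertxDF and EdgesDF defined above:
import org.apache.spark.sql.functions.broadcast

// Same joins as above, but hinting that the small vertex DataFrame can be broadcast
EdgesDF.as("e").
  join(broadcast(VertxDF.as("v1")), $"e.src" === $"v1.id", "left_outer").
  join(broadcast(VertxDF.as("v2")), $"e.dst" === $"v2.id", "left_outer").
  select($"v1.rowID".as("src"), $"v2.rowID".as("dst")).
  show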
I'm working on a Spark dataframe containing this kind of data:
A,1,2,3
B,1,2,3
C,1,2,3
D,4,2,3
I want to aggregate this data on the last three columns, so the output would be:
ABC,1,2,3
D,4,2,3
How can I do it in Scala? (This is not a big dataframe, so performance is secondary here.)
As mentioned in the comments, you can first use groupBy on the last three columns and then use concat_ws on your first column. Here is one way of doing it:
//create your original DF
import org.apache.spark.sql.functions.{collect_list, concat_ws}
import spark.implicits._

val df = Seq(("A",1,2,3),("B",1,2,3),("C",1,2,3),("D",4,2,3)).toDF("col1","col2","col3","col4")
df.show
//output
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| A| 1| 2| 3|
| B| 1| 2| 3|
| C| 1| 2| 3|
| D| 4| 2| 3|
+----+----+----+----+
//group by "col2","col3","col4" and store "col1" as list and then
//convert it to string
df.groupBy("col2","col3","col4")
.agg(collect_list("col1").as("col1"))
//you can change the separator via concat_ws's first argument
.select(concat_ws("", $"col1") as "col1",$"col2",$"col3",$"col4").show
//output
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| D| 4| 2| 3|
| ABC| 1| 2| 3|
+----+----+----+----+
Alternatively, you can map by your key (in this case c2, c3, c4) and then concatenate your values via reduceByKey. At the end, the last map formats each row as needed. It should be something like the following:
val data=sc.parallelize(List(
("A", "1", "2", "3"),
("B", "1", "2", "3"),
("C", "1", "2", "3"),
("D", "4", "2", "3")))
val res = data.map{ case (c1, c2, c3, c4) => ((c2, c3, c4), String.valueOf(c1)) }
.reduceByKey((x, y) => x + y)
.map(v => v._2.toString + "," + v._1.productIterator.toArray.mkString(","))
.collect
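One caveat for either approach: neither collect_list nor the order in which reduceByKey concatenates values is guaranteed, so "ABC" could come out as, say, "BCA". If a deterministic result matters, one option is to sort the collected values first. A sketch reusing the df from the DataFrame version above:
import org.apache.spark.sql.functions.{collect_list, concat_ws, sort_array}

// Sort the collected names so the concatenated key comes out in a deterministic order
df.groupBy("col2", "col3", "col4")
  .agg(sort_array(collect_list("col1")).as("col1"))
  .select(concat_ws("", $"col1").as("col1"), $"col2", $"col3", $"col4")
  .show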
Is there a way to group the DataFrame using its own schema?
This produces data in the format:
Country | Class | Name | age
US, 1,'aaa',21
US, 1,'bbb',20
BR, 2,'ccc',30
AU, 3,'ddd',20
....
I would like to do something like
Country | Class 1 Students | Class 2 Students
US , 2, 0
BR , 0, 1
....
Condition 1: group by country.
Condition 2: get only the class 1 and class 2 values.
This is my source code:
val df = Seq(("US", 1, "AAA",19),("US", 1, "BBB",20),("KR", 2, "CCC",29),
("AU", 3, "DDD",18)).toDF("country", "class", "name","age")
df.groupBy("country").agg(count($"name") as "Cnt")
You should use the pivot function.
val df = Seq(("US", 1, "AAA",19),("US", 1, "BBB",20),("KR", 2, "CCC",29),
("AU", 3, "DDD",18)).toDF("country", "class", "name","age")
df.groupBy("country").pivot("class").agg(count($"name") as "Cnt").show
+-------+---+---+---+
|country| 1| 2| 3|
+-------+---+---+---+
| AU| 0| 0| 1|
| US| 2| 0| 0|
| KR| 0| 1| 0|
+-------+---+---+---+
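Since the question only needs the class 1 and class 2 counts, you can also pass the pivot values explicitly, which both limits the output columns and saves Spark the extra pass it otherwise does to discover them. A sketch reusing the same df:
import org.apache.spark.sql.functions.count

// Pivot only on the class values of interest
df.groupBy("country")
  .pivot("class", Seq(1, 2))
  .agg(count($"name") as "Cnt")
  .show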
I am using Scala and Spark and have a simple dataframe.map to produce the required transformation on the data. However, I need to output an additional row of data alongside the modified original. How can I use dataframe.map to do this?
ex:
dataset from:
id, name, age
1, john, 23
2, peter, 32
If age < 25, default it to 25.
dataset to:
id, name, age
1, john, 25
1, john, -23
2, peter, 32
Would a unionAll handle it?
e.g.
df1 = original dataframe
df2 = transformed df1
df1.unionAll(df2)
EDIT: implementation using unionAll()
import org.apache.spark.sql.functions.udf
import sqlContext.implicits._

val df1 = sqlContext.createDataFrame(Seq( (1,"john",23) , (2,"peter",32) )).
  toDF( "id","name","age")
def udfTransform = udf[Int, Int] { age => if (age < 25) 25 else age }
val df2=df1.withColumn("age2", udfTransform($"age")).
where("age!=age2").
drop("age2")
df1.withColumn("age", udfTransform($"age")).
unionAll(df2).
orderBy("id").
show()
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1| john| 25|
| 1| john| 23|
| 2|peter| 32|
+---+-----+---+
Note: the implementation differs a bit from the originally proposed (naive) solution. The devil is always in the detail!
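Side note: in Spark 2.x the same operation is also available under the name union, with the same by-position column semantics, so the pipeline above works unchanged with the newer name:
// Same pipeline as above, written with union (unionAll's newer name)
df1.withColumn("age", udfTransform($"age")).
  union(df2).
  orderBy("id").
  show()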
EDIT 2: implementation using a nested array and explode
import org.apache.spark.sql.functions.explode

val df1 = sqlContext.createDataFrame(Seq( (1,"john",23) , (2,"peter",32) )).
  toDF( "id","name","age")
def udfArr = udf[Array[Int], Int] { age =>
  if (age < 25) Array(age, 25) else Array(age) }
val df2 = df1.withColumn("age", udfArr($"age"))
df2.show()
+---+-----+--------+
| id| name| age|
+---+-----+--------+
| 1| john|[23, 25]|
| 2|peter| [32]|
+---+-----+--------+
df2.withColumn("age",explode($"age") ).show()
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1| john| 23|
| 1| john| 25|
| 2|peter| 32|
+---+-----+---+
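The same "one row in, possibly two rows out" idea can also be written with a typed flatMap instead of a UDF plus explode. A hedged sketch, using a hypothetical Person case class that mirrors the (id, name, age) rows of the example:
import spark.implicits._

// Hypothetical case class mirroring the (id, name, age) rows of the example
case class Person(id: Int, name: String, age: Int)

val people = Seq(Person(1, "john", 23), Person(2, "peter", 32)).toDS()

// Emit a defaulted copy alongside the original when age is under 25
people.flatMap { p =>
  if (p.age < 25) Seq(p.copy(age = 25), p) else Seq(p)
}.orderBy("id").show()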