My df has multiple columns. I want to check whether a column name contains a substring, like % in SQL.
I tried this, but it doesn't seem to work. I don't want to give the full name to find whether that column exists.
If I can find this column, I also want to rename it using .withColumnRenamed.
Something like:
if (df.columns.contains("%ABC%" or "%BCD%")) df.withColumnRenamed("%ABC%" or "%BCD%", "ABC123") else println(0)
Maybe you can try this.
The filter can help you select the columns that need to be updated.
Write your update logic in the foldLeft()() call.
foldLeft is a useful method in Scala. If you want to learn more about foldLeft, you can search for "scala foldLeft example" on Google.
So, good luck.
df.schema.fieldNames
  .filter(name => name.contains("ABC") || name.contains("BCD")) // select the columns that need renaming
  .foldLeft(df)((acc, name) => acc.withColumnRenamed(name, ("abc_" + name).toLowerCase))
First, find a column that matches your criteria:
df.columns
.filter(c => c.contains("ABC") || c.contains("BCD"))
.take(1)
This will either return an empty Array[String] if no such column exists or an array with a single element if the column does exist. take(1) is there to make sure that you won't be renaming more than one column using the same new name.
Continuing the previous expression, renaming the column boils down to calling foldLeft, which iterates over the collection chaining its second argument to the "zero" (df in this case):
.foldLeft(df)((ds, c) => ds.withColumnRenamed(c, "ABC123"))
If the array was empty, nothing will get called and the result will be the original df.
Here it is in action:
df.printSchema
// root
// |-- AB: integer (nullable = false)
// |-- ABCD: string (nullable = true)
df.columns
.filter(c => c.contains("ABC") || c.contains("BCD"))
.take(1)
.foldLeft(df)(_.withColumnRenamed(_, "ABC123"))
.printSchema
// root
// |-- AB: integer (nullable = false)
// |-- ABC123: string (nullable = true)
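If you prefer something closer to the if / else shape in the question, here is a minimal sketch of the same logic using Option.find (the println(0) fallback just mirrors the question's pseudocode):
df.columns.find(c => c.contains("ABC") || c.contains("BCD")) match {
  case Some(c) => df.withColumnRenamed(c, "ABC123")
  case None    => { println(0); df }
}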
Consider a Spark DataFrame df with the following schema:
root
|-- date: timestamp (nullable = true)
|-- customerID: string (nullable = true)
|-- orderID: string (nullable = true)
|-- productID: string (nullable = true)
One column should be cast to a different type, other columns should just have their white-space trimmed.
df.select(
    $"date",
    df("customerID").cast(IntegerType),
    $"orderID",
    $"productID")
  .withColumn("orderID", trim(col("orderID")))
  .withColumn("productID", trim(col("productID")))
The operations seem to require different syntax; casting is done via select, while trim is done via withColumn.
I'm used to R and dplyr where all the above would be handled in a single mutate function, so mixing select and withColumn feels a bit cumbersome.
Is there a cleaner way to do this in a single pipe?
You can use either one. The difference is that withColumn will add (or replace if the same name is used) a new column to the dataframe while select will only keep the columns you specified. Depending on the situation, choose one to use.
The cast can be done using withColumn as follows:
df.withColumn("customerID", $"customerID".cast(IntegerType))
.withColumn("orderID", trim($"orderID"))
.withColumn("productID", trim($"productID"))
Note that you do not need to use withColumn on the date column above.
The trim functions can be done in a select as follows; here the column names are kept the same:
df.select(
  $"date",
  $"customerID".cast(IntegerType),
  trim($"orderID").as("orderID"),
  trim($"productID").as("productID"))
I have a spark dataframe with the following schema and class data:
>ab
ab: org.apache.spark.sql.DataFrame = [block_number: bigint, collect_list(to): array<string> ... 1 more field]
>ab.printSchema
root
 |-- block_number: long (nullable = true)
|-- collect_list(to): array (nullable = true)
| |-- element: string (containsNull = true)
|-- collect_list(from): array (nullable = true)
| |-- element: string (containsNull = true)
I want to simply merge the arrays from these two columns. I have tried to find a simple solution for this online but have not had any luck. Basically my issue comes down to two problems.
First, I suspect the solution involves the map function. I have not been able to find any syntax that actually compiles, so for now please accept my best attempt:
ab.rdd.map(
  row => {
    val block = row.getLong(0)
    val array1 = row(1).getAs[Array<string>]
    val array2 = row(2).getAs[Array<string>]
  }
)
Basically issue number 1 is very simple, and an issue that has been recurring since the day I first started using map in Scala: I can't figure out how to extract an arbitrary field for an arbitrary type from a column. I know that for the primitive types you have things like row.getLong(0) etc, but I don't understand how this should be done for things like array types.
I have seen somewhere that something like row.getAs[Array<string>](1) should work, but when I try it I get the error
error: identifier expected but ']' found.
val array1 = row.getAs[Array<string>](1)
As far as I can tell, this is exactly the syntax I have seen in other situations but I can't tell why it's not working. I think I have seen before some other syntax that looks like row(1).getAs[Type], but I am not sure.
The second issue is: once I can extract these two arrays, what is the best way of merging them? Using the intersect function? Or is there a better approach to this whole process, for example using the brickhouse package?
Any help would be appreciated.
Best,
Paul
You don't need to switch to the RDD API; you can do it with DataFrame UDFs like this:
val mergeArrays = udf((arr1:Seq[String],arr2:Seq[String]) => arr1++arr2)
df
.withColumn("merged",mergeArrays($"collect_list(from)",$"collect_list(to)"))
.show()
The above UDF just concatenates the arrays (using the ++ operator); you can also use union or intersect etc., depending on what you want to achieve.
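A couple of variants, depending on what you want to keep (a sketch; the column names are the same collect_list(...) names as above). If you are on Spark 2.4 or later, the built-in array functions can replace the UDF entirely:
import org.apache.spark.sql.functions.{array_intersect, array_union, udf}

// UDF variants: union without duplicates, or intersection only.
val unionArrays     = udf((a: Seq[String], b: Seq[String]) => (a ++ b).distinct)
val intersectArrays = udf((a: Seq[String], b: Seq[String]) => a.intersect(b))

// Spark 2.4+: the same operations without a UDF.
df.withColumn("merged", array_union($"collect_list(from)", $"collect_list(to)"))
df.withColumn("common", array_intersect($"collect_list(from)", $"collect_list(to)"))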
Using the RDD API, the solution would look like this:
df.rdd.map(
  row => {
    val block = row.getLong(0)
    val array1 = row.getAs[Seq[String]](1)
    val array2 = row.getAs[Seq[String]](2)
    (block, array1 ++ array2)
  }
).toDF("block", "merged") // back to DataFrames
I have a dataframe, called sessions, with columns that may change over time. (Edit to Clarify: I do not have a case class for the columns - only a reflected schema.) I will consistently have a uuid and clientId in the outer scope with some other inner and outer scope columns that might constitute a tracking event so ... something like:
root
|-- runtimestamp: long (nullable = true)
|-- clientId: long (nullable = true)
|-- uuid: string (nullable = true)
|-- oldTrackingEvents: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- timestamp: long (nullable = true)
| | |-- actionid: integer (nullable = true)
| | |-- actiontype: string (nullable = true)
| | |-- <tbd ... maps, arrays and other stuff matches sibling>
...
|-- newTrackingEvents: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- timestamp: long (nullable = true)
| | |-- actionid: integer (nullable = true)
| | |-- actiontype: string (nullable = true)
| | |-- <tbd ... maps, arrays and other stuff matches sibling>
...
I'd like to now merge oldTrackingEvents and newTrackingEvents with a UDF containing these parameters and yet-to-be resolved code logic:
val mergeTEs = udf((oldTEs: Seq[Row], newTEs: Seq[Row]) =>
  // do some stuff - figure out the best way
  //  - to merge both groups of tracking events
  //  - remove duplicate tracking-event structures
  //  - limit total tracking events to < 500
  return result // same type as UDF input params
)
The UDF's return value would be an array of the same structure: the resulting list of the two concatenated fields.
QUESTION:
My question is how to construct such a UDF: (1) using the correct passed-in parameter types, (2) a way to manipulate these collections within the UDF, and (3) a clear way to return a value that doesn't produce a compiler error. I unsuccessfully tested Seq[Row] for the input / output (with val testUDF = udf((trackingEvents : Seq[Row]) => trackingEvents)) and received the error java.lang.UnsupportedOperationException: Schema for type org.apache.spark.sql.Row is not supported for a direct return of trackingEvents. However, I get no error for returning Some(1) instead of trackingEvents. What is the best way to manipulate the collections so that I can concatenate two lists of identical structures, as suggested by the schema above, with the UDF doing the work described in the code comments? The goal is to use this operation:
sessions.select(mergeTEs('oldTrackingEvents, 'newTrackingEvents).as("cleanTrackingEvents"))
And in each row, ... get back a single array of the 'trackingEvents' structure in a memory- and speed-efficient manner.
SUPPLEMENTAL:
Looking at a question shown to me ... there may be a hint, if it is relevant: "Defining a UDF that accepts an Array of objects in a Spark DataFrame?" ... To create a struct, the function passed to udf has to return a Product type (Tuple* or a case class), not Row.
Perhaps ... this other post is relevant / useful.
I think that the question you've linked explains it all, so just to reiterate. When working with udf:
Input representation for the StructType is weakly typed Row object.
Output type for StructType has to be Scala Product. You cannot return Row object.
If this is too much of a burden, you should use a strongly typed Dataset:
val f: T => U
sessions.as[T].map(f): Dataset[U]
where T is an algebraic data type representing Session schema, and U is algebraic data type representing the result.
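To make that concrete, here is a minimal sketch under the assumption that the struct only has the three fields shown in the schema above (timestamp, actionid, actiontype); the Row inputs are mapped into a case class so the UDF returns a Product type, which Spark can encode:
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// Hypothetical case class mirroring the tracking-event struct; extend it with the <tbd> fields as needed.
case class TrackingEvent(timestamp: Long, actionid: Int, actiontype: String)

val mergeTEs = udf { (oldTEs: Seq[Row], newTEs: Seq[Row]) =>
  (Option(oldTEs).getOrElse(Seq.empty) ++ Option(newTEs).getOrElse(Seq.empty))
    .map(r => TrackingEvent(r.getAs[Long]("timestamp"), r.getAs[Int]("actionid"), r.getAs[String]("actiontype")))
    .distinct             // drop duplicate events
    .sortBy(-_.timestamp) // newest first
    .take(500)            // cap the merged list
}

sessions.select(mergeTEs('oldTrackingEvents, 'newTrackingEvents).as("cleanTrackingEvents"))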
Alternatively ... if your goal is to merge sequences of some arbitrary row structure / schema with some manipulation, this is an alternative, generally-stated approach that avoids the partitioning discussion:
From the master dataframe, create a dataframe for each trackingEvents section, new and old. For each, select the exploded 'trackingEvents' section's columns. Save these val dataframe declarations as newTE and oldTE (see the sketch after these steps).
Create another dataframe whose columns uniquely identify each tracking event in the oldTrackingEvents and newTrackingEvents arrays, such as each event's uuid, clientId and timestamp. Your pseudo-schema would be:
(uuid: String, clientId : Long, newTE : Seq[Long], oldTE : Seq[Long])
Use a UDF to join the two simple sequences of your structure, both Seq[Long]; something like this untested example:
val limitEventsUDF = udf { (newTE: Seq[Long], oldTE: Seq[Long], limit: Int, tooOld: Long) => {
(newTE ++ oldTE).filter(_ > tooOld).sortWith(_ > _).distinct.take(limit)
}}
The UDF call will give you a very slim dataframe of cleaned tracking events, which you can self-join back to your exploded newTE and oldTE frames after they have been unioned back together.
GroupBy as needed thereafter using collect_list.
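A rough sketch of the explode step described above (column and field names assumed from the schema earlier in the question; only the timestamp is carried along here):
import org.apache.spark.sql.functions.explode

val newTE = sessions
  .select($"uuid", $"clientId", explode($"newTrackingEvents").as("te"))
  .select($"uuid", $"clientId", $"te.timestamp".as("timestamp"))
val oldTE = sessions
  .select($"uuid", $"clientId", explode($"oldTrackingEvents").as("te"))
  .select($"uuid", $"clientId", $"te.timestamp".as("timestamp"))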
Still ... this seems like a lot of work. Should this be voted as "the answer"? I'm not sure.
I have a dataframe and want to rename its columns using toDF by passing the column names from a list. The column list is dynamic; when I do as below, I get an error. How can I achieve this?
>>> df.printSchema()
root
|-- id: long (nullable = true)
|-- name: string (nullable = true)
|-- dept: string (nullable = true)
columns = ['NAME_FIRST', 'DEPT_NAME']
df2 = df.toDF('ID', 'NAME_FIRST', 'DEPT_NAME')
(or)
df2 = df.toDF('id', columns[0], columns[1])
This does not work if we don't know how many columns there will be in the input data frame, so I want to pass the list to df2. I tried as below:
df2 = df.toDF('id', columns)
pyspark.sql.utils.IllegalArgumentException: u"requirement failed: The number of columns doesn't match.\nOld column names (3): id, name, dept\nNew column names (2): id, name_first, dept_name"
Here it treats the list as a single item. How can I pass the columns from the list?
df2 = df.toDF(columns) does not work; add a * as below:
columns = ['NAME_FIRST', 'DEPT_NAME']
df2 = df.toDF(*columns)
"*" is the "splat" operator: It takes a list as input, and expands it into actual positional arguments in the function call
What you tried is correct, except that you did not add all of the columns to your columns list.
This will work:
columns = ['ID','NAME_FIRST', 'DEPT_NAME']
df2 = df.toDF(columns)
Updating answer with all steps I followed in pyspark:
list=[(1,'a','b'),(2,'c','d'),(3,'e','f')]
df = sc.parallelize(list)
columns = ['ID','NAME_FIRST', 'DEPT_NAME']
df2 = df.toDF(columns)
I have a DataFrame in PySpark that has a nested array value for one of its fields. I would like to filter the DataFrame where the array contains a certain string. I'm not seeing how I can do that.
The schema looks like this:
root
|-- name: string (nullable = true)
|-- lastName: array (nullable = true)
| |-- element: string (containsNull = false)
I want to return all the rows where upper(name) == 'JOHN' and where the lastName column (the array) contains 'SMITH'; the equality there should be case insensitive (like it is for the name). I found the isin() function on a column value, but that seems to work backwards from what I want. It seems like I need a contains() function on a column value. Does anyone have any ideas for a straightforward way to do this?
You could consider working on the underlying RDD directly.
def my_filter(row):
    if row.name.upper() == 'JOHN':
        for it in row.lastName:
            if it.upper() == 'SMITH':
                yield row
dataframe = dataframe.rdd.flatMap(my_filter).toDF()
An update in 2019
Spark 2.4.0 introduced higher-order functions such as transform, which can be combined with array_contains (see the official documentation).
Now it can be done in SQL syntax.
For your problem, it should be:
dataframe.filter('array_contains(transform(lastName, x -> upper(x)), "SMITH")')
It is better than the previous solution that uses the RDD as a bridge, because DataFrame operations are generally faster than RDD ones.