I have a Spark (1.4) dataframe where the data in a column is like "1-2-3-4-5-6-7-8-9-10-11-12". I want to split the data into multiple columns. Please note that the number of fields can vary from 1 to 12; it's not fixed.
P.S. we are using Scala API.
Edit:
Expanding on the original question: I have a delimited string as below:
"ABC-DEF-PQR-XYZ"
From this string I need to create delimited strings in separate columns as below. Please note that this string sits in a column of the dataframe.
Original column: ABC-DEF-PQR-XYZ
New col1 : ABC
New col2 : ABC-DEF
New col3 : ABC-DEF-PQR
New col4 : ABC-DEF-PQR-XYZ
Please note that there can be up to 12 such new columns that need to be derived from the original field. Also, the string in the original column might vary, i.e. sometimes 1 field, sometimes 2, but the maximum is 12.
Hope I have articulated the problem statement clearly.
Thanks!
You can use explode and pivot. Here is some sample data (with pyspark.sql.functions imported as f):
import pyspark.sql.functions as f
df=sc.parallelize([["1-2-3-4-5-6-7-8-9-10-11-12"], ["1-2-3-4"], ["1-2-3-4-5-6-7-8-9-10"]]).toDF(schema=["col"])
Now add a unique id to rows so that we can keep track of which row the data belongs to:
df=df.withColumn("id", f.monotonically_increasing_id())
Then split the column on the delimiter - and explode to get a long-form dataset:
df=df.withColumn("col_split", f.explode(f.split("col", "-")))
Finally pivot on id to get back to wide form:
df.groupby("id")
.pivot("col_split")
.agg(f.max("col_split"))
.drop("id").show()
I have a solution which goes like this:
df1 --> dataframe 1 with 50 columns of data
df2 --> dataframe 2 holding the footer/trailer with 3 columns of data, like Trailer, count of rows, date
So I added the remaining 47 columns as "", "", ""... and so on,
so that I can union the 2 dataframes:
df3=df1.union(df2)
Now when I want to save:
df3.coalesce(1).write.format("com.databricks.spark.csv")\
.option("header","true").mode("overwrite")\
.save(output_blob_path)
Now I am getting the footer as well, but it looks like this: Trailer,400,20210805,"","","","","","","".. and so on.
If anyone can suggest how to remove these trailing ,"","","",.. double-quoted empty fields from the last row when I save this file to the blob container, it would be very helpful.
You can try defining the structure of the dataframe so that the entire row is treated as a single column for both files, and then perform the union. That way you don't need to add extra columns to dataframe 2, and you won't get stuck in the tricky situation of removing the extra columns after the union.
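A minimal PySpark sketch of that idea, assuming df1 and df2 are the dataframes from the question and output_blob_path is the same placeholder path. concat_ws simply joins the existing columns with commas, so the sketch assumes the values themselves contain no commas and need no quoting:

from pyspark.sql import functions as f

# Collapse each dataframe to a single string column so the schemas match
df1_single = df1.select(f.concat_ws(",", *df1.columns).alias("value"))
df2_single = df2.select(f.concat_ws(",", *df2.columns).alias("value"))

# Union now works without padding df2 with 47 empty columns
df3 = df1_single.union(df2_single)

# Writing as plain text avoids the extra ,"","",... fields on the trailer row
df3.coalesce(1).write.mode("overwrite").text(output_blob_path)

Note that the text writer does not add a header line; if one is needed, it can be prepended as another single-column dataframe before the write.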
I have a use-case where I need to select certain columns from a dataframe containing at least 30 columns and millions of rows.
I'm loading this data from a Cassandra table using Scala and Apache Spark.
I selected the required columns using: df.select("col1","col2","col3","col4")
Now I have to perform a basic groupBy operation to group the data according to src_ip,src_port,dst_ip,dst_port and I also want to have the latest value from a received_time column of the original dataframe.
I want a dataframe with distinct src_ip values with their count and latest received_time in a new column as last_seen.
I know how to use .withColumn, and I also think that .map() could be used here.
Since I'm relatively new in this field, I really don't know how to proceed further. I could really use your help to get done with this task.
Assuming you have a dataframe df with src_ip, src_port, dst_ip, dst_port and received_time, you can try:
import org.apache.spark.sql.functions.{col, count, max}
val mydf = df.groupBy(col("src_ip"), col("src_port"), col("dst_ip"), col("dst_port"))
  .agg(count("received_time").as("row_count"), max(col("received_time")).as("max_received_time"))
The above calculates, for each combination of the group-by columns, the count of received timestamps as well as the latest (max) timestamp for that group; aliasing that last column as last_seen would match the naming asked for in the question.
I would like to create a Spark dataframe in pyspark from a text file that has a varying number of rows and columns, and map it to key/value pairs, where the key is the first 4 characters from the first column of the text file. I want to do that in order to remove the redundant rows and to be able to group them later by the key value. I know how to do this in pandas but am still confused about where to start in pyspark.
My input is a text file that has the following:
1234567,micheal,male,usa
891011,sara,femal,germany
I want to be able to group every row by the first six characters in the first column
Create a new column that contains only the first six characters of the first column, and then group by that:
from pyspark.sql.functions import col
df2 = df.withColumn("key", col("first_col").substr(1, 6))
df2.groupBy("key").agg(...)
I have two fields that contain concatenated strings. The first field contains medical codes and the second field contains the descriptions of those codes. I don't want to break these into multiple fields because some of them would contain hundreds of splits. Is there any way to break them out into one row each, like below? The code and description values are separated by a semicolon (;).
code description
----- ------------
80400 description1
80402 description2
One way is to custom split the two columns on ;, which will create a separate column for every entry; then you can pivot the code columns and the description columns separately.
One issue is that you can't guarantee that every code is mapped to the correct description.
Another way is to export the data to an Excel sheet, split and pivot the columns there, match the codes to the descriptions, and then use that Excel file as the data source in Tableau.
I am working with a CSV file that contains information in the following format:
col1 col2 col3
row1 id1 , text1 (year1) , a|b|c
row2 id2 , text2 (year2) , a|b|c|d|e
row3 id3 , text3 (year3) , a|b
...
The number of rows in the CSV is very large. The years are embedded in col2 in parentheses. Also, as can be seen, col3 can have a varying number of elements.
I would like to read the CSV file EFFICIENTLY and end up for each item (id) with an array as follows:
For 'item' with id#_i :
A = [id_i,text_i,year_i,101010001]
where if all possible features in col3 are [a,b,c,d,....,z], the binary vector shows its presence or absence.
I am interested in an efficient implementation of this in MATLAB. Ideas are more than welcome. Thank you!
I would like to add what I have found to be one of the fastest ways of reading a CSV file:
importdata()
This will allow you to read numeric and non-numeric data, but it assumes there is some number of header lines. You can either pass the number of header lines as an input argument to importdata(), or you can let it figure that out on its own, which didn't work for my use case in the past.
This was much faster than xlsread() for me: it took 1/6th the time to read something 6 times larger!
If you are reading only numeric data, you can use csvread()--which actually uses dlmread().
The thing is, there are about 10 ways to read these files, and the right choice depends not only on your goals but also on the file contents.
You can use T = readtable(filename). This has the 'ReadVariableNames' option, which takes the first row as the header, and 'ReadRowNames', which takes the first column as row names.