How to fetch file id from file names in pyspark column - pyspark

I have a big pyspark dataframe like this:
+-------------------------------------+
|filename                             |
+-------------------------------------+
|file:///Users/downloads/201709_PO.dat|
|file:///Users/downloads/201723_PO.dat|
+-------------------------------------+
I want to add a new column with the id, which is the first 6 digits of the file name.
Output I want:
+-------------------------------------+------+
|filename                             |id    |
+-------------------------------------+------+
|file:///Users/downloads/201709_PO.dat|201709|
|file:///Users/downloads/201723_PO.dat|201723|
+-------------------------------------+------+
How can I achieve this in an efficient way?

You can use the regexp_extract function.
from pyspark.sql import functions as F
df = df.withColumn('id', F.regexp_extract('filename', '([0-9]{6})', 1))
df.show(truncate=False)

If it is always 6 digits, you can use substring.
from pyspark.sql.functions import substring, substring_index
df = df.withColumn('id', substring(substring_index(df.filename, '/', -1), 0, 6))
The inner part, substring_index(df.filename, '/', -1), takes the file name out of the full path.
Then substring(filename, 0, 6) takes the first 6 characters of that file name.
If you want to extract any characters before "_", you can use substring_index twice.
df = df.withColumn('id', substring_index(substring_index(df.filename, '/', -1), '_', 1))
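Putting it together, here is a minimal runnable sketch of the regexp_extract approach (the file names are the ones from the question; the SparkSession setup is standard boilerplate):
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("file:///Users/downloads/201709_PO.dat",),
     ("file:///Users/downloads/201723_PO.dat",)],
    ["filename"],
)

# Pull the 6-digit id out of the file name
df = df.withColumn("id", F.regexp_extract("filename", r"([0-9]{6})", 1))
df.show(truncate=False)
# +-------------------------------------+------+
# |filename                             |id    |
# +-------------------------------------+------+
# |file:///Users/downloads/201709_PO.dat|201709|
# |file:///Users/downloads/201723_PO.dat|201723|
# +-------------------------------------+------+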

Related

Remove the repeated punctuation from pyspark dataframe

I need to remove the repeated punctuations and keep the last occurrence only.
For example: !!!! -> !
!!$$ -> !$
I have a dataset that looks like below
temp = spark.createDataFrame([
    (0, "This is Spark!!!!"),
    (1, "I wish Java could use case classes!!##"),
    (2, "Data science is cool#$#!"),
    (3, "Machine!!$$")
], ["id", "words"])
+---+--------------------------------------+
|id |words |
+---+--------------------------------------+
|0 |This is Spark!!!! |
|1 |I wish Java could use case classes!!##|
|2 |Data science is cool#$#! |
|3 |Machine!!$$ |
+---+--------------------------------------+
I tried a regex to remove specific punctuation characters, shown below:
from pyspark.sql import functions as F

df2 = temp.select(
    [F.regexp_replace(col, r',|\.|&|\\|\||-|_', '').alias(col) for col in temp.columns]
)
but the above does not do what I need. Can anyone tell me how to achieve this in pyspark?
Below is the desired output.
   id  words
0   0  This is Spark!
1   1  I wish Java could use case classes!#
2   2  Data science is cool#$#!
3   3  Machine!$
You can use this regex.
df2 = temp.select(
    'id',
    F.regexp_replace('words', r'([!$#])\1+', '$1').alias('words')
)
Regex explanation.
( -> Group anything between this and ) and create a capturing group
[ -> Match any one of the characters between this and ]
([!$#]) -> Create the capturing group that match any of !, $, #
\1 -> Reference the first capturing group
+ -> Match 1 or more of a preceding group or character
([!$#])\1+ -> Match any of !, $, # that repeats more than 1 time.
The last argument of regexp_replace is set to $1, which references the first capturing group (a single !, $, or #), so each run of repeated characters is replaced with just that single character.
You can add more characters between [] for matching more special characters.
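For example, a quick sketch that also collapses repeated @ or ? (these extra characters are purely illustrative; the sample data above does not contain them):
df3 = temp.select(
    'id',
    # same idea as above, just with a wider character class
    F.regexp_replace('words', r'([!$#@?])\1+', '$1').alias('words')
)
df3.show(truncate=False)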

Substring with delimiters with Spark Scala

I am new to Spark and Scala and I want to ask a question:
I have a city field in my database (which I have already loaded into a DataFrame) with this pattern: "someLetters" + " - " + id + ')'.
Example :
ABDCJ - 123456)
AGDFHBAZPF - 1234567890)
The size of the field is not fixed, and the id here can be an integer of 6 or 10 digits. So what I want to do is extract that id into a new column called city_id.
Concretely, I want to start from the last character of the field, which is ')', ignore it, and extract the integer until I find a space, then stop.
I already tried to do this using withColumn, a regex, and even substring_index, but I got confused since they are based on an index that I can't rely on here.
How can I fix this?
start by the last character of the digit which is ')', ignore it and
extract the integer until I find a space
This can be done with the regex pattern .*?(\\d+)\\)$: \\)$ matches the ) at the end of the string, (\\d+) captures the digits just before it, and regexp_extract pulls that captured group out as a new column. Note that .*? lazily (due to the ?) matches everything up to the point where (\\d+)\\)$ is found:
df.withColumn("id", regexp_extract($"city", ".*?(\\d+)\\)$", 1)).show
+--------------------+----------+
| city| id|
+--------------------+----------+
| ABDCJ - 123456)| 123456|
|AGDFHBAZPF - 1234...|1234567890|
+--------------------+----------+
import org.apache.spark.sql.functions._

val df = tempDF.withColumn("city_id", rtrim(element_at(split($"city", " - "), 2), ")"))
Assuming that the input is in the format in your example.
In order to get the number after the - without the trailing ) you can execute the following command:
split(" - ")(1).dropRight(1)
The above splits by the " - " separator, takes the second element (i.e. the number), and removes the last char (the )).
You can create a UDF that executes the above expression and add the new column using withColumn.
I would go for regexp_extract, but there are many alternatives. You can also do this using 2 splits:
df
  .withColumn("id",
    split(
      split($"city", " - ")(1), "\\)"
    )(0)
  )
First, you split by - and take the second element, then split by ) and take the first element
Or, as another alternative, split by - and then drop the trailing ):
df
  .withColumn("id",
    reverse(
      substring(
        reverse(split($"city", " - ")(1)),
        2,
        Int.MaxValue
      )
    )
  )
You can use 2 regexp_replace functions also.
scala> val df = Seq(("ABDCJ - 123456)"),("AGDFHBAZPF - 1234567890)")).toDF("cityid")
df: org.apache.spark.sql.DataFrame = [cityid: string]
scala> df.withColumn("id",regexp_replace(regexp_replace('cityid,""".*- """,""),"""\)""","")).show(false)
+------------------------+----------+
|cityid |id |
+------------------------+----------+
|ABDCJ - 123456) |123456 |
|AGDFHBAZPF - 1234567890)|1234567890|
+------------------------+----------+
scala>
Since the id seems to be an integer, you can cast it to long as
scala> val df2 = df.withColumn("id",regexp_replace(regexp_replace('cityid,""".*- """,""),"""\)""","").cast("long"))
df2: org.apache.spark.sql.DataFrame = [cityid: string, id: bigint]
scala> df2.show(false)
+------------------------+----------+
|cityid |id |
+------------------------+----------+
|ABDCJ - 123456) |123456 |
|AGDFHBAZPF - 1234567890)|1234567890|
+------------------------+----------+
scala> df2.printSchema
root
|-- cityid: string (nullable = true)
|-- id: long (nullable = true)
scala>

How to check that a value is the unix timestamp in Scala?

In the DataFrame df I have a column datetime that contains timestamp values. The problem is that in some rows these are Unix timestamps, while in other rows they are in the yyyyMMddHHmm format.
How can I check whether a given value is a Unix timestamp and, if it is not, convert it into a timestamp?
df.withColumn("timestamp", unix_timestamp(col("datetime")))
I assume that when...otherwise should be used, but how to check that a value is the unix timestamp?
You can use when/otherwise along with the date parsing methods. Here is some example code. I differentiated using just the length of the string, but you could also check the result of parsing them.
from pyspark.sql.functions import *

data = [
    ('201001021011',),
    ('201101021011',),
    ('1539721852',),
    ('1539721853',)
]
df = sc.parallelize(data).toDF(['date'])

df2 = df.withColumn('date',
    when(length('date') != 12, from_unixtime('date', 'yyyyMMddHHmm'))
    .otherwise(col('date'))
)
df3 = df2.withColumn('date', to_timestamp('date', 'yyyyMMddHHmm'))
df3.show()
Outputs this:
+-------------------+
| date|
+-------------------+
|2010-01-02 10:11:00|
|2011-01-02 10:11:00|
|2018-10-16 16:30:00|
|2018-10-16 16:30:00|
+-------------------+
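If you would rather check the parse result than the string length (the alternative mentioned above), a rough sketch could look like this, assuming to_timestamp returns null when the pattern does not match (the default when ANSI mode is off):
from pyspark.sql.functions import coalesce, to_timestamp, col

df4 = df.withColumn(
    'date_parsed',
    coalesce(
        # try the 12-digit format first; this is null if the string doesn't match
        to_timestamp('date', 'yyyyMMddHHmm'),
        # otherwise treat the value as Unix seconds
        col('date').cast('long').cast('timestamp')
    )
)
df4.show()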
If column datetime consists of only Unix-timestamp strings or "yyyyMMddHHmm"-formatted strings, you can differentiate the two string formats based on their length, since the former has 10 digits or less whereas the latter is a fixed 12:
val df = Seq(
  (1, "1538384400"),
  (2, "1538481600"),
  (3, "201809281800"),
  (4, "1538548200"),
  (5, "201809291530")
).toDF("id", "datetime")

df.withColumn("timestamp",
  when(length($"datetime") === 12, unix_timestamp($"datetime", "yyyyMMddHHmm")).
    otherwise($"datetime")
)
// +---+------------+----------+
// | id| datetime| timestamp|
// +---+------------+----------+
// | 1| 1538384400|1538384400|
// | 2| 1538481600|1538481600|
// | 3|201809281800|1538182800|
// | 4| 1538548200|1538548200|
// | 5|201809291530|1538260200|
// +---+------------+----------+
In case there are other string formats in column datetime, you can narrow down the conditions for Unix timestamp to a range corresponding to the range of date-time in your dataset. For example, Unix timestamp should be a 10-digit number post 2001-09-09 (and for the next 250+ years) and would start with 10 to 15 up till now:
df.withColumn("timestamp",
  when(length($"datetime") === 12, unix_timestamp($"datetime", "yyyyMMddHHmm")).
    otherwise(when(regexp_extract($"datetime", "^(1[0-5]\\d{8})$", 1) === $"datetime", $"datetime").
      otherwise(null)  // Or, additional conditions for other cases
    ))

pyspark generate row hash of specific columns and add it as a new column

I am working with spark 2.2.0 and pyspark2.
I have created a DataFrame df and now trying to add a new column "rowhash" that is the sha2 hash of specific columns in the DataFrame.
For example, say that df has the columns: (column1, column2, ..., column10)
I require sha2((column2||column3||column4||...... column8), 256) in a new column "rowhash".
So far, I have tried the methods below:
1) Used the hash() function, but since it gives an integer output it is of not much use.
2) Tried using the sha2() function, but it is failing.
Say columnarray holds the list of columns I need.
def concat(columnarray):
    concat_str = ''
    for val in columnarray:
        concat_str = concat_str + '||' + str(val)
    concat_str = concat_str[2:]
    return concat_str
and then
df1 = df1.withColumn("row_sha2", sha2(concat(columnarray),256))
This is failing with a "cannot resolve" error, since the concatenated string is passed to sha2 as if it were a single column name.
Thanks gaw for your answer. Since I have to hash only specific columns, I created a list of those column names (in hash_col) and changed your function to:
def sha_concat(row, columnarray):
    row_dict = row.asDict()  # transform row to a dict
    concat_str = ''
    for v in columnarray:
        concat_str = concat_str + '||' + str(row_dict.get(v))
    concat_str = concat_str[2:]
    # preserve concatenated value for testing (this can be removed later)
    row_dict["sha_values"] = concat_str
    row_dict["sha_hash"] = hashlib.sha256(concat_str).hexdigest()
    return Row(**row_dict)
Then passed as :
df1.rdd.map(lambda row: sha_concat(row,hash_col)).toDF().show(truncate=False)
However, it is now failing with the error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 8: ordinal not in range(128)
I can see the value \ufffd in one of the columns, so I am unsure if there is a way to handle this?
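One hedged workaround for that UnicodeEncodeError, assuming Python 2 (where hashing a unicode string triggers an implicit ASCII encode), is to build the concatenation as unicode and encode it to UTF-8 yourself before hashing, for example:
import hashlib
from pyspark.sql import Row

def sha_concat(row, columnarray):
    row_dict = row.asDict()
    # use unicode() instead of str() so non-ASCII values don't break the concatenation
    concat_str = u'||'.join(unicode(row_dict.get(v)) for v in columnarray)
    row_dict["sha_values"] = concat_str
    # hashlib needs bytes: encode explicitly instead of relying on the implicit
    # ASCII encode, which is what fails on characters like u'\ufffd'
    row_dict["sha_hash"] = hashlib.sha256(concat_str.encode('utf-8')).hexdigest()
    return Row(**row_dict)

# df1.rdd.map(lambda row: sha_concat(row, hash_col)).toDF().show(truncate=False)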
You can use pyspark.sql.functions.concat_ws() to concatenate your columns and pyspark.sql.functions.sha2() to get the SHA256 hash.
Using the data from @gaw:
from pyspark.sql.functions import sha2, concat_ws

df = spark.createDataFrame(
    [(1, "2", 5, 1), (3, "4", 7, 8)],
    ("col1", "col2", "col3", "col4")
)
df.withColumn("row_sha2", sha2(concat_ws("||", *df.columns), 256)).show(truncate=False)
#+----+----+----+----+----------------------------------------------------------------+
#|col1|col2|col3|col4|row_sha2 |
#+----+----+----+----+----------------------------------------------------------------+
#|1 |2 |5 |1 |1b0ae4beb8ce031cf585e9bb79df7d32c3b93c8c73c27d8f2c2ddc2de9c8edcd|
#|3 |4 |7 |8 |57f057bdc4178b69b1b6ab9d78eabee47133790cba8cf503ac1658fa7a496db1|
#+----+----+----+----+----------------------------------------------------------------+
You can pass in either 0 or 256 as the second argument to sha2(), as per the docs:
Returns the hex string result of SHA-2 family of hash functions (SHA-224, SHA-256, SHA-384, and SHA-512). The numBits indicates the desired bit length of the result, which must have a value of 224, 256, 384, 512, or 0 (which is equivalent to 256).
The function concat_ws takes in a separator, and a list of columns to join. I am passing in || as the separator and df.columns as the list of columns.
I am using all of the columns here, but you can specify whatever subset of columns you'd like; in your case that would be columnarray. (You need to use the * to unpack the list.)
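For instance, a sketch restricted to an illustrative subset (substitute your own columnarray):
columnarray = ["col2", "col3", "col4"]  # placeholder; use your own list of column names
df.withColumn("row_sha2", sha2(concat_ws("||", *columnarray), 256)).show(truncate=False)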
If you want a hash built from the values of the different columns for each row of your dataset, you can apply a self-designed function via map to the rdd of your dataframe.
import hashlib
from pyspark.sql import Row

test_df = spark.createDataFrame([
    (1, "2", 5, 1), (3, "4", 7, 8),
], ("col1", "col2", "col3", "col4"))

def sha_concat(row):
    row_dict = row.asDict()        # transform row to a dict
    columnarray = row_dict.keys()  # get the column names
    concat_str = ''
    for v in row_dict.values():
        concat_str = concat_str + '||' + str(v)  # concatenate values
    concat_str = concat_str[2:]
    row_dict["sha_values"] = concat_str  # preserve concatenated value for testing (this can be removed later)
    row_dict["sha_hash"] = hashlib.sha256(concat_str).hexdigest()  # calculate sha256
    return Row(**row_dict)

test_df.rdd.map(sha_concat).toDF().show(truncate=False)
The Results would look like:
+----+----+----+----+----------------------------------------------------------------+----------+
|col1|col2|col3|col4|sha_hash |sha_values|
+----+----+----+----+----------------------------------------------------------------+----------+
|1 |2 |5 |1 |1b0ae4beb8ce031cf585e9bb79df7d32c3b93c8c73c27d8f2c2ddc2de9c8edcd|1||2||5||1|
|3 |4 |7 |8 |cb8f8c5d9fd7165cf3c0f019e0fb10fa0e8f147960c715b7f6a60e149d3923a5|8||4||7||3|
+----+----+----+----+----------------------------------------------------------------+----------+
New in version 2.0 is the hash function.
from pyspark.sql.functions import hash

(
    spark
    .createDataFrame([(1, 'Abe'), (2, 'Ben'), (3, 'Cas')], ('id', 'name'))
    .withColumn('hashed_name', hash('name'))
).show()
which results in:
+---+----+-----------+
| id|name|hashed_name|
+---+----+-----------+
| 1| Abe| 1567000248|
| 2| Ben| 1604243918|
| 3| Cas| -586163893|
+---+----+-----------+
https://spark.apache.org/docs/latest/api/python/_modules/pyspark/sql/functions.html#hash
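Note that hash() also accepts several columns at once, so for the question's "specific columns" case you could pass the subset directly; keep in mind it is a non-cryptographic 32-bit Murmur3 hash, not a replacement for sha2. A small sketch, where df is assumed to be the question's DataFrame and the column names come from the question:
from pyspark.sql import functions as F

# combine several columns into one 32-bit integer hash
df = df.withColumn('rowhash', F.hash('column2', 'column3', 'column4'))  # ...add the rest of the columns you need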
If you want to control what the IDs should look like, you can use the code below.
import pyspark.sql.functions as F
from pyspark.sql import Window

SRIDAbbrev = "SOD"  # could be any abbreviation that identifies the table or object on the table name
max_ID = 0          # offset to start numbering from (set to the current max id if continuing a sequence)
if max_ID is None:
    max_ID = 0      # helps identify where you start numbering from

dataframe_new = dataframe.orderBy(
    'name'
).withColumn(
    'hashed_name',
    F.concat(
        F.lit(SRIDAbbrev),
        F.lpad(
            (
                F.dense_rank().over(
                    Window.orderBy('name')
                )
                + F.lit(max_ID)
            ),
            8,      # controls how long you want your numbering to be, I chose 8
            "0"
        )
    )
)
which results in:
+---+----+-----------+
| id|name|hashed_name|
+---+----+-----------+
| 1| Abe| SOD0000001|
| 2| Ben| SOD0000002|
| 3| Cas| SOD0000003|
| 3| Cas| SOD0000003|
+---+----+-----------+
Let me know if this helps :)

split an apache-spark dataframe string column into multiple columns by slicing/splitting on field width values stored in a list

I have a dataframe that looks like this
+--------------------
| unparsed_data|
+--------------------
|02020sometext5002...
|02020sometext6682...
I need to get it split it up into something like this
+--------------------
|fips | Name | Id ...
+--------------------
|02020 | sometext | 5002...
|02020 | sometext | 6682...
I have a list like this
val fields = List(
  ("fips", 5),
  ("Name", 8),
  ("Id", 27)
  // ...more fields
)
I need the split to take the first 5 characters in unparsed_data and map them to fips, take the next 8 characters in unparsed_data and map them to Name, then the next 27 characters and map them to Id, and so on. I need the split to use/reference the field lengths supplied in the list to do the splitting/slicing, as there are a lot of fields and the unparsed_data field is very long.
My Scala is still pretty weak, and I assume the answer would look something like this:
df.withColumn("temp_field", split("unparsed_data", //some regex created from the list values?)).map(i => //some mapping to the field names in the list)
any suggestions/ideas much appreciated
You can use foldLeft to traverse your fields list to iteratively create columns from the original DataFrame using substring. It applies regardless of the size of the fields list:
import org.apache.spark.sql.functions._

val df = Seq(
  ("02020sometext5002"),
  ("03030othrtext6003"),
  ("04040moretext7004")
).toDF("unparsed_data")

val fields = List(
  ("fips", 5),
  ("name", 8),
  ("id", 4)
)

val resultDF = fields.foldLeft( (df, 1) ){ (acc, field) =>
  val newDF = acc._1.withColumn(
    field._1, substring($"unparsed_data", acc._2, field._2)
  )
  (newDF, acc._2 + field._2)
}._1.
  drop("unparsed_data")
resultDF.show
// +-----+--------+----+
// | fips| name| id|
// +-----+--------+----+
// |02020|sometext|5002|
// |03030|othrtext|6003|
// |04040|moretext|7004|
// +-----+--------+----+
Note that a Tuple2[DataFrame, Int] is used as the accumulator for foldLeft to carry both the iteratively transformed DataFrame and next offset position for substring.
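For comparison, the same running-offset idea can be sketched in PySpark with a plain loop over the field list (a rough equivalent, not a translation of the exact foldLeft code; data and widths are taken from the example above):
from pyspark.sql import functions as F

fields = [("fips", 5), ("name", 8), ("id", 4)]

pdf = spark.createDataFrame(
    [("02020sometext5002",), ("03030othrtext6003",)], ["unparsed_data"]
)

# walk the field list, carrying the next start position (substring is 1-based)
pos = 1
for name, width in fields:
    pdf = pdf.withColumn(name, F.substring("unparsed_data", pos, width))
    pos += width

pdf.drop("unparsed_data").show()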
This can get you going. Depending on your needs it can get more complicated, e.g. with variable lengths, which you do not state. But I think you can use a column list.
import org.apache.spark.sql.functions._
val df = Seq(
  ("12334sometext999")
).toDF("X")
val df2 = df.selectExpr("substring(X, 0, 5)", "substring(X, 6,8)", "substring(X, 14,3)")
df2.show
Gives in this case (you can rename cols again):
+------------------+------------------+-------------------+
|substring(X, 0, 5)|substring(X, 6, 8)|substring(X, 14, 3)|
+------------------+------------------+-------------------+
| 12334| sometext| 999|
+------------------+------------------+-------------------+