Need to convert Informatica reg_extract expression to PySpark DataFrame

I have a scenario where I need to convert an Informatica mapping (source and target: SQL Server) into PySpark code (source: blob file, target: Hive). In the Expression transformation, one column uses the reg_extract function, and I need to convert this to a PySpark DataFrame expression. My final goal is to create the same table in Hive as it is in SQL Server.
What is the replacement for the reg_extract function in PySpark? I am using PySpark 2.
Below is the code from the Informatica Expression transformation (for one column's variable field):
LTRIM(RTRIM(IIF(instr(v_DATE,'AMENDED')>0,
reg_Extract(DATE,'.*(^\w+\s+[0-9]{2}[,]\s+[0-9]{4}|^\w+\s+[0-9]{1}[,]\s+[0-9]{4}).*'),
reg_Extract(DATE,'.*((\s0?[1-9]|1[012])[./-](0?[1-9]|[12][0-9]|3[01])[./-][0-9]{2,4}|(^0?[1-9]|1[012])[./-](0?[1-9]|[12][0-9]|3[01])[./-][0-9]{2,4}|(0[1-9]|[12][0-9]|3[01])[./-](0?[1-9]|1[012])[./-][0-9]{2,4}|\s\w+\s+(0?[1-9]|[12][0-9]|3[01])[.,](\s+)?[0-9]{4}|^\w+\s+(0?[1-9]|[12][0-9]|3[01])[.,](\s+)?[0-9]{4}|^(19|20)[0-9]{2}|^[0-9]{2}\s+\w+\s+[0-9]{4}|^[0-9]{6}|^(0?[1-9]|[12][0-9]|3[01])\s+\w+[.,]?\s+(19|20)[0-9]{2}|^[0-9]{1,2}[-,/]\w+[-,/][0-9]{2,4}).*'))))
In PySpark, I have loaded the source file into a DataFrame and selected the required columns. After that, I am unable to proceed.
input_data = spark.read.csv(file_path, header=True)
input_data.createOrReplaceTempView("input_data")
df_test = "select ACCESSION_NUMBER, DATE, REPORTING_PERSON from input_data"
df = spark.sql(df_test)
I am new to PySpark/Spark SQL. Please help.

You can use regexp_extract:
from pyspark.sql.functions import col, regexp_extract
df = df.withColumn('New_Column_Name', regexp_extract(col('DATE'), r'.*(^\w+\s+[0-9]{2}[,]\s+[0-9]{4}|^\w+\s+[0-9]{1}[,]\s+[0-9]{4}).*', 1))
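To translate the whole expression (the IIF/instr branching plus the LTRIM/RTRIM), a rough sketch along the same lines, with both patterns copied verbatim from the Informatica expression; clean_date is just a placeholder output column name, and Informatica's regex dialect may differ slightly from Spark's Java regex, so test against your data:
from pyspark.sql.functions import col, instr, regexp_extract, trim, when

# pattern used when the DATE value contains 'AMENDED'
amended_pattern = r'.*(^\w+\s+[0-9]{2}[,]\s+[0-9]{4}|^\w+\s+[0-9]{1}[,]\s+[0-9]{4}).*'
# fallback pattern for all other rows
default_pattern = r'.*((\s0?[1-9]|1[012])[./-](0?[1-9]|[12][0-9]|3[01])[./-][0-9]{2,4}|(^0?[1-9]|1[012])[./-](0?[1-9]|[12][0-9]|3[01])[./-][0-9]{2,4}|(0[1-9]|[12][0-9]|3[01])[./-](0?[1-9]|1[012])[./-][0-9]{2,4}|\s\w+\s+(0?[1-9]|[12][0-9]|3[01])[.,](\s+)?[0-9]{4}|^\w+\s+(0?[1-9]|[12][0-9]|3[01])[.,](\s+)?[0-9]{4}|^(19|20)[0-9]{2}|^[0-9]{2}\s+\w+\s+[0-9]{4}|^[0-9]{6}|^(0?[1-9]|[12][0-9]|3[01])\s+\w+[.,]?\s+(19|20)[0-9]{2}|^[0-9]{1,2}[-,/]\w+[-,/][0-9]{2,4}).*'

# IIF(instr(...) > 0, a, b) becomes when(...).otherwise(...); LTRIM(RTRIM(...)) becomes trim(...)
df = df.withColumn(
    "clean_date",
    trim(
        when(instr(col("DATE"), "AMENDED") > 0,
             regexp_extract(col("DATE"), amended_pattern, 1))
        .otherwise(regexp_extract(col("DATE"), default_pattern, 1))
    )
)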

Spark: Dynamic generation of the query based on the fields in an s3 file

Oversimplified Scenario:
A process generates monthly data in an s3 file. The number of fields can differ in each monthly run. Based on this data in s3, we load the data into a table and manually run a SQL query for a few metrics (manually, because the number of fields can change in each run with the addition or deletion of a few columns). There are more calculations/transforms on this data, but as a starter I am presenting a simpler version of the use case.
Approach:
Considering the schema-less nature (the number of fields in the s3 file can differ in each run with the addition or deletion of a few fields, which requires manual changes to the SQL every time), I am planning to explore Spark/Scala so that we can read directly from s3 and dynamically generate the SQL based on the fields.
Query:
How can I achieve this in Scala/Spark SQL/DataFrame API? The s3 file contains only the required fields from each run, so there is no issue reading the dynamic fields from s3; that is taken care of by the DataFrame. The issue is how to generate the DataFrame API/Spark SQL code to handle them.
I can read the s3 file via a DataFrame and register it with createOrReplaceTempView to write SQL, but I don't think that helps with manually changing the Spark SQL when a new field is added to s3 in the next run. What is the best way to dynamically generate the SQL, or is there a better way to handle the issue?
Usecase-1:
First-run
dataframe: customer, month_1_count (here the dataframe directly points to s3, which has only the required attributes)
--sample code
SELECT customer,sum(month_1_count)
FROM dataframe
GROUP BY customer
--Dataframe API/SparkSQL
dataframe.groupBy("customer").sum("month_1_count").show()
Second-Run - One additional column was added
dataframe: customer, month_1_count, month_2_count (here the dataframe directly points to s3, which has only the required attributes)
--Sample SQL
SELECT customer,sum(month_1_count),sum(month_2_count)
FROM dataframe
GROUP BY customer
--Dataframe API/SparkSQL
dataframe.groupBy("customer").sum("month_1_count","month_2_count").show()
I'm new to Spark/Scala; it would be helpful if you could provide some direction so that I can explore further.
It sounds like you want to perform the same operation over and over again on new columns as they appear in the dataframe schema? This works:
from pyspark.sql import functions
#search for column names you want to sum, I put in "month"
column_search = lambda col_names: 'month' in col_names
#get column names of temp dataframe w/ only the columns you want to sum
relevant_columns = original_df.select(*filter(column_search, original_df.columns)).columns
#create dictionary with relevant column names to be passed to the agg function
columns = {col_names: "sum" for col_names in relevant_columns}
#apply agg function with your groupBy, passing in columns dictionary
grouped_df = original_df.groupBy("customer").agg(columns)
#show result
grouped_df.show()
Some important concepts that can help you learn:
DataFrames store their column names in a list: dataframe.columns
Functions can be applied to lists to create new lists, as in "column_search"
The agg function accepts a dictionary mapping column names to aggregate functions, which is what I pass in as "columns"
Spark is lazy, so it doesn't change data state or perform operations until you trigger an action like show(). This means that writing out a temporary DataFrame just to use one element of it, such as the column list, as I do above, is not costly, even though it may seem inefficient if you're used to SQL.
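If you prefer named output columns instead of the default sum(month_1_count)-style names that the dictionary form produces, an equivalent sketch using explicit aggregate expressions (assuming the same original_df and the "month" column-name convention above) would be:
from pyspark.sql import functions

# build one sum expression per matching column, with a predictable alias
sum_exprs = [functions.sum(c).alias(c + "_total") for c in original_df.columns if "month" in c]

# same grouping as before, but with explicit, aliased aggregates
grouped_df = original_df.groupBy("customer").agg(*sum_exprs)
grouped_df.show()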

Create Dataset<Row> from a Dataset created by reading from a socket (Spark Java)

In Spark Streaming, when the input source is a csv file and I read it through a socket (Java), a Dataset<Row> is created with only a string column, and the value of each row contains each line sent through the socket.
When I know the format of each line, e.g. the first two values of the csv line are Strings, the next is an integer, and so on, is it possible to declare my schema and create another Dataset<Row> based on that schema, placing the data accordingly?
Thank you in advance.
First of all, if it is a csv file I don't see any point in using Spark Streaming for that. It will be historical data; the data is not changing. So you should use Spark SQL only to read and process the csv.
You can create your schema by creating StructFields and declaring the data types.
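A minimal sketch of that idea, shown in PySpark for illustration (the Java API has the same StructType/StructField builders); the column names, types, and file path here are assumptions based on the description above:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("csv-with-schema").getOrCreate()

# hypothetical schema matching "two strings, then an integer"
schema = StructType([
    StructField("first_value", StringType(), True),
    StructField("second_value", StringType(), True),
    StructField("count", IntegerType(), True),
])

# read the csv directly with the declared schema instead of streaming it over a socket
df = spark.read.schema(schema).csv("/path/to/input.csv")
df.printSchema()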

DF insertInto is not persisting all columns for mixed structured data (json, string)

DataFrame saveAsTable persists all the column values properly, but the insertInto function does not store all the columns; in particular, the json data is truncated and the subsequent column is not stored in the Hive table.
Our Environment
Spark 2.2.0
EMR 5.10.0
Scala 2.11.8
The sample data is
a8f11f90-20c9-11e8-b93e-2fc569d27605 efe5bdb3-baac-5d8e-6cae57771c13 Unknown E657F298-2D96-4C7D-8516-E228153FE010 NonDemarcated {"org-id":"efe5bdb3-baac-5d8e-6cae57771c13","nodeid":"N02c00056","parkingzoneid":"E657F298-2D96-4C7D-8516-E228153FE010","site-id":"a8f11f90-20c9-11e8-b93e-2fc569d27605","channel":1,"type":"Park","active":true,"tag":"","configured_date":"2017-10-23 23:29:11.20","vs":[5.0,1.7999999523162842,1.5]}
DF SaveAsTable
val spark = SparkSession.builder().appName("Spark SQL Test").
config("hive.exec.dynamic.partition", "true").
config("hive.exec.dynamic.partition.mode", "nonstrict").
enableHiveSupport().getOrCreate()
val zoneStatus = spark.table("zone_status")
zoneStatus.select(col("site-id"),col("org-id"), col("groupid"), col("zid"), col("type"), lit(0), col("config"), unix_timestamp().alias("ts")).
write.mode(SaveMode.Overwrite).saveAsTable("dwh_zone_status")
Stored data properly in result table:
a8f11f90-20c9-11e8-b93e-2fc569d27605 efe5bdb3-baac-5d8e-6cae57771c13 Unknown E657F298-2D96-4C7D-8516-E228153FE010 NonDemarcated 0 {"org-id":"efe5bdb3-baac-5d8e-6cae57771c13","nodeid":"N02c00056","parkingzoneid":"E657F298-2D96-4C7D-8516-E228153FE010","site-id":"a8f11f90-20c9-11e8-b93e-2fc569d27605","channel":1,"type":"Park","active":true,"tag":"","configured_date":"2017-10-23 23:29:11.20","vs":[5.0,1.7999999523162842,1.5]} 1520453589
DF insertInto
zoneStatus.
select(col("site-id"),col("org-id"), col("groupid"), col("zid"), col("type"), lit(0), col("config"), unix_timestamp().alias("ts")).
write.mode(SaveMode.Overwrite).insertInto("zone_status_insert")
But insertInto is not persisting all the contents. The json string is stored only partially, and the subsequent column is not stored.
a8f11f90-20c9-11e8-b93e-2fc569d27605 efe5bdb3-baac-5d8e-6cae57771c13 Unknown E657F298-2D96-4C7D-8516-E228153FE010 NonDemarcated 0 {"org-id":"efe5bdb3-baac-5d8e-6cae57771c13" NULL
We use the insertInto function in our projects and recently ran into this while parsing the json data to pull other metrics. We noticed that the config content is not stored fully. We are planning to change to saveAsTable, but we could avoid the code change if there is a workaround that can be added to the Spark configuration.
You can use the alternative ways below to insert data into the table.
val zoneStatusDF = zoneStatus.
select(col("site-id"),col("org-id"), col("groupid"), col("zid"), col("type"), lit(0), col("config"), unix_timestamp().alias("ts"))
zoneStatusDF.registerTempTable("zone_status_insert")
Or
zoneStatus.sqlContext.sql("create table zone_status_insert as select * from zone_status")
The reason is that the target table's schema was created with
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE
Because the json in the config column contains commas, the comma field delimiter makes Hive cut the json at the first comma and drop the subsequent column. After removing ROW FORMAT DELIMITED FIELDS TERMINATED BY ',', insertInto is able to save the entire contents.
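A hedged sketch of that fix, shown in PySpark for illustration (the same spark.sql call works from Scala) and assuming you are free to recreate the target table; the column names follow the example above, while the flag column name and the data types are assumptions:
# recreate the target table in a format that does not split fields on commas (e.g. Parquet),
# so the embedded commas in the json config column are preserved by insertInto
spark.sql("DROP TABLE IF EXISTS zone_status_insert")
spark.sql("""
    CREATE TABLE zone_status_insert (
        `site-id` STRING,
        `org-id`  STRING,
        groupid   STRING,
        zid       STRING,
        type      STRING,
        flag      INT,
        config    STRING,
        ts        BIGINT
    )
    STORED AS PARQUET
""")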

How to read Hive table with column with JSON strings?

I have a Hive table with a column (Json_String String) that has some 1000 rows, where each row is a Json of the same structure. I am trying to read the json into a DataFrame as below:
val df = sqlContext.read.json("select Json_String from json_table")
but it is throwing the exception below:
java.io.IOException: No input paths specified in job
Is there any way to read all the rows into a DataFrame, as we do with Json files using a wildcard?
val df = sqlContext.read.json("file:///home/*.json")
I think what you're asking for is to read the Hive table as usual and transform the JSON column using the from_json function.
from_json(e: Column, schema: StructType): Column Parses a column containing a JSON string into a StructType with the specified schema. Returns null, in the case of an unparseable string.
Given that you use sqlContext in your code, I'm afraid you are on Spark < 2.1.0, which does not offer from_json (it was added in 2.1.0).
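For reference, on Spark 2.1+ the from_json route described above would look roughly like this (shown in PySpark for illustration; the JSON field names in the schema are hypothetical):
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType

# hypothetical schema for the JSON documents stored in Json_String
json_schema = StructType([
    StructField("id", StringType(), True),
    StructField("name", StringType(), True),
])

df = spark.table("json_table") \
    .withColumn("parsed", from_json(col("Json_String"), json_schema)) \
    .select("parsed.*")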
The solution then is to use a custom user-defined function (UDF) to do the parsing yourself.
val df = sqlContext.read.json("select Json_String from json_table")
The above won't work since json operator expects a path or paths to JSON files on disk (not as a result of executing a query against a Hive table).
json(paths: String*): DataFrame Loads a JSON file (JSON Lines text format or newline-delimited JSON) and returns the result as a DataFrame.
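For completeness, a minimal sketch of the UDF approach for Spark < 2.1, again in PySpark for illustration; the table and column names follow the question, and the JSON fields are the same hypothetical ones as above:
import json

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StructType, StructField, StringType

# hypothetical schema for the JSON documents; adjust to the real structure
json_schema = StructType([
    StructField("id", StringType(), True),
    StructField("name", StringType(), True),
])

# parse one JSON string into a tuple matching the schema; return None on bad input
def parse_json(s):
    try:
        d = json.loads(s)
        return (d.get("id"), d.get("name"))
    except (TypeError, ValueError):
        return None

parse_json_udf = udf(parse_json, json_schema)

# read the Hive table as usual, then apply the UDF to the JSON column
df = sqlContext.table("json_table") \
    .withColumn("parsed", parse_json_udf(col("Json_String"))) \
    .select("parsed.id", "parsed.name")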