UnsupportedOperationException: Unimplemented Type: DoubleType - pyspark

I'm trying to write a pyspark df to Snowflake using a function I've written:
def s3_to_snowflake(schema, table):
    df = get_dataframe(schema, table, sqlContext)
    username = user
    password = passw
    account = acct
    snowflake_options = {
        "sfURL": account + ".us-east-1.snowflakecomputing.com",
        "sfAccount": account,
        "sfUser": username,
        "sfPassword": password,
        "sfDatabase": "database",
        "sfSchema": schema,
        "sfWarehouse": "demo_wh"
    }
    sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", "KeyId")
    sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", "AccessKey")
    (
        df
        .write
        .format("net.snowflake.spark.snowflake")
        .mode("overwrite")
        .options(**snowflake_options)
        .option("dbtable", table)
        .option('tempDir', 's3://data-temp-loads/snowflake')
        .save()
    )
    print('Wrote {0} to {1}.'.format(table, schema))
This function has worked for all but one of the tables I've got in my datalake.
This is the schema of the table I'm trying to write.
root
|-- credit_transaction_id: string (nullable = true)
|-- credit_deduction_amt: double (nullable = true)
|-- credit_adjustment_time: timestamp (nullable = true)
The error I'm getting looks like Snowflake is taking issue with that DoubleType column. I've had this issue before with Hive when using Avro/ORC filetypes. Usually it's a matter of casting one datatype to another.
Things I've tried:
Casting the column (Double to Float, Double to String, and Double to Numeric, this last one per the Snowflake docs); a sketch of what those casts looked like is below
Rerunning the DDL of the incoming table, trying Float, String, and Numeric types
One other thing of note: some of the tables I've transferred successfully also have DoubleType columns, so I'm unsure what the issue with this particular table is.
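A sketch of the kind of cast that was attempted (using pyspark.sql.functions; the column name comes from the schema above):
from pyspark.sql.functions import col

# Sketch of the attempted casts; none of them changed the error
df = df.withColumn("credit_deduction_amt", col("credit_deduction_amt").cast("float"))
# ...and likewise .cast("string") and .cast("decimal(38,9)") for the Numeric attempt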

After poking around online, it seems to me that this error is being thrown by Spark's Parquet reader:
https://github.com/apache/spark/blob/branch-2.0/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java
Are the files backing df Parquet? I think this may be a read error rather than a write error; it might be worth taking a look at what's going on in get_dataframe.
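If they are, a couple of things might be worth trying inside get_dataframe. This is only a rough sketch, not a confirmed fix, and the S3 path below is a placeholder:
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# 1) Fall back to the non-vectorized Parquet reader, which avoids VectorizedColumnReader entirely
sqlContext.setConf("spark.sql.parquet.enableVectorizedReader", "false")

# 2) Or read with an explicit schema matching the table definition,
#    instead of whatever Spark picks up from the Parquet footers
expected_schema = StructType([
    StructField("credit_transaction_id", StringType(), True),
    StructField("credit_deduction_amt", DoubleType(), True),
    StructField("credit_adjustment_time", TimestampType(), True),
])
df = sqlContext.read.schema(expected_schema).parquet("s3://placeholder-bucket/path/to/table")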
Thanks,
etduwx

Related

Data format inconsistency during read/write parquet file with spark

Here is the schema of the input data that I read from a file, myIntialFile.parquet, with Spark/Scala:
val df = spark.read.format("parquet").load("/usr/sample/myIntialFile.parquet")
df.printSchema
root
|-- details: string (nullable = true)
|-- infos: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- text: string (nullable = true)
| | |-- id: string (nullable = true)
| | |-- value: string (nullable = true)
Then I just did a .select("infos") and wrote the dataframe back out as a parquet file (let's say sparkProcessedFile.parquet). And of course, the schema of the infos column remained unchanged.
On the other hand, when I compare the schemas of myIntialFile.parquet and sparkProcessedFile.parquet using pyarrow, I don't get the same data schema:
import pyarrow.parquet as pa
initialTable = pa.read_table('myIntialFile.parquet')
initialTable.schema
infos: list<array: struct<text: string, id: string, value: string> not null>
sparkProcessedTable = pa.read_table('sparkProcessedFile.parquet')
sparkProcessedTable.schema
infos: list<element: struct<text: string, id: string, value: string>>
I don't understand why there is a difference (list<array<struct>> instead of list<struct>) and why spark changed the initial nested structure with a simple select.
Thanks for any suggestions.
The actual data type didn't change. In both cases infos is a variable-sized list of structs; in other words, each value in the infos column is a list of structs.
Arguably, there isn't much point to the name array or element. I think different parquet readers/writers basically just make something up here. Note that pyarrow will call the field "item" when creating a new array from memory:
>>> pa.list_(pa.struct([pa.field('text', pa.string()), pa.field('id', pa.string()), pa.field('value', pa.string())]))
ListType(list<item: struct<text: string, id: string, value: string>>)
It appears that Spark is normalizing the "list element name" to element (or perhaps to whatever is in its own schema) regardless of what is actually in the parquet file. This seems like a reasonable thing to do (although one could imagine it causing a compatibility issue).
Perhaps more concerning is the fact that the field changed from "not null" to "nullable". Again, Spark reports the field as "nullable", so either Spark has decided that all array columns are nullable, or Spark decided the schema required it to be nullable in some other way.
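To see that both the element name and its nullability are just metadata on the list's child field, you can build the two variants directly in pyarrow (an illustration constructed by hand, not read from your files):
>>> import pyarrow as pa
>>> struct = pa.struct([pa.field('text', pa.string()),
...                     pa.field('id', pa.string()),
...                     pa.field('value', pa.string())])
>>> pa.list_(pa.field('element', struct))  # the name Spark writes
ListType(list<element: struct<text: string, id: string, value: string>>)
>>> pa.list_(pa.field('array', struct, nullable=False))  # the name and nullability in the original file
ListType(list<array: struct<text: string, id: string, value: string> not null>)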

Saving empty dataframe to parquet results in error - Spark 2.4.4

I have a piece of code where, at the end, I write a dataframe to a parquet file.
The logic is such that the dataframe can sometimes be empty, and hence I get the error below.
df.write.format("parquet").mode("overwrite").save(somePath)
org.apache.spark.sql.AnalysisException: Parquet data source does not support null data type.;
When I print the schema of df, I get the following.
df.schema
res2: org.apache.spark.sql.types.StructType =
StructType(
StructField(rpt_date_id,IntegerType,true),
StructField(rpt_hour_no,ShortType,true),
StructField(kpi_id,IntegerType,false),
StructField(kpi_scnr_cd,StringType,false),
StructField(channel_x_id,IntegerType,false),
StructField(brand_id,ShortType,true),
StructField(kpi_value,FloatType,false),
StructField(src_lst_updt_dt,NullType,true),
StructField(etl_insrt_dt,DateType,false),
StructField(etl_updt_dt,DateType,false)
)
Is there a workaround to just write the empty file with schema, or not write the file at all when empty?
Thanks
The error you are getting is not related to the fact that your dataframe is empty. I don't see the point of saving an empty dataframe, but you can do it if you want. Try this if you don't believe me:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val schema = StructType(
  Array(
    StructField("col1", StringType, true),
    StructField("col2", StringType, false)
  )
)

spark.createDataFrame(spark.sparkContext.emptyRDD[Row], schema)
  .write
  .format("parquet")
  .save("/tmp/test_empty_df")
You are getting that error because one of your columns is of NullType and, as the thrown exception indicates, "Parquet data source does not support null data type".
I can't know for sure why you have a column with NullType, but that usually happens when you read your data from a source and let Spark infer the schema. If that source contains an empty column, Spark won't be able to infer its type and will set it to NullType.
If this is what's happening, my advice is that you specify the schema on read.
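For instance, here is a rough sketch of specifying the schema up front, shown in pyspark; the reader and path are placeholders for wherever the data actually comes from:
from pyspark.sql.types import StructType, StructField, IntegerType, TimestampType

# Give src_lst_updt_dt (the NullType column above) a concrete type so Spark
# never has to infer it from empty data; the remaining columns would be added the same way
explicit_schema = StructType([
    StructField("rpt_date_id", IntegerType(), True),
    StructField("src_lst_updt_dt", TimestampType(), True),
])

df = spark.read.schema(explicit_schema).json("/path/to/source")  # placeholder reader and path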
If this is not the case, a possible solution is to cast all the columns of NullType to a parquet-compatible type (like StringType). Here is an example of how to do it:
//df is a dataframe with a column of NullType
val df = Seq(("abc",null)).toDF("col1", "col2")
df.printSchema
root
|-- col1: string (nullable = true)
|-- col2: null (nullable = true)
//fold left to cast all NullType columns to StringType
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{NullType, StringType}

val df1 = df.columns.foldLeft(df) { (acc, cur) =>
  if (df.schema(cur).dataType == NullType)
    acc.withColumn(cur, col(cur).cast(StringType))
  else
    acc
}
df1.printSchema
root
|-- col1: string (nullable = true)
|-- col2: string (nullable = true)
Hope this helps
'or not write the file at all when empty?' Check whether df is non-empty and only write it then:
if (!df.isEmpty)
df.write.format("parquet").mode("overwrite").save("somePath")

Not able to override Schema of a CSV file in Spark 2.x

I have a CSV file, test.csv:
col
1
2
3
4
When I read it using Spark, it gets the schema of data correct:
val df = spark.read.option("header", "true").option("inferSchema", "true").csv("test.csv")
df.printSchema
root
|-- col: integer (nullable = true)
But when I override the schema of the CSV file and set inferSchema to false, the SparkSession picks up the custom schema only partially.
val df = spark.read.option("header", "true").option("inferSchema", "false").schema(StructType(List(StructField("custom", StringType, false)))).csv("test.csv")
df.printSchema
root
|-- custom: string (nullable = true)
I mean, only the column name (custom) and DataType (StringType) are picked up. The nullable part is ignored: it still comes out as nullable = true, which is incorrect.
I am not able to understand this behavior. Any help is appreciated!
Consider this excerpt from the documentation about Parquet (a popular "Big Data" storage format):
"Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data. When writing Parquet files, all columns are automatically converted to be nullable for compatibility reasons."
CSV is handled the same way for the same reason.
As for what "compatibility reasons" means, Nathan Marz in his book Big Data describes that an ideal storage schema is both strongly typed for integrity and flexible for evolution. In other words, it should be easy to add and remove fields and not have your analytics blow up. Parquet is both typed and flexible; CSV is just flexible. Spark honors that flexibility by making columns nullable no matter what you do. You can debate whether you like that approach.
A SQL table has a rigorously defined schema that is hard to change; so much so that Scott Ambler wrote a big book on how to refactor them. Parquet and CSV are much less rigorous. Both are suited to the paradigms for which they were built, and Spark's approach is to take the liberal approach typically associated with "Big Data" storage formats.
I believe the "inferSchema" option is common to, and applies to, all the columns in a dataframe. But if we want to change the nullable property of a specific column, we could handle it with something like:
setNullableStateOfColumn(df, "col", false)
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{StructField, StructType}

def setNullableStateOfColumn(df: DataFrame, cn: String, nullable: Boolean): DataFrame = {
  // get the schema
  val schema = df.schema
  // modify the StructField with name `cn`
  val newSchema = StructType(schema.map {
    case StructField(c, t, _, m) if c.equals(cn) => StructField(c, t, nullable = nullable, m)
    case y: StructField => y
  })
  // apply the new schema
  df.sqlContext.createDataFrame(df.rdd, newSchema)
}
There is a similar thread for setting the nullable property of an element,
Change nullable property of column in spark dataframe

Splitting contents of a dataframe column using Spark 1.4.1 for nested gz file

I am having difficulty splitting the contents of a dataframe column using Spark 1.4.1 for a nested gz file. I used the map function to map the attributes of the gz file.
The data is in the following format:
"id": "tag:1234,89898",
"actor":
{
"objectType": "person",
"id": "id:1234",
"link": "http:\wwww.1234.com/"
},
"body",
I am using the following code to split the columns and read the data file.
val dataframe= sc.textFile(("filename.dat.gz")
.toString())
.map(_.split(","))
.map(r => {(r(0), r(1),r(2))})
.toDF()
dataframe.printSchema()
But the result is something like:
root
|-- _1: string (nullable = true)
|-- _2: string (nullable = true)
|-- _3: string (nullable = true)
This is the incorrect format. I want the schema to be in the format:
----- id
----- actor
---objectType
---id
---link
-----body
Am I doing something wrong? I need to use this code to do some pre-processing on my data set and apply some transformations.
This data looks like JSON. Fortunately, Spark supports the easy ingestion of JSON data using Spark SQL. From the Spark Documentation:
Spark SQL can automatically infer the schema of a JSON dataset and load it as a DataFrame. This conversion can be done using SQLContext.read.json() on either an RDD of String, or a JSON file.
Here is a modified version of the example from the docs:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val myData = sc.textFile("myPath").map(s => makeValidJSON(s))
val myNewData = sqlContext.read.json(myData)
// The inferred schema can be visualized using the printSchema() method.
myNewData.printSchema()
For the makeValidJSON function you just need to concentrate on some string parsing/manipulation strategies to get it right.
Hope this helps.

How can I change column types in Spark SQL's DataFrame?

Suppose I'm doing something like:
val df = sqlContext.load("com.databricks.spark.csv", Map("path" -> "cars.csv", "header" -> "true"))
df.printSchema()
root
|-- year: string (nullable = true)
|-- make: string (nullable = true)
|-- model: string (nullable = true)
|-- comment: string (nullable = true)
|-- blank: string (nullable = true)
df.show()
year make model comment blank
2012 Tesla S No comment
1997 Ford E350 Go get one now th...
But I really wanted the year as Int (and perhaps transform some other columns).
The best I could come up with was
df.withColumn("year2", 'year.cast("Int")).select('year2 as 'year, 'make, 'model, 'comment, 'blank)
org.apache.spark.sql.DataFrame = [year: int, make: string, model: string, comment: string, blank: string]
which is a bit convoluted.
I'm coming from R, and I'm used to being able to write, e.g.
df2 <- df %>%
mutate(year = year %>% as.integer,
make = make %>% toupper)
I'm likely missing something, since there should be a better way to do this in Spark/Scala...
Edit: Newest newest version
Since Spark 2.x you should use the Dataset API instead when using Scala [1]. Check the docs here:
https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Dataset.html#withColumn(colName:String,col:org.apache.spark.sql.Column):org.apache.spark.sql.DataFrame
If you are working with Python, even though it is easier there, I leave the link here as well since this is a very highly voted question:
https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.DataFrame.withColumn.html
>>> df.withColumn('age2', df.age + 2).collect()
[Row(age=2, name='Alice', age2=4), Row(age=5, name='Bob', age2=7)]
[1] https://spark.apache.org/docs/latest/sql-programming-guide.html:
In the Scala API, DataFrame is simply a type alias of Dataset[Row]. While, in the Java API, users need to use Dataset<Row> to represent a DataFrame.
Edit: Newest version
Since spark 2.x you can use .withColumn. Check the docs here:
https://spark.apache.org/docs/2.2.0/api/scala/index.html#org.apache.spark.sql.Dataset#withColumn(colName:String,col:org.apache.spark.sql.Column):org.apache.spark.sql.DataFrame
Oldest answer
Since Spark version 1.4 you can apply the cast method with DataType on the column:
import org.apache.spark.sql.types.IntegerType
val df2 = df.withColumn("yearTmp", df.year.cast(IntegerType))
.drop("year")
.withColumnRenamed("yearTmp", "year")
If you are using sql expressions you can also do:
val df2 = df.selectExpr("cast(year as int) year",
"make",
"model",
"comment",
"blank")
For more info check the docs:
http://spark.apache.org/docs/1.6.0/api/scala/#org.apache.spark.sql.DataFrame
[EDIT: March 2016: thanks for the votes! Though really, this is not the best answer, I think the solutions based on withColumn, withColumnRenamed and cast put forward by msemelman, Martin Senne and others are simpler and cleaner].
I think your approach is ok; recall that a Spark DataFrame is an (immutable) RDD of Rows, so we're never really replacing a column, just creating a new DataFrame each time with a new schema.
Assuming you have an original df with the following schema:
scala> df.printSchema
root
|-- Year: string (nullable = true)
|-- Month: string (nullable = true)
|-- DayofMonth: string (nullable = true)
|-- DayOfWeek: string (nullable = true)
|-- DepDelay: string (nullable = true)
|-- Distance: string (nullable = true)
|-- CRSDepTime: string (nullable = true)
And some UDF's defined on one or several columns:
import org.apache.spark.sql.functions._
val toInt = udf[Int, String]( _.toInt)
val toDouble = udf[Double, String]( _.toDouble)
val toHour = udf((t: String) => "%04d".format(t.toInt).take(2).toInt )
val days_since_nearest_holidays = udf(
(year:String, month:String, dayOfMonth:String) => year.toInt + 27 + month.toInt-12
)
Changing column types or even building a new DataFrame from another can be written like this:
val featureDf = df
.withColumn("departureDelay", toDouble(df("DepDelay")))
.withColumn("departureHour", toHour(df("CRSDepTime")))
.withColumn("dayOfWeek", toInt(df("DayOfWeek")))
.withColumn("dayOfMonth", toInt(df("DayofMonth")))
.withColumn("month", toInt(df("Month")))
.withColumn("distance", toDouble(df("Distance")))
.withColumn("nearestHoliday", days_since_nearest_holidays(
df("Year"), df("Month"), df("DayofMonth"))
)
.select("departureDelay", "departureHour", "dayOfWeek", "dayOfMonth",
"month", "distance", "nearestHoliday")
which yields:
scala> featureDf.printSchema
root
|-- departureDelay: double (nullable = true)
|-- departureHour: integer (nullable = true)
|-- dayOfWeek: integer (nullable = true)
|-- dayOfMonth: integer (nullable = true)
|-- month: integer (nullable = true)
|-- distance: double (nullable = true)
|-- nearestHoliday: integer (nullable = true)
This is pretty close to your own solution. Simply keeping the type changes and other transformations as separate udf vals makes the code more readable and reusable.
As the cast operation is available on Spark Columns (and as I personally do not favour udfs as proposed by @Svend at this point), how about:
df.select( df("year").cast(IntegerType).as("year"), ... )
to cast to the requested type? As a neat side effect, values that are not castable / "convertible" in that sense will become null.
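A quick pyspark check of that side effect (the Scala cast behaves the same way):
>>> from pyspark.sql.functions import col
>>> spark.createDataFrame([("2012",), ("not-a-year",)], ["year"]) \
...     .select(col("year").cast("int").alias("year")).show()
+----+
|year|
+----+
|2012|
|null|
+----+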
In case you need this as a helper method, use:
object DFHelper {
  def castColumnTo(df: DataFrame, cn: String, tpe: DataType): DataFrame = {
    df.withColumn(cn, df(cn).cast(tpe))
  }
}
which is used like:
import DFHelper._
val df2 = castColumnTo( df, "year", IntegerType )
First, if you want to cast a type, then this:
import org.apache.spark.sql
df.withColumn("year", $"year".cast(sql.types.IntegerType))
With the same column name, the column will be replaced with the new one. You don't need to do separate add and delete steps.
Second, about Scala vs. R.
This is the code most similar to R that I can come up with:
val df2 = df.select(
  df.columns.map {
    case year @ "year" => df(year).cast(IntegerType).as(year)
    case make @ "make" => functions.upper(df(make)).as(make)
    case other => df(other)
  }: _*
)
Though the code is a little longer than R's, that has nothing to do with the verbosity of the language. In R, mutate is a special function for R dataframes, while in Scala you can easily write an ad-hoc one thanks to its expressive power.
In a word, it avoids specific solutions, because the language design is good enough for you to quickly and easily build your own domain language.
Side note: df.columns is surprisingly an Array[String] instead of an Array[Column]; maybe they wanted it to look like a Python pandas dataframe.
You can use selectExpr to make it a little cleaner:
df.selectExpr("cast(year as int) as year", "upper(make) as make",
"model", "comment", "blank")
Java code for modifying the datatype of the DataFrame from String to Integer
df.withColumn("col_name", df.col("col_name").cast(DataTypes.IntegerType))
It will simply cast the existing (String) datatype to Integer.
I think this is a lot more readable.
import org.apache.spark.sql.types._
df.withColumn("year", df("year").cast(IntegerType))
This will convert your year column to IntegerType without creating any temporary columns or having to drop them.
If you want to convert to any other datatype, you can check the types inside the org.apache.spark.sql.types package.
To convert the year from string to int, you can add the following option to the CSV reader: "inferSchema" -> "true"; see the Databricks documentation.
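For example, a sketch in pyspark using the same spark-csv reader as the question (the option names are the same in Scala):
df = (sqlContext.read.format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")  # year should now be inferred as an integer instead of a string
      .load("cars.csv"))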
Generate a simple dataset containing five values and convert int to string type:
val df = spark.range(5).select( col("id").cast("string") )
So this only really helps if you're having issues saving to a JDBC target like SQL Server, but it's really helpful for errors you will run into with syntax and types.
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

val SQLServerDialect = new JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:jtds:sqlserver") || url.contains("sqlserver")

  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("VARCHAR(5000)", java.sql.Types.VARCHAR))
    case BooleanType => Some(JdbcType("BIT(1)", java.sql.Types.BIT))
    case IntegerType => Some(JdbcType("INTEGER", java.sql.Types.INTEGER))
    case LongType => Some(JdbcType("BIGINT", java.sql.Types.BIGINT))
    case DoubleType => Some(JdbcType("DOUBLE PRECISION", java.sql.Types.DOUBLE))
    case FloatType => Some(JdbcType("REAL", java.sql.Types.REAL))
    case ShortType => Some(JdbcType("INTEGER", java.sql.Types.INTEGER))
    case ByteType => Some(JdbcType("INTEGER", java.sql.Types.INTEGER))
    case BinaryType => Some(JdbcType("BINARY", java.sql.Types.BINARY))
    case TimestampType => Some(JdbcType("DATE", java.sql.Types.DATE))
    case DateType => Some(JdbcType("DATE", java.sql.Types.DATE))
    // case DecimalType.Fixed(precision, scale) => Some(JdbcType("NUMBER(" + precision + "," + scale + ")", java.sql.Types.NUMERIC))
    case t: DecimalType => Some(JdbcType(s"DECIMAL(${t.precision},${t.scale})", java.sql.Types.DECIMAL))
    case _ => throw new IllegalArgumentException(s"Don't know how to save ${dt.json} to JDBC")
  }
}

JdbcDialects.registerDialect(SQLServerDialect)
For the answers suggesting the use of cast: FYI, the cast method in Spark 1.4.1 is broken.
For example, a dataframe with a string column holding the value "8182175552014127960", when cast to bigint, ends up with the value "8182175552014128100":
df.show
+-------------------+
| a|
+-------------------+
|8182175552014127960|
+-------------------+
df.selectExpr("cast(a as bigint) a").show
+-------------------+
| a|
+-------------------+
|8182175552014128100|
+-------------------+
We had to face a lot of issues before finding this bug because we had bigint columns in production.
df.select($"long_col".cast(IntegerType).as("int_col"))
You can use the code below.
df.withColumn("year", df("year").cast(IntegerType))
It will convert the year column to an IntegerType column.
Using Spark SQL 2.4.0 you can do that:
spark.sql("SELECT STRING(NULLIF(column,'')) as column_string")
This method will drop the old column and create new columns with the same values and a new datatype. My original datatypes when the DataFrame was created were:
root
|-- id: integer (nullable = true)
|-- flag1: string (nullable = true)
|-- flag2: string (nullable = true)
|-- name: string (nullable = true)
|-- flag3: string (nullable = true)
After this I ran the following code to change the datatype:
df=df.withColumnRenamed(<old column name>,<dummy column>) // This was done for both flag1 and flag3
df=df.withColumn(<old column name>,df.col(<dummy column>).cast(<datatype>)).drop(<dummy column>)
After this my result came out to be:
root
|-- id: integer (nullable = true)
|-- flag2: string (nullable = true)
|-- name: string (nullable = true)
|-- flag1: boolean (nullable = true)
|-- flag3: boolean (nullable = true)
So many answers and not many thorough explanations.
The following syntax works using a Databricks notebook with Spark 2.4:
from pyspark.sql.functions import *
df = df.withColumn("COL_NAME", to_date(BLDFm["LOAD_DATE"], "MM-dd-yyyy"))
Note that you have to specify the entry format you have (in my case "MM-dd-yyyy"), and the import is mandatory, as to_date is a Spark SQL function.
I also tried this syntax but got nulls instead of a proper cast:
df = df.withColumn("COL_NAME", df["COL_NAME"].cast("Date"))
(Note that I had to use brackets and quotes for it to be syntactically correct, though.)
PS: I have to admit this is like a syntax jungle; there are many possible entry points, and the official API references lack proper examples.
Another solution is as follows:
1) Keep "inferSchema" as false
2) While running map functions on the rows, read the values as strings (row.getString...)
//Read CSV and create dataset
Dataset<Row> enginesDataSet = sparkSession
    .read()
    .format("com.databricks.spark.csv")
    .option("header", "true")
    .option("inferSchema", "false")
    .load(args[0]);

JavaRDD<Box> vertices = enginesDataSet
    .select("BOX", "BOX_CD")
    .toJavaRDD()
    .map(new Function<Row, Box>() {
        @Override
        public Box call(Row row) throws Exception {
            return new Box(row.getString(0), (String) row.get(1));
        }
    });
Why not just do as described under http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Column.cast
df.select(df.year.cast("int"),"make","model","comment","blank")
One can change the data type of a column by using cast in Spark SQL.
The table name is table and it has only two columns, column1 and column2; the column1 data type is to be changed.
ex: spark.sql("select cast(column1 as Double) column1NewName, column2 from table")
In place of Double, write your data type.
Another way:
// Generate a simple dataset containing five values and convert int to string type
val df = spark.range(5).select( col("id").cast("string")).withColumnRenamed("id","value")
In case you have to cast dozens of columns given by their name, the following example takes the approach of @dnlbrky and applies it to several columns at once:
df.selectExpr(df.columns.map(cn => {
  if (Set("speed", "weight", "height").contains(cn)) s"cast($cn as double) as $cn"
  else if (Set("isActive", "hasDevice").contains(cn)) s"cast($cn as boolean) as $cn"
  else cn
}): _*)
Uncasted columns are kept unchanged. All columns stay in their original order.
val fact_df = df.select($"data"(30) as "TopicTypeId", $"data"(31) as "TopicId",$"data"(21).cast(FloatType).as( "Data_Value_Std_Err")).rdd
//Schema to be applied to the table
val fact_schema = (new StructType).add("TopicTypeId", StringType).add("TopicId", StringType).add("Data_Value_Std_Err", FloatType)
val fact_table = sqlContext.createDataFrame(fact_df, fact_schema).dropDuplicates()
In case you want to change multiple columns of a specific type to another type without specifying individual column names:
/* Get the names of all columns whose type you want to change.
   In this example I want to change all columns of type Array to String. */
val arrColsNames = originalDataFrame.schema.fields
  .filter(f => f.dataType.isInstanceOf[ArrayType])
  .map(_.name)

// iterate over the columns you want to change and cast each to the required type
val updatedDataFrame = arrColsNames.foldLeft(originalDataFrame) { (tempDF, colName) =>
  tempDF.withColumn(colName, tempDF.col(colName).cast(DataTypes.StringType))
}

// display
updatedDataFrame.show(truncate = false)