Insert into hive from dataframe is not working - pyspark

I am trying to insert the records from a dataframe into a Hive table using the command below. The command completes successfully, but the target table is not loaded with any records.
mergerdd.write.mode("append").insertInto("db.tablename")
I expect the records to be loaded into the Hive table.

Please check my solution below; it worked for me.
df.repartition(1).write.format("csv").insertInto('db.tablename',overwrite=True) # CSV
df.repartition(1).write.format("orc").insertInto('db.tablename',overwrite=True) # ORC
df.repartition(1).write.format("parquet").insertInto('db.tablename',overwrite=True) #PARQUET

This approach works for me via spark.sql:
df.coalesce(num_output_files).createOrReplaceTempView(temp_table_name)
spark.sql(f"insert into {db}.{table_name} select * from {temp_table_name}")
Also, is mergerdd an RDD or a Spark dataframe?

Here is another way to do what you are trying to achieve:
df.write.mode("append").saveAsTable("db.tablename")
I use this all the time without any problems.
Hope that helps.

Related

Even after setting "orc.force.positional.evolution" to false, Hive is still picking columns based on position

I have an external table to which I have added a few new columns. I want the ORC-format data written from a Spark dataframe to the Hive external table to be mapped by column name rather than by position, so I have set "orc.force.positional.evolution"="false" in TBLPROPERTIES, but the data is still written by position, which is incorrect.
Please suggest what I am missing here. I have used the question below as a reference:
Hive external table with ORC format- how to map the column names in the orc file to the hive table columns?
I have a workaround of using a select on the Spark dataframe (sketched below), but I am looking for better options that do not require code changes.
The Hive version I am using is 3.1.
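For reference, a minimal sketch of that select-based workaround; df and the table name db.external_orc_table are placeholders, and the idea is simply to reorder the dataframe columns to match the target table's schema before writing.
# Hypothetical names: align the dataframe columns with the target table's
# column order so a positional write still lands in the right columns.
target_cols = spark.table("db.external_orc_table").columns
df.select(*target_cols).write.mode("append").insertInto("db.external_orc_table")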

DataBricks: Ingesting CSV data to a Delta Live Table in Python triggers "invalid characters in table name" error - how to set column mapping mode?

First off, can I just say that I am learning DataBricks at the time of writing this post, so I'd like simpler, cruder solutions as well as more sophisticated ones.
I am reading a CSV file like this:
df1 = spark.read.format("csv").option("header", True).load(path_to_csv_file)
Then I'm saving it as a Delta Live Table like this:
df1.write.format("delta").save("table_path")
The CSV headers have characters in them like space and & and /, and I get the error:
AnalysisException:
Found invalid character(s) among " ,;{}()\n\t=" in the column names of your
schema.
Please enable column mapping by setting table property 'delta.columnMapping.mode' to 'name'.
For more details, refer to https://docs.databricks.com/delta/delta-column-mapping.html
Or you can use alias to rename it.
The documentation I've seen on the issue explains how to set the column mapping mode to 'name' AFTER a table has been created using ALTER TABLE, but does not explain how to set it at creation time, especially when using the DataFrame API as above. Is there a way to do this?
Is there a better way to get CSV into a new table?
UPDATE:
Reading the docs here and here, and inspired by Robert's answer, I tried this first:
spark.conf.set("spark.databricks.delta.defaults.columnMapping.mode", "name")
Still no luck; I get the same error. It's interesting how hard it is for a beginner to write a CSV file with spaces in its headers to a Delta Live Table.
Thanks to Hemant on the Databricks community forum, I have found the answer.
df1.write.format("delta").option("delta.columnMapping.mode", "name")
.option("path", "table_path").saveAsTable("new_table")
Now I can either query it with SQL or load it into a Spark dataframe:
SELECT * FROM new_table;
delta_df = spark.read.format("delta").load("table_path")
display(delta_df)
SQL Way
This method does the same thing but in SQL.
First, create a CSV-backed table for your CSV file:
CREATE TABLE table_csv
USING CSV
OPTIONS (path '/path/to/file.csv', header 'true', mode 'FAILFAST');
Then create a Delta table using the CSV-backed table:
CREATE TABLE delta_table
USING DELTA
TBLPROPERTIES ("delta.columnMapping.mode" = "name")
AS SELECT * FROM table_csv;
SELECT * FROM delta_table;
I've verified that if I omit the TBLPROPERTIES clause, I get the same error as I did when using Python.
I guess the Python answer would be to use spark.sql and run this from Python; that way I could embed the CSV path variable in the SQL (see the sketch below).
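A hedged sketch of that idea, reusing the SQL above with the CSV path embedded as a Python variable (the path and table names are purely illustrative):
csv_path = "/path/to/file.csv"  # illustrative path
spark.sql(f"""
    CREATE TABLE table_csv
    USING CSV
    OPTIONS (path '{csv_path}', header 'true', mode 'FAILFAST')
""")
spark.sql("""
    CREATE TABLE delta_table
    USING DELTA
    TBLPROPERTIES ("delta.columnMapping.mode" = "name")
    AS SELECT * FROM table_csv
""")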
You can set the option in the Spark Configuration of the cluster you are using. That is how you enable the mode at runtime.
You could also set the config at runtime like this:
spark.conf.set("spark.databricks.<name-of-property>", <value>)

AWS glue pyspark - converting one row from source table to multiple rows in destination

I have the requirement below.
How can I achieve this using the pyspark explode function?
#Mohammad Murtaza Hashmi
In need of your help again.
F.split(F.concat_ws(',',*(x for x in df.columns if x.startswith('daily_qty'))),',')
I am not sure how to modify the above to satisfy the requirement below.
Currently the destination table looks like the one below, which is wrong.
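Since the exact source and destination layouts are not reproduced here, the following is only a hedged sketch of the usual pattern: collect the daily_qty columns into an array (as in the snippet above) and explode it so each value becomes its own row. The column names, including item_id, are placeholders.
from pyspark.sql import functions as F

# Placeholder names: gather the daily_qty_* columns into one array column
qty_cols = [c for c in df.columns if c.startswith('daily_qty')]
arr_df = df.withColumn('daily_qty', F.array(*[F.col(c) for c in qty_cols]))

# posexplode turns one source row into one destination row per daily_qty value
exploded = arr_df.select('item_id', F.posexplode('daily_qty').alias('day', 'qty'))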

How can I resolve table names to Parquet on the fly?

I need to run Spark SQL queries with my own custom correspondence from table names to Parquet data. Reading Parquet data to DataFrames with sqlContext.read.parquet and registering the DataFrames with df.registerTempTable isn't cutting it for my use case, because those calls have to be run before the SQL query, when I might not even know what tables are needed.
Rather than using registerTempTable, I'm trying to write an Analyzer that resolves table names using my own logic. However, I need to be able to resolve an UnresolvedRelation to a LogicalPlan representing Parquet data, but sqlContext.read.parquet gives a DataFrame, not a LogicalPlan.
A DataFrame seems to have a logicalPlan attribute, but that's marked protected[sql]. There's also a ParquetRelation class, but that's private[sql]. That's all I found for ways to get a LogicalPlan.
How can I resolve table names to Parquet with my own logic? Am I even on the right track with Analyzer?
You can actually retrieve the logicalPlan of your DataFrame with
val myLogicalPlan: LogicalPlan = myDF.queryExecution.logical

Apache Spark Multiple Aggregations

I am using Apache Spark in Scala to run aggregations on multiple columns in a dataframe, for example:
select column1, sum(1) as count from df group by column1
select column2, sum(1) as count from df group by column2
The actual aggregation is more complicated than just sum(1), but that's beside the point.
Query strings such as the examples above are compiled for each variable that I would like to aggregate, and I execute each string through a Spark SQL context to create a corresponding dataframe that represents the aggregation in question.
The nature of my problem is that I would have to do this for thousands of variables.
My understanding is that Spark will have to "read" the main dataframe each time it executes an aggregation.
Is there maybe an alternative way to do this more efficiently?
Thanks for reading my question, and thanks in advance for any help.
Go ahead and cache the DataFrame after you build it from your source data. Also, to avoid writing all the queries in the code, put them in a file and pass the file in at run time; have something in your code that reads the file and runs the queries. The best part about this approach is that you can change your queries by updating the file rather than the application. Just make sure you find a way to give each output a unique name.
In PySpark, it would look something like this.
dataframe = sqlContext.read.parquet("/path/to/file.parquet")
# do your manipulations/filters
dataframe.cache()
dataframe.registerTempTable("df")  # so the queries can reference the table "df"
queries = ...  # however you want to read/parse the query file
for i, query in enumerate(queries):
    output = sqlContext.sql(query)
    output.write.parquet("/path/to/output_{}.parquet".format(i))
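Purely as an illustration (the file name and format are up to you), the query file could hold one statement per line and be read like this:
with open("/path/to/queries.sql") as f:
    queries = [line.strip() for line in f if line.strip()]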