Difference Between df.write and CREATE TABLE USING - pyspark

I have always been under the impression that the following code creates a Delta table:
data.write.format("delta").save("/path/to/delta-table")
This creates the files, sure, but I noticed today that when I look at the Data section of Databricks, under hive_metastore, this table does not show up.
In order for this table to show up there, I have to do something like:
CREATE TABLE some_table USING DELTA LOCATION "/path/to/delta-table"
What exactly is going on here? Was I wrong in my understanding that the .write operation creates a table? What is the difference between these commands?

DataFrameWriter has the following methods:
def save(path: String): Unit
Saves the content of the DataFrame at the specified path.
def saveAsTable(tableName: String): Unit
Saves the content of the DataFrame as the specified table.
What you did with .save("/path/to/delta-table") was save the data in Delta format to the filesystem. In order for the table to be visible in the data catalog (a.k.a. the metastore) you need to run CREATE TABLE and provide the location.
You can write data using .saveAsTable("delta-table") - that would write the data under a path managed by the metastore and register the table in one step.
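A minimal sketch of the two approaches, assuming a Databricks-style environment where spark is already available; the path is a placeholder, some_table is the name from the question, and some_managed_table is a hypothetical name for the managed variant:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
data = spark.range(5)  # any DataFrame

# Option 1: write only the Delta files, then register them in the metastore yourself
data.write.format("delta").save("/path/to/delta-table")
spark.sql("CREATE TABLE some_table USING DELTA LOCATION '/path/to/delta-table'")

# Option 2: write the data and register the table in one step (managed table)
data.write.format("delta").saveAsTable("some_managed_table")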

Related

Saved delta file reads as a df - is it still part of delta lake?

I have problems understanding the concept of delta lake. Example:
I read a parquet file:
taxi_df = (spark.read.format("parquet").option("header", "true").load("dbfs:/mnt/randomcontainer/taxirides.parquet"))
Then I save it using saveAsTable:
taxi_df.write.format("delta").mode("overwrite").saveAsTable("taxi_managed_table")
Then I read the managed table I just stored:
taxi_read_from_managed_table = (spark.read.format("delta").option("header", "true").load("dbfs:/user/hive/warehouse/taxi_managed_table/"))
... and when I check the type, it shows "pyspark.sql.dataframe.DataFrame", not DeltaTable:
type(taxi_read_from_managed_table) # returns pyspark.sql.dataframe.DataFrame
Only after I transform it explicitly using the following command do I receive the type DeltaTable:
taxi_delta_table = DeltaTable.convertToDelta(spark,"parquet.dbfs:/user/hive/warehouse/taxismallmanagedtable/")
type(taxi_delta_table) #returns delta.tables.DeltaTable
Does that mean that the table read back in step 4 is not a Delta table and won't provide the automatic optimizations provided by Delta Lake?
How do you establish whether something is part of Delta Lake or not?
I understand that Delta Live Tables only work with delta.tables.DeltaTable, is that correct?
When you use spark.read...load(), it returns Spark's DataFrame object that you can use to process the data. Under the hood this DataFrame uses the Delta Lake table. The DataFrame abstracts the data source, so you can work with different sources and apply the same operations.
On the other hand, DeltaTable is a specific object that lets you apply only Delta-specific operations. You don't even need to perform convertToDelta to get it - just use the DeltaTable.forPath or DeltaTable.forName functions to obtain an instance.
P.S. if you saved data with .saveAsTable(my_name), then you don't need to use .load, just use spark.read.table(my_name).
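A short sketch of the distinction, assuming the managed table taxi_managed_table from the question exists and the delta-spark package is installed:
from delta.tables import DeltaTable

# Plain DataFrame - no path needed for a table registered in the metastore
taxi_df = spark.read.table("taxi_managed_table")
taxi_df.printSchema()

# DeltaTable handle for Delta-specific operations (history, vacuum, merge, ...)
taxi_delta = DeltaTable.forName(spark, "taxi_managed_table")
taxi_delta.history().show()

# A DeltaTable can be turned back into a DataFrame at any time
taxi_delta.toDF().count()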

Cassandra Alter Column type from Timestamp to Date

Is there any way to alter a Cassandra column from timestamp to date without data loss? For example, '2021-02-25 20:30:00+0000' to '2021-02-25'.
If not, what is the easiest way to migrate this column (timestamp) to a new column (date)?
It's impossible to change the type of an existing column, so you need to add a new column with the correct data type and perform a migration. The migration could be done via Spark + the Spark Cassandra Connector - it's the most flexible solution, and it could even be done on a single-node machine with Spark running in local master mode (the default). The code could look something like this (try it on test data first):
import pyspark.sql.functions as F
options = { "table": "tbl", "keyspace": "ks"}
spark.read.format("org.apache.spark.sql.cassandra").options(**options).load()\
.select("pk_col1", "pk_col2", F.col("timestamp_col").cast("date").alias("new_name"))\
.write.format("org.apache.spark.sql.cassandra").options(**options).save()
P.S. You can use DSBulk, for example, but you need to have enough space to offload the data (although you only need the primary key columns + your timestamp).
To add to Alex Ott's answer, there are validations in Cassandra that prevent changing the data type of a column. The reason is that SSTables (Cassandra data files) are immutable -- once they are written to disk, they are never modified/edited/updated. They can only be compacted into new SSTables.
Some try to get around it by dropping the column from the table and then adding it back with a new data type. Unlike in a traditional RDBMS, the existing data in the SSTables doesn't get updated, so if you try to read the old data you'll get a CorruptSSTableException because the CQL type of the data on disk won't match that of the schema.
For this reason, it is no longer possible to drop/recreate columns with the same name (CASSANDRA-14948). If you're interested, I've explained it in a bit more detail in this post -- https://community.datastax.com/questions/8018/. Cheers!
You can use ToDate() to convert it when querying. For example, table Email has a column Date with the format 2001-08-29 13:03:35.000000+0000.
Select Date, ToDate(Date) as Convert from keyspace.Email:
 date                             | convert
----------------------------------+------------
 2001-08-29 13:03:35.000000+0000  | 2001-08-29

Spark JDBC - How to stop automatic creation of table if table doesn't exist

I'm currently using jdbc to write data into an existing table.
jdbcDF.write
.mode(SaveMode.Append)
.jdbc("jdbc:postgresql:dbserver", "schema.tablename", connectionProperties)
I'm currently using code similar to the above. When given a wrong table name, it will create that table and write the data there as opposed to throwing an error. Is there any way to stop that behaviour?
This behaviour is not supported by Spark. You would need to write your own logic around it, for example checking whether the table exists before writing (see the sketch after the list below).
According to the ScalaDocs for the SaveMode enumeration, you have the following options when writing data to a sink:
Append: Append mode means that when saving a DataFrame to a data source, if data/table already exists, contents of the DataFrame are expected to be appended to existing data.
ErrorIfExists: ErrorIfExists mode means that when saving a DataFrame to a data source, if data already exists, an exception is expected to be thrown.
Ignore: Ignore mode means that when saving a DataFrame to a data source, if data already exists, the save operation is expected to not save the contents of the DataFrame and to not change the existing data.
Overwrite: Overwrite mode means that when saving a DataFrame to a data source, if data/table already exists, existing data is expected to be overwritten by the contents of the DataFrame.
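A minimal PySpark sketch of such a pre-write check, assuming a PostgreSQL source; the URL, table name, connection properties, and the DataFrame jdbcDF are placeholders:
url = "jdbc:postgresql://dbserver/mydb"  # hypothetical JDBC URL
table = "schema.tablename"
props = {"user": "user", "password": "secret", "driver": "org.postgresql.Driver"}

def table_exists(spark, url, table, props):
    # The JDBC reader resolves the table's schema eagerly, so this call
    # fails immediately if the table does not exist
    try:
        spark.read.jdbc(url=url, table=table, properties=props)
        return True
    except Exception:
        return False

if table_exists(spark, url, table, props):
    jdbcDF.write.mode("append").jdbc(url, table, properties=props)
else:
    raise ValueError(f"Table {table} does not exist - refusing to create it")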

Issue: inserting data into Hive creates small part files

I am processing more than 1,000,000 records from a JSON file. I am reading the file line by line and extracting the required key values
(the JSON has a mixed structure that is not fixed, so I parse it and generate the required JSON elements), build a JSON string similar to the json_string variable below, and push it to a Hive table. The data is stored properly, but the apps/hive/warehouse/jsondb.myjson_table folder on Hadoop contains small part files. Every insert query creates a new part file (0.1 to 0.20 kB). Because of that, even a simple query on Hive takes more than 30 minutes. Below is sample code of my logic; this iterates multiple times as new records are inserted into Hive.
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName("SparkSessionZipsExample").enableHiveSupport().getOrCreate()
import spark.implicits._  // needed for .toDS below
var json_string = """{"name":"yogesh_wagh","education":"phd" }"""
val df = spark.read.json(Seq(json_string).toDS)
//df.write.format("orc").saveAsTable("bds_data1.newversion");
df.write.mode("append").format("orc").insertInto("bds_data1.newversion");
I have also tried adding Hive properties to merge the files, but it doesn't work.
I have also tried creating a table from the existing table to combine the small part files into 256 MB files.
Please share sample code to insert multiple records and append records to an existing part file.
I think each of those individual inserts creates a new part file.
You could create a Dataset/DataFrame of these JSON strings and then save it to the Hive table in a single write, as sketched below.
You could also merge the existing small files using the Hive DDL ALTER TABLE table_name CONCATENATE;
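A minimal PySpark sketch of the batching idea, assuming the target table bds_data1.newversion already exists and json_strings is a hypothetical list of JSON records collected from the input file:
# Hypothetical batch of JSON strings collected from the input file
json_strings = [
    '{"name":"yogesh_wagh","education":"phd"}',
    '{"name":"other_user","education":"msc"}',
]
# Parse the whole batch into a single DataFrame instead of one DataFrame per record
df = spark.read.json(spark.sparkContext.parallelize(json_strings))
# One write produces a handful of part files instead of one file per record;
# coalesce(1) further reduces the number of output files for a small batch
df.coalesce(1).write.mode("append").format("orc").insertInto("bds_data1.newversion")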

How can I resolve table names to Parquet on the fly?

I need to run Spark SQL queries with my own custom correspondence from table names to Parquet data. Reading Parquet data to DataFrames with sqlContext.read.parquet and registering the DataFrames with df.registerTempTable isn't cutting it for my use case, because those calls have to be run before the SQL query, when I might not even know what tables are needed.
Rather than using registerTempTable, I'm trying to write an Analyzer that resolves table names using my own logic. However, I need to be able to resolve an UnresolvedRelation to a LogicalPlan representing Parquet data, but sqlContext.read.parquet gives a DataFrame, not a LogicalPlan.
A DataFrame seems to have a logicalPlan attribute, but that's marked protected[sql]. There's also a ParquetRelation class, but that's private[sql]. That's all I found for ways to get a LogicalPlan.
How can I resolve table names to Parquet with my own logic? Am I even on the right track with Analyzer?
You can actually retrieve the logicalPlan of your DataFrame with
val myLogicalPlan: LogicalPlan = myDF.queryExecution.logical