Apache Spark - Error persisting Dataframe to MemSQL database using JDBC driver - scala

I'm currently facing an issue while trying to save an Apache Spark DataFrame, loaded from an Apache Spark temp table, to a distributed MemSQL database.
The catch is that I cannot use the MemSQLContext connector for the moment, so I'm using the JDBC driver.
Here is my code:
//store suppliers data from temp table into a dataframe
val suppliers = sqlContext.read.table("tmp_SUPPLIER")
//append data to the target table
suppliers.write.mode(SaveMode.Append).jdbc(url_memsql, "R_SUPPLIER", prop_memsql)
Here is the error message (occurring during the suppliers.write statement):
java.sql.SQLException: Distributed tables must either have a PRIMARY or SHARD key.
Note:
The R_SUPPLIER table has exactly the same fields and data types as the temp table, and it has a primary key set.
FYI, here are some clues:
R_SUPPLIER script:
CREATE TABLE R_SUPPLIER
(
    SUP_ID INT NOT NULL PRIMARY KEY,
    SUP_CAGE_CODE CHAR(5) NULL,
    SUP_INTERNAL_SAP_CODE CHAR(5) NULL,
    SUP_NAME VARCHAR(255) NULL,
    SHARD KEY(SUP_ID)
);
The suppliers.write statement has worked once before, but in that case the data had been loaded into the DataFrame with a sqlContext.read.jdbc command rather than sqlContext.sql (i.e. the data was stored in a remote database, not in an Apache Spark local temp table).
Did anyone face the same issue, please?

Are you getting that error when you run the CREATE TABLE statement, or when you run the suppliers.write code? That is an error you should only get when creating a table, so if you are hitting it when running suppliers.write, your code is probably trying to create and write to a new table rather than the one you created before.
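One way to check this (a sketch, not from the original answer) is to read the target table back with the same JDBC settings before writing; if the exact name R_SUPPLIER does not resolve, Spark's Append mode will create the table itself from the DataFrame schema, without a PRIMARY or SHARD key, which is exactly what MemSQL rejects:

// Sketch: confirm R_SUPPLIER is visible under the same url/properties used for the write.
val existing = sqlContext.read.jdbc(url_memsql, "R_SUPPLIER", prop_memsql)
existing.printSchema()  // should match suppliers.schema

// If the table resolves and the schemas line up, the append should reuse it:
suppliers.write.mode(SaveMode.Append).jdbc(url_memsql, "R_SUPPLIER", prop_memsql)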

Related

Flyway - Postgresql partitioned table

I would like to create a partitioned table on a PostgreSQL 11 database using Flyway. When I try to execute a simple SQL file like
CREATE TABLE blabla (id varchar(100) NOT NULL, name varchar(100) NULL)
PARTITION BY LIST(name);
I get an error saying that "PARTITION" is not valid, even though I'm using the latest release of the flyway-core library.
Does anyone know whether partitioned tables on PostgreSQL are supported by Flyway, or what the correct way to create a partitioned table is?

Hive create partitioned table based on Spark temporary table

I have a Spark temporary table spark_tmp_view with a DATE_KEY column. I am trying to create a Hive table from it (without writing the temp table out to a Parquet location first). What I have tried to run is spark.sql("CREATE EXTERNAL TABLE IF NOT EXISTS mydb.result AS SELECT * FROM spark_tmp_view PARTITIONED BY(DATE_KEY DATE)")
The error I got is mismatched input 'BY' expecting <EOF>. I have searched but still haven't been able to figure out how to do it from a Spark app, or how to insert data afterwards. Could someone please help? Many thanks.
PARTITIONED BY is part of the definition of the table being created, so it should precede ...AS SELECT...; see the Spark SQL syntax.
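For illustration (not part of the original answer), a minimal sketch of a CTAS with the clause moved before AS SELECT. It uses a Parquet data source table rather than EXTERNAL, and in a CTAS the partition column is listed by name only, so the exact form may need adjusting to your Spark version and table format:

// Sketch: PARTITIONED BY precedes AS SELECT; DATE_KEY keeps the type it has in the view.
spark.sql(
  """CREATE TABLE IF NOT EXISTS mydb.result
    |USING parquet
    |PARTITIONED BY (DATE_KEY)
    |AS SELECT * FROM spark_tmp_view""".stripMargin)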

Not able to insert data into hive elasticsearch index using spark SQL

I have used the following steps in hive terminal to insert into elasticsearch index -
Create hive table pointing to elasticsearch index
CREATE EXTERNAL TABLE test_es(
id string,
name string
)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'test/person', 'es.mapping.id' = 'id');
Create a staging table and insert data into it
Create table emp(id string,name string) row format delimited fields terminated by ',';
load data local inpath '/home/monami/data.txt' into table emp;
Insert data from staging table into the hive elasticsearch index
insert overwrite table test_es select * from emp;
I could browse the Hive Elasticsearch index successfully by following the above steps in the Hive CLI. But whenever I try to insert in the same way using the Spark SQL hiveContext object, I get the following error -
java.lang.RuntimeException: java.lang.RuntimeException: class org.elasticsearch.hadoop.mr.EsOutputFormat$EsOutputCommitter not org.apache.hadoop.mapred.OutputCommitter
Can you please let me know the reason for this error? If it is not possible to insert this way using Spark, then what is the method to insert into a Hive Elasticsearch index using Spark?
Versions used - Spark 1.6, Scala 2.10, Elasticsearch 6.4, Hive 1.1
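A commonly suggested alternative (not from the original thread) is to skip the Hive storage handler and write the DataFrame directly with the elasticsearch-spark connector. A minimal sketch, assuming the elasticsearch-spark artifact matching Spark 1.6 / Scala 2.10 is on the classpath and es.nodes points at the cluster:

// Sketch: write the staging rows straight to the test/person index.
import org.elasticsearch.spark.sql._

val emp = hiveContext.table("emp")  // the staging table created above
emp.saveToEs("test/person", Map("es.mapping.id" -> "id"))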

Batch Insert from Dataframe to DB ignoring failed row in Pyspark

I am trying to insert a Spark DataFrame into Postgres using a JDBC write. The Postgres table has a unique constraint on one of the columns; when the DataFrame to be inserted violates the constraint, the entire batch is rejected and the Spark session closes with the error duplicate key value violates unique constraint, which is correct since the data is a duplicate (it already exists in the database):
org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:148
What is needed is that the rows which do not violate the constraint are inserted and the failing rows are ignored, without the entire batch failing.
The code used is:
mode = "Append"
url = "jdbc:postgresql://IP/DB name"
properties = {"user": "username", "password": "password"}
# append the DataFrame to the Postgres table over JDBC
DF.write \
    .option("numPartitions", partitions_for_parallelism) \
    .option("batchsize", batch_size) \
    .jdbc(url=url, table="table name", mode=mode, properties=properties)
How can I do this?
Unfortunately, there is no out-of-the-box solution in Spark. I see a number of possible solutions:
Implement the conflict-resolution logic yourself by writing to the PostgreSQL database inside a foreachPartition function: for example, catch the constraint-violation exception and report it to the log.
Drop the constraint on the PostgreSQL table and use an auto-generated primary key, which means duplicated rows can be stored in the database. Deduplication logic can then be implemented as part of each SQL query or run on a daily/hourly basis. You can see an example here.
If there is no other system or process writing to the PostgreSQL table except your Spark job, you can filter out the rows that already exist by joining the Spark DataFrame against the current table contents before the write, something like the sketch below.
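A sketch of that anti-join idea (not from the original answer, and shown in Scala to match the rest of this page; the PySpark calls have the same names). Here DF and url stand for the DataFrame and connection string from the question, while target_table and the key column id are placeholders for the real names:

import java.util.Properties

val props = new Properties()
props.setProperty("user", "username")
props.setProperty("password", "password")

// Keep only rows whose key is not already present in the target, then append as usual.
val existing = spark.read.jdbc(url, "target_table", props).select("id")
val freshRows = DF.join(existing, Seq("id"), "left_anti")
freshRows.write.mode("append").jdbc(url, "target_table", props)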
I hope these ideas are helpful.
That is not possible if you have a unique constraint on the target: there is currently no upsert mode with these techniques, so you need to design around this aspect.
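One common way to design around it (again a sketch, not from the original answers) is to append into a hypothetical staging table that has no unique constraint, then merge into the real table with PostgreSQL's ON CONFLICT DO NOTHING (available since 9.5) over a plain JDBC connection. staging_table, target_table and the constrained column id are placeholder names:

import java.sql.DriverManager
import java.util.Properties

val props = new Properties()
props.setProperty("user", "username")
props.setProperty("password", "password")

// 1) Append everything into staging_table, which has no unique constraint.
DF.write.mode("append").jdbc(url, "staging_table", props)

// 2) Merge into target_table, silently skipping rows that would violate the constraint on id.
val conn = DriverManager.getConnection(url, props)
try {
  val st = conn.createStatement()
  st.executeUpdate(
    "INSERT INTO target_table SELECT * FROM staging_table ON CONFLICT (id) DO NOTHING")
  st.executeUpdate("TRUNCATE staging_table")
} finally {
  conn.close()
}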

Errors while saving Spark Dataframe to Hbase using Apache Phoenix

I'm trying to save a jsonRDD into HBase using the Apache Phoenix Spark plugin: df.saveToPhoenix(tableName, zkUrl = Some(quorumAddress)). The table looks like:
CREATE TABLE IF NOT EXISTS person (
ID BIGINT NOT NULL PRIMARY KEY,
NAME VARCHAR,
SURNAME VARCHAR) SALT_BUCKETS = 40, COMPRESSION='GZ';
I have about 100,000 - 2,000,000 records in tables of this kind. Some of them are saved normally, but some fail with the error:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException:
callTimeout=1200000, callDuration=2902566: row 'PERSON' on table 'SYSTEM.CATALOG' at
region=SYSTEM.CATALOG,,1443172839381.a593d4dbac97863f897bca469e8bac66.,
hostname=hadoop-02,16020,1443292360474, seqNum=339
at org.apache.phoenix.mapreduce.PhoenixRecordWriter.close(PhoenixRecordWriter.java:62)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$5.apply$mcV$sp(PairRDDFunctions.scala:1043)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1294)
What could that mean? Are there any other ways to bulk insert data from a DataFrame into HBase?
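For reference (not from the original thread), the same Phoenix connector also exposes a DataFrame-writer form. A minimal sketch, assuming the phoenix-spark and Phoenix client jars are on the classpath and df holds the rows to save; the connector expects SaveMode.Overwrite and issues UPSERTs under the hood:

import org.apache.spark.sql.SaveMode

// Sketch: write through the phoenix-spark data source instead of the saveToPhoenix implicit.
df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)
  .option("table", "PERSON")
  .option("zkUrl", quorumAddress)
  .save()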