I have the following situation: I have a large Cassandra table (with a large number of columns) which I would like to process with Spark. I want only selected columns to be loaded into Spark (i.e. apply the select and filtering on the Cassandra server itself).
val eptable = sc.cassandraTable("test", "devices")
  .select("device_ccompany", "device_model", "device_type")
The statement above gives a CassandraTableScanRDD, but how do I convert this into a Dataset/DataFrame?
Is there any other way I can do server-side filtering of columns and get DataFrames?
With the DataStax Spark Cassandra Connector, you would read Cassandra data as a Dataset and prune columns on the server side as follows:
val df = spark
.read
.format("org.apache.spark.sql.cassandra")
.options(Map( "table" -> "devices", "keyspace" -> "test" ))
.load()
val dfWithColumnPruned = df
.select("device_ccompany","device_model","device_type")
Note that the selection operation I do after reading is pushed to the server side using Catalyst optimizations. Refer to this document for further information.
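If you want to confirm that the pruning actually happens, a minimal sketch (reusing dfWithColumnPruned from above) is to inspect the query plan:

// The Cassandra scan node in the physical plan should list only
// device_ccompany, device_model and device_type as its output columns.
dfWithColumnPruned.explain(true)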
Related
Scenario: Cassandra is hosted on a server a.b.c.d and Spark runs on a server, say, w.x.y.z.
Assume I want to transform the data from a Cassandra table (say table) and rewrite it to another table (say tableNew) in Cassandra using Spark. The code that I write looks something like this:
val conf = new SparkConf(true)
.set("spark.cassandra.connection.host", "a.b.c.d")
.set("spark.cassandra.auth.username", "<UserName>")
.set("spark.cassandra.auth.password", "<Password>")
val spark = SparkSession.builder().master("yarn")
.config(conf)
.getOrCreate()
val dfFromCassandra = spark.read.format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "<table>", "keyspace" -> "<Keyspace>")).load()
val filteredDF = dfFromCassandra.filter(filterCriteria)
filteredDF.write.format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "<tableNew>", "keyspace" -> "<Keyspace>")).save()
Here filterCriteria represents the transformation/filtering that I do. I am not sure how the Spark Cassandra connector works internally in this case.
This is the confusion that I have:
1: Does Spark load the data from the Cassandra source table into memory, filter it there, and then write it back to the target table? Or
2: Does the Spark Cassandra connector convert the filter criteria into a WHERE clause, load only the relevant data to form the RDD, and write that back to the target table in Cassandra? Or
3: Does the entire operation happen as a CQL operation, where the query is converted to a SQL-like query and executed in Cassandra itself? (I am almost sure that this is not what happens.)
It is either 1. or 2., depending on your filterCriteria. Naturally, Spark itself can't do any CQL filtering, but custom data sources can implement it using predicate pushdown. In the case of the Cassandra connector, it is implemented here, and the answer depends on whether that covers the filterCriteria you use.
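To see which predicates actually reach Cassandra, a minimal sketch (reusing dfFromCassandra from above; "some_column" is just a placeholder name) is to check the physical plan for a PushedFilters entry:

import org.apache.spark.sql.functions.col

// "some_column" is a placeholder; use one of your real columns here.
val pushedDF = dfFromCassandra.filter(col("some_column") === "some_value")

// Filters listed under "PushedFilters" in the scan node are evaluated by
// Cassandra; anything else is applied by Spark after the rows are loaded.
pushedDF.explain(true)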
I have two instances of the same data:
A Hive table called myData, in Parquet format
A Parquet file (not managed by Hive)
Consider the following code:
val myCoolDataSet = spark
.sql("select * from myData")
.select("col1", "col2")
.as[MyDataSet]
.filter(x => x.col1 == "Dummy")
And this one:
val myCoolDataSet = spark
.read
.parquet("path_to_file")
.select("col1", "col2")
.as[MyDataSet]
.filter(x => x.col1 == "Dummy")
My question is: which is better in terms of performance and amount of scanned data?
How does Spark compute it for the two different approaches?
Hive serves as storage for metadata about the Parquet file. Spark can leverage the information contained therein to perform interesting optimizations. Since the backing storage is the same, you'll probably not see much difference, but the optimizations based on the metadata in Hive can give it an edge.
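One practical way to compare them (a sketch, reusing the snippets from the question) is to look at the plans Spark produces; both should end in a Parquet file scan with the same ReadSchema, and any difference comes from metadata Hive provides (e.g. partitions, statistics):

// Both physical plans should show a Parquet scan reading only col1 and col2.
spark.sql("select * from myData").select("col1", "col2").explain(true)
spark.read.parquet("path_to_file").select("col1", "col2").explain(true)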
I am using Spark 2.1.0 and Kafka 0.9.0.
I am trying to push the output of a batch Spark job to Kafka. The job is supposed to run every hour, but not as streaming.
While looking for an answer on the net, I could only find Kafka integration with Spark Streaming and nothing about integration with batch jobs.
Does anyone know if such a thing is feasible?
Thanks
UPDATE:
As mentioned by user8371915, I tried to follow what was done in Writing the output of Batch Queries to Kafka.
I used the Spark shell:
spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0
Here is the simple code that I tried:
import org.apache.spark.sql.functions._ // for to_json, struct and column

val df = Seq(("Rey", "23"), ("John", "44")).toDF("key", "value")
val newdf = df.select(to_json(struct(df.columns.map(column): _*)).alias("value"))
newdf.write.format("kafka").option("kafka.bootstrap.servers", "localhost:9092").option("topic", "alerts").save()
But I get the error:
java.lang.RuntimeException: org.apache.spark.sql.kafka010.KafkaSourceProvider does not allow create table as select.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:497)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
... 50 elided
Any idea what this is related to?
Thanks
tl;dr You are using an outdated Spark version. Writes are enabled in Spark 2.2 and later.
Out of the box you can use the Kafka SQL connector (the same one used with Structured Streaming). Include spark-sql-kafka in your dependencies.
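For example, with sbt (a sketch; the Scala and Spark versions below are assumptions, adjust them to your build):

// build.sbt (sketch): assumes Scala 2.11 and Spark 2.2.0.
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.2.0"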
Convert your data to a DataFrame containing at least a value column of type StringType or BinaryType.
Write data to Kafka:
df
.write
.format("kafka")
.option("kafka.bootstrap.servers", server)
.save()
Follow Structured Streaming docs for details (starting with Writing the output of Batch Queries to Kafka).
If you have a DataFrame and you want to write it to a Kafka topic, you first need to convert the columns into a "value" column that contains the data in JSON format. In Scala it is:
import org.apache.spark.sql.functions._
val kafkaServer: String = "localhost:9092"
val topicSampleName: String = "kafkatopic"
df.select(to_json(struct("*")).as("value"))
.selectExpr("CAST(value AS STRING)")
.write
.format("kafka")
.option("kafka.bootstrap.servers", kafkaServer)
.option("topic", topicSampleName)
.save()
For this error:
java.lang.RuntimeException: org.apache.spark.sql.kafka010.KafkaSourceProvider does not allow create table as select.
at scala.sys.package$.error(package.scala:27)
I think you need to parse the message into a key-value pair. Your DataFrame should have a value column.
Let's say you have a DataFrame with student_id and score columns:
df.show()
>> student_id | score
   1          | 99.00
   2          | 98.00
Then you should modify your DataFrame to:
value
{"student_id":1,"score":99.00}
{"student_id":2,"score":98.00}
To convert, you can use code similar to this:
import org.apache.spark.sql.functions.{col, struct, to_json}
df.select(to_json(struct(col("student_id"), col("score"))).alias("value"))
I have a Cassandra table like the one below and want to get records from Cassandra using some conditions and put them into a Hive table.
Cassandra table (Employee) entries:
Id Name Amount Time
1 abc 1000 2017041801
2 def 1000 2017041802
3 ghi 1000 2017041803
4 jkl 1000 2017041804
5 mno 1000 2017041805
6 pqr 1000 2017041806
7 stu 1000 2017041807
Assume that this table's columns are of datatype string.
We have the same schema in Hive as well.
Now I want to import the Cassandra records between 2017041801 and 2017041804 into Hive or HDFS. In the second run I will pull the incremental records based on the previous run.
I am able to load the Cassandra data into an RDD using the syntax below:
val sc = new SparkContext(conf)
val rdd = sc.cassandraTable("mydb", "Employee")
Now my problem is how to filter these records according to the between condition and persist the filtered records in Hive or at the Hive external table path.
Unfortunately my Time column is not a clustering key in the Cassandra table, so I am not able to use the .where() clause.
I am new to Scala and Spark, so please help me out with this filter logic, or let me know if there is any better way of implementing this logic using DataFrames.
Thanks in advance.
I recommend using the Connector DataFrame API for loading from C*: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/14_data_frames.md
Use the df.filter() call for predicates and the saveAsTable() method to store data in Hive.
Here is a Spark 2.0 example for your case:
val df = spark
.read
.format("org.apache.spark.sql.cassandra")
.options(Map( "table" -> "Employee", "keyspace" -> "mydb" ))
.load()
df.filter("time between 2017041801 and 2017041804")
.write.mode("overwrite").saveAsTable("hivedb.employee");
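You also mentioned pulling incremental records on the second run. A minimal sketch (assuming you keep the last processed Time value from the previous run somewhere, e.g. a small control table or a file; lowerBound and upperBound below are placeholders) is to parameterize the filter and append:

// lowerBound/upperBound are hypothetical values derived from the previous run.
val lowerBound = "2017041801"
val upperBound = "2017041804"

df.filter(s"time > '$lowerBound' and time <= '$upperBound'")
  .write
  .mode("append") // append incremental batches instead of overwriting
  .saveAsTable("hivedb.employee")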
I want to try to load data into a Hive external table using Spark.
Please help me with this: how do I load data into Hive using Scala or Java code?
Thanks in advance
Assuming that the Hive external table is already created using something like:
CREATE EXTERNAL TABLE external_parquet(c1 INT, c2 STRING, c3 TIMESTAMP)
STORED AS PARQUET LOCATION '/user/etl/destination'; -- location is some directory on HDFS
And you have an existing DataFrame / RDD in Spark that you want to write.
import java.sql.Timestamp // java.util.Date has no Spark encoder; Timestamp matches the TIMESTAMP column
import org.apache.spark.sql.SaveMode
import sqlContext.implicits._

val now = new Timestamp(System.currentTimeMillis)
val rdd = sc.parallelize(List((1, "a", now), (2, "b", now), (3, "c", now)))
val df = rdd.toDF("c1", "c2", "c3") // column names for your data frame
df.write.mode(SaveMode.Overwrite).parquet("/user/etl/destination") // If you want to overwrite existing dataset (full reimport from some source)
If you don't want to overwrite existing data from your dataset...
df.write.mode(SaveMode.Append).parquet("/user/etl/destination") // If you want to append to existing dataset (incremental imports)
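To sanity-check that the external table picks up the files you just wrote, a quick sketch (table name taken from the DDL above):

// Read back through the Hive external table to verify the write landed.
sqlContext.sql("select count(*) from external_parquet").show()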
I have tried a similar scenario and had satisfactory results. I worked with Avro data with the schema in JSON. I streamed a Kafka topic with Spark Streaming and persisted the data into HDFS, which is the location of an external table. So every 2 seconds (the streaming duration) the data is stored into HDFS in a separate file, and the Hive external table is appended as well.
Here is a simple code snippet:
val messages = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](ssc, kafkaConf, topicMaps, StorageLevel.MEMORY_ONLY_SER)
messages.foreachRDD(rdd =>
{
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
val dataframe = sqlContext.read.json(rdd.map(_._2))
val myEvent = dataframe.toDF()
import org.apache.spark.sql.SaveMode
myEvent.write.format("parquet").mode(org.apache.spark.sql.SaveMode.Append).save("maprfs:///location/of/hive/external/table")
})
Don't forget to stop the StreamingContext (ssc) at the end of the application. Doing it gracefully is preferable.
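A graceful shutdown looks roughly like this (a sketch; ssc is the StreamingContext from the snippet above):

// Let in-flight batches finish before the Spark context is torn down.
ssc.stop(stopSparkContext = true, stopGracefully = true)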
P.S.: Note that while creating the external table, make sure you create it with a schema identical to the DataFrame schema, because when the JSON gets converted into a DataFrame (which is nothing but a table), the columns are arranged in alphabetical order.
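One way to guard against that (a sketch, reusing myEvent from the snippet above; c1, c2, c3 are placeholder column names for your actual table schema) is to select the columns explicitly in table order before writing:

// Force the column order to match the external table definition before writing.
myEvent.select("c1", "c2", "c3")
  .write
  .format("parquet")
  .mode(org.apache.spark.sql.SaveMode.Append)
  .save("maprfs:///location/of/hive/external/table")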