Update base data while using Spark Streaming

For my scenario, data a is read from Kafka in real time using Spark Streaming. The computation combines it with data A from an RDBMS (MySQL, etc.), which is the accumulated T+1 data, so that, for example, result = A + a.
The next day the accumulated data is B, and the computation becomes result = B + a.
My question is: how can I sum the streaming data with the T+1 data from the RDBMS?
Thank you in advance!

We finally worked it out by using Redis. We import the data from the RDBMS every day, update it in Redis as the stream runs, and export it to the RDBMS result table at the end of each day. Closing this question, thank you.
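For completeness, a minimal sketch of the streaming half of that approach, assuming Scala with the Kafka 0-10 integration and the Jedis client; the hosts, topic, key layout and the "key,delta" record format are made-up assumptions, and the daily RDBMS import/export steps are separate batch jobs that are omitted here:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import redis.clients.jedis.Jedis

object StreamPlusBase {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("rdbms-plus-stream"), Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "kafka:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "stream-sum")

    // Assumed record format: "someKey,delta"
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

    stream.map { record =>
      val Array(key, delta) = record.value().split(",")
      (key, delta.toLong)
    }.foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // One Redis connection per partition. The daily batch job has already
        // loaded the accumulated T+1 base value (A) under the same keys, so
        // incrementing gives result = A + a without hitting the RDBMS per record.
        val jedis = new Jedis("redis-host", 6379)
        records.foreach { case (key, delta) => jedis.incrBy(key, delta) }
        jedis.close()
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}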

Related

Theta Sketch (Yahoo) on SnappyData

How can I store Theta Sketches (Yahoo) in a SnappyData table instead of writing them to files? I generate billions of sketches every day and need to keep many millions of sketches online for real-time queries. Can anyone help? Thanks.
Can't you store these in a column with a blob data type?
If you are writing out from a Spark program and managing the sketches in a DataFrame, I would think df.write.format("column").saveAsTable should work.
Otherwise, serialize the sketch (sketch.compact().toByteArray()) and store it in a blob column using SQL.
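A rough sketch of both options, assuming the Apache DataSketches theta API and a SnappyData cluster; the SnappySession usage, JDBC URL, table names and DDL are assumptions to check against your versions:
import java.sql.DriverManager
import org.apache.datasketches.theta.UpdateSketch // older releases use com.yahoo.sketches.theta
import org.apache.spark.sql.{SnappySession, SparkSession}

val spark = SparkSession.builder().appName("sketch-store").getOrCreate()
val snappy = new SnappySession(spark.sparkContext)

// Build a sketch and serialize it to a byte array
val sketch = UpdateSketch.builder().build()
(1L to 1000000L).foreach(i => sketch.update(i))
val bytes: Array[Byte] = sketch.compact().toByteArray()

// Option 1: manage the sketches in a DataFrame and save to a column table
val df = snappy.createDataFrame(Seq(("day-1", bytes))).toDF("sketch_key", "sketch_bytes")
df.write.format("column").saveAsTable("sketches")

// Option 2: insert the serialized bytes into a blob column using SQL
val conn = DriverManager.getConnection("jdbc:snappydata://localhost:1527/")
conn.createStatement().execute(
  "CREATE TABLE IF NOT EXISTS sketches_blob (sketch_key VARCHAR(64), sketch_bytes BLOB) USING column")
val ps = conn.prepareStatement("INSERT INTO sketches_blob VALUES (?, ?)")
ps.setString(1, "day-1")
ps.setBytes(2, bytes)
ps.executeUpdate()
ps.close()
conn.close()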

How to improve performance of writing data to MongoDB using Spark?

I use PySpark to run a heavy iterative computing job that writes data to MongoDB. Each iteration may involve 0.01 to 1 billion records in an RDD or DataFrame to be computed (this part is simple and relatively fast) and about 100,000 records to be written to MongoDB. The problem is that the procedure seems to get stuck in the MongoSpark job on every iteration (see image below). I don't know what is going on in this job. The computing part appears to have already finished (see the job "runJob at PythonRDD.scala"), yet MongoDB receives no data for most of this job's duration. By my estimate, writing 100,000 records directly to MongoDB should take very little time.
Can you explain what takes most of the time in this job, and how to improve its performance?
Thanks for your help.

How to make MapReduce work with HDFS

This might sound like a stupid question.
I might write an MR job that takes its input from and writes its output to HDFS locations, and then I really don't need to worry about the parallel computing power of Hadoop/MR. (Please correct me if I am wrong here.)
However, if my input is not an HDFS location, say I am taking MongoDB data as input - mongodb://localhost:27017/mongo_hadoop.messages - running my mappers and reducers, and storing the data back to MongoDB, how does HDFS come into the picture? I mean, how can I be sure that a 1 GB (or any size) file is first distributed on HDFS and then computed on in parallel?
Will this direct URI not distribute the data, so that I need to take the BSON file instead, load it onto HDFS and give the HDFS path as input to MR, or is the framework smart enough to do this by itself?
I am sorry if the above question is too stupid or doesn't make any sense. I am really new to big data but very excited to dive into this domain.
Thanks.
You are describing DBInputFormat. This is an input format that reads its splits from an external database. HDFS only gets involved in setting up the job, not in the actual input. There is also a DBOutputFormat. With an input format like DBInputFormat the splits are logical, e.g. key ranges.
Read Database Access with Apache Hadoop for a detailed explanation.
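To make that concrete, here is a minimal sketch (Scala against the Hadoop mapreduce API) of wiring DBInputFormat into a map-only job; the JDBC driver, connection URL, table/column names and the MessageWritable record class are illustrative assumptions:
import java.io.{DataInput, DataOutput}
import java.sql.{PreparedStatement, ResultSet}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{LongWritable, Text, Writable}
import org.apache.hadoop.mapreduce.{Job, Mapper}
import org.apache.hadoop.mapreduce.lib.db.{DBConfiguration, DBInputFormat, DBWritable}
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

// One database row; DBWritable reads it from JDBC, Writable moves it between tasks
class MessageWritable extends Writable with DBWritable {
  var id: Long = 0L
  var body: String = ""
  override def readFields(rs: ResultSet): Unit = { id = rs.getLong("id"); body = rs.getString("body") }
  override def write(ps: PreparedStatement): Unit = { ps.setLong(1, id); ps.setString(2, body) }
  override def readFields(in: DataInput): Unit = { id = in.readLong(); body = in.readUTF() }
  override def write(out: DataOutput): Unit = { out.writeLong(id); out.writeUTF(body) }
}

// Map-only job: emit each row keyed by its id
class MessageMapper extends Mapper[LongWritable, MessageWritable, LongWritable, Text] {
  override def map(key: LongWritable, value: MessageWritable,
                   context: Mapper[LongWritable, MessageWritable, LongWritable, Text]#Context): Unit =
    context.write(new LongWritable(value.id), new Text(value.body))
}

object DbInputJob {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    // JDBC driver class, connection URL, user, password
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
      "jdbc:mysql://localhost:3306/mydb", "user", "password")

    val job = Job.getInstance(conf, "db-input-example")
    job.setJarByClass(classOf[MessageMapper])
    job.setMapperClass(classOf[MessageMapper])
    job.setNumReduceTasks(0)
    job.setOutputKeyClass(classOf[LongWritable])
    job.setOutputValueClass(classOf[Text])

    // Splits are logical key ranges over the ordered table, not HDFS blocks;
    // setInput also registers DBInputFormat as the job's input format.
    DBInputFormat.setInput(job, classOf[MessageWritable],
      "messages",    // table
      null,          // WHERE conditions
      "id",          // ORDER BY column used to form the splits
      "id", "body")  // columns to select

    FileOutputFormat.setOutputPath(job, new Path("/tmp/db-input-example"))
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}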
Sorry, I am not sure about MongoDB.
If you just want to know how splitting happens when the data source is a table, here is my answer for MapReduce working with HBase.
We use TableInputFormat to read an HBase table in a MapReduce job.
From http://hbase.apache.org/book.html#hbase.mapreduce.classpath:
7.7. Map-Task Splitting
7.7.1. The Default HBase MapReduce Splitter
When TableInputFormat is used to source an HBase table in a MapReduce job, its splitter will make a map task for each region of the table. Thus, if there are 100 regions in the table, there will be 100 map-tasks for the job - regardless of how many column families are selected in the Scan.
7.7.2. Custom Splitters
For those interested in implementing custom splitters, see the method getSplits in TableInputFormatBase. That is where the logic for map-task assignment resides.
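For illustration only (not from the quoted book), a minimal sketch of sourcing an HBase table this way via TableMapReduceUtil, which sets up TableInputFormat under the covers; the table name and scan settings are assumptions:
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Result, Scan}
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.{IdentityTableMapper, TableMapReduceUtil}
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat

val job = Job.getInstance(HBaseConfiguration.create(), "hbase-table-source")

val scan = new Scan()
scan.setCaching(500)        // larger scanner caching helps MapReduce throughput
scan.setCacheBlocks(false)  // don't fill the block cache from a full table scan

// TableInputFormat is wired in behind the scenes; one map task per region of "my_table"
TableMapReduceUtil.initTableMapperJob(
  "my_table", scan,
  classOf[IdentityTableMapper],
  classOf[ImmutableBytesWritable],  // map output key
  classOf[Result],                  // map output value
  job)
job.setNumReduceTasks(0)
job.setOutputFormatClass(classOf[NullOutputFormat[ImmutableBytesWritable, Result]])
job.waitForCompletion(true)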
This is a good question, not stupid.
1.
"mongodb://localhost:27017/mongo_hadoop.messages and running my mappers and reducers and storing the data back to mongodb, how will HDFS come into picture. "
In this situation you needn't consider HDFS at all. You don't have to do anything related to HDFS; it is just like writing a multi-threaded application where each thread writes data to MongoDB.
In fact, HDFS is independent of MapReduce, and MapReduce is likewise independent of HDFS, so you can use them separately or together as you wish.
2.
If you want to use a database as MapReduce input/output, you should consider DBInputFormat, but that's another question.
Currently Hadoop's DBInputFormat only supports JDBC. I'm not sure whether there is a MongoDB version of DBInputFormat; you may be able to find one or implement it yourself.

Apache Spark Streaming - cache dataset for joining

I'm considering using Apache Spark Streaming for some real-time work, but I'm not sure how to cache a dataset for use in a join/lookup.
The main input will be JSON records coming from Kafka that contain an id; I want to translate that id into a name using a lookup dataset. The lookup dataset resides in MongoDB, but I want to cache it inside the Spark process because it changes very rarely (once every couple of hours). I don't want to hit Mongo for every input record or reload all the records in every Spark batch, but I do need to be able to refresh the data held in Spark periodically (e.g. every 2 hours).
What is the best way to do this?
Thanks.
I've thought long and hard about this myself. In particular, I've wondered whether it is possible to actually implement a database of sorts in Spark.
Well, the answer is kind of yes. You want a program that caches the main dataset in memory and then, every couple of hours, does an optimized join-with-tiny to update it. Apparently Spark will have a method that does a join-with-tiny (maybe it's already out in 1.0.0 - my stack is stuck on 0.9.0 until CDH 5.1.0 is out).
Anyway, you can implement a join-with-tiny manually by taking the periodic bi-hourly dataset, turning it into a HashMap and broadcasting it as a broadcast variable. This means the HashMap will be copied only once per node (compare this with just referencing the map, which would be copied once per task - a much greater cost). You then take your main dataset and fold in the new records using the broadcast map. You can then periodically (e.g. nightly) save it to HDFS or something.
So here is some scruffy pseudo code to elucidate:
var mainDataSet: RDD[(KeyType, DataType)] = sc.textFile("/path/to/main/dataset")
  .map(parseJsonAndGetTheKey)
  .cache()

everyTwoHoursDo {
  val newData: Map[KeyType, DataType] = sc.textFile("/path/to/last/two/hours")
    .map(parseJsonAndGetTheKey)
    .collect()
    .toMap

  val newDataBC = sc.broadcast(newData)  // shipped once per node, not once per task

  val mainDataSetNew = mainDataSet.map { case (key, oldValue) =>
    (key, newDataBC.value.get(key)
      .map(newValue => update(oldValue, newValue))
      .getOrElse(oldValue))
  }.cache()

  mainDataSetNew.someAction()  // force evaluation before dropping the old cache
  mainDataSet.unpersist()
  mainDataSet = mainDataSetNew
  newDataBC.unpersist()
}
I've also thought that you could be very clever and use a custom partitioner with your own custom index, and then use a custom way of updating the partitions so that each partition holds a sub-map. Then you could skip updating partitions that you know won't hold any keys that occur in newData, and also optimize the updating process.
I personally think this is a really cool idea, and the nice thing is that your dataset is already in memory, ready for some analysis / machine learning. The downside is that you're kind of reinventing the wheel. It might be a better idea to look at using Cassandra, as Datastax is partnering with Databricks (the people who make Spark) and might end up supporting something like this out of the box.
Further reading:
http://spark.apache.org/docs/latest/programming-guide.html#broadcast-variables
http://www.datastax.com/2014/06/datastax-unveils-dse-45-the-future-of-the-distributed-database-management-system
Here is a fairly simple workflow (a rough code sketch follows the steps):
For each batch of data:
Convert the batch of JSON data to a DataFrame (b_df).
Read the lookup dataset from MongoDB as a DataFrame (m_df), then cache it with m_df.cache().
Join the data using b_df.join(m_df, "join_field")
Perform your required aggregation and then write to a data source.
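A rough sketch of those steps, assuming the mongo-spark-connector's DataFrame reader and made-up URI, column and sink names; periodically unpersisting and re-reading m_df covers the bi-hourly refresh mentioned in the question:
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().appName("lookup-join").getOrCreate()

// 2. Read the lookup dataset from MongoDB once and cache it
var m_df: DataFrame = spark.read
  .format("mongo") // short name registered by the mongo-spark-connector (2.x/3.x)
  .option("uri", "mongodb://localhost:27017/db.lookup")
  .load()
  .cache()

// 1./3./4. Called for each micro-batch already converted to a DataFrame (b_df)
def processBatch(b_df: DataFrame): Unit = {
  val joined = b_df.join(m_df, "join_field")   // 3. translate id -> name
  joined.groupBy("name").count()               // 4. required aggregation
    .write.mode("append").parquet("/path/to/output")
}

// Every couple of hours: drop the cached copy and re-read the lookup data
def refreshLookup(): Unit = {
  m_df.unpersist()
  m_df = spark.read.format("mongo")
    .option("uri", "mongodb://localhost:27017/db.lookup")
    .load()
    .cache()
}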

NoSQL read- and write-intensive big data table

I have 10 different queries and a total of 40 columns.
I am looking for an available big data NoSQL database that can handle read- and write-intensive jobs (multiple queries with an SLA).
I tried HBase, but it is fast only for row-key (scan) searches; for other queries (not running on the row key) the response time is quite high. Duplicating the data with different row keys is the only option for quick responses, but creating 10 different tables for 10 queries is not a good idea.
Please suggest alternatives.
Have you tried Druid? It is inspired by Dremel, the precursor of Google BigQuery.
From the documentation:
Druid is a good fit for products that require real-time data ingestion of a single, large data stream. Especially if you are targeting no-downtime operation and are building your product on top of a time-oriented summarization of the incoming data stream. When talking about query speed it is important to clarify what "fast" means: with Druid it is entirely within the realm of possibility (we have done it) to achieve queries that run in less than a second across trillions of rows of data.