Mahout data model for an Amazon Redshift recommendation engine

How would I build a recommendation engine with Amazon Redshift as a data source? Is there any Mahout data model for Amazon Redshift or S3?

Mahout uses Hadoop to read data, except for a few supported NoSQL and JDBC databases. Hadoop in turn can use S3. You'd have to configure Hadoop to use the S3 filesystem, and then Mahout should work fine reading from and writing to S3.
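As a sketch, the Hadoop side of that configuration might look like the following core-site.xml fragment (the property names vary between the s3, s3n, and s3a filesystems depending on your Hadoop version, and the values here are placeholders):

```xml
<!-- core-site.xml: S3 filesystem credentials (placeholder values) -->
<configuration>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_AWS_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_AWS_SECRET_KEY</value>
  </property>
</configuration>
```

With that in place, Mahout jobs can be pointed at paths like s3a://my-bucket/input the same way as HDFS paths.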
Redshift is a data-warehousing solution based on Postgres that supports JDBC/ODBC. Mahout 0.9 supports data models stored in JDBC-compliant stores, so, though I haven't tried it, Redshift should be supported.
The Mahout v1 recommenders run on Spark, and input and output are text by default. All I/O goes through Hadoop, so S3 data is fine for input; but the models created are also text and need to be indexed and queried with a search engine like Solr or Elasticsearch. You can fairly easily write a reader to get data from any other store (such as Redshift), but you probably don't want to save the models in a data warehouse, since they need to be indexed by Solr and should have very fast, search-engine-style retrieval.

Related

What's the best way to read/write from/to Redshift with Spark in Scala, given that the spark-redshift lib is no longer publicly supported by Databricks?

I have a Spark project in Scala and I want to use Redshift as my data warehouse. I found that the spark-redshift repo exists, but Databricks made it private a couple of years ago and no longer supports it publicly.
What's the best option right now for working with Amazon Redshift from Spark (Scala)?
This is a partial answer, as I have only used Spark-to-Redshift writes in a real-world use case and never benchmarked reading from Redshift with Spark.
When it comes to writing from Spark to Redshift, by far the most performant way I could find was to write Parquet to S3 and then use the Redshift COPY command to load the data. Writing to Redshift through JDBC also works, but it is several orders of magnitude slower than the former method. Other storage formats could be tried as well, but I would be surprised if any row-oriented format could beat Parquet, since Redshift internally stores data in a columnar format. Another columnar format supported by both Spark and Redshift is ORC.
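For illustration, the COPY step after writing Parquet to S3 might look like this (schema, table, bucket, and IAM role are placeholders):

```sql
-- Load Parquet files written by Spark into a Redshift table (names are placeholders)
COPY my_schema.events
FROM 's3://my-bucket/spark-output/events/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS PARQUET;
```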
I never came across a use case of reading large amounts of data from Redshift with Spark, since it feels more natural to load all the data into Redshift and use it for joins and aggregations; it is probably not cost-efficient to use Redshift merely as bulk storage while doing joins and aggregations in another engine. For reading small amounts of data, JDBC works fine. For large reads, my best guess is the UNLOAD command and S3.
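The UNLOAD route for large reads might look roughly like this (query, bucket, and IAM role are placeholders); Spark can then read the resulting files from S3 in parallel:

```sql
-- Export a large query result to S3 as Parquet for Spark to read (names are placeholders)
UNLOAD ('SELECT * FROM my_schema.events WHERE event_date >= ''2020-01-01''')
TO 's3://my-bucket/unload/events_'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS PARQUET;
```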

Benefits and drawbacks of using Hive Warehouse Connector over Hadoop File Location

Normally we use the Hadoop file location of the Hive table to access data from our Spark ETLs. Are there any benefits to using the Hive Warehouse Connector instead of our current approach? And are there any drawbacks to using the Hive Warehouse Connector for ETLs?
I cannot think of a drawback.
Hive stores the schema and provides faster predicate pushdown. If you read from the filesystem directly, you will often need to define the schema on your own.
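To illustrate the difference, here is a minimal PySpark sketch (database, table, column, and path names are all hypothetical); reading through the metastore needs no schema, while reading the file location directly often does:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Via the metastore: the schema comes from Hive, and filters on partition
# columns can be pruned before any files are read.
df_hive = spark.table("sales.orders").filter("order_date = '2021-01-01'")

# Via the file location: you must know the layout and, for many formats,
# define the schema yourself.
schema = StructType([
    StructField("order_id", LongType()),
    StructField("order_date", StringType()),
])
df_files = spark.read.schema(schema).parquet("hdfs:///warehouse/sales.db/orders")
```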

What value does Postgres adapter for spark/hadoop add?

I am not an HDFS nerd; coming from a traditional RDBMS background, I am just scratching the surface of newer technologies like Hadoop and Spark. I was looking at my options for SQL querying on Spark data.
What I realized is that Spark inherently supports SQL querying. Then I came across this link:
https://www.enterprisedb.com/news/enterprisedb-announces-new-apache-spark-connecter-speed-postgres-big-data-processing
I am trying to make some sense of it. If I am understanding it correctly, data is still stored in HDFS, but the Postgres connector is used as a query engine? If so, in the presence of an existing querying framework, what new value does this Postgres connector add?
Or am I misunderstanding what it actually does?
I think you are misunderstanding.
They allude to the concept of a Foreign Data Wrapper.
"... They allow PostgreSQL queries to include structured or unstructured data, from multiple sources such as Postgres and NoSQL databases, as well as HDFS, as if they were in a single database. ..."
This sounds to me like the Oracle Big Data Appliance approach. From Postgres you can look at the world of data logically as though it were all Postgres, but under the hood the HDFS data is accessed by the Spark query engine, which is invoked by the Postgres query engine; the likely premise is that you need not concern yourself with that. We are in the domain of virtualization: you can combine Big Data and Postgres data on the fly.
There is no such thing as "Spark data", as Spark is not a database as such, barring some Spark-formatted data that is not compatible with Hive.
The value will invariably be stated as: you need not learn Big Data tooling and the like. Whether that is true remains to be seen.

Is it possible to store MongoDB data on HDFS?

In my project, I am wrestling with the choice of a data storage method. The project ingests streaming data in JSON format, for which the most suitable database is MongoDB, and I have to analyze the data with Hadoop or Spark.
So my conflict starts here: can I store MongoDB collections in HDFS, or must the MongoDB and HDFS storage units be different? This is an important issue for my decision. Must I run Hadoop and MongoDB on the same disk units or on separate ones?
They need to be different units, since the storage methods, the security-policy implementations, and the storage mechanisms themselves are different.

What is the common practice for storing user data and analyzing it with Spark/Hadoop?

I'm new to Spark. I'm a web developer, not familiar with big data.
Say I have a portal website where users' behavior and actions are stored in 5 sharded MongoDB clusters.
How do I analyze that data with Spark?
Or can Spark get the data directly from any database (Postgres/MongoDB/MySQL/...)?
I ask because most websites use a relational DB as the back-end database.
Should I export all the data in the website's databases into HBase?
I store all the user logs in PostgreSQL; is it practical to export the data into HBase or another database Spark prefers?
It seems that would create a lot of duplicated data.
Does my big-data setup need other frameworks besides Spark?
For analyzing the data in the website's databases, I don't see why I would need HDFS, Mesos, and so on.
How do I make Spark workers able to access the data in PostgreSQL databases?
I only know how to read data from text files, and I have seen some code that loads data from hdfs://.
But I don't have an HDFS cluster now; should I set one up for this purpose?
Spark is a distributed compute engine, so it expects files to be accessible from all nodes. Here are some choices you might consider:
There seems to be a Spark–MongoDB connector. This post explains how to get it working.
Export the data out of MongoDB into Hadoop, and then use Spark to process the files. For this, you need to have a Hadoop cluster running.
If you are on Amazon, you can put the files in the S3 store and access them from Spark.
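On the PostgreSQL part of the question: Spark can also read directly over JDBC without any HDFS, provided the PostgreSQL JDBC driver jar is on the workers' classpath. A minimal PySpark sketch (host, database, table, credentials, and partition bounds are all placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pg-read").getOrCreate()

# Each Spark worker opens its own JDBC connection; the partitioning
# options below split the table into parallel reads by the id column.
users = (spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/mydb")  # placeholder host/db
    .option("dbtable", "user_logs")                        # placeholder table
    .option("user", "reader")
    .option("password", "secret")
    .option("partitionColumn", "id")
    .option("lowerBound", "1")
    .option("upperBound", "1000000")
    .option("numPartitions", "8")
    .load())

users.groupBy("action").count().show()
```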