Apache Giraph: Read in postgres rows as vertices? - postgresql

Is it possible to read in rows from a SQL database as vertices in Apache Giraph? If so, could someone provide a small code example? Thanks.

Related

SQL vs PySpark/Spark SQL

Could someone please help me understand why we need to use PySpark or Spark SQL, etc. if the source and target of my data is the same DB?
For example, let's say I need to load data into table X in a Postgres DB from tables X and Y. Would it not be simpler and faster to just do it in Postgres instead of using Spark SQL or PySpark, etc.?
I understand the need for these solutions if the data comes from multiple sources, but if it is from the same source, do I need to use PySpark?
You can use Spark when you want to do heavy data transformations; distributed processing makes the data easier to load and process.
It totally depends on how large the data is and how you want to transform it.
Using Postgres will be a good idea if the data is relatively small and no transformation is required.
It is not necessary to use PySpark. Both PySpark and Spark SQL have their value in managing/manipulating large volumes of data (a few hundred GBs, TBs, or PBs) in a distributed computing setup. If this is your case, use PySpark; it will be more efficient to load, manipulate, and process/shape the data before inserting it into another table.
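For illustration, a minimal PySpark sketch of that pattern, reading from Postgres over JDBC, transforming, and writing back; the connection URL, table and column names, and credentials are placeholders, and the PostgreSQL JDBC driver jar is assumed to be on the Spark classpath.

    from pyspark.sql import SparkSession, functions as F

    # Placeholder connection details -- replace with your own.
    jdbc_url = "jdbc:postgresql://dbhost:5432/mydb"
    props = {"user": "dbuser", "password": "secret", "driver": "org.postgresql.Driver"}

    spark = SparkSession.builder.appName("postgres-etl").getOrCreate()

    # Read the source tables X and Y over JDBC.
    x = spark.read.jdbc(jdbc_url, "public.x", properties=props)
    y = spark.read.jdbc(jdbc_url, "public.y", properties=props)

    # Example "heavy" transformation: join and aggregate before loading.
    result = (x.join(y, on="id", how="left")
               .groupBy("category")
               .agg(F.count("*").alias("row_count")))

    # Write the result back into a target table in the same database.
    result.write.jdbc(jdbc_url, "public.x_summary", mode="append", properties=props)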
Thank you all for the feedback. I think I will use Glue PySpark if the source and destination are different. Otherwise I will use a Glue Python job with a JDBC connection and have one session do the tasks without bringing the data into dataframes.
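And for the other branch of that decision (same source and target, no dataframes), a sketch of a plain Python job that pushes a single INSERT ... SELECT down to Postgres over one connection; psycopg2 stands in here for a JDBC connection, and the table names and SQL are illustrative only.

    import psycopg2

    # Placeholder connection details -- replace with your own.
    conn = psycopg2.connect(host="dbhost", dbname="mydb",
                            user="dbuser", password="secret")
    try:
        with conn, conn.cursor() as cur:
            # Let Postgres do the work in one statement; no data leaves
            # the database, so no dataframes are involved.
            cur.execute("""
                INSERT INTO x (id, category, amount)
                SELECT y.id, y.category, y.amount
                FROM y
                WHERE y.load_date = CURRENT_DATE
            """)
    finally:
        conn.close()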

Migrate data from NoSQL to an RDBMS

We have existing data in HBase and we want to move to AWS Aurora (MySQL). We need to keep using the existing data, so we have to somehow load the NoSQL data into Aurora.
It's not a very big database, just a few tables.
Are there any best practices/tools for migrating data from NoSQL to a relational DB? I saw a lot of questions on the internet asking about the reverse (RDBMS -> NoSQL), but my requirement is a bit different and I can't find any helpful information.
Can someone please help? Where do I even start?
One simple way to do this without writing too much custom code would be to use the Spark-HBase Connector (SHC) from Hortonworks to read data from an HBase table into a Spark dataframe and then write that dataframe into a MySQL table. The key challenge is getting SHC to work, because in my experience it's extremely version sensitive. So the trick is to correctly coordinate your versions of Spark, HBase, and SHC (and finding the right combination is trickier than you may think).
However, if you manage to get all the dependencies right, then doing the above is a matter of a few lines of code in a Jupyter Notebook or PySpark shell. You could run this on YARN to parallelize the workload in case it's large. It should work. Give it a try.
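As a rough illustration of that flow, assuming you already have a working combination of Spark, HBase, and SHC on the classpath (for example via --packages) plus the MySQL JDBC driver, a PySpark sketch might look like the following; the HBase catalog, column families, and connection details are placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hbase-to-aurora").getOrCreate()

    # SHC catalog describing the HBase table layout -- adjust the namespace,
    # table name, column families, and qualifiers to match your data.
    catalog = """{
      "table": {"namespace": "default", "name": "my_hbase_table"},
      "rowkey": "key",
      "columns": {
        "id":    {"cf": "rowkey", "col": "key",   "type": "string"},
        "name":  {"cf": "cf1",    "col": "name",  "type": "string"},
        "value": {"cf": "cf1",    "col": "value", "type": "string"}
      }
    }"""

    # Read the HBase table into a Spark dataframe via SHC.
    df = (spark.read
          .format("org.apache.spark.sql.execution.datasources.hbase")
          .options(catalog=catalog)
          .load())

    # Write the dataframe into Aurora (MySQL) over JDBC.
    (df.write
       .format("jdbc")
       .option("url", "jdbc:mysql://aurora-endpoint:3306/mydb")
       .option("dbtable", "my_mysql_table")
       .option("user", "dbuser")
       .option("password", "secret")
       .option("driver", "com.mysql.cj.jdbc.Driver")
       .mode("append")
       .save())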

Firebird equivalent to MySQL table partitioning

I'm working with Firebird 2.5. I have used MySQL table partitioning in the past to help optimize very large tables by creating partitions based on year. I would like to do the same thing, if possible, in Firebird but I'm having trouble finding any documentation.
Does anyone know if this is possible and if so, can you please point me toward some documentation?
Firebird does not support table partitioning, which is also why you can't find anything about it in the documentation.
Depending on the exact performance problem you're trying to solve and the queries you use, choosing your indexes well may already solve part of the problem.

Is Hive on Tez with ORC really faster than Spark SQL for ETL?

I have little experience with Hive and am currently learning Spark with Scala. I am curious to know whether Hive on Tez is really faster than Spark SQL. I searched many forums with test results, but they compare older versions of Spark and most of them were written in 2015. The main points are summarized below:
ORC does about the same as Parquet in Spark
The Tez engine gives performance comparable to the Spark engine
Joins are better/faster in Hive than in Spark
I feel like Hortonworks supports Hive more than Spark, and Cloudera the other way around.
Sample links:
link1
link2
link3
Initially I thought Spark would be faster than anything because of its in-memory execution, but after reading some articles I see that the existing Hive is also being improved with new concepts like Tez, ORC, and LLAP.
We are currently running Oracle PL/SQL and are migrating to big data since volumes are increasing. My requirements are a kind of ETL batch processing; the data involved in each weekly batch run is described below. Data will grow considerably soon.
Input/lookup data are in CSV/text format and are loaded into tables
Two input tables with 5 million rows and 30 columns
30 lookup tables used to generate each column of the output table, which contains around 10 million rows and 220 columns
Multiple joins, both inner and left outer, are involved since many lookup tables are used
Please advise which of the methods below I should choose for better performance and readability, and to make it easy to include minor column updates for future production deployment.
Method 1:
Hive on Tez with ORC tables
Python UDFs via the TRANSFORM option
Joins with performance tuning such as map joins
Method 2:
Spark SQL with the Parquet format, converted from text/CSV
Scala for UDFs
Hopefully multiple inner and left outer joins can be performed in Spark
The best way to implement a solution to your problem is as follows.
To load the data into the table, Spark looks like a good option to me. You can read the tables from the Hive metastore, perform the incremental updates using some kind of windowing functions, and register the results in Hive. Since the data is populated from various lookup tables during ingestion, you can write the code programmatically in Scala.
But at the end of the day, there needs to be a query engine that is very easy to use. Since your Spark program registers the tables with Hive, you can use Hive.
Hive supports three execution engines:
Spark
Tez
MapReduce
Tez is mature, while Spark is evolving with various commits from Facebook and the community.
Business users can understand Hive very easily as a query engine, since it is much more established in the industry.
In short, use Spark to process the data for daily processing and register the results with Hive.
Create business users in Hive.
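For concreteness, here is a minimal PySpark sketch of that approach: read tables already registered in the Hive metastore, apply a window function for the incremental update, and register the result back in Hive. The table names, keys, and the load-timestamp column are placeholders; the question leans toward Scala, and the equivalent calls exist in that API as well.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    # Hive support lets Spark read and register tables in the Hive metastore.
    spark = (SparkSession.builder
             .appName("weekly-etl")
             .enableHiveSupport()
             .getOrCreate())

    # Read an input table and a lookup table already registered in Hive.
    input_df = spark.table("staging.weekly_input")
    lookup_df = spark.table("reference.lookup_1")

    # Join against the lookup and keep only the latest record per business key
    # -- a typical incremental update expressed with a window function.
    w = Window.partitionBy("business_key").orderBy(F.col("load_ts").desc())
    result = (input_df
              .join(lookup_df, on="lookup_key", how="left")
              .withColumn("rn", F.row_number().over(w))
              .filter(F.col("rn") == 1)
              .drop("rn"))

    # Register the result with Hive so business users can query it from Hive.
    result.write.mode("overwrite").saveAsTable("mart.weekly_output")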

How to create thousands of dummy records in a table - Oracle

I am learning Oracle on my own with the help of the internet...
Now, for a certain scenario I need thousands of records to be available in my table.
It is not possible to create thousands of records manually...
Are there any tools or other ways to do this in Oracle 10g?
As I said, I am a novice with Oracle, so I need some advice from you SO professionals...
Thanks in advance...
Oracle has a JDBC driver. Download Eclipse, add the driver to the classpath, and write ten lines of code to insert as much dummy data as required; there are tutorials for this online. Even if you have never programmed Java before and would not try it again, it's easy enough to do.
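The same idea also works without Java: below is a minimal sketch in Python using the python-oracledb driver in place of JDBC, generating dummy rows in memory and bulk-inserting them with executemany. The table, columns, row count, and connection details are placeholders, and whether your driver/client version can connect to a database as old as 10g needs to be checked separately.

    import oracledb

    # Placeholder connection details -- replace with your own.
    conn = oracledb.connect(user="scott", password="tiger",
                            dsn="dbhost:1521/ORCLPDB1")

    # Generate ten thousand dummy rows in memory.
    rows = [(i, f"dummy_name_{i}", i % 100) for i in range(1, 10001)]

    with conn:
        cur = conn.cursor()
        # executemany sends the whole batch in far fewer round trips
        # than inserting the rows one by one.
        cur.executemany(
            "INSERT INTO test_data (id, name, score) VALUES (:1, :2, :3)",
            rows,
        )
        conn.commit()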