As part of my final thesis, I must transform a relational database into a graph-oriented database, specifically a PostgreSQL database into a Neo4j embedded database. The problem is how to do it. In Rik Van Bruggen's book Learning Neo4j, he mentions a data import process using ETL activities with Talend and MuleSoft tools, but their official sites have no documentation about how to do it, neither help pages nor examples. Apart from these tools, what other ways can I use to transform this information without writing my own code?
Some modeling advice:
A well-normalized relational model, which was not yet denormalized for performance reasons, can be translated into the equivalent graph model.
Graph model shapes are mostly driven by use-cases, so there will be opportunity for optimization and model evolution afterwards.
A good, normalized Entity-Relationship diagram often already represents a decent graph model.
So if you still have the original ER diagram available, try to use it as a guide.
Here are some tips that help you with the transformation:
Each entity table is represented by a label on nodes
Each row in a table is a node
Columns on those tables become node properties.
Remove technical primary keys, keep business primary keys
Add unique constraints for business primary keys, add indexes for frequent lookup attributes
Replace foreign keys with relationships to the other table, remove them afterwards
Remove data with default values, no need to store those
Data in tables that is denormalized and duplicated might have to be pulled out into separate nodes to get a cleaner model.
Indexed column names (like email1, email2, email3) might indicate an array property
JOIN tables are transformed into relationships, columns on those tables become relationship properties
It is important to have an understanding of the graph model before you start to import data, then it just becomes the task of hydrating that model.
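As a loose illustration of those tips, here is a minimal sketch using the official Neo4j Python driver; the labels, property names and current Cypher constraint syntax are my own assumptions for the example, not something prescribed by the model itself:

# Hypothetical mapping of a "customer" table and an "order_items" JOIN table
# onto the graph model; adjust names to your own schema.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # unique constraint on the business primary key (the technical key is dropped)
    session.run("CREATE CONSTRAINT customer_number IF NOT EXISTS "
                "FOR (c:Customer) REQUIRE c.customerNumber IS UNIQUE")
    # index for a frequently looked-up attribute
    session.run("CREATE INDEX customer_name IF NOT EXISTS "
                "FOR (c:Customer) ON (c.name)")
    # a row becomes a node, its columns become properties
    session.run("MERGE (c:Customer {customerNumber: $number}) "
                "SET c.name = $name, c.city = $city",
                number="C-1001", name="Acme Ltd", city="Berlin")
    # a JOIN-table row becomes a relationship; its extra columns become
    # relationship properties, and the foreign key columns disappear
    session.run("MATCH (c:Customer {customerNumber: $number}) "
                "MATCH (p:Product {sku: $sku}) "
                "MERGE (c)-[r:ORDERED]->(p) SET r.quantity = $qty",
                number="C-1001", sku="SKU-42", qty=3)

driver.close()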
LOAD CSV might be your best option, but of course it means outputting a CSV first (a small sketch follows the links below). Here are some great resources:
http://neo4j.com/docs/stable/query-load-csv.html
http://watch.neo4j.org/video/112447027
http://jexp.de/blog/2014/06/load-csv-into-neo4j-quickly-and-successfully/
http://jexp.de/blog/2014/10/load-cvs-with-success/
http://www.markhneedham.com/blog/2014/10/23/neo4j-cypher-avoiding-the-eager/
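To give a feel for it, a minimal LOAD CSV sketch run through the Python driver; the file names, headers and labels are assumptions, and the files are assumed to live in Neo4j's import directory:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# persons.csv is assumed to have "id" and "name" columns;
# friendships.csv is assumed to have "from_id" and "to_id" columns.
load_persons = """
LOAD CSV WITH HEADERS FROM 'file:///persons.csv' AS row
MERGE (p:Person {personId: row.id})
SET p.name = row.name
"""

load_friendships = """
LOAD CSV WITH HEADERS FROM 'file:///friendships.csv' AS row
MATCH (a:Person {personId: row.from_id})
MATCH (b:Person {personId: row.to_id})
MERGE (a)-[:FRIEND_OF]->(b)
"""

with driver.session() as session:
    session.run(load_persons)
    session.run(load_friendships)

driver.close()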
I've also written a Ruby gem which lets you write a little Ruby code to import data from various sources. It's called neo4apis. You can look at the neo4apis-twitter gem to get an idea of how it works:
https://github.com/neo4jrb/neo4apis-twitter/
https://github.com/neo4jrb/neo4apis-twitter/blob/master/lib/neo4apis/twitter.rb
I've actually been wanting to implement a neo4apis-activerecord gem to make it easy to import from SQL with ActiveRecord.
You cannot directly export data from a relational database and import it into Neo4j, because they are two different database structures.
Relational Database -
A relational database is a set of tables containing data fitted into predefined categories. Each table (which is sometimes called a relation) contains one or more data categories in columns. Each row contains a unique instance of data for the categories defined by the columns.
Graph-oriented database -
A graph database is essentially a collection of nodes and edges. Each node represents an entity (such as a person or business) and each edge represents a connection or relationship between two nodes.
Solution to your problem -
First, you need to design the Neo4j data structure, e.g. which nodes you need and what the relationships between the nodes will be.
After that, you create a script in your application language to fetch data from the relational database and insert it into Neo4j.
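A rough sketch of what such a script could look like in Python, reading from PostgreSQL with psycopg2 and writing to Neo4j with the official driver; connection strings, table names and labels are placeholders I made up:

import psycopg2
from neo4j import GraphDatabase

pg = psycopg2.connect("dbname=mydb user=me password=secret host=localhost")
neo = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with pg.cursor() as cur, neo.session() as session:
    # each row becomes a node, columns become properties
    cur.execute("SELECT id, name, email FROM users")
    for user_id, name, email in cur.fetchall():
        session.run("MERGE (u:User {userId: $id}) SET u.name = $name, u.email = $email",
                    id=user_id, name=name, email=email)

    # each foreign key becomes a relationship
    # (assuming Company nodes were loaded the same way)
    cur.execute("SELECT id, company_id FROM users WHERE company_id IS NOT NULL")
    for user_id, company_id in cur.fetchall():
        session.run("MATCH (u:User {userId: $uid}) "
                    "MATCH (c:Company {companyId: $cid}) "
                    "MERGE (u)-[:WORKS_AT]->(c)",
                    uid=user_id, cid=company_id)

pg.close()
neo.close()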
LOAD CSV is an option for import/export (backup) functionality with the graph database; you cannot directly export/import data from a relational DB to a graph DB.
I am developing a multi-step data pipeline that should optimize the following process:
1) Extract data from a NoSQL database (MongoDB).
2) Transform and load the data into a relational (PostgreSQL) database.
3) Build a data warehouse using the Postgres database.
I have manually coded a script to handle steps 1) and 2), which is an intermediate ETL pipeline. Now my goal is to build the data warehouse using the Postgres database, but I ran into a few doubts regarding the DW design. Below is the dimensional model for the relational database:
There are 2 main tables, Occurrence and Canonical, from which a set of others inherit (drawn in red and blue, respectively). Note that there are 2 child data types, ObserverNodeOccurrence and CanonicalObserverNode, that have an extra many-to-many relationship with another table.
I did some research on how inheritance should be implemented in a data warehouse and figured the best practice would be to merge the family of data types (super and child tables) into a single table. Doing this would imply adding extra attributes and a lot of null values. My new dimensional model would look like the following:
Question 1: Do you think this is the best approach to address this problem? If not, what would be?
Question 2: Any software recommendations for on-premise data warehouses? (on-premise is a must since it contains sensitive data)
Usually having fewer tables to join and denormalizing data will improve query performance for data warehouse queries, so they are often considered a good thing.
This would suggest your second table design. NULL values don't occupy any space in a PostgreSQL table, so you need not worry about that.
As described here, there are three options to implement inheritance in a relational database.
IMO the only practicable way to use in a data warehouse is the Table-Per-Hierarchy option, which merges all entities into one table.
The reason is not only the performance gain from saving the joins. In a data warehouse the historical view of the data is often important. Think: how would you model a change of subtype in some entity?
An important thing is to define a discriminator column which uniquely defines the source entity.
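A rough sketch of what such a merged Table-Per-Hierarchy table could look like, with invented entity and column names loosely based on the question; the DDL is executed via psycopg2 just to keep the example runnable:

import psycopg2

# One table for the whole hierarchy: shared columns, a discriminator column,
# and nullable subtype-specific columns (NULLs cost no storage in PostgreSQL).
ddl = """
CREATE TABLE occurrence (
    occurrence_id    bigint PRIMARY KEY,
    occurrence_type  text NOT NULL,          -- discriminator: which subtype this row came from
    observed_at      timestamptz NOT NULL,   -- attribute shared by all subtypes
    observer_node_id bigint,                 -- only filled for ObserverNodeOccurrence rows
    sensor_reading   numeric                 -- only filled for some other hypothetical subtype
)
"""

with psycopg2.connect("dbname=dw user=me host=localhost") as conn:
    with conn.cursor() as cur:
        cur.execute(ddl)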
I am trying to pull data from an Oracle RDBMS and move it to OrientDB using Teleporter. My relational database has multiple columns and maintains E-R relationships. I have two questions:
My objective is to get only a few columns (those that hold unique identity and foreign key relations) and not all the bulky column data. Is there any configuration I could use to do so? Today, include and exclude only work at the full DB table level.
Another objective is to keep my graph DB in sync with the selected table-column data I pushed in the previous run. I would also want additional data that arrives in the RDBMS to show up in my graph DB.
You can enjoy this feature, and others, in OrientDB 3.0 through a JSON configuration, but there is no documentation about it yet. Currently, in 2.2.x you can just configure relationships and edges as described here:
http://orientdb.com/docs/2.2.x/Teleporter-Import-Configuration.html
In the next 2 weeks all these features will also be available in 2.2.x and well documented, in order to make the comprehension of the config very easy.
At the moment you can adopt the following workaround:
Import all the columns for each table into the corresponding vertex as usual.
Drop the properties you are not interested in after each sync. You could write a script that runs the Teleporter execution and then deletes the properties you don't care about from the schema.
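A rough sketch of such a script; the Teleporter script flags, the pyorient calls and the class/property names are all assumptions on my side, so adjust them to your install:

import subprocess
import pyorient

# 1) run the usual Teleporter sync (script name and flags depend on your setup)
subprocess.run(["./oteleporter.sh",
                "-jdriver", "oracle",
                "-jurl", "jdbc:oracle:thin:@localhost:1521:orcl",
                "-juser", "scott", "-jpasswd", "tiger",
                "-ourl", "plocal:/orientdb/databases/mygraph"],
               check=True)

# 2) strip the properties you don't care about from the imported vertices
client = pyorient.OrientDB("localhost", 2424)
client.connect("root", "rootpwd")
client.db_open("mygraph", "root", "rootpwd")

for prop in ["bulky_column_1", "bulky_column_2"]:
    client.command("UPDATE Employee REMOVE %s" % prop)    # remove the stored values
    client.command("DROP PROPERTY Employee.%s" % prop)    # remove it from the schema

client.db_close()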
I will update here when the alignment with 3.0 and the documentation are complete.
I have only looked at Azure Table storage, but this may well apply to other NoSQL databases as well.
Say I have an entity consisting of the following properties:
First name - Last name - Hometown - Country
In Azure Table storage there is no concept of relations, so if I have thousands of records and want to change every entity that has 'Canada' in it to some other country, it may have to go through thousands of records to find the entities with 'Canada' and change them.
I wonder: is the benefit of NoSQL only for data that is static and not changed after you have written it? Or can this problem be solved for NoSQL stores?
In the case of NoSQL data stores the advantages are different from those of SQL databases. Things like scalability or availability can be better in a NoSQL database like Azure Table, but there are tradeoffs. For example, you are generally unable to efficiently query any part of a record, only the keys.
In designing your schema for Azure Table you have to consider the use cases of your data layer and let that dictate the schema. In this example, if I thought I would have to update all records in a given country, I would make that part of the partition or row key. That way your query to get all data in a given country is fast and can be updated quickly.
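For example, if the country were the PartitionKey, a sketch along those lines with the azure-data-tables package (table name, connection string and values are assumptions) could look like this; note that the PartitionKey itself cannot be changed in place, so each entity is re-inserted under the new key:

from azure.data.tables import TableClient

table = TableClient.from_connection_string(conn_str="<connection-string>",
                                            table_name="people")

# all "Canada" entities live in one partition, so no full-table scan is needed
for entity in list(table.query_entities("PartitionKey eq 'Canada'")):
    new_entity = dict(entity)
    new_entity["PartitionKey"] = "Sweden"   # the new country value
    table.create_entity(new_entity)
    table.delete_entity(partition_key=entity["PartitionKey"],
                        row_key=entity["RowKey"])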
I'm just exploring the whole NoSQL concept. I've been playing around with Amazon's DynamoDB and I really like the concept. That said, I am not quite sure how the data should be separated. By this I mean: should I create a new table for related data features, as you would in a relational database, or do I use a single table to store all the application's data?
As an example, in a relational DB I might have a table called users and a table called users_details, and I would then create a 1:1 relationship between the two tables. With the NoSQL concept I could theoretically create two tables as well, but it strikes me as more efficient to have all the data in a single table.
If that is the case then when do you stop? Is the idea to store all the application data for a given user in a single table?
First ask yourself: why did I separate the users from the user details in the RDBMS in the first place?
On a more general note, when talking about NoSQL you really shouldn't look at relationships between tables. You shouldn't think about "joining" information from different tables but rather prepare your data in a way that can be retrieved optimally.
In the user/user_details scenario, put all information in a users table, and query what you want by specifying the attributes to get.
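As a small illustration with boto3, a single users table can still hand back only the attributes a given call needs via a projection expression (table, key and attribute names here are invented):

import boto3

dynamodb = boto3.resource("dynamodb")
users = dynamodb.Table("users")

# fetch only the "details" part of the item instead of the whole record
response = users.get_item(
    Key={"user_id": "42"},
    ProjectionExpression="first_name, last_name, hometown",
)
print(response.get("Item"))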
I understand that Core Data is not a relational database, but I need to understand how it can be used to support a client/server model where the server uses a Rails, ActiveRecord, MySQL setup.
My app is pulling records from the server using JSON and I am mapping the relationships using Core Data.
The foreign key in the SQLite database is showing the PK field of the related table even though I have set the User Info key/value of primaryAttributeKey => id. (I can't remember where I saw this mentioned.)
Is there any way to set up the models so they will use my id as the PK, so that it will clean up the export of related data back to the server?
Edward,
The PK is just a field in your object. If you want to maintain PKs in CD, they are just numbers. As you build your object graph, you have to maintain them in parallel with your relations. Of course, exporting records created on the device back to your server will be difficult -- FKs and PKs are unique to each table, and that uniqueness is determined on the server. Hence, tracking these numbers is not that useful.
May I suggest that your JSON needs to be structured such that it is redundant -- that it has both the data and the various PKs and FKs, if any?
Finally, you appear to be making a CRUD-focused API. Generally, those are low-performance APIs for remote devices. There are other problems with CRUD APIs, such as inconsistent business logic between servers and clients. I would suggest you rethink your APIs.
Andrew