I'm trying to build an app using Node/Express and RDS PostgreSQL on the back-end to get some more experience with these two technologies. More specifically, I'm looking to build this using the node-postgres package and without the aid of an ORM. I currently have a .sql file in my app that contains the desired schema.
What would be considered "best practice" when implementing a schema for the first time? For example, is it considered better to import a schema via the command line, use something like pgAdmin, or throw a bunch of "CREATE TABLEs" into queries through node-postgres?
Thanks in advance for the help!
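Since the question mentions doing it through node-postgres: a minimal sketch of running an existing .sql file at startup could look like the following. The file name schema.sql and the use of the standard PG* environment variables for the connection are assumptions, not anything node-postgres requires:

// apply-schema.js -- a sketch; assumes schema.sql sits next to this file
// and that PGHOST/PGUSER/PGPASSWORD/PGDATABASE/PGPORT are set.
const fs = require('fs');
const path = require('path');
const { Pool } = require('pg');

async function applySchema() {
  const pool = new Pool(); // node-postgres picks up the PG* env vars by default
  const sql = fs.readFileSync(path.join(__dirname, 'schema.sql'), 'utf8');
  try {
    // A plain (non-parameterized) query string may contain multiple
    // statements, so the whole schema file can be sent in one call.
    await pool.query(sql);
  } finally {
    await pool.end();
  }
}

applySchema().catch((err) => {
  console.error('schema setup failed:', err);
  process.exit(1);
});

Running psql -f schema.sql from the command line, or pasting the file into pgAdmin, would produce the same result; the sketch just shows the node-postgres variant the question asks about.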
I'm unable to locate any API to create a database in PySpark; all I can find is the SQL-based approach.
The catalog documentation doesn't mention a Python method to create a database.
Is this even possible? I am trying to avoid using SQL.
I guess it might be a suboptimal solution, but you can create a database by issuing a CREATE DATABASE statement through SparkSession's sql method, like this:
spark.sql("CREATE DATABASE IF EXISTS test_db")
It's not a pure PySpark API, but this way you don't have to switch context to SQL entirely just to create a database :)
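And you can at least verify the result from the Python side with the Catalog API (a minimal sketch; spark.catalog.listDatabases() is the standard PySpark call):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("CREATE DATABASE IF NOT EXISTS test_db")

# The Catalog API can read database metadata without any SQL.
print([db.name for db in spark.catalog.listDatabases()])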
I have not found a real sync from PostgreSQL to a new graph database like Neo4j, so I've decided to stay on PostgreSQL: I sync the normalized tables with JSON tables kept in a second database (under a different name) on the same PostgreSQL server. That way I have the best of both worlds, a SQL and a NoSQL database.
When I see that plain SQL is faster than the graph-style queries, I can choose between them, and in the future, when I move the NoSQL tables to a real graph database like Neo4j and can sync with it, I won't need to change the app, since it can already use both synced databases.
Has someone already done this? Or is it a dumb idea? Or does someone already use libraries that sync automatically from PostgreSQL to Neo4j, and in the other direction too? Or must I write sync scripts from scratch if I want to sync the two databases?
We have a series of modifications to a Postgres database, which can generally all be written in SQL. So it seems Flyway would be a great fit to automate these.
However, they also include imports from files to tables, such as
COPY mytable FROM '${PWD}/mydata.sql';
Secondly, we'd rather not rely on Postgres file paths like this, since the file referenced by a plain COPY apparently must reside on the database server. It should be possible to run any migration from a remote client -- as in Amazon's RDS documentation (last section).
Are there good approaches to handling this kind of scenario already in Flyway? Or alternate approaches to avoid this issue altogether?
Currently, it looks like it'd work to implement the whole migration in Java and use the Postgres driver's CopyManager to import the data (see the sketch below). However, that means most of our migrations would have to be done in Java, which seems much clumsier. (As far as I can tell, hybrid Java+SQL migrations are not expected?)
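For reference, such a Java-based migration might look roughly like this sketch. The class name, table, and file path are made up; BaseJavaMigration is the base class in current Flyway versions (older releases used JdbcMigration instead):

package db.migration;

import java.io.FileReader;
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class V2__Import_mydata extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        // CopyManager streams a client-side file through COPY ... FROM STDIN,
        // so the data never has to exist on the database server.
        CopyManager copy = context.getConnection()
                .unwrap(PGConnection.class)
                .getCopyAPI();
        try (FileReader reader = new FileReader("mydata.csv")) {
            copy.copyIn("COPY mytable FROM STDIN WITH (FORMAT csv)", reader);
        }
    }
}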
I'm new to Flyway, so I thought I'd ask what other alternatives might exist, since I'd expect importing a table during a migration to be pretty common.
Starting with Flyway 3.1, you can use COPY FROM STDIN statements within your migration files to accomplish this. The SQL execution engine will automatically use PostgreSQL's CopyManager to transfer the data.
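A migration file using this might look roughly like the following sketch (table, columns, and rows are placeholders; the inline data is terminated by a \. line):

-- V2__import_mydata.sql: COPY with inline data instead of a server-side file
COPY mytable (id, name) FROM STDIN WITH (FORMAT csv);
1,Alice
2,Bob
\.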
I'm a new Neo4j user, and I've played around with the webadmin interface of Neo4j to create small databases and simple queries in Cypher. Now I want to use Neo4j to build a graph from my existing database: a PostgreSQL database with millions of entries sharing the same structure (Neo4j is well suited to represent these data). My question is how to import these data. What is the easiest way to do that? I already saw that Cypher recognizes CSV files, but do I have to create a CSV file with my data, or is there another way to import them? Thank you for your help. Sam
One option is to export your Postgres data to CSV and use LOAD CSV to import it into the graph.
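A minimal Cypher sketch of that, assuming the export produced a people.csv with a header row and id/name columns (the Person label and the file URL are placeholders):

// Batches the import into transactions; useful for millions of rows.
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM 'file:///people.csv' AS row
// toInteger() on recent Neo4j; older releases call it toInt().
CREATE (:Person {id: toInteger(row.id), name: row.name});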
Another way is to write a script in a language of your choice (I'd vote for Groovy here) that connects to Postgres via JDBC and to Neo4j, and applies the business logic to transform between the two.
A third option is using an ETL tool like Talend. It basically does the same as your custom script, but provides a point & click interface to define the transformation; see http://neo4j.com/blog/fun-with-music-neo4j-and-talend/ for more details.
I am working on a project where I need to programmatically validate and/or compare a database schema between product releases.
I am using Perl and am looking for a cross-platform method to collect the database schema. I am currently able to perform database queries by running the dbisql.exe command and then parsing the results.
I am wondering if there is potentially a stored procedure or set of queries that I can run to collect the database schema.
It appears that the dbunload.exe command could be used to generate a SQL regeneration script; however, I suspect that output may be difficult to parse.
Any feedback would be greatly appreciated.
If you would like to retrieve the DB schema data at a really low level, you can query the corresponding system tables. They live in the SYS namespace; SYSTABLE lists all tables, and SYSCOLUMN lists all columns of those tables.
Check the ASA SQL Reference Handbook for the schema of those system tables.
With Perl's DBI you can easily run queries against those tables, but you will have to keep some local copy of the reference schema to compare the query results against.
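For example, a catalog query along these lines could serve as the snapshot you store and diff between releases (a sketch only; the exact SYSTABLE/SYSCOLUMN column names differ between ASA versions, so verify them against the reference handbook):

-- One row per column of each user table: a diff-friendly schema snapshot.
SELECT t.table_name, c.column_name, c.domain_id, c.width, c.scale
FROM SYS.SYSTABLE t
JOIN SYS.SYSCOLUMN c ON c.table_id = t.table_id
WHERE t.creator = USER_ID('DBA')   -- only objects owned by the DBA user
ORDER BY t.table_name, c.column_id;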
Sybase Central v3.0 can export the DDL for all DB objects; however, I think SC v6.0 can't connect to ASA 11 :(