flaskr - Flask tutorial database confusion?

In this step of the Flask tutorial (http://flask.pocoo.org/docs/tutorial/dbinit/), it is written:
Such a schema can be created by piping the schema.sql file into the sqlite3 command as follows:
sqlite3 /tmp/flaskr.db < schema.sql
The downside of this is that it requires the sqlite3 command to be installed which is not necessarily the case on every system. Also one has to provide the path to the database there which leaves some place for errors. It’s a good idea to add a function that initializes the database for you to the application.
Are both piping the schema.sql file and adding a function necessary, or are they alternatives?

They're alternatives. I think the author suggests that when you're following the tutorial, piping the SQL is okay, but when you're writing your own real applications, you should add a function and use that instead.
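For reference, the function the tutorial goes on to add looks roughly like this (a sketch based on the linked page; connect_db and app come from its earlier steps, and closing is imported from contextlib):

from contextlib import closing

def init_db():
    # open a fresh connection, run the whole schema.sql script, commit
    with closing(connect_db()) as db:
        with app.open_resource('schema.sql') as f:
            db.cursor().executescript(f.read())
        db.commit()

You can then create the database from a Python shell, with no sqlite3 binary involved:

>>> from flaskr import init_db
>>> init_db()

(On newer Flask versions you may need app.open_resource('schema.sql', mode='r') so that executescript receives text rather than bytes.)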

Related

How can I obfuscate SQL code for PostgreSQL?

I'm trying to obfuscate my SQL code for PostgreSQL (saved as .sql files), but I can't find a clear way to do it.
The SQL is run with psql on a Linux system.
The point of this obfuscation job is to make the SQL code hard to read on the customer's side.
I tried using the shell script binary generator shc to obfuscate the whole set of queries, but that didn't really work well.
Is there anything else worth trying?

Upgrading from Postgres 7.4 to 9.4.1

I'm upgrading Postgres from ancient 7.4 to 9.4.1 and seeing some errors.
On the old machine, I did:
pg_dumpall | gzip > db_pg_bu.gz
On the new machine, I did:
gunzip -c db_pg_bu.gz | psql
While restoring I got a number of errors which I don't understand and whose importance I don't know. I'm not a DBA, just a lowly developer, so if someone could help me understand what I need to do to get this migration done, I would appreciate it.
Here are the errors:
ERROR: cannot delete from view "pg_shadow"
DETAIL: Views that do not select from a single table or view are not automatically updatable.
HINT: To enable deleting from the view, provide an INSTEAD OF DELETE trigger or an unconditional ON DELETE DO INSTEAD rule.
I also got about 15 of these:
NOTICE: SYSID can no longer be specified
And this, although it looks harmless, since I saw that plpgsql is installed by default starting in version 9.2:
ERROR: could not access file "/usr/lib/postgresql/lib/plpgsql.so": No such file or directory
SET
NOTICE: using pg_pltemplate information instead of CREATE LANGUAGE parameters
ERROR: language "plpgsql" already exists
A big concern is that, as it restores the databases, for each one I see something like this:
COMMENT
You are now connected to database "landrush" as user "postgres".
SET
ERROR: could not access file "/usr/lib/postgresql/lib/plpgsql.so": No such file or directory
There are basically two ways. Both are difficult for the inexperienced (and maybe even for the experienced).
Do a stepwise migration, using a few intermediate versions (which will probably have to be compiled from source). Between versions you'd have to do a pg_dump --> pg_restore (or just psql < dumpfile, as in the question). A possible first hop could be 7.4 -> 8.3, but an additional hop might be needed.
Edit the (uncompressed) dumpfile: remove (or comment out) anything that the new version does not like. This will be an iterative process, and it assumes your dump fits into your editor (and that you know what you are doing). You might need to re-dump, separating schema and data (options --schema-only and --data-only; I don't even know if these were available in PG 7.4).
BTW: it is advisable to use the pg_dump from the newer version (the one that you will import into), specifying the source host via the -h flag. The new (target) version's pg_dump knows what the new version needs and will try to adapt, up to a certain point (you may still need more than one step). It will also refuse to work if it cannot produce a usable dump, in which case you'll have to make smaller steps.
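For example (a sketch, assuming the new 9.4 binaries are first on your PATH and the old 7.4 server is reachable as oldhost; hostnames and file names are placeholders):

pg_dumpall -h oldhost -U postgres > db_pg_bu.sql
psql -f db_pg_bu.sql postgres

or, per database and separating schema from data as described above (landrush being one of the databases from the question):

pg_dump -h oldhost --schema-only landrush > landrush-schema.sql
pg_dump -h oldhost --data-only landrush > landrush-data.sql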
Extra:
if the result of your failed conversion is complete enough, and if you are only interested in the basic data, you could just stop here, and maybe polish a bit.
NOTICE: using pg_pltemplate information instead of CREATE LANGUAGE parameters: I don't know what this is; maybe it relates to the way additional languages, such as plpgsql, were added to the core DBMS.
ERROR: language "plpgsql" already exists: you can probably ignore this error, or comment out the offending lines.
DETAIL: Views that do not select from a single table or view are not automatically updatable: this implies that the Postgres RULE rewrite system is used in the old DB. It will need serious work to get it working again.
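If the dump is too big for an editor, sed can do the commenting-out for you; a sketch for the plpgsql case above (adjust the pattern to the exact statement that fails on your dump):

sed -i 's/^CREATE LANGUAGE plpgsql/-- &/' db_pg_bu.sql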

What are ways to include sizable Postgres table imports in Flyway migrations?

We have a series of modifications to a Postgres database, which can generally be written all in SQL. So it seems Flyway would be a great fit to automate these.
However, they also include imports from files to tables, such as
COPY mytable FROM '${PWD}/mydata.sql';
Secondly, we'd prefer not to rely on Postgres reading file paths like this, which apparently must reside on the server. It should be possible to run any migration from a remote client, as in Amazon's RDS documentation (last section).
Are there good approaches to handling this kind of scenario already in Flyway? Or alternate approaches to avoid this issue altogether?
Currently, it looks like it'd work to implement the whole migration in Java and use the Postgres driver's CopyManager to import the data. However, that means most of our migrations have to be done in Java, which seems much clumsier. (As far as I can tell, hybrid Java+SQL migrations are not expected?)
I'm new to Flyway, so I thought I'd ask what other alternatives might exist, since I'd expect importing a table during a migration to be pretty common.
Starting with Flyway 3.1, you can use COPY FROM STDIN statements within your migration files to accomplish this. The SQL execution engine will automatically use PostgreSQL's CopyManager to transfer the data.
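For example, a migration file, say V2__load_mytable.sql (the file, table, and column names here are hypothetical), can carry the rows inline in the same tab-separated format pg_dump emits:

-- rows are tab-separated; the \. line terminates the data block
COPY mytable (id, name) FROM STDIN;
1	Alice
2	Bob
\.

Because the data travels over the client connection rather than being read from a server-side path, this also works from a remote client, including against RDS.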

Perl DBD::SQLite, is there an equivalent to the .import function?

In many of my scripts I use SQLite for reporting, and I first need to load my big table data (millions of CSV rows). In the past I have found that .import was quicker than inserting line by line (even using transactions).
Nowadays my scripts implement a method that makes a system call to sqlite3 db '.import ....'. I wonder if it is possible to call .import from DBD::SQLite, or whether it would be better to keep calling .import via system().
PS: The reason for wanting to call .import from inside DBD::SQLite is to remove the sqlite3 binary dependency when my software is installed elsewhere.
.import is a SQLite-specific command, so you won't find a DBI method for it which is independent of the database driver; while any given database engine almost certainly has equivalent functionality, each will implement it differently (e.g. SQLite .import vs MySQL LOAD DATA INFILE, &c.)
If you're looking for true engine independence, you'll need to import your data by means of INSERT queries, which can be relied upon in the simplest case to work more or less equivalently everywhere. However, if the difference in execution time is significant enough, it may be worth your while to write an engine-agnostic interface to the import functionality, with a wrapper around each engine's specific import command, and determining from the currently active database driver (or some other method, depending on your code) which wrapper to invoke at runtime.
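In that simplest case, a DBD::SQLite version is just a prepared INSERT inside one big transaction; here's a sketch reusing the foo.db and file.dat names from the one-liner below (the table and column names are made up for illustration):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=foo.db', '', '',
                       { RaiseError => 1, AutoCommit => 0 });
my $sth = $dbh->prepare('INSERT INTO mytable (col1, col2) VALUES (?, ?)');

open my $fh, '<', 'file.dat' or die "file.dat: $!";
while (my $line = <$fh>) {
    chomp $line;
    # naive split; use Text::CSV if fields can contain quoted commas
    $sth->execute(split /,/, $line);
}
close $fh;

# committing once at the end is what keeps this competitive with .import
$dbh->commit;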
If you are not opposed to "shelling out":
perl -e 'system(qq(sqlite3 foo.db ".import file.dat table")) and die $!'

PostgreSQL database reverse engineering from shell level

I happen to be doing some work with a large database, but I am not very experienced, so I guess the smart thing is to create a similar database on my localhost so as not to mess up the original. And here is my question: is it possible to generate an SQL script which will create exactly the tables I want? The MySQL GUI tool has an option like this, called reverse engineering, which generates an SQL script that recreates the database it is run against. Is this possible in PostgreSQL at the shell level?
pg_dump --schema-only db1 > db-schema.sql
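If you only need specific tables rather than the whole schema, pg_dump's -t option narrows the dump (mytable is a placeholder):

pg_dump --schema-only -t mytable db1 > mytable-schema.sql

Feeding the resulting script to psql on your local machine then recreates the objects there, e.g. psql -d mydevdb -f db-schema.sql (mydevdb being your local database).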