Is .csv really the only supported import data file format?
http://www.dbvis.com/doc/9.0/doc/ug/exportImport/exportImport.html
If you have a .sql file with INSERT statements, it can be executed in DbVisualizer as-is: just load the script in the SQL editor and run it to execute the statements.
Related
I want to create a procedure that, when called, creates a backup by generating an .sql file and saving it on my computer.
The procedure's name is trial_gen(). When I execute call trial_gen(), it should create a plain .sql file of the schema.
All solutions I found only use the SQL shell.
SQL code is a script, so I think it makes sense to run one from the SQL shell; it would be stored as a script (text) in a file anyway.
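In practice, the usual way to get a plain .sql schema dump is pg_dump run from the shell, which matches the advice above. A minimal sketch driving it from Python, where the database name and output file are placeholders and pg_dump is assumed to be on PATH:

```python
import subprocess

# Hypothetical sketch: invoke pg_dump to write a plain-SQL, schema-only
# dump -- the file a trial_gen()-style routine would produce.
# "mydb" and "schema_backup.sql" are placeholders, not from the question.
cmd = ["pg_dump", "--schema-only", "--format=plain",
       "--file", "schema_backup.sql", "mydb"]
print(cmd[0])  # pg_dump
# subprocess.run(cmd, check=True)  # uncomment with a real database
```

A stored procedure cannot easily write files on the client machine, which is why shell-side tooling like this is the common answer.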
I'm trying to import a CSV file into a table in pgAdmin using the COPY command and getting the error:
ERROR: could not open file "R:\myfile.csv" for reading: No such file or directory
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
The statement is:
copy tst_copy from 'R:\myfile.csv'
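The hint is the key: server-side COPY opens the file on the database server, which cannot see the client's R:\ drive. A client-side alternative streams the file over the connection with COPY ... FROM STDIN. A minimal sketch, assuming psycopg2 (the connection details are placeholders and psycopg2 itself is an assumption, not part of the question):

```python
# Build the client-side variant of the statement: FROM STDIN means the
# server reads rows from the connection, so the *client* opens the file.
def client_copy_sql(table: str) -> str:
    return f"COPY {table} FROM STDIN WITH (FORMAT csv)"

print(client_copy_sql("tst_copy"))  # COPY tst_copy FROM STDIN WITH (FORMAT csv)

# Assuming psycopg2 is available (hypothetical usage):
# with psycopg2.connect(...) as conn, conn.cursor() as cur:
#     with open(r"R:\myfile.csv") as f:
#         cur.copy_expert(client_copy_sql("tst_copy"), f)
```

This is the same mechanism psql's \copy uses under the hood: the file is read locally and streamed to the server.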
I'm familiarizing myself with the standalone version of DataGrip and having a bit of trouble understanding the different approaches to composing SQL via the console, external files, scratch files, etc.
I'm managing by referencing the documentation and am happy to figure things out that way.
However, I'm trying to ingest CSV data into tables from batch files using the Postgres \copy command. DataGrip executes the command without error, but no data is populated.
This is my syntax, composed and run in the console view:
\copy tablename from 'C:\Users\username\data_file.txt' WITH DELIMITER E'\t' csv;
Note that the data is tab-separated and stored in a .txt file.
I'm able to use DataGrip's import functions (via the context menu) just fine, but I'd like to understand how to issue commands that do the same.
\copy is a command of the command-line PostgreSQL client psql.
I doubt that DataGrip invokes psql, so it won't be able to use \copy or any other “backslash command”.
You probably have to use DataGrip's import facilities, or start using psql.
OK, but what about the SQL COPY command (https://www.postgresql.org/docs/12/sql-copy.html)?
How can I run something like that with DataGrip?
BEGIN;
CREATE TEMPORARY TABLE temp_json("values" text) ON COMMIT DROP;
COPY temp_json FROM 'MY_FILE.JSON';
SELECT "values"->>'aJsonField' AS f
FROM (SELECT "values"::json AS "values" FROM temp_json) AS a;
COMMIT;
I tried replacing 'MY_FILE.JSON' with the full path, with a parameter (?), putting the file in the sql directory, etc.
DataGrip's response is:
[2021-05-05 10:30:45] [58P01] ERROR: could not open file '...' for reading : No such file or directory
EDIT :
I know why. RTFM! -_-
COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server.
Sorry.....
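Once the path problem is solved (a file the server can read, or \copy from psql), the ->> extraction above pulls one text field out of each JSON line. Client-side, the same transformation looks roughly like this; the sample record is made up for illustration:

```python
import json

# Each line COPYed into temp_json holds one JSON document;
# ->> 'aJsonField' returns that field as text.
# The sample line below is illustrative, not from the question's file.
lines = ['{"aJsonField": "hello", "other": 1}']
fields = [json.loads(line)["aJsonField"] for line in lines]
print(fields)  # ['hello']
```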
I'm trying to design a database schema and am using alembic (and virtualenv, if that matters). I made some CSV files with test data. Currently I copy them into the database from the interactive postgresql shell via
\copy table_name FROM '~/path/to/file.csv' WITH CSV;
I would like to automate this so I don't have to copy every table manually when I downgrade and upgrade via alembic. I tried the following:
In my alembic version file in the upgrade() method, below the generation of my tables, I added:
op.execute("copy table_name FROM '~/path/to/file.csv' WITH CSV;", execution_options=None)
But it never finds the file. This is the error:
File "/Users/me/Workspace/project/venv/lib/python3.4/site-packages/SQLAlchemy-0.9.4-py3.4-macosx-10.6-intel.egg/sqlalchemy/engine/default.py", line 435, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (OperationalError) could not open file "~/Workspace/project/path/to/file.csv" for reading: No such file or directory
"copy table_name FROM '~/Workspace/project/path/to/file.csv' WITH CSV;" {}
What am I doing wrong?
Is my problem that I'm trying to run a postgresql command where only sqlalchemy commands can be used? If so, how would I do this via sqlalchemy?
I know about the bulk import option in alembic, but I would prefer not to re-format everything.
Are there other options to automate the copy operation? Are there better workflows?
Postgres does not run as "you", so its home directory is different. When entering file paths in Postgres, always use the full path to the file. When doing this as part of an alembic upgrade, you should do something like:
import os
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'relative/path/to/file/you/want')
op.execute(f"copy table_name FROM '{filename}' WITH CSV;", execution_options=None)
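A related gotcha from the question: the '~' in '~/path/to/file.csv' is never expanded by the server, which is why the literal string reached Postgres unchanged. Tilde expansion is a shell feature; if you do need it on the client side, expand it yourself (the path below comes from the question's error message and is illustrative):

```python
import os

# Postgres receives '~/...' literally; os.path.expanduser resolves the
# tilde to an absolute client-side path before the SQL is built.
p = os.path.expanduser("~/Workspace/project/path/to/file.csv")
print(os.path.isabs(p))
```

Even with an absolute path, a server-side copy still requires the Postgres server process to be able to read the file, per the answer above.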
Hi, I'd like to export data from a sqlite2 database into SQL to import into phpMyAdmin. Thanks.
After searching I found sqlite2
http://www.sqlite.org/sqlite-2_8_17.zip
And to export it to SQL you do:
sqlite database.db .dump > output.txt
To stop an error in phpMyAdmin you need to delete the first line, "BEGIN TRANSACTION;".
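That first-line deletion mirrors what dumps of modern SQLite 3 files look like too; Python's stdlib can produce the same kind of dump via iterdump(), as sketched below. Note this will not open a legacy sqlite2 database, which still needs the old sqlite binary above.

```python
import sqlite3

# Sketch: dump a (modern, SQLite 3) database to SQL text from Python.
# The in-memory database and table are illustrative stand-ins.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
dump = "\n".join(conn.iterdump())
print("BEGIN TRANSACTION;" in dump)  # True -- the line phpMyAdmin chokes on
```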