Load files from FTP with file_fdw in PostgreSQL

Can you help me understand whether file_fdw allows connecting to and reading from FTP? If so, how can that be done?

file_fdw, starting with PostgreSQL 10, supports a program option. So if you can write a program (or use an existing program such as curl) to read from FTP, you can hook file_fdw up to it.
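A minimal sketch of that approach could look like the following (the server, table, and column names are made up for illustration, and the curl URL is the same placeholder used in the answer below; adjust the columns to the actual file):
CREATE EXTENSION file_fdw;
CREATE SERVER ftp_files FOREIGN DATA WRAPPER file_fdw;
-- Hypothetical foreign table: runs curl on every scan and parses the output as CSV.
CREATE FOREIGN TABLE remote_csv (col1 text, col2 text)
    SERVER ftp_files
    OPTIONS (program 'curl -s ftp://server.org/file', format 'csv');
SELECT * FROM remote_csv;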

No, file_fdw can only access files that are local to the database server machine.
Perhaps COPY ... FROM PROGRAM is what you need:
COPY tab
FROM PROGRAM 'curl ftp://server.org/file';
But this copies the data into a database table.

Related

SQL DATABASE(postgresql)

ERROR: could not open file "C:\Users\lenovo\Downloads\Owners.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
I am trying to import a CSV file into PostgreSQL, but this error pops up. I have searched everywhere but couldn't find an answer. Please help me.
Thanks in advance!
COPY mytable FROM '/path/thefile.csv' WITH (FORMAT csv, HEADER); is executed by the DBMS server: the .csv file is read by the server process. The server (typically) runs as the operating system user postgres, which cannot access arbitrary users' files. (Also: the client and the server don't have to be running on the same machine.) There are two possible solutions to this:
copy the csv file to a place where the server can access it, e.g. /tmp/, or somewhere under the server's home directory; or
use psql's \copy mytable(col1,col2,...) FROM '/path/file.csv' ... (slightly different syntax), which reads the file on the client.
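For example, with the file from the question, the second option could look like this in psql (a sketch; the table and column names are placeholders, and forward slashes also work in Windows paths):
\copy mytable (col1, col2) FROM 'C:/Users/lenovo/Downloads/Owners.csv' WITH (FORMAT csv, HEADER)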

Postgres Data Encryption at Rest Using LUKS with dm-crypt

We are trying to encrypt Postgres data at rest, but can't find any documentation on encrypting the Postgres data directory using LUKS with dm-crypt.
No special instructions are necessary – PostgreSQL will use the opened encrypted filesystem just like any other file system. Just point initdb to a directory in the opened file system, and it will create a PostgreSQL cluster there.
Automatic server restarts will fail, because someone has to enter the passphrase to open the encrypted file system.
Of all the ways to protect a database, encrypting the file system is the least useful:
Usually, attacks on a database happen via the client, normally with SQL injection. Encrypting the file system won't help there.
The other common attack vector is backups: backups taken with pg_dump or pg_basebackup are not encrypted.
But I guess you know why you need it.

Postgres equivalent to Oracle's "DIRECTORY" objects

Is it possible to create a "DIRECTORY" object in Postgres?
If not, can someone help me with a way to implement something equivalent in PostgreSQL?
Not the best option, but you could use:
COPY (select 1) TO PROGRAM 'mkdir --mode=777 -p /path/to/your/directory/'
Note that only the last component of the directory path gets the permissions set in --mode.
There is no equivalent concept to an "Oracle directory" in Postgres.
The alternatives depend on why the "Oracle directory" is needed.
If the directory is needed to read and write files on the database server, then this can be done through Generic File Access Functions. Access to those functions is restricted to superusers (details in the linked section of the manual). If regular users should be able to use them, the best thing would be to create wrapper functions and then grant execute on those functions to the users in question.
For security reasons, only directories inside the database cluster can be accessed.
But it is possible to create symlinks inside the data directory that point to directories outside of it. Access privileges on those directories need to be set up properly for the postgres operating system user (the one under which the postgres process is started).
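As a sketch of the wrapper-function approach mentioned above (the function and role names are made up; pg_read_file() is one of the built-in Generic File Access Functions):
-- Runs with the privileges of its (superuser) owner, so ordinary users can call it.
CREATE FUNCTION read_server_file(path text) RETURNS text
LANGUAGE sql SECURITY DEFINER SET search_path = pg_catalog, pg_temp
AS $$ SELECT pg_read_file(path) $$;
-- Hand out access explicitly instead of leaving it open to everyone.
REVOKE ALL ON FUNCTION read_server_file(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION read_server_file(text) TO reporting_user;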
If the directory is needed to access e.g. CSV files the way Oracle's external tables do, then there is no need for a "directory": the file_fdw foreign data wrapper can access files outside the data directory (provided access privileges have been set up correctly at the file system level).
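For illustration, such a foreign table over a server-side CSV file could look roughly like this (the file path, server name, and columns are placeholders):
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
-- The file lives on the database server and must be readable by the postgres OS user.
CREATE FOREIGN TABLE external_csv (id integer, name text)
    SERVER csv_files
    OPTIONS (filename '/data/import/file.csv', format 'csv', header 'true');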
The question doesn't really make sense as asked: PostgreSQL is a database management system, and it doesn't expose files and directories as database objects.
The closest parallel I can think of is schemas - see CREATE SCHEMA.
Now, if you want to use COPY to write output to the server's disk and want to create a directory to put that output in... then no, there's nothing like that. But you can use PL/Perlu or PL/Pythonu to do it easily enough.
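For example, a minimal sketch with untrusted PL/Python (assuming the plpython3u extension is available; the function name is made up):
CREATE EXTENSION IF NOT EXISTS plpython3u;
-- Creates a directory (and any missing parents) on the database server's file system.
CREATE FUNCTION make_server_dir(path text) RETURNS void
LANGUAGE plpython3u
AS $$
import os
os.makedirs(path, exist_ok=True)
$$;
SELECT make_server_dir('/path/to/your/directory/');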

PostgreSQL how to restore a blob entry to the file system

I want to restore a blob entry to the file system as a file.
How can I do that with PostgreSQL on Linux?
Could you direct me towards an appropriate solution?
Thanks in advance.
See section 32.3.3 of the PostgreSQL documentation on large object client interfaces for how to use the lo_export function.
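For instance, something along these lines (the OID and output paths are placeholders): the lo_export() SQL function writes the file on the database server, while psql's \lo_export writes it on the client machine.
-- Server-side export of large object 16404 (hypothetical OID); needs appropriate privileges.
SELECT lo_export(16404, '/tmp/restored_blob.bin');
-- Client-side alternative from psql, writing to the machine psql runs on.
\lo_export 16404 '/tmp/restored_blob.bin'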
You can use my pgsql-fio extension, which provides basic file system functions.

Can PostgreSQL COPY read CSV from a remote location?

I've been using JDBC with a local Postgres DB to copy data from CSV files into the database with the Postgres COPY command. I use Java to parse the existing CSV file into a CSV format that matches the tables in the DB. I then save this parsed CSV to my local disk and have JDBC execute a COPY command using the parsed CSV against my local DB. Everything works as expected.
Now I'm trying to perform the same process on a Postgres DB on a remote server using JDBC. However, when JDBC tries to execute the COPY I get
org.postgresql.util.PSQLException: ERROR: could not open file "C:\data\datafile.csv" for reading: No such file or directory
Am I correct in understanding that the COPY command tells the DB server to look for this file locally? I.e., the remote server is looking on its own C: drive (where the file doesn't exist).
If this is the case, is there any way to tell the COPY command to look on my computer rather than "locally" on the remote machine? Reading through the COPY documentation I didn't find anything that indicated this functionality.
If the functionality doesn't exist, I'm thinking of just populating the whole database locally and then copying the database to the remote server, but I wanted to check that I wasn't missing anything.
Thanks for your help.
Create your sql file as follows on your client machine
COPY testtable (column1, c2, c3) FROM STDIN WITH CSV;
1,2,3
4,5,6
\.
Then execute, on your client
psql -U postgres -f /mylocaldrive/copy.sql -h remoteserver.example.com
If you use JDBC, the best solution for you is to use the PostgreSQL COPY API
http://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html
Otherwise (as already noted by others) you can use \copy from psql, which allows accessing local files on the client machine.
To my knowledge, the COPY command can only read from a file on the machine where the database server is running, or from STDIN (which is how client-side tools feed it data over the connection).
You could make a shell script where you run the Java conversion and then use psql to run a \copy command, which reads from a file on the client machine.
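For example, reusing the file path from the question and the table from the earlier answer, the client-side command could look like this (a sketch; adjust table and column names, and note that forward slashes also work in Windows paths):
\copy testtable (column1, c2, c3) FROM 'C:/data/datafile.csv' WITH (FORMAT csv)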