While importing a fairly big .csv file into MongoDB using mongoimport, I have rectified and corrected every error except one that says: open "some.csv file" access is denied. I am not able to understand where I went wrong here; access to this file is open, though.
Thanks
mongoimport is probably being denied access because the file is already open in another program (which can hold an exclusive lock on it) or because the user running mongoimport has insufficient rights to it. If closing the other program or changing permissions is not an option, simply copying the file and importing the copy should suffice.
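A minimal sketch of that workaround (the paths and the mydb/mycoll names are placeholders, not from the question):

# copy the file to a location the importing user fully owns
cp /locked/path/some.csv /tmp/some.csv

# import the copy; --headerline treats the first CSV row as field names
mongoimport --db mydb --collection mycoll --type csv --headerline --file /tmp/some.csv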
I am trying to install an extension to postgres that will help me write postgres queries to read data directly from parquet files.
This is the extension I found - https://github.com/pgspider/parquet_s3_fdw
After installing the required dependencies I went ahead and ran the 'make' command:
make install
but it ends up with an error:
Makefile:45: /contrib/contrib-global.mk: No such file or directory
make: *** No rule to make target '/contrib/contrib-global.mk'. Stop.
Has anyone else tried using this extension? Or can you suggest some other way to read data directly from Parquet files from Postgres? (Please note: conversion from Parquet to any other format is not allowed under the circumstances in which I'm trying this.)
Thanks
I'm not sure about the error there, but the FDW you referenced is for accessing Parquet files on S3, which you didn't mention as a requirement. You might want to try a simpler version like https://github.com/adjust/parquet_fdw
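If that simpler extension builds for you, usage looks roughly like this (the table definition and the file path are illustrative, not from the question; check the parquet_fdw README for the exact options):

CREATE EXTENSION parquet_fdw;
CREATE SERVER parquet_srv FOREIGN DATA WRAPPER parquet_fdw;
-- the foreign table's columns must match the Parquet file's schema
CREATE FOREIGN TABLE my_parquet_data (id int, payload text)
    SERVER parquet_srv
    OPTIONS (filename '/path/to/data.parquet');
SELECT * FROM my_parquet_data LIMIT 10;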
I am getting the following error while accessing a Postgres database:
ERROR: could not access status of transaction 69675
DETAIL: Could not open file "pg_clog/0000": No such file or directory.
I didn't do anything with the pg_clog folder but the 0000 file is not there.
Is there any way to recover that file or in any way to fix this issue?
Any help would be appreciated.
You are experiencing database corruption, and you should restore from a backup. You should try to figure out what happened to the database so you can prevent it in the future.
Is your storage reliable?
Are you using dangerous settings like fsync = off?
Were there any crashes recently?
Are you really running 9.1? If yes, you shouldn't do that, as it is out of support.
Are there any files in the pg_clog directory? There should be.
Did you have an out-of-space problem recently that may have led someone to remove files from a “log” directory?
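A quick way to check a few of these from the shell (the data-directory path is a placeholder):

psql -c "SHOW fsync;"           # should normally be 'on'
psql -c "SELECT version();"     # confirms which server version is actually running
ls -l /path/to/db/pg_clog       # should contain files like 0000, 0001, ...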
As stated in the previous response, you're better off restoring from backup. However, I discovered that the metadata for those transaction files is not stored in the same location as the data; we ran into this when we restored the data on a server where we were doing some testing with full vacuum and needed to restore the database to an earlier state before the vacuum. In cases where data integrity isn't as critical, like a test database, you can get away with creating empty files for the missing transaction logs like this:
dd if=/dev/zero of=/path/to/db/pg_clog/xxxx bs=256k count=1   # one zero-filled 256 kB segment, the size pg_clog uses
chown postgres:postgres /path/to/db/pg_clog/xxxx              # owned by the user the server runs as
chmod go-rwX /path/to/db/pg_clog/xxxx                         # readable only by that user
There may be multiple missing files, but if it's just a few files this is an alternative to consider.
How do you copy data from a file to a table in SQL? I'm using pgAdmin3 on a MacBook.
The table name is tutor, and the name of the file is tutor.rtf.
I use the following query:
COPY tutor
FROM '/Users/.../tutor.rtf'
WITH DELIMITER ',';
but got the error "permission denied".
The file is not locked. So how do you solve this problem? Or is there any other, quicker way to copy data from a file to a table besides INSERT INTO ... VALUES (...);?
COPY opens the file using the PostgreSQL server backend, so it requires that the user PostgreSQL runs as have read permission (for COPY FROM) on the file in question. It also requires the same SQL-level access rights to the table as INSERT, but I suspect it's file permissions that are getting you here.
Most likely the postgres or _postgres (depending on how you installed PostgreSQL) user doesn't have read access to /Users/somepath/tutor.rtf or some parent directory of that file.
The easiest solution is to use psql's \copy command, which reads the file using the client permissions, rather than those of the server, and uses a path relative to the client's current working directory. This command is not available in PgAdmin-III.
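For example, from psql (the path is a placeholder for wherever the file actually lives):

-- reads the file with the psql client's own permissions, not the server's
\copy tutor FROM '/path/to/tutor.rtf' WITH DELIMITER ','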
Newer PgAdmin-III versions have the Import command in the table context menu. See importing tables from file in the PgAdmin-III docs. This does the equivalent of psql's \copy command, reading the file with the access rights of the PgAdmin-III application.
Alternately, you can use the server-side COPY command by making sure every directory from /Users down to the file has world-execute rights, meaning users can traverse it (cd into it, etc.) but can't list its contents without r rights too. Then either set the file's group to postgres and make sure it has group read rights, or make it world-readable.
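A sketch of that last option (assuming the file lives at the /Users/somepath location mentioned above):

chmod o+x /Users /Users/somepath       # allow traversal of each directory on the path
chmod o+r /Users/somepath/tutor.rtf    # make the file itself world-readable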
I know this issue has already been raised by others, but even after trying the previous suggestions I still get this error...
When I try to populate a table copying from a csv file, I get a permission error.
COPY Eurasia FROM '/Users/Oritteropus/Desktop/eurasia1.csv' CSV HEADER;
ERROR: could not open file "/Users/Oritteropus/Desktop/eurasia1.csv" for reading: Permission denied
SQL state: 42501
As previously suggested in these cases, I changed the permissions of the file (chmod 711 eurasia1.csv or chmod a+r eurasia1.csv) and I also changed the user rights with:
ALTER USER postgres WITH SUPERUSER; -- where postgres is my user
However, I still get the same error.
I also tried to manually change the privileges from pgAdmin, but it seems every privilege is already given.
I'm working on Mac OS X and I'm using PostgreSQL 9.2.4.
Any suggestion? Thanks
The best option is to switch to COPY FROM STDIN, as that avoids quite a number of permissions issues.
Alternatively, you can make sure that the postgres user can access the file. This is rarely better than COPY FROM STDIN, however, for a couple of reasons.
Server-side COPY involves file I/O performed by PostgreSQL itself, so a bug there could conceivably corrupt your data; COPY FROM STDIN and COPY TO STDOUT keep the file handling in the client, where such bugs can't touch the server's files.
If you are doing it on the server side because of automation/stored proc concerns, this is rarely a win, as you are combining transactional and non-transactional effects. COPY TO STDOUT and COPY FROM STDIN do not have these issues. (For example, you don't have to wonder whether the atime of the inode actually means the file was properly processed).
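A minimal sketch of the STDIN route (mydb is a placeholder database name):

# the client reads the file, so server-side file permissions never come into play
psql -d mydb -c "COPY Eurasia FROM STDIN WITH CSV HEADER" < /Users/Oritteropus/Desktop/eurasia1.csv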
I have been looking everywhere (google, stackoverflow, etc.) for some documentation on how to use the PostgreSQL pg_read_binary_file() function.
The only meaningful thing I can find is this page in the official documentation.
Every time I try to use this function I get an error.
For example:
SELECT pg_read_binary_file('/some/path/and/file.gif');
ERROR: absolute path not allowed
or
SELECT pg_read_binary_file('file.gif');
ERROR: could not stat file "file.gif": No such file or directory
Do I need to have my file in a specific directory for Postgres to have access to it? If so, what directory?
If it matters, the reason I am looking at this function is because I am trying to insert a file into the database without doing crazy things.
As stated by @a_horse_with_no_name and @guedes, the solution is to ensure that the file being uploaded is on the server, inside the PGDATA directory.
The Postgres documentation does state this file-location requirement.
Additionally, I made a symlink from another directory to the PGDATA directory so that I would not disturb any of the postgres data structure. This seems to be working well and I don't have to do any of the above crazy things.
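For illustration, once the file is reachable under the data directory a relative path works, and the result can be inserted straight into a bytea column (the images table here is hypothetical):

-- a relative path is resolved against the server's data directory (PGDATA);
-- note that pg_read_binary_file() is restricted to superusers by default
INSERT INTO images (name, data)
VALUES ('file.gif', pg_read_binary_file('file.gif'));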