H2 database - how to copy a locked xxx.mv.db file

If you try to copy the file (xxx.mv.db) while H2 is running, the copy fails with an error: the process cannot access the file because another process has locked a portion of it.
I want to copy the file despite this error. Is that possible?

It was solved with H2's BACKUP TO '/path' command.
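
For reference, a minimal sketch of that approach (the backup path below is just an example, not from the question): H2's BACKUP statement writes a zip of the database file while the database stays open, so the file lock is not a problem.

-- H2 SQL, run against the live database (e.g. from the H2 console or any JDBC session)
BACKUP TO 'C:/backup/mydb-backup.zip';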

Related

PostgreSQL 14.5 pg_read_binary_file could not open file for reading: Invalid argument

Yesterday I installed PostgreSQL 14.5 on a Windows 10 laptop.
I then ran an old script to load images into a table.
The script uses the pg_read_binary_file function.
Some of the images are .jpg files and some are .png files.
Of the 34 files, only 5 were successfully processed (1 .jpg and 4 .png). The other 29 failed with the following error:
[Exception, Error code 0, SQLState XX000] ERROR: could not open file "file absolute path" for reading: Invalid argument
For instance, the following statement executes without errors
select pg_read_binary_file('C:\Users\Jorge\OneDrive\Documents\000\020-logos\adalid.png') as adalid_png;
... and the following statement fails
select pg_read_binary_file('C:\Users\Jorge\OneDrive\Documents\000\020-logos\oper.png') as oper_png;
... with the following error message
[Exception, Error code 0, SQLState XX000] ERROR: could not open file "C:/Users/Jorge/OneDrive/Documents/000/020-logos/oper.png" for reading: Invalid argument
So far, I have not been able to identify any difference in the files that could be the cause of the error. Also, I'm pretty sure the script works on earlier releases of version 14. Unfortunately I have not been able to find a website to download any of those earlier releases to test it again.
Has anyone else found this problem, and its solution?
I think the issue is somehow caused by OneDrive. This laptop is new. When I logged in with my Microsoft account, the OneDrive directory was automatically created and updated. Apparently this operation only updates the directory entries, leaving the contents of the files in the cloud until they are opened. When I zipped the directory that contains all my images, a message from OneDrive appeared saying that it would restore some files at that moment. After that, all the commands in my scripts worked.
My theory is that pg_read_binary_file gets the file entry from the directory, so it doesn't give the "No such file or directory" message; but then fails reading the contents, giving the "Invalid argument" message instead.
The unanswered question would be: why does 7-Zip make OneDrive restore the files but pg_read_binary_file does not?
UPDATE
After more testing, and reading Save disk space with OneDrive Files On-Demand for Windows, I am now sure that pg_read_binary_file can fail with the message "Invalid argument" when the OneDrive file is not locally available. In Windows File Explorer such a file has a blue cloud icon next to it.
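
One workaround sketch of my own (not from the update above): force OneDrive to download the files before running the script, either via right-click > "Always keep on this device" in File Explorer, or by pinning them from a prompt:

# Pin every file in the images folder so OneDrive keeps a local copy on disk.
# The path is the one from the example above; the +P (pinned) / -U (unpinned)
# attributes are the Files On-Demand flags, behaviour assumed from the OneDrive docs.
attrib -U +P "C:\Users\Jorge\OneDrive\Documents\000\020-logos\*" /S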

PostgreSQL COPY TO is producing a corrupted file

COPY sqllearning.superstore_people
TO 'C:\Windows\Temp\sup_people.csv'
WITH DELIMITER ','
CSV HEADER;
When I open the csv file in either Notepad or Excel I get an error that says:
Excel cannot access 'file.csv', the document may be read only or encrypted.
file.csv cannot be accessed. The file may be corrupted, located on a server that is not responding, or read-only.
Any tips on how to resolve this would be appreciated. Thanks!
I faced a similar issue before. I solved it by checking and setting ownership of the entire folder and all its subfolders and files for the current user (in Windows: right-click the folder - Properties - Security). Maybe it will help you too.

Cannot zip a locked Access file

I am writing a powershell script which creates a zip file of a local folder:
[System.IO.Compression.ZipFile]::CreateFromDirectory('c:\myfolder\', 'c:\myarchive.zip', [System.IO.Compression.CompressionLevel]::Fastest,$true)
This folder contains an MS-Access database. This database is opened at the same time by another user. I cannot ask him to close this database.
The zip operation fails because the database is locked. Is there a way to bypass this lock and make a copy of the database?
Thanks a lot
Copy the folder to a temporary place and zip the copy.
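
A minimal PowerShell sketch of that approach (the source folder and archive paths come from the question, the staging path under %TEMP% is an assumption):

# Copy the folder to a temporary location first, then zip the copy.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$source  = 'C:\myfolder'
$staging = Join-Path $env:TEMP 'myfolder-staging'
$archive = 'C:\myarchive.zip'

# A plain file copy of the open Access database usually still succeeds,
# even though zipping it in place fails.
Copy-Item -Path $source -Destination $staging -Recurse -Force

# Zip the staged copy, then clean up.
if (Test-Path $archive) { Remove-Item $archive -Force }
[System.IO.Compression.ZipFile]::CreateFromDirectory($staging, $archive, [System.IO.Compression.CompressionLevel]::Fastest, $true)
Remove-Item $staging -Recurse -Force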

Scala Spark - overwrite parquet file failed to delete file or dir

I'm trying to create parquet files for several days locally. The first time I run the code, everything works fine. The second time it fails to delete a file. The third time it fails to delete another file. It's totally random which file can not be deleted.
The reason I need this to work is because I want to create parquet files everyday for the last seven days. So the parquet files that are already there should be overwritten with the updated data.
I use Project SDK 1.8, Scala version 2.11.8 and Spark version 2.0.2.
After running the following line of code the second time:
newDF.repartition(1).write.mode(SaveMode.Overwrite).parquet(
OutputFilePath + "/day=" + DateOfData)
this error occurs:
WARN FileUtil:
Failed to delete file or dir [C:\Users\...\day=2018-07-15\._SUCCESS.crc]:
it still exists.
Exception in thread "main" java.io.IOException:
Unable to clear output directory file:/C:/Users/.../day=2018-07-15
prior to writing to it
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:91)
After the third time:
WARN FileUtil: Failed to delete file or dir
[C:\Users\day=2018-07-20\part-r-00000-8d1a2bde-c39a-47b2-81bb-decdef8ea2f9.snappy.parquet]: it still exists.
Exception in thread "main" java.io.IOException: Unable to clear output directory file:/C:/Users/day=2018-07-20 prior to writing to it
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:91)
As you can see, it's a different file than the one from the second run.
And so on. After deleting the files manually, all parquet files can be created.
Does somebody know that issue and how to fix it?
Edit: It's always a .crc file that can't be deleted.
Thanks for your answers. :)
The solution is not to write to the Users directory. There seems to be a permission problem, so I created a new folder directly under C:\ and it works perfectly.
This problem occurs when you have the destination directory open in Windows Explorer. You just need to close the directory.
Perhaps another Windows process has a lock on the file so it can't be deleted.
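
A self-contained sketch of the accepted workaround (the output root C:/spark-out, the dummy DataFrame, and the date value are assumptions; only the write pattern comes from the question):

// Scala, Spark 2.x: overwrite a day partition under a folder outside C:\Users
import org.apache.spark.sql.{SaveMode, SparkSession}

object OverwriteOutsideUserDir {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("overwrite-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Stand-in for newDF from the question.
    val newDF = Seq((1, "a"), (2, "b")).toDF("id", "value")

    val outputFilePath = "C:/spark-out"   // assumed folder created outside the user profile
    val dateOfData = "2018-07-15"

    newDF.repartition(1)
      .write
      .mode(SaveMode.Overwrite)
      .parquet(outputFilePath + "/day=" + dateOfData)

    spark.stop()
  }
}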

Using COPY FROM in postgres - absolute filename of local file

I'm trying to import a csv file using the COPY FROM command with PostgreSQL.
The database is stored on a Linux server, and my data is stored locally, i.e. C:\test.csv
I keep getting the error:
ERROR: could not open file "C:\test.csv" for reading: No such file or directory
SQL state: 58P01
I know that I need to use the absolute path for the filename that the server can see, but everything I try brings up the same error
Can anyone help please?
Thanks
Quote from the PostgreSQL manual:
The file must be accessible to the server and the name must be specified from the viewpoint of the server
So you need to copy the file to the server before you can use COPY FROM.
If you don't have access to the server, you can use psql's \copy command which is very similar to COPY FROM but works with local files. See the manual for details.
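
A minimal sketch of the \copy variant (the table name target_table and the assumption that the CSV has a header row are mine, not from the question); run it inside psql on the Windows machine, on a single line:

-- psql reads C:\test.csv locally and streams the rows to the remote server
\copy target_table FROM 'C:\test.csv' CSV HEADER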