I am having issues with invalid UTF8 byte sequences - postgresql

I am trying to move a PostgreSQL database from one server to another. To do so, I ran pg_dump, and after creating a new database on the new server, I tried to restore the dumped file. For the most part the restore went fine, but one table did not copy over:
pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence for encoding "UTF8": 0x92
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
Now, after checking the database properties, it turns out that the original database was encoded as SQL_ASCII, but the new one I created is UTF8.
I don't know much about encodings, but isn't UTF8 backward compatible with ASCII? So why is there an invalid byte sequence?
Would changing the new database to one that uses SQL_ASCII fix this problem?
If I have to change the encoding of the new database, how do I do it? Can I just change it, or do I have to start from scratch and recreate the entire database?
Thanks for the help!

Before connecting to the database you could set client_encoding to 'LATIN9' (which is probably what the data really is; in any case, every byte sequence is accepted in that encoding). You can do this by:
1) running pg_restore with a -f my_filename flag, which writes a plain SQL script instead of restoring directly;
2) editing the resulting file (there is probably already a "SET client_encoding = 'UTF8';" line near the top) to declare the encoding you want;
3) submitting it with "psql -U username dbname < my_filename". (In most cases you'll have to supply a different username or dbname; the "\connect newdbname" near the top of the script will take over. Sometimes you even need to create the user first.)
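A concrete sketch of step 2 (file names are placeholders; the dump header is faked here with printf so the edit itself can be demonstrated on its own):

```shell
# A "pg_restore -f mydb.sql mydb.dump" run would produce a script whose
# header looks roughly like this; fake it so the edit step is self-contained.
printf "SET client_encoding = 'UTF8';\nCOPY t (c) FROM stdin;\n" > mydb.sql

# Step 2: switch the declared client_encoding to LATIN9.
sed -i "s/SET client_encoding = 'UTF8';/SET client_encoding = 'LATIN9';/" mydb.sql

# Step 3 would then be: psql -U username dbname < mydb.sql
cat mydb.sql
```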

If you are on a *nix box and your pg_dump file is plain text, then you could try running the dump file through iconv prior to importing it into postgres.
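For instance, 0x92 is the curly apostrophe in Windows-1252, so a conversion along these lines would make the dump valid UTF-8 before importing (file names are placeholders, and the sample file is created inline for illustration):

```shell
# Sample "dump" containing the offending 0x92 byte (a CP1252 right quote).
printf 'don\x92t panic\n' > dump.sql

# Re-encode the whole file; 0x92 becomes the valid UTF-8 sequence for the quote.
iconv -f CP1252 -t UTF-8 dump.sql > dump.utf8.sql

cat dump.utf8.sql
```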

Related

Need to convert a dump.sql to a *fname.dump file for restoration of Odoo

My last working database backup of an Odoo 13 CE system was a full one, including the filestore. I'm getting timeouts when trying to restore "a copy" via the Odoo database-manager page. I thought I could just do a partial restore (dump.sql & manifest.json), dump the filestore, recompress and upload, but that brought everything to its knees (errored with "no *.dump file found"). So I logged into the server, dropped my failed restore and restarted the Odoo service, and all is back to somewhat normal, with the database I want to replace active.
Is there a way to convert that .sql to a .dump, or some other way to get my .sql added to my Postgres DB? I'm fairly green re: psql, so if I'm missing something simple, please feel free to shove it down my throat.
TIA
To restore an SQL backup file to a new database:
psql YOUR_DATABASE_NAME < YOUR_FILENAME
You can read more about backing up and restoring a Postgres DB here: https://www.postgresql.org/docs/11/backup-dump.html
When restoring a large database (with filestore), you have to raise the Odoo server's time limits so the process can finish.
Add these parameters to your Odoo startup command:
--limit-time-cpu=6000 --limit-time-real=12000
Restore the SQL File
psql database_name < your_file.sql
Restore the Dump File
pg_restore -d database_name < your_file.dump

PostgreSQL - Error: "invalid input syntax for type bytea"

I am very new to PostgreSQL so I apologize if the question is elementary.
During a PostgreSQL database restore from an SQL file, I am getting the error "invalid input syntax for type bytea", and I believe the data is not copied to the table, i.e. the table ends up empty.
This is the error message:
2015-02-20 08:56:14 EST ERROR: invalid input syntax for type bytea
2015-02-20 08:56:14 EST CONTEXT: COPY ir_ui_menu, line 166, column web_icon_data: "\x6956424f5277304b47676f414141414e5355684555674141414751414141426b43414d41414142485047566d4141414143..."
2015-02-20 08:56:14 EST STATEMENT: COPY ir_ui_menu (id, parent_id, name, icon, create_uid, create_date, write_date, write_uid, web_icon_data, web_icon, sequence, web_icon_hover, web_icon_hover_data) FROM stdin;
The database backup dump is created like this:
pg_dump -U user_name database_name -f backup_file.sql
The database restore is done like this:
psql -U user_name -d destination_db -f backup_file.sql
Source database (to get backup from) is PostgreSQL version 9.1.15 on one server and destination (to restore to) database is PostgreSQL 8.3.4, on another server.
Is there any way to resolve this issue? Thanks in advance for your help.
Restoring a dump from a newer version of Postgres onto an older is quite often problematic, and there is no automated way that I am aware of. Making this work will most likely require editing the dump file manually.
Specifically, Postgres 9.0 changed the default output format for bytea from 'escape' to 'hex': a pre-9.0 server does not understand the hex form (the "\x6956..." value in your error message), so the COPY fails on restore.
If you have access to your 9.x server configuration, you can change the bytea_output variable to 'escape' in postgresql.conf:
bytea_output = 'escape' # hex, escape
Then restart Postgres 9.X server and dump the database as you do it normally. Finally, restore it on the 8.X server.
You can also change the variable for the dump's connection only, instead of server-wide, but that may be outside the scope of this question.
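For the record, the per-connection route can be sketched like this: libpq clients such as pg_dump honor the PGOPTIONS environment variable, so the setting can be applied to just the dump's session. This needs a live server, so it is shown only as a fragment:

```shell
# Ask the 9.x server to emit bytea in the old escape format
# for this one connection only; no postgresql.conf edit, no restart.
PGOPTIONS="-c bytea_output=escape" pg_dump -U user_name database_name -f backup_file.sql
```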
Hope it helps.
Unfortunately, PostgreSQL 9 backups are not backward compatible with PostgreSQL 8, and there is no option to enable such compatibility. I was stuck on this problem for hours.
You'll need to figure out which statements are not compatible and find their equivalents (if they exist) in PostgreSQL 8.
Another, better option would be to upgrade your v8 server to the latest v9.

PgAdmin III, opening server status gives "invalid byte sequence for encoding UTF8"

I have two Postgres 9.3 servers in synchronous replication.
I had needed to restart the slave in order to load a new archive_cleanup_command in the recovery.conf.
The server restarted correctly and it's now perfectly in sync with the master server.
But when I open "Server status" panel for the slave server in PgAdmin III (which executable is located on the master server), I get some errors like this:
invalid byte sequence for encoding "UTF8" plus some hex codes
It might be because I put a tilde ~ in the archive_cleanup_command; that didn't work, so I removed it and the command then worked correctly.
Maybe that ~ was written somewhere and isn't a valid character... but I also deleted the logs...
Log of the slave server has a lot of lines like the followings:
2015-02-13 11:11:32 CET ERROR: invalid byte sequence for encoding "UTF8": 0xe8 0x20 0x73
2015-02-13 11:11:32 CET STATEMENT: SELECT pg_file_read('pg_log/postgresql-2015-02-13_111038.log', 0, 50000)
Note that postgresql-2015-02-13_111038.log is the last log, the one from which I got these lines.
The problem you have is that the locale setting lc_messages is set to an encoding that is different to the encoding of the database(s). As a result, some messages are being written into the log using Windows-1252 encoding, while when you try to use PgAdmin to view the log, it tries to interpret the file using UTF-8. Some of the byte sequences written in the log are not valid UTF-8, leading to the error.
In fact, the way in which different locales interact in PostgreSQL can result in mixed encodings in the same log file. There is a bug report on this, but it does not look like it has been resolved.
Probably the easiest way to resolve this is to set lc_messages to English_United States.UTF-8.
It would also be preferable to have lc_messages aligned across all of the databases on the server (or at least all using the same encoding).
Be sure to remove any existing log files as they will already contain the incorrect encoding.
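A postgresql.conf fragment for that (the exact locale name must exist on the server's OS; this is the Windows-style name used above):

```
# postgresql.conf — keep server log messages in the same encoding as the databases
lc_messages = 'English_United States.UTF-8'
```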
It is because your PostgreSQL log file is corrupted (or mis-encoded), as shown by the "SELECT pg_file_read ..." statement in the error.
If you empty the server log (after backing it up, perhaps) and reconnect, you will not see this Unicode error any more, and you will be able to use PgAdmin III again.

Error importing postgres DB dump into empty database

I need to import my psql dump into my fresh psql database. When I execute the following command, I get errors.
psql -U user new_database < filename.sql
Error I got:
ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 0 bytes by 1208975751 more bytes.
How do I fix this? Also, is there any method to log the import process?
Thanks.
I think the most common cause is a "corrupt" SQL file. There's no easy fix. Split the file in half (man split), fix the SQL statements at the tail end of one resulting file and at the head end of the other, and run again. In the past, I seem to recall seeing "Invalid byte sequence for UTF-8", or something like that. I'm sure there are other causes.
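The split-and-rejoin cycle looks roughly like this (a sketch with a generated stand-in for the dump; with a real dump you must repair the statements cut at each boundary by hand before rejoining):

```shell
# Stand-in for the oversized dump: 2500 one-line statements.
seq 2500 | sed 's/^/SELECT /; s/$/;/' > dump.sql

# Cut it into numbered 1000-line pieces: piece_00, piece_01, piece_02.
split -l 1000 -d dump.sql piece_

# ...edit the statements broken at each boundary, then rejoin and retry:
cat piece_* > dump_fixed.sql
```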
PostgreSQL has a lot of logging options; set them in postgresql.conf and restart PostgreSQL. Look at
log_destination
logging_collector
client_min_messages
log_error_verbosity
log_statement
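An illustrative postgresql.conf fragment combining them (values are examples, not recommendations):

```
log_destination = 'stderr'        # where log output goes
logging_collector = on            # capture stderr into log files
client_min_messages = notice      # what the client session echoes back
log_error_verbosity = default     # terse | default | verbose
log_statement = 'all'             # log every statement of the import
```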

Copy a live Firebird .fdb database

I want to make a copy of a live Firebird .fdb database.
I understand that simply copying it could result in a corrupted database and I looked into using the gbak command because it is able to perform a backup while the database is running.
So that will give me a database backup but I would need to restore this before I can use it. My database is nearly 1GB and takes 10 minutes to restore which is too long. Is there any other method to simply copy a Firebird live database from one location to another safely?
So far I have used the following to backup (which works):
gbak -v -t -user SYSDBA -password "masterkey" 127.0.0.1:"C:/Files/Live/Database.fdb" "C:\Test\Test.fbk"
I also tried using the following to backup and restore at the same time:
gbak -c [options] <source database> stdout | gbak -r [options] stdin <target database>
but this kept giving the error:
Done with volume #1, "new.gbak"
Press return to reopen that file, or type a new
name followed by return to open a different file.
The risk of corruption is due to the way Firebird writes its file. Your copy might contain inconsistent data when it is made at the same time that Firebird (re)writes a data page. As far as I know the only real risk of corruption is during writes of index pages (and then only for index page splits); otherwise it just leads to inconsistent data and dangling transactions (which wouldn't be visible anyway, as the transaction is not committed).
If you really don't want to use a backup, you can set the Firebird database in a backup state. This freezes the database and writes the changes to a delta file instead. You enable this state using ALTER DATABASE BEGIN BACKUP and end this state using ALTER DATABASE END BACKUP. See the documentation for ALTER DATABASE. This command was added in Firebird 2.0.
For the second part of your question (which really should have been posted as a separate question):
The command gbak -c doesn't create a backup; it creates a database from a backup (it is the counterpart of -r (replace), but doesn't overwrite an existing database). See Restore Switches for more information.
See Create a Database Clone Without a Dump File for an example of how you can make a backup like this:
gbak -backup emptest stdout | gbak -replace stdin emptest_2
You can do
alter database begin backup;
then copy the file using standard file copy of your OS and
alter database end backup;
Also I strongly recommend reading these pages [1] [2] about nbackup.
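For completeness, nbackup bundles that begin-backup/copy/end-backup cycle into one command; a hedged sketch (paths are placeholders, and the user/password switch spellings vary by Firebird version, so check nbackup's built-in help):

```shell
# Level-0 (full) physical backup of a running database; the resulting
# file can later be turned back into a working database with nbackup's
# restore/fixup switches instead of a slow gbak restore pass.
nbackup -B 0 "C:/Files/Live/Database.fdb" "C:/Test/Database.nbk"
```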