Moving a Firebird 2.5 database

So currently I have Firebird 2.5 installed and running on Windows, working fine, but performance is a bit slow.
I have installed 2.5 on Ubuntu, and I can connect to the current database with ISQL easily:
connect "192.168.155.112:C:\database\database.FDB" user 'SYSDBA' password 'adminpassword';
So I stopped the firebird services on the Windows server, copied the file to the Ubuntu server, and in isql tried to run:
SQL> connect "localhost:/var/lib/firebird/2.5/data/database.FDB" user 'SYSDBA' password 'adminpassword';
Statement failed, SQLSTATE = m
file /var/lib/firebird/2.5/data/database.FDB is not a valid database
Note I have so far tried:
~$ sudo adduser `id -un` firebird
[sudo] password for luke:
The user `luke' is already a member of `firebird'.
As well as
# chown firebird /var/lib/firebird/2.5/data/database.fdb
With no luck. If anyone has any idea why I might be getting this error, I would be very grateful :)
I am not sure if Super or Classic was used on Windows; however, I have tried both on Ubuntu with the same error message. The Windows server is version 2.5.6, and the Linux server is the same version.

You need to back up the database using gbak on the source system, and then restore the backup with gbak on the target system; copying the raw database file between platforms is not a reliable way to move it.
To back up:
gbak -backup employee D:\backups\employee.fbk
To restore:
gbak -c /backups/employee.fbk employee
Where employee is either the path or the alias of the database.
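Applied to the scenario in the question, the sequence would look roughly like this (a sketch; paths and credentials are taken from the question, adjust to your setup). On the Windows server:
gbak -b -user SYSDBA -password adminpassword C:\database\database.FDB C:\database\database.fbk
Then copy database.fbk to the Ubuntu server and restore it there:
gbak -c -user SYSDBA -password adminpassword /tmp/database.fbk /var/lib/firebird/2.5/data/database.FDB
Unlike the raw database file, the .fbk backup format is transportable between platforms.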
See also the gbak manual for more information.

Related

Syntax to make pg_dump target an older version of pg_restore

I have a postgres 12 database in use on heroku with postgres 11 installed on my macOS workstation. When I try to restore the file provided to me by Heroku, I get
$ pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump
pg_restore: [archiver] unsupported version (1.14) in file header
Heroku's documentation makes it sound like the only option, if a Heroku user wants to access their data locally, is to run postgres 12. That seems silly.
Digging into the Postgres docs on this topic, they say:
pg_dump can also dump from PostgreSQL servers older than its own version. (Currently, servers back to version 8.0 are supported.)
That certainly makes it sound like it should be possible to tell pg_dump which version of pg_restore to target. But nowhere on the internet does there seem to be an example of this in action, including the postgres docs themselves, which offer no clues about the syntax that would be used to target dump versions "back to version 8.0".
Has anyone ever managed to use the pg_restore installed with postgres 11 to import a dump from the pg_dump installed with postgres 12?
The best answer to this that I figured out was to upgrade via brew upgrade libpq. This upgrades psql, pg_dump and pg_restore to the latest version (to link them I had to use brew link --force libpq). Once that upgrade was in place, I was able to dump from the postgres 12 databases on heroku, and import into my postgres 11 database locally. I thought I might need to dump to raw SQL for that to work, but thankfully the pg-12 based pg_restore was able to import into my postgres 11 database without issue.
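Collected as commands, the steps described above look like this (a sketch of the Homebrew route; the restore command is the one from the question):
brew update
brew upgrade libpq
brew link --force libpq
pg_restore --verbose --no-owner -h localhost -d myapp_development latest-heroku.dump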
pg_restore will refuse to handle a dump with a later version than itself - basically, if it encounters a file "from the future", it cannot know how to handle it.
So if you dump a database with pg_dump from v12, pg_restore from v11 cannot handle it.
The solution, as you have found out, is to upgrade the v11 installation.
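To see which versions are involved on your machine before and after upgrading, both tools can report their own version:
pg_dump --version
pg_restore --version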

Firebird 3 on macOS, local connection fails with: Can not access lock files directory /tmp/firebird/

I've installed firebird 3.0 from the package provided by firebirdsql.org.
If I try to use a local connection to a database:
isql employee -user SYSDBA
it fails with:
Can not access lock files directory /tmp/firebird/
So adding read/write/execute permissions to /tmp/firebird/
sudo chmod a+rwx /tmp/firebird/
and executing the command again yields:
Statement failed, SQLSTATE = 08001
I/O error during "open" operation for file "/tmp/firebird/fb_init"
-Error while trying to open file
-Unknown error: -1
This all will work if I sudo the calls, but is this really necessary?
What is the correct way to use a local connection to firebird database on macOS?
I found issue CORE-3871 in the Firebird issue tracker, which describes the problem and its solution. The user who tries to open the local connection must be a member of the firebird user group.
On macOS, a user is added to the firebird group with the following command:
sudo dseditgroup -o edit -a myusername -t user firebird
If you try to open the sample database employee, shipped with firebird, it's also necessary to grant the group write access to the employee.fdb:
sudo chmod g+w /Library/Frameworks/Firebird.framework/Resources/examples/empbuild/employee.fdb
Now /Library/Frameworks/Firebird.framework/Resources/bin/isql employee -user SYSDBA should work
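To verify that the group membership took effect, dseditgroup can also check membership (a small sanity check, using the same placeholder username as above):
dseditgroup -o checkmember -m myusername firebird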
I only had to add -p and the password, and it worked fine.
Your current command uses the Firebird Embedded database engine to connect to the database. To be able to do that, your current OS user needs sufficient access to the database file. For details on how to fix that, see the answer by jonjonas68.
An alternative solution - if you have the Firebird server running - is to connect through the Firebird server process, for example using isql localhost:employee -user sysdba -password <sysdbapassword>. Then the file permissions of the user running the Firebird server process apply. However, in that situation you will need to specify a password when connecting, as passwordless authentication only applies to Firebird Embedded connections.
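For example, using the installation path from the answer above (the password shown is a placeholder; use your actual SYSDBA password):
/Library/Frameworks/Firebird.framework/Resources/bin/isql localhost:employee -user SYSDBA -password masterkey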

Is it possible to back up a Firebird DB when using SuperServer on Windows Server 2016?

When I execute the Firebird 3.0.x backup command:
c:\Db>"C:\Program Files\Firebird\Firebird_3_0\gbak.exe" -b c:\Db\Db1.fdb c:\Db\Db1_backup.fbk -garbage_collect -transportable -verify -user SYSDBA -pas PASSWORD
Error 1 happened:
gbak: ERROR:I/O error during "CreateFile (open)" operation for file "C:\DB\DB1.FDB"
gbak: ERROR: Error while trying to open file
gbak: ERROR: The process cannot access the file because it is being used by another process.
gbak:Exiting before completion due to errors
Example 2 with TCP/INET/localhost/remote protocols:
c:\Db>"c:\Program Files\Firebird\Firebird_3_0\gbak.exe" -backup inet://c:\Db\Db1.fdb d:\_Backups\Db1_20180702_230546.fbk -garbage_collect -transportable -verify -skip_data SOMETAB_TO_SKIP -user SYSDBA -password PASSWORD123
Error 2 happened:
gbak: ERROR:Your user name and password are not defined. Ask your database administrator to set up a Firebird login.
First of all ... to be honest, I am not sure when this started or why. I had not looked at my server for maybe 3 months, but today my backup disk broke down, so I had to. I saw this error for the first time today, and until now I was convinced that my backup worked. But I had Firebird 2.5 before.
The question is: is this specific to Firebird 3 SuperServer on Windows? And is there really no way to back up a Firebird 3 SuperServer database while it is in use by the FB server?
Tested and failed on Firebird server 3.0.2 and 3.0.3 on Windows Server 2016.
Firebird is running as a service
Nothing is changed in firebird.conf except:
WireCompression = true
RemoteServicePort = 1234
CpuAffinityMask = 8
ServerMode = Super or SuperClassic (depending on what I was testing)
When I execute the first command on SuperClassic, it works.
When I execute the first command on SuperServer 2.5.x, it works.
OK, so I finally figured out what the issue is. Here is the explanation:
My password is wrong!
BUT!
When I use SuperClassic, I can use the WRONG password and Firebird still allows access to the database (as a local user).
When I use SuperServer, I can use the WRONG password and Firebird allows access to the database WHEN I am the FIRST connection (as a local user, with and also without remote protocols).
When I use SuperServer with the WRONG password, Firebird denies access to the database WHEN I am the second (or later) connection (local as well as remote users).
With only remote protocols, you cannot access the database with a wrong password.
(By remote protocols I mean TCP/INET/localhost, as in Example 2 above.)
These are the reasons for the differences in behavior, and why I never noticed I was using the WRONG password. Thanks to everybody who tried to help me.
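For completeness, a backup through the server process with the correct password would look like this (a sketch based on the commands above; note the non-default port 1234 set in firebird.conf):
"C:\Program Files\Firebird\Firebird_3_0\gbak.exe" -b -user SYSDBA -password CORRECT_PASSWORD localhost/1234:C:\Db\Db1.fdb d:\_Backups\Db1_backup.fbk -garbage_collect -transportable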

Why is pg_restore returning successfully but not actually restoring my database? PostgreSQL 9.5

I'm trying to use PostgreSQL 9.5 with pgAdmin 4 (I would prefer to use pgAdmin 3 but I get the following message when trying to connect to my database:
"Warning:
The server you are connecting to is not a version that is supported by this release of pgAdmin III.
pgAdmin III may not function as expected.
Supported server versions are 8.4 to 9.3")
So I'm forced to use pgAdmin 4. However, when I restore (like I usually do on pgAdmin III) I get a 'successful' restore but no data is actually restored into the tables.
If I click on the details of the 'successful' restore I am presented with this:
"1. pg_restore: connecting to database for restore"
How do I fix this issue?
Could you provide the pgAdmin4 logs after running the restore?
pgAdmin4 log locations:
Linux:
~/.pgadmin/pgAdmin4.log
~/.pgadmin/job_logs/
Windows:
%appdata%\pgAdmin\pgAdmin4.log
%appdata%\pgAdmin\job_logs\
Note: Do not close the success dialog in pgAdmin4 after the restore; once you acknowledge it, pgAdmin4 deletes the corresponding job log files from the "job_logs" directory. It also deletes "pgAdmin4.log" before running a restore, to trim unwanted logs.
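For example, on Linux the end of the restore log can be inspected with:
tail -n 100 ~/.pgadmin/pgAdmin4.log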

Remotely trigger a Postgres database backup

I would like to back up my production database before and after running a database migration, from my deploy server (not the database server). I've got a PostgreSQL 8.4 server sitting on a CentOS 5 machine. The website accessing the database is on a Windows 2008 server running an MVC.Net application; it checks out changes in the source code, compiles the project, runs any DB changes, then deploys to IIS.
I have the DB server set up with a crontab job for daily backups, but I also want a way of triggering a backup from the deploy server during the deploy process. From what I can figure out, there isn't a way to tell the database, over a client connection, to back itself up. If I call pg_dump from the web server as part of the deploy script, it will create the backup on the web server (not desirable). I've looked at the COPY command, and it probably won't give me what I want. MS SQL Server lets you call the BACKUP command from within a DB connection, which puts the backups on the database machine.
I found this post about MySQL, which says it's not a supported feature in MySQL either: Remote backup of MySQL database. Is Postgres the same?
What would be the best way to accomplish this? I thought about creating a small application that makes an SSH connection to the DB server and then calls pg_dump. This would mean I'm storing SSH connection information on the server, which I'd really rather not do if possible.
Create a database user pgbackup and assign it read-only privileges on all your database tables.
Set up a new OS user pgbackup on the CentOS server with a /bin/bash shell.
Log in as pgbackup, create a pair of SSH authentication keys without a passphrase, and allow this user to log in using the generated private key:
su - pgbackup
ssh-keygen -q -t rsa -f ~/.ssh/id_rsa -N ""
cp -a ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Create a file ~pgbackup/.bash_profile:
exec pg_dump databasename --file=`date +databasename-%F-%H-%M-%S-%N.sql`
Set up your script on Windows to connect using ssh and authenticate using the private key, as shown below. The pgbackup account will not be able to do anything besides creating a database backup, so it should be reasonably safe.
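On the Windows side, the deploy script then only needs to open an SSH session with that private key; because of the exec in ~pgbackup/.bash_profile, logging in runs the backup on the database server and the session ends (the hostname and key path here are hypothetical):
ssh -i C:\deploy\keys\pgbackup_id_rsa pgbackup@centos-db-server
The dump file is written on the database server itself, which is exactly what the question asks for.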
I think this could be possible if you create a trigger that uses the PostgreSQL module dblink to make a remote database connection from within PL/pgSQL.
I'm not sure what you mean, but I think you can just use pg_dump from your Windows computer:
pg_dump --host=centos-server-name databasename > backup.sql
You'd need to install the Windows version of PostgreSQL there, so that pg_dump.exe is available, but you don't need to start the PostgreSQL service or even create a local database cluster there.
Hi Mike, you are correct: with pg_dump we can save the backup only on the local system. In our case, we created a script on the DB server for taking the base backup, and an expect script on another server which runs that script on the database server. All our servers are Linux servers; we did this using shell scripts.