FireDAC: Shared Lock on Table with Firebird

I'm using Delphi 10.1 with FireDAC to connect to Firebird.
I would like to open a table in exclusive mode in Firebird with FireDAC.
How can this be done?

Firebird does not support explicit table or row locks, so there's no way you're going to get this to work with FireDAC: no connection parameter can do this magic.
What you can do with Firebird is run the entire database in single-user mode. To do this, you must shut it down, run GFIX to flag it as a single-user database, and then reconnect to the database. You can find more details in the Firebird How-To FAQ. But I doubt this is what you are looking for.
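If you do go that route, a rough command-line sketch would look like the following; the database path and SYSDBA credentials are placeholders, and the exact gfix switches vary between Firebird versions (this assumes Firebird 2.x), so check the FAQ for your release:

    # shut the database down into single-user mode, disconnecting other attachments immediately
    gfix -user SYSDBA -password masterkey -shut single -force 0 /data/mydb.fdb

    # ... reconnect with the one allowed attachment and do the exclusive work ...

    # bring the database back to normal multi-user operation
    gfix -user SYSDBA -password masterkey -online normal /data/mydb.fdb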
You should explain better what you are trying to do. With real SQL servers you should not feel the need to lock tables or rows; transactions and transaction isolation should be enough to handle most situations. If not, then you should probably start thinking about application-level locks, that is, if you have just one application that uses the database.

Related

Schema pg_dump failed due to a Lock on a table

I'm running a backup/restore on a schema every day and get this every now and then:
    pg_dump: Error message from server: ERROR: relation not found (OID 86157003)
    DETAIL: This can be validly caused by a concurrent delete operation on this object.
    pg_dump: The command was: LOCK TABLE myschema.products IN ACCESS SHARE MODE
How can this be avoided? It seems the table was being used at the time, or someone was running something against it. Can I just kill all connections to the DB before restoring, or is there another alternative?
As far as I understand, pg_dump should be able to run even while users are doing something with the table, but that doesn't seem to be the case.
Thanks,
It is somewhat buried but the answer lies here:
https://www.postgresql.org/docs/current/app-pgdump.html
"
-j njobs
...
To detect this conflict, the pg_dump worker process requests another shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime and there is no way to continue with the dump, so pg_dump has no choice but to abort the dump.
"
Which is borne out by this line in the error message:
"LOCK TABLE myschema.products IN ACCESS SHARE MODE"
ACCESS SHARE will cooperate with all other lock modes except ACCESS EXCLUSIVE. ACCESS EXCLUSIVE is used by DROP TABLE, TRUNCATE, REINDEX, etc. See the documentation on explicit locking for more information. So you need to run the dump at a time when the operations that take ACCESS EXCLUSIVE are known not to happen, or block/drop the other connections first.
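If you go the blocking/dropping route, something along these lines works on PostgreSQL 9.2 and later (older releases call the pid column procpid); the database name mydb is a placeholder:

    -- terminate every other session connected to the database about to be dumped
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = 'mydb'
      AND pid <> pg_backend_pid();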
Somebody dropped a table between the time pg_dump took an inventory of the tables and the time it tried to dump it.
This can happen if your application is in the habit of dropping tables all the time.
This is not an answer to your main question, but a caution regarding:
As far as I understand, pg_dump could run even if users are doing something with the table but it doesn't seem to be the case.
That statement assumes the application performs every action in a single transaction. I have known applications that accomplish some tasks using more than one.
I don't know exactly what the tasks were or if it was unavoidable that they use multiple transactions, but dumps could only be trusted when the application was idle or, better yet, when the service was stopped.
For the function that those applications performed, it wasn't a big deal to work around down times or stop services.
I don't know how you'd determine this behaviour without being told by the developers. Just something to consider.

Is there a way to show everything that was changed in a PostgreSQL database during a transaction?

I often have to execute complex SQL scripts in a single transaction on a large PostgreSQL database, and I would like to verify everything that was changed during the transaction.
Verifying each single entry on each table "by hand" would take ages.
Dumping the database to plain SQL before and after the script and using diff on the dumps isn't really an option, since each dump would be about 50 GB of data.
Is there a way to show all the data that was added, deleted or modified during a single transaction?
What you are looking for is one of the most searched-for things on the internet when it comes to capturing database changes; it is a kind of version control, we could say.
But as far as I know, there is sadly no built-in approach for this in PostgreSQL or MySQL. You can work around it by adding triggers for the operations you use most.
You can create some backup schemas and tables to capture the rows that are updated, created, or deleted.
In this way you can achieve what you want. I know this process is fully manual, but it is really effective.
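As an illustration of the trigger idea, here is a minimal sketch for one table; the audit schema, log table and function names are invented for the example, the watched table is assumed to be called products, and the EXECUTE FUNCTION syntax assumes PostgreSQL 11+ (older versions use EXECUTE PROCEDURE):

    -- an audit table that stores the old and new row images as JSON
    CREATE SCHEMA IF NOT EXISTS audit;
    CREATE TABLE audit.products_log (
        changed_at timestamptz NOT NULL DEFAULT now(),
        operation  text        NOT NULL,
        old_row    jsonb,
        new_row    jsonb
    );

    -- trigger function that records every row changed in the watched table
    CREATE OR REPLACE FUNCTION audit.log_products_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO audit.products_log (operation, old_row, new_row)
        VALUES (TG_OP,
                CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
                CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END);
        RETURN NULL;  -- the return value of an AFTER trigger is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER products_audit
    AFTER INSERT OR UPDATE OR DELETE ON products
    FOR EACH ROW EXECUTE FUNCTION audit.log_products_change();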
If you need to analyze the script's behaviour only sporadically, then the easiest approach would be to change the server configuration parameter log_min_duration_statement to 0, and set it back to its previous value after the analysis. All of the script's activity will then be written to the instance log.
This approach is not suitable if your storage is not prepared to accommodate this amount of data, or for systems in which you don't want sensitive client data to be written to a plain-text log file.
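For example, assuming a superuser session and PostgreSQL 9.4 or later (for ALTER SYSTEM), toggling the parameter could look like this:

    -- log every statement, regardless of how long it takes
    ALTER SYSTEM SET log_min_duration_statement = 0;
    SELECT pg_reload_conf();

    -- ... run the script, then inspect the server log ...

    -- drop the override again, falling back to whatever postgresql.conf specifies
    ALTER SYSTEM RESET log_min_duration_statement;
    SELECT pg_reload_conf();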

How to get a connection and hold it using DAAB?

I have a task ahead of me that requires the use of local temporary tables. For performance reasons I can't use transactions.
Temporary tables, much like transactions, require that all queries come from one connection, which must not be closed or reset. How can I accomplish this using the Enterprise Library Data Access Application Block?
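To illustrate the restriction, assuming a SQL Server backend: a local temporary table is only visible to the connection that created it.

    -- on connection A
    CREATE TABLE #work (id int PRIMARY KEY, total money);
    INSERT INTO #work VALUES (1, 10.00);
    SELECT * FROM #work;   -- returns the row

    -- on connection B
    SELECT * FROM #work;   -- fails: Invalid object name '#work'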
Enterprise Library will use a single database connection if a transaction is active. However, there is no way to force a single connection for all Database methods in the absence of a transaction.
You can definitely use the Database.CreateConnection method to get a database connection. You could then use that connection along with the DbCommand objects to perform the appropriate logic.
Other approaches would be to modify Enterprise Library source code to do exactly what you want or create a new Database implementation that does not perform connection management.
I can't see a way of doing that with DAAB. I think you are going to have to drop back to ADO.NET connections and manage them yourself, but even then, playing with temporary tables on the server from a client-side app doesn't strike me as an optimal solution to the problem.

libpq code to create, list and delete databases (C++/VC++, PostgreSQL)

I am new to the PostgreSQL database. What my visual c++ application needs to do is to create multiple tables and add/retrieve data from them.
Each session of my application should create a new and distinct database. I can use the current date and time for a unique database name.
There should also be an option to delete all the databases.
I have worked out how to connect to a database, create tables, and add data to tables. I am not sure how to create a new database for each run, or how to retrieve the number and names of the existing databases if the user wants to clear them all.
Please help.
See the libpq examples in the documentation. The example program shows you how to list databases, and in general how to execute commands against the database. The example code there is trivial to adapt to creating and dropping databases.
Creating a database is a simple CREATE DATABASE SQL statement, issued like any other libpq operation. You must be connected to some other database (usually template1) to issue the CREATE DATABASE, then disconnect and make a new connection to the database you just created.
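As a sketch, the SQL the application would send (for example through PQexec()) could look like this; the timestamp-based names are just placeholders following the naming scheme from the question:

    -- issued while connected to template1
    CREATE DATABASE "session_20240101_120000";

    -- list the databases created so far, assuming they share a common name prefix
    SELECT datname FROM pg_database WHERE datname LIKE 'session_%';

    -- drop one of them again (no one may be connected to it at that moment)
    DROP DATABASE "session_20240101_120000";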
Rather than creating new databases, consider creating new schemas instead. That is much less hassle: all you need to do is change the search_path or prefix your table references, and you don't have to disconnect and reconnect to change schemas. See the documentation on schemas.
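A sketch of the schema-per-session variant, with the same placeholder naming:

    -- everything stays inside one database; a "session" is just a schema
    CREATE SCHEMA "session_20240101_120000";
    SET search_path TO "session_20240101_120000";
    CREATE TABLE measurements (id serial PRIMARY KEY, value double precision);

    -- clearing one session later is a single statement
    DROP SCHEMA "session_20240101_120000" CASCADE;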
I question the wisdom of your design, though. It is rarely a good idea for applications to be creating and dropping databases (or tables, except temporary tables) as a normal part of their operation. Maybe if you elaborated on why you want to do this, we can come up with solutions that may be easier and/or perform better than your current approach.

Sybase SQLAnywhere jConnect routines?

I have a database which is part of a closed system, and the end-user of the system would like me to write some reports using the data contained in a Sybase SQL Anywhere database. The system doesn't provide the reports they are looking for, but access to the data is available by connecting to this ASA database.
The vendor of the software would likely prefer that I not update the database, and I am basically read-only as I am just doing some reporting. All is good, the seal is not broken, the warranty is still intact, etc.
My main problem is that I am using jConnect in order to read from the database, and jConnect requires some "jConnect routines" to be installed into the database. I've found that I can make this happen by just doing an "Alter Database Upgrade JConnect On", but I don't fully understand what this does and whether there are any risks associated with it.
So, my question is: does anyone know exactly what the jConnect routines are and how they are used? Is there any risk in adding them to a database? Should I be worried about this?
If the vendor wants you to write reports using jConnect, they will have to allow the installation of the jConnect tables.
These are quite safe; where I work, the DBA team installs them as a matter of course, and we run huge databases in production with no impact.
There is an alternative driver that you could use called jTDS. It's open source and supports MS SQL Server and Sybase. I'm not sure whether it requires the jConnect tables or not.
I think that the additional tables are a bit of an anachronism in this day and age.
Looking at the ASA 10 docs, there is another driver, the iAnywhere JDBC driver, which seems to go through the ODBC driver and, as such, probably will not require any alteration of the database.
On the other hand, installing the "jConnect system objects" is done by running the script scripts/jcatalog.sql... You can show it to the DBAs if you want to reassure them. It creates some procedures, tables, and variables.
The need for this script probably comes from the fact that jConnect talks to both ASE (Sybase) and iAnywhere databases, so it needs a compatibility layer installed in the database...
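In practice the whole installation boils down to the single statement mentioned in the question, run once against the SQL Anywhere database by a user with DBA authority:

    -- installs the jConnect system objects (the scripts/jcatalog.sql content)
    ALTER DATABASE UPGRADE JCONNECT ON;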