Preparing a new statement while fetching from a prepared statement - mysqli

I want to make a loop with one INSERT query for each fetched row from a previous query with prepared statements. In a more visual way:
MAKE QUERY
While Fetching
- Make new query
I can't close my statement since I need it to keep working...
The problem is that when we use prepared statements, we have to fetch all the data before doing a new prepare(). So, how could I do this? Maybe I could do it without prepared statements, but that's not a nice solution.

You are going to kill your DB (and if you have a DBA, your DBA is going to kill you) if you try to do this. The problem is that you want to send one INSERT request per row to the database, so you have to create and dispose of all these commands over and over again for each row of data. That is expensive.
If you are dead set on doing it, nothing prevents you from creating a second prepared statement (with a separate statement handle, of course) inside the loop that reads from the first, but I highly advise against it. At the very least, buffer your incoming data and insert a few hundred rows at a time, as in the sketch below.
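For instance, a buffered batch can be flushed as a single multi-row INSERT. A minimal sketch in SQL, assuming a hypothetical target_table with columns col_a and col_b:

INSERT INTO target_table (col_a, col_b)
VALUES
    ('row1_a', 'row1_b'),
    ('row2_a', 'row2_b'),
    ('row3_a', 'row3_b');
-- Build one such statement per few hundred buffered rows
-- instead of preparing and executing one INSERT per row.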

Update and retrieve records from Postgres

I am new to Postgres. I have hit a situation where I need to retrieve a record from a DB table, update this record with some changes, then retrieve the updated record again to return to the customer; these 3 operations are executed sequentially.
After I run the above 3 steps, sometimes it seems the record has not been updated. But in fact the record has been updated; all the DB operations are correct. I suspect that when I retrieve the record, Postgres returns cached data rather than fresh data.
I am wondering if there is any mechanism to flush my updated data immediately?
Another question: which of these is good practice?
Update a record (actually write to the DB), then immediately retrieve the record from the DB; both operations use a statement.
Update a record (actually write to the DB), then don't retrieve the record from the DB; since we already know the updated data, we just use it to build the response to the customer. However, the write to the DB might fail.
Any ideas for the above?
Some programming languages make db calls asynchronously, meaning that your code moves on to the next db operation without waiting for the first to finish. So it could be as simple as using your language's "await" keyword to make sure the "update record" call has finished before trying to read the record again.
If you are writing raw SQL, or you know it is not an issue with making several db calls asynchronously, you could try writing your update and read calls as a single transaction.
See https://www.postgresql.org/docs/14/tutorial-transactions.html if you're unfamiliar with writing transactions.
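For example, in raw SQL the update and the read can be grouped, and Postgres's RETURNING clause can even collapse them into one statement. A minimal sketch, assuming a hypothetical customers table:

BEGIN;

-- Update the row and read back the fresh values in one statement.
UPDATE customers
SET    status = 'active'
WHERE  id = 42
RETURNING id, status, updated_at;

COMMIT;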

What is the safe limit for the number of rows in postgresql?

I have a cron job that fires a query every three minutes. With each firing, data is inserted into my db, so my database keeps growing after each interval. However, I am only interested in the latest row. Is this safe? Will PostgreSQL truncate the oldest entries automatically?
If you post the gist of your cron job you could get a better answer.
If it's a direct insert, execute a TRUNCATE beforehand to delete the old, unwanted data. DELETE is also possible, but you will end up with a lot of dead tuples and will need to VACUUM the table on a regular basis.
UPDATE is a good option, but it depends on how much of the data is static and how much is not; e.g., if values repeat in any of the columns, go for UPDATE. This will also be subject to dead tuples and vacuuming.
If you are loading from an external source (e.g. CSV, JSON, or XML), there are methods to overwrite existing data automatically; pgloader may be an option here.
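For the direct-insert case, a minimal sketch of the truncate-then-insert pattern, assuming a hypothetical metrics table:

BEGIN;

-- Discard the old rows; unlike DELETE, this leaves no dead tuples.
TRUNCATE metrics;

-- Insert the one row we actually care about.
INSERT INTO metrics (captured_at, value)
VALUES (now(), 123);

COMMIT;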

What's the difference between issuing a query with or without a "begin" and "commit" command in PostgreSQL?

As the title says, it is possible to issue a query in psql with a "begin", the query, and a "commit".
What I want to know is what happens if I don't use a "begin" command?
Some database engines will allow you to execute modifications (INSERT, UPDATE, DELETE) without an open transaction. It's basically assumed that you have an implicit BEGIN / COMMIT around each of your statements, which is bad practice in case something goes wrong in a batch of many statements.
Others will still let you run a SELECT, but no INSERT, UPDATE, or DELETE without a BEGIN, which enforces the good practice: that way, if something goes wrong, a ROLLBACK is executed instantly, cancelling all your modifications as if they never existed.
Using a transaction around a batch of SELECTs guarantees that the data you get from each SELECT reflects the same version of the database as of the instant you opened the transaction, depending on your ISOLATION level.
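A minimal sketch of such a consistent read batch in PostgreSQL, assuming a hypothetical orders table:

BEGIN ISOLATION LEVEL REPEATABLE READ;

-- Both SELECTs see the same snapshot of the database,
-- even if other sessions commit changes in between.
SELECT count(*) FROM orders;
SELECT sum(total) FROM orders;

COMMIT;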
Please read this for more information:
http://www.postgresql.org/docs/9.5/static/sql-start-transaction.html
and
http://www.postgresql.org/docs/9.5/static/tutorial-transactions.html
If you don't use BEGIN/COMMIT, it's the same as wrapping each individual query in a BEGIN/COMMIT block. You can use BEGIN/COMMIT to group multiple queries into a single transaction. A few reasons you might want to do so include:
Updating multiple tables at the same time. For instance, usually when you delete a record you also want to delete other rows that reference it. If you do this in the same transaction, nothing will ever be able to reference a row that's already been deleted.
You want to be able to revert some changes if something goes wrong later. Suppose you're writing user-inputted data to multiple tables. At some point you realize that some of it isn't formatted properly. You probably wouldn't want to insert any of it, so you should wrap the entire operation in a transaction.
If you want to ensure the data you're updating hasn't been updated while you're writing to it. Suppose I'm adding $10 to a bank account from two separate connections. I want to add $20 in total - I don't want one of the UPDATEs to clobber the other.
Postgres gives you the first two of these by default. The last one would require a higher transaction isolation level, and makes your query run the risk of raising a serialization error. Transaction isolation levels are a fairly complicated topic, so if you want more info on them the best place to go is the documentation.
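To make the last point concrete, here is a minimal sketch, assuming a hypothetical accounts table:

-- Safe even from two connections at the default isolation level:
-- each UPDATE reads and writes the balance atomically.
UPDATE accounts SET balance = balance + 10 WHERE id = 1;

-- Read-then-write as two steps needs a higher isolation level;
-- otherwise both connections may read the same old balance and
-- one UPDATE will clobber the other.
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;
-- ...compute the new balance in application code...
UPDATE accounts SET balance = 110 WHERE id = 1;
COMMIT; -- one of the two transactions may hit a serialization error and must retry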

PostgreSQL: OK to allow errors?

Before I try to insert a row into a PostgreSQL table, should I query whether the insert would violate a constraint?
I do check first when a failed insert would cause unwanted side effects (e.g., bumping an auto-increment sequence) on error.
But, if there are no possible side effects, is it OK to just blindly try to insert into a table? Or, is it better practice to prevent errors by anticipating them when possible (as advised in Objective-C)?
Also, when performing the insert inside an SQL function, will other queries (e.g., CTEs) inside the function get rolled back if the insert fails?
In general, testing beforehand is not a good idea because it requires you to explicitly lock tables to prevent other clients from changing or inserting data between your test and your insert. Explicit locking is bad for concurrency.
Serials getting auto-incremented by failed inserts is in general not a problem. Just don't assume the values inserted into the database are consecutive.
A database and Obj-C are two completely different things. Let the database check for problems; it is much easier to add the appropriate constraints to your schema than it is to check everything in your client program.
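For example, rather than checking first, the constraint itself can be the guard, and in PostgreSQL 9.5+ the expected conflict can even be absorbed without an error. A sketch, assuming a hypothetical users table with a unique email column:

-- The UNIQUE constraint is the single source of truth;
-- a duplicate is simply skipped instead of raising an error.
INSERT INTO users (email, name)
VALUES ('a@example.com', 'Alice')
ON CONFLICT (email) DO NOTHING;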
The default is to roll back to the start of the transaction, but you can control that with savepoints and ROLLBACK TO SAVEPOINT. A CTE, however, is part of the query, and queries are always rolled back completely when part of them fails. You might be able to work around that by splitting the CTE off into a separate query that creates a temp table; then you can use the temp table instead of the CTE.
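A minimal sketch of the savepoint approach, with hypothetical tables:

BEGIN;

INSERT INTO audit_log (message) VALUES ('attempting import');

SAVEPOINT before_insert;

-- If this insert violates a constraint, issue
--   ROLLBACK TO SAVEPOINT before_insert;
-- and only the work since the savepoint is undone;
-- the audit_log row above survives to the COMMIT.
INSERT INTO users (email) VALUES ('a@example.com');

COMMIT;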

Why does SQLite give a "database is locked" for a second query in a transaction when using Perl's DBD::SQLite?

Is there a known problem with SQLite giving a "database is locked" error for a second query in a single transaction when using Perl's DBD::SQLite? Scenario: Linux, Perl DBI, AutoCommit => 0, a subroutine with two code blocks (using the blocks to localize variable names). In the first code block, a query handle is created by prepare() on a select statement, it is execute()d, and the block is closed. In the second code block, another query handle is created by prepare() for an update statement, and frequently (30% of the time) SQLite/DBI gives a database locked error at this stage. I think the error happens during the prepare() and not during the execute().
My workaround is to commit after the first query. (Calling finish() on the first query did not help.) I prefer not to commit, for several reasons relating to elegance and performance. The original code has worked fine for many years with Postgres as the database. I tried sqlite_use_immediate_transaction with no effect.
In all other situations, I've found SQLite to perform very well, so I suspect this is an oversight in the DBD driver, rather than an issue with SQLite. Sadly, my current code is a big pile of scripts and modules, so I don't have a short, single file test case.
This wouldn't be related to it in any way, would it: Transaction and Database Locking, from the DBD::SQLite perldoc?
Transactions via AutoCommit or begin_work are nice and handy, but sometimes you may get an annoying "database is locked" error. This typically happens when someone begins a transaction and tries to write to a database while another person is reading from the database (in another transaction). You might be surprised, but SQLite doesn't lock a database when you just begin a normal (deferred) transaction, in order to maximize concurrency. It reserves a lock when you issue a statement to write, but until you actually try to write with a commit statement, it allows other people to read from the database. However, reading from the database also requires a shared lock, and that prevents you from getting the exclusive lock you reserved; thus you get the "database is locked" error, and other people will get the same error if they try to write afterwards, as you still have a pending lock. busy_timeout doesn't help in this case.
To avoid this, set a transaction type explicitly. You can issue a begin immediate transaction (or begin exclusive transaction) for each transaction, or set the sqlite_use_immediate_transaction database handle attribute to true (since 1.30_02) to always use an immediate transaction (even when you simply use begin_work or turn off AutoCommit).
my $dbh = DBI->connect("dbi:SQLite::memory:", "", "", {
    sqlite_use_immediate_transaction => 1,
});
Note that this works only when all of the connections use the same (non-deferred) transaction. See http://sqlite.org/lockingv3.html for locking details.
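For reference, the same fix expressed at the SQL level is to open the transaction with the immediate keyword yourself. A minimal sketch, with a hypothetical items table:

-- The write lock is reserved at BEGIN time, so the transaction
-- fails early (where busy_timeout can help) instead of hitting
-- "database is locked" at the UPDATE in the middle.
BEGIN IMMEDIATE;
UPDATE items SET processed = 1 WHERE id = 5;
COMMIT;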