Getting number of rows affected by an UPDATE in PostgreSQL - postgresql

Variations of this question have been asked on SO and on many blogs, but none offers a straightforward answer. I hope there is one.
I am running an UPDATE against PostgreSQL 9.0 (from CodeIgniter, the PHP framework):
$sql_order = "UPDATE meters SET billed=true";
$query = $this->db->query($sql_order);
I simply need a count of the rows that were affected by the update, but there seems to be no way to do this with PostgreSQL. The query currently returns a boolean - true.
The manual and the web refer to the RETURNING syntax, to GET DIAGNOSTICS, and to a default return type from UPDATE. I haven't been able to get any of these to work.
Is there a straightforward way of getting the affected-row count without having to embed this simple operation in a procedure or transaction?

In Java I would have used the following:
Statement stmt = connection.createStatement();
int rowsAffected = stmt.executeUpdate("UPDATE ...");
In PHP I believe pg_affected_rows() is the way, and in your particular case $this->db->affected_rows().
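A minimal sketch of both approaches (assuming an open pgsql connection handle for the first, and CodeIgniter's loaded database object for the second):

```php
// Plain PHP (pgsql extension): pg_affected_rows() reads the count
// from the result resource of the executed statement.
$result = pg_query($conn, "UPDATE meters SET billed = true");
$rowsAffected = pg_affected_rows($result);

// CodeIgniter: the DB driver exposes the same count after query().
$this->db->query("UPDATE meters SET billed = true");
$rowsAffected = $this->db->affected_rows();
```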

Related

Get transaction ID from doctrine

I want to get the transaction ID of the current running transaction.
Here is my code:
$con = $entityManager->getConnection();
$con->beginTransaction();
$entity = new Entity(); .....
$entityManager->persist($entity);
$entityManager->flush();
$con->commit();
I can't find any method to get the ID... Only running native SQL can solve this, but I don't think that is proper.
I'm assuming you're using the default settings of Doctrine, so it will use PHP PDO underneath. It looks like PDO does not have the ability to resolve the transaction ID - maybe because it's different for each DBMS, so it's not ANSI SQL.
Take a look at the PDO::beginTransaction() documentation; it returns just a boolean. There is also no other function to retrieve the ID.
You have to execute raw SQL, which may not be that bad. I know that many people think that an ORM/DBAL will let you change the DB engine in the future, but - from my experience, YMMV - I have always ended up using some engine-specific behaviours. Even running SQLite for testing instead of MySQL failed at some point because of small differences in the handling of nulls and default values.
To fetch the transaction ID in PostgreSQL:
$con = $entityManager->getConnection();
$query = $con->executeQuery('SELECT txid_current()');
$transactionId = $query->fetchOne();

T-SQL - Trying to query something across all databases on my server

I've got an environment where my server hosts a variable number of databases, all of which use the same table structures/schemas. I need to pull a sum of customers that meet a certain series of constraints involving, say, the user table. I also need to show which database I am showing the sum for.
I already know all I need to get the sum in a db-by-db query, but what I'm really looking for is one script that hits all of the non-system DBs currently on my server to grab this info.
Please forgive my ignorance in this; I'm just starting out.
Update-
So, to clarify things somewhat; I'm using MS SQL 2014. I know how to pull a listing of the dbs I want to hit by using:
SELECT name
FROM sys.databases
WHERE name not in ('master', 'model', 'msdb', 'tempdb')
AND state = 0
And for the purposes of gathering the data I need from each, let's just say I've got something like:
select count(u.userid)
from users u
join UserAttributes ua on u.userid = ua.userid
where ua.status = 2
New Update:
So, I went ahead and added the sp_foreachdb stored procedure as suggested by @Philip Kelley, and I'm now running into a problem when trying to run it (admittedly, I can tell I'm closer to a solution). This is what I'm using to call the sp:
USE [master]
GO
DECLARE @return_value int
EXEC @return_value = [dbo].[sp_foreachdb]
    @command = N'select count(userid) as number from ?..users',
    @print_dbname = 1,
    @user_only = 1
SELECT 'Return Value' = @return_value
GO
This provides a nice and clean output showing a count, but what I'd like to see is the db name in addition to the count, something like this:
|[DB_NAME]|[COUNT]|
But for each DB
Is this even possible?
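One way to get the database name next to the count is to put both into the command itself, so each per-database query returns its own name column. A sketch using the undocumented sp_MSforeachdb (the users table is assumed to exist in every database, hence the OBJECT_ID guard):

```sql
EXEC sp_MSforeachdb
    @command1 = N'USE [?];
        IF OBJECT_ID(''dbo.users'') IS NOT NULL
            SELECT DB_NAME() AS db_name, COUNT(userid) AS user_count
            FROM dbo.users;';
```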
Source Code: https://codereview.stackexchange.com/questions/113063/executing-dynamic-sql-programmatically
Example Usage:
declare @options int = (
    select a.ExcludeSystemDatabases
    from dbo.ForEachDatabaseOptions() as a
);
execute dbo.usp_ForEachDatabase
    @Command = N'print Db_Name();'
  , @Options = @options;
@Command can be anything you want, but obviously it needs to be a query that every single database can understand. @Options currently has 3 built-in settings but can be expanded however you see fit.
I wrote this to mimic/expand upon the master.sys.sp_MSforeachdb procedure but it could still use a little bit of polish (especially around the "logic" that replaces ? with the current database name).
Enumerate the databases from schema / sysdatabases. At least in situations without replication, excluding db_ids 1 to 4 as system databases should be reasonably robust:
SELECT [name] FROM master.dbo.sysdatabases WHERE dbid NOT IN (1,2,3,4)
Other methods exist, see here: Get list of databases from SQL Server and here: SQL Server: How to tell if a database is a system database?
Then prefix the query or stored procedure call with the database name: in a cursor, loop over the resultset of the first query, store each name in a sysname variable, and construct a series of statements like this:
SELECT column FROM databasename.schema.Viewname WHERE ...
and call each one using the string EXECUTE function:
EXECUTE('SELECT ... FROM ' + @fully_qualified_table_name + ' WHERE ...')
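Putting the two steps together, a cursor-plus-dynamic-SQL sketch might look like this (table and column names are taken from the question and are placeholders for your own schema):

```sql
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cursor CURSOR FOR
    SELECT [name] FROM master.dbo.sysdatabases WHERE dbid NOT IN (1,2,3,4);

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Fully qualify the tables with the current database name.
    SET @sql = N'SELECT ''' + @db + N''' AS db_name, COUNT(u.userid) AS cnt
        FROM ' + QUOTENAME(@db) + N'.dbo.users u
        JOIN ' + QUOTENAME(@db) + N'.dbo.UserAttributes ua ON u.userid = ua.userid
        WHERE ua.status = 2;';
    EXECUTE (@sql);
    FETCH NEXT FROM db_cursor INTO @db;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;
```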
There’s the undocumented system procedure, sp_MSforeachdb, found in the master database. Many pundits on the internet recommend not using it, as under obscure fringe cases it can be unreliable and somehow skip random databases. Count me as one of them; this caused me serious grief a few months back.
You can write your own routine to provide this kind of functionality. This is a common task, however, and many people have already done it and posted their code online… so why re-invent the wheel?
@kittoes0124 posted a link to “usp_ForEachDatabase”. This probably works, though pro forma I hate any stored procedure that begins with usp_. I ended up with Aaron Bertrand’s utility, which can be found at http://www.mssqltips.com/sqlservertip/2201/making-a-more-reliable-and-flexible-spmsforeachdb/.
Install a version of this routine, figure out how it works, plug in your script, and go!

Can't update specific columns? Only large string columns?

I'm trying to run some simple update statements on Caché 2008, logging into the web portal. I'm able to run queries like:
update testspace.clients
set requires_attention = 'Yes'
, notes = 'testsdfsd'
where id = '1||CPL62549.001'
The web portal runs it and looks like it updated things, but when I do a select statement, requires_attention is updated but notes isn't.
Both fields are of type string. The only difference is notes is MAXLEN = 32700.
I've tested this on other columns in other tables. Any string column with MAXLEN = 32700 won't let me update it. Seems odd. Perhaps this is a coincidence and something else is going on. It seems strange that I can update some fields of a record but not others.
Any ideas?
I'm new to Caché but have experience with SQL Server, Oracle, MySQL, etc.
Strings in Caché are limited to 32000 characters. Setting MAXLEN to a number greater than that is going to cause problems.
Set MAXLEN to 32000 and it should be fine.
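In the class definition that would look something like this (a sketch; the class context and property name are illustrative, taken from the question):

```
Property notes As %String(MAXLEN = 32000);
```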

How to optimize generic SQL to retrieve DDL information

I have generic code that is used to retrieve DDL information from a Firebird database (FB 2.1). It generates SQL code like
SELECT * FROM MyTable where 'c' <> 'c'
I cannot change this code. Actually, if that matters, it is inside Report Builder 10.
The fact is that some tables in my database are becoming a little too populated (>1M records) and that query is starting to take too long to execute.
If I try to execute
SELECT * FROM MyTable where SomeIndexedField = SomeImpossibleValue
it will obviously use that index and run very quickly.
Well, it wouldn't be that hard for the database to figure out that this is an impossible match and do some sort of optimization to avoid testing it against each row.
Is there any way to make my Firebird database optimize that search?
As the filter condition is a negative proposition (and also doesn't reference a column to search, only a value to compare to another value), Firebird needs to do a full table scan (without using any index) to confirm that there isn't any record that meets your criteria.
If you can't change the query, you need to wait for the upcoming 3.0 version, which will implement the Boolean data type and therefore should start to evaluate "constant" fake comparisons in advance (maybe the client library will do this evaluation before sending the statement to the server?).

Bulk insert in SYBASE using .NET

How can I do a bulk insert of an array into a SYBASE table from .NET? I don't want to use the BCP utilities.
It's a bit untidy:
You have to use sp_dboption to turn it on,
then you can use SELECT INTO to get the data in,
then you turn the option back off again.
It's also recommended that you drop all triggers, indexes, etc. before and put them back afterwards for any, erm, lengthy operation...
How are you connected? You might have a bit of fun if you are on ODBC, as it tends to blow up on proprietary stuff unless you turn pass-through on.
Found this after remembering similar troubles way back when with Delphi and Sybase:
Sybase Manual
You can see this example of how to execute the insert statement.
Then, you simply need to:
select each row of the Excel sheet, one at a time
build the INSERT command
execute it
or (the better way)
build an INSERT INTO command with several rows (not all! maybe 50 at a time)
execute the command
One side note: this will take a lot more time than a simple
bulk copy!
After much investigation, I found that DataAdapter is able to bulk insert. It has a batch-size property (UpdateBatchSize, I believe); we can specify the number of rows we want to insert in one round trip. The DataAdapter's insert command must be specified.
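A sketch of that DataAdapter approach using the same Sybase ADO.NET provider (whether the ASE provider honours UpdateBatchSize is an assumption here; the connection string, table "users", and its columns are illustrative):

```csharp
using Sybase.Data.AseClient;

using (var conn = new AseConnection(connectionString))
{
    conn.Open();
    var adapter = new AseDataAdapter();
    adapter.InsertCommand = new AseCommand(
        "INSERT INTO users (id, name) VALUES (@id, @name)", conn);
    adapter.InsertCommand.Parameters.Add("@id", AseDbType.Integer, 0, "id");
    adapter.InsertCommand.Parameters.Add("@name", AseDbType.VarChar, 100, "name");
    // Suppress per-row refresh so rows can be grouped into batches.
    adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.None;

    adapter.UpdateBatchSize = 50;       // rows sent per round trip
    adapter.Update(sourceDataTable);    // pushes all Added rows in batches
}
```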
There is an AseBulkCopy class in the namespace Sybase.Data.AseClient in Sybase.AdoNet2.AseClient.dll:
DataTable dt = SourceDataSet.Tables[0];
using (AseBulkCopy bulkCopy = new AseBulkCopy((AseConnection)conn))
{
bulkCopy.BatchSize = 10000;
bulkCopy.NotifyAfter = 5000;
bulkCopy.AseRowsCopied += new AseRowsCopiedEventHandler(bc_AseRowsCopied);
bulkCopy.DestinationTableName = DestTableName;
bulkCopy.ColumnMappings.Add(new AseBulkCopyColumnMapping("id", "id"));
bulkCopy.WriteToServer(dt);
}
static void bc_AseRowsCopied(object sender, AseRowsCopiedEventArgs e)
{
Console.WriteLine(e.RowsCopied + " Copied ....");
}