What is the limit of the returned rows from a select statement for the SOCI Firebird backend?

I am using SOCI Firebird v4.0 trunk to read/write data from/to a Firebird database.
The problem is that when I select all the rows from a table with over 2 million rows, SOCI throws a std::bad_alloc exception and I get back exactly 1024 * 1024 rows, that is, 1,048,576.
I don't know if SOCI has a limit or if there is something else I am missing here!
BTW I am storing the rows in a std::vector.

Neither SOCI nor Firebird was responsible for this.
The cause was simply that the system itself ran out of memory: a 32-bit build cannot address more than 2^32 bytes.
I had to recompile the code as 64-bit and test it on a 64-bit system.

Related

How to fix "invalid memory alloc request size" (PostgreSQL)

I can insert data into a bytea type column in a table, but I can't select it.
It seems like a memory issue, how do I fix it?
code sample
INSERT INTO bytea_test(data) values (repeat('x', 1024 * 1024 * 1023)::bytea);
-- OK
select repeat('x', 1024 * 1024 * 1023)::bytea;
-- invalid memory alloc request size 2145386499
container on k8s
PostgreSQL 12.5 (Debian 12.5-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
I tried adjusting various server settings, but it didn't improve.
My current settings are:
max_connections=100
shared_buffers=6GB
effective_cache_size=4GB
maintenance_work_mem=64MB
checkpoint_completion_target=0.5
wal_buffers=16MB
default_statistics_target=100
random_page_cost=4
effective_io_concurrency=1
work_mem=3GB
min_wal_size=80MB
max_wal_size=6GB
max_worker_processes=8
max_parallel_workers_per_gather=2
You are running out of memory. Changing PostgreSQL parameters won't help with that. You'll have to raise the memory limits of the container if you want to run a statement like that.
When formatted for return from a select, a bytea is converted to a format which takes up twice as much space as it does in storage. PostgreSQL can't deal with a single value that large.
In this case, you could work around it with set bytea_output=escape;, as that variable-width way of representing bytea is more compact for this particular data.
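As a sketch of that workaround for the example above (the repeat('x', ...) value is all 'x' bytes, which need no escaping):
set bytea_output = 'escape';
select repeat('x', 1024 * 1024 * 1023)::bytea;
-- escape format keeps one output character per 'x' byte, so the result stays under PostgreSQL's ~1 GB per-value limit;
-- the default hex format needs two characters per byte, which is roughly where the 2145386499-byte request in the error comes from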

SQL Server 2008 R2, "string or binary data would be truncated" error

In SQL Server 2008 R2, I am trying to insert 30 million records from a source table into a target table. Out of these 30 million records, a few have bad data that exceeds the length of the target field. Because of these bad rows, the whole insert gets aborted with a "string or binary data would be truncated" error without loading any rows into the target table, and SQL Server does not specify which row had the problem. Is there a way to insert the rest of the rows and catch the bad rows, without a big impact on performance (performance is the main concern in this case)?
You can use the len function in your where condition to filter out long values:
select ...
from ...
where len(yourcolumn) <= 42
gives you the "good" records
select ...
from ...
where len(yourcolumn) > 42
gives you the "bad" records. You can use such where conditions in an insert select syntax as well.
You can also truncate your string instead, like:
select
left(col, 42) col
from yourtable
In the examples I assumed that 42 is your character limit.
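For example, a combined insert ... select along those lines (target_table and source_table are placeholder names) could look like:
-- load only the rows that fit
insert into target_table (yourcolumn)
select yourcolumn
from source_table
where len(yourcolumn) <= 42;
-- or truncate every value to the limit instead of filtering
insert into target_table (yourcolumn)
select left(yourcolumn, 42)
from source_table;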
You don't mention how you are inserting the data, i.e. bulk insert or SSIS.
In this situation I prefer SSIS, which gives you control and also a way to solve your issue: you can insert the proper data as @Lajos suggests, and for the bad data you can create a temporary table and capture the bad rows.
You can shape the flow of your logic via transformations as well as error handling. You can search further on this too.
https://www.simple-talk.com/sql/reporting-services/using-sql-server-integration-services-to-bulk-load-data/
https://www.mssqltips.com/sqlservertip/2149/capturing-and-logging-data-load-errors-for-an-ssis-package/
http://www.techbrothersit.com/2013/07/ssis-how-to-redirect-invalid-rows-from.html

SQL Server OpenQuery() behaving differently than a direct query from TOAD

The following query works efficiently when run directly against Oracle 11 using TOAD (with native Oracle drivers)
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date('12/31/9999','mm/dd/yyyy')
and rgn_nm = 'Boston'
) ...
;
The exact same query "never" returns if passed from SQL Server 2008 to the same Oracle database via openquery(). SQL Server has a link to the Oracle database using an Oracle Provider OLE DB driver.
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
The query doesn't return in a reasonable amount of time, and the user kills the query. I don't know if it would eventually return with the correct result.
This result where the direct TOAD query works efficiently and the openquery() version "never" returns is reproducible.
A small modification to the openquery() gives the correct efficient result: Change eff_endt to trunc(eff_endt).
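For reference, the only change is in the eff_endt comparison:
select * from openquery( servername, '
select ... from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where trunc(eff_endt) = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');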
That is well and good, but it doesn't seem like the change should be necessary.
openquery() is supposed to be pass through, so how can there be a difference between the TOAD and openquery() behavior?
The reason we care is because we frequently develop complex queries with TOAD directly accessing Oracle. Once we have the query functioning and optimized, we convert it to an openquery() string for use in a SQL Server application. It is extremely aggravating to have a query suddenly fail with openquery() when we know it worked as a direct query. Then we have to search for a work-around through trial and error.
I would like to see the Oracle trace files for the two scenarios, but the Oracle server is within another organization, and we are not getting cooperation from the Oracle DBAs.
Does anyone know of any driver, or TOAD, or ??? issues that could account for the discrepancy? Is there any way to eliminate the problem such that both methods always give the same result?
I know you asked this a while ago but I just came across your question.
I agree, they should be the same. Obviously there is a difference. We need to find out where the difference is.
I am thinking out loud as I type...
What happens if you specify just a few columns instead of select * from openquery?
How many rows are supposed to be returned?
What if, in the oracle select, you limit the returned rows (e.g. with rownum; see the sketch after the insert example below)?
How quickly does the openquery timeout?
Are TOAD and SS on the same machine? Are you RDPing into the SS and running toad from there?
Are they using the same drivers, including the same bitness (32/64) and version?
Are they using the same account on oracle?
It is interesting that using the trunc() makes a difference. I assume [eff_endt] is one of the returned fields?
I am wondering if SS is getting all the rows back but it is choking on doing the date conversions. The date type in oracle may need to be converted to a ss date type before ss shows it to you.
What if you insert the rows from the openquery into a table where the date field is just a (n)varchar. I am thinking ss might just dump the date it is getting back from oracle into that text field without trying to convert it.
something like:
insert into mytable(f1,f2,f3,datetimeX)
select f1,f2,f3,datetimeX from openquery( servername, '
select f1,f2,f3,datetimeX from ... where ...
and srvg_ocd in (
select ocd
from rptofc
where eff_endt = to_date(''12/31/9999'',''mm/dd/yyyy'')
and rgn_nm = ''Boston''
) ...
');
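And to test the row-limiting idea from above, a rough sketch (the 1000-row cap is arbitrary, and the inner select stands in for the original query):
select * from openquery( servername, '
select * from (
select f1,f2,f3,datetimeX from ... where ...
) where rownum <= 1000
');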
What if toad or ss is modifying the query statement before sending it to oracle. You could fire up wireshark and see what toad and ss are actually sending.
I would be very curious if you get this resolved. I link ss to oracle often and have not run into this issue.
Here are basic things you can check for to see what the database is doing after it receives the query. First, check that the execution plans are the same in TOAD as when the query runs using openquery. You could plan the query yourself in TOAD using:
explain plan set statement_id = 'openquery_test' for <your query here>;
select *
from table(dbms_xplan.display(statement_id => 'openquery_test'));
then have someone initiate the query using openquery() and have someone with permissions to view v$ tables to run:
select sql_id from v$session where username = '<user running the query>';
(If there's more than one connection with the same user, you'll have to find an additional attribute to isolate the row representing the session running the query.)
select *
from table(dbms_xplan.display_cursor('<value from query above>'));
If those look the same then I'd move on to checking database waits and see what it's stuck on.
select se.username
, sw.event
, sw.p1text
, sw.p2text
, sw.p3text
, sw.wait_time_micro/1000000 as seconds_in_wait
, sw.state
, sw.time_since_last_wait_micro/1000000 as seconds_since_last_wait
from v$session se
inner join
v$session_wait sw
on se.sid = sw.sid
where se.username = '<user running the query>'
;
(again, if there's more than one session with the same username, you'll need another attribute to whittle it down to the one you're interested in.)
If the plans are different, then you need to find out why, or if they're the same, look into what it's waiting on (e.g. SQL*Net message to client ?) and why.
I noticed a difference using OLEDB through MS Access (2013) connecting to Oracle 10g & 11g tables, in that it did not always recognize indexes or primary keys on the Oracle tables properly. The same query through an MS Access 2000 database (using odbc) worked fine / had no problem with indexes & keys. The only way I found to fix the OLEDB version was to include all of the key fields in the SELECT -- which was not a satisfying answer, but it's all I could find. This might be an option to try through SSMS / OpenQuery(...) as well.
Besides that... you can try some alternatives to OPENQUERY, such as:
4-part names: SELECT ... FROM Server..Schema.Table
Execute AT: EXEC ('select...') at linked server
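For example (ORASRV, MYSCHEMA, MYTABLE and col1 are placeholder names):
-- 4-part name through the linked server
SELECT col1 FROM ORASRV..MYSCHEMA.MYTABLE;
-- pass-through execution on the linked server (requires the RPC Out option to be enabled)
EXEC ('select col1 from myschema.mytable where rgn_nm = ''Boston''') AT ORASRV;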
But as for why the OLEDB provider works differently than the native Oracle Provider: the providers are not identical, and the native provider would be more likely to pave over Oracle quirks than the more generic OLEDB provider would.

PostgreSQL, ODBC and temp table

Could you tell me why this query works in pgAdmin, but doesn't with software using ODBC:
CREATE TEMP TABLE temp296 WITH (OIDS) ON COMMIT DROP AS
SELECT age_group AS a,male AS m,mode AS t,AVG(speed) AS avg_speed
FROM person JOIN info ON person.ppid=info.ppid
WHERE info.mode=2
GROUP BY age_group,male,mode;
SELECT age_group,male,mode,
CASE
WHEN age_group=1 AND male=0 THEN (info_dist_km/(SELECT avg_speed FROM temp296 WHERE a=1 AND m=0))*60
ELSE 0
END AS info_durn_min
FROM person JOIN info ON person.ppid=info.ppid
WHERE info.mode IN (7) AND info.info_dist_km>2;
I got "42P01: ERROR: relation "temp296" does not exist".
I have also tried with "BEGIN; [...] COMMIT;" - "HY010: The cursor is open".
PostgreSQL 9.0.10, compiled by Visual C++ build 1500, 64-bit
psqlODBC 09.01.0200
Windows 7 x64
I think the reason it did not work for you is that, by default, ODBC works in autocommit mode. If you executed your statements serially, the very first statement
CREATE TEMP TABLE temp296 ON COMMIT DROP ... ;
must have autocommitted after finishing, and thus dropped your temp table.
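A minimal illustration of that effect, with a dummy value, when every statement runs as its own transaction:
CREATE TEMP TABLE temp296 ON COMMIT DROP AS SELECT 1 AS avg_speed;
-- the implicit commit at the end of the statement drops temp296 again
SELECT avg_speed FROM temp296;
-- ERROR: relation "temp296" does not exist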
Unfortunately, ODBC does not support directly using statements like BEGIN TRANSACTION; ... COMMIT; to handle transactions.
Instead, you can disable auto-commit using SQLSetConnectAttr function like this:
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, 0);
But, after you do that, you must remember to commit any change by using SQLEndTran like this:
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
While the WITH approach has worked for you as a workaround, it is worth noting that using transactions appropriately is faster than running in auto-commit mode.
For example, if you need to insert many rows into a table (thousands or millions), using transactions can be hundreds or even thousands of times faster than autocommit.
It is not uncommon for temporary tables to be unavailable via SQLPrepare/SQLExecute in ODBC, i.e. on prepared statements; MS SQL Server behaves like this, for example. The solution is usually to use SQLExecDirect.

Checksum Validation after migration from Oracle to SQL Server

I am migrating a large database from Oracle 11g to SQL Server 2008 R2 using SSIS. How can data integrity be validated for numeric data using a checksum?
In the past, I've always done this using a few application controls. It should be something that is easy to compute on both platforms.
Frequently, the end result is a query like:
select count(*)
, count(distinct col1) col1_dist_cnt
...
, count(distinct col99) col99_dist_cnt
, sum(col1) col1_sum
...
, sum(col99) col99_sum
from table
Spool to file, Excel or database and compare outcome. Save for project management and auditors.
Please read more about application controls; I wrote this approach up for checks between various financial reporting systems for regulatory reporting, so it serves most cases.
If exactly one field value is wrong, it will always show up. Two errors might compensate each other, for example when row 1 col 1 and row 2 col 1 swap their values.
To detect that, multiply each value by something unique to the row. For instance, if you have a unique ID or identity column that is included in the migration too:
, sum(ID * col1) col1_sum
...
, sum(ID * col99) col99_sum
When you get numeric overflows, first try using the largest available precision (on SQL Server this is sometimes difficult). If that is no longer feasible, use the mod function. Only a few types of error are hidden by mod.
The icing on the cake is to auto-generate these statements. On Oracle, look at user_tables and user_tab_columns. On SQL Server, look at syscolumns etc.
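As a rough sketch of that generation step on the Oracle side (MY_TABLE is a placeholder, and ID stands for the unique key column mentioned above):
-- emit one "sum(ID * col)" line per numeric column; paste the output into the checksum query
select ' , sum(ID * ' || column_name || ') ' || lower(column_name) || '_sum' as checksum_line
from user_tab_columns
where table_name = 'MY_TABLE'
and data_type = 'NUMBER'
and column_name <> 'ID'
order by column_id;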