zend query maximum field length - zend-framework

I'm trying to insert a log record into my log table, but somehow when the field value length exceeds 199 characters, my Apache restarts and my browser says net::ERR_CONNECTION_RESET.
I'm using the Zend Framework, so I insert my record with the following lines of code:
$db = Global_Db_Connection::getInstance();
$sql = "INSERT INTO log_table (log) VALUES ('ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd')";
$db->query($sql);
If I don't use the framework and instead use:
mysql_query($sql);
then I don't have any problems.
Can anyone tell me how to fix this limit in Zend?
I tried this on FreeBSD with the same problem. I also found out that when trying to insert into a table that does not exist, it returns the same error; only after shortening the value does it give the error that the table does not exist.

I may be late to answer, but I have the solution. I found two solutions for Zend:
$db->getConnection()->query($sql); // use getConnection()
$db->exec($sql);
This issue is caused by the memory stack size. On Linux the stack grows as needed, but on Windows and Mac the problem surfaces because of the fixed stack size. There is a ticket raised on php.net about this (here). Have a look. Enjoy!

Related

PostgREST / PostgreSQL Cannot enlarge string buffer message

I run into a Cannot enlarge string buffer message on my running PostgREST API. I guess some tables are too large to work successfully with the API.
I am using the docker postgrest/postgrest container from https://hub.docker.com/r/postgrest/postgrest with the version PostgREST 5.1.0.
Everything is working as expected, but if the table size gets too large, I get the following error message.
hint null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code "54000"
message "out of memory"
I can't determine the threshold at which it stops working.
Is there a possibility to enlarge the string buffer in some config file or is this hardcoded?
Are there any limits on table size when working with the API? So far I couldn't find any information in the docs.
Update:
The Postgres logs give me the following SQL query:
WITH pg_source AS (
SELECT "public"."n_osm_bawue_line".*
FROM "public"."n_osm_bawue_line"
)
SELECT null AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
SELECT *
FROM pg_source
) _postgrest_t
I use the following Postgres version:
"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Unless you recompile PostgreSQL, it is not possible to raise the limit (defined here).
My suggestion would be to try to reduce the size of the payload (are you sure you need all the data?) or fetch the payload in multiple requests.
With PostgREST you can do vertical filtering (select just the columns that you need) or paginate to reduce the number of rows you get in one request.
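For example, a request such as GET /n_osm_bawue_line?select=id,geom&limit=1000&offset=0 asks for only two columns and at most 1000 rows per request (the column names and page size here are placeholders, not taken from your schema). A rough sketch of the kind of statement this produces, modelled on the generated query you posted rather than PostgREST's exact output:
WITH pg_source AS (
  -- "id" and "geom" are assumed column names; substitute the ones you actually need
  SELECT "public"."n_osm_bawue_line"."id",
         "public"."n_osm_bawue_line"."geom"
  FROM "public"."n_osm_bawue_line"
  LIMIT 1000 OFFSET 0
)
SELECT coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
  SELECT *
  FROM pg_source
) _postgrest_t;
Since json_agg only aggregates the requested columns and rows, the resulting string stays far below the 1 GB buffer limit.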
The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.
As a first step in finding the problem, look at the exact HTTP request you make to trigger the error.
Then enable PostgreSQL logging, repeat the request, and check the logs to see which SQL query causes the error. Run the query through pgAdmin or psql to make sure you have the problematic query.
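If you have superuser access, one minimal way to capture the statement is to turn on statement logging temporarily (a generic PostgreSQL sketch, not PostgREST-specific configuration):
-- log every statement; pg_reload_conf() applies the change without a restart
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- revert once you have captured the problematic query
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();
Repeat the failing HTTP request while logging is on and the generated SQL will appear in the PostgreSQL log.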
Update your question with your findings; the SQL query is what is needed to continue.
After that you could add a postgresql tag to your question.
There is always the possibility that the file being imported is either corrupted or malformed for any number of reasons.
I just happened to discover in my case that my file had something like incorrect line endings (long story, unnecessary here), which caused the whole file to appear as one line, thus causing the obvious result. You may have something similar in your case that requires a find-and-replace kind of solution.
For the benefit of anyone else, I used this to resolve it:
tr -d '\0' < bad_file.csv > bad_file.csv.fixed

When I run a specific query I get ORA-00604: error occurred at recursive SQL level 1, ORA-12899: value too large for column "PLAN_TABLE"."OBJECT_NAME"

I am using Oracle 12.1 (12c). When I run a specific query (which I can't show for security reasons, and because it's unrelated), I get this exception:
ORA-00604: error occurred at recursive SQL level 1
ORA-12899: value too large for column "SOME_SCHEMA"."PLAN_TABLE"."OBJECT_NAME" (actual: 38, maximum: 30)
I can't make it work. I will try to revert my last changes because it was working before.
BTW, I was running EXPLAIN PLAN and doing index optimizations.
Any idea why?
P.S. I will keep trying.
How I solved this:
While reverting and reviewing my last changes, I saw that I had been running ALTER statements to add indexes, and each time I tried to run the query again to make sure it was still working.
When I reached a specific ALTER I noticed that the name of the index was too long. So even though the index was created successfully, the explain plan for the select was failing, not the select itself.
The solution:
I renamed the index to be shorter (30 characters maximum) and it worked.
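For illustration, the rename is a single DDL statement (the index and table names below are placeholders, not the real ones):
-- placeholder names: substitute your own index
ALTER INDEX ix_orders_by_customer_created_status_x RENAME TO ix_orders_cust_created;
-- re-run the explain afterwards to confirm the ORA-12899 is gone
EXPLAIN PLAN FOR SELECT * FROM orders;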
Change table/column/index names size in oracle 11g or 12c
Why are Oracle table/column/index names limited to 30 characters?
Using EXPLAIN PLAN (Oracle docs)

What could be causing this 'invalid host' error on kdb query?

I get an odd error when trying to query too many dates from a date-partitioned historical database:
q)eod: h"select from eod where date within 2018.01.01 2018.04.22"
'/tablepath/2018.04.04/eod/somecolumn: invalid host
q)eod: h"select from eod where date within 2018.01.17 2018.04.20"
'/tablepath/2018.04.20/eod/othercolumn: invalid host
q)eod: h"select from eod where date within 2018.01.18 2018.04.20"
q)
Note that both dates mentioned in the error messages are within the date range that we manage to extract in the end, and that it fails on a different column each time. This seems to indicate it's something to do with the size of the table being pulled, but when we check the size of the largest table we managed to get:
q)(-22!eod) % 1024 * 1024
646.9043
q)count eod
2872546
we find that it's not particularly large by either memory size or number of rows.
Googling for "invalid host" errors doesn't seem to turn up anything relevant, and I'm not seeing anything in the kdb docs about size limits that would be relevant. Anyone got any ideas?
Edit:
When loading the table in a session and making the queries directly, we get what appears to be the same error, but with a different message. For instance:
q)jj: select from eod where date within 2018.01.01 2018.04.22
Too many compressed files open
k){0!(?).#[x;0;p1[;y;z]]}
'./2018.04.04/eod/settlecab: No such file or directory
.
?
(+`exch`date`class..
q.Q))
Note that the file ./2018.04.04/eod/settlecab does in fact exist and contains data. I have no problem loading the data for just the date mentioned in the error, and the column mentioned has meaningful values:
q)jj: select from eod where date=2018.04.04
q)select count i by settlecab from jj
settlecab| x
---------| -----
0 | 41573
1 | 2269
The key point seems to be the Too many compressed files open message, but what can I do about this?
Edit for Summary/Solutions:
The table in question had many columns, all stored in a compressed format. When issuing a query against too many dates at once, kdb would try to mmap all of those columns at once, running into a limit on how many compressed files could be open at once.
Once I understood the problem, several solutions were available:
I could pull only certain columns from the database, reducing the number of files that kdb needed to keep open,
I could force kdb to pull all the data into memory by adding a dummy where clause to the query, such as (null column) | not null column (hacky, but it works),
I could have upgraded the kdb version and lifted OS limits (not practical in my case).
I still have no idea why this resulted in an invalid host error when querying the database remotely.
First off, can we just clarify the database structure you're working with? It seems from the filepaths returned in your errors that you've got a date-partitioned database. Did you mean non-segmented database when you said non-partitioned in your original query?
In terms of a fix for your issue, have you tried loading your database into a session, and making those queries directly? If so do you get the same issues?
If that seems to be working alright, the problem might lie with how you're defining your database handle. How is h defined in your original example?
It might also be worth trying to select individual dates from your database, to try and isolate the problem, and to determine if it lies with your on-disk data. Try specifically querying the dates that are mentioned in your errors.
You could also try performing your original queries with a subset of columns, again to try and pinpoint where your issue is coming from.
Let us know if you get any further with this.
Joseph

Cannot map an error code returned from unixodbc using PostgreSQL database

I am using PostgreSQL with the unixODBC driver, and while trying to get a connection, I get an error. I am only printing the value of pfNativeError from SQLError, and I get a value of '26'.
I have gone through the error codes returned by postgresql, as listed here: http://www.postgresql.org/docs/8.1/static/errcodes-appendix.html#ERRCODES-TABLE
I wanted to know whether unixODBC returns in pfNativeError just the last three characters of the error codes mentioned in the link above. If that is true, I assume the only possibility is the following error code:
22026 STRING DATA LENGTH MISMATCH string_data_length_mismatch
Do let me know if I am thinking in the right direction. Also, I've noticed this issue only when PostgreSQL has millions of rows and the query which results in connection failure is trying to fetch a lot of data (tens of thousands of rows). Can someone give me any idea why this problem might be happening?
EDIT 1:
If it's of any help, I'm getting the following values for szErrorMsg:
Error while executing the query
Could not send Query(connection dead)
EDIT 2:
The '26' returned is an integer, and the codes mentioned in the link above are in hex. 26 in decimal corresponds to 1A in hex. Unfortunately, that does not correspond to anything in the above-mentioned documentation. Clearly I am out of ideas! Can someone tell me what the different pfNativeError codes correspond to?

Cocoa error 256 core data

I have error "Cocoa error 256" when I try to save data. How to fix it? And what problem?
According to the help reference in Xcode:
NSFileReadUnknownError
Read error, reason unknown
Available in Mac OS X v10.4 and later.
Declared in FoundationErrors.h.
Sadly, that's probably not too helpful, though it is an unknown -read- error.
If it's a Core Data error, there is probably an actual error object somewhere near where the error occurs. If you dump the error object's userInfo dictionary, you can usually get a lot more detail than just the error code itself.
This is what it boils down to (as Tegeril said)
NSFileReadUnknownError
Read error, reason unknown
Available in Mac OS X v10.4 and later.
Declared in FoundationErrors.h.
A file can also be a resource located at a URL/URI; if the URL has unencoded characters, it can cause this type of error.
Check the path to the resource/file.
I ran into exactly this error when populating an SQLite database for an iOS app using a custom script (i.e. not using Core Data). It turns out that there is some metadata which you have to update yourself after adding new rows. Find the row in Z_PRIMARYKEY where Z_NAME equals the name of the table you've just inserted into. Make sure that Z_MAX in this row is equal to the highest value of Z_PK in the table you've inserted the rows into. In my case, as soon as I updated Z_MAX with the correct number, the error went away.
So, for the "ZAUTHOR" table:
SELECT z_pk FROM ZAUTHOR ORDER BY z_pk DESC LIMIT 1; /* Returns 1234 */
UPDATE Z_PRIMARYKEY SET z_max = 1234 WHERE z_name = 'Author';
This is the article which helped me track down the error.
I get this error on Xcode 6 (and 7) when switching network connections while the Simulator is open, for example moving from one wireless network to another. The solution for me is to quit the Simulator and restart it.