Cannot map an error code returned from unixODBC using PostgreSQL database

I am using PostgreSQL through the unixODBC driver, and while trying to get a connection I get an error. I am only printing the value of pfNativeError from SQLError, and I get a value of '26'.
I have gone through the error codes returned by PostgreSQL, as listed here: http://www.postgresql.org/docs/8.1/static/errcodes-appendix.html#ERRCODES-TABLE
I wanted to know whether unixODBC returns in pfNativeError just the last three characters of the error codes mentioned in the link above. If that is true, I assume the only possibility is the following error code:
22026 STRING DATA LENGTH MISMATCH string_data_length_mismatch
Do let me know if I am thinking in the right direction. Also, I've noticed this issue only when the PostgreSQL table has millions of rows and the query that results in the connection failure is trying to fetch a lot of data (tens of thousands of rows). Can anyone give an idea of why the problem might be happening?
EDIT 1:
If it's of any help, I'm getting the following values for szErrorMsg:
Error while executing the query
Could not send Query(connection dead)
EDIT 2:
The '26' returned is an INTEGER, and the codes mentioned in the link above are in HEX. 26 in decimal corresponds to 1A in hex. Unfortunately, that does not correspond to anything in the above-mentioned documentation. Clearly I am out of ideas! Can someone tell me what the different pfNativeError codes correspond to?
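For reference, the same SQLError call that fills pfNativeError also fills a five-character SQLSTATE, and that string (not the native integer) is what the PostgreSQL appendix lists. A minimal sketch of dumping the whole diagnostic record; the handle names are placeholders for whatever handles the application already holds:

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

/* Print every pending diagnostic record: SQLSTATE, native error, message.
   SQLError is the deprecated ODBC 2.x call the question already uses;
   SQLGetDiagRec is the modern equivalent. */
void dump_diagnostics(SQLHENV henv, SQLHDBC hdbc, SQLHSTMT hstmt)
{
    SQLCHAR     sqlstate[6];
    SQLCHAR     message[SQL_MAX_MESSAGE_LENGTH];
    SQLINTEGER  native_error;
    SQLSMALLINT msg_len;

    while (SQLError(henv, hdbc, hstmt, sqlstate, &native_error,
                    message, (SQLSMALLINT)sizeof(message), &msg_len)
           == SQL_SUCCESS) {
        printf("SQLSTATE=%s native=%ld message=%s\n",
               sqlstate, (long)native_error, message);
    }
}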

Related

ERROR: invalid string enlargement request size 1073741823 (redshift Query error)

I use DataGrip as a client to connect to Redshift and ran into a strange issue which exhausted my whole day.
When I run my SQL query, DataGrip complains:
[XX000] ERROR: invalid string enlargement request size 1073741823
There doesn't seem to be anywhere I can check a more detailed error log. Googling this error turns up very few similar questions; it seemed it might be because a field is too long, exceeding the maximum length Redshift can accept. But that is not the story for me: I don't have a long field. So I commented out all my SQL statements and re-added them incrementally to locate the offending statement.
Finally, I found the statement that triggers the error message:
(
  case when trunc(request_date_skip_weekend_tmp) = to_date('2022-03-21', 'YYYY-MM-DD')
       then dateadd(day, 1, trunc(request_date_skip_weekend_tmp))
       else request_date_skip_weekend_tmp
  end
) request_date_skip_weekend,
After I changed it to:
dateadd(day, 1, trunc(request_date_skip_weekend_tmp)) request_date_skip_weekend,
the error disappeared. It is very hard for me to accept the relationship between the error message and the SQL change; I don't know why the former statement triggers the error.
I would appreciate it if you could spot why the former expression errors, or share some knowledge about where I can fetch a more detailed error message to find out what happened.
Your code snippet is dates and timestamps, but the error is about strings. So it is likely you have identified a "trigger" and not the root cause. Also, since you report that the SQL is very long, you could be dealing with compiler optimization changes moving the failure. Removing a CASE can cause the compiler/optimizer to choose different structures for the query.
One experiment to try is to change the to_date() to a cast to timestamp so there are no implicit casts ('2022-03-21'::timestamp). This is unlikely to be the cause, but it may help.
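A sketch of that experiment, keeping the shape of the statement from the question and changing only the literal's cast:

case when trunc(request_date_skip_weekend_tmp) = '2022-03-21'::timestamp
     then dateadd(day, 1, trunc(request_date_skip_weekend_tmp))
     else request_date_skip_weekend_tmp
end request_date_skip_weekend,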
I expect you will need to post the query to get more help. How large is it? This error could be related to building a large string in the query, or to the text of the query itself, or to creating the output. This isn't the standard "string too long" message, so this is something more implicit. You could post it to a Google Doc or some other file-sharing service; just link it in the question.

Where do I find what the SQL Anywhere ASA6532 error is telling me?

I am trying to do an insert into a Micros SQL Anywhere table (not created by me) and am getting this error:
ERROR [23000] [Sybase][ODBC Driver][SQL Anywhere]Constraint 'ASA6532' violated: Invalid value in table 'emp_def'
I can't find where the constraints on the table are defined, nor am I able to figure out what this error is trying to tell me.
I have googled without any success, and I have tried pulling the values off the insert one by one, also without success.
Could someone point me to where I might find what this error is trying to tell me?
Also, could someone tell me where to find custom-defined column types?
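A hedged starting point for both questions is the system catalog (sketched for SQL Anywhere 10 or later; view and column names vary between versions, so treat this as a guess to adapt, not a definitive recipe):

-- named constraints on emp_def ('ASA6532' should show up here)
SELECT c.constraint_name, c.constraint_type
FROM SYS.SYSCONSTRAINT c
JOIN SYS.SYSTAB t ON t.object_id = c.table_object_id
WHERE t.table_name = 'emp_def';

-- user-defined (custom) column types
SELECT type_name FROM SYS.SYSUSERTYPE;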
I found my specific error: a column is defined as int and an 8-digit value was being added, which would seem to fit but did not. The error message from SQL Anywhere was terrible and did not tell me this, however. And I never did find the right page to translate the error codes properly.

PostgREST / PostgreSQL Cannot enlarge string buffer message

I run into a "Cannot enlarge string buffer" message on my running PostgREST API. I guess some tables are too large to work successfully with the API.
I am using the Docker postgrest/postgrest container from https://hub.docker.com/r/postgrest/postgrest with version PostgREST 5.1.0.
Everything works as expected, but once a table gets too large, I get the following error message:
hint null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code "54000"
message "out of memory"
I can't determine the threshold at which it stops working.
Is there a possibility to enlarge the string buffer in some config file, or is this hardcoded?
Are there any limits on table size when working with the API? So far I couldn't find any information in the docs.
=========== Update
The Postgres logs give me the following SQL query:
WITH pg_source AS (
SELECT "public"."n_osm_bawue_line".*
FROM "public"."n_osm_bawue_line"
)
SELECT null AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
SELECT *
FROM pg_source
) _postgrest_t
I use following postgres version:
"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Unless you recompile PostgreSQL, it is not possible to raise the limit (defined here).
My suggestion would be to try to reduce the size of the payload (are you sure you need all the data?) or to get the payload in multiple requests.
With PostgREST you can do vertical filtering (just select the columns that you need) or paginate to reduce the number of rows you get in one request, as sketched below.
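For instance (a sketch: the base URL is a placeholder and the column names are assumptions; only the table name is taken from the query in the question):

curl "http://localhost:3000/n_osm_bawue_line?select=id,name"         # vertical filtering
curl "http://localhost:3000/n_osm_bawue_line?limit=10000&offset=0"   # pagination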
The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.
As a first step in finding the problem, look at the exact HTTP request you make that triggers the error.
Then enable PostgreSQL logging and repeat the request; check the logs and you'll see the SQL query that causes this error. Run the query through pgAdmin or psql to make sure you have the problematic query.
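A minimal way to turn that logging on, assuming superuser access (and remembering to revert it afterwards, since it logs every statement):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();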
Update your question with your findings; the SQL query is what is needed to continue.
After that you could add a postgresql tag to your question.
There is always the possibility that the file being imported is corrupted or malformed for any number of reasons.
I just happened to discover in my case that my file had something like incorrect line endings (long story, unnecessary here) which caused the whole file to appear as one line, thus causing the obvious result. You may have something similar in your case that requires a find-and-replace kind of solution.
For whatever benefit to anyone else, I used this to resolve it (tr -d '\0' simply deletes NUL bytes, writing the cleaned copy to a new file):
tr -d '\0' < bad_file.csv > bad_file.csv.fixed

What could be causing this 'invalid host' error on kdb query?

I get an odd error when trying to query too many dates from a date-partitioned historical database:
q)eod: h"select from eod where date within 2018.01.01 2018.04.22"
'/tablepath/2018.04.04/eod/somecolumn: invalid host
q)eod: h"select from eod where date within 2018.01.17 2018.04.20"
'/tablepath/2018.04.20/eod/othercolumn: invalid host
q)eod: h"select from eod where date within 2018.01.18 2018.04.20"
q)
Note that both dates mentioned in the error messages are within the date range that we manage to extract in the end, and that it fails on a different column each time. This seems to indicate it's something to do with the size of the table being pulled, but when we check the size of the largest table we managed to get:
q)(-22!eod) % 1024 * 1024
646.9043
q)count eod
2872546
we find that it's not particularly large by either memory size or number of rows.
Googling for "invalid host" errors doesn't seem to turn up anything relevant, and I'm not seeing anything in the kdb docs about size limits that would be relevant. Anyone got any ideas?
Edit:
When loading the table in a session and making the queries directly, we get what appears to be the same error, but with a different message. For instance:
q)jj: select from eod where date within 2018.01.01 2018.04.22
Too many compressed files open
k){0!(?).#[x;0;p1[;y;z]]}
'./2018.04.04/eod/settlecab: No such file or directory
.
?
(+`exch`date`class..
q.Q))
Note that the file ./2018.04.04/eod/settlecab does in fact exist and contains data. I have no problem loading the data for just the date mentioned in the error, and the column mentioned has meaningful values:
q)jj: select from eod where date=2018.04.04
q)select count i by settlecab from jj
settlecab| x
---------| -----
0        | 41573
1        | 2269
The key point seems to be the Too many compressed files open message, but what can I do about this?
Edit for Summary/Solutions:
The table in question had many columns, all stored in a compressed format. When issuing a query against too many dates at once, kdb would try to mmap all of those columns at once, running into a limit on how many compressed files could be open at once.
Once I understood the problem, several solutions were available:
I could pull only certain columns from the database, reducing the number of files that kdb needed to keep open,
I could force kdb to pull all the data into memory by adding a dummy where clause to the query, such as (null column) | not null column (hacky, but it works; both workarounds are sketched after this list),
I could have upgraded the kdb version and lifted OS limits (not practical in my case).
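A minimal sketch of the first two workarounds, reusing the handle h from the question (the column names are just the ones mentioned above):

q)eod: h"select date, settlecab from eod where date within 2018.01.01 2018.04.22"
q)eod: h"select from eod where date within 2018.01.01 2018.04.22, (null settlecab) | not null settlecab"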
I still have no idea why this resulted in an invalid host error when querying the database remotely.
First off, can we just clarify the database structure you're working with? It seems from the file paths returned in your errors that you've got a date-partitioned database. Did you mean a non-segmented database when you said non-partitioned in your original query?
In terms of a fix for your issue, have you tried loading your database into a session, and making those queries directly? If so do you get the same issues?
If that seems to be working alright, the problem might lie with how you're defining your database handle. How is h defined in your original example?
It might also be worth trying to select individual dates from your database, to try and isolate the problem, and to determine if it lies with your on-disk data. Try specifically querying the dates that are mentioned in your errors.
You could also try performing your original queries with a subset of columns, again to try and pinpoint where your issue is coming from.
Let us know if you get any further with this.
Joseph

PostGIS Conversion Issues

I am having an issue using PostGIS (1.5.4) data. It may be that I'm just not familiar enough with this technology to see the obvious (I'm a regular expert with nearly 4 hours of experience), but I am running into an error that I have been unable to solve with Google.
I have a table which includes polygon data (and yes, I checked; the column type is geometry, not polygon, the Postgres native type). The problem arises when I try to run a query on the table to find which shape contains a particular point.
I am using the following query:
SELECT *
FROM geo_shape
WHERE ST_Contains(geoshp_polygon, POINT(-97.4388046000, 38.1112251000));
The error I receive is 'ERROR: function st_contains(geometry, point) does not exist'. I tried using a CAST() function, but got 'ERROR: cannot cast type geometry to polygon'. I'm guessing the issue has to do with the way the data is stored; pgAdmin shows it as hex data. I tried using ST_GeomFromHEXEWKB() just on a hunch, but received 'ERROR: function st_geomfromhexewkb(geometry) does not exist'.
I'm fairly confused as to what the issue is here, so any ideas at all would be much appreciated.
ST_Contains needs (geom, geom) as arguments...
Give this a try...
SELECT * FROM geo_shape
WHERE ST_Contains(geoshp_polygon,
GeomFromText('POINT(-97.4388046000 38.1112251000)'));
Edited to correct the comma issue in the point data (WKT separates coordinates with a space, not a comma). ST_GeomFromText will also work; kinda curious what the difference is there.
You cannot mix PostgreSQL's geometric data types with PostGIS's geometry type, which is why you see that error. I suggest using one of PostGIS's geometry constructors to help out:
SELECT *
FROM geo_shape
WHERE ST_Contains(geoshp_polygon,
ST_SetSRID(ST_MakePoint(-97.4388046000, 38.1112251000), 4326));
Or a really quick text way is to piece together the well-known text:
SELECT 'SRID=4326;POINT(-97.4388046000 38.1112251000)'::geometry AS geom;
(this will output the WKB for the geometry type).