When I run a specific query I get ORA-00604: error occurred at recursive SQL level 1, ORA-12899: value too large for column "PLAN_TABLE"."OBJECT_NAME" - Oracle 12c

I am using Oracle 12.1c. When I run a specific query (which I can't show for security reasons, and because it's unrelated), I get this exception:
ORA-00604: error occurred at recursive SQL level 1
ORA-12899: value too large for column "SOME_SCHEMA"."PLAN_TABLE"."OBJECT_NAME"
(actual: 38, maximum: 30)
I can't make it work. I will try reverting my last changes, because it was working before.
By the way, I was running EXPLAIN PLAN and doing index optimizations.
Any idea why?
P.S. I will keep trying.

How I solved this:
While reverting and reviewing my last changes (a series of ALTERs adding indexes), I re-ran the query after each one to make sure it was working again.
When I reached a specific ALTER, I noticed that the index name was too long. So even though the index had been created successfully, it was the EXPLAIN PLAN for the SELECT that was failing, not the SELECT itself.
The solution:
I renamed the index to be shorter (30 characters maximum) and it worked.
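For illustration only, the fix was along these lines (the names are made up, and this assumes a database that accepts identifiers longer than 30 characters while the existing PLAN_TABLE still defines OBJECT_NAME as VARCHAR2(30), which is the combination that produces the error above):
-- Find index names that will not fit into PLAN_TABLE.OBJECT_NAME:
SELECT index_name, LENGTH(index_name) AS name_length
FROM user_indexes
WHERE LENGTH(index_name) > 30;

-- Rename the offending index to 30 characters or fewer:
ALTER INDEX idx_orders_by_status_and_creation_date RENAME TO idx_orders_status_created;

-- EXPLAIN PLAN can now write its rows into PLAN_TABLE without ORA-12899:
EXPLAIN PLAN FOR SELECT * FROM orders WHERE status = 'OPEN';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);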
Change table/column/index names size in oracle 11g or 12c
Why are Oracle table/column/index names limited to 30 characters?
Using EXPLAIN PLAN (Oracle documentation)

Related

PostgREST / PostgreSQL Cannot enlarge string buffer message

I ran into a Cannot enlarge string buffer message on my running PostgREST API. I guess some tables are too large to work successfully with the API.
I am using the postgrest/postgrest Docker container from https://hub.docker.com/r/postgrest/postgrest, version PostgREST 5.1.0.
Everything works as expected, but once a table gets too large, I get the following error message:
hint null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code "54000"
message "out of memory"
I can't determine the threshold at which it stops working.
Is there a possibility to enlarge the string buffer in some config file, or is this hardcoded?
Are there any limits on table size when working with the API? So far I couldn't find any information in the documentation.
=========== Update
The Postgres logs show the following SQL query:
WITH pg_source AS (
SELECT "public"."n_osm_bawue_line".*
FROM "public"."n_osm_bawue_line"
)
SELECT null AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
SELECT *
FROM pg_source
) _postgrest_t
I am using the following Postgres version:
"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Unless you recompile PostgreSQL, it is not possible to raise the limit (defined here).
My suggestion would be to reduce the size of the payload (are you sure you need all the data?) or fetch the payload in multiple requests.
With PostgREST you can do vertical filtering (select just the columns that you need) or paginate to reduce the number of rows you get in one request.
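As a rough sketch only (the column names and page size are assumptions, not actual PostgREST output), vertical filtering plus pagination shrinks the query PostgreSQL has to aggregate to something like:
WITH pg_source AS (
    SELECT "id", "name"
    FROM "public"."n_osm_bawue_line"
    LIMIT 1000 OFFSET 0
)
SELECT pg_catalog.count(_postgrest_t) AS page_total,
       coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
    SELECT *
    FROM pg_source
) _postgrest_t;
which keeps the json_agg result well below the roughly 1 GB string limit. In PostgREST terms that corresponds to a select= query parameter for the columns and limit/offset (or a Range header) for the rows.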
The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.
As a first step towards finding the problem, look at the exact HTTP request you make to trigger the error.
Then enable PostgreSQL logging, repeat the request, and check the logs to see which SQL query causes the error. Run that query through pgAdmin or psql to make sure you have the problematic query.
Update your question with your findings; the SQL query is what is needed to continue.
After that you could add a postgresql tag to your question.
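If it helps, one way to switch on statement logging temporarily is sketched below (it needs superuser rights and affects the whole server, so remember to reset it afterwards):
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- repeat the failing HTTP request, then read the server log
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();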
There is always the possibility that the file being imported is corrupted or malformed for any number of reasons.
In my case I discovered that my file had something like incorrect line endings (long story, not relevant here), which caused the whole file to appear as one line, with the obvious result. You may have something similar that requires a find-and-replace kind of solution.
For whatever benefit to anyone else, I used this to resolve it:
tr -d '\0' < bad_file.csv > bad_file.csv.fixed

SSIS connection manager error with PostgreSQL 'Value does not fall within the expected range'

I have a very simple SSIS package which brings 8 columns from PostgreSQL into SQL Server. There are only 10 rows feeding through. Now the requirement has changed by adding:
- a join
- an extra column to bring back
- also, one WHERE clause condition to be removed
After making these changes and previewing with the OLE DB source editor, it throws the error 'Value does not fall within the expected range'.
The script runs successfully in PostgreSQL within 40 seconds (though it should not take that long).
In the OLE DB source editor:
- when adding the join - it previews successfully
- when adding the join and bringing back the column - it previews successfully
- when adding the join, bringing back the column and removing the condition (the condition let it bring back 149 records instead of 10) - it throws the error
I guess it is a very basic error and someone with experience will be able to spot it quickly. I would be very thankful for a kind and prompt response.
I have checked the datatype for the new column's source and destination and it matches: character(255).
I have run the script using pgAdmin 4 and it runs successfully in PostgreSQL.

Can I have more than 250 columns in the result of a PostgreSQL query?

Note that the PostgreSQL website mentions a limit on the number of columns of between 250 and 1600, depending on column types.
Scenario:
Say I have data in 17 tables, each table having around 100 columns. All are joinable through primary keys. Would it be OK if I selected all these columns in a single SELECT statement? The query would be pretty complex but can be programmatically generated. The reason for doing this is to get denormalised data to populate a web page. Please do not ask why though :)
Quite obviously, if I do create table table1 as (<the complex select statement>), I will be hitting the limit mentioned on the website. But do simple queries also face the same restriction?
I could probably find this out by doing the exercise myself. In the next few days I probably will. However, if someone has an idea about this and the problems I might face by doing a single query, please share the knowledge.
I can't find definitive documentation to back this up, but I have received the following error using JDBC on PostgreSQL 9.1 before.
org.postgresql.util.PSQLException: ERROR: target lists can have at most 1664 entries
As I say though, I can't find the documentation for that, so it may vary by release.
I've found the confirmation. The maximum is 1664.
This is one of the metrics that is available for confirmation in the INFORMATION_SCHEMA.SQL_SIZING table.
SELECT * FROM INFORMATION_SCHEMA.SQL_SIZING
WHERE SIZING_NAME = 'MAXIMUM COLUMNS IN SELECT';
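As a quick sanity check before generating the big SELECT (the schema and table names below are placeholders), you can count how many columns the join would put into the target list; with 17 tables of roughly 100 columns each you are already around 1700, which is over the 1664 limit:
SELECT count(*) AS total_columns
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name IN ('table1', 'table2', 'table3');  -- list all 17 tables here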

WITH... UPDATE in PostgreSQL

I'm trying to update a table (in pgsql) with a complex expression that needs to occur several times in the UPDATE statement. WITH seems perfect for this:
WITH newtz AS (SELECT timezone FROM timezonebyzipcode WHERE zip=(SELECT zip_code FROM company WHERE id=company_id))
UPDATE cross_rental
SET return_timezone=newtz
return_time=(return_time AT TIME ZONE return_timezone) AT TIME ZONE newtz
WHERE return_to='Vendor' AND return_timezone<>newtz
Unfortunately, it doesn't work:
ERROR: syntax error at or near "UPDATE"
LINE 2: UPDATE cross_rental
^
I've searched and couldn't find any examples of using WITH with UPDATE in this way, but I also don't see anything indicating it shouldn't work. Is this just unsupported, or am I making some silly mistake?
And, if it's unsupported, should I just copy that nasty long expression into each of the three places where I'm using "newtz" in the UPDATE clause? Or is there some better way to accomplish this update?
ERROR: syntax error at or near "UPDATE"
LINE 2: UPDATE cross_rental
This specific error message reveals you're using PostgreSQL version 9.0 or older. The two major versions before 9.1 (8.4 and 9.0) featured CTEs and WITH, but not in the context of data-modifying queries.
This appeared in 9.1. See 7.8.2. Data-Modifying Statements in WITH in the PostgreSQL 9.1 documentation.
Assuming a newer version, a CTE must be used as a table with rows and columns (not as a scalar variable), so the query should be fixed as mentioned in Richard Huxton's answer.
It's a table (or a from-recordset-source anyway).
WITH calculated AS (
SELECT ....
)
UPDATE foo
SET bar = calculated.something
FROM calculated
WHERE ...
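Applied to the original statement, a sketch along those lines might look like this (it assumes cross_rental has a company_id column, as the correlated subquery in the question suggests):
WITH newtz AS (
    SELECT c.id AS company_id, t.timezone
    FROM company c
    JOIN timezonebyzipcode t ON t.zip = c.zip_code
)
UPDATE cross_rental cr
SET return_timezone = newtz.timezone,
    return_time = (cr.return_time AT TIME ZONE cr.return_timezone) AT TIME ZONE newtz.timezone
FROM newtz
WHERE newtz.company_id = cr.company_id
  AND cr.return_to = 'Vendor'
  AND cr.return_timezone <> newtz.timezone;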
WITH (SELECT timezone FROM timezonebyzipcode WHERE zip=(SELECT zip_code FROM company WHERE id=company_id)) AS newtz
You have the alias and referent backwards...

Partial index not being used in psql 8.2

I would like to run a query on a large table along the lines of:
SELECT DISTINCT user FROM tasks
WHERE ctime >= '2012-01-01' AND ctime < '2013-01-01' AND parent IS NULL;
There is already an index on tasks(ctime), but most (75%) of rows have a non-NULL parent, so that's not very effective.
I attempted to create a partial index for those rows:
CREATE INDEX CONCURRENTLY task_ctu_np ON tasks (ctime, user)
WHERE parent IS NULL;
but the query planner continues to choose the tasks(ctime) index instead of my partial index.
I'm using postgresql 8.2 on the server, and my psql client is 8.1.
First, I second Richard's suggestion that upgrading should be at the top of your priority list. Areas such as partial indexes have, as I understand it, improved significantly since 8.2.
The second thing is that you really need the actual query plans with timing information (EXPLAIN ANALYZE), because without these we can't talk about selectivity, etc.
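For reference, a plan with timings for the query in question would come from something like the following ("user" is quoted here because it is a reserved word):
EXPLAIN ANALYZE
SELECT DISTINCT "user"
FROM tasks
WHERE ctime >= '2012-01-01' AND ctime < '2013-01-01'
  AND parent IS NULL;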
So my order of business if I were you would be to upgrade first and then tune after that.
Now, I understand that 8.3 is a big upgrade (it is the only one that caused us issues in LedgerSMB). You may need some time to address that, but the alternative is to fall further behind and to be asking questions about a version that fewer and fewer people know well as time goes on.