Cocoa error 256 core data - iphone

I get the error "Cocoa error 256" when I try to save data. How can I fix it, and what is the problem?

According to the help reference in Xcode:
NSFileReadUnknownError
Read error, reason unknown
Available in Mac OS X v10.4 and later.
Declared in FoundationErrors.h.
Sadly, that's probably not too helpful, though it does tell you it's an unknown read error.

If it's a Core Data error, there is probably an actual error object somewhere near where the error occurs. If you dump the error object's userInfo dictionary, you can usually get a lot more detail than just the error code itself.
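For example, a minimal sketch in Objective-C (assuming a typical managed object context save; NSDetailedErrorsKey is where Core Data collects multiple validation errors):
NSError *error = nil;
if (![managedObjectContext save:&error]) {   // managedObjectContext: your NSManagedObjectContext (assumed)
    // Log the top-level error and its userInfo dictionary
    NSLog(@"Save failed: %@, userInfo: %@", error, [error userInfo]);
    // If several objects failed, the details are collected under NSDetailedErrorsKey
    for (NSError *detail in [[error userInfo] objectForKey:NSDetailedErrorsKey]) {
        NSLog(@"Detailed error: %@", [detail userInfo]);
    }
}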

This is what it boils down to (as Tegeril said)
NSFileReadUnknownError
Read error, reason unknown
Available in Mac OS X v10.4 and later.
Declared in FoundationErrors.h.
A file can also be a resource located at a URL/URI; if the URL has unencoded characters, it can cause this type of error.
Check the path to the resource/file.

I ran into exactly this error when populating an SQLite database for an iOS app using a custom script (i.e. not using Core Data). It turns out that there is some metadata which you have to update yourself after adding new rows. Find the row in Z_PRIMARYKEY where Z_NAME equals the name of the table you've just inserted into. Make sure that Z_MAX in this row is equal to the highest value of Z_PK in the table you've inserted the rows into. In my case, as soon as I updated Z_MAX with the correct number, the error went away.
So, for the "ZAUTHOR" table:
SELECT z_pk FROM ZAUTHOR ORDER BY z_pk DESC LIMIT 1; /* Returns 1234 */
UPDATE Z_PRIMARYKEY SET z_max = 1234 WHERE z_name = 'Author';
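Or, as a single statement using the same table names as above:
UPDATE Z_PRIMARYKEY SET z_max = (SELECT MAX(z_pk) FROM ZAUTHOR) WHERE z_name = 'Author';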
This is the article which helped me track down the error.

I get this error on Xcode 6 (and 7) when switching network connections while the Simulator is open, for example moving from one wireless network to another. The solution for me is to quit the Simulator and restart it.

Related

PostgREST / PostgreSQL Cannot enlarge string buffer message

I run into a "Cannot enlarge string buffer" message on my running PostgREST API. I guess some tables are too large to work successfully with the API.
I am using the postgrest/postgrest Docker container from https://hub.docker.com/r/postgrest/postgrest with PostgREST version 5.1.0.
Everything works as expected, but once a table gets too large, I get the following error message.
hint null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code "54000"
message "out of memory"
I can't determine the threshold at which it stops working.
Is there a possibility to enlarge the string buffer in some config file, or is this hardcoded?
Are there any limits on the table size when working with the API? So far I couldn't find any information in the documentation.
=========== Update
The Postgres logs give me the following SQL query:
WITH pg_source AS (
SELECT "public"."n_osm_bawue_line".*
FROM "public"."n_osm_bawue_line"
)
SELECT null AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
SELECT *
FROM pg_source
) _postgrest_t
I use the following Postgres version:
"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Unless you recompile PostgreSQL, it is not possible to raise the limit (defined here).
My suggestion would be to try to reduce the size of the payload (are you sure you need all the data?) or to get the payload in multiple requests.
With PostgREST you can do vertical filtering (just select the columns that you need) or paginate to reduce the number of rows you get in one request.
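For example, a request along these lines (the table name is taken from your logged query; col1 and col2 are hypothetical column names) combines vertical filtering with paging:
GET /n_osm_bawue_line?select=col1,col2&limit=1000&offset=0
Alternatively, you can paginate with the Range header (Range: 0-999) instead of limit/offset.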
The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.
As a first step for finding the problem, look at the exact HTTP request you make to trigger the error.
Then, enable PostgreSQL logging and repeat the request, check the logs, and you'll see the SQL query that causes this error. Run the query through pgAdmin or psql to make sure you've got the problematic query.
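For example, one way to turn on statement logging from psql (a sketch; you can also set log_statement in postgresql.conf and reload):
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();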
Update your question with your findings. The SQL query would be what is needed to continue.
After that you could add a postgresql tag to your question.
There is always the possibility that the file being imported is either corrupted or malformed for any number of reasons.
I just happened to discover in my case that my file had something like incorrect line endings (long story, unnecessary here), which caused the whole file to appear as one line, with the obvious result. You may have something similar in your case that requires a find-and-replace kind of solution.
For whatever benefit to anyone else, I used this to resolve it:
tr -d '\0' < bad_file.csv > bad_file.csv.fixed   # strips NUL (\0) bytes from the file

Postgresql function failed with "relation with OID xxxxx does not exist"

I am trying to extend an item profile table by parsing its part_number column further into properties. It works fine outside a function.
ALTER TABLE tbl_item_info
ADD prop1 varchar(2),
ADD prop2 varchar(1),
ADD prop3 numeric(4,3);
UPDATE tbl_item_info
SET prop1 = substr(part_num,5,2)
, prop2 = substr(part_num,7,1)
, prop3 = to_number( substr(part_num,8,5) , '9G999')
WHERE ARRAY[left(part_num,3)] <@ ARRAY['NTX','EXC'] ;
But when I try to put the statements into a function, it always fails with the error "relation with OID xxxxx does not exist", pointing to the UPDATE statement.
I have no clue what it is trying to say. Any idea why?
I wish I had a definitive answer, but this seems to be related to a known bug in PostgreSQL as described here:
https://github.com/greenplum-db/gpdb/issues/1094
Bear in mind that the Greenplum implementation of PostgreSQL is proprietary to Dell EMC; however, the core code issue is likely the same for all major PostgreSQL distributions. I am still researching this to determine if there is a good resolution to the problem. The database in which I experienced a markedly similar error is not the Greenplum implementation of PostgreSQL. The error was thrown when I called the pg_relation_filepath() function in a query on an oid that was dynamically obtained from a record in the pg_class table that should have had an associated external file in a subdirectory of the ./base/ path. The error that was thrown was:
ERROR: relation "pg_toast_34474_index" does not exist
The point here is that for a toast entity to exist, it is supposed to be tied to another relation and acts as a reference to additional files created out on the storage media to accommodate additional data that does not fit into the owning relation's top level file - in this case, most likely a table. But when I search for the owning relation's oid (34474), the owner doesn't exist. Since the owner doesn't exist I think the logic assumes that the toast entity doesn't either, even though it has a record in the pg_class table.
This is as close as I can get to a root cause for now. Although the above link suggests that code to improve the issue was released in version 8.3, my database has been upgraded from version 8.1 to version 9.4.7, so it appears that even though the code may have improved between those two versions to prevent new occurrences of the problem, if the problem was created before the database was upgraded, the newer code does not know how to reassemble the tinker toys left behind by this apparent bug before the fix was implemented.
At present I am investigating whether a PL/pgSQL function can wrap and trap the error for all relations so I can identify which ones have the problem (as well as solve my original problem of determining which relation is hosted in a specific file that the server/postmaster log tells me it is unable to read from - hopefully it is just an index I can drop and recreate).
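A minimal sketch of that kind of trap-and-report loop (generic catalog names; it simply probes every pg_class entry and reports the ones that raise an error):
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT oid, relname FROM pg_class LOOP
        BEGIN
            PERFORM pg_relation_filepath(r.oid::regclass);  -- raises for broken relations
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'problem relation % (oid %): %', r.relname, r.oid, SQLERRM;
        END;
    END LOOP;
END $$;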
I ran into this issue on server version 13.7. It did not occur on 14.3.
It happened when I changed the signature (parameters) of a stored procedure:
SQL Error [42883]: ERROR: function with OID 894070 does not exist
I removed the old procedure and created a new one.
But when I called a function which used that procedure, it triggered the error.
To fix it, I recreated the function which used the changed object.
So the general rule:
look where the error happens, recreate the object that triggers the error, and recompile the code which uses it.
Hope it helps.
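A minimal sketch of that fix, with hypothetical names (my_proc is the procedure whose parameters changed, my_caller is the function that still pointed at the old OID):
-- Recreating the caller makes it resolve the new procedure instead of the dropped OID
CREATE OR REPLACE FUNCTION my_caller() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    CALL my_proc(42);  -- hypothetical call; now bound to the recreated procedure
END $$;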

string or binary data error even if 0 rows are inserted

I am trying to insert data into a temp table by joining two other tables, but for some reason I keep getting this error: "String or binary data would be truncated."
On debugging, I realized there are no rows being inserted into the table, and it still throws the error.
To get rid of this, I finally used SET ANSI_WARNINGS OFF inside the stored procedure and it worked fine. Now the issue is that I cannot recompile the stored procedure with this setting in the production database, and I want this issue fixed. The other thing which is more irritating is that, by default, ANSI_WARNINGS is actually OFF for the database.
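A minimal, self-contained illustration of the workaround described above (hypothetical temp tables; with ANSI_WARNINGS OFF the value is silently truncated instead of raising the error):
CREATE TABLE #src (name varchar(50));
INSERT INTO #src (name) VALUES ('a value longer than ten characters');
CREATE TABLE #tmp (name varchar(10));
SET ANSI_WARNINGS OFF;  -- truncation no longer raises "String or binary data would be truncated"
INSERT INTO #tmp (name) SELECT name FROM #src;
SET ANSI_WARNINGS ON;   -- restore the default afterwards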
Please let me know what the possible solution could be. It would be of great help.

zend query maximum field length

I'm trying to insert a log record into my log table. But somehow, when the field value length exceeds 199 chars, my Apache restarts and my browser says net::ERR_CONNECTION_RESET.
I'm using the Zend Framework, so I insert my record with the following lines of code:
$db = Global_Db_Connection::getInstance();
$sql = "INSERT INTO log_table (log) VALUES ('ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd')";
$db->query($sql);
If I don't use the framework, using:
mysql_query($sql);
Then I don't have any problems.
Can anyone tell me how to fix this limit in Zend?
Tried this on FreeBSD, same problem. I also found out that when trying to insert it into a table that does not exist, it returns the same error. Only after shortening the value does it give the error that the table does not exist.
It may be late to answer, but I have the solution. I found two solutions for Zend:
$db->getConnection()->query($sql); // use getConnection()
$db->exec($sql);
This issue is because of the memory stack size. On Linux the stack grows as needed, but on Windows and Mac the issue surfaces because of the stack size. There is a ticket raised for this on php.net (here). Have a look. Enjoy!

Warning message on Insert or update

I keep getting a message whenever I insert or update any record in any table in my database:
[34931.406] SQL_Statement 1 4 -1 999999999 01S02 -5 -- cursor updatability changed
I was wondering exactly what this message means and why I am getting it. Is it safe to ignore?
Am I supposed to react to it / do something different?
Thanks for reading
Just in case it's necessary:
I'm running PostgreSQL 9.1.2 on Ubuntu LTS
I'm using 32bit ODBC psqlodbc_09_01_0100 on Windows 7 x64
I'm also using a third-party ODBC library, "SQLTools" by PerfectSync - but I don't think that's producing the message, because I also use it with MySQL with no problems
Are you updating the cursor's elements directly?
In that case, the warning message is informing you that you are changing the cursor's elements while it is open.
Something similar happens in Java when trying to change the number of elements in a list (adding or removing elements) while iterating over it.