Oracle - NCLOB issue - inserted value too large for column - Oracle 10g

I am working on Oracle 10g, and I am currently facing an issue with an NCLOB column.
The application uses Silverlight, WCF, NHibernate, and an Oracle 10g database.
The application fires a select query that contains XML data for the NCLOB column, and it gives me the following error:
"inserted value too large for column".
Can anyone please help me solve this issue?
Thanks in advance,
Mahesh
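As a first debugging step (an addition, not from the original thread), it can help to confirm in the data dictionary that the target column really is an NCLOB rather than a size-capped NVARCHAR2; a minimal sketch, where MY_TABLE is a placeholder name:
-- MY_TABLE is a placeholder; use the table the query writes to
SELECT column_name, data_type, data_length
FROM user_tab_columns
WHERE table_name = 'MY_TABLE';
If the column turns out to be NVARCHAR2 rather than NCLOB, any XML payload longer than its declared length is one common cause of this error.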

Related

Converting SQL query with FORMAT command to use in Entity Framework Core

I have an SQL query:
SELECT
FORMAT(datetime_scrapped, 'MMMM-yy') [date],
count(FORMAT(datetime_scrapped, 'MMMM-yy')) as quantity
FROM scrap_log
GROUP BY FORMAT(datetime_scrapped, 'MMMM-yy')
It basically summarises all the entries in the scrap_log table by month/year and counts how many entries are in each month/year, returning two columns (date and quantity). But I need to execute this in an ASP.NET Core API using Entity Framework Core. I tried using .FromSqlRaw(), but this expects all columns of the entity to be returned, so it doesn't work.
I can find plenty of info on implementing group by, count, etc. in EF, but I cannot find anything for the FORMAT(datetime, 'MMMM-yy') part. Could somebody please explain to me how to do this?
EDIT: It seems I am going about this the wrong way in terms of efficiency. I will look into alternative solutions based on the comments already made. Thanks for the fast response.
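One possible direction (my suggestion, not from the thread): group on the year and month components, which EF Core can translate, and build the 'MMMM-yy' label client-side after materialising the results. The SQL shape such a grouping would take, assuming SQL Server's YEAR and MONTH functions:
SELECT
    YEAR(datetime_scrapped) AS [year],
    MONTH(datetime_scrapped) AS [month],
    COUNT(*) AS quantity
FROM scrap_log
GROUP BY YEAR(datetime_scrapped), MONTH(datetime_scrapped)
In LINQ this corresponds to a GroupBy on an anonymous { Year, Month } key followed by Count(); the month-name formatting can then be applied in memory.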

Can I have more than 250 columns in the result of a PostgreSQL query?

Note that the PostgreSQL website mentions a limit on the number of columns of between 250 and 1600, depending on column types.
Scenario:
Say I have data in 17 tables, each table having around 100 columns. All are joinable through primary keys. Would it be okay if I selected all these columns in a single select statement? The query would be pretty complex, but it can be programmatically generated. The reason for doing this is to get denormalised data to populate a web page. Please do not ask why, though :)
Quite obviously, if I do create table table1 as (<the complex select statement>), I will hit the limit mentioned on the website. But do plain select queries also face the same restriction?
I could probably find this out by doing the exercise myself, and in the next few days I probably will. However, if someone has an idea about this and the problems I might face with a single query, please share the knowledge.
I can't find definitive documentation to back this up, but I have received the following error using JDBC on PostgreSQL 9.1 before:
org.postgresql.util.PSQLException: ERROR: target lists can have at most 1664 entries
As I say, though, I can't find the documentation for that, so it may vary by release.
I've found the confirmation: the maximum is 1664. (In the scenario above, 17 tables at roughly 100 columns each comes to about 1700 select-list entries, which would exceed that limit.)
This is one of the metrics available for confirmation in the INFORMATION_SCHEMA.SQL_SIZING table:
SELECT * FROM INFORMATION_SCHEMA.SQL_SIZING
WHERE SIZING_NAME = 'MAXIMUM COLUMNS IN SELECT';

DB2 ERRORCODE=-4229, SQLSTATE=null

I'm getting this error while executing a batch operation.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null
I can't find any pointers for debugging this error.
Any help is appreciated!
Search for the error on the IBM page:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.db2z10.doc.java%2Fsrc%2Ftpc%2Fimjcc_rjvjcsqc.htm
-4229
Message text: text-from-getMessage
Explanation: An error occurred during a batch execution.
User response: Call SQLException.getMessage to retrieve specific information about the problem.
So it might be related to any underlying error during the execution of your batch insert/update/delete.
For those who are looking for a solution to this error:
For me, this was due to
THE INSERT OR UPDATE VALUE OF FOREIGN KEY constraint-name IS INVALID.
DB2 SQL Error: SQLCODE=-530, SQLSTATE=23503
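If you hit the same -530, a query along these lines can reveal which values have no matching parent row (child_table, parent_table, and the column names are placeholders, not from the original answer):
-- Placeholder names: find child values that reference a missing parent
SELECT DISTINCT c.parent_id
FROM child_table c
LEFT JOIN parent_table p ON p.id = c.parent_id
WHERE p.id IS NULL;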
In my case, this occurred because I had a unique covering index defined on two columns, and the combination of those two values was not unique when I was inserting the records.
For anyone who is still wondering: try entering a unique record and check whether the error persists.
For me, it was because of a duplicate entry for a foreign key.
In my case, this was due to having rows in the database with the same PK IDs that the sequence was generating. The solution can be to fix these "future" row IDs or to adapt the sequence so that it jumps over those numbers.
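A sketch of that second option, assuming a hypothetical table my_table whose id column is fed by a sequence my_seq:
-- 1. Find the highest ID already present in the table
SELECT MAX(id) FROM my_table;
-- 2. Restart the sequence just past that value (e.g. if MAX(id) was 1000)
ALTER SEQUENCE my_seq RESTART WITH 1001;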

DB2 subselect with 'in', limitations?

Somebody mentioned to me that when performing a subselect with 'in' in DB2, there may be a limit to how many results can be returned by the subselect. If so, does anybody know what that limit is? Or, if it depends on the version of the database, how can I find this information? Thanks in advance.
The best place to find such information is on IBM's website. For instance, here are the limitations for DB2 on z/OS.
I didn't see anything about a limit on the number of values in an "IN" clause; however, the "Maximum number of columns that are in a table or view (the value depends on the complexity of the CREATE VIEW statement) or columns returned by a table function" is 750.
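For reference, the construct in question is an IN predicate driven by a subselect rather than by a literal value list; a minimal sketch, with orders and customers as hypothetical names:
SELECT o.order_id
FROM orders o
WHERE o.customer_id IN
    (SELECT c.customer_id
     FROM customers c
     WHERE c.region = 'EU');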
Unrelated to your question - the DB2 SQL Cookbook is an excellent reference for working with DB2.

Zend query maximum field length

I'm trying to insert a log record into my log table. But somehow, when the field value length exceeds 199 characters, my Apache restarts and my browser says net::ERR_CONNECTION_RESET.
I'm using the Zend Framework, so I insert my record with the following lines of code:
$db = Global_Db_Connection::getInstance();
$sql = "INSERT INTO log_table (log) VALUES ('ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd')";
$db->query($sql);
If I don't use the framework and instead use:
mysql_query($sql);
then I don't have any problems.
Can anyone tell me how to fix this limit in Zend?
Tried this on FreeBSD: same problem. I also found out that when trying to insert into a table that does not exist, it returns the same error. Only after shortening the value does it give the error that the table does not exist.
May be late to answer, but I have the solution. I found two solutions for Zend:
$db->getConnection()->query($sql); // use getConnection()
$db->exec($sql);
This issue is caused by the memory stack size. On Linux the stack grows as needed, but on Windows and Mac the issue bubbles up because of the fixed stack size. There is a ticket raised about this on php.net (here). Have a look. Enjoy!!!