Error writing to database in Moodle

I want to customize some strings in Moodle, so I clicked on "Open language pack for editing". It runs to 69% and then throws the error "Error writing to database".
Error/moodle/dmlwriteexception
This indicates that a general error occurred when Moodle tried to write to the database. If you turn on Debugging you will get more detailed information about what the problem is.
MySQL
If you're using a MySQL database for your Moodle installation, this error can be caused by the server's max_allowed_packet size being configured incorrectly. Increasing this value may resolve the issue.
I have tried increasing the value of max_allowed_packet from 1M to 100M, but I am still getting the same error.
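For reference, the value can be checked and raised at runtime like this (only the my.cnf / my.ini setting persists across a server restart):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 104857600; -- 100M, applies to new connections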
Please help me

Try putting the following settings in your config.php:
@ini_set('display_errors', '1'); // NOT FOR PRODUCTION SERVERS!
$CFG->debug = 32767; // NOT FOR PRODUCTION SERVERS! // for Moodle 2.0 - 2.2, use: $CFG->debug = 38911;
$CFG->debugdisplay = true; // NOT FOR PRODUCTION SERVERS!
That should tell you more about the query that is causing the error, and help resolve your problem.
When you're done with debugging, please remove these lines, since they may pose a security risk.
Source: https://docs.moodle.org/23/en/Debugging#In_config.php

It's probably a custom plugin with an incorrect string id in the language file - https://moodle.org/mod/forum/discuss.php?d=222815
Try uninstalling any custom modules, then try the language customisation again, until you can identify which custom plugin is causing the issue.

So I'm not sure if this is related, but since I received the exact same message, I'm assuming so.
I am new to Moodle, and tried to install it locally on my machine. Once I was comfortable with that I migrated it to a hosted environment. I am certain I did not execute the process correctly in terms of user permissions, since as soon as I migrated the instance I started getting the "error writing to database" exception.
After a lot of research, and after trying many solutions that did not work, I eventually found something that did. It is dangerous to do this on a database that already has records written to it; however, the solution was to drop the 'id' column of the affected table and add it again with auto_increment on:
Step 1: Turn on Debugging in the Page settings of your Moodle site. Any database insert will then likely return an error like this:
Debug info: Field 'id' doesn't have a default value
INSERT INTO mdl_course_sections (course,section,summary,summaryformat,sequence,name,visible,availability,timemodified) VALUES(?,?,?,?,?,?,?,?,?)
[array (
0 => '2',
1 => 10,
2 => '',
3 => '1',
4 => '',
5 => NULL,
6 => 1,
7 => NULL,
8 => 1548419949,
)]
Error code: dmlwriteexception
Step 2: Since this gives you the table name and the affected field, you can then execute the following SQL statements:
ALTER TABLE {table_name} DROP COLUMN {column_name};
ALTER TABLE {table_name} ADD COLUMN {column_name} INT AUTO_INCREMENT UNIQUE FIRST;
Since there are no safeguards here, and you are dropping a primary key column that is likely referenced as a foreign key in other tables, I would not advise this unless it is a fresh install. But it worked well for me.
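For example, with the mdl_course_sections table and the id column from the debug output above (MySQL/MariaDB syntax; take a backup first):
ALTER TABLE mdl_course_sections DROP COLUMN id;
ALTER TABLE mdl_course_sections ADD COLUMN id INT AUTO_INCREMENT UNIQUE FIRST;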

Related

PostgREST / PostgreSQL Cannot enlarge string buffer message

I run into a "Cannot enlarge string buffer" message on my running PostgREST API. I guess some tables are too large to work successfully with the API.
I am using the docker postgrest/postgrest container from https://hub.docker.com/r/postgrest/postgrest with the version PostgREST 5.1.0.
Everything is working as expected, but if the table size gets too large, I get the following error message.
hint null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code "54000"
message "out of memory"
I can't determine the threshold at which it stops working.
Is there a possibility to enlarge the string buffer in some config file or is this hardcoded?
Are there any limits on the table size when working with the API? So far I couldn't find any information in the docs.
=========== Update
The Postgres logs give me the following SQL query:
WITH pg_source AS (
SELECT "public"."n_osm_bawue_line".*
FROM "public"."n_osm_bawue_line"
)
SELECT null AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
SELECT *
FROM pg_source
) _postgrest_t
I use the following Postgres version:
"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Unless you recompile PostgreSQL, it is not possible to raise the limit (defined here).
My suggestion would be to try to reduce the size of the payload (are you sure you need all the data?) or to get the payload in multiple requests.
With PostgREST you can do vertical filtering (select just the columns that you need) or paginate to reduce the number of rows you get in one request.
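For example, against the table from the query in your update (the column names here are only placeholders):
GET /n_osm_bawue_line?select=id,name,geom&limit=1000&offset=0
GET /n_osm_bawue_line?select=id,name,geom&limit=1000&offset=1000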
The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.
As a first step in finding the problem, look at the exact HTTP request you make that triggers the error.
Then enable PostgreSQL logging and repeat the request, check the logs, and you'll see which SQL query causes the error. Run the query through pgAdmin or psql to make sure you have the problematic query.
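For example, statement logging can be switched on temporarily like this (requires superuser; remember to reset it afterwards):
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- repeat the failing request, then read the server log
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();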
Update your question with your findings. The SQL query would be what is needed to continue.
After that you could add a postgresql tag to your question.
There is always the possibility that the file being imported is either corrupted or malformed for any number of reasons.
I just happened to have discovered in my case that my file had something like incorrect line endings (long story, unnecessary here) which caused the whole file to appear as one line, thus causing the obvious result. You may have something similar in your case that requires a find+replace kind of solution.
For whatever benefit to anyone else, I used this to resolve it:
tr -d '\0' < bad_file.csv > bad_file.csv.fixed
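If it is not obvious what is wrong with a file, inspecting it first with standard tools can help you decide on the right fix:
file bad_file.csv
od -c bad_file.csv | head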

When I run a specific query I get: ORA-00604: error occurred at recursive SQL level 1, ORA-12899: value too large for column "PLAN_TABLE"."OBJECT_NAME"

I am using Oracle 12.1c. When I run a specific query (which I can't show for security reasons, and because it's unrelated), I get the exception:
ORA-00604: error occurred at recursive SQL level 1
ORA-12899: value too large for column "SOME_SCHEMA"."PLAN_TABLE"."OBJECT_NAME" (actual: 38, maximum: 30)
I can't make it work. I will try to revert the last changes I made, because it was working before.
BTW, I was running EXPLAIN PLAN and doing index optimizations.
Any idea why?
P.S. I will keep trying.
How I solved this:
When I was reverting and reviewing my last changes, I saw I had been running ALTER statements to add indexes, and each time I tried the query again to make sure it was still working.
When I reached a specific ALTER, I noticed the name of the index was too long. So even though the index was created successfully, the explain plan for the SELECT was failing, not the SELECT itself.
The solution:
I renamed the index to be shorter (30 characters maximum) and it worked.
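The rename itself is a single statement; something like this, with made-up names (keep the new name at 30 characters or fewer):
ALTER INDEX idx_item_part_number_properties_lookup RENAME TO idx_item_part_props;
-- optionally check for other over-long identifiers:
SELECT index_name FROM user_indexes WHERE LENGTH(index_name) > 30;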
Change table/column/index names size in oracle 11g or 12c
Why are Oracle table/column/index names limited to 30 characters?
Using EXPLAIN PLAN - Oracle website docs

Postgresql Transaction ID Not Found

Whenever I am executing a query in PostgreSQL, this is the error message I receive:
Transaction ID not found in the session.
Does anyone have any idea how to resolve this? I recently created a new user, but I was unable to find documentation that even shows this as a valid error.
Additional Details:
I've managed to resolve the error by re-connecting with admin credentials.
I was using PG Admin V4 with Postgres V9.6, and that was the only message appearing in any query I executed, even if it was a basic query like 'SELECT NOW()'.
At the same time, this was the error message being received by the client device (an iOS device with an AWS Lambda / NodeJS backend):
'message' : {
'name' : 'error',
'length' : 114,
'severity' : 'fatal',
'code' : '28000',
'file' : 'miscinit.c',
'line' : '587',
'routine' : 'InitializeSessionUserId'
}
I assume you found a solution, but for anyone else that finds this post, I had the same issue and I just closed PG Admin 4 and restarted it and it cleared up.
To anyone who has this problem, all you have to do is:
1) Reconnect to the database
2) Open a new Query Tab. (Run your query here)
You're welcome.
For me, changing 'localhost' to '127.0.0.1' in the connection settings helped, as mentioned here: https://stackoverflow.com/a/59747781/2590805
Disconnecting and reconnecting to the database solved this issue for me; it wasn't necessary to exit/open PGAdmin 4 completely.
Creating a new Query editor tab works for me.
This is not a PostgreSQL error message. It must come from something else in the stack you are using - a client driver, ORM, etc.
Please post a more detailed question with full information on the stack you're using.
So I don't know the exact specifics of my solution, but I found this issue in the following circumstance:
Database user was created.
Role was assigned for the user.
A transaction was used
I'm still not entirely sure I found the root cause, but if others have the same scenario, it might help troubleshoot it further. If any of those three are not used, then I never encountered the issue.
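For illustration only, the three ingredients above with made-up names (this is a sketch, not a guaranteed reproduction):
CREATE ROLE readonly;                              -- a role to grant
CREATE ROLE report_user LOGIN PASSWORD 'secret';   -- the new database user
GRANT readonly TO report_user;                     -- the role assigned to the user
-- then, connected as report_user, a transaction was used:
BEGIN;
SELECT now();
COMMIT;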
Has anyone found an explanation for this problem? I am also getting a "Transaction ID not found in the session." error. It's for a long-running (several days) query. I ran it on a 10% sample of my data and had no trouble, but now I need to repeat the process for the full dataset. I reconnect to the database and the query appears as still active. A new idle query appears as follows:
SELECT rel.oid, rel.relname AS name,
(SELECT count(*) FROM pg_trigger WHERE tgrelid=rel.oid AND tgisinternal = FALSE) AS triggercount,
(SELECT count(*) FROM pg_trigger WHERE tgrelid=rel.oid AND tgisinternal = FALSE AND tgenabled = 'O') AS has_enable_triggers,
(CASE WHEN rel.relkind = 'p' THEN true ELSE false END) AS is_partitioned
FROM pg_class rel
WHERE rel.relkind IN ('r','s','t','p') AND rel.relnamespace = 2200::oid
AND NOT rel.relispartition
ORDER BY rel.relname;

Postgresql function failed with "relation with OID xxxxx does not exist"

I am trying to extend an item profile table by parsing its part_number column further down into properties. It works fine outside a function.
ALTER TABLE tbl_item_info
ADD prop1 varchar(2),
ADD prop2 varchar(1),
ADD prop3 numeric(4,3);
UPDATE tbl_item_info
SET prop1 = substr(part_num,5,2)
, prop2 = substr(part_num,7,1)
, prop3 = to_number( substr(part_num,8,5) , '9G999')
WHERE ARRAY[left(part_num,3)] <@ ARRAY['NTX','EXC'];
But when I try to put the statements into a function, it always fails with the error "relation with OID xxxxx does not exist", pointing to the UPDATE statement.
I have no clue what it is trying to say. Any idea why?
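For reference, the wrapper is essentially the same UPDATE inside a plpgsql function, roughly like this (the function name is made up, and the body is simplified):
CREATE OR REPLACE FUNCTION parse_item_part_numbers() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE tbl_item_info
       SET prop1 = substr(part_num, 5, 2)
         , prop2 = substr(part_num, 7, 1)
         , prop3 = to_number(substr(part_num, 8, 5), '9G999')
     WHERE ARRAY[left(part_num, 3)] <@ ARRAY['NTX', 'EXC'];
END;
$$;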
I wish I had a definitive answer, but this seems to be related to a known bug in PostgreSQL as described here:
https://github.com/greenplum-db/gpdb/issues/1094
Bear in mind that the greenplum implementation of PostgreSQL is proprietary to Dell EMC, however, the core code issue is likely the same for all major PostgreSQL distributions. I am still researching this to determine if there is a good resolution to the problem. The database in which I experienced a markedly similar error is not the greenplum implementation of PostgreSQL. The error was thrown when I called the pg_relation_filepath() function in a query on an oid that was dynamically obtained from a record in the pg_class table that should have had an associated external file in a subdirectory of the ./base/ path. The error that was thrown was:
ERROR: relation "pg_toast_34474_index" does not exist
The point here is that for a toast entity to exist, it is supposed to be tied to another relation and acts as a reference to additional files created out on the storage media to accommodate additional data that does not fit into the owning relation's top level file - in this case, most likely a table. But when I search for the owning relation's oid (34474), the owner doesn't exist. Since the owner doesn't exist I think the logic assumes that the toast entity doesn't either, even though it has a record in the pg_class table.
This is as close as I can get to a root cause for now. Although the above link suggests code to improve the issue is supposed to have been released in version 8.3, my database has been upgraded from version 8.1 to version 9.4.7, so it appears that even though code may have improved between those two versions to prevent new occurrences of the problem, if the problem was created before the database was upgraded, the newer code does not know how to reassemble the tinker toys left behind from issues created by this apparent bug before the fix was implemented.
At present I am investigating if a PLPGSQL function can wrap and trap the error for all relations so I can identify which ones have the problem (as well as to solve my original problem of determining which relation is hosted in a specific file that the server.postmaster log tells me it is unable to read from - hopefully it is just an index I can drop/create).
I ran into this issue on server version 13.7. It did not occur on 14.3.
It happened when I changed the signature (parameters) of the stored procedure:
SQL Error [42883]: ERROR: function with OID 894070 does not exist
I removed the old procedure and created a new one.
But when I called a function which used that procedure, it triggered the error.
To fix it, I recreated the function that used the changed object.
So the general rule is:
look at where the error happens, recreate the object that triggers the error, and recompile the code that uses it.
Hope it helps.

Zend query maximum field length

I'm trying to insert a log record into my log table. But somehow, when the field value length exceeds 199 chars, my Apache restarts and my browser says net::ERR_CONNECTION_RESET.
I'm using the Zend Framework, so I insert my record with the following lines of code:
$db = Global_Db_Connection::getInstance();
$sql = "INSERT INTO log_table (log) VALUES ('ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd')";
$db->query($sql);
If I don't use the framework, using:
mysql_query($sql);
then I don't have any problems.
Can anyone tell me how to fix this limit in Zend?
Tried this on FreeBSD: same problem. I also found out that when trying to insert it into a table that does not exist, it returns the same error. Only after shortening the value does it give the error that the table does not exist.
Maybe late to answer, but I have the solution. Two solutions for Zend that I found:
$db->getConnection()->query($sql); // use getConnection()
$db->exec($sql);
This issue is because of memory stack size. On Linux the stack grows as needed, but on Windows & Mac the issue bubbles up because of the stack size. There is a ticket raised on php.net (here). Have a look. Enjoy!!!