I'm using the IBM DB2 Text Search index (DB2TS) on LUW, latest version. The statements are executed in IBM Data Studio, latest version, as DB2ADMIN.
From time to time I execute
CALL SYSPROC.SYSTS_UPDATE('M2F', 'IX_X3_2DEDEMSJ', '', 'en_US', ?)
to update the index.
One index (of 10) cannot be updated. The error codes are SQLCODE=-20426, SQLSTATE=38H13.
It says: A text search management function is in conflict with a pending/running function.
I tried to DROP the index:
CALL SYSPROC.SYSTS_DROP('M2F', 'IX_X3_2DEDEMSJ', '', 'en_US', ?)
-> Same error.
I tried to CLEAR the message events -> Same error.
I restarted the database, the DB2TS service, and finally the whole server.
Still the same message. Is there anybody out there who has an idea?
How can I see the pending tasks on an index? Is it possible to cancel tasks by command?
Many thanks :-)
Oliver
I found the solution myself:
CALL SYSPROC.SYSTS_CLEAR_COMMANDLOCKS('M2F', 'IX_X3_2DEDEMSJ', 'en_US', ?)
Link to IBM documentation
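To answer my own follow-up question about seeing pending tasks: if I remember correctly, DB2 Text Search also ships an administrative view, SYSIBMTS.TSLOCKS, that lists outstanding command locks. The view name is from memory, so please verify it against the documentation linked above before relying on it.
-- list command locks held by text search administration commands (view name assumed, not verified)
SELECT *
FROM SYSIBMTS.TSLOCKS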
I'm converting a DBMS_SCHEDULER.CREATE_JOB call from Oracle SQL to PostgreSQL, but I can't find an equivalent to the Oracle package in PostgreSQL.
Can anyone help me resolve this?
DBMS_SCHEDULER.CREATE_JOB(
JOB_NAME => CONCAT('ANALYZE_',my Column,'_',my Column),
JOB_TYPE => 'PLSQL_BLOCK',
JOB_ACTION => my Column,
ENABLED => TRUE,
COMMENTS => CONCAT('ANALYZE_',my Column)
);
PostgreSQL doesn't have a built-in scheduler. You can schedule jobs using the operating system's built-in scheduler. Or you can install pgAgent and use that to schedule jobs. Or you could use any number of third-party job scheduling tools.
I'm not completely sure what your job is doing here: you say the job_action is "my column", which implies that the column in the table contains a PL/SQL block, but that seems odd if "my column" is also used to create the job_name. I'd guess that whatever solution you go with will need a bit of additional plumbing in PostgreSQL. For example, rather than creating the job, perhaps you write a row to a table and then have a separate job that goes through that table and takes the appropriate action.
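If you end up using a third-party scheduler, a rough sketch with the pg_cron extension (just one option, and all names here are illustrative rather than taken from your schema) could look like this:
-- roughly the same job expressed with pg_cron; adjust the schedule to your needs
SELECT cron.schedule(
    'ANALYZE_my_table',      -- JOB_NAME
    '0 3 * * *',             -- run nightly at 03:00 (standard cron syntax)
    'ANALYZE my_table'       -- JOB_ACTION: the statement to execute
);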
I ran into a Cannot enlarge string buffer message on my running PostgREST API. I guess some tables are too large to work successfully with the API.
I am using the Docker postgrest/postgrest container from https://hub.docker.com/r/postgrest/postgrest with version PostgREST 5.1.0.
Everything works as expected, but once the tables get too large, I get the following error message:
hint null
details "Cannot enlarge string buffer containing 1073741822 bytes by 1 more bytes."
code "54000"
message "out of memory"
I can't determine the threshold at which it stops working.
Is there a possibility to enlarge the string buffer in some config file, or is this hardcoded?
Are there any limits on the table size when working with the API? So far I couldn't find any information in the docs.
=========== Update
The Postgres logs give me the following SQL query:
WITH pg_source AS (
SELECT "public"."n_osm_bawue_line".*
FROM "public"."n_osm_bawue_line"
)
SELECT null AS total_result_set,
pg_catalog.count(_postgrest_t) AS page_total,
array[]::text[] AS header,
coalesce(json_agg(_postgrest_t), '[]')::character varying AS body
FROM (
SELECT *
FROM pg_source
) _postgrest_t
I use the following Postgres version:
"PostgreSQL 11.1 (Debian 11.1-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit"
Unless you recompile PostgreSQL, it is not possible to raise the limit (defined here).
My suggestion would be to reduce the size of the payload (are you sure you need all the data?) or to fetch the payload in multiple requests.
With PostgREST you can do vertical filtering (select just the columns that you need) or paginate to reduce the number of rows you get in one request.
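For example (the column names here are only placeholders, since I don't know your schema), a vertically filtered and paginated request could look like this; the Range/Range-Unit headers are PostgREST's paging mechanism:
GET /n_osm_bawue_line?select=id,name HTTP/1.1
Range-Unit: items
Range: 0-999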
The error message comes from PostgreSQL. PostgREST just wraps the message in JSON and sends the HTTP response.
As a first step towards finding the problem, look at the exact HTTP request you make to trigger the error.
Then enable PostgreSQL logging and repeat the request; check the logs and you'll see the SQL query that causes this error. Run that query through pgAdmin or psql to make sure you have the problematic query.
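If it helps, a minimal way to turn statement logging on and off again (assuming you can connect as a superuser):
-- log every statement the server receives (verbose; revert when you are done)
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- afterwards
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();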
Update your question with your findings. The SQL query would be what is needed to continue.
After that you could add a postgresql tag to your question.
There is always the possibility that the file being imported is either corrupted or malformed because of any number of reasons.
In my case I just happened to discover that my file had something like incorrect line endings (long story, unnecessary here), which caused the whole file to appear as one line, thus causing the obvious result. You may have something similar in your case that requires a find-and-replace kind of solution.
For whatever benefit to anyone else, I used this to resolve it:
tr -d '\0' < bad_file.csv > bad_file.csv.fixed
Whenever I execute a query in PostgreSQL, this is the error message I receive:
Transaction ID not found in the session.
Does anyone have any idea how to resolve this? I recently created a new user, but I was unable to find documentation that even shows this as a valid error.
Additional Details:
I've managed to resolve the error by re-connecting with admin credentials.
I was using PG Admin V4 with Postgres V9.6, and that was the only message appearing in any query I executed, even if it was a basic query like 'SELECT NOW()'.
At the same time, this was the error message being received by the client device (an iOS device with an AWS Lambda / NodeJS backend):
'message' : {
'name' : 'error',
'length' : 114,
'severity' : 'fatal',
'code' : '28000',
'file' : 'miscinit.c',
'line' : '587',
'routine' : 'InitializeSessionUserId'
}
I assume you found a solution, but for anyone else who finds this post: I had the same issue, and simply closing PG Admin 4 and restarting it cleared it up.
To anyone who has this problem, all you have to do is:
1) Reconnect to the database
2) Open a new Query Tab. (Run your query here)
You're welcome.
For me, changing 'localhost' to '127.0.0.1' in the connection settings helped, as mentioned here: https://stackoverflow.com/a/59747781/2590805
Disconnecting and reconnecting to the database solved this issue for me; it wasn't necessary to exit/open PGAdmin 4 completely.
Create a new Query Editor tab; that works for me.
This is not a PostgreSQL error message. It must come from something else in the stack you are using - a client driver, ORM, etc.
Please post a more detailed question with full information on the stack you're using.
So I don't know the exact specifics of my solution, but I found this issue in the following circumstance:
1) A database user was created.
2) A role was assigned to the user.
3) A transaction was used.
I'm still not entirely sure I discovered the solution to the root problem, but if others have the same scenario, it might help to troubleshoot it further. If any of those three were not present, I never encountered the issue.
Has anyone found an explanation for this problem? I am also getting a "Transaction ID not found in the session." error. It's for a long-running (several days) query. I ran it on a 10% sample of my data and had no trouble, but now I need to repeat the process for the full dataset. I reconnected to the database, and the query appears to still be active. A new idle query appears as follows:
SELECT rel.oid, rel.relname AS name,
(SELECT count(*) FROM pg_trigger WHERE tgrelid=rel.oid AND tgisinternal = FALSE) AS triggercount,
(SELECT count(*) FROM pg_trigger WHERE tgrelid=rel.oid AND tgisinternal = FALSE AND tgenabled = 'O') AS has_enable_triggers,
(CASE WHEN rel.relkind = 'p' THEN true ELSE false END) AS is_partitioned
FROM pg_class rel
WHERE rel.relkind IN ('r','s','t','p') AND rel.relnamespace = 2200::oid
AND NOT rel.relispartition
ORDER BY rel.relname;
I am running a DB2 query that unions two very large tables. I started the query 10 hours ago, and it doesn't seem to have finished yet.
However, when I check the status of the process using top, it shows the status as 'S'. Does this mean that my query has stopped running? I couldn't find any error message.
How can I check what is happening to the query?
In DB2 for LUW 11.1 there is a text-based dsmtop utility that allows you to monitor the DB2 instance, down to individual executing statements, in real time. Its pre-11.1 equivalent is called db2top.
There is also a Web-based application, IBM Data Server Manager, which has a free edition with basic monitoring features.
Finally, you can query one of the supplied SQL monitor interfaces, for example, the SYSIBMADM.MON_CURRENT_SQL view:
SELECT session_auth_id,
application_handle,
elapsed_time_sec,
activity_state,
rows_read,
SUBSTR(stmt_text,1,200)
FROM sysibmadm.mon_current_sql
ORDER BY elapsed_time_sec DESC
FETCH FIRST 5 ROWS ONLY
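Once you have spotted the application handle of your UNION query in that output, a follow-up along these lines (a sketch; pick whichever metric columns you care about) shows whether the connection is still burning CPU and reading rows:
SELECT application_handle,
       total_cpu_time,
       rows_read,
       rows_returned
FROM TABLE(MON_GET_CONNECTION(CAST(NULL AS BIGINT), -1)) AS t  -- NULL = all connections, -1 = current member
ORDER BY total_cpu_time DESC
FETCH FIRST 5 ROWS ONLY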
You can try this command as well:
db2 "SELECT agent_id,
Substr(appl_name, 1, 20) AS APPLNAME,
elapsed_time_min,
Substr(authid, 1, 10) AS AUTH_ID,
agent_id,
appl_status,
Substr(stmt_text, 1, 30) AS STATEMENT
FROM sysibmadm.long_running_sql
WHERE elapsed_time_min > 0
ORDER BY elapsed_time_min desc
FETCH first 5 ROWS only"
I want to customize some strings in Moodle, so I clicked on "Open language pack for editing". It runs until 69% and then throws the error "Error writing to database".
Error/moodle/dmlwriteexception
This indicates that a general error occurred when Moodle tried to write to the database. If you turn on Debugging you will get more detailed information about what the problem is.
MySQL
If you're using a MySQL database for your Moodle installation, this error can be caused by the server's max_allowed_packet size being configured incorrectly. Increasing this value may resolve the issue.
I have tried increasing the value of max_allowed_packet from 1M to 100M, but I'm still getting the same error.
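(For reference, and in case it helps others: the setting is usually raised in the [mysqld] section of my.cnf followed by a server restart, or temporarily like this; the value below is only an example.)
-- takes effect for new connections until the next server restart
SET GLOBAL max_allowed_packet = 104857600;  -- 100 MB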
Please help me
Try putting the following settings in your config.php:
#ini_set('display_errors', '1'); // NOT FOR PRODUCTION SERVERS!
$CFG->debug = 32767; // NOT FOR PRODUCTION SERVERS! // for Moodle 2.0 - 2.2, use: $CFG->debug = 38911;
$CFG->debugdisplay = true; // NOT FOR PRODUCTION SERVERS!
That should tell you more about the query that is causing the error, and help resolve your problem.
When you're done with debugging, please remove these lines, since they may pose a security risk.
Source: https://docs.moodle.org/23/en/Debugging#In_config.php
It's probably a custom plugin with an incorrect string id in the language file - https://moodle.org/mod/forum/discuss.php?d=222815
Try uninstalling any custom modules, then try the language customisation again until you can identify which custom plugin is causing the issue.
So I'm not sure if this is related, but since I received the exact same message, I'm assuming so.
I am new to Moodle, and tried to install it locally on my machine. Once I was comfortable with that I migrated it to a hosted environment. I am certain I did not execute the process correctly in terms of user permissions, since as soon as I migrated the instance I started getting the "error writing to database" exception.
After a lot of research, and after trying many solutions that did not work, I eventually found something that did. It is dangerous to do this on a database that already has records written to it, but the solution was to drop the 'id' column of the affected table and add it again with auto_increment on:
Step 1: Turn on Debugging in the Page settings of your Moodle site. Any database insert will then likely return an error like this:
Debug info: Field 'id' doesn't have a default value
INSERT INTO mdl_course_sections (course,section,summary,summaryformat,sequence,name,visible,availability,timemodified) VALUES(?,?,?,?,?,?,?,?,?)
[array (
0 => '2',
1 => 10,
2 => '',
3 => '1',
4 => '',
5 => NULL,
6 => 1,
7 => NULL,
8 => 1548419949,
)]
Error code: dmlwriteexception
Step 2: Since this gives you the table name and the affected field, you can then execute the following SQL statements:
ALTER TABLE {table_name} DROP COLUMN {column_name};
ALTER TABLE {table_name} ADD COLUMN {column_name} INT AUTO_INCREMENT UNIQUE FIRST;
Since there are no restrictions here, and you are dropping a column of primary keys that are likely foreign keys in other tables, I would not advise this unless it is a fresh install. But it worked well for me.