What does the "dmlreadexception" error mean on Moodle live logs when I try to pull from an external db log? And any suggestions to fix? - moodle

I am trying to use the external database log plugin in Moodle to copy over the standard log table into an external database for easier access to do some analytics work.
I activated the external db log and added all correct settings on the settings page. I clicked "test connection" and it connected successfully and returned the table column headers successfully. But if I click around and make some logs, they are visible in the standard log store but my external db table is still empty.
So I tried connecting to my external db locally in TablePlus, using the same credentials as in the external db log store settings, and I could connect and write successfully.
Next I went into the live logs and picked standard logs, and they showed up just fine. Then I clicked on external db logs (nothing inside except for 2 manually entered rows of data), and got this error:
URL: https://ohsu.mrooms3.net/
Debug info: ERROR: syntax error at or near "{"
LINE 1: SELECT COUNT('x') FROM {OpenLMSLog} WHERE courseid = $1 AND ... ^
SELECT COUNT('x') FROM {OpenLMSLog} WHERE courseid = $1 AND timecreated > $2 AND anonymous = $3
[array ( 0 => '1', 1 => 1604556625, 2 => 0, )]
Error code: dmlreadexception
Stack trace:
* line 486 of /lib/dml/moodle_database.php: dml_read_exception thrown
* line 329 of /lib/dml/pgsql_native_moodle_database.php: call to moodle_database->query_end()
* line 920 of /lib/dml/pgsql_native_moodle_database.php: call to pgsql_native_moodle_database->query_end()
* line 1624 of /lib/dml/moodle_database.php: call to pgsql_native_moodle_database->get_records_sql()
* line 1697 of /lib/dml/moodle_database.php: call to moodle_database->get_record_sql()
* line 1912 of /lib/dml/moodle_database.php: call to moodle_database->get_field_sql()
* line 1895 of /lib/dml/moodle_database.php: call to moodle_database->count_records_sql()
* line 262 of /admin/tool/log/store/database/classes/log/store.php: call to moodle_database->count_records_select()
* line 329 of /report/loglive/classes/table_log.php: call to logstore_database\log\store->get_events_select_count()
* line 48 of /report/loglive/classes/table_log_ajax.php: call to report_loglive_table_log->query_db()
* line 59 of /report/loglive/classes/renderer_ajax.php: call to report_loglive_table_log_ajax->out()
* line 462 of /lib/outputrenderers.php: call to report_loglive_renderer_ajax->render_report_loglive()
* line 53 of /report/loglive/loglive_ajax.php: call to plugin_renderer_base->render()
This is the only error message I get, even after turning on debugging in the developer settings. My goal is to configure an external db to track logs, but the lack of error messages when testing the connection makes this hard to debug.
Environment configuration: Open LMS 3.8 MP2 (Build: 20201008)
The external db is a Postgres db, so we selected the Postgres driver.

It looks like the SQL generated by the external db log store plugin is not compatible with Postgres, despite the Postgres driver being selected in the plugin settings: as the '{' error in the question shows, the {tablename} placeholder arrives at Postgres unsubstituted. See the plugin documentation for details.
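For context, Moodle's DML layer normally replaces the {tablename} placeholder with the real (prefixed) table name before the query is sent, so Postgres should have received something along these lines (a sketch; the exact name depends on the configured table and prefix):
SELECT COUNT('x') FROM OpenLMSLog WHERE courseid = $1 AND timecreated > $2 AND anonymous = $3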
To fix this, we switched the external database from Postgres to MariaDB, and we ended up getting the external db log store working.
Another note: you have to match your columns and data types exactly to the schema in the Moodle db, and you have to set the ID column of your db table to auto-increment. If you don't know what the schema looks like, there is an Admin SQL interface under "Reports" that lets you run SQL; next to it there is a button to view the schema for any table in the db.
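For illustration, here is a minimal MariaDB sketch of such a table, assuming the column list of the Moodle 3.8 standard log table and the table name OpenLMSLog from the error above; verify every column and type against your own schema using the viewer mentioned above:
CREATE TABLE OpenLMSLog (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY, -- the ID column must auto-increment
    eventname VARCHAR(255) NOT NULL,
    component VARCHAR(100) NOT NULL,
    action VARCHAR(100) NOT NULL,
    target VARCHAR(100) NOT NULL,
    objecttable VARCHAR(50) DEFAULT NULL,
    objectid BIGINT DEFAULT NULL,
    crud VARCHAR(1) NOT NULL,
    edulevel TINYINT NOT NULL,
    contextid BIGINT NOT NULL,
    contextlevel BIGINT NOT NULL,
    contextinstanceid BIGINT NOT NULL,
    userid BIGINT NOT NULL,
    courseid BIGINT DEFAULT NULL,
    relateduserid BIGINT DEFAULT NULL,
    anonymous TINYINT NOT NULL DEFAULT 0,
    other LONGTEXT DEFAULT NULL,
    timecreated BIGINT NOT NULL,
    origin VARCHAR(10) DEFAULT NULL,
    ip VARCHAR(45) DEFAULT NULL,
    realuserid BIGINT DEFAULT NULL
);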

Related

Query in Redshift using DbVisualizer fails in a particular installation

I am running the following query in DbVisualizer:
select top 100 * from analytical.dwh_sales where _material = '000032';
And it is working properly.
But when the same query is executed on another computer (an export from my DbVisualizer installation to another machine), we get the following error:
[Code: 500310, SQL State: 42703] [Amazon](500310) Invalid operation: column "loc_XXX " does not exist in dwh_sales;
I don't know if you can notice, but it is adding a space after loc_XXX. I am not sure if that is why it works on one computer and not on the other.
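For what it's worth, in Redshift (as in Postgres) a quoted identifier preserves trailing spaces, so "loc_XXX " and "loc_XXX" are different names. A sketch using the masked column name from the error:
SELECT "loc_XXX " FROM analytical.dwh_sales; -- looks up a column whose name literally ends in a space
SELECT "loc_XXX" FROM analytical.dwh_sales;  -- exact name without padding
SELECT loc_xxx FROM analytical.dwh_sales;    -- unquoted names are folded to lower case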

Issues while using the Snowflake component in Talend

To transfer data from MS SQL Server 2008 to Snowflake I used Talend, but every time I get this error:
java.io.IOException: net.snowflake.client.loader.Loader$ConnectionError: State: CREATE_TEMP_TABLE, SQL compilation error: error line 1 at position 68
invalid identifier '"columnname"'
at org.talend.components.snowflake.runtime.SnowflakeWriter.close(SnowflakeWriter.java:397)
at org.talend.components.snowflake.runtime.SnowflakeWriter.close(SnowflakeWriter.java:52)
at local_project.load_jobnotes_0_1.Load_Jobnotes.tMSSqlInput_1Process(Load_Jobnotes.java:2684)
at local_project.load_jobnotes_0_1.Load_Jobnotes.runJobInTOS(Load_Jobnotes.java:3435)
at local_project.load_jobnotes_0_1.Load_Jobnotes.main(Load_Jobnotes.java:2978)
Caused by: net.snowflake.client.loader.Loader$ConnectionError: State: CREATE_TEMP_TABLE, SQL compilation error: error line 1 at position 68
invalid identifier '"ID"'
at net.snowflake.client.loader.ProcessQueue.run(ProcessQueue.java:349)
at java.lang.Thread.run(Thread.java:748)
Caused by: net.snowflake.client.jdbc.SnowflakeSQLException: SQL compilation error: error line 1 at position 68
The column does exist in my Snowflake DB, yet I still get an error saying the column does not exist.
On analysing what query Talend executes in Snowflake, I found that it tries to create a temporary table to store the data, but in doing so it wraps every selected column in double quotes, hence the invalid identifier '"columnname"' error.
If I execute the same query manually without the double quotes, it works fine. Can you please let us know what the workaround for this issue is?
The query executed by Talend in Snowflake, for your reference:
CREATE TEMPORARY TABLE "Tablename_20171024_115736_814_1"
AS SELECT "column1","column2","column3"
FROM "database"."schema"."table" WHERE FALSE
The issue is most likely due to a case mismatch between the object names in Snowflake and what is being sent through the connector. On the Snowflake side, unquoted object names are stored in upper case. I suggest you try passing COLUMN1, COLUMN2, etc. and see if that works.
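As a quick illustration of Snowflake's identifier resolution (hypothetical table and column names):
CREATE TABLE t (id INT); -- unquoted, so the column is stored as ID
SELECT id FROM t;        -- works: folded to ID
SELECT "ID" FROM t;      -- works: exact match
SELECT "id" FROM t;      -- fails: invalid identifier '"id"'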
You can also try setting QUOTED_IDENTIFIERS_IGNORE_CASE to true; it might help.
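For example, at session level (a sketch; the parameter can also be set at the account level):
ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;
-- After this, "id", "Id" and "ID" all resolve to the column ID.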
I found that this issue is due to mixed-case database or schema names not being properly applied by Talend. I discovered a workaround: updating the Snowflake connector's role parameter with an entry like the one shown in the screenshot.

Robot Framework: Database library keywords not getting executed

I recently started working with Robot Framework, and I had a requirement to connect to a Postgres db.
I am able to connect to the db, but when I try to execute queries the flow gets stuck; the test does not even fail. Following is what I did:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${current_row_count} =    Row Count    Select * from xyz
The first statement executes fine, but then it gets stuck on the second statement.
Can somebody help me out with this?
To execute a query and get data from the result:
Connect To Database    psycopg2    ${DBName}    ${DBUser}    ${DBPass}    ${DBHost}    ${DBPort}
${output} =    Query    SELECT * from xyz;
Log    ${output}
${DataResults} =    Get From List    ${output}    0
${DataResults} =    Convert To List    ${DataResults}
${DataResults} =    Get From List    ${DataResults}    0
${DataResults} =    Convert To String    ${DataResults}
Disconnect From Database
You are not executing your query... read the bit of documentation and the example below ;)
The example uses placeholder variables; substitute your own data ;)
Name: Connect To Database Using Custom Params
Source: DatabaseLibrary
Arguments:
[ dbapiModuleName=None | db_connect_string= ]
Loads the DB API 2.0 module given dbapiModuleName then uses it to connect to the database using the map string db_custom_param_string.
Example usage:
Connect To Database Using Custom Params    pymssql    database='${db_database}', user='${db_user}', password='${db_password}', host='${db_host}'
${queryResults} =    Query    ${query}
Disconnect From Database

PowerCenter SQL1224N error connecting to DB2

I'm running a workflow in PowerCenter that constantly gets an SQL1224N error.
The process executes a query against one table (POLIZA) with 800k rows; it retrieves the first 10k rows and then starts to execute against another table with 75M rows. At that moment an idle-thread error appears in DB2, but the PowerCenter process keeps running, retrieving the 75M rows. When that completes (after 20 minutes), the following errors come up, related to the first table:
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
[IBM][CLI Driver] SQL1224N A database agent could not be started to service a request, or was terminated as a result of a database system shutdown or a force command. SQLSTATE=55032
sqlstate = 40003
Database driver error...
Function Name : Fetch
SQL Stmt : SELECT POLIZA.BSPOL_BSCODCIA, POLIZA.BSPOL_BSRAMOCO
FROM POLIZA
WHERE
EXA01.POLIZA.BSPOL_IDEMPR='0015' for read only with ur
Native error code = -1224
DB2 Fatal Error].
I have a similar process running against the same 2 tables and it is working fine; the only difference I can see is that the DB2 user is different.
Any idea how I can fix this?
Regards
The common causes for -1224 are:
Your instance or database has crashed, or
Something/somebody is forcing off your application (FORCE APPLICATION or equivalent)
As for the crash, I think you would know by now. This typically requires a database or instance restart. At any rate, can you please have a look into your DIAGPATH to check for any FODC* directories whose timestamps match the timestamps of the -1224 errors?
As for the FORCE case, you should find some evidence of the -1224 in db2diag.log. Try searching for the decimal -1224, but also for its hex representation (0xFFFFFB38).

Error when trying to execute SQL

I'm using Mule Studio and am trying to insert some data into a Postgres database. I am modifying the log4j.properties file; below is how it looks:
log4j.rootLogger = DEBUG, postgres
#
log4j.appender.postgres=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.postgres.layout=org.apache.log4j.PatternLayout
log4j.appender.postgres.driver=org.postgresql.Driver
log4j.appender.postgres.URL=jdbc:postgresql://127.0.0.1:5432/testing
log4j.appender.postgres.user=postgres
log4j.appender.postgres.password=pw
log4j.appender.postgres.sql=INSERT INTO LOGS VALUES ('%x', '%d{yyyy-MM-dd}','%C','%p','%m');
The error message that I get is
log4j:ERROR Failed to excute sql
org.postgresql.util.PSQLException: ERROR: syntax error at or near "edu"
'edu' is the first part of my project name (edu-stream-ucdnews). The string 'edu' only appears in the project title, not in my data. I know the error arises from '%m' when I try to insert the data, because when I change it to a hard-coded message like 'Hello', I don't get any error.
How do I solve this issue?
Are you sure you are connected to the database?
Try putting the field names in your SQL, and let's check whether we get a better error log:
log4j.appender.postgres.sql=INSERT INTO LOGS (field1, field2, ...) VALUES ('%x', '%d{yyyy-MM-dd}','%C','%p','%m');
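As a side note, a likely explanation for the syntax error (an assumption, since the actual log message isn't shown): log4j's JDBCAppender substitutes %m into the statement literally, without escaping, so any message containing a single quote breaks the generated SQL. A hypothetical expansion:
-- Hypothetical message: Deploying app 'edu-stream-ucdnews'
-- The unescaped quotes terminate the string early, so parsing fails at or near "edu":
INSERT INTO LOGS VALUES ('', '2013-05-16','org.mule.module.SomeClass','INFO','Deploying app 'edu-stream-ucdnews'');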