Mediawiki migration error: relation "page" does not exist - postgresql

I have migrated MediaWiki from CentOS 5 to CentOS 6, and from Postgres 8.1 to Postgres 8.4.
Everything was fine until I tried to access my main page.
When I do, the following error appears:
> A database error has occurred Query: SELECT
> page_id,page_namespace,page_title,page_restrictions,page_counter,page_is_redirect,page_is_new,page_random,page_touched,page_latest,page_len FROM page WHERE page_namespace = '0' AND page_title = 'Main_Page'
> LIMIT 1 Function: Article::pageData Error: 1 ERROR: relation "page"
> does not exist LINE 1: ...ge_random,page_touched,page_latest,page_len
> FROM page WHER... ^ Backtrace:
>
> #0 /var/www/html/mediawiki_svn/includes/db/Database.php(616): DatabasePostgres->reportQueryError('ERROR: relatio...', 1, 'SELECT
> page_id...', 'Article::pageDa...', false)
> #1 /var/www/html/mediawiki_svn/includes/db/Database.php(1026): Database->query('SELECT page_id...', 'Article::pageDa...')
> #2 /var/www/html/mediawiki_svn/includes/db/Database.php(1106): Database->select('page', Array, Array, 'Article::pageDa...', Array,
> Array)
> #3 /var/www/html/mediawiki_svn/includes/Article.php(369): Database->selectRow('page', Array, Array, 'Article::pageDa...')
> #4 /var/www/html/mediawiki_svn/includes/Article.php(381): Article->pageData(Object(DatabasePostgres), Array)
> #5 /var/www/html/mediawiki_svn/includes/Wiki.php(300): Article->pageDataFromTitle(Object(DatabasePostgres), Object(Title))
> #6 /var/www/html/mediawiki_svn/includes/Wiki.php(60): MediaWiki->initializeArticle(Object(Title), Object(WebRequest))
> #7 /var/www/html/mediawiki_svn/index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage),
> Object(User), Object(WebRequest))
> #8 {main}
When I checked the database, I could find the tables objectcache and page.
Any ideas?

In the end I made the MediaWiki site static, since it is going to be archived.


Why is Flutter building many times?

When I hot-restart the app, Flutter builds 3 times. This is the output:
Syncing files to device Android SDK built for x86...
Restarted application in 1,018ms.
I/flutter ( 4736): Built Count: 1
I/flutter ( 4736): Built Count: 2
I/flutter ( 4736): writing...
I/flutter ( 4736): writing...
I/flutter ( 4736): reading...:
I/flutter ( 4736): reading...:
I/flutter ( 4736): Built Count: 3
I/flutter ( 4736): writing...
I/flutter ( 4736): reading...:
The constructor of a widget can be called multiple times, once for each place it is declared. As long as you have a const constructor and the values passed remain unchanged, the build method won't get triggered multiple times for every initialisation.

phpMyAdmin application crashes with "Fatal error: Uncaught ValueError: mysqli_result::data_seek()"

When I run the query in phpMyAdmin, I get the following error. What is wrong with the query I am using?
Query:
SELECT hashtag, total, tarih FROM social_trend WHERE tarih > UNIX_TIMESTAMP() ORDER BY total DESC LIMIT 5
Error:
Fatal error: Uncaught ValueError: mysqli_result::data_seek(): Argument #1 ($offset) must be greater than or equal to 0 in
C:\xampp\phpMyAdmin\libraries\classes\Dbi\DbiMysqli.php:270 Stack trace:
#0 C:\xampp\phpMyAdmin\libraries\classes\Dbi\DbiMysqli.php(270): mysqli_result->data_seek(-1)
#1 C:\xampp\phpMyAdmin\libraries\classes\DatabaseInterface.php(2726): PhpMyAdmin\Dbi\DbiMysqli->dataSeek(Object(mysqli_result), -1)
#2 C:\xampp\phpMyAdmin\libraries\classes\Display\Results.php(4464): PhpMyAdmin\DatabaseInterface->dataSeek(Object(mysqli_result), -1)
#3 C:\xampp\phpMyAdmin\libraries\classes\Display\Results.php(4203): PhpMyAdmin\Display\Results->_getSortedColumnMessage(Object(mysqli_result), 'total')
#4 C:\xampp\phpMyAdmin\libraries\classes\Sql.php(1669): PhpMyAdmin\Display\Results->getTable(Object(mysqli_result), Array, Array, true)
#5 C:\xampp\phpMyAdmin\libraries\classes\Sql.php(1470): PhpMyAdmin\Sql->getHtmlForSqlQueryResultsTable(Object(PhpMyAdmin\Display\Results), './themes/pmahom...', NULL, Array, false, 0, 0, true, Object(mysqli_result), Array, true)
#6 C:\xampp\phpMyAdmin\libraries\classes\Sql.php(2255): PhpMyAdmin\Sql->getQueryResponseForNoResultsReturned(Array, '808rpg', 'social_trend', NULL, 0, Object(PhpMyAdmin\Display\Results), NULL, './themes/pmahom...', NULL, Object(mysqli_result), 'SELECT hashtag,...', NULL)
#7 C:\xampp\phpMyAdmin\import.php(758): PhpMyAdmin\Sql->executeQueryAndGetQueryResponse(Array, false, '808rpg', 'social_trend', NULL, NULL, NULL, NULL, NULL, NULL, 'db_structure.ph...', './themes/pmahom...', NULL, NULL, NULL, 'SELECT hashtag,...', NULL, NULL)
#8 {main} thrown in C:\xampp\phpMyAdmin\libraries\classes\Dbi\DbiMysqli.php on line 270
This is a bug with MySQL 8.0: on an empty table it throws this error.
I went back to MySQL 5.7 and have had no issues so far.

Problem with editing data with sqflite in flutter

What is wrong with this code?
The debug console shows the SQL being executed, but for some reason it doesn't work.
Future<void> _toggleTodoItem(TodoItem todo) async {
  final int count = await this._db.rawUpdate(
      /*sql=*/ '''
      UPDATE $kDbTableName
      SET content = ${todo.content},
      SET number = ${todo.number}
      WHERE id = ${todo.id};''');
  print('Updated $count records in db.');
}
There is an error
E/SQLiteLog( 7167): (1) near "SET": syntax error in "UPDATE example1_tbl
E/SQLiteLog( 7167): SET content = n,
E/SQLiteLog( 7167): SET number = 1
E/SQLiteLog( 7167): WHERE id = 7;"
E/flutter ( 7167): [ERROR:flutter/lib/ui/ui_dart_state.cc(177)] Unhandled Exception: DatabaseException(near "SET": syntax error (code 1 SQLITE_ERROR): , while compiling: UPDATE example1_tbl
E/flutter ( 7167): SET content = n,
E/flutter ( 7167): SET number = 1
E/flutter ( 7167): WHERE id = 7;) sql ' UPDATE example1_tbl
E/flutter ( 7167): SET content = n,
E/flutter ( 7167): SET number = 1
E/flutter ( 7167): WHERE id = 7;' args []}
"UPDATE" syntax doesn't look like that (https://sqlite.org/lang_update.html). You want:
UPDATE example1_tbl SET content = 'n', number = 1 WHERE id = 7
You should also be using parameters (https://github.com/tekartik/sqflite/blob/master/sqflite/doc/sql.md#parameters). Don't interpolate values directly into the string passed to .rawUpdate unless you want to be subject to Bobby Tables attacks (https://bobby-tables.com).
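To illustrate the same two fixes (a single SET keyword with comma-separated assignments, and bound parameters instead of string interpolation) outside of Flutter, here is a minimal sketch using Python's stdlib sqlite3 module; the table name and values mirror the ones from the error log:

```python
import sqlite3

# In-memory database standing in for the app's sqflite database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE example1_tbl (id INTEGER PRIMARY KEY, content TEXT, number INTEGER)"
)
conn.execute("INSERT INTO example1_tbl (id, content, number) VALUES (7, 'old', 0)")

# One SET keyword, comma-separated assignments, and ? placeholders --
# the same statement shape rawUpdate expects.
cur = conn.execute(
    "UPDATE example1_tbl SET content = ?, number = ? WHERE id = ?",
    ("n", 1, 7),
)
print(cur.rowcount)  # 1 row updated
print(conn.execute("SELECT content, number FROM example1_tbl WHERE id = 7").fetchone())
```

In sqflite the equivalent is passing the values as the second argument to rawUpdate and using ? placeholders in the SQL string.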

Adding "LIMIT 1" to LATERAL JOIN causes Postgres 9.5.7 with mongo_fdw extension to crash

I have the following extremely simple test case. It requires PostgreSQL 9.5.7, a Mongo database, and the Mongo FDW extension. It will fail with a table that has only one record. Here's how:
First, in the mongo shell, run the following command to create some data:
db.table2.insert({sessionKey:ObjectId('555233a2af8f312d060e57be'), catalog:'test'});
Then, run the below in PSQL:
CREATE TABLE table1 AS SELECT column1 AS sid
FROM (VALUES ('555233a2af8f312d060e57be'::name)) AS data;
CREATE EXTENSION mongo_fdw;
CREATE SERVER mongo_data FOREIGN DATA WRAPPER mongo_fdw OPTIONS (address 'localhost', port '27017');
CREATE USER MAPPING FOR postgres SERVER mongo_data;
CREATE FOREIGN TABLE table2 (
"sessionKey" NAME,
"catalog" TEXT
) SERVER mongo_data OPTIONS (DATABASE 'test', COLLECTION 'table2');
When you run the following query, the server will throw a segmentation fault.
SELECT t1.sid, t2.catalog
FROM table1 t1
LEFT OUTER JOIN LATERAL (
SELECT catalog FROM table2 WHERE "sessionKey" = t1.sid LIMIT 1
) t2 ON TRUE;
However, if you run the query without the "LIMIT 1" it runs without a problem.
Looking at the execution plan, the only difference is the introduction of a filter expression.
# explain SELECT t1.sid, t2.catalog
FROM table1 t1
LEFT OUTER JOIN LATERAL (
SELECT catalog FROM table2 WHERE "sessionKey" = t1.sid
) t2 ON TRUE;
QUERY PLAN
-----------------------------------------------------------------
Nested Loop Left Join (cost=5.00..6.08 rows=1 width=96)
Join Filter: (table2."sessionKey" = t1.sid)
-> Seq Scan on table1 t1 (cost=0.00..1.01 rows=1 width=64)
-> Foreign Scan on table2 (cost=5.00..5.06 rows=1 width=96)
Foreign Namespace: test.table2
(5 rows)
# explain SELECT t1.sid, t2.catalog
FROM table1 t1
LEFT OUTER JOIN LATERAL (
SELECT catalog FROM table2 WHERE "sessionKey" = t1.sid LIMIT 1
) t2 ON TRUE;
QUERY PLAN
-----------------------------------------------------------------------
Nested Loop Left Join (cost=5.00..6.09 rows=1 width=96)
-> Seq Scan on table1 t1 (cost=0.00..1.01 rows=1 width=64)
-> Limit (cost=5.00..5.06 rows=1 width=32)
-> Foreign Scan on table2 (cost=5.00..5.06 rows=1 width=32)
Filter: ("sessionKey" = t1.sid)
Foreign Namespace: test.table2
(6 rows)
It appears that the evaluation of this filter expression is what causes the segmentation fault.
-- UPDATE 1 --
Running GDB, I received the following backtrace:
Program received signal SIGSEGV, Segmentation fault.
strlen () at ../sysdeps/x86_64/strlen.S:106
106 ../sysdeps/x86_64/strlen.S: No such file or directory.
(gdb) bt
#0 strlen () at ../sysdeps/x86_64/strlen.S:106
#1 0x000056452ce080db in MemoryContextStrdup ()
#2 0x000056452cdec0e7 in FunctionCall1Coll ()
#3 0x000056452cded6b7 in OutputFunctionCall ()
#4 0x000056452cded904 in OidOutputFunctionCall ()
#5 0x00007f056ee694f3 in AppenMongoValue
(queryDocument=queryDocument#entry=0x56452e217980,
keyName=keyName#entry=0x56452e242278 "sessionKey", value=0,
isnull=<optimized out>, id=<optimized out>) at mongo_query.c:533
#6 0x00007f056ee69e02 in AppendParamValue (scanStateNode=0x56452e23abd8,
paramNode=<optimized out>, keyName=0x56452e242278 "sessionKey",
queryDocument=0x56452e217980)
at mongo_query.c:410
#7 QueryDocument (relationId=relationId#entry=2207239, opExpressionList=
<optimized out>, scanStateNode=scanStateNode#entry=0x56452e23abd8) at
mongo_query.c:218
#8 0x00007f056ee67f9f in MongoBeginForeignScan (scanState=0x56452e23abd8,
executorFlags=<optimized out>) at mongo_fdw.c:516
#9 0x000056452cc02315 in ExecInitForeignScan ()
#10 0x000056452cbe1a43 in ExecInitNode ()
#11 0x000056452cbf75bb in ExecInitLimit ()
#12 0x000056452cbe192d in ExecInitNode ()
#13 0x000056452cbfc537 in ExecInitNestLoop ()
#14 0x000056452cbe1a29 in ExecInitNode ()
#15 0x000056452cbdffba in standard_ExecutorStart ()
#16 0x000056452ccec1af in PortalStart ()
#17 0x000056452cce938d in PostgresMain ()
#18 0x000056452ca8305c in ?? ()
#19 0x000056452cc8d1b3 in PostmasterMain ()
#20 0x000056452ca84251 in main ()
Looking at the Mongo FDW code, I can see that the segfault is happening in mongo_query.c:
/* Prepare for parameter expression evaluation */
param_expr = ExecInitExpr((Expr *) paramNode, (PlanState *)scanStateNode);
/* Evaluate the parameter expression */
param_value = ExecEvalExpr(param_expr, econtext, &isNull, NULL);
AppenMongoValue(queryDocument, keyName, param_value, isNull,
paramNode->paramtype);
I can see that paramNode->paramtype = 19 (NAMEOID), as expected. However, param_value = 0, which is what is causing AppenMongoValue to crash. I'm not sure if this is a bug in the mongo_fdw extension or in PostgreSQL 9.5.7, but I need some help to find a way to work around this.
-- UPDATE 2 --
I dived into the PostgreSQL code (execQual.c, specifically) and I can see that there is no execution plan associated with the parameter "sessionKey".
static Datum
ExecEvalParamExec(ExprState *exprstate, ExprContext *econtext,
                  bool *isNull, ExprDoneCond *isDone)
{
    Param      *expression = (Param *) exprstate->expr;
    int         thisParamId = expression->paramid;
    ParamExecData *prm;

    if (isDone)
        *isDone = ExprSingleResult;

    /*
     * PARAM_EXEC params (internal executor parameters) are stored in the
     * ecxt_param_exec_vals array, and can be accessed by array index.
     */
    prm = &(econtext->ecxt_param_exec_vals[thisParamId]);
    if (prm->execPlan != NULL)
    {
        /* Parameter not evaluated yet, so go do it */
        ExecSetParamPlan(prm->execPlan, econtext);
        /* ExecSetParamPlan should have processed this param... */
        Assert(prm->execPlan == NULL);
    }
    *isNull = prm->isnull;
    return prm->value;
}
Because "prm->execPlan" is NULL, the value of the parameter is never set. I'm not sure that I can take this any further without help.

Dump and restore of PostgreSQL database with hstore comparison in view fails

I have a view which compares two hstore columns.
When I dump and restore this database, the restore fails with the following error message:
Importing /tmp/hstore_test_2014-05-12.backup...
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 172; 1259 1358132 VIEW hstore_test_view xxxx
pg_restore: [archiver (db)] could not execute query: ERROR: operator does not exist: public.hstore = public.hstore
LINE 2: SELECT NULLIF(hstore_test_table.column1, hstore_test_table....
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
Command was: CREATE VIEW hstore_test_view AS
SELECT NULLIF(hstore_test_table.column1, hstore_test_table.column2) AS "nullif"
FROM hst...
pg_restore: [archiver (db)] could not execute query: ERROR: relation "hstore_test_schema.hstore_test_view" does not exist
Command was: ALTER TABLE hstore_test_schema.hstore_test_view OWNER TO xxxx;
I was able to create this error in PostgreSQL 9.3.0 with the following steps:
CREATE DATABASE hstore_test;
\c hstore_test
CREATE EXTENSION hstore WITH SCHEMA public;
CREATE SCHEMA hstore_test_schema;
CREATE TABLE hstore_test_schema.hstore_test_table(
id int,
column1 hstore,
column2 hstore,
PRIMARY KEY( id )
);
CREATE VIEW hstore_test_schema.hstore_test_view AS
SELECT NULLIF(column1, column2) AS comparison FROM hstore_test_schema.hstore_test_table;
For completeness, the dump and restore process looked like this:
pg_dump -U xxxx -h localhost -f /tmp/hstore_test_2014-05-12.backup -Fc hstore_test
psql -U xxxx -h localhost -d postgres -c "DROP DATABASE hstore_test"
psql -U xxxx -h localhost -d postgres -c "CREATE DATABASE hstore_test"
pg_restore -U xxxx -h localhost -d hstore_test /tmp/hstore_test_2014-05-12.backup
pg_restore -l /tmp/hstore_test_2014-05-12.backup suggests that the hstore extension is enabled before the view is created:
;
; Archive created at Mon May 12 11:18:32 2014
; dbname: hstore_test
; TOC Entries: 15
; Compression: -1
; Dump Version: 1.12-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 9.3.0
; Dumped by pg_dump version: 9.3.0
;
;
; Selected TOC Entries:
;
2074; 1262 1358002 DATABASE - hstore_test xxxx
7; 2615 1358003 SCHEMA - hstore_test_schema xxxx
5; 2615 2200 SCHEMA - public postgres
2075; 0 0 COMMENT - SCHEMA public postgres
2076; 0 0 ACL - public postgres
173; 3079 11787 EXTENSION - plpgsql
2077; 0 0 COMMENT - EXTENSION plpgsql
174; 3079 1358004 EXTENSION - hstore
2078; 0 0 COMMENT - EXTENSION hstore
171; 1259 1358124 TABLE hstore_test_schema hstore_test_table xxxx
172; 1259 1358132 VIEW hstore_test_schema hstore_test_view xxxx
2069; 0 1358124 TABLE DATA hstore_test_schema hstore_test_table xxxx
1960; 2606 1358131 CONSTRAINT hstore_test_schema hstore_test_table_pkey xxxx
Incidentally, replacing NULLIF(col1, col2) with col1 = col2 seems to make the error disappear, despite the fact that it's an explicit comparison of the very type pg_restore was complaining about.
This is a PostgreSQL bug. I have relayed your report to the pgsql-bugs list.
What's happening is that pg_dump is setting the search_path to exclude public when creating tables in your schema. This is normal. When it dumps objects that refer to things that aren't on the search_path, it explicitly schema-qualifies them so they work.
It works for the = case because pg_dump sees that = is actually OPERATOR(public.=) in this case, and dumps it in that form:
CREATE VIEW hstore_test_view AS
SELECT (hstore_test_table.column1 OPERATOR(public.=) hstore_test_table.column2) AS comparison
FROM hstore_test_table;
however, pg_dump fails to do this for the operator implicitly used via the nullif pseudo-function. That results in the following bogus command sequence:
CREATE EXTENSION IF NOT EXISTS hstore WITH SCHEMA public;
...
SET search_path = hstore_test_schema, pg_catalog;
...
CREATE VIEW hstore_test_view AS
SELECT NULLIF(hstore_test_table.column1, hstore_test_table.column2) AS comparison
FROM hstore_test_table;
pg_dump just uses the pg_catalog.pg_get_viewdef function to dump the view, so this probably requires a server backend fix.
The simplest workaround is not to use nullif, replacing it with a more verbose but equivalent case:
CASE WHEN column1 = column2 THEN NULL ELSE column1 END;
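The NULLIF/CASE equivalence is generic SQL, so it can be sanity-checked with any engine; here is a quick sketch using Python's stdlib sqlite3 driver, with plain text values standing in for the hstore columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# NULLIF(a, b) and CASE WHEN a = b THEN NULL ELSE a END should agree
# in both the equal and the non-equal case.
row = conn.execute("""
    SELECT NULLIF('a', 'a'),
           CASE WHEN 'a' = 'a' THEN NULL ELSE 'a' END,
           NULLIF('a', 'b'),
           CASE WHEN 'a' = 'b' THEN NULL ELSE 'a' END
""").fetchone()
print(row)  # (None, None, 'a', 'a')
```

Because the CASE form spells out the = operator explicitly, pg_dump can schema-qualify it as OPERATOR(public.=) when dumping the view, which is exactly what the NULLIF form prevents.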
The syntax doesn't provide a way to schema-qualify the nullif pseudo-function's operator like we do with explicit OPERATOR(public.=), so the fix doesn't appear to be trivial.
I expected the same issue to affect GREATEST and LEAST, perhaps also DISTINCT, but it doesn't. Both seem to find their required operators even when they aren't on the search_path at runtime, but don't fail if the operator isn't on the search_path at view definition time. That suggests they're probably using the type's b-tree operator class to look up the operators, via the type's entry in the catalogs as found via the table's attributes. (Update: checked the sources and yes, that's what they do). Presumably nullif should also be doing this, but isn't.
Instead it dies in:
hstore_test=# \set VERBOSITY verbose
hstore_test=# CREATE VIEW hstore_test_schema.hstore_test_view AS
SELECT NULLIF(column1, column2) AS comparison FROM hstore_test_schema.hstore_test_table;
ERROR: 42883: operator does not exist: public.hstore = public.hstore
LINE 2: SELECT NULLIF(column1, column2) AS comparison FROM hstore_te...
^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
LOCATION: op_error, parse_oper.c:722
which when I set a breakpoint there, traps at:
Breakpoint 1, op_error (pstate=pstate#entry=0x1189f38, op=op#entry=0x1189c10, oprkind=oprkind#entry=98 'b', arg1=arg1#entry=97207, arg2=arg2#entry=97207,
fdresult=FUNCDETAIL_NOTFOUND, location=location#entry=58) at parse_oper.c:706
706 {
(gdb) bt
#0 op_error (pstate=pstate#entry=0x1189f38, op=op#entry=0x1189c10, oprkind=oprkind#entry=98 'b', arg1=arg1#entry=97207, arg2=arg2#entry=97207, fdresult=FUNCDETAIL_NOTFOUND,
location=location#entry=58) at parse_oper.c:706
#1 0x000000000051a81b in oper (pstate=pstate#entry=0x1189f38, opname=opname#entry=0x1189c10, ltypeId=ltypeId#entry=97207, rtypeId=rtypeId#entry=97207,
noError=noError#entry=0 '\000', location=location#entry=58) at parse_oper.c:440
#2 0x000000000051ad34 in make_op (pstate=pstate#entry=0x1189f38, opname=0x1189c10, ltree=ltree#entry=0x118a528, rtree=0x118a590, location=58) at parse_oper.c:770
#3 0x00000000005155e1 in transformAExprNullIf (a=0x1189bc0, pstate=0x1189f38) at parse_expr.c:1021
#4 transformExprRecurse (pstate=pstate#entry=0x1189f38, expr=0x1189bc0) at parse_expr.c:244
#5 0x0000000000517484 in transformExpr (pstate=0x1189f38, expr=<optimized out>, exprKind=exprKind#entry=EXPR_KIND_SELECT_TARGET) at parse_expr.c:116
#6 0x000000000051ff30 in transformTargetEntry (pstate=pstate#entry=0x1189f38, node=0x1189bc0, expr=expr#entry=0x0, exprKind=exprKind#entry=EXPR_KIND_SELECT_TARGET,
colname=0x1189ba0 "comparison", resjunk=resjunk#entry=0 '\000') at parse_target.c:94
#7 0x00000000005212df in transformTargetList (pstate=pstate#entry=0x1189f38, targetlist=<optimized out>, exprKind=exprKind#entry=EXPR_KIND_SELECT_TARGET)
at parse_target.c:167
#8 0x00000000004ef594 in transformSelectStmt (stmt=0x11899f0, pstate=0x1189f38) at analyze.c:942
#9 transformStmt (pstate=0x1189f38, parseTree=0x11899f0) at analyze.c:243
#10 0x00000000004f0a2d in parse_analyze (parseTree=0x11899f0,
sourceText=sourceText#entry=0x114e6b0 "CREATE VIEW hstore_test_schema.hstore_test_view AS\nSELECT NULLIF(column1, column2) AS comparison FROM hstore_test_schema.hstore_test_table;", paramTypes=paramTypes#entry=0x0, numParams=numParams#entry=0) at analyze.c:100
#11 0x000000000057cc4e in DefineView (stmt=stmt#entry=0x114f7e8,
queryString=queryString#entry=0x114e6b0 "CREATE VIEW hstore_test_schema.hstore_test_view AS\nSELECT NULLIF(column1, column2) AS comparison FROM hstore_test_schema.hstore_test_table;") at view.c:385
#12 0x000000000065b1cf in ProcessUtilitySlow (parsetree=parsetree#entry=0x114f7e8,
queryString=0x114e6b0 "CREATE VIEW hstore_test_schema.hstore_test_view AS\nSELECT NULLIF(column1, column2) AS comparison FROM hstore_test_schema.hstore_test_table;",
context=<optimized out>, params=params#entry=0x0, completionTag=completionTag#entry=0x7fffc98c9990 "", dest=<optimized out>) at utility.c:1207
#13 0x000000000065a54e in standard_ProcessUtility (parsetree=0x114f7e8, queryString=<optimized out>, context=<optimized out>, params=0x0, dest=<optimized out>,
completionTag=0x7fffc98c9990 "") at utility.c:829
so the immediate issue looks like transformAExprNullIf failing to look up the operator using the type of its operand via the b-tree opclass and the typecache.