Sphinx Error: (type='index') already exists - sphinx

I have my sphinx configuration broken out into 3 files as I have several different indexes that each use common files.
So I have a master sphinx.conf that defines the paths for where the source and index files are.
Then each index has a source file (sql select, fields) and an index file with all the wordform paths and any rules for that specific index of data.
So I've been making changes to one of the indexes but all of a sudden am getting this error:
ERROR: section 'MyIndex_0' (type='index') already exists in /etc/sphinxsearch/sphinx.conf line 17863 col 19
Yet I did not touch the sphinx.conf file.
Update: As a test, I swapped in an old version of the index file I am trying to rotate, and I still get the error. So it is not caused by changes I made to this file (once in a while when making changes I do get an error, which is always due to some typo or other).
Can some file have gotten corrupted?
I did stop and start sphinx to no avail.
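For context, every index section name in the merged configuration must be unique, so this error usually means the same index block ends up in the concatenated config twice. A minimal sketch of the split layout described above, with assumed paths and names:

```
# sphinx.conf (master): shared paths and daemon settings,
# then the per-index source and index files are pulled in.

source myindex_src
{
    # SQL connection settings and sql_query for this index ...
}

index MyIndex_0
{
    source = myindex_src
    path   = /var/lib/sphinxsearch/data/MyIndex_0
    # wordforms paths and per-index rules ...
}
```

If the error points at a huge line number like 17863, checking the fully assembled sphinx.conf around that line for a second `index MyIndex_0` block is a quick way to spot the duplicate.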

Related

Liquibase changesets cannot be rerun

Our Liquibase script cannot be rerun because the underlying column is already gone.
Consider the following changesets:
A table "foo" is created, and "domain" is one of the columns in this table;
A constraint (in form of an index) is placed on the column "domain";
Column "domain" is dropped from the table "foo".
Now when we try to rerun all liquibase scripts (over already existing DB structure), changeset 2 fails with
[ERROR] Reason: liquibase.exception.DatabaseException: ERROR: column "domain" named in key does not exist
all because the "domain" column in the actual DB is already gone before changeset 2 is run.
Is there any better way to make these changesets runnable other than recreating the "domain" column in the table so that all 3 changesets can run?
- there are hundreds of changesets in the system besides the 2 above;
- the solution is strongly preferred to avoid any manual steps, because there are dozens of environments in which the changesets must be rerun.
In a perfect world, a developer would have placed a preConditions on changeset 2 to check that not only the index is missing, but the underlying column exists, but we have to deal with what we have. It is my understanding that rewriting existing changesets is strongly discouraged in liquibase.
You can always add a preCondition to the changeSets #2 and #3 to check that the domain column exists, e.g.:
<preConditions onFail="MARK_RAN">
<columnExists tableName="foo" columnName="domain"/>
</preConditions>
If these changeSets then start to fail with a "different checksum" error, you can always provide the new checksum or just add <validCheckSum>ANY</validCheckSum>.
This way you'll be able to run these changeSets in all environments you need.
Rewriting the changeSets is discouraged, but writing preConditions for the changeSets is quite encouraged.
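Put together, a guarded changeset 2 might look like this (the changeset id, author, and index name here are assumptions, not taken from the question):

```xml
<changeSet id="2" author="example">
    <!-- Accept the new checksum since the changeset body changed. -->
    <validCheckSum>ANY</validCheckSum>
    <!-- Skip (and mark as ran) when the column no longer exists. -->
    <preConditions onFail="MARK_RAN">
        <columnExists tableName="foo" columnName="domain"/>
    </preConditions>
    <createIndex tableName="foo" indexName="idx_foo_domain">
        <column name="domain"/>
    </createIndex>
</changeSet>
```

With MARK_RAN, environments where the column is already gone record the changeset as run and move on, while fresh databases still create the index.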
According to the comment, your problem is caused by changing the Liquibase scripts' directory location.
What actually happens is that Liquibase compares each script's relative path when executing the changesets. You can find this relative path in the databasechangelog table, in the filename column.
The first thing you should understand is that the problem is not with the checksum, so fixing the checksum will not solve it.
The easiest thing you can do is change the values in the filename column of the databasechangelog table. If you have more than one Liquibase script file, I suggest changing them one by one. A simple SQL query like this can do the job:
update databasechangelog set filename='<new_filename>' where filename='<old_filename>'
Note: You can make the situation worse if you did it wrong. Make sure you double check everything before you make any changes.
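One way to reduce that risk is to run the update inside a transaction and check the affected row count before committing (the paths below are placeholders, not real filenames from the question):

```sql
BEGIN;

-- How many rows will the rename touch? Compare against expectations.
SELECT count(*) FROM databasechangelog
 WHERE filename = 'old/path/changelog.xml';

UPDATE databasechangelog
   SET filename = 'new/path/changelog.xml'
 WHERE filename = 'old/path/changelog.xml';

-- If the counts look right, COMMIT; otherwise issue ROLLBACK; instead.
COMMIT;
```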

Postgres syntax error at or near "VALUESNSERT"

We are trying to load the data from one Postgres table to another Postgres table in the same database using Informatica, and we are running into the following issue.
The error message is as follows:
Message Code: WRT_8229
Message: Database errors occurred:
FnName: Execute -- [Informatica][ODBC PostgreSQL Wire Protocol driver][PostgreSQL]ERROR: VERROR; syntax error at or near "VALUESNSERT"(Position 135; File scan.l; Line 1134; Routine scanner_yyerror; ) Error in parameter 6.
FnName: Execute -- [Informatica][ODBC PostgreSQL Wire Protocol driver][PostgreSQL]Failed transaction. The current transaction rolled back. Error in parameter 6.
FnName: Execute -- [DataDirect][ODBC lib] Function sequence error
It works fine if we do not load one particular string column, which is 3000 bytes. Can anyone please shed some light on this issue?
Note: There are no reserved/keywords in our table structure
If you have already identified the error-causing column, you can follow the steps below to find the root cause:
1. Check the data type of the column in Informatica - whether it matches the target in the DB in terms of length and data type.
2. Make sure you import the target from the database. Creating the target from another process, or adding a column to an existing target, can lead to such errors.
3. Run in verbose or debug mode to see exactly where it causes the issue. Check whether it is reading, transforming, and loading the data properly.
4. Remove the Postgres target and attach a flat file - if this works, then the issue is in the database table. Check for indexes, constraints, etc. that could lead to this issue.
5. Check the ODBC driver version as well, which may have limitations in data type and length handling. ODBC is also not good at reporting errors, so you may have to do some guesswork to find out.
Thanks everyone. My issue got resolved after implementing Informatica PDO.

TYPO3 - BUG There is no entry in the $TCA array

I have a bug on my TYPO3 4.5 website:
Core: Exception handler (WEB): Uncaught TYPO3 Exception: #1283790586:
There is no entry in the $TCA array for the table
"pages_language_overlay". This means that the function enableFields()
is called with an invalid table name as argument. |
InvalidArgumentException thrown in file /t3lib/class.t3lib_page.php in
line 1150
I don't understand what is happening, but my backend is still available.
How can I fix it?
I assume you do not know much about TYPO3, so I will try to make clear how TYPO3 works (with regard to the old version).
TYPO3 has definitions of the tables and fields in the database.
First part are the MySQL definitions (since 8 it might be other databases than MySQL).
The second part (TCA = Table Configuration Array) is the set of definitions describing how these tables build the BackEnd (BE) interface for an editor.
As this information can be enhanced by extensions, each extension can add its information to a (cached) pool, and this pool is considered the reference.
The database definitions are located in files ext_tables.sql. The TCA was generated in ext_localconf.php and ext_tables.php. Today TCA modifications should be done in Configuration/TCA/tablename.php (for new tables) or Configuration/TCA/Override/tablename.php (for modification of existing tables).
Before all these files are included and executed for every call they are collected and stored as one resulting PHP-file.
Your problem might occur because there is a syntax error in the collected file: up to the error all information is built up, but everything after the error is missing.
Try to clean up your installation and remove these caches: in pre 6 versions there are files temp_CACHED_<hash>_ext_tables.php and temp_CACHED_<hash>_ext_localconf.php in your typo3conf/ folder. Remove them all. The next call to TYPO3 (FE or BE) will rebuild two files. Make sure these have no syntax errors.
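For pre-6 installs, clearing those cache files can be sketched like this (run from the TYPO3 site root; the path is the standard layout, but back up first):

```shell
# Remove the concatenated configuration caches in typo3conf/;
# TYPO3 rebuilds both files on the next FE or BE request.
rm -v typo3conf/temp_CACHED_*_ext_tables.php \
      typo3conf/temp_CACHED_*_ext_localconf.php
```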
In the install-tool (<domain>/typo3/install/) you can clear all caches and compare the existing database with the gathered definition from all active(!) extensions. If there are differences the database can be 'corrected'. Be sure to have a database backup before you change anything.
The usual answer: TYPO3 4.5 is outdated. Upgrade your installation to a newer version; maybe the bug is already solved.
If an update is not possible, then the question is what you did so that the error is thrown. What changes were made most recently? Which extension was recently installed or updated?

How to debug sphinx search using the indextool

I ran indextool on an index that crashes sphinx when I use indexer on it.
The output of indextool shows many failures such as:
FAILED, string offset out of bounds (row=18, stringattr=3, docid=3317, index=896070)
Can someone help me understand what the parameters (row, stringattr, docid, index) relate to so I can inspect the index csv file to try and see what's causing the failure?
Those are offsets within the generated index, not in the original source dataset.
But also, as far as I know, indextool only inspects existing indexes. Running indexer tries to create a new version of the index from the 'source' data. So if indexer is 'crashing', a proper index is NOT being built.
So indextool is inspecting some previous version, rather than the partly built index from when indexer crashed! That earlier version was already corrupted.
In short, using indextool is a non-starter. You need to debug using indexer instead.
Maybe the --dump-rows and/or --verbose options to indexer will reveal something useful just before the crash happens?
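For example, an invocation along these lines (the config path and index name are assumptions based on the error message above):

```shell
# Dump every row indexer fetches from the source, with verbose
# progress, so the last row printed before the crash points at
# the offending source record.
indexer --config /etc/sphinxsearch/sphinx.conf \
        --verbose --dump-rows /tmp/myindex_rows.sql MyIndex_0
```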

Does DBIx::Class::Schema::Loader cache its moniker map?

Recently we added an "audit_logs" table to the database, and after some frustration I realised that there was already an "auditlog" table in the database for some reason. It wasn't being used, so I dropped it. I deleted the Auditlog.pm and AuditLogs.pm files from my schema and then regenerated. For some reason DCSL again created AuditLogs.pm for the "audit_logs" table, even though there was no longer an "auditlog" table or Auditlog.pm file to conflict with it.
I have tried just about everything I can think of to get it to generate Log.pm without success. The only thing that I can figure is that it is caching the moniker map somewhere, and I cannot seem to reset it.
I eventually tracked this problem down to an issue with the Lingua inflector. It was picking up "logs" as a singular verb instead of a plural noun. This happened because it followed the word "audit" which ends with "it." Basically, I had to write a custom moniker_map function that added an exception for audit_logs.
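A moniker_map override can also be given as a plain hashref of table names to monikers, which is simpler than a coderef for a single exception. A minimal sketch (schema class, dump directory, and DSN are all assumptions):

```perl
use strict;
use warnings;
use DBIx::Class::Schema::Loader qw(make_schema_at);

make_schema_at(
    'My::Schema',
    {
        dump_directory => './lib',
        # Tables listed here bypass the inflector entirely;
        # unlisted tables still get the default monikers.
        moniker_map    => { audit_logs => 'AuditLog' },
    },
    [ 'dbi:Pg:dbname=mydb', 'user', 'pass' ],
);
```

With the exception in place, regenerating produces AuditLog.pm for "audit_logs" regardless of how the inflector would have parsed the name.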