How to resolve a conflicting field in a backing index in Elastic - elastic-stack

I have a conflicting field that is part of a backing index of a data stream. I tried reindexing the backing index, but the new index that gets created is not a backing index, so I cannot attach it to the data stream and it is no longer useful. I also tried a rollover, which created a new backing index, but the problem still exists in the new index. I have also deleted the former backing index. Is there a way to resolve a conflicting field for a backing index?
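(For reference, one approach that is often suggested for this situation: correct the field's mapping in the index template the data stream uses, roll over so that new writes go into a backing index built from the corrected template, and then reindex the old documents into the data stream itself rather than into a plain index; because data streams are append-only, that reindex has to use op_type create. Below is a rough sketch against the REST API with Python's requests library; my-stream, my-template, the status field and the backing-index name are placeholders.)

import requests

ES = "http://localhost:9200"  # adjust for your cluster / authentication

# 1. Correct the field's mapping in the index template that the data stream matches
#    (template name, pattern and the "status" field are placeholders).
requests.put(f"{ES}/_index_template/my-template", json={
    "index_patterns": ["my-stream*"],
    "data_stream": {},
    "template": {"mappings": {"properties": {"status": {"type": "keyword"}}}},
})

# 2. Roll over so new writes land in a backing index built from the fixed template.
requests.post(f"{ES}/my-stream/_rollover")

# 3. Copy documents from the old (conflicting) backing index back into the data
#    stream. Data streams are append-only, so the reindex must use op_type "create".
requests.post(f"{ES}/_reindex", json={
    "source": {"index": ".ds-my-stream-2024.01.01-000001"},  # old backing index (placeholder)
    "dest": {"index": "my-stream", "op_type": "create"},
})

# 4. Once the copy is verified, the old backing index can be deleted, and the
#    conflict disappears from the field list.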

Related

Does createIndex() build a new index over the whole collection?

What happens when I add a new document to my collection and run the createIndex() function? Does MongoDB build a new index over the whole collection (time- and resource-consuming), or does it just update the index for the single document (time- and resource-saving)? I am not sure, because I found this in the documentation (3.0):
By default, MongoDB builds indexes in the foreground, which prevents all read and write operations to the database while the index builds. Also, no operation that requires a read or write lock on all databases (e.g. listDatabases) can occur during a foreground index build.
I am asking because I need a dynamic 2dsphere index which will be updated continuously. If the index had to be rebuilt over the whole collection every time, it would take too much time.
Building an index indexes all existing documents and also causes all future documents to get indexed as well. So if you insert new documents into an already indexed collection (or update the indexed fields of existing documents), the indexes will get updated.
When you call createIndex and an index of the same type over the same fields already exists, nothing happens.
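To make that concrete, here is a small sketch with pymongo (the places collection, the location field and the connection string are invented for illustration):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
places = client.test.places

# Built once over all documents that already exist in the collection.
places.create_index([("location", "2dsphere")])

# New inserts (and updates to indexed fields) are reflected in the index
# incrementally; no full rebuild of the index takes place.
places.insert_one({
    "name": "cafe",
    "location": {"type": "Point", "coordinates": [13.4, 52.5]},
})

# Calling create_index again with the same specification is a no-op.
places.create_index([("location", "2dsphere")])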

How to update a Lucene index synchronously?

I have created a Lucene index (using Lucene.net) and the search is working fine.
My concern is as follows:
I used data from my SQL database to create an index. Now the thing is, this data is growing, and I am unable to find a way to modify the index without deleting and recreating it. Please let me know if there is a way of modifying the Lucene index without the delete-and-recreate process.
IndexWriter has methods like addDocument, updateDocument, and deleteDocuments, which are used to modify data in the index. Updating a document does require the document to be deleted and reindexed behind the scenes, but it shouldn't require you to recreate the entire index.
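For illustration, here is a rough sketch of such an in-place update. It uses PyLucene, which mirrors the Java API; the Lucene.Net method names are the same (AddDocument, UpdateDocument, DeleteDocuments). The index path and the id/title fields are invented:

import lucene
from java.nio.file import Paths
from org.apache.lucene.analysis.standard import StandardAnalyzer
from org.apache.lucene.document import Document, Field, StringField, TextField
from org.apache.lucene.index import IndexWriter, IndexWriterConfig, Term
from org.apache.lucene.store import FSDirectory

lucene.initVM()

# Open the existing index directory; nothing is deleted or recreated.
writer = IndexWriter(
    FSDirectory.open(Paths.get("/path/to/index")),
    IndexWriterConfig(StandardAnalyzer()),
)

# Replace the document whose "id" term matches, or add it if it does not exist yet.
doc = Document()
doc.add(StringField("id", "42", Field.Store.YES))
doc.add(TextField("title", "updated row from SQL", Field.Store.YES))
writer.updateDocument(Term("id", "42"), doc)

writer.commit()
writer.close()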

MongoDB alternatives for foreign key constraints

I have created a SQL database and checked its integrity. Now I want to move these tables to MongoDB, and I have stuck to the usual mapping rules: table = collection, row = document, and so on.
But how does one reproduce the following in MongoDB?
create table pruefen
( MatrNr integer references Studenten on delete cascade,
VorlNr integer references Vorlesungen,
PersNr integer references Professoren on delete set null,
Note numeric(2,1) check (Note between 0.7 and 5.0),
primary key (MatrNr, VorlNr));
I have tried DBRef, but it is not a replacement for a foreign key.
And if the application is supposed to take this over, what would that look like?
MongoDB has no cascading deletes. When your application deletes data, it is also responsible for removing any referenced objects itself and any references to the deleted document. But usually, when you use on delete in a relational database, you have a case of composition where one parent object owns one or more child objects and the child objects are meaningless without the parent. In that situation, MongoDB encourages embedding instead of referencing: you create an array in the parent object and put the complete child documents into that array instead of keeping them in a collection of their own. That way they will be deleted together with the parent, because they are a part of it.
While keeping more than one value in a field is an absolute no-go in SQL, there is nothing wrong with that in MongoDB. That's because the MongoDB query language can easily work with arrays and embedded objects. You can even create indices on fields of sub-documents in arrays, so you can easily search for objects which are embedded in other objects.
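A small sketch of that embedding pattern with pymongo, using the names from the SQL example above (connection string and values are invented): the exam rows live inside the student document, fields inside the array can still be indexed and queried, and deleting the student removes the embedded exams with it.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
studenten = client.uni.studenten

# Embed the exams ("pruefen" rows) directly in the student document
# instead of keeping them in a collection of their own.
studenten.insert_one({
    "matr_nr": 26120,
    "name": "Fichte",
    "pruefungen": [
        {"vorl_nr": 5001, "pers_nr": 2126, "note": 1.0},
        {"vorl_nr": 4630, "pers_nr": 2137, "note": 2.0},
    ],
})

# Fields of the embedded sub-documents can be indexed and searched like any other.
studenten.create_index("pruefungen.vorl_nr")
print(studenten.find_one({"pruefungen.note": {"$lte": 2.0}}))

# Deleting the student deletes the embedded exams with it; nothing is left dangling.
studenten.delete_one({"matr_nr": 26120})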
When you still want to reference objects from another collection, you can either use a DBRef, or you can use any other unique identifier (uniqueness is one of the few things MongoDB can enforce; to do so, create a unique index with the createIndex command). But MongoDB does not enforce consistency in this case. You can create DBRefs which point to non-existing ObjectIds, and when the document the DBRef points to is deleted, nothing will happen. The application is responsible for making sure that when it deletes a document, all documents which reference it are updated.
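And a sketch of the referencing variant, again with pymongo and invented values: a unique index gives you the uniqueness guarantee mentioned above, but the on delete behaviour of the SQL example has to be written by hand in the application.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.uni

# Uniqueness is about the only constraint MongoDB will enforce for you.
db.studenten.create_index("matr_nr", unique=True)

# References are plain values; MongoDB never checks that they resolve.
db.pruefen.insert_one({"matr_nr": 26120, "vorl_nr": 5001, "pers_nr": 2126, "note": 1.0})

# The application has to emulate "on delete cascade" and "on delete set null" itself.
def delete_student(matr_nr):
    db.pruefen.delete_many({"matr_nr": matr_nr})      # cascade to the exams
    db.studenten.delete_one({"matr_nr": matr_nr})

def delete_professor(pers_nr):
    db.pruefen.update_many({"pers_nr": pers_nr}, {"$set": {"pers_nr": None}})  # set null
    db.professoren.delete_one({"pers_nr": pers_nr})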
Constraints cannot be enforced by MongoDB either. It can't even enforce a specific type for a field, due to the schemaless nature of MongoDB. Again, your application is responsible for making sure that the data it puts into MongoDB follows your specification. When you want to automate this, object-document mapping frameworks for MongoDB are available for many programming languages.
To wrap it all up: MongoDB is not as "smart" as SQL databases. It doesn't do much on its own. It does what it is told to do by the application, no more and no less. But that's the reason why it's so fast (no expensive consistency checks) and flexible (no database modifications necessary to implement new features).
One of the great things about relational databases is that they are really good at keeping the data consistent within the database. One of the ways they do that is by using foreign keys: a foreign key constraint says that a column in one table may only hold values that exist in a column of another table. In MongoDB, there is no guarantee that such references will be preserved; it's up to the programmer to make sure that the data stays consistent in that manner. This may become possible in future versions of MongoDB, but today there is no such option. The alternative to foreign key constraints is embedding data.

How to avoid dropping existing index when reindexing

When reindexing Sunspot, the existing index is cleared/dropped first. This means users will see blank search results for a short time, which is bad for a production environment. Is there a way to do the reindexing without clearing the existing index?
The clearing occurs when I call the rake task, and also when I call solr_reindex in the console.
Looking into the code, I think calling Model.solr_index is enough. When indexing is complete, one can start searching the newly indexed fields.
The searchable schema is not something shared across all records from one model. Therefore indexing a single record will update the searchable schema of that record.

Cassandra does not show or apply index when adding secondary index

Cassandra seems to have automatically deleted an important secondary index so I have to restore it.
I used both PHPcassa and the Cassandra CLI to restore the index.
Both reported that the index was applied, but when I tried to retrieve data through the index, Cassandra complained that the indexed column was not there.
This is on the latest Cassandra version, 1.1.2.
Thanks, and quick help would be appreciated!