How do I re-rank the results generated by the IBM Retrieve and Rank service to get the optimal answer? I am unable to find any tutorial related to re-ranking.
I'm not sure I completely understand your question. Do you mean the initial ranking by Retrieve and Rank of answers retrieved from Solr, or refining the ranking of already-ranked results? These specific links might be of help:
Preparing training data. This covers how to train rankers.
Reranking results. This covers how to refine the results produced by a ranker.
I want to store featured products (staff picks, the featured products of each category, and so on) in my system; each such list will hold at most 10 documents. My priority is read performance over write performance, but I also want storage to be efficient. I have three ways to do it in mind:
1. Create a boolean field such as is_bestseller or is_staffpick in the Products schema and query on it.
This is the simplest option, but I think it would require an additional query to check whether the at-most-10 limit has been reached.
2. Create a FeaturedProducts schema that holds references to product ids.
This is useful in the sense that if I later want to attach additional info to a featured product (say, its position within the featured list), I could simply add a field to this schema. It would also be easy to enforce the at-most-10 limit. I think this makes it more scalable, but at the cost of performance?
3. Create a FeaturedProducts schema that holds all the needed data.
Performance-wise I think this would be the best, but I'm not sure it is an efficient way to store data: I would essentially duplicate a product's data and store it twice. Obviously, if I have to update product details, I now have to update them in two places, but the read-to-write ratio heavily favors reading, so I am willing to accept the extra update and delete logic. It would also be easy to enforce the at-most-10 limit.
I tried to look for some examples regarding featured products but couldn't find anything useful. I am not sure what the best practice is here or which way to go, so any help is appreciated.
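To make the three options concrete, here are rough sketches of the document shapes I have in mind (field names are only illustrative):

```python
# Option 1: boolean flags directly on the product document
product = {
    "_id": "p123",
    "name": "Espresso Machine",
    "price": 249.99,
    "is_staffpick": True,  # queried with find({"is_staffpick": True})
}

# Option 2: a separate document holding only references
featured_refs = {
    "_id": "staffpicks",
    "product_ids": ["p123", "p456"],  # capped at 10 by application logic
}

# Option 3: a separate document duplicating the displayed fields
featured_full = {
    "_id": "staffpicks",
    "products": [
        {"product_id": "p123", "name": "Espresso Machine", "price": 249.99},
        # ...up to 10 embedded copies, rewritten whenever the product changes
    ],
}
```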
The rule of thumb when modeling your data in MongoDB is:
Data that is accessed together should be stored together.
With that in mind, I consider the Extended Reference Pattern a great option for your use case; here is an example from the MongoDB Blog.
Consider an e-commerce application with a user collection, an order collection, and others, where users and orders have a 1-N relation. Embedding all of the information about a customer in each order just to avoid a JOIN results in a lot of duplicated information.
Instead of duplicating all of the information on the customer, we only copy the fields we access frequently.
This schema gives high read performance, because all the information is stored in a single document, at the cost of some duplicated data; that is not entirely bad, considering the copied fields can also serve as historical data.
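A rough sketch of the pattern with pymongo (the collections and fields here are illustrative, not taken verbatim from the blog post):

```python
from pymongo import MongoClient

db = MongoClient()["shop"]  # hypothetical database name

# The full customer document lives in its own collection.
db.customers.insert_one({
    "_id": "c1",
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "street": "12 Analytical Way",
    "city": "London",
    "loyalty_points": 1200,  # rarely needed when reading an order
})

# The order carries an *extended reference*: only the customer fields
# that are read together with the order, so one read returns everything.
db.orders.insert_one({
    "_id": "o1",
    "customer_id": "c1",                  # kept for the rare full lookup
    "customer": {"name": "Ada Lovelace",  # duplicated, frequently-read fields
                 "city": "London"},
    "items": [{"sku": "espresso-machine", "qty": 1}],
})

# Reading the order needs no second query and no $lookup.
order = db.orders.find_one({"_id": "o1"})
print(order["customer"]["name"])
```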
Useful information:
Patterns
Design Anti-pattern
A potential solution is to use an index here so that you can maximize your query performance. You would create an additional boolean flag (as you indicated in your first solution), index that field, and query it with a cursor that limits the number of returned values.
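A minimal sketch of that approach with pymongo (collection and field names are assumptions):

```python
from pymongo import MongoClient

products = MongoClient()["shop"]["products"]  # hypothetical names

# Index the flag; a partial index keeps it small by only
# indexing the documents where the flag is actually set.
products.create_index(
    "is_staffpick",
    partialFilterExpression={"is_staffpick": True},
)

# Cursor with a hard cap of 10 results.
staff_picks = list(products.find({"is_staffpick": True}).limit(10))

# explain() shows whether the index is used and how many docs are scanned.
plan = products.find({"is_staffpick": True}).limit(10).explain()
print(plan["queryPlanner"]["winningPlan"])
```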
For more ways to increase your query performance, check out the official Mongo docs here. If you're curious how much more performant your queries become, you can use Mongo's explain() method to get benchmarks (more info here) and compare approaches.
Best of luck!
I have tweets retrieved using the Twitter API and need to group them into 2 categories. To do the grouping, I used doc2vec to represent the tweets in numerical form and then clustered them with the DBSCAN algorithm. However, how do I know which category a cluster belongs to? My output is just tweets assigned to different clusters.
For example, I need to know which tweet indicates the needs of the people and which tweets indicate that people have help to offer.
How can I make out which cluster has what type of tweets?
Thank you!
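For reference, my pipeline looks roughly like this (a simplified sketch using gensim and scikit-learn; the tweets and parameters are placeholders):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import DBSCAN

tweets = [
    "we urgently need drinking water in sector 4",
    "family trapped near the bridge, need rescue",
    "we can provide blankets and food downtown",
    "offering free rides to the shelter tonight",
]

# Train doc2vec on the tokenized tweets.
corpus = [TaggedDocument(t.lower().split(), [i]) for i, t in enumerate(tweets)]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Represent each tweet as a vector, then cluster the vectors.
vectors = [model.infer_vector(t.lower().split()) for t in tweets]
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(vectors)
print(labels)  # cluster ids (-1 = noise) -- but no semantic category attached
```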
Probably neither cluster is either of these two things.
Clustering is unsupervised. You don't get to control what it finds. It could be tweets that contain the f... word vs. tweets that don't.
If you want something specific such as "needs" and "offers", then you absolutely need to train a supervised algorithm from labeled data.
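As a minimal illustration of that route, a supervised baseline with scikit-learn (the tweets and labels below are invented; you would hand-label a sample of your own data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled examples: 0 = needs help, 1 = offers help (toy data).
tweets = [
    "we urgently need drinking water in sector 4",
    "family trapped near the bridge, need rescue",
    "we can provide blankets and food downtown",
    "offering free rides to the shelter tonight",
]
labels = [0, 0, 1, 1]

# tf-idf features + logistic regression: categories come from the labels.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["anyone need a place to stay? we have room"]))
```

Any classifier would do here; the point is that the categories come from your labels, not from the clustering.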
I would like to graph, upon user request, median values across many documents. I'd prefer not to transfer entire documents from the database to my application solely to determine the medians.
I understand that a median aggregator is still planned for MongoDB; however, I see that the following operations are currently supported:
sort
count
limit
Short of editing the Mongo source code, is there any reasonable way I can combine these operations to obtain median values, for example, by sorting values, counting them, and limiting to return the median?
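To make the idea concrete, the combination I have in mind looks roughly like this (a pymongo sketch with made-up collection and field names; it also relies on skip, which cursors support alongside the operations above):

```python
from pymongo import MongoClient

readings = MongoClient()["metrics"]["readings"]  # hypothetical names

def median(field):
    """Median of `field` without transferring whole documents:
    count, then sort and jump straight to the middle element(s)."""
    n = readings.count_documents({field: {"$exists": True}})
    if n == 0:
        return None
    middle = (readings.find({field: {"$exists": True}}, {field: 1})  # project one field
                      .sort(field, 1)                # sort
                      .skip((n - 1) // 2)            # jump to the middle
                      .limit(1 if n % 2 else 2))     # 1 value, or 2 to average
    values = [doc[field] for doc in middle]
    return sum(values) / len(values)

print(median("temperature"))
```

Whether skipping over half of a large sorted collection on every request is "reasonable" is exactly the part I'm unsure about.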
It appears that editing Mongo source code is the only solution.
I'm getting ready to start a project where I will be building a recommendation engine for restaurants. I have been waffling between Neo4j (graph DB) and MongoDB (document DB). My nodes/documents will be things like restaurant and person. I know I will want some edges, something like person->likes->restaurant or person->ate_at->restaurant. My main query, however, will be to find restaurants within X miles of location Y.
If I have 20 restaurants within X miles of Y, but they are not connected by any edges, how will Neo4j handle the spatial query? I know that with MongoDB I can index on lat/long and query all restaurant types. Does Neo4j offer the same functionality in a disconnected graph?
When it comes to answering questions like "which restaurants do my friends eat at most often?", is Neo4j (graph DB) the way to go, or will MongoDB (document DB) provide similar functionality?
Neo4j Spatial introduces a spatial R-tree (or other) index that is part of the graph itself. That means even disconnected domain entities will be found via the spatial search if you index them (that is, relationships connect the spatial index to the restaurants). It is also flexible enough that you can combine the raw bounding-box search in the R-tree with other checks, such as the restaurants' categories, in the same pass, since you can hop in and out of the different parts of the graph.
This way, Neo4j Spatial supports the full range of search capabilities you would expect from a full topology, such as combined searches and searches on polygons with holes.
Be aware that Neo4j Spatial is at version 0.7, so be gentle and ask on http://groups.google.com/group/neo4j/about :)
I'm not that familiar with Neo4j Spatial, but MongoDB would seem to be at the very least a good fit, since it's the database Foursquare uses for exactly the purpose you describe. MongoDB geo indexing is extremely fast and scales up nicely.
Another possible solution is Couchbase. It also uses a document model, though you need to be much more comfortable with MapReduce for queries. It has better spatial capabilities than MongoDB right now, but that may change over time.
Suggestions aside, I agree that of the two choices you have given, Mongo will suit your needs fine and is probably more appropriate for your spatial queries.
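For the "restaurants within X miles of location Y" part, a minimal MongoDB sketch (pymongo; the names and coordinates are made up):

```python
from pymongo import MongoClient, GEOSPHERE

restaurants = MongoClient()["food"]["restaurants"]  # hypothetical names

# A GeoJSON point plus a 2dsphere index enables fast proximity queries.
restaurants.create_index([("location", GEOSPHERE)])
restaurants.insert_one({
    "name": "Luigi's",
    "location": {"type": "Point", "coordinates": [-73.97, 40.77]},  # lon, lat
})

# Everything within ~5 miles (8047 metres) of a point, nearest first.
nearby = restaurants.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [-73.98, 40.76]},
            "$maxDistance": 8047,  # metres
        }
    }
})
for r in nearby:
    print(r["name"])
```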
Neo4j's geospatial support doesn't scale up that well. I created a geospatial layer in Neo4j and added nodes to it; beyond 10,000 nodes, adding nodes to the layer becomes very slow, even with Neo4j 2.0.
MongoDB's geolocation queries, on the other hand, are comparatively much faster and more scalable.
So, I've been mulling over these concepts for some time, and my understanding is very basic. Information retrieval seems to be a topic seldom covered in the wild...
My questions stem from the process of clustering documents. Let's say I start off with a collection of documents containing only interesting words. What is the first step here? Parse the words from each document and create a giant 'bag-of-words' model? Do I then proceed to create vectors of word counts for each document? And how do I compare these documents using something like k-means clustering?
Try Tf-idf for starters.
If you read Python, look at "Clustering text documents using MiniBatchKmeans" in scikit-learn: "an example showing how the scikit-learn can be used to cluster documents by topics using a bag-of-words approach".
Then feature_extraction/text.py in the source has very nice classes.
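A condensed version of that workflow, as a sketch (the corpus and the number of clusters are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import MiniBatchKMeans

docs = [
    "the cat sat on the mat",
    "dogs and cats make friendly pets",
    "stocks fell sharply on tuesday",
    "the market rallied after strong earnings",
]

# Bag-of-words with tf-idf weighting: one sparse row vector per document.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Cluster the vectors; k=2 is a guess -- choose it for your corpus.
km = MiniBatchKMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # e.g. [0 0 1 1]

# The highest-weighted terms per cluster centroid hint at each cluster's topic.
terms = vec.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = center.argsort()[::-1][:3]
    print(i, [terms[j] for j in top])
```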