Price (numeric value) is not being set as a facet in Meilisearch

I have a rangeInput field with Min and Max inputs, where the user can enter a price range to search. I have the code below for the price search:
customRangeInput({
  container: document.querySelector('#range-input'),
  attribute: 'price',
}),
and I am adding it as a facet like below:
curl \
  -X POST 'https://search.example.com/indexes/maps/settings' \
  -H "X-Meili-API-Key: xxxxxx" \
  --data '{
    "searchableAttributes": [ "price" ]
  }'
but it says price is not set as a facet ("Attribute price is not set as facet"). Can we not set a numeric value as a facet?
Any help would be great.
Thanks
Sanjay

In the current version of MeiliSearch, you can only perform faceting on attributes whose values are String or [String]. The core team is currently working on a new search engine that will be released with v0.21.0 and will accept numeric values too :)
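Until numeric faceting lands, a common workaround is to pre-compute a string "bucket" per document and facet on that instead of the raw number. A minimal sketch (the bucket field name and step size are illustrative assumptions, not from the original post):

```python
def price_bucket(price, step=100):
    # Map a numeric price to a string range label, e.g. 250 -> '200-300',
    # so the value can be faceted as a String.
    low = (price // step) * step
    return f"{low}-{low + step}"

# Add the bucket to each document before indexing, then declare
# "price_bucket" (a string) as the facet attribute instead of "price".
documents = [
    {"id": 1, "price": 250},
    {"id": 2, "price": 99},
]
for doc in documents:
    doc["price_bucket"] = price_bucket(doc["price"])

print([d["price_bucket"] for d in documents])  # ['200-300', '0-100']
```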


Optapy: Use toList function on group_by from two joined classes - TypeError: No matching overloads

I am currently working on an employee rostering problem in which one constraint's goal is to avoid gaps in an employee's schedule, using a constraint stream. The idea is to get, for each employee, the timeslots they are assigned to along with their availability, retrieve that information in a list, then perform the check on the returned list.
The constraint stream is as follows:
def continuous_shifts(constraint_factory: ConstraintFactory, score=HardSoftScore.ONE_HARD):
    return constraint_factory \
        .for_each(timeslot_assignment_class) \
        .join(availability_class, [
            Joiners.equal(
                lambda timeslot_assignment: timeslot_assignment.resource.resource_id,
                lambda availability: availability.resource.resource_id
            )
        ]) \
        .group_by(lambda timeslot_assignment, availability: timeslot_assignment.resource.resource_id,
                  lambda timeslot_assignment, availability: availability.resource.resource_id,
                  ConstraintCollectors.to_list()) \
        .penalize("holes in schedule", score, lambda timeslot_list: holes_in_list(timeslot_list))
What I want to do is join the timeslot_assignments with the availabilities based on the resource_id attribute, group them by resource (i.e. employee), then return those groups as lists on which I can test for schedule gaps in the penalize part.
I have to use a join on the availability class because availabilities are not contained as an attribute in resources; they are stored separately for navigation purposes.
My main struggle is returning a list from the group_by function. In the case shown, I got this error:
TypeError: No matching overloads found for org.optaplanner.constraint.streams.drools.bi.DroolsAbstractBiConstraintStream.groupBy(proxy.PythonBiFunction,proxy.PythonBiFunction,org.optaplanner.core.api.score.stream.DefaultUniConstraintCollector),
Followed by a list of suitable options for ConstraintCollectors.
As I read in other posts, I understood that there can be overload issues with functions used with ConstraintCollectors and that the types might have to be specified manually. I tried other combinations, like casting the lambda functions to Java BiFunctions, or changing the group_by call like this:
.group_by(lambda timeslot_assignment, availability: (timeslot_assignment.resource.resource_id, availability.resource.resource_id),
          ConstraintCollectors.to_list())
which changed the error message according to the modified classes/stream cardinality, but with no improvement. I also tried swapping the to_list function for toList, which resulted in no change.
I can't figure out whether the problem comes from the way I joined/grouped or whether this is more of a typing issue in which some types have to be specified.
Any help on the matter would be greatly appreciated.
The problem is that to_list() is actually to_list(lambda x: x), which is a UniConstraintCollector and thus can only be used with a UniConstraintStream (that is, a constraint stream of only one element). You have a BiConstraintStream, so to_list() will not work. What will work is to_list(lambda x, y: (x, y)), which is a BiConstraintCollector (and thus works with a BiConstraintStream). The constraint stream should therefore look like this:
def continuous_shifts(constraint_factory: ConstraintFactory, score=HardSoftScore.ONE_HARD):
    return constraint_factory \
        .for_each(TimeslotAssignment) \
        .join(Availability,
              Joiners.equal(
                  lambda timeslot_assignment: timeslot_assignment.resource.resource_id,
                  lambda availability: availability.resource.resource_id
              )
        ) \
        .group_by(ConstraintCollectors.to_list(
            lambda timeslot_assignment, availability: (timeslot_assignment.resource.resource_id, availability.resource.resource_id))) \
        .penalize("holes in schedule", score, lambda timeslot_list: holes_in_list(timeslot_list))
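As a plain-Python analogue (not OptaPy code, and with hypothetical ids) of what a group_by with a key plus a to_list collector produces: every joined (assignment, availability) pair mapped by the collector's lambda ends up in one list per group key:

```python
from collections import defaultdict

# Hypothetical joined pairs: (timeslot_assignment_id, resource_id).
joined = [("ts1", "emp1"), ("ts2", "emp1"), ("ts3", "emp2")]

# Analogue of group_by(key_function, ConstraintCollectors.to_list(lambda x, y: (x, y))):
# one list of collected pairs per group key.
groups = defaultdict(list)
for timeslot, resource in joined:
    groups[resource].append((timeslot, resource))

print(dict(groups))
# {'emp1': [('ts1', 'emp1'), ('ts2', 'emp1')], 'emp2': [('ts3', 'emp2')]}
```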
Note: optapy no longer requires the use of get_class; you can use the relevant Python classes directly. Additionally, casting is no longer required for group_by, and you don't need to pass Joiners as a list (although the old code will still work).
Like Lukas said, to_list is a performance killer. If you are looking for singular gaps, I suggest looking at the OptaPy employee scheduling quickstart, in particular the "at least two hours between two shifts" constraint.
Regarding finding (global) consecutive shifts and breaks between shifts (i.e. needing to penalize/reward based on a non-linear function of the number of consecutive shifts/breaks): that is best done using a ConstraintCollector. There is currently an experimental one in optaplanner-examples, but it is not accessible from optapy. Once that collector is available in optaplanner-core, it will also be available in optapy. You cannot currently create your own custom ConstraintCollector, but that will be possible in a future version of optapy.

Refine Search on WIQL

WIQL search:
{
  "query": "SELECT [System.Id] FROM WorkItems WHERE [System.Title] Contains Words 'midserver' AND [System.AreaPath] = 'XXXXX' AND [System.WorkItemType] = 'Issue' AND [System.State] <> 'Done' ORDER BY [System.Id]"
}
Can you please help me with a query that refines the search, i.e. one that matches the exact words rather than using Contains ([System.Title] Contains 'Search Text')?
I tried something like IS ([System.Title] ...), but the query is not recognized; I think "IS" is not a valid operator.
For example, if stories contain the names "rahul 1" and "rahul 2", then searching for just "rahul" should not return "rahul 1" and "rahul 2"; instead it should report that nothing was found.
Observation: Contains Words does not work when there is a space in the user story title.
So basically I want to check whether the exact text is there or not, not search with Contains.
Since you want to search for the exact words, why not just use "="? Change your WIQL like this:
SELECT
    [System.Id]
FROM WorkItems
WHERE
    [System.Title] = 'midserver'
    AND [System.WorkItemType] = 'Issue'
    AND [System.State] <> 'Done'
ORDER BY [System.Id]
Actually, which operators you can use (such as =, Contains, Under) depends on the field type; not every operator is valid for every field type. For details, take a look at this screenshot:
If you are using System.Title, it is a string field:
Title
A short description that summarizes what the work item is and helps team members distinguish it from other work items in a list. Reference name=System.Title, Data type=String
So instead of Contains Words, you can directly use "=" in your case. For other kinds of fields, you need to follow the operators above.
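The exact-match body sent to the WIQL endpoint can be assembled programmatically; a minimal Python sketch (it only builds the JSON payload, does not escape quotes inside the title, and the work item type/state filters mirror the query above):

```python
import json

def build_exact_title_wiql(title):
    # Build the WIQL request body for an exact-match title search,
    # mirroring the "=" operator suggested above.
    query = (
        "SELECT [System.Id] FROM WorkItems "
        f"WHERE [System.Title] = '{title}' "
        "AND [System.WorkItemType] = 'Issue' "
        "AND [System.State] <> 'Done' "
        "ORDER BY [System.Id]"
    )
    return json.dumps({"query": query})

body = build_exact_title_wiql("midserver")
print(body)
```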

significance of $ and "" in mongodb

I am learning MongoDB and getting confused about the usage of "$".
I have a collection with the below schema:
{
  _id: 1,
  "name": "test",
  "city": "gr",
  "sector": "IT",
  "salary": 1000
}
I see the below results when executing the following queries:
1. db.user.find({salary:2000}); -> works
2. db.user.find({$salary:2000}); -> does not work (unknown top level operator: $salary)
3. db.user.aggregate({$group:{_id:null,avg:{$avg:"$salary"}}}); -> works
4. db.user.aggregate({$group:{_id:null,avg:{$avg:$salary}}}); -> does not work ($salary is not defined)
5. db.user.aggregate({$group:{_id:null,avg:{$avg:"salary"}}}); -> gives wrong output
Can anyone please explain the syntactic significance of "" and $ in MongoDB?
Hi, let's look at these queries:
1. db.user.find({salary:2000});
2. db.user.find({$salary:2000});
Take a look at the documentation for find. According to it, find takes {field: value}. Your first query works because salary is a valid field; your second query doesn't work because there is no field $salary.
3. db.user.aggregate({$group:{_id:null,avg:{$avg:"$salary"}}});
4. db.user.aggregate({$group:{_id:null,avg:{$avg:$salary}}});
5. db.user.aggregate({$group:{_id:null,avg:{$avg:"salary"}}});
For aggregation, let's take a look at $avg. The documentation says that $avg takes {$avg: expression}, so what you supply there is an expression, not a field.
Now take a look at what counts as an expression: expressions can be field paths and system variables, literals, expression objects, and expression operators.
The values in queries 3, 4 and 5 aren't expression objects or expression operators, so let's eliminate those options.
Now let's take a look at $literal. It states that literals can be of any type; however, MongoDB parses literals that start with a dollar sign as a path to a field.
Finally, take a look at field paths and system variables. The documentation states: "To specify a field path, use a string that prefixes with a dollar sign $ ... For example, "$user" to specify the field path for the user field or "$user.name" to specify the field path to "user.name" field."
That means "$salary" in $avg:"$salary" specifies a path to the field, which is why query 3 works.
Query 4 doesn't work because the unquoted $salary is not a valid expression (the shell reports that $salary is not defined). This should explain the significance of "".
Query 5 runs because it is a valid query, but the string "salary" (without the dollar sign) is a literal, not a field path, so there is no field to average over and it simply returns null.
You could have had:
db.user.aggregate({$group:{_id:null,avg:{$avg:"some_non_existent_field"}}});
and the query would still run fine, but you would get null for your result.
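As a plain-Python illustration (not MongoDB code) of why the dollar sign matters: treat a string starting with "$" as a field path and anything else as a literal, the way the aggregation pipeline does. The function name and sample documents are made up for this sketch:

```python
def avg_expression(docs, expr):
    # Mimic $avg's handling of its expression: a string starting with '$'
    # is a field path; any other string is a literal, which is non-numeric
    # here and therefore averages to None (like query 5 above).
    if isinstance(expr, str) and expr.startswith("$"):
        field = expr[1:]
        values = [d[field] for d in docs if field in d]
        return sum(values) / len(values) if values else None
    return None  # a literal like "salary": nothing to average

docs = [{"name": "test", "salary": 1000}, {"name": "test2", "salary": 3000}]
print(avg_expression(docs, "$salary"))  # 2000.0 (field path)
print(avg_expression(docs, "salary"))   # None (literal, like query 5)
```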
I hope this helps, this was a lot of fun to gather.

How to filter by category in Magento 2's API?

In Magento 2's REST API, there is an option to search for a product using various search criteria. As you know, an example is given below:
http://magentohost/rest/V1/products?searchCriteria[filter_groups][0][filters][0][field]=name&searchCriteria[filter_groups][0][filters][0][value]=%macbook%&searchCriteria[filter_groups][0][filters][0][condition_type]=like
But I have not found an option to search by category.
How can we do that?
Searching by category is simple: you just have to pass category_id as the field. Have a look at the example below:
http://magentohost/rest/V1/products?searchCriteria[filterGroups][0][filters][0][field]=category_id&searchCriteria[filterGroups][0][filters][0][value]=4&searchCriteria[filterGroups][0][filters][0][conditionType]=eq&searchCriteria[sortOrders][0][field]=created_at&searchCriteria[sortOrders][0][direction]=DESC&searchCriteria[pageSize]=10&searchCriteria[currentPage]=1
You can also target multiple categories at once:
searchCriteria[filter_groups][0][filters][0][field]=category_id&searchCriteria[filter_groups][0][filters][0][value]=1,2,3&searchCriteria[filter_groups][0][filters][0][condition_type]=in&searchCriteria[sort_orders][0][field]=created_at&searchCriteria[sort_orders][0][direction]=DESC&searchCriteria[current_page]=1&searchCriteria[page_size]=10
I have a little lib - https://github.com/dsheiko/magentosearchquerybuilder - which helps me build such queries:
$builder = new SearchCriteria();
$builder
    ->filterGroup([
        [ "category_id", implode(",", $categories), SearchCriteria::OP_IN ],
    ])
    ->sortOrder("created_at", "DESC")
    ->limit(1, 10);
echo $builder->toString();
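Without a helper library, the same searchCriteria parameters can be assembled with a standard URL encoder. A Python sketch (the host and category ids are placeholders; urlencode percent-encodes the brackets, which Magento accepts):

```python
from urllib.parse import urlencode

# The multi-category "in" filter from the example above, as flat key/value pairs.
params = {
    "searchCriteria[filter_groups][0][filters][0][field]": "category_id",
    "searchCriteria[filter_groups][0][filters][0][value]": "1,2,3",
    "searchCriteria[filter_groups][0][filters][0][condition_type]": "in",
    "searchCriteria[sort_orders][0][field]": "created_at",
    "searchCriteria[sort_orders][0][direction]": "DESC",
    "searchCriteria[page_size]": "10",
    "searchCriteria[current_page]": "1",
}

url = "http://magentohost/rest/V1/products?" + urlencode(params)
print(url)
```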

Add shp. name as attribute, looping through dates

I have been stuck on this for a few days. I have a folder with hundreds of shapefiles, and I want to add an attribute field to each shapefile giving the shapefile's name as a date. The shapefile name includes the Landsat path/row, year, and Julian date (e.g. 1800742003032.shp). I want just the date, '2003032', to be added under a "Date" field.
Here's what I have so far:
arcpy.env.workspace = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Polygons"
for fc in arcpy.ListFeatureClasses("*", "ALL"):
    print str("processing" + fc)
    field = "DATE"
    expression = str(fc)[6:13]
    arcpy.AddField_management(fc, field, "TEXT")
    arcpy.CalculateField_management(fc, field, "expression", "PYTHON")
Results:
processing1800742003032.shp
processing1800742009136.shp
processing1820732010289.shp
end Processing...
It runs perfectly (on a sample of 3 shapefiles), but the problem is that when I open the shapefiles in ArcMap, they all have the same date. The results show that each of the 3 shapefiles was processed, and AddField_management must have worked because all of the fields are populated. So there is an issue with either the expression or the CalculateField command.
How can I get it to populate the specific date for each shapefile, rather than having all of them be '2003032'? There are no error messages.
Thanks in advance!
I figured it out! For CalculateField_management, the expression should not be in quotes. It should be:
arcpy.CalculateField_management(fc, field, expression, "PYTHON")
This post may have been a waste of time, but at least maybe it will help someone with a similar problem in the future.
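The slice that extracts the date can be checked in plain Python, independently of arcpy (the filenames below follow the 6-digit path/row + YYYYDDD pattern from the question):

```python
def date_from_name(filename):
    # Extract the 7-character year + Julian-day date from a shapefile name
    # like '1800742003032.shp': characters 0-5 are path/row, 6-12 are YYYYDDD.
    return filename[6:13]

for name in ["1800742003032.shp", "1800742009136.shp", "1820732010289.shp"]:
    print(name, "->", date_from_name(name))
# 1800742003032.shp -> 2003032
# 1800742009136.shp -> 2009136
# 1820732010289.shp -> 2010289
```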