SenseNet length filter not working - REST

I want to query fields which are empty and which are not empty using the SenseNet OData REST API. Their documentation mentions a filter function called 'length'. I have tried to query a field with the length operation, but it fails with the error below.
This is the filter I have used:
$filter=length(Name) eq 2
Sense/Net 6.5.4.9496
Exception
"code": "NotSpecified",
"exceptiontype": "SnNotSupportedException",
"message": {
"lang": "en-us",
"value": "Unknown method: length"
},
Wiki link: http://wiki.sensenet.com/OData_REST_API

The length operation was included in the list of supported methods incorrectly; we apologise for that. SenseNet compiles these filters to Lucene queries, and it is not possible to compose a Lucene query that performs an operation (such as computing the length) on a field's value.
(The remaining methods, like substringof or startswith, can be compiled to a wildcard expression easily, so those should work.)
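For example, a prefix filter like the one below should work, because it can compile to a wildcard query such as Name:Ca* (the value 'Ca' is just an illustration):
$filter=startswith(Name, 'Ca') eq true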
Unfortunately, 'empty' expressions are not supported by Lucene either, because of its document/term structure: a document with no value in a field has no term for it in the index, so there is nothing to match. So the following expression does not work either:
Description eq ''
Edit: as a workaround, developers may create a custom field index handler.
For every field you want to check for emptiness (e.g. Description), you can add a technical hidden bool field (e.g. IsDescriptionEmpty) to the content type definition. The only thing you have to create is a custom field index handler class: in your case it would inherit from the built-in bool field index handler and return a boolean index value based on whether the target field (in this case Description) is empty.
After this you would be able to define search expressions like the following:
+Type:File +IsDescriptionEmpty:true
Please check the wiki article below and the source code for index handler examples.
How to create a field index handler

Related

How to allow leading wildcards in a custom smart search web part (Kentico 10)

I have a custom index for my products and I am using the Subset Analyzer. This analyzer works great for general searches, but field searches do not work.
For example, I have a document with the following fields:
"documentname", "My-Document-Name"
"tags", "1234,5678,9101"
"documentdescription", "This is a great Document, My-Document-Name."
When I just search "name AND tags:(1234)", I get this document in my results because it searches +_content:name.
-- However:
When I search "documentname:(name)^3.0 AND tags:(1234)", I do not get this document in my results.
Of course, when I do "documentname:(*name*)^3.0" I get a parse error saying: '*' or '?' not allowed as first character in WildcardQuery.
How can I enable wildcard queries in my custom CMS.Search web part?
First of all, you have to make sure that the field you are checking is in the index under the proper name. documentname might not be in the index; it could be called _title, depending on how your index is set up. Get Luke (the lukeall jar) and check your index (it should be in \CMS\App_Data\CMSModules\SmartSearch\YourIndexName). You can use Luke to test your searches as well.
For example, there may be no tags field, but there is a documenttags field.
P.S. Wildcards do work, and you are right that you can't use them as the first character by default (the Lucene documentation says: You cannot use a * or ? symbol as the first character of a search), but there is a way to enable this in Lucene.NET, although I don't know if there is a setting for it in Kentico. But I don't think you need wildcards here, so your query should be (assuming you have documentname and documenttags in the index):
+(documentname:"My-Name" AND documenttags:"tag1")
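As an aside on the leading-wildcard restriction: at the Lucene level it is a parser switch. Here is a minimal Java Lucene sketch (Lucene.NET exposes an equivalent AllowLeadingWildcard property; whether Kentico surfaces it is another matter):
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class LeadingWildcardDemo {
    public static Query parseWithLeadingWildcard(String queryText) throws ParseException {
        QueryParser parser = new QueryParser(Version.LUCENE_30, "documentname",
                new StandardAnalyzer(Version.LUCENE_30));
        // disabled by default because a leading wildcard forces a scan of every term in the field
        parser.setAllowLeadingWildcard(true);
        return parser.parse(queryText); // e.g. "documentname:*name*"
    }
}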

In the Overpass API, is there a way to use logical operators on tag existence?

The Overpass API language guide does allow logical operators when matching a tag value. For example, ["name"~"holtorf|Gielgen"] will return whatever object has either name=holtorf or name=Gielgen.
You can also combine conditions, and they become an AND. For example,
["name"]["name"="holtorf"] means: search for things that have the tag "name" and whose name tag is equal to "holtorf".
But what I want is an OR operator... something like:
["name"="holtorf"]|["name:eng"holtorf"]
In my specific application, I just want to know if there is ANY tag that start with "name"... so what I would like to do is put this into the API: ["^name"] (cause in this API "^" means "starts with"). But of course it searches for literal "^name" and returns nothing.
Is there some workaround?
There is no OR operator, but you can use a UNION:
(
  way["name"="holtorf"];
  way["name:eng"="holtorf"];
);
out;
There is also a DIFFERENCE and negation: http://wiki.openstreetmap.org/wiki/Overpass_API/Overpass_QL#Difference
And in your particular case, you can use key/value regular-expression matching: http://wiki.openstreetmap.org/wiki/Overpass_API/Overpass_QL#Key.2Fvalue_matches_regular_expression_.28.7E.22key_regex.22.7E.22value_regex.22.29
[~"^name.*$"~"^holtorf$"];
// or, to require only that some key starting with "name" exists (any value):
[~"^name.*$"~".*"];

Entity cannot be found by Elasticsearch

I have the following entity in Elasticsearch:
{
"id": 123,
"entity-id": 1019,
"entity-name": "aaa",
"status": "New",
"creation-date": "2014-08-06",
"author": "bubu"
}
I try to query for all entities with status=New, so the above entity should appear there.
I run this code:
qResponse.setQuery(QueryBuilders.termQuery("status", "New"));
return qResponse.setFrom(start).setSize(size).execute().actionGet().toString();
But it returns no results.
If I use this code (a general search, not against a specific field), I do get the above entity:
qResponse.setQuery(QueryBuilders.queryString("New"));
return qResponse.setFrom(start).setSize(size).execute().actionGet().toString();
Why?
The problem is a mismatch between a Term Query and using the Standard Analyzer when you index. The Standard Analyzer, among other things, lowercases the field when it's indexed:
Standard Analyzer
An analyzer of type standard is built using the Standard Tokenizer
with the Standard Token Filter, Lower Case Token Filter, and Stop
Token Filter.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-standard-analyzer.html
The Term query, however, matches without analysis:
Term Query
Matches documents that have fields that contain a term (not analyzed).
The term query maps to Lucene TermQuery.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-term-query.html
So in your case, when you index the field status, it becomes "new". But when you search with a Term Query, it looks for "New", and the two don't match. The general search works because it also uses the Standard Analyzer, so the query string gets lowercased to "new" as well.
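Alternatively, keep the field analyzed and use a match query, which analyzes the query string the same way at search time; a sketch against the question's qResponse builder:
// "New" is analyzed (lowercased) at query time, so it matches the indexed term "new"
qResponse.setQuery(QueryBuilders.matchQuery("status", "New"));
return qResponse.setFrom(start).setSize(size).execute().actionGet().toString();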
The default value of index for a string field is analyzed. So when you write "status" = "New", it goes through the standard analyzer and is written to the index as "new".
That is why the term query does not seem to work. If you wish to query the way you specified, map the field as "not_analyzed".
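A minimal sketch of that mapping for the status field (Elasticsearch 1.x syntax; the surrounding index and type definition are omitted):
"status": {
  "type": "string",
  "index": "not_analyzed"
}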
For more info: link

How do I add a 'where not' to a QueryBuilder Query

I want to search the entire content tree but not specific trees that have a 'Do Not Search' property at their base.
The Query Builder API page does not reference anything besides AND and OR.
Is it possible to exclude paths from the search or can I only explicitly include paths?
The first three lines below give me "/content AND /content/path/es". I want "/content AND NOT(/content/path/es)".
map.put("group.1_path", "/content");
map.put("group.2_path", "/content/path/es");
map.put("group.p.or","false");
I have tried the next two, with both true and false, and they have no effect:
map.put("group.2_path.p.not", "true");
map.put("group.2_path.not", "true");
map.put("group.2_path", "not('/content/path/es')");
I can't find any documentation that mentions whether 'not', '!', or some other name should be used instead.
Yes, it is possible, but not exactly in the way you are trying.
You can exclude pages with certain properties using the property predicate evaluator.
For example, if you want to exclude pages which have the property "donotsearch" in their jcr:content node, you can query for it using the property operation exists:
map.put("path", "/content/geometrixx/en/toolbar");
map.put("type", "cq:Page");
/* Relative path to the property to check for */
map.put("property", "jcr:content/donotsearch");
/* Operation to perform on the value of the prop, in this case existence check */
map.put("property.operation", "exists");
/* Value for the prop, false = not, by default it is true */
map.put("property.value", "false");
This would result in the following XPath Query
/jcr:root/content/geometrixx/en/toolbar//element(*, cq:Page)
[
not(jcr:content/@donotsearch)
]
But in case you would like to exclude pages with a certain value of the property donotsearch, you can change the above query as shown below:
map.put("property", "jcr:content/donotsearch"); //the property to check for
map.put("property.operation", "equals"); // or unequals or like etc..
map.put("property.value", "/*the value of the property*/");
You can find a lot of other info regarding querying by referring to the docs.
I'm not sure what version of CQ you're using (you linked to the 5.4 docs), but in 5.5 and above, the PredicateGroup class has a setNegated method to exclude results that would match the group defined.
You can't set negation on an individual Predicate, but there is nothing to stop you from creating a group containing just the predicate you wish to negate:
Predicate pathPredicate = new Predicate("path").set("path", "/content/path/es");
PredicateGroup doNotSearchGroup = new PredicateGroup();
doNotSearchGroup.setNegated(true);
doNotSearchGroup.add(pathPredicate);
Query query = queryBuilder.createQuery(doNotSearchGroup);
EDIT: Just to update in relation to your comment, you should be able to add a PredicateGroup to another PredicateGroup (as PredicateGroup is a subclass of Predicate). So once you have your negated group, combine it with the path search:
Predicate pathPredicate = new Predicate("path");
pathPredicate.set("path", "/content");
PredicateGroup combinedPredicate = new PredicateGroup();
combinedPredicate.add(pathPredicate);
combinedPredicate.add(doNotSearchGroup);
Query query = queryBuilder.createQuery(combinedPredicate);
It is a pretty straightforward implementation.
Use:
map.put("group.p.not", "true");
map.put("group.1_path", "/path1/where/you/donot/want/to/search");
map.put("group.2_path", "/path2/where/you/donot/want/to/search");
(Note that QueryBuilder predicate maps are string-to-string, so the boolean goes in as "true".)
I've run into the same problem, and while I couldn't fully solve it, I was able to come up with a workaround using groups and the unequals operator. Something like:
path=/var/xxx
1_property=jcr:primaryType
1_property.value=rep:ACL
1_property.operation=unequals
2_property=jcr:primaryType
2_property.value=rep:GrantACE
2_property.operation=unequals
Btw, map.put("group.p.not",true) did not work for me.
This link has a lot of useful information: https://hashimkhan.in/2015/12/02/query-builder/

How do I dynamically build a search block in sunspot?

I am converting a Rails app from using acts_as_solr to sunspot.
The app uses the field search capability in solr that was exposed in acts_as_solr. You could give it a query string like this:
title:"The thing to search"
and it would search for that string in the title field.
In converting to sunspot I am parsing out field specific portions of the query string and I need to dynamically generate the search block. Something like this:
Sunspot.search(table_clazz) do
  keywords(first_string, :fields => :title)
  keywords(second_string, :fields => :description)
  ...
  paginate(:page => page, :per_page => per_page)
end
This is complicated by also needing to do duration (seconds, integer) ranges and negation if the query requires it.
On the current system users can search for something in the title, excluding records with something else in another field and scoping by duration.
In a nutshell, how do I generate these blocks dynamically?
I recently did this kind of thing using instance_eval to evaluate procs (created elsewhere) in the context of the Sunspot search block.
The advantage is that these procs can be created anywhere in your application, yet you can write them with the same syntax as if you were inside a Sunspot search block.
Here's a quick example to get you started for your particular case:
def build_sunspot_query(conditions)
  condition_procs = conditions.map { |c| build_condition(c) }
  Sunspot.search(table_clazz) do
    condition_procs.each { |c| instance_eval(&c) }
    paginate(:page => page, :per_page => per_page)
  end
end

def build_condition(condition)
  Proc.new do
    # write this code as if it were inside the sunspot search block
    keywords condition[:words], :fields => condition[:field].to_sym
  end
end
conditions = [{words: "tasty pizza", field: "title"},
              {words: "cheap", field: "description"}]
build_sunspot_query conditions
By the way, if you need to, you can even instance_eval a proc inside of another proc (in my case I composed arbitrarily-nested 'and'/'or' conditions).
Sunspot provides a method called Sunspot.new_search which lets you build the search conditions incrementally and execute it on demand.
An example from Sunspot's source code:
search = Sunspot.new_search do
  with(:blog_id, 1)
end
search.build do
  keywords('some keywords')
end
search.build do
  order_by(:published_at, :desc)
end
search.execute

# This is equivalent to:
Sunspot.search do
  with(:blog_id, 1)
  keywords('some keywords')
  order_by(:published_at, :desc)
end
With this flexibility, you should be able to build your query dynamically. Also, you can extract common conditions to a method, like so:
def blog_facets
  lambda { |s|
    s.facet(:published_year)
    s.facet(:author)
  }
end

search = Sunspot.new_search(Blog)
search.build(&blog_facets)
search.execute
I have solved this myself. The solution I used was to compile the required scopes as strings, concatenate them, and then eval them inside the search block.
This required a separate query-builder library that interrogates the Solr indexes to ensure that a scope is not created for a nonexistent index field.
The code is very specific to my project, and too long to post in full, but this is what I do:
1. Split the search terms
This gives me an array of the terms, or terms plus fields:
['field:term', 'non field terms']
2. This is passed to the query builder.
The builder converts the array to scopes, based on what indexes are available. This example method takes the model class, field and value, and returns the scope if the field is indexed:
def convert_text_query_to_search_scope(model_clazz, field, value)
  if field_is_indexed?(model_clazz, field)
    escaped_value = value.gsub(/'/, "\\\\'")
    "keywords('#{escaped_value}', :fields => [:#{field}])"
  else
    ""
  end
end
3. Join all the scopes
The generated scopes are joined with join("\n"), and the result is evaled inside the search block.
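A minimal sketch of that final step (the method and variable names here are illustrative, not from my real code). Because Sunspot instance_evals the search block, an eval of the generated strings resolves keywords and friends against the DSL object:
def run_dynamic_search(model_clazz, scope_strings, page, per_page)
  joined_scopes = scope_strings.join("\n")
  Sunspot.search(model_clazz) do
    # the evaled strings, e.g. "keywords('foo', :fields => [:title])",
    # run in the context of the search DSL
    eval(joined_scopes)
    paginate(:page => page, :per_page => per_page)
  end
end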
This approach allows the user to select the models they want to search, and optionally to do field-specific searching. The system will then only search the models with any specified fields (or common fields), ignoring the rest.
The method to check if the field is indexed is:
# based on http://blog.locomotivellc.com/post/6321969631/sunspot-introspection
def field_is_indexed?(model_clazz, field)
  # first part returns an array of all indexed fields - text and other types - plus ':class'
  Sunspot::Setup.for(model_clazz).all_field_factories.map(&:name).include?(field.to_sym)
end
And if anyone needs it, a check for sortability:
def field_is_sortable?(classes_to_check, field)
  if field.present?
    classes_to_check.each do |table_clazz|
      return false if !Sunspot::Setup.for(table_clazz).field_factories.map(&:name).include?(field.to_sym)
    end
    return true
  end
  false
end