Jira Cloud search via REST API for issues with multiple special characters

We have a couple of issues in Jira Cloud whose names contain multiple special characters. Examples:
My i$$ue
#nother issue
R&D related issue
s#me issue
s###me issue
$simple issue
I am looking for a way to search for these issues using the REST API.
First I tried a simple GET search like this: akceptor.atlassian.net/rest/api/3/issue/picker?query=s#me
It returns issues with 's#me' in the name, but if you use a partial name in the search, i.e. ?query=s#, the issue whose name contains ### won't be found. It also does not work for &, $ and some other characters.
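One thing worth ruling out first is plain URL encoding: in a raw URL, # starts the fragment and & starts the next query parameter, so depending on the client, ?query=s# may actually reach the server as query=s. A minimal sketch with Python requests, which percent-encodes the parameter for you (site name and credentials are placeholders):

import requests

# Placeholder site and credentials; replace with your own Jira Cloud details.
BASE = "https://akceptor.atlassian.net"
AUTH = ("user@example.com", "api-token")

# requests encodes the value, so '#' becomes '%23' and '&' becomes '%26'
# instead of being swallowed as a fragment / parameter separator.
resp = requests.get(
    f"{BASE}/rest/api/3/issue/picker",
    params={"query": "s#me"},
    auth=AUTH,
)
print(resp.json())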
The next thing I tried was POST search using JQL. I.e. hitting akceptor.atlassian.net/rest/api/3/search resource with the following body:
{
  "expand": [
    "names"
  ],
  "jql": "text ~ \"s#\"",
  "maxResults": 15,
  "fieldsByKeys": false,
  "fields": [
    "summary",
    "status",
    "assignee"
  ],
  "startAt": 0
}
This found 's###me issue' but not 's#me issue'.
It worked better for issue names containing the & and $ characters, but in some cases it still requires a full word to be included in the JQL query.
The available documentation gives a list of unsupported special characters, but it looks like there is also an issue with words containing chains of supported characters.
Any ideas how to properly search for both 's#me' and 's###me'?
Especially in cases where we don't want to specify the beginning of the word (i.e. we are interested in anything ending with '#me').
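For completeness, here is the same POST search expressed with Python requests (credentials are placeholders; the JQL itself is unchanged):

import requests

BASE = "https://akceptor.atlassian.net"
AUTH = ("user@example.com", "api-token")  # placeholder credentials

body = {
    "jql": 'text ~ "s#"',
    "maxResults": 15,
    "fields": ["summary", "status", "assignee"],
    "startAt": 0,
}

resp = requests.post(f"{BASE}/rest/api/3/search", json=body, auth=AUTH)
for issue in resp.json().get("issues", []):
    print(issue["key"], issue["fields"]["summary"])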

I contacted Atlassian support and they confirmed a bug:
If you perform a search using a special character in the quick search, it will return no results.
Affected characters:
/ _ - &
https://jira.atlassian.com/browse/JRACLOUD-71066

Related

Ranking rules for camelCase attributes

I'm building an Algolia index to search through user-created communities on my site.
Just like for subreddits, the name of the communities can't contain spaces and are therefore often written by users in camelCase.
Here is an example of an object in my index:
{
  "name": "headphoneAdvice",
  "description": "This community is dedicated to enthusiasts and newcomers. We are all about making the right decision when purchasing new headphones."
}
Both name and description are set to be searchable attributes, and I'm currently using these ranking rules:
["typo","geo","words","filters","proximity","attribute","exact","custom"]
However, this does not seem to work well with the camelCase name. For example, if I type "advice" in the search, the object above with "name": "headphoneAdvice" isn't found.
I'm guessing this is because words in camelCase are considered single words and thus do not match.
I've looked online for rules that allow indexing of camelCase attributes but couldn't find anything really.
Any ideas?
Cheers!
After asking around, I found that someone at Algolia already thought of this and added camelCaseAttributes: https://www.algolia.com/doc/api-reference/api-parameters/camelCaseAttributes/ Kudos!
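For reference, a minimal sketch of enabling it with the Python API client (application ID, API key and index name are placeholders):

from algoliasearch.search_client import SearchClient

# Placeholder credentials and index name.
client = SearchClient.create("YourApplicationID", "YourAdminAPIKey")
index = client.init_index("communities")

# Declare 'name' as a camelCase attribute so "advice" matches "headphoneAdvice".
index.set_settings({
    "searchableAttributes": ["name", "description"],
    "camelCaseAttributes": ["name"],
})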

Core Reporting API - Advanced filter

I'm trying to run the same query that I use in the Google Analytics web application with the Google Analytics Reporting API.
I want an advanced filter that checks pageviews for URLs within a specific folder (/folder/) and ignores pageviews that come from a specific source (ignore).
I have this:
{
  "reportRequests": [
    {
      "viewId": "xxxx",
      "dateRanges": [{"startDate": "2020-02-01", "endDate": "2020-02-21"}],
      "metrics": [{"expression": "ga:pageviews"}],
      "dimensionFilterClauses": [
        {"filters": [{"dimensionName": "ga:pagePath", "operator": "BEGINS_WITH", "expressions": ["/folder/"]}]},
        {"filters": [{"dimensionName": "ga:source", "operator": "!=", "expressions": ["ignore"]}]}
      ]
    }
  ]
}
The /folder/ part is OK, but I don't know how to exclude the "ignore" source.
Could you help me?
I've set up filters before similar to how you're wanting to use them here.
I use essentially a combination of the following:
Dimension Filters:
== Exact match
!= Does not match
=@ Contains substring
!@ Does not contain substring
=~ Contains a match for the regular expression
!~ Does not match regular expression
Combining filters with "And" and "Or" operators:
And: ";"
Or: ","
I have a repo with example code of report requests that you can repurpose if that assists you here: https://github.com/jessfeliciano/aggregateGoogleAnalyticsReporting/blob/master/objectQueryWithFilter.js
I hope this helps, let me know if you have any follow up questions.
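If the request body above is for the v4 Analytics Reporting API (as reportRequests suggests), note that the "!=" style operators belong to the older filters syntax; in v4 each dimension filter takes an operator such as EXACT or BEGINS_WITH plus an optional "not" flag, and the filters inside one clause can be combined with AND. A minimal sketch of the body as a Python dict, under that assumption:

report_request = {
    "viewId": "xxxx",
    "dateRanges": [{"startDate": "2020-02-01", "endDate": "2020-02-21"}],
    "metrics": [{"expression": "ga:pageviews"}],
    "dimensionFilterClauses": [{
        "operator": "AND",  # both filters below must match
        "filters": [
            {
                "dimensionName": "ga:pagePath",
                "operator": "BEGINS_WITH",
                "expressions": ["/folder/"],
            },
            {
                "dimensionName": "ga:source",
                "operator": "EXACT",
                "not": True,  # exclude pageviews whose source is exactly "ignore"
                "expressions": ["ignore"],
            },
        ],
    }],
}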

How to allow leading wild cards in custom smart search web part (Kentico 10)

I have a custom index for my products and I am using the Subset Analyzer. This analyzer works great, but it does not work when you do field searches.
For example, I have a document with the following fields:
"documentname", "My-Document-Name"
"tags", "1234,5678,9101"
"documentdescription", "This is a great Document, My-Document-Name."
When I just search "name AND tags:(1234)", I get this document in my results because it searches +_content:name.
-- However:
When I search "documentname:(name)^3.0 AND tags:(1234)", I do not get this document in my results.
Of course, when I do "documentname:(*name*)^3.0" I get a parse error saying: '*' or '?' not allowed as first character in WildcardQuery.
How can I enable leading-wildcard queries in my custom CMS.Search web part?
First of all, you have to make sure the field you are checking is in the index under the proper name. documentname might not be in the index; it may be called _title, depending on how your index is set up. Get Luke (lukeall) and inspect your index (it should be in \CMS\App_Data\CMSModules\SmartSearch\YourIndexName). You can use Luke to test your searches as well.
For example, there is no tags field, but there is a documenttags field.
P.S. Wildcards are working, and you are right that you can't use them as the first character by default (the Lucene documentation says: you cannot use a * or ? symbol as the first character of a search), but there is a way to enable it in Lucene.NET, although I don't know if there is a setting for that in Kentico. However, I don't think you need wildcards, so your query should be (assuming you have documentname and documenttags in the index):
+(documentname:"My-Name" AND documenttags:"tag1")

whoosh doesn't search for short words like "C#"

I am using Whoosh to index over 200,000 books, but I have encountered some problems with it.
The Whoosh query parser returns NullQuery for words like "C#" or "C++" that contain meta-characters, and also for some other short words. These words are used in the title and body of some documents, so I am not using the keyword type for them. I guess the problem is in the analysis or query-parsing phase of searching or indexing, but I can't touch my data blindly. Can anyone help me correct this issue? Thanks.
I fixed the problem by creating a StandardAnalyzer with a regex pattern that meets my requirements. Here is the pattern:
'\w+[#+.\w]*'
This makes tokenizing of the fields work correctly, and searching goes well too.
But when I use queries like "some query++*" or "some##*", the parsed query ends up as a single Every query, just '*'. I also found that this is not related to my analyzer; it is Whoosh's default behavior. So here is my new question: is this behavior correct, or is it a bug?
Note: removing the WildcardPlugin from the query parser solves this problem, but I also need the WildcardPlugin.
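For reference, a minimal sketch of what dropping the WildcardPlugin would look like, in case it turns out to be dispensable (the field name is a placeholder and `schema` is assumed to be the index schema):

from whoosh.qparser import QueryParser, WildcardPlugin

# Build a parser for the 'title' field and remove wildcard syntax,
# so a trailing '*' is no longer parsed into an Every query.
parser = QueryParser("title", schema=schema)
parser.remove_plugin_class(WildcardPlugin)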
Now I am using the following code:
from whoosh import analysis, fields
from whoosh.util import rcompile

# For matching words like '.NET', 'C++' and 'C#'
word_pattern = rcompile(r'(\.|[\w]+)(\.?\w+|#|\+\+)*')

# I don't need words shorter than two characters, so I don't change the minsize default
analyzer = analysis.StandardAnalyzer(expression=word_pattern)

...and now in my schema:
...
title = fields.TEXT(analyzer=analyzer),
...
This solves my first problem, yes, but the main problem is in searching. I don't want to let users search using the Every query or *, but when I parse queries like C++* I end up with an Every(*) query. I know there is some problem, but I can't figure out what it is.
I had the same issue and found out that StandardAnalyzer() uses minsize=2 by default. So in your schema, you have to tell it otherwise.
schema = whoosh.fields.Schema(
    name=whoosh.fields.TEXT(stored=True, analyzer=whoosh.analysis.StandardAnalyzer(minsize=1)),
    # ...
)
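A quick way to verify that the analyzer and minsize=1 behave as expected is to parse and run a problematic query against the index (a sketch; the index directory and field name are placeholders):

import whoosh.index
from whoosh.qparser import QueryParser

ix = whoosh.index.open_dir("indexdir")  # placeholder index directory

with ix.searcher() as searcher:
    parser = QueryParser("name", schema=ix.schema)
    query = parser.parse("C#")  # should no longer collapse to a NullQuery
    for hit in searcher.search(query, limit=10):
        print(hit["name"])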

How to use keywords that include an ampersand (&) in the Facebook Search API

I want to use some keywords that include special characters like & in the Facebook Search API. I tried the queries below but I cannot get useful results. Is there any way to do this with the Search API? How should I build my search query?
My example keywords are "H&M" and "marks & spencer":
http://graph.facebook.com/search?type=post&limit=25&q="H&M"
http://graph.facebook.com/search?type=post&limit=25&q="marks & spencer"
My team worked on this forever and ended up finding this solution, which provides relevant results for a query with an ampersand, such as 'H&M':
%26amp%3b
This is the URL-encoded form of &amp; (the HTML entity for &).
So your example link would be:
http://graph.facebook.com/search?type=post&limit=25&q="H%26amp%3bM"
We found the solution thanks to Creative Jar
You want %26, which is the URL encoding for the ampersand, so:
http://graph.facebook.com/search?type=post&limit=25&q="H%26M"
http://graph.facebook.com/search?type=post&limit=25&q="marks %26 spencer"
Depending on your language, it may have a URL encoding function or you can just use string replacement.
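In Python, for instance, the standard library's urllib.parse.quote produces the encoded form (a sketch):

from urllib.parse import quote

keyword = "marks & spencer"
encoded = quote(keyword)  # 'marks%20%26%20spencer'
url = f"http://graph.facebook.com/search?type=post&limit=25&q={encoded}"
print(url)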
It seems that none of the solutions suggested here work any more.
Searching for q=H%26amp%3bM returns an empty data set. The same for q=H%26M.
It must have changed recently, within the last 2 months.
If you try to search for posts about H&M on the Facebook site (type H&M in the search box, then "Show me more results" at the bottom of the list, and then public posts in the menu on the left side), the list is empty.
The only query that returns any results is q=H&M, but it is not helpful, as the results are irrelevant for that query.