Sphinx (Search) - documents which match a keyword twice (thrice, etc.)

Is there a way to output only documents which contain at least n matches of a search term?
For example, I want to output all documents containing the search term "Pablo Picasso" | "Picasso Pablo" at least two (three, n) times.
What would such a query look like?
My current query is:
SELECT * FROM myIndex WHERE MATCH('"Pablo Picasso" | "Picasso Pablo"');

You could do it by filtering on weight (i.e. results that contain it multiple times will rank higher) - a sketch is shown at the end of this answer.
But a useful trick is the Strict order operator...
MATCH('Pablo << Pablo')
would require the word twice (i.e. one occurrence before the other!)
You can also use the proximity operator to simplify your original query; it just wants the words near each other, which is more concise than two phrase operators:
MATCH('"Pablo Picasso"~1')
... i.e. within 1 word of each other - i.e. adjacent.
Combine the two...
MATCH('"Pablo Picasso"~1 << "Pablo Picasso"~1')
and for three occurrences:
MATCH('"Pablo Picasso"~1 << "Pablo Picasso"~1 << "Pablo Picasso"~1')

Related

postgres regexp_matches strange behavior

Following the short docs on regexp_matches:
Return all captured substrings resulting from matching a POSIX regular expression against the string.
Example: regexp_matches('foobarbequebaz', '(bar)(beque)') returns {bar,beque}
With that in mind, I'd expect the result of regexp_matches('barbarbar', '(bar)') to be {bar,bar,bar}
However, only {bar} is returned.
Is this the expected behavior? Am I missing something?
Note:
calling regexp_matches('barbarbar', '(bar)', 'g') does return all 3 bars, but in table form:
regexp_matches text[]
{bar}
{bar}
{bar}
This behavior is described in more detail in 9.7.3. POSIX Regular Expressions:
The regexp_matches function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as regexp_match. This function returns no rows if there is no match, one row if there is a match and the g flag is not given, or N rows if there are N matches and the g flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized subexpressions of the pattern, just as described above for regexp_match. regexp_matches accepts all the flags shown in Table 9.24, plus the g flag which commands it to return all matches, not just the first one.
This is expected behavior. The function returns a set of text[] which means that multiple matches are presented in multiple rows. Why is it organized this way? The goal is to make it possible to find more than one token from a single match. In this case, they are presented in the form of an array. The documentation delivers a telling example:
SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g');
regexp_matches
----------------
{bar,beque}
{bazil,barf}
(2 rows)
The query returns two matches, each of them containing two tokens found.
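If you only need the number of matches rather than the matches themselves, a simple option (a sketch, again relying on the g flag) is to count the rows regexp_matches returns:
SELECT count(*) FROM regexp_matches('barbarbar', '(bar)', 'g');
count
-------
3
(1 row)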

Prefix/wildcard searches with 'websearch_to_tsquery' in PostgreSQL Full Text Search?

I'm currently using the websearch_to_tsquery function for full text search in PostgreSQL. It all works well except for the fact that I no longer seem to be able to do partial matches.
SELECT ts_headline('english', q.\"Content\", websearch_to_tsquery('english', {request.Text}), 'MaxFragments=3,MaxWords=25,MinWords=2') Highlight, *
FROM (
SELECT ts_rank_cd(f.\"SearchVector\", websearch_to_tsquery('english', {request.Text})) AS Rank, *
FROM public.\"FileExtracts\" f, websearch_to_tsquery('english', {request.Text}) as tsq
WHERE f.\"SearchVector\" ## tsq
ORDER BY rank DESC
) q
Searches for customer work but cust* and cust:* do not.
I've had a look through the documentation and a number of articles but I can't find a lot of info on it. I haven't worked with it before so hopefully it's just something simple that I'm doing wrong?
You can't do this with websearch_to_tsquery, but you can do it with to_tsquery (because to_tsquery allows adding a :* wildcard) and add the websearch syntax yourself in your backend.
For example, in a Node.js environment you could do something like this:
let trimmedSearch = req.query.search.trim()
let searchArray = trimmedSearch.split(/\s+/) // split on whitespace
let searchWithStar = searchArray.join(' & ') + ':*' // join the words with ' & ' in between and add :* to the last word
let escapedSearch = yourEscapeFunction(searchWithStar)
and then use it in your SQL:
search_column @@ to_tsquery('english', ${escapedSearch})
You need to write the tsquery directly if you want to use partial matching. plainto_tsquery doesn't pass through partial match notation either, so what were you doing before you switched to websearch_to_tsquery?
Anything that applies a stemmer is going to have a hard time handling partial matches. What is it supposed to do: take off the notation, stem the part, then add it back on again? Not do stemming on the whole string? Not do stemming on just the token containing the partial-match indicator? And how would it even know a partial match was intended, rather than it just being another piece of punctuation?
To add something on top of the other good answers here, you can also compose your query with both websearch_to_tsquery and to_tsquery to have everything from both worlds:
select * from your_table where ts_vector_col @@ to_tsquery('simple', websearch_to_tsquery('simple', 'partial query')::text || ':*')
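To see what the composed tsquery looks like, you can run the expression on its own (a quick check, assuming the same 'simple' configuration):
SELECT to_tsquery('simple', websearch_to_tsquery('simple', 'partial query')::text || ':*');
-- gives: 'partial' & 'query':*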
Another solution I have come up with is to do the text transformation as part of the query, so building the tsquery looks like this:
to_tsquery(concat(regexp_replace(trim(' all the search terms here '), '\W+', ':* & ', 'g'), ':*'));
(trim) Removes leading/trailing whitespace
(regexp_replace) Splits the search string on non word chars and adds trailing wildcards to each term, then ANDs the terms (:* & )
(concat) Adds a trailing wildcard to the final term
(to_tsquery) Converts to a ts_query
You can test the string manipulation by running
SELECT concat(regexp_replace(trim(' all the search terms here '), '\W+', ':* & ', 'gm'), ':*')
the result should be
all:* & the:* & search:* & terms:* & here:*
So you have multi word partial matches e.g. searching spi ma would return results matching spider man
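Putting it all together in a full query, a sketch (assuming a hypothetical table file_extracts with a tsvector column search_vector) could be:
SELECT * FROM file_extracts
WHERE search_vector @@ to_tsquery(concat(regexp_replace(trim(' spi ma '), '\W+', ':* & ', 'g'), ':*'));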

Find rows where string contains certain character at specific place

I have a field in my database that contains 10 characters:
E.g.: 1234567891
I want to find the rows where the field has, for example, the digits 8 and 9 in places 5 and 6.
So for example,
if the rows are
a) 1234567891
b) 1234897891
c) 1234877891
I only want b) returned in my select.
The type of the field is string/character varying.
I have tried using:
where field like '%89%'
but that won't work, because I need it to be 89 at a specific place in the string.
The fastest solution would be
WHERE substr(field, 5, 2) = '89'
If the positions are not adjacent, you end up with two conditions joined with AND.
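For instance, if you instead needed an 8 at place 5 and a 1 at place 10 (a hypothetical variation, not the question's exact requirement), that could be:
WHERE substr(field, 5, 1) = '8' AND substr(field, 10, 1) = '1'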
You can match a single arbitrary character using the underscore (_) wildcard, so you should be able to use it as follows:
where field like '____89%'

Wildcard searching between words with CRC mode in Sphinx

I use Sphinx with CRC mode and min_infix_len = 1 and I want to use wildcard searching between the characters of a keyword. Assume I have some data like this in my index:
name
-------
mickel
mick
mickol
mickil
micknil
nickol
nickal
and when I search for all records whose names start with 'mick' and end with 'l':
select * from all where match ('mick*l')
I expect the results should be like this:
name
-------
mickel
mickol
mickil
micknil
but nothing is returned. How can I do that?
I know that I can do this in dict=keywords mode, but I have to use CRC mode for certain reasons.
I also tried the '^' and '$' operators and they didn't work.
You can't use 'middle' wildcards with CRC. That is one of the reasons for dict=keywords; the wildcards it can support are much more flexible.
With CRC, it 'precomputes' all the wildcard combinations and injects them as separate keywords in the index, e.g. for mickel as a document word, with min_prefix_len=1, indexer will create the words:
mickel
mickel*
micke*
mick*
mic*
mi*
m*
... as words in the index, so all the combinations can match. If using min_infix_len, it also has to do all the combinations at the start as well (so (word_length)^2 + 1 combinations).
... if it had to precompute all the combinations for wildcards in the middle, it would be a lot more again, particularly if it then allowed all middle AND start/end combinations as well.
Although having said that, you can rewrite
select * from all where match ('mick*l')
as
select * from all where match ('mick* *l')
because with min_infix_len, the start and end will be indexed as separate words. You just need to insist that both match (although I can't think how to make them both match the same word!).
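For reference, the relevant settings for this setup would sit in the index definition in sphinx.conf, roughly like this (a sketch; the index name and the omitted directives are placeholders):
index all
{
    # source, path, etc. omitted
    dict          = crc
    min_infix_len = 1
}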

Sphinx query before and after a term

Is it possible to set up a query in Sphinx with a term that also has to match a word either before OR after it?
(TermBefore) (Term) (TermAfter)
so that both
TermBefore Term
Term TermAfter
would match but
Term
does not?
The proximity search operator is pretty much designed for this
"Term TermAfter"~2
http://sphinxsearch.com/docs/current.html#extended-syntax
Ah, I thought you meant 'TermAfter' to actually be the same word, just that it can be before or after.
But if they are two different terms, possibly the easiest is just to do:
"TermBefore Term" | "Term TermAfter"
Just simple phrase operator, where either phrase must match.
Edit again:
If you don't want the matches adjacent, use the strict order operator rather than the phrase operator...
(TermBefore << Term) | (Term << TermAfter)