How to go to the first matched row when using less

I am using less to do log analysis.
After I search for some string there are a lot of matches,
and then I use n or N to go through the matched lines, but sometimes I want to go to the first matched line or the last matched line. How do I go there?
I searched on Google, but there seems to be no answer for such a question. Is it possible to do that?

g will bring you to the first line, and then n will bring you to the first match; using both in uppercase (first G, then N) will bring you to the last match.
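For example, with a hypothetical log file app.log and search string ERROR:
/ERROR brings up the matches,
g followed by n jumps to the top of the file and then to the first match,
G followed by N jumps to the end of the file and then back to the last match.
You can also open the file already positioned on the first match, since an option beginning with + is run by less as an initial command:
less +/ERROR app.log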

Related

How to use OR condition in LibreOffice?

I am trying to use the formula below to set conditions in LibreOffice but I keep getting an error. What am I doing wrong with the statement below:
=IF(G2<=2,'negative',IF(OR(G2>2 & G2<=3,'neutral',IF(OR(G2>=4,'positive))))))
Thanks
It seems that your formula is missing the last ':
'positive))))))
should be 'positive'))))))
Also,
&
is the string-concatenation operator in LibreOffice, so it cannot combine conditions; what you actually need here is AND(), not OR().
But you can simplify your formula to
=IF(G2<=2,"negative",IF(AND(G2>2,G2<=3),"neutral","positive"))
The first test is whether the number is less than or equal to 2 (negative),
the second test is whether the number is between 2 and 3 (neutral),
and then no further test is needed, as positive is the only remaining possibility.
For a different locale, a slightly shorter, and I'd say simpler, version that also avoids the need for OR/AND:
=IF(G2<=2,"negative",IF(G2<=3,"neutral","positive"))
Once the first <=2 test is handled (either by outputting negative or by proceeding to the 'result if FALSE'), there is no longer any possibility of a value of 2 or less, so the AND is not necessary.
The above does, though, fill a gap the OP left between 3 and 4: values greater than 3 but less than 4 now come out as positive.
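If it helps to see why no AND/OR is needed, here is the same branching spelled out as a small Python sketch (the function name and sample value are purely illustrative, nothing LibreOffice-specific):
def classify(g2):
    # mirrors =IF(G2<=2,"negative",IF(G2<=3,"neutral","positive"))
    if g2 <= 2:
        return "negative"
    elif g2 <= 3:   # only reached when g2 > 2, so no AND is needed
        return "neutral"
    else:           # everything above 3, which also covers the 3-4 gap
        return "positive"

print(classify(2.5))   # neutral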

Sphinx exact match to column string

I thought that
Match('^Word$')
Would only find records that are exactly 'Word'
However, although this does work for single words, it does not for multiple:
Match ('^Final Word$')
Finds 'Final Word' and 'Final and Last Word'.
as does
Match ('^"Final Word"$')
How do I tell Sphinx to only find an exact match?
Update: after some testing, the best I can do is to use weighting/a ranker and go without the double quotes:
MATCH('^Final Word$') order by weight() desc limit 1 OPTION ranker=PROXIMITY_BM25
So I forced an exact match with ranking and a limit; it would still be nice to know how to actually say 'only return exact matches'.
One issue with the above is that if I do not have 'Final Word' in the table, it will still find all the others, e.g. 'Final and Last Word', which is behavior I do not want.
You just got your operators in the wrong order :)
Match('"^Final Word$ "')
(having a space after $ helps with some mysterious sphinx bug!)
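For context, a full SphinxQL query with that expression might look like the following (my_index, id and title are placeholder names here):
SELECT id, title FROM my_index WHERE MATCH('"^Final Word$ "');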
So the issue turned out to be that, in my efforts to make this work, one step had been to specify the ranker
Option Ranker=PROXIMITY_BM25
which had worked for me up to then. What actually works is
Match('^Final Word$')
and then either not specifying a ranker, or specifying extended if the ranker in the config is defined as something else (it is extended by default).

UIMA Ruta. Retrieve phrases separated by WS (spaces, breaks, etc.)

I want to retrieve phrases separated by spaces, line breaks and other punctuation symbols.
I've spent a lot of time trying to find out the best way to do that.
Option 1. The easiest way.
DECLARE T1, T2;
"cool rules" -> T1;
"cool rule" -> T2;
Input: "123cool rules".
Result: T1 and T2 are triggered;
Option 2. Using WORDLIST and WORDTABLE.
Suppose the wordlist 1.txt contains 2 rows:
cool rules
cool
The code for the extraction is the following:
WORDLIST WList = '1.txt';
DECLARE W1;
Document{-> MARKFAST(W1, WList, true, 2)};
Input: "cool rules".
Result: only the first row is extracted. I guess that in this case overlapping entries are not triggered.
Option 3. Mark combination of two tokens
DECLARE T1;
("cool" "rule") {-> T1};
Input: "cool rules cool rule 1cool rule"
Result: 2 annotations: cool rule + 1cool rule. Extraction becomes about 10 times slower.
Option 4. REGEXP matching
Maybe it is possible to match a pattern such as "cool\\srule", but I have no idea how to define the type expression. SW*{REGEXP("cool\\srule")->T1} does not produce any results.
As you can see, I'm trying to solve a very simple task but have not succeeded yet. Option 3 is a really good way to do it, but the extraction process becomes about 10 times slower.
If you want to identify specific phrases, you should use a dictionary lookup, not rules directly.
Therefore, I'd recommend the MARKFAST option 2. However, there are two problems: (a) only longest matches are supported and (b) you either need to change the segmentation (tokenization) or do some postprocessing.
(a) This cannot be solved. If this is really required, a different dictionary annotator should be used. See e.g., the UIMA mailing lists.
(b) MARKFAST works on RutaBasic annotations, which are automatically created for each smallest part. Because of the default seeder, the token "1cool" consists of two RutaBasics, one for the NUM and one for the SW. If you do not want to change the preprocessing, you can simply apply a rule that fixes that, like
RETAINTYPE(WS);
ANY{-PARTOF(WS)} t:#T1{-> UNMARK(t)};
By the way, option 4 won't work because the REGEXP condition checks the covered text of the matched annotation SW, which only represents one token. If you do something like (SW+){REGEXP("cool\\srule")->T1}, then the rule won't match if there is another SW afterwards.
DISCLAIMER: I am a developer of UIMA Ruta

Will this code allow me to assign a number to a list and incrementally increase the number?

[ set list N = 1 () set list N = 1
lput number-of-patches destination origin list N N + 1]
I wish to be able to store information about collections of patches, and when the criteria for filling the list are met, the number of the list will be increased. Will this code work?
Just looking at it, it will give you several syntax errors, regardless of whether the structure will do what you want. For example, the way to construct a list with element '1' and name 'N' (which is what I think the first line is supposed to do) would be set N (list 1). You can test this by writing code as below and running test (e.g. by typing test in the command center at the bottom of the interface).
globals [N]
to test
  set N (list 1)
  print N
end
When writing code, your life is a lot easier if you build up the code in pieces, testing each one as you go either by inspecting agents to see if their property values change as you expect and/or putting print statements in lots of places to see what happens to your variables. This way you are introducing and fixing only a small number of errors in each step. Also, this means you are never writing code that you can't test immediately.
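In that spirit, a possible next step towards "incrementally adding to the list" is to append values with lput, for example by extending the test procedure above (the value appended here is just an example):
set N lput count patches N   ;; appends the current patch count, so N might become [1 1089] in a 33 x 33 world
print N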

Which regexp is faster

I am using PCRE |^/foo/(.*?)(?::(?:bar)?)?$| or |^/foo/(.*?)(?::bar)?:?$|. This will be a replace, so we want to strip : and :bar from the end while doing the replacement. I know the two are not exactly the same, but that does not matter much here.
I would use the first one, as it only has to check for : once. The second one could match the first three characters of a trailing :bat (say) before having to backtrack, then check for : again. Also, the second one could match :bar: whereas the first one can't. The actual speed difference would be tiny. The second one would be better written as /^\/foo\/(.*?)(?::bar|:)?$/
Try not to use regex metacharacters as delimiters!
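For what it's worth, here is a small Python sketch of the replacement with the first pattern (the sample strings are made up; Python's re handles these PCRE constructs the same way):
import re

pattern = re.compile(r"^/foo/(.*?)(?::(?:bar)?)?$")
for path in ["/foo/abc", "/foo/abc:", "/foo/abc:bar"]:
    # strip a trailing ":" or ":bar", keeping everything before it
    print(pattern.sub(r"/foo/\1", path))
# all three iterations print /foo/abc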