ERROR:root:Error running query: RequestError(400, u'parsing_exception', u'no [query] registered for [not]') - elasticsearch-6

I am using metricbeat-6.4.0, elasticsearch-6.4.0, and elastalert-0.1.35.
I want to send an email alert when a given process, say notepad++.exe, stops.
Below is my rule:
realert:
  minutes: 60
from_addr: appmonProcess@company.com
alert_text: "Dear User,\n\t notepad++.exe is not running on server IN-MUM-EADMTOOL for the last 15 minutes"
es_host: linux-a2adm.in.company.com
index: metricbeat-6.4.0-*
smtp_host: ismtp.corp.company.com
type: frequency
es_port: 29200
filter:
  term:
    beat.hostname: IN-MUM-EADMTOOL
  not:
    term:
      system.process.name: notepad++.exe
timeframe:
  minutes: 15
alert: email
name: 93__server__IN-MUM-EADMTOOL__system.process.name__eqnotepad++.exe__1__15
email: ["aviral.srivastava@company.com"]
num_events: 1
I am getting the error below:
INFO:elastalert:Starting up
WARNING:elasticsearch:GET http://linux-a2adm.in.company:29200/metricbeat-6.4.0-*/_search?_source_include=%40timestamp%2C%2A&ignore_unavailable=true&scroll=30s&size=10000 [status:400 request:0.035s]
ERROR:root:Error running query: RequestError(400, u'parsing_exception', u'no [query] registered for [not]')
INFO:elastalert:Ran 93__server__IN-MUM-EADMTOOL__system.process.name__eqnotepad++.exe__1__15 from 2018-11-09 17:18 India Standard Time to 2018-11-09 17:29 India Standard Time: 0 query hits (0 already seen), 0 matches, 0 alerts sent
INFO:elastalert:Sleeping for 59.895 seconds

The issue is with the filter; try using:
filter:
- and:
  - term:
      beat.hostname: "IN-MUM-EADMTOOL"
  - not:
      term:
        system.process.name: "notepad++.exe"

For Elasticsearch versions newer than 5, this filter will not work: queries that embed filters in the and, not, or or keywords are rejected. Instead, you can use the query key to write the complete query.
filter:
- query:
    query_string:
      query: "beat.hostname: IN-MUM-EADMTOOL AND NOT system.process.name: notepad++.exe"
The relevant documentation states that the kind of filter you have written, using keywords like and and not, only works for Elasticsearch 2.X or earlier.
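If you prefer not to go through query_string, the same condition can also be written as a bool query, which is the form Elasticsearch 6 expects natively. This is a minimal sketch, assuming ElastAlert passes the filter list straight through to Elasticsearch and that both fields are keyword-mapped (as they are in the default metricbeat-6 template):

filter:
- bool:
    must:
      - term:
          beat.hostname: IN-MUM-EADMTOOL
    must_not:
      - term:
          system.process.name: notepad++.exe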

Related

ZAP tool showing security vulnerabilities but we can't find those vulnerabilities in our source code?

The vulnerabilities show as:
SQL Injection - SQLite
Method: GET
Parameter: query
Attack: ' | case randomblob(10000000) when not null then "" else "" end --
Evidence: The query time is controllable using parameter value [' | case randomblob(10000000) when not null then "" else "" end --], which caused the request to take [542] milliseconds, parameter value [' | case randomblob(100000000) when not null then "" else "" end --], which caused the request to take [900] milliseconds, when the original unmodified query with value [query] took [167] milliseconds.
SQL Injection - Oracle - Time Based
Method: GET
Parameter: query
Attack: field: [query], value [query and exists (SELECT UTL_INADDR.get_host_name('10.0.0.1') from dual union SELECT UTL_INADDR.get_host_name('10.0.0.2') from dual union SELECT UTL_INADDR.get_host_name('10.0.0.3') from dual union SELECT UTL_INADDR.get_host_name('10.0.0.4') from dual union SELECT UTL_INADDR.get_host_name('10.0.0.5') from dual) -- ]
Advanced SQL Injection - Oracle AND time-based blind
Method: GET
Parameter: query
Attack: query AND 2972=DBMS_PIPE.RECEIVE_MESSAGE(CHR(113)||CHR(65)||CHR(80)||CHR(114),5)
SQL Injection - MsSQL
Method: GET
Parameter: query
Attack: query WAITFOR DELAY '0:0:15' --
SQL Injection - Hypersonic SQL - Time Based
Method: GET
Parameter: query
Attack: field: [query], value ["; select "java.lang.Thread.sleep"(15000) from INFORMATION_SCHEMA.SYSTEM_COLUMNS where TABLE_NAME = 'SYSTEM_COLUMNS' and COLUMN_NAME = 'TABLE_NAME' -- ]
SQL Injection - PostgreSQL - Time Based
Method: GET
Parameter: query
Attack: field: [query], value [case when cast(pg_sleep(15) as varchar) > '' then 0 else 1 end]
SQL Injection - MySQL
Method: GET
Parameter: query
Attack: query / sleep(15)
Advanced SQL Injection - PostgreSQL > 8.1 stacked queries (comment)
Method: GET
Parameter: query
Attack: query;SELECT PG_SLEEP(5)--
Advanced SQL Injection - Oracle stacked queries (DBMS_PIPE.RECEIVE_MESSAGE - comment)
Method: GET
Parameter: query
Attack: Feb 2018;SELECT DBMS_PIPE.RECEIVE_MESSAGE(CHR(105)||CHR(122)||CHR(102)||CHR(108),5) FROM DUAL--
Advanced SQL Injection - Microsoft SQL Server/Sybase time-based blind.
Method: GET
Parameter: query
Attack: query) WAITFOR DELAY CHAR(48)+CHAR(58)+CHAR(48)+CHAR(58)+CHAR(91)+CHAR(83)+CHAR(76)+CHAR(69)+CHAR(69)+CHAR(80)+CHAR(84)+CHAR(73)+CHAR(77)+CHAR(69)+CHAR(93) AND (1972=1972
All of our source code follows the given example:
public interface UserRepository extends JpaRepository<User, Long> {
    @Query("select u from User u where u.firstname = :firstname or u.lastname = :lastname")
    User findByLastnameOrFirstname(@Param("lastname") String lastname,
                                   @Param("firstname") String firstname);
}
Pick one of the time-based attacks and rerun it - you can do that by right-clicking on the alert in ZAP and selecting 'Open/Resend with Request Editor'.
Check how long the request took (it's shown at the bottom of the dialog) - was it the same time as (or a bit more than) the delay the attack uses?
If so, try increasing the delay and resending - does the request now take the longer period of time?
If the response time is controlled by the delay specified in the attack, then you have an SQL injection vulnerability.
Why haven't I said anything about the source code you posted? That's because I have no idea whether that's all of the relevant code :)
You might also want to try running a static analyser on your code - it will probably show loads of false positives, but you can just focus on any SQL injection vulnerabilities it reports.
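For contrast with the parameterized repository method you posted (where :firstname and :lastname are bound values, not concatenated text), here is a sketch of what an actually injectable data-access method would look like next to a safe one. The method names and the direct EntityManager usage are illustrative, not taken from your code:

import java.util.List;
import javax.persistence.EntityManager;

public class UserQueries {

    // UNSAFE (hypothetical, shown only for contrast): user input is
    // concatenated into the query text, so a payload such as
    // x' OR '1'='1 rewrites the query itself.
    public List<User> findByLastnameUnsafe(EntityManager em, String lastname) {
        String jpql = "select u from User u where u.lastname = '" + lastname + "'";
        return em.createQuery(jpql, User.class).getResultList();
    }

    // SAFE, the same pattern as the posted @Query/@Param code: the value
    // is bound as a parameter and is never parsed as query text.
    public List<User> findByLastnameSafe(EntityManager em, String lastname) {
        return em
            .createQuery("select u from User u where u.lastname = :lastname", User.class)
            .setParameter("lastname", lastname)
            .getResultList();
    }
}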

Update rows returned by a complex SQL query with data from query result

I have a multi-table join and want to update a table based on the result of that join. The join table produces both the scope of the update (only those rows whose effort.id appears in the result should be updated) and the data for the update (a new column should be set to the value of a calculated column).
I've made progress but can't quite make it work. Here's my statement:
UPDATE
  efforts
SET
  dropped_int = jt.split
FROM
  (
    SELECT
      ef.id,
      s.id split,
      s.kind,
      s.distance_from_start,
      s.sub_order,
      max(s.distance_from_start + s.sub_order)
        OVER (PARTITION BY ef.id) AS max_dist
    FROM
      split_times st
      LEFT JOIN splits s ON s.id = st.split_id
      LEFT JOIN efforts ef ON ef.id = st.effort_id
  ) jt
WHERE
  ((jt.distance_from_start + jt.sub_order) = max_dist)
  AND
  kind <> 1;
The SELECT produces the correct join table:
id   split  kind  dfs     sub  max_dist  dropped  dropped_int
403  33     2     152404  1    152405    TRUE     33
404  33     2     152404  1    152405    TRUE     33
405  31     2     143392  1    143393    TRUE     33
406  31     2     143392  1    143393    TRUE     33
407  29     2     132127  1    132128    TRUE     33
408  29     2     132127  1    132128    TRUE     33
409  29     2     132127  1    132128    TRUE     33
and the UPDATE does run, but there are two problems: first, it updates all efforts, not just those produced by the query; and second, it sets dropped_int for every effort to the split value of the first row in the query result, whereas I need each effort set to its own associated split value.
If this were non-SQL, it might look something like:
jt_rows.each do |jt_row|
  efforts[jt_row].dropped_int = jt[jt_row].split
end
But I don't know how to do that in SQL. It seems like this should be a fairly common problem, but after a couple of hours of searching I'm coming up short.
How should I modify my statement to produce the described result? If it matters, this is Postgres 9.5. Thanks in advance for any suggestions.
EDIT:
I did not get a workable answer but ended up solving this with a mixture of SQL and native code (Ruby/Rails):
dropped_splits = SplitTime.joins(:split).joins(:effort)
  .select('DISTINCT ON (efforts.id) split_times.effort_id, split_times.split_id')
  .where(efforts: {dropped: true})
  .order('efforts.id, splits.distance_from_start DESC, splits.sub_order DESC')
update_hash = Hash[dropped_splits.map { |x| [x.effort_id, {dropped_split_id: x.split_id, updated_at: Time.now}] }]
Effort.update(update_hash.keys, update_hash.values)
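For reference, the SQL that the scope above roughly generates (a sketch of ActiveRecord's output, assuming conventional foreign keys on split_times) is:

SELECT DISTINCT ON (efforts.id) split_times.effort_id, split_times.split_id
FROM split_times
INNER JOIN splits  ON splits.id  = split_times.split_id
INNER JOIN efforts ON efforts.id = split_times.effort_id
WHERE efforts.dropped = true
ORDER BY efforts.id, splits.distance_from_start DESC, splits.sub_order DESC;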
Use a condition in the WHERE clause that relates the efforts table with the subquery:
efforts.id = jt.id
that is:
WHERE
  ((jt.distance_from_start + jt.sub_order) = max_dist)
  AND
  kind <> 1
  AND
  efforts.id = jt.id
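Putting that together, the complete statement from the question with the correlating condition added should look like the sketch below (same subquery as in the question):

UPDATE
  efforts
SET
  dropped_int = jt.split
FROM
  (
    SELECT
      ef.id,
      s.id split,
      s.kind,
      s.distance_from_start,
      s.sub_order,
      max(s.distance_from_start + s.sub_order)
        OVER (PARTITION BY ef.id) AS max_dist
    FROM
      split_times st
      LEFT JOIN splits s ON s.id = st.split_id
      LEFT JOIN efforts ef ON ef.id = st.effort_id
  ) jt
WHERE
  ((jt.distance_from_start + jt.sub_order) = jt.max_dist)
  AND jt.kind <> 1
  AND efforts.id = jt.id;  -- correlates the update row by row and limits its scope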

Selecting rows only if meeting criteria

I am new to PostgreSQL and to database queries in general.
I have a list of user_ids with the university courses taken and the dates each was started and finished.
Some users have multiple entries and sometimes the start date or finish date (or both) are missing.
I need to retrieve the longest course taken by a user or, if start date is missing, the latest.
If multiple choices are still available, then pick randomly among them.
For example
on user 2 (below) I want to get only "Economics and Politics" because it has the latest date;
on user 6, only "Electrical and Electronics Engineering" because it is the longest course.
The query I did doesn't work (and I think I am off-track):
(SELECT Q.user_id, min(Q.started_at) as Started_on, max(Q.ended_at) as Completed_on,
        Q.field_of_study
 FROM
   (select distinct user_id, started_at, ended_at, field_of_study
    from educations) as Q
 group by Q.user_id, Q.field_of_study)
order by Q.user_id
as the result is:
User_id  Started_on    Completed_on  Field_of_studies
2        "2001-01-01"  ""            "International Economics"
2        ""            "2002-01-01"  "Economics and Politics"
3        "1992-01-01"  "1999-01-01"  "Economics, Management of ..."
5        "2012-01-01"  "2016-01-01"  ""
6        "2005-01-01"  "2009-01-01"  "Electrical and Electronics Engineering"
6        "2011-01-01"  "2012-01-01"  "Finance, General"
6        ""            ""            ""
6        "2010-01-01"  "2012-01-01"  "Financial Mathematics"
I think this query should do what you need. It relies on calculating the difference in days between ended_at and started_at, and uses 0001-01-01 when started_at is null (making the interval really long):
select
  educations.user_id,
  max(educations.started_at) started_at,
  max(educations.ended_at) ended_at,
  max(educations.field_of_study) field_of_study
from educations
join (
  select
    user_id,
    max(
      ended_at::date - coalesce(started_at, '0001-01-01')::date
    ) max_length
  from educations
  where (started_at is not null or ended_at is not null)
  group by user_id
) x on educations.user_id = x.user_id
   and ended_at::date - coalesce(started_at, '0001-01-01')::date = x.max_length
group by educations.user_id;
Sample SQL Fiddle
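The "one best row per user" requirement can also be expressed with Postgres's DISTINCT ON instead of a self-join. This is an alternative sketch, not the answer's method, under the same assumption as above (missing dates are stored as NULL, and 0001-01-01 stands in for a missing start date); DISTINCT ON keeps an arbitrary row among ties, which matches the "pick randomly among them" requirement loosely:

SELECT DISTINCT ON (user_id)
       user_id, started_at, ended_at, field_of_study
FROM educations
WHERE started_at IS NOT NULL OR ended_at IS NOT NULL
ORDER BY user_id,
         -- longest course first; NULLS LAST keeps rows without an end date
         -- from winning by default
         ended_at::date - coalesce(started_at, '0001-01-01')::date DESC NULLS LAST;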

PlayFramework 2 + Ebean - raw Sql Update query - makes no effect on db

I have a Play Framework 2.0.4 application that needs to modify rows in the db.
I need to update a 'few' messages in the db to status "opened" (read messages).
I did it like below:
String sql = " UPDATE message SET opened = true, opened_date = now() "
           + " WHERE id_profile_to = :id1 AND id_profile_from = :id2 AND opened IS NOT true";
SqlUpdate update = Ebean.createSqlUpdate(sql);
update.setParameter("id1", myProfileId);
update.setParameter("id2", conversationProfileId);
int modifiedCount = update.execute();
I have modified PostgreSQL to log all queries.
modifiedCount is the actual number of modified rows - but the query runs inside a transaction.
After the query is done in the db there is a ROLLBACK - so the UPDATE is not applied.
I have tried changing the db to H2 - with the same result.
This is the query from the postgres audit log:
2012-12-18 00:21:17 CET : S_1: BEGIN
2012-12-18 00:21:17 CET : <unnamed>: UPDATE message SET opened = true, opened_date = now() WHERE id_profile_to = $1 AND id_profile_from = $2 AND opened IS NOT true
2012-12-18 00:21:17 CET : parameters: $1 = '1', $2 = '2'
2012-12-18 00:21:17 CET : S_2: ROLLBACK
..........
The Play Framework documentation and the Ebean docs state that there is no transaction (if one is not declared, or a transient one per query if needed).
So... I tried this trick:
Ebean.beginTransaction();
int modifiedCount = update.execute();
Ebean.commitTransaction();
Ebean.endTransaction();
Logger.info("update mod = " + modifiedCount);
But this makes no difference - the same behavior ...
Ebean.execute(update);
Again - the same ..
Next, I annotated the method with
@Transactional(type=TxType.NEVER)
and
@Transactional(type=TxType.MANDATORY)
None of them made a difference.
I am so frustrated with Ebean :(
Can anybody help, please?
BTW.
I set
Ebean.getServer(null).getAdminLogging().setDebugGeneratedSql(true);
Ebean.getServer(null).getAdminLogging().setDebugLazyLoad(true);
Ebean.getServer(null).getAdminLogging().setLogLevel(LogLevel.SQL);
to see the query in the Play console - other queries are logged, but this update is not.
Just remove the initial space... Yes... I couldn't believe it either...
Change from " UPDATE... to "UPDATE...
And that's all...
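That is, the snippet from the question works once the SQL string starts with "UPDATE" rather than " UPDATE" (everything else unchanged):

String sql = "UPDATE message SET opened = true, opened_date = now() "
           + "WHERE id_profile_to = :id1 AND id_profile_from = :id2 AND opened IS NOT true";
SqlUpdate update = Ebean.createSqlUpdate(sql);
update.setParameter("id1", myProfileId);
update.setParameter("id2", conversationProfileId);
int modifiedCount = update.execute();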
I think you have to use raw SQL instead of the createSqlUpdate statement.

Speeding up postgres query

I have a table 'auctions' with 60k records.
It has a tsvector column that contains tsearch vectors like the one below:
auctions.tsvector_content_tsearch
107658 | '-75':75 '-83':81 '0.265':49 '0.50':140 '1':62 '1000':61 '1080':38 '16':39 '160':91 '170':86 '1920':36 '1920x1080':65,69 '2':154 '219':129 '23':3,20 '23.0':31 '236v3lsb':6,23 '24':164 '24.75':134 '250':58 '3.190':117 '30':80 '426':127 '5':54 '5.0':99 '56':74 '566':125 '9':40 'black':45 'cal':32 'cd/m2':59 'compatible':158 'czarny':46 'czas':51 'czuwać':139 'częstotliwość':71,77 'd':110,114 'd-sub':109 'dodatkowy':146,152 'dvi':7,24,113 'dvi-d':112 'ekran':30 'energia':131,137 'energy':97 'epeat':100 'ergonomics':104 'full':41 'g':120 'gs':106 'gwarancja':153,161 'hd':42 'hz':76 'informacja':151 'jasność':56 'kabel':147,149 'kensington':156 'kg':116,118 'khz':82 'kolor':43 'kontrast':60 'kąt':83,88 'lcd':2,15,19 'led':4,21 'lina':28 'lock':157 'maksymalny':64 'matryca':34,53,57 'miejsce':165 'miesiąc':163 'mm':50 'monitor':1,14,18,96 'mś':55 'nazwać':12 'norma':93 'obudowa':44 'odchylać':72,78 'ogólny':17 'okres':159 'opis':16 'optymalny':68 'philips':5,11,22 'piksel':66,70 'pionowy':73,85 'plamka':48 'pobór':130,136 'poziom':90 'poziomy':79,90 'producent':10 'przekątna':29 'przeć':133 'reakcja':52 'rodzina':25 'rohs':102 'rozdzielczość':63,67 'rękojmia':160 'serwis':167 'serwisować':166 'silver':101 'specyfikacja':8 'spełniać':94 'star':98 'stopień':87,92 'sub':111 'techniczny':9 'tryb':138 'tryba':138 'tuv':103,105 'typ':13,33 'v':27 'v-line':26 'vga':150 'waga':115 'wbudować':142 'widzenia':84,89 'widzenie':84,89 'widzieć':84,89 'wielkość':47 'wuxga':35 'wymiar':119 'wyposażenie':145 'wyposażyć':145 'x':37,121,123,126,128 'zasilacz':143 'zasilać':148 'zewn':108 'zewnętrzny':168 'złączać':107 'złącze':107 'łat':155 'ś':122
The auctions table has an index on that column:
"auctions_tsvector_content_tsearch_idx" gin (tsvector_content_tsearch)
When I search for matching vectors, the query takes about 4000-5000 ms, which is too long.
Is there any way to speed things up here?
EXPLAIN SELECT auctions.id FROM auctions WHERE (auctions.tsvector_content_tsearch @@ to_tsquery('polish', 'lcd'));
                          QUERY PLAN
--------------------------------------------------------------
 Seq Scan on auctions  (cost=0.00..6598.02 rows=7762 width=4)
   Filter: (tsvector_content_tsearch @@ '''lcd'''::tsquery)
(2 rows)
EDIT:
OK, I think I found the problem: the Polish dictionary.
Using the standard Postgres dictionary fixed the long query time.
Thanks for the tips.
Apparently, the planner estimated that a sequential scan would be faster than using the index. Try the following (both sketched below):
SET enable_seqscan=off (useful for testing; however, do not use it in production)
raising the statistics target
That behaviour sometimes occurs with GIN indices. Check this thread on the PostgreSQL mailing list. You can also consult the official PostgreSQL documentation about this issue.
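Concretely, the two suggestions look like this; the statistics target of 1000 is an illustrative value, not a recommendation:

-- For testing only: discourage sequential scans, then re-check the plan
-- to confirm the GIN index can be used at all.
SET enable_seqscan = off;
EXPLAIN SELECT auctions.id
FROM auctions
WHERE auctions.tsvector_content_tsearch @@ to_tsquery('polish', 'lcd');

-- Give the planner better row estimates for the tsvector column,
-- then re-analyze the table so the new target takes effect.
ALTER TABLE auctions
  ALTER COLUMN tsvector_content_tsearch SET STATISTICS 1000;
ANALYZE auctions;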