How to configure the rate filter in Snort in a Windows environment?

I have installed and configured Snort in a Windows environment. According to the documentation, the threshold keyword is deprecated and other filters should be used instead. I need to use rate_filter, but I don't know how to set it up in my Snort installation.
I have read the documentation and various internet resources, and I have added the rate_filter examples directly to my snort.conf file, but I still can't get the behavior I want.
Am I missing something?

You may need to share your filter to get the best help here. For reference, here is an example layout:
Example 1 - allow a maximum of 100 connection attempts per second from any one IP address, and block further connection attempts from that IP address for 10 seconds:
rate_filter \
    gen_id 135, sig_id 1, \
    track by_src, \
    count 100, seconds 1, \
    new_action drop, timeout 10
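One thing that is easy to miss: rate_filter does not match traffic by itself; its gen_id/sig_id pair must refer to an event that actually fires. For a plain text rule the generator ID is 1 and sig_id is the rule's sid. A sketch pairing the filter with a hypothetical local rule (sid 1000001 is just an example value) could look like this in snort.conf:
# hypothetical local rule whose alerts we want to rate-limit (example sid)
alert tcp any any -> $HOME_NET 3389 (msg:"RDP connection attempt"; flags:S; sid:1000001; rev:1;)
# fire the filter after 5 events in 60 seconds from one source address;
# note that new_action drop only takes effect when Snort runs inline,
# so in plain IDS mode keep new_action alert
rate_filter \
    gen_id 1, sig_id 1000001, \
    track by_src, \
    count 5, seconds 60, \
    new_action drop, timeout 60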


How can I see who is hogging all of the resources on Sun Grid Engine?

At my job we use Sun Grid Engine (qstat, qsub, etc.).
Is there a way to see the percentage of resources currently used by each user? I know there is qhost -u "*", but its output is harder to interpret because it doesn't show how many resources are being used relative to what is available.
If this is out of scope for SO then I will remove it.
Are there any built-in tools that do this, or public scripts on GitHub that can achieve this functionality?
The command qstat -u "*" -nenv -j "*" outputs job details, including a line with each job's usage:
usage 1: wallclock=44:12:05:42, cpu=1:10:40:01, mem=9284973.79642 GBs, io=631.16018 GB, iow=65.130 s, ioops=22213570, vmem=284.719M, maxvmem=65.121G, rss=14.435M, ..., maxrss=61.611G, maxpss=68.641G
I am not aware of a public script that would parse it and cross-reference the output of qhost to retrieve host resources.
I think I should be working on this over the weekend. :)
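In the meantime, here is a rough sketch of the parsing half in Python. It assumes each job block in the output contains an "owner:" line and a "usage N:" line with a cpu=[d:]h:m:s field as shown above; field layouts vary between SGE versions, so treat the patterns as a starting point rather than a finished tool:
#!/usr/bin/env python
# Rough sketch: aggregate per-user CPU time from `qstat -u "*" -nenv -j "*"`.
import re
import subprocess
from collections import defaultdict

out = subprocess.check_output(
    ['qstat', '-u', '*', '-nenv', '-j', '*']).decode(errors='replace')

cpu_by_user = defaultdict(float)
owner = None
for line in out.splitlines():
    m = re.match(r'owner:\s+(\S+)', line)
    if m:
        owner = m.group(1)
        continue
    if owner and line.lstrip().startswith('usage'):
        m = re.search(r'cpu=(?:(\d+):)?(\d+):(\d+):(\d+)', line)  # [d:]h:m:s
        if m:
            d, h, mnt, s = (int(x or 0) for x in m.groups())
            cpu_by_user[owner] += d * 86400 + h * 3600 + mnt * 60 + s

total = sum(cpu_by_user.values()) or 1.0
for user, cpu in sorted(cpu_by_user.items(), key=lambda kv: -kv[1]):
    print('%-12s %12.0f cpu-seconds (%5.1f%%)' % (user, cpu, 100 * cpu / total))
Cross-referencing qhost for the available resources would bolt on in the same way.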

How do you access a MongoDB database from two OpenShift apps?

I want to be able to access my MongoDB database from two OpenShift apps: one is an interactive database maintenance app used via the browser, the other is the principal web application, which runs on mobile devices via an OpenShift app. As I see it, in OpenShift MongoDB gets set up within a particular app's folder space, not independent of that space.
What would be the method to give multiple apps access to the database?
Is my only choice to merge the functionality of both OpenShift apps into one? That's not ideal; it tastes like a bad plate of spaghetti.
2018 update: this applies to OpenShift 2. Version 3 is very different; while the general rules of Linux and scaling still apply, the details here are obsolete.
Although @MartinB's answer was timely and correct, it's just a link, so let me put the essentials here.
Assuming that a non-shared DB is already set up, you need to find its host and port. You can ssh to your app (the one with the DB) or use rhc:
rhc ssh -a appwithdb
env | grep MONGODB
env prints all the environment variables, and grep filters them to show only the Mongo-related ones. You should see something like:
OPENSHIFT_MONGODB_DB_HOST=xxxxx-yyyyy.apps.osecloud.com
OPENSHIFT_MONGODB_DB_PORT=zzzzz
xxxxx is the ID of the gear that Mongo sits on
yyyyy is your domain/namespace
zzzzz is MongoDB port
Now you can use these to create a connection to the DB from anywhere in your OpenShift environment. Another application has to use the xxxxx-yyyyy:zzzzz address. You can store these values in custom variables to make maintenance easier:
$ rhc env-set \
MYOWN_DB_HOST=xxxxx-yyyyy \
MYOWN_DB_PORT=zzzzz \
MYOWN_DB_PASSWORD=****** \
MYOWN_DB_USERNAME=admin..... \
MYOWN_DB_NAME=dbname...
And then use these environment variables instead of the standard ones. Just remember that they are not updated automatically if the DB moves.
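For illustration, a minimal sketch of the second app reading those variables, assuming Python with the era-appropriate pymongo 2.x driver (any driver works the same way):
# Minimal sketch: connect using the MYOWN_* variables set above.
import os
from pymongo import MongoClient

client = MongoClient(os.environ['MYOWN_DB_HOST'],        # the xxxxx-yyyyy gear host
                     int(os.environ['MYOWN_DB_PORT']))   # zzzzz
db = client[os.environ['MYOWN_DB_NAME']]
db.authenticate(os.environ['MYOWN_DB_USERNAME'],         # pymongo 2.x-style auth
                os.environ['MYOWN_DB_PASSWORD'])
print(db.collection_names())                             # quick smoke test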
Please read the following article from the OpenShift blog: https://blog.openshift.com/sharing-database-across-applications/

ElasticSearch with Play 2 configuration

I am trying to use the ElasticSearch module (https://github.com/cleverage/play2-elasticsearch) with my Play 2 application. In the readme, it says I should add the following to my application.conf:
## define local mode or not
elasticsearch.local=false
## list clients
elasticsearch.client="192.168.0.46:9300"
# ex : elasticsearch.client="192.168.0.46:9300,192.168.0.47:9300"
What is local mode? What is my client URL supposed to be? I cannot find any information on what these options should be. With my current options, I get a NoNodeAvailableException.
Some people suggest:
elasticsearch.local=false
elasticsearch.client=mynode1:9200,mynode2:9200
But what are mynode1 and mynode2? It doesn't work with my application. Can anyone help? Thanks
What is local mode?
If elasticsearch.local=true, an Elasticsearch node is started inside your application (embedded).
What is my client URL supposed to be?
It's your host:port, but the port is the TCP transport port defined on your Elasticsearch node.
By default, transport ports start at 9300 ( http://www.elasticsearch.org/guide/reference/modules/transport.html )
I can not find any information on what these options should be. With my current options, I get a NoNodeAvailableException.
I think you have a problem with the port number.
mynode1 and mynode2 are elasticsearch nodes.
Do you have any Elasticsearch node running?
On which IP address?
Can you try connecting to these nodes using curl? For example:
curl localhost:9200
Or
curl YOURIPADDRESS:9200
If one of these succeeds, then configure your Play app using YOURIPADDRESS:9300, as Nicolas Boire wrote before.
If neither succeeds, check that you have installed Elasticsearch and started it first.
HTH
I've just had the same problem; be sure that you respect the version requirements written in the table: https://github.com/cleverage/play2-elasticsearch
At first I had set up the latest version of the plugin (0.8.1), but my Elasticsearch version was 1.0.2.
After starting ES version 0.9.13 instead, it worked.
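A quick way to check both reachability and the version pairing is to read the banner Elasticsearch serves on its HTTP port; a small Python sketch (YOURIPADDRESS is a placeholder as above):
# Minimal sketch: confirm the node answers HTTP on 9200 and read its
# version to check against the plugin's compatibility table. Note that
# 9300, the port used in elasticsearch.client, speaks the binary
# transport protocol, so an HTTP request there fails by design.
import json
from urllib.request import urlopen

with urlopen('http://YOURIPADDRESS:9200', timeout=5) as resp:
    info = json.load(resp)
print(info['version']['number'])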

Is there an API for Ganglia?

Hello, I would like to enquire whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server.
The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition (DTD) can be found in the Ganglia source on GitHub.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
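Since the interactive port speaks raw text rather than HTTP, a plain socket is enough to script it; a minimal Python sketch (the host name is a placeholder):
# Minimal sketch: send a raw-text query to gmetad's interactive port
# (8652) and print the XML reply. For port 8651, just connect and read;
# no query is sent.
import socket

def gmetad_query(host, query, port=8652):
    sock = socket.create_connection((host, port), timeout=10)
    sock.sendall((query + '\n').encode())
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:                 # gmetad closes the connection when done
            break
        chunks.append(data)
    sock.close()
    return b''.join(chunks).decode()

print(gmetad_query('ganglia-server', '/?filter=summary'))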
Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle
There is also an API you can install from PyPI with pip install gangliarest; it sets up a configurable API backed by a Redis cache and indexer to improve performance.
https://pypi.python.org/pypi/gangliarest

haproxy - which configuration files

I have an HAProxy install which was configured by someone who has left the company. It runs on Ubuntu 10.04 and seems to use three configuration files in the directory /etc/haproxy:
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point of the haproxy.https.cfg file, as I believe (in our configuration) everything could be handled by the single haproxy.http.cfg file, but when I remove the https file HAProxy complains bitterly and refuses to run. My question:
Is this the standard configuration HAProxy uses? If not (I can't find a reference to the "s" file anywhere), can anyone suggest how HAProxy concludes it should use it?
Thanks
The very answer to your question: your haproxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg, probably from /etc/init.d/haproxy, but mileage varies depending on your distribution).
If you remove one of the files, of course it will complain.
This is not particularly standard, but it isn't bad either: it helps structure the configuration rather than having one very long file.
The task of the .https version is most likely to redirect HTTPS traffic towards a service that can handle HTTPS (usually stunnel or nginx), since haproxy cannot terminate SSL connections (stunnel has to be patched; see the haproxy page).
If you want, you can merge those files into one or two; just find out how haproxy is launched (check init.d, or let us know which distribution) and fix the launch command appropriately.
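To see which files your instance actually loads, check the command line of the running process; the -f flags listed there are authoritative:
ps aux | grep '[h]aproxy'
# the output should show something like:
#   /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/haproxy.http.cfg -f /etc/haproxy/haproxy.https.cfg ...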
I believe that it is only /etc/haproxy/haproxy.cfg that is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt