I have been working with Snort IDS and have successfully generated some test logs. The problem I am facing has to do with their format (alert_fast). Some example logs are provided below.
07/23-20:08:56.631567 [**] [1:2002911:4] ET SCAN Potential VNC Scan 5900-5920 [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} 10.42.42.253:58606 -> 10.42.42.25:5906
07/23-20:08:56.685455 [**] [1:2010937:2] ET POLICY Suspicious inbound to mySQL port 3306 [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 10.42.42.253:40328 -> 10.42.42.56:3306
Syslog-ng prepends a header to each message, giving:
Jul 23 20:08:56 SOME_IP 07/23-20:08:56.685455 [**] [1:2010937:2] ET POLICY Suspicious inbound to mySQL port 3306 [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 10.42.42.253:40328 -> 10.42.42.56:3306
I need a way to get rid of that initial data. I tried using destination d_file { file("/var/log/file.log" template("$MSG\n")); }; but then it yields:
08:56.685455 [**] [1:2010937:2] ET POLICY Suspicious inbound to mySQL port 3306 [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 10.42.42.253:40328 -> 10.42.42.56:3306
As you can see, part of the original log message is removed as well.
Please note that I want to avoid changing to a different Snort log format at all costs. Surely there must be some way to fix this?
syslog-ng is prepending a syslog header to the messages because they do not look like well-formed syslog messages, so syslog-ng does not parse them correctly.
Try using a separate source for these messages and set the flags(no-parse) option on that source. Then the template("$MSG\n") in your destination should give you the result you want.
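A minimal sketch of what that could look like (the source name, reading the alerts with a file() source, and the paths are assumptions; adapt them to how the Snort messages actually reach syslog-ng):
source s_snort {
    file("/var/log/snort/alert" flags(no-parse));
};
destination d_file {
    file("/var/log/file.log" template("$MSG\n"));
};
log { source(s_snort); destination(d_file); };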
Regards,
Robert Fekete
Thanks for responding, Robert. Unfortunately I already had flags(no-parse) as part of my original setup. Here's what fixed it ($MSGHDR apparently holds the part of the line that syslog-ng split off as a header, so prepending it restores the full message):
template my_template {
template("$MSGHDR$MSG\n");
template_escape(no);
};
...
destination some_name {
file("/var/log/snort/alert" template(my_template));
};
We changed the configuration of our WebLogic servers to use HTTPS and T3S for connections, using the secure encrypted port 9002 instead of the cleartext port 7001. However, when using the WebLogic Scripting Tool (WLST)'s connect() function, errors are thrown. One such error is as follows:
WLSTException: Error occurred while performing connect : Cannot connect via t3s or https. If using demo certs, verify that the -Dweblogic.security.TrustKeyStore=DemoTrust system property is set. : t3s://DatServer:9002: Destination 10.10.100.3, 9002 unreachable; nested exception is:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem; No available router to destination
Use dumpStack() to view the full stacktrace :
The syntax of the connect function is: connect('user', 'password', 't3s://host:9002')
This connect() function worked fine before the switch from HTTP to HTTPS. Now we cannot connect to the remote admin server using the connect command. Does anyone have any idea how to fix this?
I read some interesting help suggestions and tips, but none of them seemed to work. They are located here: https://community.oracle.com/thread/1036828
We were able to connect to the remote host and port via telnet. With netstat we saw that the port is open and listening for connections on the loopback address. We tried adding these options to the script invocation: java -cp /path/to/weblogic.jar weblogic.WLST -Dweblogic.security.TrustKeyStore=DemoTrust -Dssl.debug=true -Dweblogic.security.SSL.ignoreHostnameVerification=true -Djava.security.egd=file:/dev/./urandom but this also did not work.
We enabled tunneling in the General tab of WebLogic, but not in the HTTP tab. I am not in control of the server, so I can only suggest things and hope that the instructions are followed.
I got it running in 12.2 by adding the following lines at the end of
../oracle_common/common/bin/setWlstEnv_internal.sh
(you need to customize lines 5 and 6, the custom trust keystore file name and passphrase, with your own values):
JAVA_OPTIONS="-Dweblogic.ssl.JSSEEnabled=true ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.SSL.enableJSSE=true ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.SSL.ignoreHostnameVerification=true ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.TrustKeyStore=CustomTrust ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.CustomTrustKeyStoreFileName= ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.CustomTrustKeyStorePassPhrase= ${JAVA_OPTIONS}"
JAVA_OPTIONS="-Dweblogic.security.CustomTrustKeyStoreType=JKS ${JAVA_OPTIONS}"
export JAVA_OPTIONS
and modifying in
../oracle_common/common/bin/wlst_internal.sh
the line starting with
eval '"${JAVA_HOME}/bin/java"' ${JVM_ARGS} ...
by adding ${JAVA_OPTIONS}
so that it looks as follows:
eval '"${JAVA_HOME}/bin/java"' ${JVM_ARGS} ${JAVA_OPTIONS} weblogic.WLST '"$@"'
Hope this helps, although modifying scripts that are named "..internal.." doesn't give me a good feeling.
Export this before running wlst.sh:
export WLST_PROPERTIES=" -Dweblogic.security.TrustKeyStore=CustomTrust -Dweblogic.security.CustomTrustKeyStoreFileName=/u01/oracle/properties/truststore.jks -Dweblogic.security.CustomTrustKeyStoreType=jks -Dweblogic.security.CustomTrustKeyStorePassPhrase=qaz#1234 " ;
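Put together, a session would look roughly like this (the wlst.sh path under $ORACLE_HOME, the credentials, and the host are illustrative placeholders, not a verified recipe):
export WLST_PROPERTIES="-Dweblogic.security.TrustKeyStore=CustomTrust -Dweblogic.security.CustomTrustKeyStoreFileName=/u01/oracle/properties/truststore.jks -Dweblogic.security.CustomTrustKeyStoreType=jks -Dweblogic.security.CustomTrustKeyStorePassPhrase=qaz#1234"
$ORACLE_HOME/oracle_common/common/bin/wlst.sh
wls:/offline> connect('weblogic', 'password', 't3s://DatServer:9002')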
I installed and configured Suricata to log alerts. It gives me alerts like:
Jan 13 11:22:18 201612317 01/13/2017-11:22:18.308560 [**] [1:2001219:20] ET SCAN Potential SSH Scan [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP}
I wanted to know what the [1:2001219:20] in this alert means.
I found the answer. The three numbers are gid:sid:rev:
1 is the generator ID (gid)
2001219 is the signature ID (sid) of the rule that fired
20 is the revision (rev) of that rule
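For illustration, those numbers map onto the sid and rev options of the rule that fired; the gid defaults to 1 for ordinary text rules. A rule along these lines (a sketch, not the exact ET rule text) would produce that tag, and the [Classification: ...] text comes from the rule's classtype, not from the leading numbers:
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"ET SCAN Potential SSH Scan"; classtype:attempted-recon; sid:2001219; rev:20;)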
UPDATE:
It's not memcached, it's a lot of sockets in TIME_WAIT state:
% ss -s
Total: 2494 (kernel 2784)
TCP: 43323 (estab 2314, closed 40983, orphaned 0, synrecv 0, timewait 40982/0), ports 16756
BTW, I have modified the previous version (below) to use Brad Fitz's memcache client and to reuse the same memcache connection:
http://dpaste.com/1387307/
OLD VERSION:
I have thrown together the most basic webserver in Go, with a handler function doing only two things:
retrieving a key from memcached
sending it as the HTTP response to the client
Here's the code: http://dpaste.com/1386559/
The problem is I'm getting a lot of connection resets on memcached:
2013/09/18 20:20:11 http: panic serving [::1]:19990: dial tcp 127.0.0.1:11211: connection reset by peer
goroutine 20995 [running]:
net/http.func·007()
/usr/local/go/src/pkg/net/http/server.go:1022 +0xac
main.maybe_panic(0xc200d2e570, 0xc2014ebd80)
/root/go/src/http_server.go:19 +0x4d
main.get_memc_val(0x615200, 0x7, 0x60b5c0, 0x6, 0x42ee58, ...)
/root/go/src/http_server.go:25 +0x64
main.func·001(0xc200149b40, 0xc2017b3380, 0xc201888b60)
/root/go/src/http_server.go:41 +0x35
net/http.HandlerFunc.ServeHTTP(0x65e950, 0xc200149b40, 0xc2017b3380, 0xc201888b60)
/usr/local/go/src/pkg/net/http/server.go:1149 +0x3e
net/http.serverHandler.ServeHTTP(0xc200095410, 0xc200149b40, 0xc2017b3380, 0xc201888b60)
/usr/local/go/src/pkg/net/http/server.go:1517 +0x16c
net/http.(*conn).serve(0xc201b9b2d0)
/usr/local/go/src/pkg/net/http/server.go:1096 +0x765
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1564 +0x266
I have taken care to configure Linux kernel networking in such a way as not to get in the way (turning off SYN flood protection, etc.).
...
...
And yet, when testing with "ab" (below), I'm getting those errors.
ab -c 1000 -n 50000 "http://localhost:8000/"
There is no sign anywhere I looked (dmesg, /var/log) that it's the kernel.
I would guess that is because you are running out of sockets - you never close the memc connection here. Check with netstat while your program is running.
func get_memc_val(k string) []byte {
memc, err := gomemcache.Connect(mc_ip, mc_port)
maybe_panic(err)
val, _, _ := memc.Get(k)
return val
}
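A minimal sketch of the fix, assuming the client exposes a Close() method (mc_ip, mc_port and maybe_panic come from the code above):
func get_memc_val(k string) []byte {
	memc, err := gomemcache.Connect(mc_ip, mc_port)
	maybe_panic(err)
	defer memc.Close() // release the socket instead of leaking one per request
	val, _, _ := memc.Get(k)
	return val
}
Better still, dial once at startup and reuse the connection across requests, as your updated version does.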
I'd use this Go memcache interface if I were you - it was written by the author of memcached, who now works for Google on Go-related things.
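Presumably that means github.com/bradfitz/gomemcache; a minimal sketch of using it (the client keeps an internal connection pool and is safe for concurrent use, so one shared instance can serve all handlers):
import "github.com/bradfitz/gomemcache/memcache"

var mc = memcache.New("127.0.0.1:11211") // one shared client for the whole server

func get_memc_val(k string) []byte {
	it, err := mc.Get(k)
	if err != nil {
		return nil // e.g. memcache.ErrCacheMiss
	}
	return it.Value
}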
Try the memcache client from the YBC library. Unlike gomemcache, it opens and re-uses only a few connections to the memcache server, regardless of the number of concurrent requests issued via the client. It achieves high performance by pipelining concurrent requests over a small number of open connections to the memcache server.
The number of connections to the memcache server can be configured via ClientConfig.ConnectionsCount.
The precedence section in chapter 9.3.1 tells me that I should put the specific cases first and the general ones after:
[General]
*.host[0].waitTime = 5ms # specifics come first
*.host[3].waitTime = 6ms
*.host[*].waitTime = 10ms # catch-all comes last
I have the following lines in the omnetpp.ini file:
**.server[*].tcpApp[0].port = 1000
**.pods[0..1].**.server[*].tcpApp[0].port = 80
**.pods[2..3].**.server[*].tcpApp[0].port = 21
This configuration runs, but every server gets 1000 when the parameter is checked, not the special-case values 80 and 21. So I want it to look like this:
**.pods[0..1].**.server[*].tcpApp[0].port = 80
**.pods[2..3].**.server[*].tcpApp[0].port = 21
**.server[*].tcpApp[0].port = 1000
Yet this creates an error: a null pointer exception in the TCP module of the StandardHost module my server is built on.
In the NED file, the parameter is declared like this:
int port = default(1000); // port number to listen on
Leaving the catch-all line out causes the error too. Only moving the last line above the other two makes it possible for the simulation to run through.
An example for the port parameter can be found in TCPServerHostApp.ned from INET. I want to assign different ports for different services which should run on the servers.
What is your advice for applying those parameters correctly? Is there an error in the way I set the parameters, or do I need to set the ports myself somewhere during the initialization process (which would make no sense to me)?
Edit:
The karma system does not allow me to answer the question yet, so here is the cause of my problem:
Well, the problem was somewhere else. When connecting a new socket with connect(ipaddr, port), I got the wrong port from the job request message.
In the traffic generation module I read the wrong port for the connection to the server, so the port being used was always the default (= 1000) instead of 80 or 21.
The servers expected 80 or 21, causing a crash when the socket tried to connect with port 1000.
I'm currently testing extreme conditions on a piece of code written in Erlang.
I have implemented learnyousomeerlang.com's supervisor technique to get multiple pending accepts.
Here is the supervisor code, slightly modified to handle SSL connections:
-module(mymodule).
-behaviour(supervisor).
-export([start_link/0, start_socket/0]).
-define(SSL_OPTIONS, [{active, true},
{mode, list},
{reuseaddr, true},
{cacertfile, "./ssl_key/server/gd_bundle.crt"},
{certfile, "./ssl_key/server/cert.pem"},
{keyfile, "./ssl_key/server/key.pem"},
{password, "********"}
]).
-export([init/1]).
start_link() ->
application:start(crypto),
crypto:start(),
application:start(public_key),
application:start(ssl),
supervisor:start_link({local, ?MODULE}, ?MODULE, []).
init([]) ->
{ok, LSocket} = ssl:listen(4242, ?SSL_OPTIONS),
spawn_link(fun empty_listeners/0),
{ok, {{simple_one_for_one, 60, 3600},
[{socket,
{mymodule_serv, start_link, [LSocket]}, % pass the socket!
temporary, 1000, worker, [mymodule_serv]}
]}}.
empty_listeners() ->
[start_socket() || _ <- lists:seq(1,100)],
ok.
start_socket() ->
supervisor:start_child(?MODULE, []).
Here's the code for the gen_server which represents every connecting client:
-module(mymodule_serv).
-behaviour(gen_server).
-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, terminate/2, code_change/3, handle_info/2]).
start_link(Socket) ->
gen_server:start_link(?MODULE, Socket, []).
init(Socket) ->
gen_server:cast(self(), accept),
{ok, #client{socket=Socket, pid=self()}}.
handle_call(_E, _From, Client) ->
{noreply, Client}.
handle_cast(accept, C = #client{socket=ListenSocket}) ->
{ok, AcceptSocket} = ssl:transport_accept(ListenSocket),
mymodule:start_socket(),
ssl:ssl_accept(AcceptSocket),
ssl:setopts(AcceptSocket, [{active, true}, {mode, list}]),
{noreply, C#client{socket=AcceptSocket, state=connecting}}.
[...]
I am able to launch close to 10,000 connections at once from multiple servers.
While an SSL-accepting bit of C++ code (which doesn't even have multiple accepts pending) takes about 10 seconds to accept all of them, in Erlang this is quite different: it accepts at most 20 connections per second according to netstat, while the C++ code accepts more like 1K connections per second.
While the 10K connections are awaiting acceptance, I'm manually trying to connect as well:
openssl s_client -ssl3 -ign_eof -connect myserver.com:4242
Three cases happen when I do:
The connection simply times out
The connection succeeds after waiting at least 30 seconds
The connection succeeds almost immediately
When I try connecting manually from 2 consoles, the first to finish handshaking is not always the first that tried to connect... which I find peculiar.
The server configuration is:
2 x Intel® Xeon® E5620
8 x 2.4 GHz
24 GB RAM
I'm starting the Erlang shell with:
$erl +S 8:8
EDIT 1:
I have even tried accepting the connection with gen_tcp and upgrading the connection to SSL afterwards. Still the same issue: it won't accept more than 10 connections a second... Is ssl:ssl_accept doing this? Does it lock anything that would prevent Erlang from scaling here?
EDIT 2:
After looking around at other SSL servers written in Erlang, it seems that they use some kind of driver for the SSL/TLS connection; my examples are RabbitMQ and ejabberd.
ssl:ssl_accept appears nowhere in their Erlang code. I haven't investigated much, but it seems they have created their own drivers to upgrade the TCP socket to an SSL/TLS one.
Is that because there is an issue with Erlang's ssl module? Does anyone know why they use custom drivers for SSL/TLS?
Any thoughts on this?
Actually it was not the SSL accept or the handshake that was slowing the whole thing down.
We found out on the erlang-questions mailing list that it was the backlog.
The backlog is set to 5 by default. I have set it to SOMAXCONN and everything works fine now!
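In terms of the code above, the fix amounts to adding the backlog option to the ssl:listen/2 call in init/1 (1024 is an illustrative value; use whatever your system's SOMAXCONN is):
{ok, LSocket} = ssl:listen(4242, [{backlog, 1024} | ?SSL_OPTIONS]),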