How to filter by data using BPF [closed]

I am trying to filter a pcap file using BPF syntax. I need it to return only HTTP GET requests that contain a certain word. Is that possible? I managed to match the GET requests, but I can't find how to filter on the packet's payload data.

What you've been asked to do is tricky, difficult, and impractical, unless Wireshark or tcpdump lack a protocol parser for some unusual protocol that you are using.
A way to grab GET requests using BPF only would be as follows:
dst port 80 and tcp[(tcp[12]>>2):4]=0x47455420
The reason it must be done this way is that you must account for variable-length TCP options and, as a result, a shifting location for where the payload begins. The expression computes where the data starts and examines its first four bytes for the string "GET ".
You may also note that I am taking a shortcut for the TCP data offset value in byte 12. It would be better practice to do this:
(tcp[12]>>4)*4
or this:
((tcp[12]&0xf0) >> 2)
This would account for any of the reserved bits in the lower nibble being set.
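For anyone who wants to see what that BPF arithmetic is doing, here is the same check sketched in Python; `segment` is assumed to be a raw TCP header plus payload, with the IP header already stripped:

```python
def http_get_in_segment(segment: bytes) -> bool:
    """Return True if the TCP payload starts with "GET "."""
    # Byte 12 of the TCP header holds the data offset in its upper
    # nibble, measured in 32-bit words. (byte >> 4) * 4 converts it to
    # bytes, which is equivalent to (byte & 0xf0) >> 2.
    data_offset = (segment[12] >> 4) * 4
    return segment[data_offset:data_offset + 4] == b"GET "
```

This mirrors the safer `(tcp[12]>>4)*4` form of the filter: the lower nibble of byte 12 never leaks into the offset.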

It is late for an answer, but anyway:
You can filter GET (or any other HTTP) requests with BPF.
An example from bpfcc-tools shows a similar task implemented. It is supposed to work on a live network interface, not a pcap file, but I hope you can adapt it to apply to a file.

How can I perform ssl_read with ssl_pending? [duplicate]

This question already has answers here:
"how to work CORRECTLY with SSL_read() and select()?" (1 answer)
"SSL_pending, read and write with non blocking sockets?" (1 answer)
I've seen this comment: "SSL_read should not be treated the same as read/recv, since there is internal buffering (see SSL_pending) and it can actually require that the socket gets writable to continue." So I've read the SSL_pending man page on the OpenSSL site, but I can't find how to handle SSL_pending and the SSL buffering delay there, nor on Google, Stack Overflow, and so on. Where can I find guides on this topic?
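For what it's worth, Python's ssl module exposes the same internal buffering through SSLSocket.pending(), so the drain pattern the comment alludes to can be sketched there; the function below is duck-typed (anything with recv() and pending() works), which also makes it testable without a live TLS connection:

```python
def drain(tls_sock) -> bytes:
    """Read everything currently available from a TLS socket, including
    internally buffered plaintext that select() cannot see.

    select() watches the raw file descriptor only; decrypted bytes
    already sitting in the TLS object's buffer generate no readiness
    event, so after one read you must keep draining while pending()
    reports buffered plaintext.
    """
    chunks = [tls_sock.recv(4096)]
    while tls_sock.pending():               # plaintext already decrypted
        chunks.append(tls_sock.recv(tls_sock.pending()))
    return b"".join(chunks)
```

The same loop shape applies to C's SSL_read()/SSL_pending(): one readiness event from select() may correspond to several application-level reads.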

REST API best practice on handling errors [closed]

I'm wondering how errors should be handled internally on a REST API backend. Say the user sends an incomplete request payload, fails validation, or looks for something that does not exist. We'll want to return 400 or 404 for those cases.
In some frameworks (the ones I have experience with: NestJS, Spring, etc.), we do this by throwing exceptions. But in Go, operations (validation, database access) return an error value that indicates failure (if err != nil), and we can bubble the err up to the controller level and handle it there (return a specific status, error message, etc.).
My question is: which way (or is there another preferred way) is best for handling errors on a backend? The problem is that thrown exceptions show up in log-monitoring tools and make it look like the app has many errors, even though most might be 4xx (we could certainly filter the logs to find 5xx errors), while returned error objects can be cumbersome to bubble up through every validation function we have. I'd be happy with any example repository or article that covers a similar topic.
Thanks!
One issue I see with throwing exceptions is that we may leak some internal error to the users, which does not benefit them and also exposes our implementation.
"using the errors return object might be cumbersome to bubble-up for every validation functions we'd have."
Yes, propagating the error can be cumbersome, but I think it is good to propagate the relevant errors from the called function to the caller and let the caller decide what to do with them. For a REST call, I think it is fine to propagate the error from the DB layer to the service layer to the REST layer.
Also, we can wrap the error at the REST layer into a standard response message:
{
"type": "/errors/incorrect-user-pass",
"title": "Incorrect username or password.",
"status": 401,
"detail": "Authentication failed due to incorrect username or password.",
"instance": "/login/log/abc123"
}
We can do this by calling a wrapper function for error handling when we call http.Handle(). This post contains an example ServeHTTP() function.
Credit: the ServeHTTP function was taken from Zeynel Özdemir and the response object example from https://www.baeldung.com/
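The wrapping pattern itself is language-agnostic. Here is a minimal sketch of it in Python; the exception classes and the problem() helper are hypothetical names, and the response shape follows RFC 7807 ("Problem Details for HTTP APIs") like the JSON example above:

```python
class NotFound(Exception): ...
class ValidationError(Exception): ...

def problem(status: int, title: str, detail: str) -> dict:
    """Build an RFC 7807-style problem-details body."""
    return {"status": status, "title": title, "detail": detail}

def handle(request_fn):
    """Wrap a handler so expected failures become 4xx problem responses,
    and only genuinely unexpected ones surface as 500s (and are the only
    ones worth logging at error level)."""
    try:
        return 200, request_fn()
    except ValidationError as e:
        return 400, problem(400, "Invalid request", str(e))
    except NotFound as e:
        return 404, problem(404, "Not found", str(e))
    except Exception:
        # Generic message: never leak internal details to the client.
        return 500, problem(500, "Internal error", "unexpected failure")
```

This keeps the 4xx cases out of the error logs (addressing the log-noise concern in the question) while still centralizing the mapping in one place, the same role the ServeHTTP() wrapper plays in Go.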

Perl: improve performance of generating a 1-pixel image [closed]

Hi, I need to generate a 1x1 pixel image in Perl. What would be the fastest way to generate it, assuming I will be getting 10K connections per second on my web server?
Currently I am using this:
print MIME::Base64::decode("iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAABGdBTUEAALGPC/xhBQAAAAZQTFRF////AAAAVcLTfgAAAAF0Uk5TAEDm2GYAAAABYktHRACIBR1IAAAACXBIWXMAAAsSAAALEgHS3X78AAAAB3RJTUUH0gQCEx05cqKA8gAAAApJREFUeJxjYAAAAAIAAUivpHEAAAAASUVORK5CYII=");
I cannot host a static file, as we need to process the request for some data.
Thanks
Kathiresh Nadar
First off, for high-performance Perl you should be using FastCGI (via the FCGI module directly, or the CGI::Fast wrapper), mod_perl, or some other technology that keeps your script around as a persistent in-memory process.
Secondly, if you're processing the request for some other data first and that involves anything like writing to a file or talking to a database or something like that, your time will be dominated by that processing. Generating the image is not the slow part.
But let's answer your question anyway: assuming that you are using some keep-the-script-in-memory technology, then the first thing you can do is move your MIME::Base64::decode call to a BEGIN block, store the result in a variable, and use that variable.
But also, sending the image over the wire will likely take longer than the processing on the server, so why send 167 bytes of PNG when you could send 42 bytes of GIF? Put both pieces of advice together, and you get:
my $gifdata;
BEGIN { $gifdata = MIME::Base64::decode(
"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"); }
print $gifdata;
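For comparison, the same decode-once-at-startup idea can be sketched in Python; the GIF data is the 42-byte payload from the answer above, and respond() is an illustrative handler name:

```python
import base64

# Decode the 1x1 transparent GIF once, at import time, and reuse the
# bytes for every request -- the Python analogue of the BEGIN block.
GIF_1X1 = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def respond() -> bytes:
    # A real handler would also emit a "Content-Type: image/gif"
    # header; only the body is shown here.
    return GIF_1X1
```

Per-request work is then a single dictionary-free bytes return, with the base64 cost paid exactly once per process.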

Why is select() using so much CPU power? [closed]

I am writing a network communication program in C/C++ using non-blocking sockets and select(). The program is pretty big, so I cannot upload the source code. In a very aggressive testing session, I use test code to open and close both TCP and UDP connections frequently. It always ends up with one end not responding, at CPU usage over 98-99%. Then I attach with gdb; "bt" shows the following:
0x00007f1b71b59ac3 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:82
82 ../sysdeps/unix/syscall-template.S: No such file or directory.
in ../sysdeps/unix/syscall-template.S
What type of error could it be?
$ uname -a
Linux kiosk2 2.6.32-34-generic #77-Ubuntu SMP Tue Sep 13 19:39:17 UTC 2011 x86_64 GNU/Linux
It's impossible to say without looking at the code, but often when a select-based loop starts spinning at ~100% CPU usage, it's because one or more of the sockets you told select() to watch are ready-for-read (and/or ready-for-write) so that select() returns right away instead of blocking... but then the code neglects to actually recv() (or send()) any data on that socket. After failing to read/write anything, your event loop would try to go back to sleep by calling select() again, but of course the socket's data (or buffer space, in the ready-for-write case) is still there waiting to be handled, so select() returns immediately again, the buggy code neglects to do the recv() (or send()) again, and around and around we go at top speed :)
Another possibility would be that you are passing in a timeout value to select() that is either zero or near-zero, causing select() to return very quickly even when no sockets are ready-for-anything... that often happens when people forget to re-initialize the timeval struct before each call to select(). You need to re-initialize the timeval struct each time because some implementations of select() will modify it before returning.
My suggestion is to put some printf's (or your favorite equivalent) immediately before and immediately after your call to select(), and watch that output as you reproduce the fault. That will show you whether the spinning is happening inside of a single call to select(), or if something is causing select() to return immediately over and over again.
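The spinning failure mode described in the first paragraph is easy to reproduce in a few lines; sketched here in Python, whose select module wraps the same syscall:

```python
import select
import socket
import time

# Make one end of a socket pair ready-for-read, then run an event loop
# that (deliberately) never recv()s the pending byte.
a, b = socket.socketpair()
b.sendall(b"x")                      # `a` is now readable

spins = 0
deadline = time.monotonic() + 0.05   # spin for 50 ms
while time.monotonic() < deadline:
    readable, _, _ = select.select([a], [], [], 1.0)
    # BUG (deliberate): `a` is in `readable` but we never call a.recv(),
    # so the 1-second timeout never gets a chance to block and select()
    # returns immediately on every iteration.
    spins += 1

# A single a.recv(1) inside the loop would fix it: once the byte is
# consumed, select() blocks for the full timeout as intended.
a.close()
b.close()
```

Running this, `spins` climbs into the thousands in 50 ms, which is exactly the ~100% CPU signature described above.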

How does libevent detect that a socket is closed?

If I add an event for a specific socket (for example, a TCP connection socket) to the event loop, and the socket is later closed, how will libevent act? Can it detect this?
Thanks!
EDIT: I think I misinterpreted your question at first.
If you mean that the socket is closed from the remote end:
As per the documentation you can use the event_new() and event_add() calls to register interest in a socket. Make sure you specify EV_READ since you are interested in when the socket is closed.
Remember that there is no difference in file descriptor readiness between data available for reading and a closed socket. Normally you must read the socket to find out which condition is true, but if you don't want to read the socket then you can look here for a hint.
If you mean that the socket is locally closed (the file descriptor was closed):
Using a file descriptor after it has been closed is never defined and can always lead to undefined results. This is not specific to libevent. Before you close a file descriptor, you must make sure that no other thread in your program is using it, and you must make sure that no other part of your program is going to try using it in the future. That means unregistering the file descriptor from libevent at the same time that you close it.
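libevent itself is C, but the underlying rule is the same in any readiness-based event loop: a peer close surfaces as a read-ready event, and an empty read is the actual signal that the connection is gone. A minimal sketch using Python's selectors module as a stand-in for libevent:

```python
import selectors
import socket

a, b = socket.socketpair()
sel = selectors.DefaultSelector()
sel.register(a, selectors.EVENT_READ)   # analogous to event_add() with EV_READ

b.close()                               # the remote end closes

events = sel.select(timeout=1.0)        # fires: the fd reports "readable"
# Readiness alone cannot distinguish "data arrived" from "peer closed";
# only the zero-byte read tells you the connection is gone.
closed = bool(events) and a.recv(4096) == b""

# Mirroring the advice above about locally closed descriptors:
# unregister the fd from the loop before closing it yourself.
sel.unregister(a)
a.close()
sel.close()
```

The same sequence applies with libevent's EV_READ callbacks: the callback fires, and a recv() returning 0 inside it is what indicates the remote close.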